The model that ate Europe is back, and it's bigger than ever

The FuturICT Knowledge Accelerator, a grand unified model of everything, is back in the news.

What if global scale computing facilities were available that could analyse most of the data available in the world? What insights could scientists gain about the way society functions? What new laws of nature would be revealed? Could society discover a more sustainable way of living? Developing planetary scale computing facilities that could deliver answers to such questions is the long term goal of FuturICT.

I’ve been rather critical of this effort before, but I think there’s also much to like.

  • An infrastructure for curated public data would be extremely useful.
  • There’s much to be gained through a multidisciplinary focus on simulation, which is increasingly central to all fields.
  • Providing a public portal into the system could have valuable educational benefits.
  • Creating more modelers, and more sophisticated model users, helps build capacity for science-based self-governance.

But I still think the value of the project lies more in creating an infrastructure within which interesting models can emerge than in creating an oracle that decision makers and their constituents will consult for answers to life’s pressing problems.

  • Even with Twitter and Google, usable data spans only a small portion of human existence.
  • We’re not even close to having all the needed theory to go with the data. Consider that general equilibrium is the dominant modeling paradigm in economics, yet equilibrium is not a prevalent feature of reality.
  • Combinatorial explosion can overwhelm any increase in computing power for the foreseeable future, so the very idea of simulating everything social and physical at once is laughable (a back-of-the-envelope sketch follows this list).
  • Even if the technical hurdles can be overcome,
    • People are apparently happy to hold beliefs that are refuted by the facts, as long as buffering stocks afford them the luxury of a persistent gap between reality and mental models (the stock-and-flow sketch after this list illustrates the mechanism).
    • Decision makers are unlikely to cede control to models that they don’t understand or can’t manipulate to generate desired results.
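
To make the combinatorial point concrete, here’s a back-of-the-envelope sketch in Python. The throughput figure and element counts are my own illustrative assumptions, not anything from FuturICT’s plans; the point is simply that a joint social-physical state space grows like 2^n, while hardware only doubles every couple of years.

```python
# Back-of-the-envelope sketch with made-up numbers: exhaustively exploring a
# joint state space of n two-state elements (2**n states) versus 50 years of
# compute growth at a Moore's-law-style doubling every two years.
SECONDS_PER_YEAR = 3.15e7
base_rate = 1e15                          # assumed joint states evaluated per second today
future_rate = base_rate * 2 ** (50 / 2)   # throughput after 50 years of doubling every 2 years

for n in (50, 100, 200, 400):
    joint_states = 2 ** n                 # n elements with just two states each
    years = joint_states / future_rate / SECONDS_PER_YEAR
    print(f"{n:3d} binary elements: {joint_states:.2e} joint states, "
          f"~{years:.1e} years to enumerate even with the future machine")
```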
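
And a minimal stock-and-flow sketch of the buffering point, again with invented numbers: a large buffer lets an unsustainable rate look fine for decades, so nothing contradicts the comfortable mental model until the buffer is nearly gone.

```python
# Minimal stock-and-flow sketch with invented numbers: a large buffer stock
# (think aquifer or fishery) absorbs the gap between belief and reality, so
# the "this rate is sustainable" mental model goes unchallenged for decades.
stock = 1000.0        # initial buffer (arbitrary units)
recharge = 10.0       # sustainable inflow per year
extraction = 15.0     # outflow the decision maker believes is sustainable

for year in range(1, 251):
    stock = max(0.0, stock + recharge - extraction)
    if year % 50 == 0:
        print(f"year {year:3d}: buffer = {stock:6.1f}")
    if stock == 0.0:
        print(f"year {year:3d}: buffer exhausted; the belief finally meets reality")
        break
```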

I don’t think you need to look any further than the climate debate and the history of Limits to Growth to conclude that models are a long way from catalyzing a sustainable world.

If I had a billion euros to spend on modeling, I think less of it would go into a single platform and more into distributed efforts working incrementally. It’s easier to evolve a planetary computing platform than to design one.

With the increasing accessibility of computing and visualization, we could be on the verge of a model-induced renaissance. Or we could be on the verge of an explosion of fun and pretty, but vacuous, non-transparent, and unvalidated model rubbish that lends itself more to propaganda than to thinking. So I’d be plowing a BIG chunk of that billion into infrastructure and incentives for model and data quality.

2 thoughts on “The model that ate Europe is back, and it's bigger than ever”

  1. I had spotted FuturICT when I began researching Cybersyn-style sub-projects. This FuturICT appeared to me as an attempt to reproduce Chile’s S4 model for the CORFO project led by Stafford Beer in the early seventies. Futuro, as they called it, was inspired by Forrester’s models and ran on DYNAMO. This one has a much wider scope and indeed raises many questions. I understand it also makes heavy use of agent-based modeling, which adds yet more twists.

    We are all aiming at understanding complex, intertwined socio-technical systems. We seek a shared model of understanding. Unfortunately, we seem trapped by top-down thinking and centralized approaches, e.g. Rio+20. A distributed approach to understanding would be terrific! Distributed groups could sense their portion of the domain in focus and iterate over sub-models. The democratization of ICT and of sensing technologies in general (even environmental ones, such as http://airqualityegg.wikispaces.com/) is opening up lots of opportunities for micro-contributions: building instruments or software, gathering data, modeling, and discussing. I wouldn’t be surprised if this emerges in the coming years as mega-initiatives face difficulties under their own weight…

  2. My guess is that the reality of this project would become rather decentralized as the unwieldiness of the big hardware/db at its core became apparent. But it seems a lot more efficient to design it that way from the outset.
