On emergence

25. September 2022 · 5 mins read (updated on 30. June 2024)

tl;dr: An emergent phenomenon is a reasonably accurate high-level description of a system whose underlying low-level description is much more complex. Different views of the same system might see completely different things.

Epistemic state: "Emergence" is one of those words people cannot agree on the meaning of. Note that this post contains some technical terminology.


All models are wrong, but some are useful.
— George E. P. Box

Fundamentally, the world is quantum mechanical. But this doesn't mean that we can easily predict everything from first principles. In most circumstances, a full quantum mechanical model of the system (or object) in front of you is incredibly hard to compute. Furthermore, such a model would have to account for all external influences, which potentially introduces huge uncertainties, and simulating the whole universe quantum mechanically is informationally impossible. Finally, quantum mechanics itself introduces fundamental uncertainty that we simply cannot evade. Thus, obtaining the full truth (assuming it exists at all) is impossible, and even getting close would not be very useful due to its computational hardness.

Luckily, this doesn't end all hope for intelligence to predict its environment. Most systems show at least some self-organizing behavior, by which I mean that relations between particles converge to stable fixed points. Stable fixed points are local equilibria: slight disturbances invoke a restoring force that returns the system to the equilibrium state, so the equilibrium tends to be conserved over time. Such local conservation laws, in turn, usually let us use much simpler models, shortcuts, to describe some behavior of the system with high accuracy. In other words, the system shows emergent behavior. Intelligent beings can use emergent properties to generalize across individual observations and thus obtain heuristics to predict their environment efficiently and accurately.
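
To make the idea of a stable fixed point concrete, here is a minimal Python sketch of an arbitrary toy system (not any particular physical one): a state that is pulled back towards zero, so a disturbance simply decays away.

```python
# Toy dynamics dx/dt = -k * x: x = 0 is a stable fixed point, because any
# displacement invokes a restoring "force" proportional to -x.
k = 2.0    # strength of the restoring force (arbitrary toy value)
dt = 0.01  # integration time step
x = 1.0    # start away from equilibrium, i.e. apply a disturbance

for _ in range(1000):
    x += -k * x * dt  # Euler step: the disturbance decays back towards 0

print(f"state after relaxation: x = {x:.2e}")  # ~2e-9: the equilibrium is restored
```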

More technically, by emergence I mean that there exists a far simpler high-level model (few free parameters) that predicts part of the behavior of a fundamentally much more complex subsystem (many free parameters) with reasonable accuracy. This view is usually called weak emergence, to separate it from strong emergence, the belief that the simple models are somehow inherent and fundamental to the world.
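
As a rough illustration of "few free parameters predicting many", here is a toy Python sketch of my own (not a claim about any particular physical system): a cloud of a thousand drifting particles, described at the low level by thousands of numbers, whose centre of mass nevertheless follows a four-parameter high-level law.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-level description: 1000 particles in 2D, each with its own position and
# velocity, i.e. 4000 free parameters.
n = 1000
pos = rng.normal(size=(n, 2))
vel = np.array([1.0, 0.5]) + 0.1 * rng.normal(size=(n, 2))  # shared drift + noise

# High-level model: "the cloud drifts at a constant velocity", i.e. just 4
# parameters (centre of mass and one shared velocity).
com, com_vel = pos.mean(axis=0), vel.mean(axis=0)

# Predict the centre of mass at a later time without touching any single particle...
t = 10.0
predicted = com + com_vel * t

# ...and compare against the full low-level description.
actual = (pos + vel * t).mean(axis=0)
print(predicted, actual)  # the two agree, even though the high-level model knows
                          # nothing about the 1000 individual trajectories
```

In this toy case the high-level law happens to hold exactly because the particles do not interact; in general an emergent description is only approximately right, which is exactly the point.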

Let me give an example to show you what I mean. Whatever it is you are reading this article on, it probably has physical edges. These edges consist of molecules and atoms, each of which has no clue that it is part of an edge. Even though the edge probably contains an insane number of atoms, all their fundamental properties together don't contain the information "This is an edge." Instead, the edge emerges from the particular spatial relations between the individual particles that we observe. We can use a pretty simple model to represent the edge, which lets us predict much of the edge's behavior with astonishing accuracy, for example how it will be aligned if we rotate it or how it continues if it is partially covered by something else. Where this extremely simple model fails, we can use more complex models that predict the edge's behavior more accurately but are usually still far simpler than its quantum mechanical description.
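
The "partially covered" case can be mimicked in a few lines of Python (again a toy sketch: noisy points standing in for the particles along the edge, a straight line standing in for the simple high-level model):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Low level": a couple of thousand points that happen to lie roughly along an edge.
x = np.linspace(0.0, 10.0, 2000)
y = 0.3 * x + 1.0 + 0.01 * rng.normal(size=x.size)  # tiny microscopic jitter

# "High level": a straight line with only two free parameters, fitted to the
# visible part of the edge (pretend everything beyond x = 6 is covered).
visible = x < 6.0
slope, intercept = np.polyfit(x[visible], y[visible], deg=1)

# The simple model predicts how the edge continues behind the cover.
print("predicted y at x = 9:", slope * 9.0 + intercept)
print("actual y near x = 9: ", y[np.abs(x - 9.0) < 0.1].mean())
```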

Weak emergence directly implies supervenience, which means that any change in the high-level description necessarily requires a change in the low-level description. In our example, supervenience simply means that if you move your screen to another place, then the individual particles must have moved as well. This also means that the properties of an emergent phenomenon (or at least its lower-level explanation) are constrained by the admissible behaviors of the lower levels; for example, transferring information faster than the speed of light is still not possible at any higher level of emergence.
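
In code, supervenience amounts to the high-level description being a function of the low-level state, as in this sketch (the coarse-graining function is of course an arbitrary choice of mine):

```python
import numpy as np

def high_level(particles):
    # A coarse-grained description: the (rounded) centre of mass of all particles.
    # Because it is a pure function of the low-level state, it cannot change
    # unless the low-level state changes; that is supervenience.
    return tuple(np.round(particles.mean(axis=0), 1))

rng = np.random.default_rng(2)
screen = rng.normal(size=(500, 3))  # low level: 500 particle positions in 3D

before = high_level(screen)
after = high_level(screen + np.array([0.0, 0.0, 0.3]))  # move every particle a bit

print(before, "->", after)  # the high-level description changed, and it could
                            # only do so because the underlying particles moved
```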

Nevertheless, common physical models of reality are actually very permissive. Furthermore, most real-world systems allow a huge space of high-level descriptions that provide better-than-random accuracy in predicting (at least some aspects of) their behavior. In other words, many beliefs humans have about the world are somewhat accurate in their own ways, but none of them is completely right. Thus, truth-seeking is less about uncovering the truth and more about finding different perspectives, each with its own domain of usefulness. I think this clearly calls for humility towards our own beliefs and openness to the beliefs of others, who might not be as wrong as we think.


Edit 30-06-24: Add a paragraph on supervenience.