David Wallace argues for the ‘decoherent view’ of quantum mechanics, where at the fundamental level there is neither probability nor wavefunction collapse – and for its purest incarnation, the many-worlds interpretation of Hugh Everett III.
Physicists have long been suspicious of the “quantum measurement problem”: the supposed puzzle of how to make sense of quantum mechanics. Everyone agrees (don’t they?) on the formalism of quantum mechanics (QM); any additional discussion of the interpretation of that formalism can seem like empty words. And Hugh Everett III’s infamous “many-worlds interpretation” looks more dubious than most: not just unneeded words but unneeded worlds. Don’t waste your time on words or worlds; shut up and calculate.
But the measurement problem has driven more than philosophy. Questions of how to understand QM have always been entangled, so to speak, with questions of how to apply and use it, and even how to formulate it; the continued controversies about the measurement problem are also continuing controversies in how to apply, teach and mathematically describe QM. The Everett interpretation emerges as the natural reading of one strategy for doing QM, which I call the “decoherent view” and which has largely supplanted the rival “lab view”, and so – I will argue – the Everett interpretation can and should be understood not as a useless adjunct to modern QM but as part of the development in our understanding of QM over the past century.
The view from the lab
The lab view has its origins in the work of Bohr and Heisenberg, and it takes the word “observable” that appears in every QM textbook seriously. In the lab view, QM is not a theory like Newton’s or Einstein’s that aims at an objective description of an external world subject to its own dynamics; rather, it is essentially, irreducibly, a theory of observation and measurement. Quantum states, in the lab view, do not represent objective features of a system in the way that (say) points in classical phase space do: they represent the experimentalist’s partial knowledge of that system. The process of measurement is not something to describe within QM: ultimately it is external to QM. And the so-called “collapse” of quantum states upon measurement represents not a mysterious stochastic process but simply the updating of our knowledge upon gaining more information.
Valued measurements
The lab view has led to important physics. In particular, the “positive operator valued measure” idea, central to many aspects of quantum information, emerges most naturally from the lab view. So do the many extensions, total and partial, to QM of concepts initially from the classical theory of probability and information. Indeed, in quantum information more generally it is arguably the dominant approach. Yet outside that context, it faces severe difficulties. Most notably: if quantum mechanics describes not physical systems in themselves but some calculus of measurement results, if a quantum system can be described only relative to an experimental context, what theory describes those measurement results and experimental contexts themselves?
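For readers who haven't met the term, a positive operator valued measure is simply a set of positive operators, one per outcome, that sum to the identity; in the standard textbook statement (added here as a gloss rather than as part of the original argument), a measurement described by the POVM $\{E_i\}$ assigns to outcome $i$, for a system in state $\rho$, the probability

$$
p(i) = \mathrm{Tr}(\rho\,E_i), \qquad E_i \ge 0, \qquad \sum_i E_i = \mathbb{1},
$$

with ordinary projective measurement recovered when the $E_i$ are orthogonal projectors.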

One popular answer – at least in quantum information – is that measurement is primitive: no dynamical theory is required to account for what measurement is, and the idea that we should describe measurement in dynamical terms is just another Newtonian prejudice. (The “QBist” approach to QM fairly unapologetically takes this line.)
One can criticise this answer on philosophical grounds, but more pressingly: that just isn’t how measurement is actually done in the lab. Experimental kit isn’t found scattered across the desert (each device perhaps stamped by the gods with the self-adjoint operator it measures); it is built using physical principles (see “Dynamical probes” figure). The fact that the LHC measures the momentum and particle spectra of various decay processes, for instance, is something established through vast amounts of scientific analysis, not something simply posited. We need an account of experimental practice that allows us to explain how measurement devices work and how to build them.
Bohr had such an account: quantum measurements are to be described through classical mechanics. The classical is ineliminable from QM precisely because it is to classical mechanics we turn when we want to describe the experimental context of a quantum system. To Bohr, the quantum–classical transition is a conceptual and philosophical matter as much as a technical one, and classical ideas are unavoidably required to make sense of any quantum description.
Perhaps this was viable in the 1930s. But today it is not only the measured systems but the measurement devices themselves that essentially rely on quantum principles, beyond anything that classical mechanics can describe. And so, whatever the philosophical strengths and weaknesses of this approach – or of the lab view in general – we need something more to make sense of modern QM, something that lets us apply QM itself to the measurement process.
Practice makes perfect
We can look to physics practice to see how. As von Neumann glimpsed, and Everett first showed clearly, nothing prevents us from modelling a measurement device itself inside unitary quantum mechanics. When we do so, we find that the measured system becomes entangled with the device, so that (for instance) if a measured atom is in a weighted superposition of spins with respect to some axis, then after the measurement the device is in a similarly weighted superposition of readout values.
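Schematically, in the textbook unitary model of measurement (a minimal sketch; “ready”, “up” and “down” label pointer states of the device):

$$
\big(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\big)\otimes|\mathrm{ready}\rangle
\;\longrightarrow\;
\alpha\,|{\uparrow}\rangle\otimes|\mathrm{up}\rangle \;+\; \beta\,|{\downarrow}\rangle\otimes|\mathrm{down}\rangle .
$$

No collapse has been invoked: the measurement interaction is just one more unitary evolution, and the weights $\alpha$ and $\beta$ are carried over intact to the readout.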

In principle, this courts infinite regress: how is that new superposition to be interpreted, save by a still-larger measurement device? In practice, we simply treat the mod-squared amplitudes of the various readout values as probabilities, and compare them with observed frequencies. This sounds a bit like the lab view, but there is a subtle difference: these probabilities are understood not with respect to some hypothetical measurement, but as the actual probabilities of the system being in a given state.
Of course, if we could always understand mod-squared amplitudes that way, there would be no measurement problem! But interference precludes this. Set up, say, a Mach–Zehnder interferometer, with a particle beam split in two and then re-interfered, and two detectors after the re-interference (see “Superpositions are not probabilities” figure). We know that if either of the two paths is blocked, so that any particle detected must have gone along the other path, then each of the two outcomes is equally likely: for each particle sent through, detector A fires with 50% probability and detector B with 50% probability. So whichever path the particle went down, we get A with 50% probability and B with 50% probability. And yet we know that if the interferometer is properly tuned and both paths are open, we can get A with 100% probability or 0% probability or anything in between. Whatever microscopic superpositions are, they are not straightforwardly probabilities of classical goings-on.
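The arithmetic behind those numbers fits in a few lines. The sketch below is my own illustration rather than anything in the original: it models the two beam splitters as 50/50 unitaries, the relative path length as a phase, and a blocked arm as a crude projector.

```python
import numpy as np

# Two modes: index 0 = "reaches detector A", index 1 = "reaches detector B".
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50/50 beam splitter

def phase(phi):
    """Relative phase picked up between the two arms."""
    return np.diag([1.0, np.exp(1j * phi)])

psi_in = np.array([1, 0], dtype=complex)        # particle enters one input port

# Both arms open: the amplitudes interfere at the second beam splitter.
for phi in (0.0, np.pi / 2, np.pi):
    psi_out = BS @ phase(phi) @ BS @ psi_in
    print(f"phi = {phi:.2f}:", np.round(np.abs(psi_out) ** 2, 3))
# phi = 0 gives (1, 0); phi = pi gives (0, 1); phi = pi/2 gives (0.5, 0.5).

# Lower arm blocked: any particle that reaches the second beam splitter came
# along the upper arm, and each detector then fires with probability 1/2.
psi_blocked = BS @ np.diag([1.0, 0.0]) @ BS @ psi_in
p = np.abs(psi_blocked) ** 2
print("blocked arm:", np.round(p / p.sum(), 3))  # (0.5, 0.5) after renormalising
```

Treating the split beam as a particle that “really” went one way or the other predicts 50/50 whatever the phase; the superposition does not.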
Unfeasible interference
But macroscopic superpositions are another matter. There, interference is unfeasible (good luck reinterfering the two states of Schrödinger’s cat); nothing formally prevents us from treating mod-squared amplitudes like probabilities.
And decoherence theory has given us a clear understanding of just why interference is invisible in large systems, and more generally when we can and cannot get away with treating mod-squared amplitudes as probabilities. As the work of Zeh, Zurek, Gell-Mann, Hartle and many others (drawing inspiration from Everett and from work on the quantum/classical transition as far back as Mott) has shown, decoherence – that is, the suppression of interference – is simply an aspect of non-equilibrium statistical mechanics. The large-scale, collective degrees of freedom of a quantum system, be it the needle on a measurement device or the centre-of-mass of a dust mote, are constantly interacting with a much larger number of small-scale degrees of freedom: the short-wavelength phonons inside the object itself; the ambient light; the microwave background radiation. We can still find autonomous dynamics for the collective degrees of freedom, but because of the constant transfer of information to the small scale, the coherence of any macroscopic superposition rapidly bleeds into microscopic degrees of freedom, where it is dynamically inert and in practice unmeasurable.
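A toy model makes the mechanism vivid; the following sketch is my own illustration, not drawn from the article. A single system qubit sits in an equal superposition and interacts weakly with N environment qubits, each of which picks up a little which-branch information. The visible coherence – the off-diagonal element of the system's reduced density matrix – falls off exponentially with N, even though nothing has been erased: it has merely been spread into correlations with the environment.

```python
import numpy as np

# System qubit starts in (|0> + |1>)/sqrt(2). If the system is |1>, each of N
# environment qubits is rotated by a small angle theta; if it is |0>, nothing
# happens. The environment states correlated with the two branches then have
# overlap cos(theta/2)**N, so the off-diagonal element of the system's reduced
# density matrix is half that overlap: still nonzero, but utterly unmeasurable.
theta = 0.1                                   # weak coupling per environment qubit
for N in (1, 10, 100, 1_000, 10_000):
    coherence = 0.5 * np.cos(theta / 2) ** N
    print(f"N = {N:6d} environment qubits: |rho_01| = {coherence:.2e}")
```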
Emergence and scale
Decoherence can be understood in the familiar language of emergence and scale separation. Quantum states are not fundamentally probabilistic, but they are emergently probabilistic. That emergence occurs because for macroscopic systems, the timescale by which energy is transferred from macroscopic to residual degrees of freedom is very long compared to the timescale of the macroscopic system’s own dynamics, which in turn is very long compared to the timescale by which information is transferred. (To take an extreme example, information about the location of the planet Jupiter is recorded very rapidly in the particles of the solar wind, or even the photons of the cosmic background radiation, but Jupiter loses only an infinitesimal fraction of its energy to either.) So the system decoheres very rapidly, but having done so it can still be treated as autonomous.
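A standard order-of-magnitude estimate from the decoherence literature (see, for example, the Zurek review in the further reading; it is quoted here as an illustration rather than derived) makes that separation of timescales quantitative. For an object of mass m at temperature T held in a superposition of positions a distance Δx apart, the decoherence time τ_D is shorter than the relaxation time τ_R by the squared ratio of the thermal de Broglie wavelength to Δx:

$$
\tau_D \;\sim\; \tau_R \left(\frac{\lambda_{\mathrm{dB}}}{\Delta x}\right)^{2},
\qquad
\lambda_{\mathrm{dB}} = \frac{\hbar}{\sqrt{2\,m\,k_B T}} .
$$

For a gram-scale object at room temperature superposed over a centimetre, that ratio is of order $10^{-40}$.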
On this decoherent view of QM, there is ultimately only the unitary dynamics of closed systems; everything else is a limiting or special case. Probability and classicality emerge through dynamical processes that can be understood through known techniques of physics: understanding that emergence may be technically challenging but poses no problem of principle. And this means that the decoherent view can address the lab view’s deficiencies: it can analyse the measurement process quantum mechanically; it can apply quantum mechanics even in cosmological contexts where the “measurement” paradigm breaks down; it can even recover the lab view within itself as a limited special case. And so it is the decoherent view, not the lab view, that – I claim – underlies the way quantum theory is for the most part used in the 21st century, including in its applications in particle physics and cosmology (see “Two views of quantum mechanics” table).
Two views of quantum mechanics
Quantum phenomenon | Lab view | Decoherent view
--- | --- | ---
Dynamics | Unitary (i.e. governed by the Schrödinger equation) only between measurements | Always unitary
Quantum/classical transition | Conceptual jump between fundamentally different systems | Purely dynamical: classical physics is a limiting case of quantum physics
Measurements | Cannot be treated internal to the formalism | Just one more dynamical interaction
Role of the observer | Conceptually central | Just one more physical system
But if the decoherent view is correct, then at the fundamental level there is neither probability nor wavefunction collapse; nor is there a fundamental difference between a microscopic superposition of the kind probed in interference experiments and a macroscopic superposition like Schrödinger’s cat. The differences are differences of degree and scale: at the microscopic level, interference is manifest; as we move to larger and more complex systems it hides away more and more effectively; in practice it is invisible for macroscopic systems. But even if we cannot detect the coherence of the superposition of a live and dead cat, it does not thereby vanish. And so according to the decoherent view, the cat is simultaneously alive and dead in the same way that the superposed atom is simultaneously in two places. We don’t need a change in the dynamics of the theory, or even a reinterpretation of the theory, to explain why we don’t see the cat as alive and dead at once: decoherence has already explained it. There is a “live cat” branch of the quantum state, entangled with its surroundings to an ever-increasing degree; there is likewise a “dead cat” branch; the interference between them is rendered negligible by all that entanglement.
Many worlds
At last we come to the “many worlds” interpretation: for when we observe the cat ourselves, we too enter a superposition of seeing a live and a dead cat. But these “worlds” are not added to QM as exotic new ontology: they are discovered, as emergent features of collective degrees of freedom, simply by working out how to use QM in contexts beyond the lab view and then thinking clearly about its content. The Everett interpretation – the many-worlds theory – is just the decoherent view taken fully seriously. Interference explains why superpositions cannot be understood simply as parameterising our ignorance; unitarity explains how we end up in superpositions ourselves; decoherence explains why we have no awareness of it.

(Forty years ago, David Deutsch suggested testing the Everett interpretation by simulating an observer inside a quantum computer, so that we could recohere them after they made a measurement. Then, it was science fiction; in this era of rapid progress on AI and quantum computation, perhaps less so!)
Could we retain the decoherent view and yet avoid any commitment to “worlds”? Yes, but only in the same sense that we could retain general relativity and yet refuse to commit to what lies behind the cosmological event horizon: the theory gives a perfectly good account of the other Everett worlds, and the matter beyond the horizon, but perhaps epistemic caution might lead us not to overcommit. But even so, the content of QM includes the other worlds, just as the content of general relativity includes beyond-horizon physics, and we will only confuse ourselves if we avoid even talking about that content. (Thus Hawking, who famously observed that when he heard about Schrödinger’s cat he reached for his gun, was nonetheless happy to talk about Everettian branches when doing quantum cosmology.)
Alternative views
Could there be a different way to make sense of the decoherent view? Never say never; but the many-worlds perspective results almost automatically from simply taking that view as a literal description of quantum systems and how they evolve, so any alternative would have to be philosophically subtle, taking a different and less literal reading of QM. (Perhaps relationalism, discussed in this issue by Carlo Rovelli, see “Four ways to interpret quantum mechanics”, offers a way to do it, though in many ways it seems more a version of the lab view. The physical collapse and hidden variables interpretations modify the formalism, and so fall outside either category.)
Does the apparent absurdity, or the ontological extravagance, of the Everett interpretation force us, as good scientists, to abandon many-worlds, or if necessary the decoherent view itself? Only if we accept some scientific principle that throws out theories that are too strange or that postulate too large a universe. But physics accepts no such principle, as modern cosmology makes clear.
Are there philosophical problems for the Everett interpretation? Certainly: how are we to think of the emergent ontology of worlds and branches; how are we to understand probability when all outcomes occur? But problems of this kind arise across all physical theories. Probability is philosophically contested even apart from Everett, for instance: is it frequency, rational credence, symmetry or something else? In any case, these problems pose no barrier to the use of Everettian ideas in physics.
The case for the Everett interpretation is that it is the conservative, literal reading of the version of quantum mechanics we actually use in modern physics, and there is no scientific pressure for us to abandon that reading. We could, of course, look for alternatives. Who knows what we might find? Or we could shut up and calculate – within the Everett interpretation.
Further reading
S Coleman 2020 arXiv:2011.12671.
W Zurek 2003 arXiv:quant-ph/0306072.
D Wallace 2012 The Emergent Multiverse: Quantum Theory according to the Everett Interpretation (Oxford University Press).
D Wallace 2016 arXiv:1604.05973.