Theories and speculations

The Great Error

Quantum Physics and the Objectification of Probability

The subject of this paper is an error which has clouded the interpretation of quantum physics for over a century. It concerns the meaning of probability in quantum science, and the extent to which a probabilistic event is an event at all.

Probabilistic calculations are central to the development of quantum physics, because the science deals with objects so small they can barely be detected. Quantum science describes a world which is almost beyond the reach of observation and experimentation. The substitution of probabilistic calculation for direct observation and experiment has been spectacularly successful. In fact, it is widely believed that the development of quantum theory is the most important advance in the history of scientific knowledge. It has underpinned almost all significant technological innovation for the last hundred years. However, common agreement about it ends at this point. There are a number of excellent accounts of the development of quantum thinking[1], and I don’t want to go over that territory here. Suffice to say, we are left with no answer to this question: quantum theory works, but what does it mean?

What is more, it is not even universally accepted that this is a valid question. As Adam Becker says, “the majority of physicists … claim that it is somehow inappropriate or unscientific to ask what is going on in the quantum realm”[2]. The scale of the problem is well presented by Tim Maudlin, who says in his excellent Philosophy of Physics, “There just is no precise, exact physical theory called “quantum theory”… Instead, there is raging controversy”[3].

Quantum probabilistic calculations are based on Schrödinger’s founding equation, which describes a wave function, and as I will explain, probability values (or ‘amplitudes’) relating to the position or movement of particles (but never both) are derived from this. My argument is that it is commonly assumed that these probabilistic calculations must either in some sense describe a material world, or not be real at all. This has led interpreters to understand them in a dualistic way, so particles are sometimes said to appear as objects, and sometimes as waves. This is in turn because, if measured (which is an extremely difficult undertaking), the wave movement of the particles, which can be detected, simply vanishes, or ‘collapses’. Physicists call this ‘the measurement problem’. My argument is that, behind the measurement problem, there lies a deeper difficulty. I have called this ‘the objectification of probability’, and it has led the nature of probability to be misunderstood. Put simply, the assumption is made that a probability calculation describes an object, and is in itself objective. This is the error to which I refer. In this paper I explain how understanding probability correctly can resolve some of the most puzzling aspects of quantum science.

I also argue that the correct understanding of the nature of probability resolves a further issue. ‘Standard’ classical physics gives us clear, working explanations of the macroscopic world, governed by a completely different set of rules from those put forward by quantum science. In important ways, quantum science and classical science give conflicting outcomes. It is still broadly accepted that standard classical physics works perfectly well to explain physical events at the macroscopic scale, whatever quantum science might say about very small things. But there is an unresolved debate about how the two are, or are not, reconciled in science. This leads to the difficulty regarding the reality of quantum description. A dominant point of view for over ninety years has been that “the things that [quantum] theory describes aren’t truly real. …[Q]uantum physics proves that small objects do not exist in the same objectively real way as objects in our everyday lives do.”[4]

The debate has remained unresolved for a long time. John Bell, the great physicist who devised Bell’s Inequalities – which proved the inadequacy of classical physics to describe quantum events – identified the core issue in a way that has not been bettered: “The problem then is … how exactly is the world to be divided into speakable apparatus … that we can talk about … and unspeakable quantum systems that we can not talk about? … The mathematics of the ordinary [quantum] theory requires such a division, but says nothing about how it is to be made.”[5]

The Measurement Problem and the Double Slit Experiment

I will return to John Bell shortly. But if we are to understand the role of probability in detail, as we must, we need to understand exactly why and how it is important.

A key (and rare) piece of more or less empirical evidence in quantum science is derived from the famous double slit experiment. This is clear and simple, and its conclusions are universally accepted. It is also possibly the least well understood experiment in the history of science. Richard Feynman said it demonstrates “a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum physics. In reality, it contains the only mystery.”[6] Why? On a small scale, Feynman continues, “things behave like nothing we know of, so it is impossible to explain this behaviour in any other than analytic ways”[7]. Feynman was writing in the ‘60s, but his observation is still generally accepted, and is often quoted. There is a ‘mythology’ around this experiment; Feynman’s comments have done much to help form it.

The double slit experiment appears to demonstrate that electrons sometimes behave like waves, and sometimes like particles, depending on how or whether we observe them.  

Because the particle/waves appear to change at the moment they are measured, the experiment demonstrates the ‘measurement problem’ to which I have already referred, which is often said to be the key difficulty in the interpretation of the science of small things. 

Here is a description of the experiment. 

We know that when a wave of water is passed through two small gaps the wave motion results in an interaction of ripples, so that, instead of a single impact on a wall or beach, the wave produces a series of small impacts. We have all seen this effect, or ‘interference’ pattern, which is indicative of wave motion. 

The same effect can be seen to happen with light, and is used to demonstrate that light has a wave motion. If you shine a light source through two parallel slits onto a surface on the other side you can see that the photons travel in waves because the light observed on a card on the other side of the slits is divided into a bar pattern. This is created by wave motion, which produces an interference pattern as the light passes through the gap. 

When invisibly small electrons are passed through two slits in a similar way it may at first seem unsurprising that the same effect is detected on the other side. The electrons are detected forming a series of bars, which suggests that they have a wave motion, and that they form an interference pattern as they pass through the slits and interact with each other. However, two very odd things are observed, which Feynman describes as ‘mysteries’.

First, the detection of the bar pattern of the electrons, indicating an interference effect, occurs even though electrons are fired at the slits one at a time. The characteristic bar pattern accumulates, building up over time. 

This is mysterious because the bar effect is ‘normally’ caused by particles meeting each other, and creates an interference pattern by means of interaction. What is a single electron interfering with? Since the electrons are fired one at a time, and have no other electrons to interact with, it has been suggested that the effect observed indicates that the individual particles must travel through both slits at the same time, and that the electrons somehow ‘interfere’ with themselves. This seems bizarre. 

And there is a second odd thing. If a detector is placed at the slits, and the particles are therefore ‘observed’, no such ‘wave motion’ is seen. Detectors indicate clearly that the electrons travel either through one slit or the other, and certainly do not pass through both slits. It is concluded that when a single particle is observed, and its movement is measured individually, there is no wave motion. It behaves, in fact, exactly as we would expect a single particle to behave. So it is said that observation appears to destroy the wave motion. (It is important, as will become clear, to remember that this is no casual observation. You can’t see an electron, and detecting one is extremely difficult.)

From this, scientists concluded that the observation of the particle changes the nature of the particle’s motion completely, and seems to alter the outcome of the experiment. Feynman’s conclusion from the slit experiment is as follows: “The electrons arrive in lumps, like particles, and the probability of the arrival of these lumps is distributed like the distribution of intensity of a wave. It is in this sense that the electron behaves “sometimes like a particle and sometimes like a wave”.[8]  So particles are waves, except when they are observed, when they are particles.

As we would expect, this experiment has been repeated and checked many times, and each time the result is verified. As a recent paper summarises, “Waves… are continuous; they are events, functions of time; have no definite positions; and require a medium for their propagation. Whereas particles are discrete material objects; have no intrinsic time dependency; do have definite positions; and need no medium to exist”, and continues, “The concepts ‘wave’ and ‘particle’ are rationally mutually exclusive. Making the wave/particle model an irrational dichotomy with no possible rational relation between its two sides”.[9]

This wave/particle duality has stumped physicists for a century. 

It may come as a surprise, then, that a number of quite plausible explanations have been offered. While they are not ‘mainstream’ – that is, they are not found in physics textbooks – there are at least two routes to explanation which do succeed quite well in making sense of what is happening. The question of why these are not universally acknowledged as interpretative solutions is a very interesting one, and it is not simply because the science establishment has a curious kind of blindness. But before I explain this, we need to understand more about what is meant when particles are said to behave like a ‘wave’, and the connection of the wave to the question of probability. 

The reference is to the ‘wave function’ I mentioned above, by which the possible movement of particles is calculated. This mathematical function is arguably Schrödinger’s most important contribution to the evolution of quantum physics, and is sometimes considered the closest equivalent we have to Newton’s laws of motion for very small particles. Schrödinger’s equation looks something like the classical equation for a physical wave. The suggestion that this wave formulation might be used to calculate probability was made by Max Born, who converted it into a probability measurement with the rule he proposed in 1926. Born’s Rule states that the probability of finding a particle at a given point, when measured, is proportional to the square of the magnitude of the particle’s wavefunction at that point. The wavefunction is then said to give the ‘probability amplitude’ of the particle whose path is being calculated. Born’s intervention was something of a surprise. Tim Maudlin again: “Born’s rule comes out of nowhere, and it injects probabilistic considerations into the physics without warning. Nonetheless [it] works with spectacular accuracy.” [10]
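Born’s Rule is easy to state concretely. Here is a minimal sketch, with the caveat that the discretised wavefunction below is entirely invented for illustration (five arbitrary complex amplitudes at five positions): squaring the magnitude of each amplitude yields a probability, and once the wavefunction is normalised those probabilities sum to one.

```python
import math

# A toy discretised wavefunction: invented complex amplitudes
# at five positions (purely illustrative values).
psi = [0.1 + 0.2j, 0.4 - 0.1j, 0.6 + 0.3j, 0.4 + 0.1j, 0.1 - 0.2j]

# Normalise so that the total probability is 1.
norm = math.sqrt(sum(abs(a) ** 2 for a in psi))
psi = [a / norm for a in psi]

# Born's Rule: the probability of finding the particle at position i,
# when measured, is the squared magnitude of the amplitude there.
probabilities = [abs(a) ** 2 for a in psi]

print(probabilities)
print(sum(probabilities))  # sums to 1, up to floating-point rounding
```

The point of the sketch is only that the wavefunction itself is not a probability: it is a list of complex numbers, and probabilities appear only when its magnitudes are squared.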

It is the involvement of probability in the calculation which changes everything. The probability amplitude allows the calculation of the possible behaviour of particles we can’t actually see. This means, crucially, that the ‘wave’ we are talking about when we refer to a ‘wave function’ refers to a wave ‘motion’ in the probability of the position of a particle, not in the position of the particle itself.  Or as Feynman says, “the probability of the arrival of these [electrons] is distributed like the distribution of intensity of a wave”[11]. This is an essential point. Instead of being the calculation of wave movement in a body of physical objects, the equation calculates the wave motion of the probability of the position and movement of particles. We are therefore not talking about a wave of particles of any kind. It is impossible to overstate the importance of this distinction.
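The distinction can be made concrete with a toy calculation (the amplitudes and the relative phase below are invented for illustration, not derived from any real apparatus). In the quantum calculation it is the complex amplitudes for the two paths that add and interfere; probabilities appear only afterwards, by squaring. If we instead added the probabilities directly, as a classical picture of particles would suggest, the phase would play no role and the interference term would vanish:

```python
import cmath
import math

def two_slit_probability(phase):
    # Invented amplitudes: equal contributions via slit A and slit B,
    # with a relative phase between the two paths.
    a_slit_a = 0.5
    a_slit_b = 0.5 * cmath.exp(1j * phase)
    # Quantum rule: add the amplitudes first, then square the magnitude.
    return abs(a_slit_a + a_slit_b) ** 2

# Classical rule: square first, then add. The phase plays no role.
classical = abs(0.5) ** 2 + abs(0.5) ** 2

print(two_slit_probability(0.0))      # 1.0: constructive, a 'bar'
print(two_slit_probability(math.pi))  # ~0: destructive, a 'gap'
print(classical)                      # 0.5: no bars, no gaps
```

The ‘wave’ lives entirely in this calculation of probability; nothing physical need be undulating for the bars and gaps to appear.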

This calculation of probability is really what we might, in everyday language, call a quantum shift, indicating the change from knowing where something is to only being able to predict the probability of where it is. We can now begin to understand the central role of calculation. It is the way quantum science works. 

It sounds simple, but it leads to a distinction that is often inexplicably missed. Unless the particle is actually observed, we are not dealing with the position of the particle itself, but with the probability of the position of the particle. This obvious fact is a pervasive source of confusion. If we consider that the particles must in fact have a ‘real’ position, then we have to say that we are not dealing with reality here, but with the probability of reality. We are at one remove from the things we are perhaps used to believing we see. 

Now, it might occur to us that this displacement of reality could resolve some aspects of the measurement problem, because the wave function, which is used to indicate the probability of the position of a particle, is quite clearly not the same thing as the position of the particle itself. It is only a set of probabilities. Once we actually spot the particle (which we must remember is a complex and near impossible task) the probability of its position ceases to be applicable. We will return to this point.

When we look at the bars in the slit experiment, it now seems quite straightforward to say that the bars are densest where the particles are most likely to land, and that the gaps between them represent the places where the particles are least likely to land. In fact, this was exactly the conclusion reached by John Bell.[12]

Objects and probability

Thinking about ‘measurement’ like this involves an often under-estimated shift in the way we think about what is going on. The outcome of the slit experiment leaves us with the conclusion that we are not really looking at the detection of concrete objects at all but are observing the probabilities of their whereabouts. We have a kind of probability map.

This is quite a simple concept if we think about how the bar pattern is formed, as long as we remember the function of time in the process. We have to consider that we are looking at a cumulative set of outcomes over time. To understand this simply, at macroscopic scale, think of the number of times a football might strike a particular area behind a goal, across a number of games. You would (if the teams were competent) see a pattern of strikes which is densest at and around the goal, which would reflect a higher probability of the ball striking there. So while it might seem odd in principle to have a physical representation of something as conceptual as probability, it is not hard to understand if we accept that the effect will accumulate over time. The bars in the slit experiment reflect the probability that electrons will land in certain places, like the footballs, over time (although the particles are tiny, and the timescale is so brief that the accumulation appears almost instantaneous). They simply land where they are most likely to land, and because of the interference effect they accumulate to form bars. So the bars do indeed represent a high probability of detection, and the gaps between them represent a low probability. This in turn quite clearly explains why single electrons produce the effect. Again, think of the footballs aimed at the goal. We wouldn’t think of a cluster of strikes as representing actual footballs. Imagine going to a football stand after plotting 200 strikes and wondering why there aren’t 200 balls in the stand!
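The football analogy can be sketched in a few lines. The distribution below is invented (it is not a real interference calculation); the point is only that independent single events, sampled one at a time from a fixed probability distribution, accumulate into a stable pattern of dense ‘bars’ and sparse ‘gaps’, even though no two events ever interact:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is reproducible

# An invented detector-screen distribution: high-probability positions
# ('bars') at 1, 3 and 5, low-probability positions ('gaps') between them.
positions = [0, 1, 2, 3, 4, 5, 6]
weights = [1, 9, 1, 12, 1, 9, 1]

# Fire 'electrons' (or footballs) one at a time; each lands independently.
hits = Counter(random.choices(positions, weights=weights, k=10_000))

# A crude histogram of where the single events accumulated.
for pos in positions:
    print(pos, "#" * (hits[pos] // 100))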

We now have the at least comprehensible observation that the bars formed by the electrons show us where electrons are most likely to be – where the bars are densest – and where they are least likely to be. We will now need to consider how and why the interference effect happens even though electrons are fired one at a time, which still seems puzzling.

From Copenhagen to Interpretation

As I have said, there have been a number of serious attempts to offer a better answer to this question than the one offered by the Copenhagen view, that the probability map just comes about as calculation, and does not have ‘real’ existence other than as mathematics.

These interpretations fall broadly into three categories. There are Pilot Wave theories, Objective or Spontaneous Collapse Theories, and Many Worlds theories. Most good modern introductions to Quantum Science will offer an explanatory ‘tour’ of these, along with a number of acknowledged reservations. (I recommend the two writers I have already mentioned, Becker and Maudlin.) I don’t want to do that here, as it would take all the space I have. 

Of the three approaches, I want to discuss a common peculiarity in Objective Collapse and Pilot Wave theories. (I will not discuss the Many Worlds approach in this essay. It should be pretty clear why this approach involves an objectification of probability, since it proposes that each layer of possible outcome is objectively real. Again, there are many good accounts of this approach.) 

The two theories I am interested in are related in that both concentrate on properties of the particle, respectively in terms of position and movement, and on how these properties are specified by the wave function. Both theories are, to a degree, persuasive. Both resolve the measurement problem by dealing – in opposite ways – with the problematic issue of wave function collapse, and both propose an alternative to regarding collapse as an outcome of observation.

Pilot Wave theory proposes a formula for tracing the path of a mathematically collapsed particle. It describes the position and trajectory of the particle as it is determined by the defined wave. Pilot Wave theory therefore presents a kind of extended view of the bar-like image we already have as the outcome of the slit experiment. Perhaps because it seems such a natural extension of the slit experiment, Pilot Wave theories have roots early in the development of Quantum Science. A version was first proposed by the physicist Louis de Broglie, in 1924. It was rapidly dropped by de Broglie in the face of fierce opposition from the Copenhagen clan, in 1927, but it was taken up again by David Bohm in 1952, and his interpretation of it became known as Bohmian mechanics.

Bohmian mechanics is a theory of motion describing a particle (or particles) guided by a wave. It is founded on the assumption that “While each [particle] trajectory passes through only one slit, the wave passes through both; the interference profile that therefore develops in the wave generates a similar pattern in the trajectories guided by the wave.” [My italics][13]

As we will see, the idea that it is the wave of probability that passes through both slits can offer a significant advance. However, Bohm is not really interested in the wave per se. Instead, he proposed an equation for the actual motion (the trajectory) of the particles, showing the path of each as it is guided by the wave form (that is, along the paths it is most likely to take). We should note that Bohm does not calculate the interference pattern of the wave of probability which determines where particles might be. In Bohm’s calculation, this is said to be ‘collapsed’, simply because the particle motion is calculated as it would be if it were detected, and is therefore the result of an imaginary measurement. However, the positions calculated by Bohm show how the particles cumulatively form the distinctive bar pattern, taking the most probable paths, as they pass through one slit or the other.

[Figure: An ensemble of trajectories for the two-slit experiment, uniform in the slits. (Gernot Bauer from Philippidis, Dewdney, & Hiley 1979: 23, fig. 3)]

The wave motion doesn’t collapse and ‘disappear’ in Bohm’s calculation, because we are not observing actual electrons, but instead calculating all the paths they may probably take. The motion is effectively rendered as collapsed already. Particles are seen to behave as particles probably do, and are no longer sometimes waves and sometimes particles.

Objective Collapse theory is similar only in adopting the opposite form of this kind of deduction. Instead of producing a specification for the movement of single particles, objective collapse theories offer us a much broader picture. In any wave function, collapse can occur anywhere, to any particle. Pilot Wave theory treats collapse as a directional (and therefore predictable) trajectory. Objective Collapse theory paints collapse as a series of entirely random events across the whole field of particles in any wave, and across any number of waves to infinity. In this way, Objective Collapse theory offers a way to imagine tiny particles as constituents of much larger material objects. It imagines the random occurrence of continual wavefunction collapses, as they arise across all matter, as momentary or ‘point events’, or what are sometimes referred to as ‘flashes’. These are regarded as occurring entirely by chance, and their occurrence is calculated as such. From billions of these accumulated random flashes, macroscopically observable matter is generated. In this theory, God emphatically plays dice. Collapse is entertained as a value of probability. No question therefore needs to arise of observation or measurement. Collapses happen without such values, by necessity. In this theory, as in pilot wave theory, the collapse is projected as having happened, but it is evidenced not by particle trajectories, which are imaginary calculations, but by visible objects at the instant of collapse. Therefore, the collapsed position of the object precedes any other observation or position. If we assume that the collapse happens ‘now’, the present, if you like, comes first. Maudlin explains this very well, writing that collapse is not regarded as revealing a pre-existing state of the particles. He uses the analogy of a flipped coin: the coin is neither heads nor tails up prior to the event. Instead, a reverse logic is applied to identify ‘heads’ or ‘tails’[14]. 
The landing of the coin is, if you like, a point event, and his analogy reveals a critical feature of point events which is that they can be recognised only in retrospect. Conceptually, they have no pre-existence and cannot be considered as a matter of cause and effect. We need to hold this thought, and I will return to Maudlin’s point about the coin. 

Maudlin draws on Bell, who explained as follows: “the… jumps [flashes] are well localised in ordinary space. Indeed each is centred on a particular space-time point. So we can propose these events as the basis of the ‘local beables’ of the theory”[15]. [‘Beable’ is Bell’s expression for the ontological features of an object or process.] Bell tells us that these ‘beables’ are the “mathematical counterparts to real events” and that matter is therefore “a galaxy of such events”. Observable macroscopic matter is a ‘galaxy’ of unpredictable and random collapses, which are not in any other way measured or observed. The theory therefore works backwards, as it were, from larger and more complex objects which are already formed and observed.

The important point to make here about these two theories is that, in both approaches, collapse is viewed as if it had happened. It is either deduced from an observable object, or all possible paths are considered. It is thus literally objectified.

Both of these explanations offer clear and simple solutions to great mysteries. Bohm’s specification of particle trajectories dissolves the wave/particle duality in a straightforward manner, while Objective Collapse theory begins to give an apparently plausible explanation of how the collapse of probability forms our materially evident world from invisible matter.

If these theories are so clear and simple, and make so much sense, we should return to the question of why they have not been widely accepted, or are not at least generally regarded as a plausible and authoritative explanation of the mysteries thrown up by the standard approach. 

Curiously, there is an anxiety common to both approaches, rooted in the objectification they propose. This has been more clearly expressed in relation to Bohm’s theory than in relation to Objective Collapse theory, perhaps only because it has been around for longer. The reservations with regard to this theory centre around the idea that the electrons are being ‘guided’. Bell notes this, observing that the outcome is “quite deterministic”[16]. Human observation or intervention is given no role. Carlo Rovelli echoes the objection. In his book Helgoland, he says “The behaviour of the electron is determined by variables (the wave) that for us remain hidden. The variables are hidden in principle: we can never determine them”, and he continues, “The price to be paid for taking this theory seriously is to accept the idea that an entire physical reality exists that is in principle inaccessible to us”[17]. The path and position of the particles is not determined by us but by Something Else – ‘an entire physical reality’, as Rovelli has it – so that the effect is to make reality inaccessible to humans.

However, there is an obvious counter to this objection. This is essentially the same with regard to both theories, but again I will deal with Pilot Wave theory first. The reason Bohm’s trajectories appear deterministic is because they are focussed on a material outcome. Bohm’s theory follows Schrödinger’s equation in that it is still linear – it describes the motion of electrons. If we return to my previous analogy, and the point about the separate nature of wave and electron, we might observe that the wave is no more guiding the electrons than the strike pattern is guiding the football. The pattern is cumulative over time. We have no idea where an individual electron will go, any more than we can predict an accurate free kick or a goal. The wave is only present in the sense that, as with the detector screen, it reflects the outcome of what has happened. There is no active guidance at play other than in retrospect. The wave motion is in the probability of the electron’s position, and is not in the motion of the electrons themselves. Electrons do not move in waves, any more than kicked footballs do. ‘Pilot wave’ suggests that the wave is somehow secondary to the electron, whereas in fact it exists in its own right. It is a little like observing a wave by tracking the seaweed it carries. We seem to have forgotten that probability is a priori, like the wave. Therefore, although it is possible to calculate the trajectories of the particles, it is slightly odd to do so.

And we can make the same observation in relation to Objective Collapse. The flashes trigger a calculation of frequency, and objects are determined, insofar as we can say they are determined at all, only by these and not by any cause or pre-existence. The collapse just happens as it will, at random, so as with Maudlin’s example of the flipped coin the process according to this theory is entirely predicated on the outcome. If this is the case, we might wonder why we need to know, as perception would then be determined only by unpredictable events. The argument seems to be a circular one, where events are determined by outcomes which are random, so that they can only be said to have happened because they have happened. Clearly, there is no way to influence or change or add to an outcome by understanding it, or to influence its consequence. Heads is heads, tails is tails. An event only has ontological validity because it has happened. Causality appears to have caught us out.

In Bell’s memorable words, this seems a high price to pay to make a world.[18]

Probabilities not particles 

So this, I believe, is the unsatisfactory state of things. 

It prompted me to ask the question with which I began this essay. Why are we so fixated on giving probabilistic events objective outcomes? And this question in turn made me wonder what actually happens if we avoid doing that. Do we really have to accept a vision of the world which only works in retrospect? Perhaps it might be better to imagine a quantum state the other way around?

So here is a way of doing that.

We return to the slit experiment. If the bar pattern is a given, let’s imagine we have a supercomputer the size of a planet, and might therefore be able to work out a proportion of the infinite number of probabilities of all the possible positions of all electrons in the space approaching the slits and on the detector side of the slits. This is not simple. If we are calculating the probability of the position of an electron which may have passed through slit A, we would have to take into account, not just the fact that it may have passed through slit B, but all the positions it may have ended up in if it had done so. Moreover, because all probabilities of one event, when calculated, must in total add up to one (or 100 if we use percentages), each calculation of each possible position must have an effect on the calculation of each other possible position. We then have to observe that, given that the probability of the position of a particle moves in wave form, it is the accepted case (and the premise of Bohm’s calculations) that the probabilities on the detector side of the slits would, if calculated, show that the probabilities of the electron passing through either one slit or the other would interfere with each other, more or less in the same way as a wave of water would.[19] Each calculation of the probability of an electron’s position beyond the slits would have to take into account the effects of other calculations that the electrons might be in positions which slightly change the probabilities of these positions. This interference would have an effect on the probability of the position we are trying to calculate.

We can then imagine that this wave of probability, having passed through both slits equally, might produce ripples of probability emanating from slit A which would ‘collide’ with other ripples of probability emanating from slit B. While there is only one electron at a specific instant, there is a whole body of probability attached to its possible position. It is important to reiterate that we are talking about calculations of probability, not physical measurements, and not electrons. This calculation applies to the movement of probability with respect to other particles and also with respect to the past and future positions of the particle that has been calculated. The probabilities of these positions progress through time like a physical wave, but that is just our imagination at play. They are quite clearly different. 

This picture is perhaps quite clear, but we might ask, why is this different to Bohm’s proposal or objective collapse theories, both of which indeed assume all of the above?

The difference is that we are remembering that the interference pattern occurs in a wave of probability, and that this non-object is our object of interest. We are not really interested in the paths or positions of the particles at all. The position of the particle may be determined, but it is determined by probability. Both position and movement are only calculable and comprehensible as probability. But these are secondary to the wave form, which is a priori. Therefore our only objective concern is this wave, which is not objective at all in the normal sense.

Isn’t it then the case that, if we treat the probability wave as the primary reality, we can resolve both the measurement paradox, and the objection to the deterministic nature of the pilot wave, and while we are at it the issue of dice-playing in objective collapse too[20]? This idea allows us to consider that the electrons are particles with a linear trajectory and position at any time, and never waves themselves. The probability wave, meanwhile, is not in this case linear. We might say it is an un-collapsed field. It must always be there, quite different to any dimensions of space we normally imagine. It is a latent multi-dimensional space any electron might fill, and we could imagine that this latent space might indeed have crests and troughs and ripples and interference patterns far beyond the scope of the crude equipment set up to conduct experiments. 

The picture then is of an infinite number of possible paths, all of which would exhibit the effects of interference from other possible paths, and these possible paths, and their interferences, explain why the electrons form bars at the detectors, but are only ever at one slit or the other if measured. The bars formed would then indeed be bars of varying probability. 


This is interesting, but vague. In order to find some more useful language we need to consider in more detail what a wave of probability might mean. 

At this point in the argument it is essential to be clear what we mean by ‘probability’, and here I return to Maudlin’s reference to a flipped coin. I pointed out that while to consider the outcome of a coin toss to be only understandable in retrospect might enable an interpretation of quantum science, it is not terribly useful if we do actually want to consider the probability of events. In this simple instance we can say that the probability of heads or tails is about 50% with some certainty, but the same is not the case if we are considering the probability of being run over by a car, or catching Covid. These are far more obviously complex events, and they show us that the probability of an event is never separate from contextual events and actions. An approach to this sort of calculation was suggested by Thomas Bayes, who proved a special case of what is now called Bayes’ theorem in a paper titled “An Essay towards solving a Problem in the Doctrine of Chances”[21]. I don’t want to go into all the complexities of Bayesian probability and its descendants here, except to say that the Bayesian formula takes into account both the incidence of the occurrence of an event and, critically, the incidence of the information from which the probability of the event is calculated. This is a very important addition. So to put this simply, if you want to know whether you have Covid, the calculation of the probability of a positive verdict must take into account not only the overall possibility that you might have the disease but also the accuracy of the test providing the information (and the ‘test’ could range from a simple one against a checklist of symptoms, to the taking of temperature, to chemical tests of varying accuracy). In a coin toss, the measurement of outcome is simple, and because it focuses us on the outcome it fools us into thinking that this must be what probability measures. 
But in most everyday encounters with probability, inputs are much more important than outcomes. It is the consideration of all entangled information that allows us to avoid an overly deterministic (and therefore misleading) conclusion.
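The point can be made concrete with a short sketch of Bayes’ rule. The figures below are hypothetical, chosen only to illustrate how the accuracy of the test changes the meaning of a positive result.

```python
# A minimal sketch of Bayes' theorem with invented numbers: the probability
# of actually having the disease, given a positive test, depends on the
# quality of the information as well as the background rate of the disease.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Assumed figures, for illustration only:
prior = 0.01                # 1% of the population currently infected
sensitivity = 0.9           # P(positive | infected)
false_positive_rate = 0.05  # P(positive | not infected)

print(round(posterior(prior, sensitivity, false_positive_rate), 3))  # prints 0.154
```

A positive result raises the probability from 1% to roughly 15%: most positives come from the much larger uninfected group, which is exactly why the incidence of the information matters as much as the incidence of the event.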

We can understand this point quite easily if we use the football analogy again. When we watch a game, of course, we make the underlying assumption that probability is not at all deterministic. We generally consider that the flight of the ball – and the probability that it will beat the keeper and fly in a perfect trajectory into the top right-hand corner – is not entirely out of the hands, or the feet, of the kicker. It might ultimately be determined by probability, but the probability is to a high degree directed by other factors, one of which might in this case be the skill and experience of the player. The probability that I could put the ball into the top right-hand corner, for instance, is significantly different to the probability that Cristiano Ronaldo could do it. Probability is integral with action and context, but the two are by no means the same thing. Physicists use that word, ‘entangled’.

The lesson we must learn from all this is that we must always remember that a wave of probability is not somehow a wave of something else. It is not a wave of electrons, or footballs. It is not another physical reality. To adapt de Broglie’s words, we do not have to choose between waves and footballs. They are different things, so we can have them both![22]

We can also see, using this macroscopic example, why it is easier to calculate trajectories or positions than probabilities. It is, of course, very much simpler to calculate the range of trajectories a ball might take, or the positions in which you might find the ball, than to calculate all the physical entanglements that actually do determine whether a goal will be scored this time, and with what frequency. In fact, even with several planets of supercomputers, it would be impossible to calculate all the contextual factors involved. They can only be imagined. This is quite different to a coin toss, or even to the double slit experiment where, in a tiny and well-defined arena, the analogy of the probability wave with the physical wave we see in the movement of water can be easily imagined and its outcome calculated. The action of an electron is very simple. The action of a large animate object may well be analogous, but its complexity is beyond physics as we know it. I have mentioned Bayesian theory. In fact, a whole literature of theory about probability has evolved to accommodate this complexity.

I have been talking, here, about how macroscopic objects become ‘entangled’ with their probabilistic context. The idea that the lives and skills and feelings of people become intertwined with events is a familiar one. The entanglement of very small objects has on the other hand been a highly contested area of quantum physics, and its meaning is poorly understood. We can see how poorly if we consider a further supposedly mysterious outcome of the double slit experiment. 

We have seen, and attempted to explain, a wave motion in particles. But here is a further twist: if a detector is placed at the slits to determine which slit the particle passed through, the interference pattern, and therefore the wave motion, vanishes. As a result, ‘observation’ is said to change the experiment. Feynman helpfully explains this in terms of probability: “If an experiment is performed which is capable of determining whether one or other alternative is actually taken, the probability of the event is the sum of the probabilities for each alternative. The interference is lost”[23]. In principle, the idea that removing the need to calculate probabilities might have an effect on outcomes which are effectively the observation of probability does not seem too mystifying. But while what Feynman says is correct, he does not account for how the effect happens in any other way. That is, he does not attempt to understand the entangled nature of the probabilities involved.

Bell explains a little more what actually happens: “Suppose… that detectors are added to the setup just behind slits 1 and 2 to register the passage of particles. If we wish to follow the story after these detectors have or have not registered we cannot pretend that they are passive devices… They have to be included in the system.”[24] Any detection device, Bell says, will become entangled with the particles. 

Entanglement is therefore critical. Schrödinger wrote of it: “I would not call that one but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought”[25]. And Bell says, of the slit experiment apparatus, “We see that our treatment of the electron gun so far is neither complete nor accurate. If we wish to say more, about its performance, then we have to say that it is made of atoms, of electrons and nuclei. We have to apply to these entities the only mechanics that we know to be applicable … wave mechanics.” He continues, we are led “to include more and more of the world in the wavy quantum mechanical ‘system’.”[26]

Tim Maudlin explains in simple terms how entanglement might actually take place to remove interference in a slit experiment, under the conditions Bell describes.[27] In fact, the reason why Maudlin’s book is such a stellar exception among the hundreds of existing discussions and explanations of this disappearance of interference under observation is that he describes in full, and in a way that can be easily understood, an apparatus that might be used to detect the electron. What actually happens is less mysterious than we are led to believe.

The setup Maudlin describes is simple. A compartment between the slits contains a proton, which can be drawn to one slit or the other by attraction to the electron. The interaction is made visible, since it will emit a flash at one slit or the other. 

What happens is exactly what we might expect. The attraction of the proton to slit A or slit B is divided more or less equally between the slits over time. But what is surprising is that when this detection apparatus is operated, in this hypothetically perfect experiment at least, the interference pattern is said to completely disappear (in reality, it will fade in direct proportion to the clarity and degree of detection). Maudlin then makes the same observation as Bell, that the effect of entanglement in the Double Slit with Monitoring (sic) is to “make the interference pattern in the regular Double Slit go away”[28], an effect, he says, that Feynman failed to explain. 

How does this entanglement happen? Maudlin uses the concept of ‘configuration space’ to explain it. Configuration space is the idea that the calculation of the possible positions of a single particle – its mathematical configuration – defines a three-dimensional space which has to be considered as separate from the space indicated by the calculation of the possible positions of any other particle. That is, the probability of the particle’s position is calculated for a defined three-dimensional space. But each configuration for each separate particle then occupies a further set of three dimensions which have to be considered separately, simply because each is a different set of probabilities. We then have a system where each configuration of each particle adds a further three dimensions. The configuration of two distinguishable particles is then a six-dimensional space, and each further distinguishable particle adds three more dimensions. The number of dimensions we have to consider for n particles is therefore 3n. Although the outcome of this is bewilderingly complex, the principle is easy to understand as long as we remember that all these dimensions represent sets of the three-dimensional probabilities attached to the possible positions of each particle we attempt to configure. So we can say that the configuration space of a particle, in the simplest terms, at any moment in time, is the three-dimensional space occupied by all the possible positions of that particle. These spaces are literally dimensions in probability. It follows that configuration space in general is a multi-dimensional space occupied by all the possible positions of all distinguishable particles.
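The dimension-counting here is simple enough to state in a few lines of code. This is only a restatement of the arithmetic above, with the function name my own invention:

```python
# Each distinguishable particle contributes three coordinates to a single
# point in configuration space, so n particles are configured in 3n dimensions.

def configuration_dimensions(n_particles, space_dims=3):
    return n_particles * space_dims

assert configuration_dimensions(1) == 3    # one particle: ordinary space
assert configuration_dimensions(2) == 6    # two particles: one point in 6-D
assert configuration_dimensions(10) == 30

# One point in 3n-dimensional configuration space specifies a complete
# arrangement of all n particles at once; the wave function assigns an
# amplitude, and hence a probability, to each such point.
```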

As we have seen in the double slit experiment, the action of the slits means that a single electron must then have a double configuration space once it passes through the slits, because there is a set of probabilities for each possible position at each slit. If there is no detecting proton, the calculation of the probabilities of the electron’s position – its configuration space – is made as if the electron had passed through both slits at once. (And we have seen how the probability of the particle is often confused with the particle itself.) The configuration space on the detector side of the screen must be calculated for the two sets of possible positions, one for each slit, and then ‘superposed’, that is considered simultaneously, as if it were in one configuration space. We can see how we can use configuration space to produce an explanation of the interference pattern in terms of where the particle is likely to be, which then explains the bar pattern on the detector screen. 

However, the presence of the proton has a dramatic effect on this process. The two sets of possible positions become separated in configuration space because of the proton’s contribution to the configuration[29]. The action of the proton effectively means that we have to ‘start again’ in our calculation of possible positions. It identifies one of two spaces, and therefore divides a space which we previously had to consider as single into two. This in turn means that we have to consider a separate possible configuration space for each path of the electron once it has passed through the slits, so that there will no longer be any superposition, nor any interference.
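This separation can be sketched in a toy model. What follows is an assumption-laden illustration in the standard quantum-mechanical form, not Maudlin’s own calculation: each slit branch of the electron is paired with a ‘pointer’ state of the proton, and the interference term is weighted by the overlap of those two pointer states. A perfectly discriminating detector means orthogonal pointer states, overlap zero, and the interference term vanishes exactly.

```python
import numpy as np

# Toy model: two branch amplitudes a1, a2 for the electron, each entangled
# with a proton 'pointer' state. The detection probability is
#   |a1|^2 + |a2|^2 + 2*Re(a1 * conj(a2) * <p2|p1>)
# where <p2|p1> is the overlap of the two proton states.

def detection_probability(a1, a2, pointer_overlap):
    return (abs(a1) ** 2 + abs(a2) ** 2
            + 2 * np.real(a1 * np.conj(a2) * pointer_overlap))

a1 = 1 / np.sqrt(2)                     # branch through slit A
a2 = np.exp(1j * np.pi) / np.sqrt(2)    # branch through slit B, destructive phase

print(detection_probability(a1, a2, pointer_overlap=1.0))  # no detector: ~0 (destructive)
print(detection_probability(a1, a2, pointer_overlap=0.0))  # perfect detector: ~1
```

With partial which-path information (an overlap between 0 and 1) the pattern fades in proportion, which matches the remark above that in reality the interference fades with the clarity and degree of detection.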

This still perhaps seems rather abstract. But the important conclusion we need to understand is that this effect is produced by the action of the proton, not by observation. The probability is not affected by detection, but by the means of detection. 

This is again critical. We can see how it is tempting to work backwards, and suggest that if the electron is observed the configuration space must be changed by observation. Clearly, if we know which slit the electron is passing through, the question of probability vanishes because we know. But this is reverse logic, or ‘coin toss probability’. We don’t know what we are about to know. Certainty only displaces probability in retrospect. 

What actually happens is much more obvious and prosaic. Observation doesn’t come into it at all. The probability changes simply because the apparatus has changed it. As Maudlin says, it is due to the coupling of the electron and the proton[30]. The football analogy is useful again. Imagine that our footballer kicked the ball into a long tube directed at the goal. We wouldn’t conclude that the consequent delivery of 100 consecutive accurate strikes was then due to the fact we were looking! ‘Looking’, in this instance, would involve the introduction of a long tube, just as ‘looking’, in the monitored slit experiment, involves the introduction of a detecting proton. This might seem a frivolous comparison, but it makes a serious point, which is that we have introduced a new piece of equipment to the experiment which is directional. If we were to believe that the understanding of probability was the result of observing and recording the outcome we would be in a very strange and dangerous place. But this is exactly what Feynman appears to do. Moreover, it is what both Objective Collapse and Pilot Wave theories appear to practise, because both calculate backwards from ‘objective’ appearances of particles as matter. 

The confusion has arisen, of course, because of scale. We can’t see what we’re doing, and we are therefore guessing at processes from outcomes, which is why in both of these otherwise plausible theories our attention is focussed on the collapse of the underlying wave. But we cannot then assume that the outcome determines the process, or that there is no determination at all. It is more sensible to conclude that we have not been able to see it.

Is there anything spooky going on?

So far I have been a little dismissive of some of the claimed obscurities of quantum physics. But now we come to a difficult one: the phenomenon that Einstein called “spooky action at a distance”. It was set out in a thought experiment, known as the EPR experiment after its creators, Einstein, Boris Podolsky, and Nathan Rosen. 

The experiment is as follows. Electrons have spin. The spin of an electron can be oriented around any axis. If two electrons are prepared together, then released, and one is then measured by means of a magnetic apparatus at an arbitrary distance, its spin will be known to be the exact opposite of the spin of the other instantly, even though no such correlation is measured at the point of origin. 

The magnetic apparatus involved in this measurement is called a Stern-Gerlach magnet: an arrangement of two magnets placed asymmetrically, so that the north pole creates a locally stronger magnetic field than the south. A spinning electron creates a magnetic field of its own, and so its spin can be measured by this apparatus, since the electron will be deflected by the magnet. 

Bell explained the conundrum in this phenomenon in a characteristically engaging way. One of his colleagues always wore odd socks. So, he said, if you observed a pink sock as he stepped into a room, you knew, faster than the speed of light, that the other sock was not pink. No mystery. But, as he then observed, it would indeed be mysterious if this information did not previously exist at all, and neither sock previously had any attributed colour. This is the problem posed by EPR. Einstein believed that the outcome must be wrong, not because the socks suddenly acquired colour (although that would be a mystery) but because understanding previously non-existent information about one object at the same instant as understanding corresponding new information about the other is impossible. Information cannot travel faster than the speed of light, however transmitted, and this ‘spooky action’, Einstein said, indicates that quantum science is incomplete in its understanding of the characteristics of particles. 

This led Bell to invent his experimental proof of the phenomenon, known as Bell’s Inequality. Without going into detail, he proved beyond doubt that the spooky action does indeed happen, and that particles do not have innate properties (‘colours’, if you like) which would enable information to be transmitted instantly. Therefore, it is argued, there are no such ‘hidden variables’ in the particle, and ‘spooky action’ is the only explanation. 
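Without reproducing Bell’s derivation, the numbers can at least be checked. The sketch below uses the standard textbook correlation for a pair of spins prepared in the singlet state, E(a, b) = −cos(a − b), for detector angles a and b; this is not taken from the paper under discussion. The CHSH combination of four such correlations cannot exceed 2 on any ‘local hidden variable’ account, yet the quantum prediction reaches 2√2 at suitably chosen angles.

```python
import numpy as np

# Quantum correlation between spin measurements at detector angles a and b,
# for an entangled singlet pair (standard textbook result).
def E(a, b):
    return -np.cos(a - b)

# Angles chosen to maximise the CHSH combination.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

# CHSH: any local hidden-variable theory obeys |S| <= 2.
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))   # ≈ 2.828, i.e. 2*sqrt(2), exceeding the local bound of 2
```

The violation of the bound is what experiments confirming Bell’s Inequality actually measure: the particles cannot be carrying pre-assigned ‘colours’.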

Was Einstein therefore correct?

Now, I have been a little vague and over-generalised in this explanation, but intentionally so, because to go into all the detail here would be a waste of time. The reason is again eloquently set out by Maudlin: 

“As a piece of physics, [the] experiment essentially involves the precise geometry and orientation of a Stern-Gerlach magnet and the magnetic field it creates. A truly fundamental and universal physics ought to treat this situation via physical description, irrespective of conceptualising it as the ‘measurement’ of anything.”[31]

Maudlin indicates the same old problem. Just as in the slit experiment, where detection and measurement are involved, there is much calculation but no detailed description of the apparatus. We have seen that even in a process as simple as detecting which of two slits an electron passes through, apparatus has a key role to play. In this experiment the apparatus is much more complex, but as Maudlin objects, it is described and used without rigour. The magnets apparently float in mid-air, according to the standard description and accompanying diagram, and while mention is made of the two electrons being ‘prepared’ together, there is no description of how this is done and what exactly the preparation entails. The calculations appear to present a choice between there being a hidden and undiscovered property in the particle or some inexplicable spooky action but, while the maths might be precise, the physical reality is not. We saw in the slit experiment that the so-called mystery was brought about by the entanglement of particle and apparatus. We might infer that a similar thing happens here. Maudlin continues: “the approach… short-circuits all the real gritty physics. Rather, we are invited to just somehow conceptualise the entire physical situation” so that none of the conceptual deduction “follows in any rigorous way from the physical description of the laboratory apparatus”. His conclusion is damning. “We are told to regard the physical set-up as a measurement but are not told why this assumption is legitimate, or how to determine whether some other laboratory arrangement is a measurement, and if so of what.” [32]

So Einstein, Maudlin argues (and I think in this he follows Bell entirely), was right, though perhaps not for the reason for which it has been argued he was wrong. The science is incomplete. As he says, “a completed physics should illuminate why just this sort of physical situation ought to be treated with this particular mathematics. The standard approach systematically hides this basic physical question from view”.[33]

I should add a note here on the subject of ‘locality’, since the argument about what makes the particles behave as they do is often framed in terms of whether they have ‘local’ properties or not. In this context, it is another way of describing ‘hidden variables’. Locality would dictate, in this instance, that the particles must have intrinsic ‘local’ properties which would always make them behave as they do. So does the argument lead us to the assumption that this is indeed the case? It cannot, because Bell’s Inequalities prove beyond doubt that the particles have no such properties. Does this mean spooky action is the only recourse? What Maudlin is saying, and what Bell implied, is that while it is the only recourse in quantum theory at present, it is not a sound conclusion, because of the failure to prove its legitimacy by any means other than the usual one, that it works. It is possibly correct, of course, but if only for reasons of apparent absurdity, we would have to consider that it is improbable.

The lack of precision is particularly frustrating, because in its absence we are once again asked to make assumptions that defy common sense upon the basis of functionality. ‘It works’, irrespective of whether it makes sense. But it is possible to draw our own more rational conclusions from what we are asked to believe is happening. 

Let us begin with what we most nearly know. Whatever is done to the electrons, they are ‘prepared’ together, so we assume that they must be entangled. We then need to avoid the mistake we saw in the case of the double slit experiment, of creating a ‘measurement problem’. This problem arises if we assume that the particles and the probability of their position and movement are somehow the same thing. We have seen that this leads to the error of having to conclude that the particles are sometimes particles and sometimes waves. But entanglement is never an intrinsic property of particles. They are not the same as their ‘configuration space’. So Bell’s Inequalities stand, as they must, and the particles themselves do not have ‘local’ properties. 

We might then assume, from the reported behaviour of the electrons at the magnet, that in the course of ‘preparation’, whatever that is, an equal and opposite charge is in some way applied to the electrons which is undetectable until they encounter the magnets. This, of course, is exactly what we are told we cannot assume, ironically because we don’t know. 

We can depend on one certainty in place of this knowledge, however, and it is this: while we might have two particles, their configuration space – the calculation we make of the probability of their whereabouts – will be entangled. This means that we have to regard it, in terms of the probability of the properties we have entangled, as occupying the same configuration space. We must then assume that, because we are dealing with this single configuration space, we must always configure the electrons so that the probability of the position or movement of both, whatever that might be, will total one, or one hundred percent. In terms of the configuration of probability, each must affect the other. Entering into a common configuration space is the real meaning of entanglement. Each electron no longer has its own configuration space, because, at least with regard to the entangled properties we are testing, their configuration must now be part of the same calculation of probability. Therefore, the behaviour of one electron in configuration space – which is to say in terms of probability – will, if it is determined, also determine the other. This is to say that the ‘collapse’ of the wave function in relation to one particle (which is said to happen when we measure the particle) will instantaneously result in the collapse of the wave function in relation to the other (that is, we will understand the same properties in the other electron), because their entanglement in configuration space means that the determination of one collapsed wave function is also the determination of the other. And of course, it is certain that there must also be further entanglements with the preparation process and with the eventual detection apparatus which we are not in a position to calculate but which would, if known, also have to be incorporated into the same configuration space, and which would then affect our calculation in the same way.

So this phenomenon is explicable to the same degree as the outcomes of the slit experiment, and by the same means, by the fact that we are not talking about the characteristics of particles but about the complex web of probabilities that determine those characteristics. Because we are talking about the configuration of a single probability incorporating both particles (and our apparatus) what happens to one particle determines (to a degree, subject to the apparatus) what happens to the other. It is all part of the same sum. And the actual behaviour of the particles has the same relationship with reality. While we are talking about determination as if it is predictive, once again we must remember that the determination is of the complex kind of probability, so while there is clear causality we only understand it in retrospect. Probability is, as before, at one remove from a ‘real’ state.

Going back to Bell’s socks, the analogy can be applied, except we have to consider that the colours of the two socks are related by probability, and entangled. That is, they may well have no specified colour to start with, but whatever colour one ends up with directly affects the colour of the other. This perhaps becomes a little far-fetched, but, if we must have a story about socks, let’s say that we have an ancient dyeing machine that doesn’t work very well, and that dyeing one sock jet black means the other sock will come out pure white, and that dyeing one sock dark grey means the other will be proportionately light grey, and so on. What’s more, we don’t see the outcome of the dyeing process until one sock is spat out of the machine. Any shade of grey is possible thanks to the unpredictability of the machine. Putting two socks into the dyeing machine would then be our ‘preparation’ of the socks, entangling the outcome with respect to the characteristic of colour. While we wait, any attempt at determination of the colour of both socks is in the same configuration space, which is to say that any guess at the outcome for one sock is also a guess at the outcome for the other, subject to the operation of the apparatus of course. Actually observing the first sock that the machine spits out – which more or less determines the colour of both socks – is like passing the electron through the magnet. And quite clearly, the eventual acquisition of related colours is absolutely nothing to do with any intrinsic property of the socks themselves, even though determining the property in one determines the equivalent property in the other!
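For what it is worth, the dyeing machine is easy to simulate. This is only a restatement of the analogy in code, with the details (the shade scale, the seed) invented for illustration: neither sock has a shade until the machine runs, but the two outcomes are drawn from a single shared probability, so observing one sock fixes the other.

```python
import random

# Toy simulation of the unreliable dyeing machine: one random draw
# determines BOTH socks, so the outcomes are perfectly anti-correlated
# even though no individual shade exists before the machine runs.

def dye_pair(rng):
    shade = rng.random()            # 0.0 = pure white, 1.0 = jet black
    return shade, 1.0 - shade       # the pair of shades always sums to one

rng = random.Random(42)             # arbitrary seed, for repeatability
for _ in range(3):
    first, second = dye_pair(rng)
    # 'first' is unpredictable in advance, yet seeing it determines 'second':
    assert abs(first + second - 1.0) < 1e-12
```

The anti-correlation lives in the single shared draw (the common ‘configuration’), not in any property the socks carried into the machine.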

Relativity and probability

We have now examined three of the best known tests of quantum theory, and in each we have found the same error with regard to probability. The conceptual confusion of the position of a particle with the probability of the position of a particle, or of the movement with the probability of movement, is pervasive, and we have seen that it is found both in the standard textbook interpretations and in the otherwise credible proposals we have examined which seek to overcome the measurement problem. It was the presentation of entanglement as an objective reality by physicists that led Einstein to call it ‘spooky action’. It is this fault line alone that has created the perception of quantum science as drastically different and, as Bell observed, the confusion has allowed thought experiments and interpretative conclusions that drastically lack rigour in their conceptual processes.

To reiterate, the fault is this. There is a repeated failure in many interpretations to understand that talking about the probability of events is not the same as talking about events themselves. I have said that this error is ‘the objectification of probability’. Probability is repeatedly treated as if it were materially real. It is not. This is not to say, of course, that probability is not in any sense real. It is still part of a description of an event. But it is displaced from our ‘normal’ perception of reality in the following way. If a series of probabilities relating to a single event or object is calculated, the actual event or object is only represented by the sum of all the probabilities considered. Together, the set of probabilities adds up to 1, or to 100%, if you like. Many probabilities can therefore relate to a single event. Many probable or possible events can ‘be’ a single event. In theory, an infinite number of possible events might ‘be’ a single event. So a single probability can never constitute a complete material event, and without corresponding probabilities it is not meaningful at all. This unreality is not a difficult concept to understand. It is meaningless to talk about a coin landing heads up. It is not an event in itself. It is part of a probability which also includes tails up. The two possibilities might together make a single possibility, subject to conditions under which the coin is flipped. The strangest thing about the treatment of probability in quantum physics, perhaps, is the way this simple distinction between a chance and an actual event is continually blurred. Of course, because we are talking about events we can’t see, probability is all we know – but of course that does not change their probabilistic nature. The objectification of probability has become a significant obstacle to understanding. 
It means that, conceptually, particles and waves have sometimes become mistaken for each other, to the extent that, absurdly, particles are considered sometimes to be waves. We have seen that this is not the case. Waves of probability and particles have independent coexistence.

As a concluding thought, we might consider whether, if we do not objectify probability, our understanding of the relationship between quantum and classical science might change. 

It is, perhaps, quite easy to see why every plausible (probability based) explanation of quantum physics might violate special relativity. Einstein realised that if the speed of light is a constant, then time must be variable, and ‘local’ to a specific frame of reference, which is to say a specific event and context. But the calculation of probability quite clearly does not respect this frame of action, because many probabilities are simultaneously required to constitute a single event. It gets worse, too, because as we have seen, the calculation of the probability of any macroscopic event is an entangled task. Even the calculation of an approximation of an event must be accumulative across both time and space – that is, a single event is always subject to modification by consideration of another adjacent probability, and events are modified as probabilities accumulate and are superposed, so that their actual probability is continually varied and is almost always unknowable as a precise value. To estimate the probability of an event – as we continually do in order to negotiate everyday life – requires a large number of possible event outcomes to be viewed simultaneously, not just at one moment in time, but continuously. 

It is therefore said that special relativity makes this impossible: according to special relativity, time is local, and so there is no objective time structure.

Consciousness tells us that this is an error, and the error lies in a false assumption. A probability in itself is never a complete event, because events are never static and are continually entangled. It is in this respect that Einstein was right to say that a quantum state is not complete: we might say that it is instead part of an accumulating event. This means that probability has a strange and special relationship with time. In probability, time is always non-relativistic, because each set of probabilities is a closed system. The sum is always one event, or, if the probability is expressed as a percentage of one event, the sum is always 100. This means that all probabilities are, in fact, relative only to each other. The examination of probability is accumulative over time, even though the calculation of those probabilities as a single event must be simultaneous, and this accumulative understanding is how we make probability useful. As I have explained, while at first glance this is a paradox, it is actually quite easy to understand the error that makes it look so. We just need to think about footballs again. The map of probability fading away from the goal is compiled over time, but it gives us a single and instantaneous probabilistic understanding of the destination of the ball. Crucially, it is ‘non-local’ because it is independent of time and place: there are never 200 balls in the stand behind the goal! To imagine that there were would be to objectify our understanding of probability. And that error is precisely the one we make when we say that quantum mechanics violates special relativity.
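The football map can be sketched in a few lines. This is a toy simulation under assumed numbers (the zone names and the 70/25/5 split are hypothetical, chosen only for illustration): the map is compiled kick by kick over time, yet the normalised result describes the chances of a single future kick all at once, and at no point do multiple balls exist simultaneously.

```python
import random

random.seed(0)  # reproducible illustration

# Accumulate outcomes of many kicks over time. Zone names and the
# underlying 70/25/5 split are assumptions for the sketch.
kicks = 200
zones = {"goal": 0, "near miss": 0, "stand": 0}
for _ in range(kicks):
    r = random.random()
    if r < 0.70:
        zones["goal"] += 1
    elif r < 0.95:
        zones["near miss"] += 1
    else:
        zones["stand"] += 1

# Normalise the counts into a probability map for one future kick.
prob_map = {zone: count / kicks for zone, count in zones.items()}

# The map's weights sum to 1: together they describe a single event.
assert abs(sum(prob_map.values()) - 1.0) < 1e-9

# Each kick had exactly one outcome; the map is a description compiled
# over time, not 200 balls existing at once.
assert sum(zones.values()) == kicks
```

The point of the sketch is the contrast between the two final assertions: the distribution is built accumulatively, but what it describes is one event, not a pile of objects.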

[1] See for example Tim Maudlin, Philosophy of Physics – Quantum Theory, Princeton, 2019; Adam Becker, What is Real?

[2] Adam Becker, What is Real? p5

[3] Tim Maudlin, Philosophy of Physics – Quantum Theory p2

[4] Ibid

[5] J. S. Bell, Speakable and Unspeakable in Quantum Mechanics, p171

[6] Richard Feynman, Six Easy Pieces, Basic Books, 2011 ed., p116

[7] Ibid


[9] Jeremy Fiennes, The Copenhagen Trip – the Dicey Interpretation of Quantum Physics, 2020 

[10] Op Cit p47

[11] Feynman

[12] Speakable and Unspeakable in Quantum Mechanics, “[T]he particle is guided by the wave towards places where [probability] is large, and away from places where [probability] is small.” ptbc

[13] June 2021

[14] Op. cit. p103

[15] Op. cit p112

[16] Bell observes that “The [De Broglie – Bohm] picture is … quite deterministic. The initial configuration of the wave-particle system completely fixes the subsequent development” (Six Possible Worlds of Quantum Mechanics)

[17] Rovelli, Helgoland p56

[18] Bell, Speakable and Unspeakable in Quantum Mechanics pp

[19] June 2021

[20] Einstein observed “God does not play dice”

[21] Published 1763. Reprinted in Richard Swinburne (ed.), Bayes’s Theorem, pp117-125, 2005

[22] Ref De Broglie

[23] Six Easy Pieces, p134

[24] Bell,

[25] Schrödinger,

[26] Bell,

[27] See Maudlin, Philosophy of Physics: Quantum Theory, Chapter 2

[28] Maudlin

[29] Maudlin Philosophy of Physics, p57

[30] Maudlin Philosophy of Physics, p57

[31] Maudlin, Philosophy of Physics, p68

[32] Op. cit. p68

[33] Op. cit. p69
