It is often assumed that AI, quantum physics and quantum computing are in some way linked. They are not.
It is true that quantum science has informed the development of better conventional chips and conductors, but this is like saying that it has helped to develop better internal combustion engines (which is also true). Quantum science is, of course, the science of neither chips nor engines.
In the simplest terms, quantum science is the science of the smallest things. These are often called particles, although there is no such ‘thing’ as a particle. When you think about it, this is obvious: to define the smallest thing would be to define a first cause, which is not possible. So quantum physics works with what are literally probable things, and it calculates probabilities.
It is often assumed that this is the way computers think, and that this is the connection between AI and quantum physics. Nothing could be further from the truth. Standard computers do not calculate probabilities, though they do use probability in a crude way. If there are a number of possible responses to a question, they might select the most readily available answer, the commonest answer, or the answer which appears closest to the question asked. None of these represents a true consideration of probability, because classical computers do not really engage with probability at all. They perform ‘if not, then’ linear path calculations at lightning speed. As a recent Apple paper pointed out, they cannot make comparative decisions or assess multiple scenarios simultaneously. However fast, classical computer calculations are sequential.
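To make that sequential shape concrete, here is a deliberately toy Python sketch. The candidate fields (‘cached’, ‘frequency’) and the three heuristics are illustrative assumptions, not any real system’s API; the point is only that each strategy is tried one after another, never weighed together.

```python
# Toy illustration: a classical program choosing among candidate answers
# by trying fixed strategies one at a time, in a fixed order.

def answer(question: str, candidates: list[dict]) -> str:
    # Strategy 1: the most readily available answer (a 'cache hit').
    for c in candidates:
        if c.get("cached"):
            return c["text"]

    # Strategy 2: the commonest answer (highest recorded frequency).
    by_frequency = max(candidates, key=lambda c: c.get("frequency", 0))
    if by_frequency.get("frequency", 0) > 0:
        return by_frequency["text"]

    # Strategy 3: the answer whose wording overlaps most with the question.
    def overlap(c: dict) -> int:
        return len(set(question.split()) & set(c["text"].split()))

    return max(candidates, key=overlap)["text"]


candidates = [
    {"text": "the sky is blue", "frequency": 3},
    {"text": "blue light scatters more", "frequency": 1},
]
print(answer("why is the sky blue", candidates))  # -> "the sky is blue"
```

At no point does this program hold the three strategies, or the candidates within one strategy, in play at once; each branch either fires or passes control to the next.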
This is the opposite of the way quantum calculations work. Quantum science uses equations which consider multiple possibilities simultaneously. It is forced to do this, because the objects upon which it builds – particles – are not objects at all.
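For readers who want to see the formalism, the standard textbook notation makes the simultaneity explicit: a single quantum state carries an amplitude for every basis possibility at once, and all the probabilities derive from that one object.

```latex
% A superposed state: every basis possibility |i> is present at once,
% weighted by a complex amplitude c_i. Probabilities follow from the
% amplitudes, and must sum to one.
\[
  \lvert \psi \rangle = \sum_{i} c_{i}\,\lvert i \rangle ,
  \qquad
  P(i) = \lvert c_{i} \rvert^{2} ,
  \qquad
  \sum_{i} \lvert c_{i} \rvert^{2} = 1 .
\]
```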
You can perhaps see why there is great interest in the building of a quantum computer. To have a machine capable of simultaneous consideration of probability would be a vast step forward.
However, there is a problem with this project. The problem is that in order to perform good calculations a quantum computer would have to be able to consider all relevant probabilities relating to any defined event. This is conceivable if what is being calculated is the possible position of an invisible particle. However, it is less imaginable if that relatively simple task is exponentially magnified to the scale implied by our concept of ‘computer’. We have a linguistic issue similar to the one encountered in our understanding of ‘particle’: the classical concept is not related to the quantum concept. The scale of the environmental and experiential probabilities at the macroscopic level is uncomputable in classical terms.
However, this difficulty is also where it gets interesting. Conceptually, the problem is concealed by the use of classical language. Quantum scientists often appear proud of this obscurity, resorting to the argument that quantum physics ‘just works’. It does. But at many of the frontiers of quantum knowledge, investigation is conceptually directed, and there the problem is fundamentally linguistic.
Consider what I have said about the word ‘particle’. This is a label from classical physics which has no clearly defined meaning when used as a label in quantum physics. A quantum particle is not a classical object but a probability, or, to be more accurate, a quantity of probability. This has absolutely no relationship to the concept of an object. There is a dislocation. In fact, quantum physicists describe the process by which the quantum world becomes classically observable as “decoherence” from a quantum state. We make this inadmissible leap whenever we apply apparently classical labels to quantum physics.
This problem, of course, applies to the words used to describe current research into quantum computing, which as a concept appears to depend on the decoherence of quantum concepts to classical effect. Wikipedia summarises the broad approach as follows: “A quantum computer is a (real or theoretical) computer that exploits superposed and entangled states, and the intrinsically non-deterministic outcomes of quantum mechanics, as features of its computation.” The concept is usually traced to Richard Feynman’s suggestion that quantum systems are best simulated by machines that are themselves quantum. The entanglement example is the famous one: if there are two ‘entangled’ particles, measuring the ‘spin state’ of one instantly fixes the spin state of the other, wherever it might be. The correlation is famously said to be established faster than light could travel, breaking all classical restrictions (although, strictly, no usable information is transmitted). ‘Superposed’ here means that the possible states co-exist simultaneously. It is almost as if we are promised a kind of magical interaction.
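The standard worked example of such a pair is a Bell state, in which neither ‘particle’ possesses a definite spin of its own, yet the two measurement results always agree:

```latex
% A Bell state: one superposed state shared by two 'particles'.
% Each side, measured alone, gives 0 or 1 with probability 1/2;
% measured together, the two results are perfectly correlated.
\[
  \lvert \Phi^{+} \rangle
  = \frac{1}{\sqrt{2}}\bigl( \lvert 00 \rangle + \lvert 11 \rangle \bigr)
\]
```

Note that the state belongs to the pair, not to either particle separately, which is exactly where the classical word ‘object’ starts to fail.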
However, you can see that if a ‘particle’ does not exist as an object in the classical way, this whole approach is built on a foundation of sand. The problem is not that the maths behind quantum computing is invalid, but that the words used do not refer to objects, or to states of objects, that we can understand or conceptualise. At best, they refer to possibilities of such states, and those possibilities admit many interpretations.
Quantum physics can therefore often appear to use classical language in a highly misleading way. In quantum science, probabilities are all that exist.
Just acknowledging this, though, opens the way to a different set of linguistic connections. Quantum calculations of this kind, which evaluate whole environments and draw on experience simultaneously, are indeed performed, if inexactly, by the brains of sentient beings. They are often communicated to the organism instinctively or genetically, as reactions, intuitions and feelings, as well as in more conscious thought. These animal computers, which have been working for millennia, are perhaps a more sensible starting place for investigations into quantum computing.
Are probabilities therefore what “exist”?
What is the underlying reality of a measurement outcome? That is the million-dollar question, and there are two conventional views which cover all current approaches and answers to it.
An Instrumentalist view holds that only measurement outcomes (and the probabilities thereof) exist. The rest is scaffolding for prediction. This is the ‘because it works’ argument; philosophers often describe it as pragmatist or utilitarian.
A Realist view would say that there is some underlying reality (fields, wavefunction, hidden variables, etc.) that gives rise to probabilities, even if we can’t observe it directly. This covers almost all other approaches attempting to offer a non-utilitarian view.
As an aside, we find this division wherever philosophers have attempted to describe reality, so perhaps it is no surprise we find it in the attempt to interpret physics.
But there is an alternative to these two points of view.
There may be no need to be either an instrumentalist or a realist. All that is required is a simple change to the defining statement, one which amends both positions. Neither measurement outcomes nor underlying realities exist. Only the probabilities of measurement outcomes exist. In other words, measurement outcomes do not have probabilities – they are probabilities. What is more, perhaps we do not need an objective language to describe them. We get along quite well without one because human brains feel them.
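One way to put that formally is via the Born rule, the one fragment of the formalism that every interpretation accepts: all the theory assigns to a measurement outcome is a probability, computed from the state.

```latex
% The Born rule: the entire content the formalism attaches to an
% outcome a, given a state psi, is a probability.
\[
  P(a) = \lvert \langle a \mid \psi \rangle \rvert^{2}
\]
```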
If this is the case, we might quite plausibly say that there is an intimate connection between the scientific knowledge that constitutes a quantum understanding of reality and the kinds of knowledge and perception that inform human invention, creativity, art and the development of ideas. And if so, quantum physics is not the physics behind AI but the physics behind human intelligence. It is a thought which needs expansion and development, but it is one with a clear connection to a scientific and mathematically demonstrable understanding of reality.