Artificial intelligence is driven by algorithms. An article by Andrew Smith in the Guardian points out that there is nothing mysterious about these. An algorithm is the basic process by which a computer makes simple binary decisions. If a happens then do b, else do c.
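The elementary step described above can be sketched in a few lines of code. This is purely illustrative (the names `branch`, `a`, `b`, `c` are taken from the article's wording, not from any real system): every step of a conventional program ultimately reduces to a binary test followed by one of two paths.

```python
# A minimal sketch of the "if a happens then do b, else do c" idea:
# one elementary algorithmic step is a test plus a choice of path.
def branch(a: bool) -> str:
    """Test a condition and pick one of two paths."""
    if a:
        return "b"   # the 'then' path
    else:
        return "c"   # the 'else' path

print(branch(True))   # -> b
print(branch(False))  # -> c
```

Everything a conventional computer does is built from enormous numbers of steps like this, executed extremely quickly.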
Computers can execute this process at unimaginable speed, which has led us to talk about them in terms of possessing intelligence.
But there is a fundamental and dangerous error in doing so. This is not intelligence; it is just a computer process happening at speed. As Smith observes, “On the micro level, nothing could be simpler. If computers appear to be performing magic, it’s because they are fast, not intelligent”. Computers are not thinking, in the human sense of the word, because, for the most part, human thought is not sequential in a computer’s ‘if then else’ way. This is not because we are irrational, or because we have some woolly capacity to respond to things emotionally, though it is probably true that we do. It is because, as George Dyson observes to Smith, if you are going to understand the world as a sequential entity, you have to have a complete model of everything. As Toby Walsh, also quoted in Smith’s article, points out, this is evidenced in the most basic ways: “No one knows how to write a piece of code to recognize [sic] a stop sign”.

In order to survive, humans, along with almost every other living organism, have had to evolve a completely different and more practical way of thinking. The human brain does not contain a complete model of everything, but it does contain a risk model, one designed to cover the gap between what is known and what isn’t. This produces something far quicker than a series of rational calculations. It produces emotion, especially the one we call fear. We habitually step back and consider that everything that can happen does happen, and we do it without thinking. We use emotion to evaluate what we think we are about to do, based on risk and probability. This is a highly sophisticated task: it means making a fluid and instant assessment of two infinite variables, timing and likelihood of occurrence – a task which would blow up a computer. Why?
Well, first, because both variables are infinite; second, because a computer has no means of assessing likelihood unless it can compare with other outcomes; and third, because multiple outcomes have to be considered simultaneously.
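The scale problem behind these three points can be sketched with a toy count (the numbers are invented for illustration): a sequential machine that must enumerate every combination of outcomes one branch at a time faces a combinatorial explosion, even for modest scenarios.

```python
# Purely illustrative: counting the branches a sequential
# 'if then else' enumeration of outcomes must visit, one by one.
from itertools import product

def count_outcome_sequences(events: int, outcomes_per_event: int) -> int:
    """Count every combination of outcomes across a series of events."""
    return sum(1 for _ in product(range(outcomes_per_event), repeat=events))

# Ten events with three possible outcomes each already branch enormously:
print(count_outcome_sequences(10, 3))  # -> 59049 (3 ** 10)
```

The count grows exponentially with the number of events, and in the real world both the events and their possible outcomes are open-ended, which is the article’s point.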
This radical deficiency in AI leads to events like the one Smith begins his article with, the killing of a cyclist by an AI-driven vehicle. He also considers the Google DeepMind arcade game player, which uses reinforcement learning to work out how to gain points. As he observes, “the machine has no context for what it’s doing and can’t do anything else. Neither, crucially, can it transfer knowledge from one game to the next (so-called “transfer learning”), which makes it less generally intelligent than a toddler, or even a cuttlefish. We might as well call an oil derrick or an aphid “intelligent”.”
He also cites the world financial markets as an illustration of how unstable and unpredictable algorithm-driven calculations are. He writes, “A “flash crash” had occurred in 2010, during which the market went into freefall for five traumatic minutes, then righted itself over another five – for no apparent reason. I travelled to Chicago to see a man named Eric Hunsader, whose prodigious programming skills allowed him to see market data in far more detail than regulators, and he showed me that by 2014, “mini flash crashes” were happening every week.”
Humans can be beaten pretty easily by machines at defined linear tasks, like chess, or playing an arcade game, or the stock market, or any kind of gambling device. This is because humans mostly don’t even try to work things out in terms of sequential logic. Instead, we might say that they have a feel for what might or might not happen. This is dangerous when gambling but essential when avoiding tigers or cyclists. Smith calls this human resource “a broad accumulation of experience and knowledge”, but that doesn’t really rule out the sort of knowledge a computer could acquire. If we want to be more scientific, we can say that humans, along with most other evolved organisms, are brilliant at making multiple intuitive – or subconscious – calculations of probability simultaneously, and feeling them.
It seems bizarre, on the face of it, to state that a computer cannot calculate probability. Of course computers can do this. But in order to assess risk, humans weigh up and consider multiple probabilities, by degree, and when broken down into ‘if then else’ sequences this task is almost impossible. Any given set of probabilities is simultaneous: probability at any one moment has to be considered in infinite variations, and however fast that calculation is made, at the next instant it may be different. Human brains are designed to cope with this, whereas ‘if then else’ processing works in the opposite way, and will never get close to human risk assessment.
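One way to picture the difference is a sketch in which several hazards are weighed at once, each by probability and severity, rather than tested one condition at a time. Every name and number here is invented for illustration; nothing is taken from a real driving system.

```python
# Hedged illustration: weighing several simultaneous hazards at once
# (probability * severity for each), instead of testing one condition
# at a time. All hazard names and values are invented.
def expected_risk(hazards: dict) -> float:
    """Sum probability * severity over all hazards considered together."""
    return sum(p * severity for p, severity in hazards.values())

hazards = {
    "oncoming_car":  (0.02, 10.0),  # (probability, severity)
    "slippery_road": (0.10, 3.0),
    "cyclist_ahead": (0.05, 8.0),
}
print(round(expected_risk(hazards), 2))  # -> 0.9
```

Even this toy version hides the real difficulty the article describes: the probabilities themselves shift from instant to instant, and the set of hazards is open-ended, so the weighing has to be redone continuously rather than computed once.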
It is because of this fundamental structural shortcoming in conventional computer technology that computer scientists have been attempting to build a quantum computer. Quantum mechanics is founded on the consideration of events as probabilities, working more like the human brain does. A standard computer chip contains simple binary switches making if… then… else choices; a quantum chip would instead need to make decisions based on simultaneous calculations of probability, in turn based on a model of previous outcomes. Quantum computing would render the entire architecture of even the fastest current computer completely redundant, and a number of laboratories have started the quest to build one. So far scientists have built a large random decision potato which is generously considered to be a 32-qubit device. The chips are still in the kitchen. Quantum computing is not impossible, but it appears to be a very long way off.
Where we are meanwhile is a dangerous place, where we have begun to depend on a kind of intelligence that is not intelligent at all, but radically limited and restricted in its capability. Cyclists will continue to be killed. But more seriously, if we allow it to develop beyond our control, it will dominate all areas of our lives. We will increasingly be asked to inhabit an inhuman world.
Article by Andrew Smith in the Guardian 30 Aug 2018