Recently there has been a lot of talk about Artificial Intelligence (AI) and the dangers it could pose to humankind. Last October, Tesla and SpaceX CEO Elon Musk commented that AI may pose the ‘biggest existential threat’ to humanity, adding that ‘with artificial intelligence we are summoning the demon.’ Those are strong words, and considering many people regard him as a technology and business genius (he proposed the now-in-design super-speed transport system Hyperloop), one cannot help wondering whether he ought to be taken seriously.
Of course, Asimov’s Three Laws of Robotics are supposed to protect us from the dystopian outcome presented in the Terminator movies, but would they? In the films, self-replicating robots that communicate via a vast network called Skynet have taken over, and a minority of humans must wage war against them. In theory, Asimov’s famous sci-fi laws would protect us from such an outcome:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
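The three laws form a strict priority ordering: each lower law applies only when the higher ones are satisfied. As a purely illustrative toy (not any real robotics system; the function name and boolean flags are invented for this sketch), that ordering could be encoded like this:

```python
# Toy sketch: Asimov's Three Laws as a strict priority check.
# A proposed action is vetoed by the highest-ranked law it violates;
# lower laws are consulted only if the higher ones are satisfied.

def evaluate_action(harms_human, allows_harm_by_inaction,
                    obeys_human_order, endangers_self):
    """Return whether a hypothetical robot may take an action."""
    # First Law: no injuring a human, and no harm through inaction.
    if harms_human or allows_harm_by_inaction:
        return "forbidden by First Law"
    # Second Law: obey human orders, unless obeying conflicts
    # with the First Law (already checked above).
    if not obeys_human_order:
        return "forbidden by Second Law"
    # Third Law: self-preservation, subordinate to the first two.
    if endangers_self:
        return "forbidden by Third Law"
    return "permitted"

print(evaluate_action(False, False, True, False))  # permitted
print(evaluate_action(True, False, True, False))   # forbidden by First Law
```

Note that the priority ordering is exactly what an attacker would target: flip the order of the checks, and self-preservation suddenly trumps human safety.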
What if, however, those laws did not work, or were hacked and interfered with by some kind of AI malware? Musk is not the only person who is worried. Some influential voices agree with the revered entrepreneur and have begun making unapologetic protests against the military-industrial complex’s AI arms race.
Apple co-founder Steve Wozniak, renowned physicist Stephen Hawking, and Google DeepMind executive Demis Hassabis are all among the high-ranking critics of AI. They, along with Musk and 1,000 other experts, have signed an open letter that states:
‘AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.’
It is pretty obvious that the impact of an AI arms race gone wrong would be devastating. In reality, though, wartime is not the only scenario in which robots could wreak irrevocable damage on society.
In Tokyo, robots in bikinis entertain crowds while serving a limited menu of drinks and food. Admittedly, for now it is a gimmick, but how long will it be before AI really can outdo humans and do a waiter’s job properly, at no further cost to an employer beyond upkeep and the electricity needed to run the robot? Troublingly, the evidence seems to be pointing in the direction of sooner rather than later.
Japanese electronics firm Hitachi Ltd has been making leaps and bounds with the design of its robotic employees. Last month it revealed a high-speed, two-armed robot that it believes can replace humans. Designed for simple warehouse packing and sorting jobs, the robot can collect items from storage and deliver them to other areas of the factory.
As bad as that may seem, the firm has now revealed even more worrying ambitions for its robot employees. According to a company spokeswoman, Hitachi has designed an AI program that allows robots to give human employees orders by analyzing ‘on site activity and demand fluctuations’. Yes, Hitachi has developed managerial robots, and according to its website they improve efficiency:
‘The development of artificial intelligence technology (henceforth, AI) which provides appropriate work orders based on an understanding of demand fluctuation and on-site activity derived from big data accumulated daily in corporate business systems, and its verification in logistics tasks by improving efficiency by 8%. By integrating the AI into business systems, it may become possible to realize efficient operations in a diverse range of areas through human and AI cooperation.’
While maximizing efficiency in wartime, or in manufacturing processes, might seem like a great idea in some ways, it is hard not to wonder about the effect that receiving orders from machines could have on people. How would you feel if robots were in charge of performance, and could issue warnings or punishments to their human subordinates?
What if people become completely unnecessary in warehouses and elsewhere? What would the loss of jobs mean for humanity really? What about armed drones? Would they also be completely independent when making critical choices like killing targets? These are all worrying questions.
‘The accumulated doubling of Moore’s Law, and the ample doubling still to come, gives us a world where supercomputer power becomes available to toys in just a few years, where ever-cheaper sensors enable inexpensive solutions to previously intractable problems, and where science fiction keeps becoming reality.’
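The ‘accumulated doubling’ the quote describes is simply exponential growth, and its speed is easy to underestimate. A quick back-of-the-envelope sketch (assuming the classic doubling period of roughly two years for transistor counts) shows how capability compounds:

```python
# Moore's Law style growth: capability multiplies by 2 every
# doubling period (classically ~2 years for transistor counts).

def growth_factor(num_doublings):
    """Total multiplication of capability after n doublings."""
    return 2 ** num_doublings

# Ten two-year doublings span about 20 years, yet capability
# has grown roughly a thousandfold in that time.
print(growth_factor(10))  # 1024
```

A thousandfold gain every two decades is exactly why supercomputer power keeps turning up in toys.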
The first season of this year’s hit British TV series ‘Humans’ depicts a world where robots can indeed do everything that we do, and better than us. Street cleaners, fruit pickers, care workers for the old and infirm – robots do everything; they even drive. Some have undergone what is referred to in popular science as ‘the singularity’: robots programmed with full, indiscernible consciousness. That same concept is also the premise of the recent movie Ex Machina, in which AI robots are tested using a version of the Turing test – a test of whether a machine’s responses are indistinguishable from a human’s. In the film they pass, and turn on their maker.
The race is firmly underway: companies are investing in robot managers, popular culture is awakening to a seemingly inevitable future, and experts are starting to quake in their boots. It would appear that the big question has decisively shifted from ‘Will robots take over the world?’ to the rather more nightmare-inducing ‘When will robots take over the world?’