AI and the Killer Robot Theory – Myth or Mayhem?

“Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.” – Max Tegmark, President of the Future of Life Institute

In my previous blog post, I discussed developments in artificial intelligence and how the technology actually works. In this article, I will dive deeper into the darker areas of AI. More specifically, what are the risks involved in developing such technologies, and how should we contain them?

It is human nature to evolve and push forward, but some inventions cannot be un-invented – the nuclear bomb, for example. Many AI researchers and developers have raised this concern, arguing that we need to slow down long enough to consider the potential dangers of these technologies: to establish the rules of the game first, and to make sure that strict regulations for ethical and political governance are in place worldwide before we set off for the horizon.

Narrow vs Strong AI

We can probably all agree that AI technology will have major benefits both for society and the workplace. As mentioned in my previous post, AI shows great potential across all industries to help us increase profits, come up with smarter solutions, solve problems that humans cannot even comprehend and even save lives. The future will undoubtedly be more autonomous, and close cooperation between man and machine seems like the only option. It is therefore urgent to ensure that this cooperation remains beneficial and doesn’t backfire. “Oh honey, the robot killed the kids” is surely an outcome we’d like to avoid.

The goal of keeping AI’s impact on society beneficial is what drives research in many areas, from economics and law to technical topics such as verification, validity, security and control. Although it may be little more than a minor nuisance if your laptop crashes or gets hacked, it matters far more whether an AI system does what you want it to do when it is controlling your car, your airplane, your pacemaker, your automated trading system or your power grid.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a single narrow task – facial recognition, for instance, or internet searches, or driving a car. However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Designing Superintelligence

As I. J. Good pointed out in 1965, designing smarter AI systems is in itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that would leave the human intellect far behind. And what happens then? There is no way of knowing what would happen if AI reached this point, which is a bit worrying, of course. Because if AI did outsmart us, what’s to say we would remain in control?
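
To get a feel for why an intelligence explosion would look so different from ordinary progress, here is a deliberately crude toy model in Python – my own illustration, with arbitrary numbers, not a formal rendering of Good’s argument – comparing steady, human-driven improvement with improvement that compounds on itself.

```python
# Deliberately crude toy model (arbitrary numbers): steady, human-driven
# improvement versus improvement that scales with the system's own capability.

human_designed = [1.0]   # capability rises by a fixed amount each generation
self_improving = [1.0]   # capability rises in proportion to itself

for _ in range(20):
    human_designed.append(human_designed[-1] + 0.5)
    self_improving.append(self_improving[-1] * 1.5)   # the system improves its successor

print(round(human_designed[-1], 1))   # 11.0   -- steady, linear progress
print(round(self_improving[-1], 1))   # 3325.3 -- runaway, compounding progress
```

The numbers themselves mean nothing; the point is simply that self-referential improvement compounds, which is exactly why this scenario is so hard to reason about in advance.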

It is exactly this thought – our fear of the unknown – that has sparked the current debate about AI risks. It would seem like a good idea to plan ahead so that there is at least some kind of “oops” button that allows us to cool things down if they get too heated. By inventing revolutionary new technologies, a superintelligence might help us eradicate war, disease and poverty, so the creation of strong AI could be the greatest event in human history. Some experts have expressed concern, though, that it might also be the last – unless we learn to align the goals of AI with our own before it becomes super-intelligent.

“As AI continues reaching milestones that experts thought were decades away, many have started to take seriously the possibility of superintelligence in our lifetime.”

Whether we will ever get to the point where such super-intelligent machines exist is still speculation. However, as AI continues reaching milestones that experts thought were decades away, many have started to take seriously the possibility of superintelligence in our lifetime – perhaps by the year 2060, according to the advocacy organisation Future of Life Institute (FLI), which works on questions of AI and the impact it will have on society. The organisation was founded by AI specialists, including professors and scientists from MIT and Harvard, co-founders of Skype and Google DeepMind researchers. Prominent scientists and entrepreneurs, including Stephen Hawking and Elon Musk, sit on its scientific advisory board, as do public figures and actors such as Morgan Freeman and Alan Alda, to name just a few.

Research is Key

FLI call for more research on AI safety. “There are some who question whether strong AI will ever be achieved, and others who insist that the creation of super-intelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.”

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates and many other big names in science and technology have recently expressed major concern in the media and through open letters about the risks posed by AI. They have been joined by many leading AI researchers. Elon Musk says that “humanity risks summoning the demon with AI” and calls for better regulation of who gets to control AI technologies.

“I have exposure to the most cutting-edge AI and I think people should be really concerned about it,” Musk has said in an interview. “I keep sounding the alarm bell, but until people see robots going down the street and killing people, they don’t know how to react because it seems so ethereal.”

Professor Stephen Hawking has declared AI to be the most serious threat to the survival of the human race. Hawking and Musk agree that while AI technology can be harnessed to help humans in areas such as medicine, business and care for the elderly, AI’s ability to self-learn and act on that new knowledge is a real and present threat that needs immediate and tough governmental regulatory oversight – something that is not happening now.

No Real Concern?

But while many have expressed their concerns, other experts dismiss the idea that the technology poses any real risk – among them Facebook founder and CEO Mark Zuckerberg.

“I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it,” Zuckerberg said during a live Facebook broadcast. “It’s really negative and in some ways, I actually think it is pretty irresponsible.”

Two Bad What-ifs

Most researchers agree that a super-intelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts consider two scenarios (both listed on the FLI website) most likely:

1. An AI is programmed to do something devastating

Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. These weapons would be designed to be extremely difficult to simply “turn off”, so humans could plausibly lose control of such a situation. This risk is one that’s present even within narrow AI as we know it today, but it grows as levels of AI intelligence and autonomy increase.

2. An AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal

This can happen whenever we fail to fully align an AI’s goals with our own, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there, but covered in vomit and with helicopters in pursuit, by doing not what you wanted but literally what you asked for. If a super-intelligent system were tasked with an ambitious geoengineering project, it might view human attempts to stop it as a threat to be met, while wreaking havoc on our ecosystem as a side effect.
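
A tiny sketch can make this alignment problem concrete. The Python snippet below is my own invented example – the candidate plans, their attributes and both objective functions are made up for illustration and are not taken from the FLI material – showing a “planner” that optimises only the objective it is literally given and therefore picks a plan nobody actually wanted.

```python
# Toy illustration (invented example): a planner that optimises only the
# objective it is literally given. The candidate plans and their attributes
# are made up for the sake of the sketch.

candidate_plans = [
    {"name": "normal route",        "minutes": 35, "legal": True,  "passenger_ok": True},
    {"name": "aggressive shortcut", "minutes": 22, "legal": False, "passenger_ok": False},
]

def literal_objective(plan):
    """What we asked for: get to the airport as fast as possible."""
    return -plan["minutes"]          # higher is better, so negate travel time

def intended_objective(plan):
    """What we actually wanted: fast, but also legal and bearable."""
    if not (plan["legal"] and plan["passenger_ok"]):
        return float("-inf")         # the implicit constraints we never wrote down
    return -plan["minutes"]

print(max(candidate_plans, key=literal_objective)["name"])   # -> aggressive shortcut
print(max(candidate_plans, key=intended_objective)["name"])  # -> normal route
```

The implicit constraints in intended_objective are exactly the things we tend to forget to write down, and that gap between what we ask for and what we mean is the whole problem.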

Risky Intelligence

In their Harvard Business Review report, Brynjolfsson and McAfee point out that the machine-learning systems already in existence often present risks such as low “interpretability”, meaning that it is difficult for humans to figure out how the systems reached their decisions.

Another risk they point out is that machines may have hidden biases, derived not from any intent of the designer but from the data used to train the system. A further risk is that, in contrast with traditional systems built on explicit logical rules and dealing with literal truths, neural-network systems deal in statistical truths. And when a machine-learning system does make errors, as it almost inevitably will, diagnosing and correcting exactly what has gone wrong can be difficult.
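
The “hidden bias” point is easy to see in miniature. Below is a deliberately simple Python sketch – the groups, applicants and historical decisions are all invented for illustration and are not drawn from the report – in which a naive model simply learns approval rates from past decisions and, in doing so, quietly reproduces the skew baked into that history.

```python
# Toy illustration (invented data): a "model" that learns approval rates from
# historical decisions. Any bias in those past decisions is reproduced, even
# though nothing in the code treats the group attribute as a goal.

from collections import defaultdict

# (group, qualified, approved) -- past decisions with a built-in skew:
# equally qualified applicants from group "B" were approved less often.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", False, False),
]

past_outcomes = defaultdict(list)
for group, qualified, approved in history:
    past_outcomes[(group, qualified)].append(approved)

def predict(group, qualified):
    """Approve if most similar past cases were approved."""
    outcomes = past_outcomes[(group, qualified)]
    return sum(outcomes) / len(outcomes) >= 0.5

print(predict("A", True))   # True
print(predict("B", True))   # False -- the historical skew has become a learned rule
```

Nothing in the code sets out to discriminate; the bias arrives entirely through the training data, which is precisely why it can be so hard to spot and, as the authors note, so hard to diagnose after the fact.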

As these examples illustrate, the concern about advanced AI relates not to malevolence but to competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. And if super-intelligent AI starts to do things that we don’t understand and don’t really want it to do, that could mean a very bad day at the office.

“We don’t want to accept arbitrary decisions by entities, people or AIs that we don’t understand,” says Uber AI researcher Jason Yosinski, co-organiser of the Interpretable AI workshop. “In order for machine-learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.”

The Future of Life Institute illustrates the most common myths and misconceptions about AI.

Controlling the Machines

The question of who gets to control the development and programming of AI is clearly important. Who makes sure that it doesn’t fall into the wrong hands? How can we best avoid this powerful technology being used against us? And what happens if we start programming machines to kill humans?

“Building up a new breed of military equipment using artificial intelligence is one thing—deciding what uses of this new power are acceptable is another.”

“Some technologies are so powerful as to be irresistible,” says Greg Allen, a fellow at the Center for New American Security and co-author of a new report on the effect of artificial intelligence on national security produced by Harvard’s Belfer Center for Science and International Affairs. It lays out why technologies such as drones with bird-like agility, robot hackers and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.

New technologies like these can be expected to bring with them a series of excruciating moral, political and diplomatic choices for America and other nations. Building up a new breed of military equipment using artificial intelligence is one thing—deciding what uses of this new power are acceptable is another. The report recommends that the US start considering what uses of AI in war should be restricted using international treaties.

One video trending right now features the humanoid robot Atlas, developed by Boston Dynamics with funding from the US Department of Defense’s research agency DARPA. Atlas is designed to take on some of the most dangerous and high-stakes jobs imaginable, such as search and rescue. But as with all military-funded AI, wouldn’t that also mean it could be used to kill? To search and destroy?

AI and Future Warfare

If we are talking war, then AI can no doubt revolutionise the playing field as much as nuclear missiles have. And right now, there is a serious arms race for AI. The idea of Donald Trump or Kim Jong-Un controlling something even more powerful than a nuclear missile is, to put it mildly, a little concerning, if you ask me. As time goes on, improvements in AI and related technology may also shake up the balance of power internationally by making it easier for smaller nations and organisations to threaten big powers like the US.

The Harvard report warns that commoditisation of technologies such as drone delivery and autonomous passenger vehicles could become powerful tools of asymmetric warfare. ISIS has already started using consumer quadcopters to drop grenades on opposing forces.

The thought of nations building superhuman armies of robots that decide for themselves whether to kill humans is no joke, and it should be more than a little worrying. And, as mentioned earlier, some of the people who helped build machine learning and artificial intelligence certainly are worried.

In fact, more than 3,000 researchers, scientists and executives from companies including Microsoft and Google signed a 2015 letter to the Obama administration asking for a ban on autonomous weapons. “I think most people would be very uncomfortable with the idea that you would launch a fully autonomous system that would decide when and if to kill someone,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence and a signatory to the 2015 letter.

“There’s reason to think that AI diplomacy can’t be effective unless countries can avoid the trap of thinking that the technology is a race in which there will be only one winner.”

AI Diplomacy

In 2012, the Department of Defense set a temporary policy requiring a human to be involved in decisions to use lethal force; it was made permanent in May 2017. But, as is still the case in most areas of AI technology, development is moving much faster than any global regulation or ethical framework – which is another reason why so many are calling for it to slow down.

The Harvard report recommends that the National Security Council, the Department of Defense and the State Department start studying now what internationally agreed limits ought to be imposed on AI. Miles Brundage, who researches the impact of AI on society at the University of Oxford, says there’s reason to think that AI diplomacy can’t be effective unless countries can avoid the trap of thinking that the technology is a race in which there will be only one winner. “One concern is that if we put such a high premium on being first, then things like safety and ethics will go by the wayside,” he says. “We saw in the various historical arms races that collaboration and dialogue can pay dividends.”

As Musk has warned, people are far more likely to be killed by artificial intelligence than by a nuclear war with North Korea. He wants the companies working on AI to slow down to ensure they don’t unintentionally build something unsafe. And it seems that this would most definitely also apply to national security.

So…Now What?

The discussion of AI often brings to mind the familiar lines from the 1984 blockbuster The Terminator when Michael Biehn’s character Kyle Reese explains how the apocalypse started. “Defense network computers. New… powerful… hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat. Decided our fate in a microsecond.” Sound like an early definition of AI? Umm, maybe a little!

I guess, for now, speculation is still rife as to what will happen in the future. There are definitely risks involved, and it seems that the key to a happy and beneficial future lies in more research into safety procedures, serious governance and planning ahead in order to stay in control. Let us try to stay at the top of the food chain and things will probably be great. One thing is for sure: AI continues to prompt discussion and disagreement, and it inevitably raises many questions that are not so easily answered.

What sort of future do we want? Should we develop lethal autonomous weapons? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us or merge with us? What will it mean to be human in the age of artificial intelligence?

Pretty heavy stuff to go with your morning coffee. In my next post, I will discuss further what AI might mean for employment. Maybe it doesn’t have to be so bleak, after all.

The robot Atlas is one of the latest developments funded by the US Department of Defense.

Is the future of warfare really headed in this direction? This video was issued as a warning by the organisation Autonomousweapons.org as part of their campaign to stop autonomous weapons.