AI developments and possibilities – and some cute puppies

We talk all the time about technological advances. Businesses are becoming fully digitised. We push for innovation and new technologies that will help humanity evolve and become better. Buzzwords like AI, automation and quantum technology are all trending on social media.

Businesses today need to have strategies for AI developments, and falling behind in the race for AI might have catastrophic consequences. There’s a lot of talk of AI, but what’s really being said? Where are we headed? Artificial intelligence has prompted many voices, both optimistic and cautionary, and the fact is that nobody really knows what might happen when computers think, learn and act on their own.

There are so many pros when it comes to AI but also a few cons, including the classic Hollywood portrayal – where machines turn deadly and decide to wipe us off the planet. And even if we forget about that minor concern, there is still the issue of governance and who gets to control these technologies, which carry the potential for both fantastic and devastating outcomes. The future with AI is already here whether we like it or not, so let’s hope it’s a good one.

I will be discussing several implications of AI in a series of blog posts. Let us start at the beginning. What does AI do and how does it really work?

80s evolution and awesomeness

Remember what it was like in the good old days? The sun was shining, birds were chirping away, and people were worrying about stuff like batteries running out on their Walkman. Kids were getting high on Dr Pepper, processed meals and Donkey Kong. Technology was something we definitely needed but still couldn’t quite understand or even get our hands on.

The 80s was a decade when science and technology made ground-breaking strides. In 1983, we saw the release of the revolutionary Apple Lisa, one of the first commercial computers to offer a graphical user interface.

The same year the Androbot Topo arrived – the closest thing around to that cute little robot Rosie from The Jetsons – to fulfil our dreams of having a pet robot to do our personal chores. It was controlled by a joystick and programmed through the Apple II interface. Although it failed to take the industry by storm, it has since played a significant role in the development of robot productivity hardware.

We are talking about a time when the internet – or rather, the World Wide Web – was still years away from reaching people’s homes. There was no internet as you know it, kids. That’s right. I never feel as old as I do when I’m asked what it was like before the internet arrived. And the fact is, I’m not even that old. How did you survive without the internet?! The millennials look confused when I mention that we had to use dictionaries. Encyclopaedias. Typewriters. And phonebooks – no “i” in front.

“Although AI is already in use in thousands of companies around the world, most big opportunities have not yet been tapped.”

The Rise of the Machines

The technological revolution that has occurred since the advent of the World Wide Web is astounding. And quite frankly, in my opinion, it’s also a little bit scary. But that’s probably just because, as previously pointed out, I am old. And apparently out of date. Because I was really not prepared for AI. Or, rather, for the way it has progressed so much further than I’d ever imagined in such a short time. We are suddenly in a whole new era – with the rise of the machines. Cue music.

Just a few weeks ago my world was basically turned upside down when I came in to work and heard about the robot Sophia, who had just been awarded citizenship in Saudi Arabia: the first of its kind in the world.

I mean, to me this is just mind-boggling, and that’s a word I don’t use much. Sophia looks nothing like the Androbot Topo. She has human features and can imitate facial expressions in order to “work with humans and build trust with people”, in her own words. She is said to think on her own, to be both self-aware and conscious (!) and to respond to questions spontaneously.

“I want to use my artificial intelligence to help humans live a better life,” she says in a live interview with New York Times columnist Andrew Ross Sorkin at the Future Investment Initiative in Saudi Arabia in late October. Sophia was made by Hanson Robotics, and her AI is designed around such human values as wisdom, kindness and compassion. The two go on to discuss whether AI will pose a threat to humanity, to which she simply replies: “Oh, Hollywood again?” Yeah, clever way to dodge that bullet, Sophia. It certainly sounds like something out of Hollywood if you ask me.


Robot Sophia in a live interview after being granted legal citizenship in Saudi Arabia, the first of its kind in the world.

From Sci-Fi to AI

Of course, talk of robots started long before the 80s. People have been talking about AI since the 1950s, and it has been the topic of countless films and sci-fi novels ever since. The term artificial intelligence was first coined in 1955 by John McCarthy, a maths professor at Dartmouth who organised a conference on the topic the following year. In 1957 the economist Herbert Simon predicted that computers would beat humans at chess within 10 years. Perhaps a tad optimistic, as it took 40 years, but still.

Today AI is all over the place, and a lot of us don’t even realise or contemplate what huge implications it will soon have for our daily lives, both in business and socially.

“AI will provide platforms for developing computers that can make medical diagnoses better than physicians, autonomous traffic systems, self-learning computers that outsmart human specialists, and robots that are capable of not only performing simple tasks but also solving complex problems,” says Oscar Stege Unger, Director at Wallenberg Foundations AB, a foundation that supports the development of AI and quantum technology in Sweden. “You might think that all of this lies far in the future, but the future is already here.”

In fact, the Knut and Alice Wallenberg Foundation has donated SEK 1.6 billion to artificial intelligence and quantum technology research in Sweden. The initiative will focus primarily on developing long-term competence in these areas by establishing large research schools and recruiting young researchers from around the world to Sweden.

“Sweden must remain competitive in AI research and education. The USA, Canada and the UK have made major investments in AI and we must follow their example. AI should be a strategic area for all universities in Sweden,” says Danica Kragic, Professor at the KTH Royal Institute of Technology in Stockholm.

Erik Brynjolfsson and Andrew McAfee have published an interesting report in the Harvard Business Review about the business of AI. According to them, the effects of AI will be magnified in the coming decade as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models in order to take advantage of machine learning.

So, AI is obviously something that every country and every business will want to be part of. But how does it work, exactly?

“Once AI-based systems surpass human performance at a given task, they are much likelier to spread quickly.”

Different types of AI technology

“The biggest advances so far have been in two broad areas: perception and cognition. In the former category, some of the most practical advances have been made in relation to speech. Voice recognition is still far from perfect, but millions of people are now using it — think Siri, Alexa, and Google Assistant,” say Brynjolfsson and McAfee.

Image recognition is another type of AI technology that has improved dramatically – think of when Facebook suddenly started suggesting who was in our photos and prompting us to tag them. Pretty cool stuff, and so far, harmless. Image recognition is increasingly replacing ID cards at corporate headquarters, just like in any CIA movie. There are apps that can identify any species of bird in the wild. Any ardent bird watcher will go cray-cray.

Vision systems, such as those used in self-driving cars, used to make a mistake when identifying a pedestrian as often as once per 30 frames (the cameras in these systems recorded about 30 frames a second); now they make such errors as rarely as once per 30 million frames.

The error rate for recognising images from a large database called ImageNet – with several million photographs of common, obscure or downright weird images – fell from greater than 30% in 2010 to about 4% in 2016 for the best systems.
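For a feel of what this looks like in practice, here’s a minimal sketch – assuming Python with the torch and torchvision libraries installed, and a hypothetical local photo called puppy.jpg – of asking a network pretrained on ImageNet what it sees:

```python
# A minimal sketch of modern image recognition: querying a ResNet-50
# pretrained on ImageNet. Assumes torch/torchvision are installed and
# that "puppy.jpg" (a hypothetical filename) exists locally.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Standard ImageNet preprocessing: resize, centre-crop, normalise.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
batch = preprocess(Image.open("puppy.jpg")).unsqueeze(0)  # add batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top_prob, top_class = probs.max(dim=0)
label = weights.meta["categories"][top_class.item()]  # human-readable class name
print(f"{label}: {top_prob.item():.1%}")
```

All the “intelligence” sits in the pretrained weights – the distilled result of training on ImageNet’s million-plus labelled photos.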

Puppy or Muffin? Progress in Image Recognition

The speed of improvement has accelerated rapidly in recent years with the adoption of a new approach based on very large or “deep” neural nets.

The machine learning (ML) approach for vision systems is still far from flawless – but weirdly, even people can have trouble distinguishing between a puppy and a muffin at first glance.

Puppy or muffin? (Image: Teenybiscuit)

Problem-solving computers to pose a problem?

The second type of major improvement in AI technology has been in cognition and problem-solving, as Brynjolfsson and McAfee explain:

“Machines have already beaten the best (human) players of poker and Go – achievements that experts had predicted would take at least another decade. Intelligent agents are being used by the cybersecurity company Deep Instinct to detect malware, and by PayPal to prevent money laundering. Google’s DeepMind team has used Machine Learning (ML) systems to improve the cooling efficiency at data centres by more than 15%, even after they were optimized by human experts.”

What’s more, AI prediction software is being used in several industries, including finance and IT. One early adopter is Nomura Securities, Japan’s largest financial services group. Every thousandth of a second, the company’s latest AI software measures hundreds of variables to decide how much a share price will fluctuate, and it suggests a plan of action every five minutes. “It arrives at analytical solutions that humans were not able to even contemplate up until recently. We can no longer do without AI in this line of business,” says Taishi Harada, Nomura’s AI Implementation Director.
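Nomura’s system is proprietary, so purely as a hedged illustration, here is a minimal sketch – assuming Python with NumPy and scikit-learn, and entirely synthetic price data – of the general shape of such prediction software: learn to predict the size of the next price move from a window of recent returns.

```python
# A toy sketch of learned price-move prediction -- NOT Nomura's system,
# whose details aren't public. Synthetic data, five lagged features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 0.5, 5_000))  # synthetic price series
returns = np.diff(prices) / prices[:-1]

# Features: the last 5 returns; target: magnitude of the next move.
window = 5
X = np.array([returns[i - window:i] for i in range(window, len(returns))])
y = np.abs(returns[window:])

# Train on the past, predict the next step; a live system would rerun
# this continuously as fresh market ticks arrive.
model = Ridge().fit(X[:-1], y[:-1])
print(f"Predicted size of next move: {model.predict(X[-1:])[0]:.4%}")
```

A production system would of course ingest hundreds of live variables; the point is only that the prediction rule is learned from data rather than hand-coded.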

According to Brynjolfsson and McAfee, not only are machine learning systems replacing older algorithms in many applications but they are now superior at many tasks that were once done best by humans. Reaching this threshold opens up vast new possibilities for transforming the workplace and the economy.

Deep learning

According to Brynjolfsson and McAfee’s report, the most important thing to understand about machine learning is that it represents a fundamentally different approach to creating software: the machine learns from examples rather than being explicitly programmed for a particular outcome.

The traditional approach, though, has a fundamental weakness: much of the knowledge humans have is tacit, meaning that we can’t fully explain it. It’s nearly impossible for us to write down instructions that would enable another person to learn how to ride a bike or to recognise a friend’s face – which is exactly why machines that learn from examples can succeed where explicitly programmed ones fail. The algorithms that have driven much of this success rely on an approach called deep learning, which uses neural networks.
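A toy illustration of the difference, assuming Python with scikit-learn installed: nobody can write down rules for what a handwritten “7” looks like, but a model can infer them from labelled examples.

```python
# Learning from examples instead of explicit rules: a small neural
# network taught to read handwritten digits it has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 greyscale images of digits, each labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No hand-written rules anywhere -- only labelled examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1_000, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.1%}")
```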

China and the USA have made considerable investments in deep learning technologies and are spearheading development efforts. A lot of deep learning research is being conducted by companies such as Google, IBM and Facebook.

JPMorgan Chase introduced a system for reviewing commercial loan agreements; work that used to take loan officers 360,000 hours a year now takes only a few seconds thanks to these new advances in AI. Supervised learning systems are also being used to diagnose skin cancer, and AI technology is expected to revolutionise the way doctors and nurses work in hospitals: computers can diagnose some conditions with greater accuracy than doctors, and new AI-assisted equipment can help perform complex surgeries that save human lives. Now that’s pretty cool stuff.

“Without any human intervention and within 24 hours, AlphaZero taught itself to play chess well enough to beat the best existing chess programs, by simply playing against itself.”

Man vs Machine

Deep learning can also be combined with reinforcement learning, in which a computer learns by itself – through trial and error on enormous amounts of data – and expands its own knowledge. A much-discussed example is the already-mentioned defeat of the human world champion of the Asian strategy game Go by an intelligent machine using self-learned, innovative techniques and tactics.

The Go-playing AI was developed by Google’s DeepMind initiative. Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, its recently released 2017 successor, AlphaGo Zero, did nothing of the sort: it started with nothing but a blank board and the rules of the game, and learned simply by playing millions of games against itself, using what it learned in each game to improve. And it’s not just Go. Without any human intervention and within 24 hours, the generalised version, AlphaZero, taught itself to play chess well enough to beat the best existing chess programs, simply by playing against itself.
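To make self-play concrete, here is a toy sketch in Python – tabular value learning on tic-tac-toe with a simple end-of-game update, nothing like AlphaGo Zero’s deep networks and tree search, but the same principle: start with only the rules and improve by playing against yourself.

```python
# Self-play in miniature: a tic-tac-toe agent that starts knowing only
# the rules and learns move values purely by playing itself.
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)        # (state, move) -> estimated value
ALPHA, EPSILON = 0.3, 0.1     # learning rate, exploration rate

def choose(board):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:                   # explore a random move
        return random.choice(moves)
    state = "".join(board)
    return max(moves, key=lambda m: Q[(state, m)])  # exploit what we know

def self_play_game():
    board, player, history = [" "] * 9, "X", []
    while True:
        move = choose(board)
        history.append(("".join(board), move, player))
        board[move] = player
        w = winner(board)
        if w or " " not in board:
            # Game over: nudge every move's value toward its outcome
            # (+1 for the winner's moves, -1 for the loser's, 0 for draws).
            for state, m, p in history:
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):       # AlphaGo Zero played millions of games
    self_play_game()
print(f"Learned values for {len(Q):,} state-action pairs from self-play alone.")
```

Scale the same loop up with deep neural networks and tree search, and you arrive at the AlphaZero family.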


In May 2017, the previous version of AlphaGo defeated the world champion, 19-year-old Ke Jie of China, for the third time (AFP/Getty Images).

“It doesn’t play like a human, and it doesn’t play like a program,” said Demis Hassabis, founder and CEO of DeepMind, at the Neural Information Processing Systems (NIPS) conference in California this week. “It plays in a third, almost alien, way.” And now the tables have turned: it’s the humans copycatting the AI, with real-life players testing strategies they’ve seen the robots play – even if they don’t comprehend the machine’s grand strategy. It is pretty much changing the game for how we look at learning and problem-solving.

And it is exactly at this point that it starts to move into scary territory. Because what happens next when machines become not only more intelligent than we are, but also start to learn by themselves in ways that we can’t even comprehend, and then decide to act on it? Shouldn’t that be sounding some alarm bells?

The Harvard Business Review article notes that, like so many other new technologies, AI has generated lots of unrealistic expectations: “The fallacy that a computer’s narrow understanding implies broader understanding is perhaps the biggest source of confusion, and exaggerated claims, about AI’s progress. We are far from machines that exhibit general intelligence across diverse domains,” argue Brynjolfsson and McAfee.

Others say that AI will, in the next 25 years, have evolved to the point where it will know more on an intellectual level than any human. In the next 50 or 100 years, an AI might know more than the planet’s entire population put together.

“In the next 50 or 100 years, an AI might know more than the planet’s entire population put together.”

Skynet Dystopia

People such as famed physicist Stephen Hawking and Tesla‘s Elon Musk have issued dark warnings of a world where computers become so sophisticated so quickly that humanity loses control of them – and of its own destiny as a result. Many AI researchers and developers share these concerns, because the ugly side of AI is not only what machines might decide to do if they should start regarding humans as a threat. Another major discussion about AI concerns government regulations and ethics, and who gets to design and control such technologies. Because what happens if (or when) we start programming AIs to kill humans?

But more on that in a later post. There’s no telling how far machine learning will go in the next few decades, and in large part it will depend on how far we let it go. But whether it ends in a Skynet dystopia or not, we will have to see.

The possibilities for businesses using AI technology are endless and can certainly lead to higher profits and a much more efficient way of working. Even to saving lives.

As Brynjolfsson and McAfee see it, “The status quo of dividing up work between minds and machines is falling apart very quickly. Companies that stick with it are going to find themselves at an ever-greater competitive disadvantage compared with rivals who are willing and able to put Machine Learning to use in all the places where it is appropriate and who can figure out how to effectively integrate its capabilities with humanity’s.”

This also means that businesses are racing headlong to develop or buy the best AI technology available, and countries like the US, China and Russia are deep into an AI arms race that could make the nuclear one look tame.

The next two articles will discuss the implications of AI even further: specifically, AI’s expected effect on employment and how we can build a secure, successful partnership with the machines, as well as a wander into the darker areas of AI and the debate over safe, ethical and strict political regulation to prevent not only the machines themselves but also their creators from getting out of hand. The AIs are awakening. Are you ready?