Artificial intelligence, machine learning and neural networks have had plenty of exposure in recent years, but, as we’ve witnessed, their impact on our everyday lives, along with the exponential growth of the data that feeds them, is only beginning.
We may read about IBM’s cognitive computing system Watson taking on the healthcare system, or about Google’s humanoid robot taking its first steps outdoors, yet still struggle to understand what these advances mean for us. That’s why it’s important to read up on the bigger issues in AI and machine learning:
Since the 1940s, visionaries in the field of artificial intelligence have been proclaiming that machines are just a few years away from being able to think like people. In “Superintelligence,” an intriguing look at the past, present and above all the future of AI, Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute, starts off by mocking the futurists.
“Two decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred,” he explains. Not coincidentally, he adds, 20 years may be the typical remaining duration of a forecaster’s career, limiting “the reputational risk of a bold decision.”
Bostrom’s book is based on the premise that AI research will sooner or later produce a computer with a general intelligence (rather than a special capability such as winning games) that is a real match for the human brain. We can separate approaches to AI into two overlapping classes. The first, based on neurobiology, tries to understand and replicate the workings of the human brain. The second, based on computer science, uses the inorganic architecture of electronics and software to produce intelligence, without worrying about how people think. Bostrom offers no judgment on which approach is more likely to succeed.
Real AI, however, is still decades away. According to recent surveys, about half of the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, and 90 percent say it will arrive by 2075. Bostrom is more cautious on timing, but believes that human-level AI is likely to lead to a far higher level of “superintelligence” faster than most experts expect. How this will affect humanity is unclear.
This book is most interesting and original when discussing the emergence of superintelligence. The visionary scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain’s, Bostrom argues. At that point, machines would be able to improve their own capabilities far faster than human computer scientists could.
“Machines have a number of fundamental advantages, which will give them overwhelming superiority,” he writes. “Biological humans, even if enhanced, will be outclassed.” He explains in detail various routes by which an AI might escape the physical bonds of the hardware in which it developed. For example, it might use its hacking superpower to take control of robotic manipulators and automated labs, or deploy its powers of social manipulation to persuade human collaborators to work for it.

How would the world look after an AI takeover? There would be more intricate and intelligent structures than anything we can imagine today, but no conscious beings whose welfare has moral significance. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” as Bostrom puts it. “A Disneyland without children.”
If you’re interested in AI, Machine Learning and Data Strategy, please sign up to receive the next issue of THINK Review Magazine: The Data Strategy Issue.