The late Stephen Hawking was a major voice in the debate about how humanity can benefit from artificial intelligence. Hawking made no secret of his fears that artificial intelligence could one day bring doom. He went as far as predicting that future developments in AI “could spell the end of the human race.”
But Hawking’s relationship with AI was far more complex than this often-cited soundbite. The deep concerns he expressed were about superhuman AI, the point at which AI systems not only replicate human intelligence processes, but also keep expanding them without human involvement – a stage that is at best decades away, if it ever arrives at all.
And yet Hawking’s very ability to communicate those fears, and all his other ideas, came to depend on basic AI technology.
Hawking’s Conflicted Relationship With AI
At the intellectual property and health law centres at DePaul University, my colleagues and I study the effects of emerging technologies like the ones Stephen Hawking worried about. At its core, the concept of AI involves computational technology designed to make machines function with foresight that mimics, and ultimately surpasses, human thinking processes.
Hawking cautioned against an extreme form of AI, in which thinking machines would “take off” on their own, modifying themselves and independently designing and building ever more capable systems. Humans, bound by the slow pace of biological evolution, would be tragically outwitted.
AI as a Threat to Humanity?
Well before it gets to the point of superhuman technology, AI can be put to terrible uses. Already, scholars and commentators worry that self-flying drones may be precursors to lethal autonomous robots.
Today’s early-stage AI raises several other ethical and practical problems, too. AI systems are largely based on opaque algorithms that make decisions even their own designers may be unable to explain. The underlying mathematical models can be biased, and computational errors may occur. AI may progressively displace human skills and increase unemployment. And limited access to AI might increase global inequality.
The One Hundred Year Study on Artificial Intelligence, launched by Stanford University in 2014, highlighted some of these concerns. But so far it has identified no evidence that AI will pose any “imminent threat” to humankind, as Hawking feared.
Still, Hawking’s views on AI are somewhat less alarmist and more nuanced than he usually gets credit for. At their heart, they describe the need to understand and regulate emerging technologies. He repeatedly called for more research on the benefits and dangers of AI. And he believed that even non-superhuman AI systems could help eradicate war, poverty and disease.
This apparent contradiction – a fear of humanity being eventually overtaken by AI but optimism about its benefits in the meantime – may have come from his own life: Hawking had come to rely on AI to interact with the world.
The first iteration of the computer programme Hawking used to communicate was exasperatingly slow and prone to errors. Very basic AI changed that. An open-source programme made his word selection significantly faster. More importantly, it used artificial intelligence to analyse Hawking’s own words, then drew on that information to help him express new ideas. By processing Hawking’s books, articles and lecture scripts, the system got so good that he did not even have to type the term people most associate with him, “the black hole.” When he selected “the,” “black” would automatically be suggested to follow it, and “black” would prompt “hole” onto the screen.
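The prediction described above – suggesting the most likely next word based on a writer’s own corpus – can be sketched as a simple bigram model. This is an illustration of the general technique, not Hawking’s actual software; the tiny corpus and function names below are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts, prev_word, k=3):
    """Return up to k words most frequently seen after prev_word."""
    return [w for w, _ in counts[prev_word.lower()].most_common(k)]

# Tiny illustrative corpus standing in for a writer's books and lectures.
corpus = (
    "the black hole emits radiation "
    "the black hole has an event horizon "
    "the black hole evaporates over time"
)
model = train_bigrams(corpus)
print(suggest(model, "the"))    # "black" ranks first
print(suggest(model, "black"))  # "hole" ranks first
```

Trained on a large personal corpus instead of three sentences, the same frequency counting is what lets “the” surface “black” and “black” surface “hole” with a single selection each.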
AI Improves People’s Health
Stephen Hawking’s experience with such a basic form of AI illustrates how non-superhuman AI can indeed change people’s lives for the better. Speech prediction helped him cope with a devastating neurological disease. Other AI-based systems are already helping prevent, fight and lessen the burden of disease.
For instance, AI can analyse data from medical sensors and other health records to predict how likely a patient is to develop a severe blood infection. In studies, it was substantially more accurate – and provided much earlier warning – than other methods.
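In spirit, such risk prediction maps a patient’s vital-sign features to a probability. A minimal sketch is a logistic model over a few vitals; every weight and threshold below is invented for illustration and has no clinical basis.

```python
import math

def infection_risk(heart_rate, temp_c, resp_rate):
    """Toy logistic risk score; weights are illustrative, not clinically derived."""
    # Centre each vital sign on a typical resting value before weighting.
    z = (0.04 * (heart_rate - 80)
         + 0.90 * (temp_c - 37.0)
         + 0.10 * (resp_rate - 16))
    # Squash the weighted sum into a 0..1 probability.
    return 1 / (1 + math.exp(-z))

normal  = infection_risk(heart_rate=75, temp_c=36.8, resp_rate=14)
febrile = infection_risk(heart_rate=120, temp_c=39.5, resp_rate=28)
```

Real systems learn such weights from thousands of patient records and far richer features, but the core idea – combining signals into a single calibrated risk number that can trigger an early alert – is the same.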
Another group of researchers created an AI programme to sift through electronic health records of 700,000 patients. The programme, called “Deep Patient,” unearthed linkages that had not been apparent to doctors – identifying new risk patterns for certain cancers, diabetes and psychiatric disorders.
AI has even powered a robotic surgery system that, in a procedure performed on pigs closely resembling an operation done on human patients, outperformed human surgeons.
There’s so much promise for AI to improve people’s health that collecting medical data has become a cornerstone of both software development and public-health policy in the U.S.
All of these benefits from AI are available right now, and more are in the works. They do suggest that superhuman AI systems could be extremely powerful, but despite warnings from Hawking and fellow technology visionary Elon Musk, that day may never come. In the meantime, as Hawking knew, there is much to be gained. AI gave him a better and more efficient voice than his body was able to provide, with which he called for both research and restraint.
(This is an opinion piece and the views expressed above are the author’s own. The Quint neither endorses nor is responsible for the same. This article was originally published on The Conversation.)