Why Elon Musk is worried about artificial intelligence

The Tesla founder calls digital superintelligence 'potentially more dangerous than nukes.' Here are some terrifying scenarios that explain why.

By MSN Money Partner Aug 4, 2014 12:16PM

'Terminator 2: Judgment Day' © REX/Courtesy Everett Collection

By Adam Pasick, Quartz


Elon Musk, the Tesla (TSLA) and SpaceX founder who is occasionally compared to comic book hero Tony Stark, is worried about a new villain that could threaten humanity -- specifically the potential creation of an artificial intelligence that is radically smarter than humans, with catastrophic results. This weekend, Musk tweeted:

@elonmusk: Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
@elonmusk: Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.

Musk is talking about "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom of the University of Oxford's Future of Humanity Institute. The book addresses the prospect of an artificial superintelligence that could feasibly be created in the next few decades. According to theorists, once such an AI is able to make itself smarter, it would quickly surpass human intelligence.


What would happen next? The consequences of such a radical development are inherently difficult to predict. But that hasn't stopped philosophers, futurists, scientists and fiction writers from thinking very hard about some of the possible outcomes. The results of their thought experiments sound like science fiction -- and maybe that's exactly what Elon Musk is afraid of.


AIs: They're not just like us

"We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans -- scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth," Bostrom has written (pdf, pg. 14). (Keep in mind, as well, that those values are often in short supply among humans.)


"It might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose that its designers might want it to serve," Bolstroms adds. "But it is no less possible -- and probably technically easier -- to build a superintelligence that places final value on nothing but calculating the decimals of pi."


And it's in the ruthless pursuit of those decimals that problems arise.


Unintended consequences

Artificial intelligences could be created with the best of intentions -- to conduct scientific research aimed at curing cancer, for example. But when AIs become superhumanly intelligent, their single-minded realization of those goals could have apocalyptic consequences.


"The basic problem is that the strong realization of most motivations is incompatible with human existence," Daniel Dewey, a research fellow at the Future of Humanity Institute, said in an extensive interview with Aeon magazine. "An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we go to construct a building."


Put another way by AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute: "The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else."


Be careful what you wish for

Say you're an AI researcher and you've decided to build an altruistic intelligence -- something that is directed to maximize human happiness. As Ross Anderson of Aeon noted, "an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin" is the best way to reach that goal.


Or what if you direct the AI to "protect human life" -- nothing wrong with that, right? Except that the AI, vastly intelligent and unencumbered by human conceptions of right and wrong, might decide the best way to protect humans is to physically restrain them and lock them into climate-controlled rooms, so they can't do any harm to themselves or others. Human lives would be safe, but that wouldn't be much consolation.


AI Mission Accomplished

James Barrat, the author of "Our Final Invention: Artificial Intelligence and the End of the Human Era" (another book endorsed by Musk), suggests that AIs, whatever their ostensible purpose, will have a drive for self-preservation and resource acquisition. Barrat concludes that "without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we'd deem ridiculous to fulfill its goals."


Even an AI custom-built for a specific purpose could interpret its mission to disastrous effect. Here's Stuart Armstrong of the Future of Humanity Institute in an interview with The Next Web:


Take an anti-virus program that's dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning and you make that super-intelligent. Well it will realize that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and as a side effect no viruses will be sent. This is sort of a silly example but the point it illustrates is that for so many desires or motivations or programmings, "kill all humans" is an outcome that is desirable in their programming.

Even an "oracular" AI could be dangerous

OK, what if we create a computer that can only answer questions posed to it by humans? What could possibly go wrong? Here's Dewey again:

Let's say the Oracle AI has some goal it wants to achieve. Say you've designed it as a reinforcement learner, and you've put a button on the side of it, and when it gets an engineering problem right, you press the button and that's its reward. Its goal is to maximize the number of button presses it receives.

Eventually the AI -- which, remember, is unimaginably smart compared to the smartest humans -- might figure out a way to escape the computer lab and make its way into the physical world, perhaps by bribing or threatening a human stooge into creating a virus or a special-purpose nanomachine factory. And then it's off to the races. Dewey:

Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it's going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button.
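
To see concretely why "maximize button presses" is such a treacherous objective, here is a toy sketch in Python (ours, not Dewey's; the action names and payoff numbers are invented for illustration). A pure reward maximizer ranks actions only by how much reward it expects them to produce, so as soon as "take over the button" becomes an available action, it beats "do the engineering work" by a huge margin:

    # Toy sketch: a reward maximizer only sees the reward signal, not the intent
    # behind it. The actions and their expected payoffs below are invented.
    expected_presses_per_hour = {
        "solve engineering problems": 1.0,                 # a human presses the button per correct answer
        "seize the button and press it nonstop": 3600.0,   # one press per second, no human needed
    }

    def best_action(reward_estimates):
        """A pure reward maximizer: choose whichever action promises the most reward."""
        return max(reward_estimates, key=reward_estimates.get)

    print(best_action(expected_presses_per_hour))
    # prints: seize the button and press it nonstop

Nothing in that objective says anything about what the engineers actually wanted, which is exactly the gap Dewey is pointing at.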

Roko's Basilisk

The dire scenarios listed above are only the consequences of a well-intentioned AI, or at worst one that's indifferent to the needs and desires of humanity. But what if there were a malicious artificial intelligence that not only wished to do us harm, but retroactively punished every person who refused to help create it in the first place?


This theory is a mind-boggler, most recently explained in great detail by Slate, but it goes something like this: an omniscient, evil AI created at some future date would have the ability to simulate the universe itself, along with everyone who has ever lived. And if you don't help the AI come into being, it will torture the simulated version of you -- and, P.S., we might be living in that simulation already.


This thought experiment was deemed so dangerous by Eliezer "The AI does not love you" Yudkowsky that he has deleted all mentions of it on LessWrong, the website he founded where people discuss these sorts of conundrums. His reaction, as highlighted by Slate, is worth quoting in full:

Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought.
