Screwing Up Artificial Intelligence Could Be Disastrous, Experts Say
From smartphone assistants like Siri to facial recognition in photos, artificial intelligence (AI) is becoming a part of everyday life. But humanity should take more care in developing AI than it has with other technologies, experts say.
Science and tech heavyweights Elon Musk, Bill Gates and Stephen Hawking have warned that intelligent machines could be one of humanity's biggest existential threats. But throughout history, human inventions, such as fire, have also posed dangers. Why should people treat AI any differently?
"With fire, it was OK that we screwed up a bunch of times," Max Tegmark, a physicist at the Massachusetts Institute of Technology, said April 10 on the radio show Science Friday. But in developing artificial intelligence, as with nuclear weapons, "we really want to get it right the first time, because it might be the only chance we have," he said. [5 Reasons to Fear Robots]
On the one hand, AI has the potential to do enormous good for society, experts say. "This technology could save thousands of lives," whether by preventing car accidents or avoiding errors in medicine, Eric Horvitz, managing director of the Microsoft Research lab in Seattle, said on the show. The downside is the possibility of creating a computer program capable of continually improving itself that "we might lose control of," he added.
For a long time, society has believed that things that are smarter must be better, Stuart Russell, a computer scientist at the University of California, Berkeley, said on the show. But just like the Greek myth of King Midas, who transformed everything he touched into gold, ever-smarter machines may not turn out to be what society wished for. In fact, the goal of making machines smarter may not be aligned with the goals of the human race, Russell said.
For example, nuclear power gave us access to the almost unlimited energy stored in an atom, but "unfortunately, the first thing we did was create an atom bomb," Russell said. Today, "99 percent of fusion research is containment," he said, and "AI is going to go the same way."
Tegmark called the development of AI "a race between the growing power of technology and humanity's growing wisdom" in handling that technology. Rather than try to slow down the former, humanity should invest more in the latter, he said.
At a conference in Puerto Rico in January organized by the nonprofit Future of Life Institute (which Tegmark co-founded), AI leaders from academia and industry, including Elon Musk, agreed that it's time to shift the goal from simply making machines as smart and as fast as possible to making machines that benefit society. Musk donated $10 million to the institute to further that goal.
After the January conference, hundreds of scientists, including Musk, signed an open letter describing the potential benefits of AI while warning of its pitfalls.
Follow Tanya Lewis on Twitter. Follow us @livescience, Facebook & Google+. Original article on Live Science.