An article collects interviews with 17 top scientists on the issue of artificial intelligence.1 From the conversations of these scientists, one can see how divided the understanding of AI is within the scientific community.
Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory, said, “I am optimistic about the future of AI in enabling people and machines to work together to make lives better.” Andrew Ng, VP and chief scientist of Baidu, said: “Worrying about evil killer AI today is like worrying about overpopulation on the planet Mars. Perhaps it'll be a problem someday, but we haven't even landed on the planet yet.” Said Steven Pinker, a psychology professor at Harvard University, “We should worry a lot about climate change, nuclear weapons, antibiotic-resistant pathogens, and reactionary and neo-fascist political movements. We should worry some about the displacement of workers in an automating economy. We should not worry about artificial intelligence enslaving us.” Sebastian Thrun, computer science professor at Stanford University, even said, “I am infinitely excited about artificial intelligence and not worried at all. Not in the slightest. AI will free us humans from highly repetitive mindless office work, and give us much more time to be truly creative. I can't wait.”
But Margaret Martonosi, computer science professor at Princeton University, said: “It would be foolish to ignore the dangers of AI entirely.” Sean Carroll, professor of cosmology and physics at the California Institute of Technology, said, “That raises the prospect of unintended consequences in a serious way.” Said Nick Bostrom, director of the Future of Humanity Institute at Oxford University, “We should take seriously the possibility that things could go radically wrong.” But they did not say how severe the consequences would be.
Some scientists talked in more detail about what kinds of threats AI might bring. The threats they listed included