There is an article of interviews with 17 top scientists on the issue of artificial intelligence.1 From the conversations of these scientists, one can see how confused the understanding of AI in the scientific community is.
Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory, said, “I am optimistic about the future of AI in enabling people and machines to work together to make lives better.” Andrew Ng, VP and chief scientist of Baidu, said: “Worrying about evil killer AI today is like worrying about overpopulation on the planet Mars. Perhaps it'll be a problem someday, but we haven't even landed on the planet yet.” Said Steven Pinker, a psychology professor at Harvard University, “We should worry a lot about climate change, nuclear weapons, antibiotic-resistant pathogens, and reactionary and neo-fascist political movements. We should worry some about the displacement of workers in an automating economy. We should not worry about artificial intelligence enslaving us.” Sebastian Thrun, computer science professor at Stanford University, even said, “I am infinitely excited about artificial intelligence and not worried at all. Not in the slightest. AI will free us humans from highly repetitive mindless office work, and give us much more time to be truly creative. I can't wait.”
But Margaret Martonosi, computer science professor at Princeton University, said: “It would be foolish to ignore the dangers of AI entirely.” Sean Carroll, professor of cosmology and physics at California Institute of Technology, said, “That raises the prospect of unintended consequences in a serious way.” Said Nick Bostrom, director of the Future of Humanity Institute at Oxford University, “We should take seriously the possibility that things could go radically wrong.” But they did not say how severe the consequences would be.
Some scientists discussed in more detail what kinds of threats AI might bring. The threats they listed include:
1. The threat to human employment.
2. AI will make existing systems more vulnerable to hacking. Sophisticated cyber-hacking could undermine the reliability of information we obtain from the network, and weaken national and international infrastructures.
3. Humans may lose control of increasingly sophisticated malware, and unsafe AI may be used for crime.
4. Events like AI influencing the Brexit vote and the U.S. presidential election may occur again.
5. AI may act as part of the socio-technological forces that have driven increases in wealth inequality and political polarization, like those of the late 19th and early 20th centuries that brought us two world wars and a great depression.
6. Lethal autonomous weapons systems.
These threats listed by the scientists are all limited threats. They believe that robots can only hide in a corner to do bad things; they do not have the ability to openly confront humans and do not threaten the survival of mankind as a whole. In the words of Stuart Russell, a professor of computer science at the University of California, Berkeley, this is just an