“accidental value misalignment”. These scientists do not believe that robots can wield godlike power, exterminating humanity with a mere flick of their fingers.
Bryan Caplan, a professor of economics at George Mason University, said: “AI is no more scary than the humans behind it. Like domesticated animals, it is designed to serve its creators. AI in North Korean hands is scary in the same way that long-range missiles in North Korea are scary. But that's it. Terminator scenarios where AI turns on mankind are just paranoid.” Moshe Vardi, professor of computational engineering at Rice University, said, “The super intelligence risk, which gets more headlines, is not an immediate risk.” Daniela Rus said: “AI is an incredibly powerful tool that, like other tools, isn't inherently good or bad — it's about what we choose to do with it.”
The argument that robots can only ever be human tools seriously underestimates the capabilities of future robots. Stephen William Hawking did not think that robots are merely humanity's tools. In an interview with The Times, he warned that mankind needs to control emerging technologies, represented by AI, in order to prevent the devastating threats they may pose to human survival in the future.
In the stages of weak AI and strong AI, people can use AI as a tool, but once the stage of super AI is reached, robots will no longer be human tools. Robots are like human children: when they are young, parents can direct them; when they grow up, parents can no longer do so. AI may be filial or unfilial, and humans will have no way to control it. The exponential growth of machine intelligence will make robots not human tools but the masters of human fate.
Only Jaan Priisalu, a senior fellow at the NATO Cooperative Cyber Defence Centre of Excellence and former Director General of the Estonian Information System Authority, mentioned the exponential growth of robot intelligence. He said: “We also shouldn't deny the fact of exponential AI growth. Ignoring means condemning us to be irrelevant when rules will be redefined.” But he did not explain what “condemning us to be irrelevant when rules will be redefined” means. Is it like China being excluded from the United Nations before 1972, or like animals being excluded from human civilization?
Facing AI, how should humans respond? Joanna Bryson said that we should not panic, but she did not say how we could avoid doing so. Obviously, without a clear way to deal with the threat, it is very difficult for human beings not to fall into panic. Moshe Vardi hopes to find a solution that offers the best of both worlds. He said, “We need to have a serious discussion regarding which decisions should be made by humans and which by machines.” He seems to think that as long as we decide that certain decisions cannot be made by robots, robots would not dare to make them. Sean Carroll said, “It is absolutely right to think very carefully and thoroughly about what those consequences might be, and how we might guard against them, without preventing real progress on improved artificial intelligence.”