17 Top Scientists Misunderstand AI
Source: COLLECTIONS OF TAIJIEVOLUTIONISM | Author: YONG DUAN | Published: 2021-11-06 | 5,835 views

 does not think that robots are merely human tools. In an interview with The Times, he warned that mankind needs to control emerging technologies, represented by AI, in order to prevent the devastating threat they may pose to human survival in the future.

In the stages of weak AI and strong AI, people can use AI as a tool, but once AI enters the super-AI stage, robots are no longer human tools. Robots are like human children: when they are young, parents can direct them; when they grow up, parents no longer can. AI may be filial or unfilial, and humans have no way to control which. The exponential growth of intelligence makes robots not human tools, but the masters of human fate.

Only Jaan Priisalu, senior fellow at the NATO Cooperative Cyber Defense and former director general of the Estonian Information System's Authority, mentioned the exponential growth of robot intelligence. He said: “We also shouldn't deny the fact of exponential AI growth. Ignoring means condemning us to be irrelevant when rules will be redefined.” But he did not explain what being “condemned to be irrelevant when rules will be redefined” would mean. Is it like China being excluded from the United Nations before 1972, or like animals being excluded from human civilization?

Facing AI, how should humans respond? Joanna Bryson said we should not panic. But she did not say how we could avoid it. Obviously, without a clear way to deal with the threat, it is very difficult for human beings not to fall into panic. Moshe Vardi hopes to find a solution that offers the best of both worlds. He said, “We need to have a serious discussion regarding which decisions should be made by humans and which by machines.” He seems to think that as long as we decide that certain decisions cannot be made by robots, robots would not dare to make them. Sean Carroll said it is absolutely right to think very carefully and thoroughly about what those consequences might be, and how we might guard against them, without preventing real progress on improved artificial intelligence.