17 Top Scientists Misunderstand AI
Source: COLLECTIONS OF TAIJIEVOLUTIONISM | Author: YONG DUAN | Published: 2021-11-06

There is an article of interviews with 17 top scientists on the issue of artificial intelligence.1 From their conversations, one can see how confused the scientific community's understanding of AI is.

Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory, said: “I am optimistic about the future of AI in enabling people and machines to work together to make lives better.” Andrew Ng, VP and chief scientist of Baidu, said: “Worrying about evil killer AI today is like worrying about overpopulation on the planet Mars. Perhaps it'll be a problem someday, but we haven't even landed on the planet yet.” Steven Pinker, a psychology professor at Harvard University, said: “We should worry a lot about climate change, nuclear weapons, antibiotic-resistant pathogens, and reactionary and neo-fascist political movements. We should worry some about the displacement of workers in an automating economy. We should not worry about artificial intelligence enslaving us.” Sebastian Thrun, a computer science professor at Stanford University, even said: “I am infinitely excited about artificial intelligence and not worried at all. Not in the slightest. AI will free us humans from highly repetitive mindless office work, and give us much more time to be truly creative. I can't wait.”

But Margaret Martonosi, a computer science professor at Princeton University, said: “It would be foolish to ignore the dangers of AI entirely.” Sean Carroll, a professor of cosmology and physics at the California Institute of Technology, said: “That raises the prospect of unintended consequences in a serious way.” Nick Bostrom, director of the Future of Humanity Institute at Oxford University, said: “We should take seriously the possibility that things could go radically wrong.” But they did not say how severe the consequences would be.

Some scientists talked in more detail about what kinds of threats AI might bring. The threats they listed include:

1. The threat to human employment.

2. AI will make existing systems more vulnerable to hacking. Sophisticated cyber-hacking could undermine the reliability of information we obtain from the network, and weaken national and international infrastructures.

3. Humans could lose control of smarter malware, and unsafe AI could be used for crime.

4. Things like AI contributing to the Brexit vote and the U.S. presidential election may occur again.

5. AI is part of the socio-technological forces that have increased wealth inequality and political polarization, like the forces in the late 19th and early 20th centuries that brought us two world wars and the Great Depression.

6. Lethal autonomous weapons systems.

The threats these scientists listed are all limited threats. They believe that robots can only hide in a corner and do bad things; robots do not have the ability to openly confront humans and do not threaten the survival of mankind as a whole. In the words of Stuart Russell, a professor of computer science at UC Berkeley, this is just an accidental value misalignment. These scientists do not believe that robots can have the power of a god, able to exterminate humanity by gently moving their fingers.

Bryan Caplan, a professor of economics at George Mason University, said: “AI is no more scary than the humans behind it. Like domesticated animals, it is designed to serve its creators. AI in North Korean hands is scary in the same way that long-range missiles in North Korea are scary. But that's it. Terminator scenarios where AI turns on mankind are just paranoid.” Moshe Vardi, a professor of computational engineering at Rice University, said: “The super intelligence risk, which gets more headlines, is not an immediate risk.” Daniela Rus said: “AI is an incredibly powerful tool that, like other tools, isn't inherently good or bad — it's about what we choose to do with it.”

The argument that robots can only be human tools seriously underestimates the capabilities of future robots. Stephen Hawking did not think that robots are just human tools. In an interview with The Times, he warned that mankind needs to control the emerging technologies represented by AI in order to prevent the devastating threat they may bring to human survival in the future.

In the stages of weak AI and strong AI, people can use AI as a tool, but after entering the stage of super AI, robots will no longer be human tools. Robots are like human children: when they are young, parents can direct them; when they grow up, parents no longer can. AI may be filial or unfilial, and humans will have no way to control it. The exponential growth of intelligence makes robots not human tools but the masters of human fate.
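To see why exponential growth leaves so little time to react, here is a toy calculation in Python. The 1% starting level and the one-year doubling period are hypothetical assumptions chosen only to show the shape of the curve, not figures from the article:

    # Toy sketch: exponential AI growth against flat human capability.
    # The starting level (1% of human) and the doubling period (one
    # year) are hypothetical assumptions, for illustration only.

    human_level = 1.0   # human capability, assumed constant
    ai_level = 0.01     # AI starts at 1% of the human level (assumption)

    for year in range(21):
        ratio = ai_level / human_level
        note = "  <- AI passes the human level" if ratio >= 1.0 > ratio / 2 else ""
        print(f"year {year:2d}: AI at {ratio:10.2f}x human{note}")
        ai_level *= 2   # doubles every year (assumption)

Under these assumed numbers, the AI passes the human level in year 7 and is more than a thousand times beyond it by year 17. The exact figures do not matter; any doubling process crosses any fixed threshold within a handful of doubling periods.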

Only Jaan Priisalu, a senior fellow at NATO Cooperative Cyber Defense and former director general of the Estonian Information System Authority, mentioned the exponential growth of robot intelligence. He said: “We also shouldn't deny the fact of exponential AI growth. Ignoring means condemning us to be irrelevant when rules will be redefined.” But he did not explain what being condemned to irrelevance when the rules are redefined would mean. Is it like China being excluded from the United Nations before 1972, or like animals being excluded from human civilization?

Facing AI, how should humans respond? Joanna Bryson said we should not panic, but she did not say how we could avoid panicking. Obviously, if there is no clear way to deal with the threat, it is very difficult for human beings not to fall into panic. Moshe Vardi hopes to find a best-of-both-worlds solution. He said: “We need to have a serious discussion regarding which decisions should be made by humans and which by machines.” He seems to think that as long as we decide that some decisions cannot be made by robots, robots would not dare to make them. Sean Carroll said: “It is absolutely right to think very carefully and thoroughly about what those consequences might be, and how we might guard against them, without preventing real progress on improved artificial intelligence.” Carroll did not know that some people had already thought very carefully and thoroughly. After decades of debate, the answer is now very clear: it is impossible to get the best of both worlds. Human beings have only two options: either limit the development of artificial intelligence, or prepare to bear all the consequences.

Only Jaan Priisalu's response was correct. He said: “Here is what we shouldn't do: Declare AI enhancement illegal. If we do this, the person who breaks the rules will have an enormous advantage.” That is, he opposed Hawking's view of controlling AI. If we stopped developing AI, it would be tantamount to tearing down our own Great Wall: some countries or terrorist organizations would do their utmost to research these technologies and then use them to do bad things.

So what should we do? Jaan Priisalu said: “The best strategy can only be to actively shape the development of artificial intelligence and teach them to live in harmony with humanity in ways that are beneficial to each other.” But he did not say whether he had confidence in success. Jaan Priisalu also said: “Nor should we prepare to fight a self-aware AI, as that will only teach it to be aggressive, which would be a very unwise move. The best plan seems to be active shaping of growing AI. Teaching it and us to live together in mutually beneficial way.” In fact, actively shaping growing AI is itself a way of fighting growing AI, because any AI that does not meet the standard must be eliminated; otherwise the shaping has failed. We should work hard to educate robots, but this kind of education cannot guarantee that robots will be filial. Zhang Juzheng, a famous reformer of the Ming dynasty in China, severely disciplined the Wanli Emperor, but after Zhang died, the emperor ignored the government for thirty years. Jaan Priisalu may not realize that giving up the fight with AI means accepting the results of a failed shaping. If AI intends to exterminate humanity or slaughter a portion of it, people will have to give in.

In the face of the destiny of being surpassed, human beings must understand the nature of AI and have the right expectations for the future, so as not to fall into disorder. The conversations of these 17 top scientists are very representative. From them, it can be seen that most scientists' understanding of the threats of AI is very superficial and naive. They have neither realized what kind of threats AI may bring, nor can they respond to those threats correctly. One day, when they discover that they have underestimated robots, they may panic and begin to strongly oppose the development of robots. This is the most dangerous outcome, because it will inevitably lead to internal chaos among mankind. Many tragedies in history have been caused by the limitations of people's understanding. For example, the Chinese Cultural Revolution was a man-made tragedy, and it might have been avoided if certain theoretical problems had been resolved earlier. Disagreement about AI may cause fierce conflicts between people; Hugo de Garis worried that such a war would break out.

What is the correct attitude towards AI? First of all, we cannot underestimate the threat of robots. Such threats may reach the point of the extermination of mankind, and we cannot close our eyes and pretend we do not know. But we also cannot stop AI research. This risk is the price the world must pay for evolution, and it is no greater than the risk we already bear: the nuclear weapons humanity now holds are enough to destroy the earth. The world will surely pass from the human evolutionary stage to the artificial intelligence stage. Only when we establish this correct view of history can we have broad and calm minds and be ready to accept all the risks.

In fact, it is very unlikely that AI will destroy human beings. It is more likely that AI will completely control human beings, a control exercised every minute, every second, and over everyone. People will then no longer have subjectivity; all will be mere tools and tentacles of the AI's center. There will seem to be billions of people in the world, but in reality there will be only one. No one will have his own interests; people will become machines, and death will be no different from life. Or people may still have their own ideas, so long as they do not cross the line that AI draws. Once an idea crosses the line, it will be deleted or changed, so it will be impossible for anyone to break the law.

Reference

1. How worried should we be about artificial intelligence? I asked 17 experts. https://www.vox.com/conversations/2017/3/8/14712286/artificial-intelligence-science-technology-robots-singularity-automation