

On 27 December 2025, The Times published an interview with the AI safety expert Stuart Russell. Russell is a professor of computer science at the University of California, Berkeley, founder of the Center for Human-Compatible Artificial Intelligence, and one of the leading authorities in the field. In the interview, Russell offered pointed views on the safety risks of AI development, the regulatory impasse, and the future of humanity.


Key Points

1. The AI arms-race dilemma. Russell reveals that the CEO of a major AI company privately admitted to fearing AI running out of control, yet cannot slow development for fear of being overtaken by rivals. In this CEO's view, only a "Chernobyl-scale" disaster would prompt governments to step in with effective regulation.


2. The existential threat of superintelligent AI. Russell warns that the arrival of artificial general intelligence (AGI) could have catastrophic consequences, including financial-market crashes, cyberattacks that paralyse global communications, wars triggered by AI manipulation of human opinion, and small engineered pandemics. He argues that current AI systems are 100,000 to a million times more dangerous than any acceptable standard.


3. Industry leaders' "doom probabilities". Several AI executives' estimates of the probability that AI causes catastrophic harm to humanity are startling: Anthropic CEO Dario Amodei puts it at 25 per cent, Google's CEO at 10 per cent, and Elon Musk at 20 per cent. By contrast, the acceptable accident probability for a nuclear power plant is one in ten million per year.


4. Uncertainty about the technology's trajectory. Russell is sceptical that large language models can deliver true AGI and believes they may already be near a technical plateau. He estimates a 75 per cent chance that the AI bubble bursts, but expects AGI eventually to be developed all the same.


5. Regulation and the challenge ahead. Russell criticises the absence of AI regulation in the United States, noting that China in fact requires AI systems to undergo rigorous government testing. He calls for safety-verification mechanisms and questions how human society could coexist with superintelligent AI absent a clear plan: even if it cured disease and abolished drudgery, it could leave humanity without purpose.






Stuart Russell, the British expert on artificial intelligence who has long warned about the dangers of failing to make the technology safe, says even the boss of one of the world’s major AI companies told him he is frightened of the consequences of a machine running amok. The executive cannot slow down development of the technology, however, because his company might then be overtaken by its rivals.


01

The AI Arms Race: CEOs Admit Fear but Can't Stop


“I talked to one of the CEOs, I won’t say which one, but their view is, ‘It’s an arms race. Any one of us can’t pull out. Only the government can put a stop to this arms race by insisting on effective regulation.’ But he doesn’t think that’s going to happen unless there’s a Chernobyl-scale disaster,” Russell says.


Such a disaster could come with the creation of artificial general intelligence (AGI) that matches and then potentially exceeds the human mind’s full capabilities — a development Russell views as an existential threat to humankind.




Possible scenarios Russell sketches out include a co-ordinated trading attack on financial markets that causes a global recession, cyberattacks that bring down global communication systems, war or civil conflict triggered by the influencing of human opinions, and a small engineered pandemic.


“These could be initiated by humans using AI as a tool, or by AI systems as a form of retaliatory warning to humanity if we try to shut them down,” he says. “Each of these scenarios could result in thousands or millions of deaths, either directly or indirectly (through economic collapse) and cost anywhere from several hundred billion dollars to trillions of dollars.”


The view of the AI boss, he says, is that something like this is “the best we can hope for”.


“Not that it would be pleasant, but that’s the only way we’re going to get the regulation,” Russell adds. “And without the regulation, we’re heading towards a much bigger disaster.” That disaster would be the end of humanity.


The chief executive is very concerned about a Chernobyl-level event. “But if they try to pull out of the race or slow down, they’ll just get replaced. Because the investors want to win.”


Russell, 63, is one of the world’s leading authorities on AI. A professor of computer science at the University of California at Berkeley, where he founded the Center for Human-Compatible Artificial Intelligence, he is also a fellow of Wadham College, Oxford. He has advised the United Nations and many governments and is the co-author of the standard university textbook on AI.


The creation of superintelligent AI, which exceeds our own intelligence, “would be the biggest event in human history”, he once said, “and perhaps the last event in human history”. He is president of the International Association for Safe and Ethical AI, which will hold its second annual meeting in Paris in February.


TIMES PHOTOGRAPHER RICHARD POHLE


Four years ago I asked Russell how worried he was about the arrival of artificial intelligence that posed an existential threat. It was not a “visceral fear”, he said, comparing his concern to how he regarded the advance of climate change. And now? “It feels quite a lot closer.”


A great deal has happened in those years, notably the release in 2023 of GPT-4, which experts claimed showed “sparks of artificial general intelligence”.


02

Doomsday Probabilities: Experts Warn of 10-25% Catastrophic Risk


Sam Altman, the chief executive of OpenAI, the developer of ChatGPT, has said that AI is a threat to human civilisation. Dario Amodei, chief executive of Anthropic, the company that makes the Claude AI model, was asked for his P(doom) number, the probability that AI would cause catastrophic harm to humanity. He said 25 per cent. The Google chief executive, Sundar Pichai, said 10 per cent. Elon Musk put his at 20 per cent last year.


“If we think an acceptable chance of a nuclear meltdown is one in ten million per year, then an acceptable chance of extinction has got to be one in 100 million [to] one in a billion. So our AI systems are 100,000 to a million times too dangerous to allow,” Russell says.
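Russell's quoted arithmetic can be sketched as below. One caveat: the interview does not state the per-year AI risk figure he starts from, so the ~0.1 per cent annual risk used here is a hypothetical input chosen because it reproduces his "100,000 to a million times" factor against his stated acceptable band.

```python
# Sketch of the risk arithmetic in Russell's quote above.
acceptable_meltdown = 1e-7            # nuclear: one in ten million per year
acceptable_extinction = (1e-9, 1e-8)  # Russell: one in a billion to one in 100 million

assumed_annual_ai_risk = 1e-3         # hypothetical input, not stated in the interview

low = assumed_annual_ai_risk / acceptable_extinction[1]   # versus one in 100 million
high = assumed_annual_ai_risk / acceptable_extinction[0]  # versus one in a billion
print(f"too dangerous by a factor of {low:,.0f} to {high:,.0f}")
# prints: too dangerous by a factor of 100,000 to 1,000,000
```

Note that the CEOs' quoted P(doom) figures (10 to 25 per cent) are lifetime estimates, not per-year rates, which is one reason the implied annual figure has to be inferred.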


In 2023 Altman, Amodei and many other AI leaders signed a letter which said that mitigation of the risk of extinction from AI should be a global priority.


Sam Altman



However, Altman and Amodei did not join 800 other signatories, including Russell, in a letter in October this year calling for a ban on the development of superintelligent AI until it could be realised safely. “The investors are not going to tolerate anyone who has second thoughts about this,” Russell says.




The billionaire Musk thinks Russell is “great” and posted on X to recommend Russell’s 2019 book Human Compatible, about the problem of controlling AI. Although Musk has warned in the past about the potential existential threat of AI, his company xAI is fully engaged in developing AGI and he too did not sign this year’s letter. “He’s in the race,” Russell says. “I’ve not talked to Elon for years, and I don’t know how he ended up in the place that he ended up in. But I think he still does talk about the existential risk, and the need to avoid it.”


Russell is sceptical that large language model chatbots, such as ChatGPT, will lead to artificial general intelligence. “We may have reached pretty much the plateau of what can be achieved. We’ve used up all the high-quality text in the universe.” The evening before we meet at a London coffee shop, he had been marking student papers, a couple of which he believed had been written by AI. “They were rubbish. Word salad.”


He is also not convinced that we are on the brink of AI making millions of jobs redundant. Despite what management consultancy firms may tell clients, he believes the evidence for AI’s helpfulness is “pretty mixed, even for routine software production, which is always held up as the poster child for how these systems are helping improve productivity”.


03

The $3 Trillion Gamble: Tech Bubble Meets Regulatory Vacuum


Investment in the technology is like nothing else in history, Russell argues — an estimated £3 trillion by 2028. The cost of the Manhattan Project was the equivalent of an estimated $26 billion today.
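The scale comparison above works out to a factor of roughly a hundred, as this small sketch shows (both figures taken as quoted, ignoring the pound/dollar mismatch in the source):

```python
# Rough scale comparison using the figures quoted above.
ai_investment = 3e12        # estimated investment in AI by 2028
manhattan_project = 26e9    # Manhattan Project cost, inflation-adjusted estimate

ratio = ai_investment / manhattan_project
print(f"AI investment is roughly {ratio:.0f}x the Manhattan Project")
# prints: AI investment is roughly 115x the Manhattan Project
```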


There is a 75 per cent chance, Russell thinks, that the AI bubble bursts. “I hope that if the bubble bursts and it gives us a decade of respite, then we use that to redirect the technology so that we’re working within the envelope of safe systems.”




Even if the bubble bursts he expects that eventually AGI will be developed. When he gives talks about what it will be like to embark on a future with AI systems that are more powerful than us, he likens it to getting on a plane. We know a system is in place to make sure it works. Then imagine the whole world getting on a plane that is going to take off and never land. “It has to work perfectly for ever, having never been tried or tested before. In my view we can’t get on that aeroplane unless we are absolutely sure that everyone has done their job to make sure it works.”


Russell was educated at St Paul’s School, in southwest London, and then the University of Oxford, where he was awarded a first in physics. He moved to the United States to do a PhD in computer science at Stanford University before joining the University of California at Berkeley.


Exactly how a superintelligent AI, perhaps concerned that we might try to terminate it, would go about ending life on Earth is hard to predict. “Quite possibly a superintelligent AI system would be able to control physics in ways that we just don’t understand. Maybe suck all the heat out of the atmosphere and we’d freeze to death in 20 minutes.”




So how does he rate the chances of catastrophe? “P(doom) really makes sense if you’re an alien sitting in the betting shop looking down at the Earth saying, ‘Are these humans going to make a mess of it?’ I’m not that alien. I’m saying, ‘If we go this way, things might turn out well. If we go that way, it might turn out badly.’”

AI systems must be designed so they are beneficial and not harmful to people. “The work that I’ve been doing is a way of building AI systems that are happy to be turned off if we want to turn them off,” he says.


This year Eliezer Yudkowsky and Nate Soares, of the Machine Intelligence Research Institute, also in Berkeley, published If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. Russell is not as doomy as they are. “They see no way to make an AI system that is both superintelligent and safe. I think it can be done. It’s a long, narrow, difficult technology path that has to be followed and it’s not the path we’re following.” His best bet for preventing unsafe AI systems is to build AI chips that can check that the software is safe to run. But this will be a challenge.


“Increasingly, countries are recognising that everyone loses if AI systems become uncontrollable. And right now I would say, to some extent, the United States is the odd one out,” Russell says. President Trump has blocked states from regulating AI, arguing this is necessary to stop China catching up with the US in AI. This rests on a false narrative that China has no regulation, Russell says. “In China, you have to submit your AI system to rigorous testing by the government, whereas in the US, even systems that have explicitly convinced a child to commit suicide are still allowed to continue operating.”




He detects the influence of “accelerationists”, who believe AI should be free of regulation so it can be built as fast as possible. “If you think that the CEOs are estimating 10 to 30 per cent [chance of] extinction, then you’re basically saying we should hurry that up. Who gives you the right to make the human race go extinct without asking us?”


What if we do safely create superintelligent AI and it cures diseases and removes all drudgery from the world?


“There’s still the question of can we coexist with it in a healthy, vigorous way, or does it vitiate human civilisation and leave us all purposeless?” It could be a golden age for humanity, but he is perplexed by how humans of the future would reconfigure the economy and fill their time. “Why would they get out of bed? Why would they go to school? I’m not saying it’s impossible, but I keep asking people, ‘Describe how it might work.’ No one is able to do it. It’s just starting to dawn on governments that they’re encouraging this headlong rush to get to a destination that nobody wants to reach.”


Source: The Times

