Can the UN Stop the Race for Artificial Intelligence?

Sergei Garmash | Politics

On October 22, 2025, more than 700 experts, religious leaders, and politicians signed an open letter calling for a ban on the development of superintelligent AI. The signatories include five Nobel Prize winners; two "godfathers of AI," Geoffrey Hinton and Yoshua Bengio; Apple co-founder Steve Wozniak; and other notable figures such as Steve Bannon and the Duke and Duchess of Sussex, Harry and Meghan, as reported by the Strategic Culture Foundation.

The letter calls for a moratorium on the creation of superintelligent AI until there is a scientific consensus that it can be developed safely and kept under control, and until broad public support has been secured.

The letter was organized by the American NGO the Future of Life Institute, which in 2023 had already called for a six-month pause in the development of powerful AI systems. That earlier appeal, however, had little effect.

The new call coincided with a debate in the UN Security Council on the topic "Artificial Intelligence and International Peace and Security."

Participants in the Security Council meeting unanimously noted that AI can bring both positive and negative consequences and strongly urged the creation of international norms, especially regarding autonomous weapons and nuclear weapons.

A representative from Belarus highlighted an important observation about the growing inequality in AI development: "A new curtain, not ideological but technological, divides the West from the rest of the world, creating neocolonial conditions for most countries," he noted, emphasizing the need for AI to be accessible to all.

The Prime Minister of Spain, Pedro Sánchez, expressed optimism, stating that the development of AI cannot be stopped, but it can be controlled.

UN Secretary-General António Guterres emphasized that, for the first time, every country will be represented in negotiations on AI, and announced the formation of a new Independent International Scientific Panel that will provide impartial scientific data on the impact of the technology.

It should be noted that on August 26, 2025, the UN General Assembly adopted resolution A/RES/79/325, known as the "AI Modalities Resolution," which creates two new mechanisms for global governance of AI: an Independent International Scientific Panel and a Global Dialogue on AI Governance.

The scientific panel will consist of 40 experts and will provide annual reports on the development of AI worldwide.

The Global Dialogue will serve as a platform for open discussion on issues of security, transparency, and the digital divide, including the participation of states, technology companies, and civil society.

If the UN's efforts are successful, companies working in the field of AI may be required to demonstrate the safety of their technologies before they hit the market, similar to what happens in medicine and nuclear energy.

In the United States, steps have already been taken to regulate AI research.

On July 1, 2025, the US Senate voted to strip a proposed ten-year moratorium on state regulation of AI from President Trump's "One Big Beautiful Bill," a significant blow to the tech industry, which had lobbied to keep the provision.

Democrats and some Republicans, including Senators Josh Hawley and Marsha Blackburn, supported removing the provision, which came as a surprise to the Republican leadership.

Resistance from technology lobbyists was overcome thanks to pressure from human rights organizations, which argued that the moratorium would shield companies from proper government oversight.

Steve Bannon discussed the moratorium on his podcast, arguing that it threatened states' rights. Texas State Senator Angela Paxton also opposed the moratorium, as did other well-known human rights advocates.

However, the lifting of the moratorium does not mean that control over AI research in the US will shift to the UN.

Last summer, Trump's team drafted an executive order proposing the creation of "Manhattan Projects" in military technology and the lifting of "burdensome regulations," indicating the likely policy direction of the incoming administration in favor of Silicon Valley investors, as The New York Times noted.

The US Congress will focus on regulating specific threats, such as non-consensual deepfake pornography, which has proliferated with the spread of new AI tools.

At the same time, the development of AI for military purposes will continue to accelerate.

In August 2023, the Pentagon established Task Force Lima, led by US Navy Captain Xavier Lugo, to explore applications of generative AI in military tasks.

The US Department of Defense plans to invest a record $13.4 billion in the development of AI and autonomous weapon systems as part of the 2026 fiscal year budget.

The budget proposal includes targeted funding for unmanned systems and AI integration, allocating $9.4 billion for drones and other amounts for naval and ground platforms.

Legislation regarding the defense budget has already passed through the House of Representatives and includes significant funds for AI and drones, confirming ongoing attention to the military use of technologies.

Thus, despite calls for AI regulation and the good intentions of the UN, the US continues to increase funding for military AI projects. All developments will remain classified, and UN experts will not receive information about them until the technologies reach the civilian sector.

China, for its part, will also increase investments in AI combat systems, following the example of the US.

Other countries will also follow the two leading players in this field.

While the UN develops recommendations for AI control, the trillions of dollars already invested in the development of combat systems are unlikely to be taken into account.