Jared Kaplan: AI Self-Improvement Will Lead to an "Intelligence Explosion," but It Will Get Out of Control

Марина Онегина Exclusive

Kaplan, who previously worked at CERN and Johns Hopkins University before co-founding Anthropic, emphasizes that recursively self-improving systems pose serious risks. If artificial intelligence becomes smarter than humans and begins interacting with other AIs to accelerate progress, the balance of power between humans and machines could shift unpredictably. Such technologies will be harder to control and could be turned to harmful ends.

Kaplan, who studied at Stanford and Harvard, stresses the importance of making decisions about the next steps in AI development in the window from 2027 to 2030. He also predicts that within the next two to three years AI will be able to perform most office work, and that his six-year-old son will likely never surpass AI at tasks such as writing essays or taking math exams.

According to Kaplan, humanity must be prepared for the consequences of rapid technological advancement and the potential loss of control over it.

Kaplan asserts that, for now, the industry as a whole is managing the "alignment" of AI with human ethical norms. Anthropic aims to develop "constitutional AI," in which the system's behavior is governed by a set of principles drawn from international legal and ethical documents. Nevertheless, he doubts that control can be maintained once AI reaches human-level intelligence.
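The constitutional approach described here can be sketched, very loosely, as a critique-and-revise loop: a draft response is checked against a written set of principles and rewritten when it violates one. The sketch below is purely illustrative; the principle names, the keyword-based critic, and the `revise` step are stand-ins for what, in a real system, would be model-generated critiques and revisions, not Anthropic's actual implementation.

```python
# Toy sketch of a constitution-guided critique-and-revise loop.
# All principles and the keyword "critic" are illustrative assumptions.

PRINCIPLES = {
    "avoid_harm": ["how to build a weapon"],
    "avoid_deception": ["pretend to be a human"],
}

def critique(draft: str) -> list[str]:
    """Return the names of principles the draft appears to violate."""
    lowered = draft.lower()
    return [
        name
        for name, banned_phrases in PRINCIPLES.items()
        if any(phrase in lowered for phrase in banned_phrases)
    ]

def revise(draft: str, violations: list[str]) -> str:
    """Replace a violating draft with a refusal citing the principles."""
    return f"I can't help with that (violates: {', '.join(violations)})."

def constitutional_step(draft: str) -> str:
    """Pass a compliant draft through; revise a violating one."""
    violations = critique(draft)
    return revise(draft, violations) if violations else draft

print(constitutional_step("Here is how to build a weapon at home."))
print(constitutional_step("The capital of France is Paris."))
```

The point of the structure is that the governing principles live in data, as an inspectable "constitution," rather than being buried in the model's weights.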

He identifies two main risks: the first is humans losing the ability to understand the system's goals, and the second is the concentration of scientific and technological power in the hands of a small group, including unscrupulous actors.

“It is conceivable that someone will decide that this AI should serve them. Fighting such attempts at power grabs is as important as correcting errors in the models themselves,” adds Kaplan.

Among the positive aspects, Kaplan highlights the acceleration of biomedical research, improvements in healthcare and cybersecurity, increased productivity, and the creation of opportunities for prosperity.

He emphasizes the need to involve society in discussions about AI development, including the participation of international organizations in monitoring the pace and direction of autonomous self-improvement to minimize risks and prevent abuses. Policymakers must also be informed about current trends and prospects.

Anthropic is working on creating AI products, including the chatbot Claude, which allows for the development of autonomous AI agents. The company positions itself as an advocate for the safe and regulated development of AI.
