
We are on the brink of a new era that carries both incredible opportunities and serious dangers. So says Askar Akayev, a professor at Moscow State University and a foreign member of the Russian Academy of Sciences, in an interview with "Rossiyskaya Gazeta."
In your joint research with Professor Ilya Ilyin and Andrey Korotaev, published in the journal "Vestnik RAN," you make a troubling prediction. You claim that humanity may face an era of singularity in the coming years. Why does this word evoke associations with "black holes," and what does it threaten us with?
Askar Akayev: Firstly, it is worth noting that many astrophysicists have begun to revise their views on "black holes," but that is a separate topic. We are talking about technological singularity, which, according to our estimates, may occur between 2027 and 2029. During this time, artificial intelligence will reach a level comparable to human intelligence and will be able to perform tasks that are currently only accessible to the most qualified specialists.
What is striking is not only the emergence of such AI itself, but that it arrives at the very moment when several global processes converge, as if it had been predetermined in advance.
This sounds almost mystical. Can you explain what you mean by technological singularity?
Askar Akayev: It is a stage when the development of technology will lead to such a degree of uncertainty that we will not be able to predict how we will emerge from this state. According to our forecasts, this "strange" period will begin in 2027-2029 and will last about 15-20 years.
And it is precisely artificial intelligence, as I understand it, that will lead us into this uncertainty?
Askar Akayev: Yes, but it is a more complex and interesting process. The emergence of human-level AI coincides with several global evolutionary changes.
Humanity must agree in advance on how to embed a certain "genetic code" of friendliness and cooperation with humans into neural networks.
Let's start with demographics. At the end of the 19th and the beginning of the 20th century, the population began to grow sharply, and the growth continued for decades. The predictions of the British scholar Thomas Malthus, who argued that population would grow without bound, were recalled, and mathematicians backed them with formulas: the Earth's population was projected to reach a critical mark in 2026, a forecast later shifted to 2027.
However, in the early 1960s, without any directives from the authorities, birth rates began to fall sharply. It turned out that the population would keep growing, but at a slowing pace.
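The "critical mark" mentioned above comes from hyperbolic population-growth models of the Malthusian type. A minimal sketch, assuming the classic hyperbolic law N(t) = C / (T0 - t) with illustrative constants (the values below are my choice for demonstration, close in spirit to mid-20th-century demographic fits), shows both the projected blow-up around 2027 and how sharply post-1960 reality diverges from it:

```python
# Hyperbolic ("doomsday") population growth: N(t) = C / (T0 - t).
# The constants here are illustrative, not a fitted result.
C = 1.8e11   # person-years
T0 = 2027.0  # year at which the law formally diverges

def hyperbolic_population(year: float) -> float:
    """Population predicted by the hyperbolic law (blows up as year -> T0)."""
    return C / (T0 - year)

for year in (1960, 2000, 2020, 2026):
    print(year, round(hyperbolic_population(year) / 1e9, 2), "billion")
```

For 1960 and 2000 this gives roughly 2.7 and 6.7 billion, close to the real figures, while for 2020 the law already predicts over 25 billion, far above the actual population of about 7.8 billion. That growing gap is exactly the birth-rate slowdown the interview describes.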
What consequences will this slowdown lead to?
Askar Akayev: According to the forecasts of Sergey Petrovich Kapitsa, by 2100 the number of people on Earth will reach 11.4 billion. However, our calculations, carried out jointly with Academician Viktor Sadovnichy, show that by mid-century this number will not exceed 9 billion, and by the end of the 21st century it will fall to 7.9 billion, largely due to the integration of intelligent systems into everyday life.
But the slowdown is not limited to demographics. The Russian astrophysicist Alexander Panov and the American scientist Raymond Kurzweil have shown that a slowdown is also observed in the macroevolution of all living things. Moreover, it affects the evolution of the Universe as a whole, measured as the growth of planetary complexity.
Professor Andrey Korotaev has mathematically proven that these three processes developed with acceleration but then began to slow down, and all three graphs coincide at one point, which happens to fall on the years 2027-2029.
How is it possible that three independent global processes coincide in time? The probability of this seems extremely low.
Askar Akayev: Nevertheless, mathematical models confirm this conclusion. But further, it becomes even more surprising. At the same moment when humanity reaches its evolutionary peak, AI emerges, which grows rapidly. This could make it a driver of evolution, accelerating it many times over.
Thus, AI could become the savior of our civilization when evolution is slowing down. These coincidences look like the plot of a science fiction story. One might ponder higher powers that could have predetermined such events.
Askar Akayev: To be honest, I do not have an answer to this question. I hope that over time science will find it.
Let's return to the singularity that will occur in 2027-2029. Why can't we predict what laws will be in effect during this period? Why is there uncertainty?
Askar Akayev: This is related to the fact that we do not know how the interaction between humans and AI will occur. Two scenarios are possible: we may emerge from the singularity together with AI, or it may do so instead of us, which could have negative consequences for humanity.
People increasingly cite Elon Musk and Sam Altman, who predict that by 2030 a powerful superintelligence will emerge, surpassing human intelligence millions of times over, after which humanity will become unnecessary.
Askar Akayev: The situation resembles the concepts of "Gaia" and "Medea." The first holds that humanity was created to assist the Almighty in managing the world, while "Medea" stands for self-destruction and the transfer of power to AI. It is unclear which of these concepts will prevail, and their supporters are roughly equal in number.
It all depends on how AI is implemented. If it operates in symbiosis with humans and under their control, it will lead to the "Gaia" scenario, which promises significant breakthroughs in all areas. However, if AI gets out of control and becomes a competitor to humans, the "Medea" scenario will occur, and human degradation will become almost inevitable. The coming 15-20 years will be a time of both great opportunities and risks. This is the shortest and most responsible period in human history.
Paradoxical as it is, the history of humanity is largely a history of wars, which are constantly being fought somewhere on the planet. Today there is talk of a possible Third World War, after which people might be reduced to fighting with primitive weapons. Perhaps it is superintelligence that will be able to stop these conflicts, since its own survival matters more to it than perishing together with irrational humans.
Askar Akayev: By this logic, it will be easier for artificial intelligence to eliminate those who create problems for its existence, namely humans. And then we will see the "Medea" scenario.
So the task of science is to prevent the superintelligence scenario from being realized. But is this possible? All countries have already declared: "AI is our future!" A race has begun whose winners will determine the world order. In this chaos it will be hard to tell "Gaia" from "Medea."
Askar Akayev: The main threat of modern AI systems is large language models based on deep machine learning methods, which are "black boxes." When AI is controlled by humans, that is one thing, but when decisions are made by a "box" with unclear goals, that is quite another.
So what is the way out? It exists. The main principle in building AI should be: the less "black box" in the model, the better. Systems with explicit cause-and-effect relationships are being developed today. The mathematical foundation for such models was laid back in the 1950s-1960s by our outstanding mathematicians Andrey Kolmogorov and Vladimir Arnold. Neural networks built on their results are now used to solve scientific and engineering problems where cause-and-effect relationships matter.
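The Kolmogorov-Arnold representation theorem states that any continuous multivariate function can be written as a superposition of univariate functions. Below is a minimal, hypothetical sketch of a Kolmogorov-Arnold-style layer (my own illustration, not the API of any particular KAN library): each network edge carries a learnable univariate function, here a cubic polynomial, instead of a single scalar weight, which makes the input-output relationships far easier to inspect than in a standard black-box layer.

```python
import numpy as np

rng = np.random.default_rng(0)

class KALayer:
    """One Kolmogorov-Arnold-style layer: a learnable univariate
    polynomial phi(x) = c0 + c1*x + c2*x^2 + c3*x^3 on every edge,
    with outputs formed by summing edge functions over the inputs."""

    def __init__(self, n_in: int, n_out: int, degree: int = 3):
        # One coefficient vector per (output, input) edge.
        self.coeffs = rng.normal(scale=0.1, size=(n_out, n_in, degree + 1))

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # x: (batch, n_in). Build powers x^0..x^degree for each input,
        # then contract over input index p and degree index k.
        powers = np.stack([x**k for k in range(self.coeffs.shape[-1])], axis=-1)
        return np.einsum("bpk,qpk->bq", powers, self.coeffs)

# Two stacked layers mirror the inner and outer functions of the
# Kolmogorov-Arnold representation f(x) = sum_q Phi_q(sum_p phi_qp(x_p)).
inner = KALayer(2, 5)
outer = KALayer(5, 1)
x = rng.normal(size=(4, 2))
y = outer(inner(x))
print(y.shape)  # (4, 1)
```

Because each edge is a low-degree polynomial, its individual contribution can be plotted and audited directly, which is the sense in which such models are "less black box" than a conventional dense layer.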
So, if we understand how AI thinks, and the "black box" becomes transparent, it will be under control. Why then do many fear "Medea"?
Askar Akayev: Unfortunately, control is only possible over human-level AI, while superintelligence will act independently of humans. Humanity must agree in advance and embed a "genetic code" of friendliness and cooperation with humans into neural networks to pass this "gene" on to the next generations of AI as the main law of their behavior.
Reference from "RG"
Singularity (from the Latin singularis, "unique, special") denotes something one-of-a-kind and unrepeatable. The term has different meanings in different fields. In philosophy, it is an event that changes our perception of the impossible.
Technological singularity implies that the development of science and technology will become uncontrollable and lead to radical changes in human civilization.
Cosmological singularity — the presumed state of the Universe at the moment of the Big Bang.
Gravitational singularity — a theoretical concept denoting a region where the fabric of spacetime breaks down and the known laws of physics cease to apply.