Kyrgyzstan plans to regulate artificial intelligence. What is behind this?

Yana Orekhova, Economy
With the development of artificial intelligence, numerous questions arise regarding its regulation. How relevant is this for Kyrgyzstan? What are the opinions of experts and lawmakers? What has already been done in this area?

These and other aspects were discussed at the Central Asian Cyber Legal Forum.

From the perspective of experts and lawmakers

Developers in Kyrgyzstan see artificial intelligence as an opportunity to enter the international IT market, noted Azis Abakirov, director of the High Technology Park.

We should not destroy technology with excessive regulation. There is still a certain freedom on the internet here.

“Of course, it is necessary to consider the ethical and other risks associated with AI. However, developers see it as a way to enter the global market. This year, the U.S. president allocated $500 billion for AI development, and in China, DeepSeek has emerged. Meanwhile, we have only a few people trying to compete with them,” he added.

Thus, representatives of the IT sector view AI as an economic opportunity rather than a threat requiring immediate restrictions. Similar views are held by parliamentarians. Jogorku Kenesh deputy Dastan Bekeshev emphasized that AI regulation in Kyrgyzstan should be approached with caution.

“When it comes to regulation, I fear certain consequences, especially in Central Asia and the post-Soviet space. Generally, regulation implies various prohibitions,” Bekeshev noted.

He also added that Kyrgyzstan is already developing its own AI projects, such as Akyl AI and AiRun.

But this can be easily destroyed. It is enough to simply introduce strict rules.

“I believe that in Central Asia, we need to consider the experience of Europe and other countries,” he added.

Bekeshev pointed out that Europe can impose strict regulatory measures on AI, affecting major players like OpenAI, Google, Meta, and others.

“Can we do this? No, it would only provoke ridicule. Why? Because our market is too small. And although we are creating musical works using AI, discussions about bans have already begun. The same is happening with video generation, in order to avoid deepfakes. I fear that such bans could harm us and prevent our talents from emerging. But for now, we can say that we have developments in the field of AI,” Bekeshev noted.

Essentially, this reflects a fear of repeating the region's familiar practice: imposing restrictions instead of taking a comprehensive approach to development that accounts for risks.

“When there is talk of the need for regulation, in different jurisdictions, the possibility of technology being used by fraudsters is mentioned. However, it is important to understand that fraudsters can use any technology,” emphasized Denis Sadovnikov, head of the data protection department at Yango Group, adding that “AI regulation should not be perceived as a ban.”

As an example for the region, practices from other countries are often cited. Currently, there are several conceptual approaches to AI regulation worldwide: the European approach, based on human rights; the Anglo-Saxon approach, focused on self-regulation; and the Singaporean approach, which combines both, explained Kazakhstan Majilis deputy Ekaterina Smyshlyaeva.

“Kazakhstan has chosen a model that combines the best aspects of these approaches. Of course, illegal actions should be the focus, and some prohibitive regulatory measures should be linked exclusively to such actions,” she clarified.

At the same time, Smyshlyaeva added that many algorithmic AI solutions may not be illegal but can still cause harm to society. The deputy noted that Kazakhstan has developed and adopted an AI law that includes measures to support technology development and defines the regulator's role, the choice of regulatory model, and the handling of copyright.

In neighboring Kazakhstan, developers are guaranteed equal access to big data and computing resources, regulated by legislative acts. The regulator's position also takes into account ethical aspects and pays particular attention to high-risk technologies. “To be more precise: systems with a high degree of autonomy. The approach is differentiated; other market participants can develop in a self-regulation format,” she added.

Regarding copyright compliance when using AI, Kazakhstan has allowed training large models on open copyrighted works unless the author has previously established a machine-readable prohibition. “That is, we have chosen the reverse principle: by default, use is permitted, and protection only applies if the author has explicitly expressed disagreement with the use of their content,” Smyshlyaeva explained. This approach helps reduce risks and does not limit market development at the initial stage.
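In practice, such machine-readable prohibitions are often expressed through robots.txt rules addressed to AI training crawlers (OpenAI's GPTBot is one real example of such a crawler). The sketch below is purely illustrative and is not drawn from the Kazakh law itself: it checks, under the "permitted unless the author objects" default described above, whether a page's site has published an opt-out for a given crawler name.

```python
# Illustrative sketch only: checks a site's robots.txt for a machine-readable
# opt-out addressed to an AI training crawler. The crawler name "GPTBot" is a
# real example of such a user agent; the page URL below is a placeholder.
from urllib import robotparser
from urllib.parse import urlsplit, urlunsplit


def training_allowed(page_url: str, crawler_name: str = "GPTBot") -> bool:
    """Return True if robots.txt does not forbid `crawler_name` from fetching the page."""
    parts = urlsplit(page_url)
    robots_url = urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetch and parse the site's robots.txt
    except OSError:
        # No reachable robots.txt: under an opt-out ("permitted by default")
        # regime, the absence of a stated prohibition means use is allowed.
        return True
    return parser.can_fetch(crawler_name, page_url)


if __name__ == "__main__":
    print(training_allowed("https://example.com/article.html"))
```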

AI in the Digital Code

Kyrgyzstan's Digital Code already contains provisions on AI regulation, said deputy Dastan Bekeshev. The development of technologies in this context is viewed from the standpoint of “technological neutrality.”

“This means that we regulate not the algorithm itself but the sphere of its application. For example, if AI is used in fintech for creditworthiness assessment, then the rules of the financial regulator apply. If in medicine for diagnostics—the rules of the Ministry of Health. We did not create some ‘digital monster’ that will control every line of code,” the deputy explained.

Tattu Mambetalieva, lead partner of the consortium developing the digital legislation, clarified that the main task of the Digital Code is to move away from administrative regulation. Instead, interaction with the state is proposed at four levels: the first is the telecom level, since it is the foundation of the technologies; the second is the systems level; the third, the services level; and the fourth, the data level, the main element that unites all the others.

The Digital Code classifies AI systems used to ensure digital well-being as high-risk. Owners and users of AI, regardless of the level of danger of such systems, are required to take reasonable measures to mitigate potential harm and are responsible for it under Kyrgyzstan's legislation.

At the same time, specific rules and standards for the development and application of AI are effectively transferred to the professional community. They can be established by associations of AI system owners. Such regulations should cover issues of professional ethics, risk assessment, data management, quality of development and operation of systems, and must not contradict the provisions of the code.

Particular attention is paid to transparency. If AI is used to interact with consumers, users must notify them, except in cases where the presence of the algorithm is obvious. Information about the use of AI in digital legal relations is considered publicly available and should be presented in an understandable form on the websites of companies and the industry regulator of the national ecosystem.

Additional requirements are introduced for sensitive technologies. Users of emotion recognition systems and classification of people by biometric characteristics are required to separately inform about the use of such solutions. Similar rules apply to deepfakes: the use of AI to create or modify content must be accompanied by disclosure of its artificial origin.

At this stage, Kyrgyzstan focuses on cautious regulation of artificial intelligence without strict prohibitions, leaning towards self-regulation, accountability for harm caused, and transparency in the use of technologies. This approach should preserve space for the development of local AI projects without limiting the market at the start, while also delineating “red lines” for high-risk and sensitive solutions. The main question remains: will it be possible to achieve this balance in practice?

Photo on the main page: Shutterstock.
