These questions were discussed at the Central Asian Cyber Legal Forum.
Opinions of Experts and Elected Officials
In Kyrgyzstan, developers see artificial intelligence as an opportunity to enter the international IT market, said Azis Abakirov, director of the High Technology Park. "We should not suppress technological achievements with excessive restrictions. We still have a relatively free internet," he noted. According to him, "it is certainly necessary to consider the ethical aspects and potential threats associated with AI. However, developers see it as an opportunity for global competition. This year, the US president allocated $500 billion for AI development, and soon DeepSeek emerged in China. Here, only a small group of people is trying to compete with them."
Thus, the IT community perceives AI as an opportunity for economic growth rather than a threat requiring immediate restrictions. This position is also shared by MP Dastan Bekeshev, who emphasized that the regulation of artificial intelligence in the country should be very cautious.
"When it comes to regulation, I fear that in Central Asia, as well as in the post-Soviet space, this could lead to bans," Bekeshev noted.
He added that Kyrgyzstan already has its own developments in the field of AI, such as Akyl AI and AiRun.
But all of this can be easily destroyed simply by imposing excessive restrictions. "I believe that Central Asia should pay attention to the experience of Europe and other countries," he added.
The MP also noted that Europe may introduce strict regulatory measures that will affect companies like OpenAI, Google, and Meta.
"Can we do this? No, it would only make them laugh. Why? Because our market is too small. We are creating musical works using AI, and discussions about bans are already starting. The same applies to video generation to avoid deepfakes. I fear that such restrictions could harm us and prevent us from revealing the talents we have. But for now, we can say that we have developments in the field of AI," Bekeshev added.
This essentially reflects the fear of repeating the region's familiar practice—applying restrictions instead of a comprehensive approach to development that considers existing risks.
"When it comes to the need for regulation, different jurisdictions talk about fraudsters who may use new technologies. But it is important to understand that fraudsters can use anything," noted Denis Sadovnikov, head of the data protection department at Yango Group, adding that "AI regulation should not be perceived as a ban."
Many countries serve as examples for the region. Currently, there are several conceptual approaches to AI regulation: the European model, based on human rights; the Anglo-Saxon model, which assumes self-regulation; and the Singaporean model, which considers both human rights and self-regulation, explained MP Ekaterina Smyshlyaeva from Kazakhstan.
"Kazakhstan has chosen a model that combines the best elements of different approaches. Illegal actions must remain in focus, and some prohibitive regulatory measures should only be related to such actions," she reported.
Smyshlyaeva also noted that many algorithmic AI solutions may not be illegal but still pose a threat to society. The MP emphasized that Kazakhstan has developed and adopted a law on AI that includes measures to support technology development and defines the regulator's position regarding the choice of regulatory model and the use of copyright.
Kazakhstani developers have equal access to big data and computing resources, which is regulated by legislation. The regulator's position considers ethical aspects and attention to high-risk technologies. "To be more precise, we are talking about systems with a high degree of autonomy. The approach is differentiated; other market participants can develop in a self-regulatory format," she added.
Regarding copyright compliance when using AI, Kazakhstan has allowed the training of large models on open copyrighted works unless the author has previously established a machine-readable prohibition. "Thus, we applied a reverse principle: by default, use is permitted, and protection only applies if the author clearly expressed disagreement with the use of their content," Smyshlyaeva explained. This approach reduces risks and does not limit market development at the initial stage.
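The "reverse principle" Smyshlyaeva describes can be sketched in a few lines: training on an open work is permitted by default, and only an explicit machine-readable reservation by the author blocks it. The metadata format below (a `tdm_reservation` flag) is hypothetical, loosely inspired by text-and-data-mining opt-out conventions; the article does not specify what machine-readable form Kazakhstan's law prescribes.

```python
# Minimal sketch of an opt-out check under the default-permit rule.
# "tdm_reservation" is an assumed metadata field, not a format defined
# by Kazakhstan's AI law.

def may_train_on(work_metadata: dict) -> bool:
    """Permit use unless the author set a machine-readable prohibition."""
    return not work_metadata.get("tdm_reservation", False)

corpus = [
    {"title": "Open essay", "tdm_reservation": False},
    {"title": "Reserved novel", "tdm_reservation": True},
    {"title": "No metadata at all"},  # absence of a flag means permitted
]

allowed = [w["title"] for w in corpus if may_train_on(w)]
print(allowed)  # → ['Open essay', 'No metadata at all']
```

The design choice mirrors the quoted rationale: the burden of action falls on authors who object, so the market is not constrained by default at the initial stage.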
AI within the Digital Code
The norms for regulating AI have already been included in the Digital Code of Kyrgyzstan, reported MP Dastan Bekeshev. The development of technologies is considered in the context of the principle of "technological neutrality."
"This means that we regulate not the algorithm itself but the area of its application. For example, if AI is used in fintech to assess creditworthiness, the rules of the financial regulator apply; if in medicine for diagnostics, the rules of the Ministry of Health. We are not creating some 'digital monster' that will monitor every line of code," the MP explained.
Tattu Mambetalieva, lead partner of the consortium for developing digital legislation, clarified that the main task of the Digital Code is to abandon administrative regulation. Instead, it proposes interaction with the state at four levels: the first, telecom sectors, as they are the foundation of technologies; the second, the system level; the third, services; and the fourth, data, the core element that unites all levels.
The Digital Code classifies AI systems used to ensure digital well-being as high-risk. Owners and users of AI systems, regardless of risk level, are required to take reasonable measures to mitigate potential harm and are liable for any harm caused, in accordance with Kyrgyzstan's legislation.
Specific rules and standards for the development and use of AI are effectively delegated to the professional community. They may be established by associations of AI system owners and may cover professional ethics, risk assessment, data management, and the quality of development and operation of systems, provided they do not contradict the provisions of the code.
Particular attention is paid to transparency. If AI is used for interaction with consumers, users must be notified, except in cases where the presence of the algorithm is obvious. Information about the use of AI in digital legal relations is considered publicly available and must be presented in an accessible form on the websites of companies and the industry regulator of the national ecosystem.
Additional requirements are introduced for sensitive technologies. Users of systems engaged in emotion recognition and classification of individuals based on biometric characteristics must separately notify about the use of such solutions. A similar rule applies to deepfakes: the use of AI to create or modify content must be accompanied by an indication of its artificial origin.
At this point, Kyrgyzstan is aiming for cautious regulation of artificial intelligence without strict bans, focusing on self-regulation, liability for harm caused, and transparency in the use of technologies. This approach should preserve space for the growth of local AI projects without limiting the market at the outset, while also delineating "red lines" for high-risk and sensitive solutions. The main question remains open: will it be possible to maintain this balance in reality?