LL.M., Corporate Social Responsibility, Expert in International Business Law (IBL), Central European University (CEU); Independent Legal Expert in IBL, Dushanbe, Tajikistan
CORPORATE LIABILITY AND REGULATORY COMPLIANCE FOR AI IN CENTRAL ASIA
ABSTRACT
Artificial intelligence (AI) is increasingly integrated into business operations across Central Asia, offering efficiency gains, innovative opportunities, and enhanced competitiveness. In 2025, the UN General Assembly adopted a resolution initiated by Tajikistan, “The Role of Artificial Intelligence in Creating New Opportunities for Sustainable Development in Central Asia”, which also provides for a regional AI Center to be established in Dushanbe [1, p.1]. While these developments reflect political commitment and regional enthusiasm, corporate liability and regulatory compliance frameworks remain underdeveloped. Businesses face legal uncertainty regarding responsibility for AI-driven decisions, data protection, and cross-border compliance. “International actors hold varying positions on AI governance, highlighting the geopolitical complexity of the regulatory landscape” [2, p.1]. This article analyzes existing business law frameworks in the Central Asian countries, identifies gaps in liability and compliance, and explores realistic corporate scenarios in which AI adoption could create legal challenges. Recommendations are offered for policymakers, corporate actors, and legal practitioners on implementing harmonized, responsible, and legally secure AI practices, contributing to the region’s sustainable and safe adoption of AI technologies.
АННОТАЦИЯ
Искусственный интеллект (ИИ) все чаще внедряется в бизнес-операции по всей Центральной Азии, обеспечивая повышение эффективности, инновационные возможности и усиление конкурентоспособности. В 2025 году Генеральная Ассамблея ООН приняла резолюцию «Роль искусственного интеллекта в создании новых возможностей для устойчивого развития в Центральной Азии», которая также предусматривает создание регионального центра ИИ в Душанбе [1, с. 1]. Хотя эти события отражают политическую приверженность и региональный энтузиазм, рамки корпоративной ответственности и соблюдения нормативных требований остаются недостаточно развитыми. Предприятия сталкиваются с правовой неопределенностью в вопросах ответственности за решения, принимаемые с помощью ИИ, защиты данных и соблюдения трансграничных требований. «Международные участники занимают разные позиции по вопросу управления ИИ, что подчеркивает геополитическую сложность нормативно-правовой базы» [2, с. 1]. В данной статье анализируются существующие рамки законодательства в сфере бизнеса в странах Центральной Азии, выявляются пробелы в области ответственности и соблюдения нормативных требований, а также рассматриваются корпоративные сценарии, в которых внедрение ИИ может создать правовые проблемы. Также предоставляются рекомендации для политиков, корпоративных субъектов и юристов по внедрению согласованных, ответственных и юридически безопасных практик ИИ, способствующих устойчивому и безопасному внедрению технологий ИИ в регионе.
Keywords: Artificial Intelligence, business law, regulatory compliance, liability, Central Asia, corporate governance.
Ключевые слова: искусственный интеллект, бизнес-право, соблюдение нормативных требований, ответственность, Центральная Азия, корпоративное управление.
Introduction
Artificial intelligence (AI) is transforming the corporate landscape globally, and Central Asia is no exception. Businesses in the region are adopting AI for tasks ranging from financial analysis and risk assessment to human resources management and compliance automation. Recognizing both the potential and the risks of AI, the UN General Assembly in 2025 adopted a resolution titled “The Role of Artificial Intelligence in Creating New Opportunities for Sustainable Development in Central Asia” [1, p.1], initiated by Tajikistan. The resolution calls for the harmonization of national AI initiatives and for coordinated, responsible governance, including the “establishment of a regional AI center in Dushanbe” [1, p.1].
While these initiatives represent an important step toward regional cooperation and the promotion of ethical AI use, corporate legal frameworks have yet to fully address liability and regulatory compliance issues arising from AI adoption. This article argues that the growth of AI in Central Asian corporate practice creates a pressing need for legal clarity regarding corporate responsibility, oversight, and risk management. By examining existing regulations, regional coordination efforts, and potential corporate scenarios, the article seeks to provide actionable insights for businesses, policymakers, and legal practitioners.
Thesis: With Central Asia pursuing coordinated AI governance through recent UN-backed initiatives, there is an emerging need to clarify corporate liability and regulatory compliance to ensure responsible and legally secure AI adoption in the region.
Materials and Methods
This research is based on a qualitative legal analysis combining three methodological approaches:
1. Comparative legal method - comparing the regulatory frameworks of Kazakhstan, Uzbekistan, Kyrgyzstan, Tajikistan, and Turkmenistan to identify gaps in AI liability and compliance.
2. Document analysis - examining UN resolutions, national digital strategies, and OECD AI principles and press releases as primary sources.
3. Case-based analytical method - evaluating a hypothetical case of AI use to demonstrate practical liability risks.
This combination of methods allows the study to systematically identify legal challenges arising from corporate AI adoption and to propose grounded policy recommendations.
Legal and Regulatory Landscape in Central Asia
Central Asia’s legal systems have evolved through several stages, beginning with Soviet-era civil law foundations, followed by the adoption of national constitutions, and more recently the launch of digital transformation initiatives. Despite this evolution, most jurisdictions continue to rely on traditional corporate and civil liability doctrines that are not designed to address autonomous algorithmic behavior or emerging forms of algorithmic bias. As noted, “countries such as Kazakhstan, Uzbekistan, Kyrgyzstan, Tajikistan, and Turkmenistan are at varying stages of building digital governance structures” [3, p.1].
Although each state has adopted its own digitalization strategy, none offer comprehensive regulation on AI-specific issues such as data protection in automated systems, algorithmic transparency, explainability obligations, or corporate responsibility for AI-driven decisions. Existing business, contract, and corporate governance laws provide only partial coverage, leaving critical gaps in the regulation of AI liability and compliance.
For example:
“Kazakhstan has adopted a Digital Kazakhstan program and a national AI roadmap” [4, p.1], yet lacks rules defining liability for AI-generated corporate decisions.
“Uzbekistan focuses on digital economy reform through its Digital Uzbekistan 2030 strategy” [5, p.1], but does not articulate standards for corporate compliance in autonomous decision-making.
“Tajikistan, as initiator of the UN resolution, is promoting coordination efforts, though its domestic legal framework” [6, p.1] for AI remains nascent.
Throughout the region, legal frameworks continue to treat AI as a conventional technical instrument rather than a system capable of producing independent or semi-autonomous effects. This results in significant legal ambiguity when AI systems cause harm, breach contractual obligations, or violate regulatory norms.
Corporate AI Adoption and Legal Risks
AI technologies are increasingly used across corporate functions such as finance, human resources, and operational management. While these tools enhance efficiency, they also introduce new categories of legal risk, including algorithmic discrimination, privacy violations arising from data-intensive models, and uncertainty over contractual liability for automated actions.
Determining responsibility for AI-driven harm presents a central challenge. Key legal questions arise: Who is accountable when an AI system acts autonomously—the corporation deploying the technology, the software developer, or the data provider? Can existing doctrines such as negligence, force majeure, or third-party liability adequately govern algorithmic behavior? In most Central Asian jurisdictions, traditional corporate and civil liability rules still apply, but they are often insufficient to capture the complexities of AI-assisted decision-making.
Recommendations
For policymakers:
- Introduce a Model Law on AI liability applicable within national civil codes or corporate laws across Central Asia.
- Define legal obligations for: 1) human oversight; 2) auditability; 3) transparency of algorithmic systems.
- Establish a regional AI Risk Classification System aligned with OECD principles.
- Engage the Dushanbe AI Center in creating model regulations and corporate compliance programs.
For corporations:
- Create internal AI Governance Frameworks (risk committees to oversee algorithmic integrity and compliance).
- Establish internal policies addressing bias, transparency, and data ethics.
- Include explicit AI risk and liability clauses in contracts with developers and vendors.
- Implement periodic algorithmic impact assessments to detect bias or discrimination.
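To make the last recommendation concrete, the quantitative screen at the core of many algorithmic impact assessments can be sketched in a few lines of Python. The sketch below applies a disparate-impact check to hypothetical outputs of an AI hiring tool; the data, the group labels, and the 0.8 threshold (borrowed from the U.S. "four-fifths rule" purely as an illustrative benchmark) are assumptions, not requirements of any Central Asian statute.

```python
# Illustrative sketch of a disparate-impact screen, one common first
# step in an algorithmic impact assessment. All data is hypothetical.

def selection_rate(decisions):
    """Fraction of positive outcomes (e.g. shortlisted = 1) in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    Values below 0.8 are commonly treated as a red flag for bias."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high if high > 0 else 0.0

# Hypothetical decisions of an AI screening tool for two applicant groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # selection rate: 0.7
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # selection rate: 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged for legal and compliance review.")
```

Such a check is deliberately simple: it does not establish discrimination in a legal sense, but it gives compliance officers a documented, repeatable trigger for escalating an AI system to human legal review.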
For regulators:
- Develop sector-specific compliance guidelines (finance, HR, program management) for AI use.
- Encourage voluntary Responsible AI Certification for companies.
- Use the Regional AI Center in Dushanbe as a training hub for inspectors and compliance officers.
For legal practitioners:
- Advise clients on cross-border compliance and algorithmic liability.
- Conduct due diligence on AI vendors and datasets.
- Draft contract templates assigning liability for algorithmic harm.
- Participate in public consultations to shape emerging AI regulation.
Conclusion
The analysis demonstrates that while Central Asia is actively promoting AI through regional initiatives and the 2025 UN resolution, its corporate liability and regulatory compliance frameworks remain underdeveloped. AI is already integrated into corporate management systems, HR, and operational decision-making, yet corporations lack clear legal guidance on responsibility, oversight, and risk management.
To ensure safe and legally secure AI adoption, Central Asian states must harmonize liability rules, introduce transparent compliance standards, and strengthen regulatory capacity. The creation of the regional AI Center in Dushanbe provides a unique opportunity to develop unified approaches and support both public and private stakeholders.
By implementing the recommendations outlined in this study, the region can position itself as a model for responsible, ethical, and future-ready AI governance.
References:
1. United Nations. Resolution “The Role of Artificial Intelligence in Creating New Opportunities for Sustainable Development in Central Asia” // UN General Assembly. – New York, 2025. – 12 p.
2. UNESCO. Recommendation on the Ethics of Artificial Intelligence // United Nations Educational, Scientific and Cultural Organization. – Paris: UNESCO Publishing, 2021. – 41 p.
3. Jamal Ali. Central Asia’s Transition to a Digital Economy: Press Release // Caspian-Alpine Society. – 2024. – 8 p.
4. Zhaslan Madiyev. Kazakhstan’s Digital Evolution: From EGov to AI Governance: Press Release // UN Department of Economic and Social Affairs, Public Institutions Blog on SDGs. – 2025. – 3 p.
5. Ministry of Development of Information Technologies and Communications of Uzbekistan; Organization for Economic Cooperation and Development. Digital Uzbekistan 2030: Press Release // The OECD AI Policy Navigator. – 2025. – 10 p.
6. Avesta Information Agency. UN General Assembly Adopts Historic Resolution on Artificial Intelligence Initiated by Tajikistan: Press Release // TAJ. – 2025.