South Korea’s Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act) came into force on January 22, 2026, positioning the country alongside the European Union as one of the first jurisdictions with a comprehensive AI regulatory framework. The law sets high-level obligations around transparency, safety and trust in AI systems, while enabling detailed enforcement through future regulations.
The AI Basic Act applies to both AI development business operators (those who build or supply AI) and AI utilization business operators (those who use AI within products or services). AI is broadly defined as technology that electronically implements human intellectual abilities such as learning, reasoning, perception, decision-making and language understanding.
1. Transparency
Operators must clearly disclose that audio, images or videos are AI-generated whenever they are difficult to distinguish from human-created content. Providers of generative AI and high-impact AI must inform users in advance that AI is being used, and generative AI outputs must carry appropriate labels.
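As a rough illustration of how a provider might operationalize this labeling duty, the sketch below attaches a machine-readable "AI-generated" label to every generative output and adds an explicit user-facing disclosure when the content could pass for human-created work. The data model and function names are hypothetical, not taken from the Act or any official guidance.

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedOutput:
    media_type: str  # e.g. "audio", "image", "video", "text"
    payload: bytes
    labels: dict = field(default_factory=dict)

def label_ai_output(output: GeneratedOutput, realistic: bool) -> GeneratedOutput:
    """Attach a machine-readable AI-generation label; add an explicit
    disclosure when realistic audio/image/video could be mistaken for
    human-created content (a hypothetical compliance sketch)."""
    output.labels["ai_generated"] = True
    if realistic and output.media_type in {"audio", "image", "video"}:
        output.labels["disclosure"] = "This content was generated by AI."
    return output
```

In practice the label would likely live in embedded metadata or a visible watermark rather than a side dictionary; the point is that labeling should be applied systematically at the point of generation, not retrofitted.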
2. High-impact AI
High-impact AI includes systems used in critical areas such as healthcare, energy, transport, recruitment and biometric analysis. Operators must:
Assess and confirm whether their system qualifies as high-impact AI
Provide meaningful explanations of outcomes, decision criteria and training data
Establish user protection plans and human oversight mechanisms
Maintain documentation and conduct impact assessments on fundamental rights
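The first obligation above is a self-assessment: each operator must determine whether its own system falls into a listed critical sector. A minimal first-pass screen might look like the following; the sector list mirrors the examples named in this article, but the Act's actual criteria are set by statute and forthcoming regulations, so this is an assumption-laden sketch, not a legal test.

```python
# Hypothetical first-pass screen; the statutory high-impact criteria
# are broader and will be detailed in subordinate regulations.
HIGH_IMPACT_SECTORS = {
    "healthcare", "energy", "transport", "recruitment", "biometric_analysis",
}

def requires_high_impact_review(sector: str) -> bool:
    """Return True when the system operates in a listed critical sector,
    triggering the full set of obligations (explanations, human oversight,
    documentation, fundamental-rights impact assessment)."""
    return sector.lower().replace(" ", "_") in HIGH_IMPACT_SECTORS
```

A positive screen would be the start, not the end, of the analysis: the operator would then document the determination and proceed to the explanation, oversight and impact-assessment duties listed above.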
3. High-performance AI
AI systems trained with 10²⁶ FLOPs or more are classified as high-performance AI. These systems may be subject to additional lifecycle risk management, user protection measures and reporting obligations, with further guidance expected from the Ministry of Science and ICT (MSIT).
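To see what the 10²⁶-FLOPs threshold means in practice, the sketch below estimates training compute with the common 6 × parameters × tokens heuristic and compares it to the threshold. The heuristic is a widely used approximation from the scaling-laws literature, not a formula from the Act, and MSIT's forthcoming guidance may define the measurement differently.

```python
# Screening sketch only: the 6 * params * tokens approximation for
# training compute is a community heuristic, not the Act's definition.
THRESHOLD_FLOPS = 1e26

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * params * tokens

def is_high_performance(params: float, tokens: float) -> bool:
    """True when estimated training compute meets the statutory threshold."""
    return estimated_training_flops(params, tokens) >= THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens gives
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, well below the 1e26 threshold.
```

On this estimate, only training runs well beyond today's typical frontier-scale budgets would cross the line, which suggests the high-performance tier is aimed at a small number of very large systems.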
The Act also applies to foreign AI services that affect users or markets in South Korea. Companies meeting certain revenue or user thresholds must appoint a local representative responsible for regulatory communication and safety reporting.
MSIT can issue corrective orders, including service suspension for safety risks. Administrative fines of up to 30 million KRW may apply for violations, although a one-year grace period has been announced to allow companies to prepare.
The law establishes new bodies such as the National AI Committee, AI Policy Center, and AI Safety Research Institute, while also mandating government support for R&D, infrastructure, startups and SMEs.
Moving forward, companies operating in or targeting South Korea should review their AI practices to ensure compliance. As global AI regulation accelerates, aligning governance, transparency and risk management frameworks will be critical for sustainable innovation.