
South Korea has enacted laws and/or issued guidance on Artificial Intelligence (AI). Companies subject to the laws of South Korea should be familiar with all relevant AI-related laws, regulations, and guidance, including those listed below.
The purpose of the Law is to establish a framework for the sound development of AI and the creation of a foundation of trust. The Law includes requirements for AI business operators, addresses AI ethics, establishes a National AI Committee, and authorizes the Minister of Science and ICT to issue regulations and to implement a “basic plan” for AI.
The Act establishes operating conditions for outdoor mobile robots used in sectors such as delivery and surveillance. The purpose is to maintain safety standards while supporting the commercial application of AI technologies.
The guidance clarifies how publicly available data may be used to train AI models. It discusses the legal standards for collecting and using data for AI training and outlines minimum safety measures that may apply depending on the type of business or AI deployment.
A guide to help data controllers identify and mitigate privacy risks associated with the development and deployment of AI technologies.
The self-checklist issued by the Personal Information Protection Committee (PIPC) aims to raise awareness among those involved in AI development and operation and can serve as a guideline. Drawing on the personal information protection principles in the Personal Information Protection Act, the privacy-by-design principle, and ethical standards, the PIPC outlines six main principles of AI-related personal information protection: legitimacy, safety, transparency, participation, responsibility, and fairness. The self-checklist also includes a flowchart for legitimate personal information processing.
Guidelines to clarify the application of the Personal Information Protection Act (PIPA) to generative AI. These comprehensive guidelines are intended to: provide clarity on legal responsibilities under the PIPA; help organizations use personal data safely throughout the generative AI lifecycle; and foster trust and innovation by offering best practices and governance strategies.
This guideline, issued by the Financial Services Commission, provides a high-level framework to ensure AI systems in the financial sector are reliable, transparent, and trustworthy across their entire lifecycle—from planning and design to deployment and monitoring.
Issued by the Financial Security Institute under the Financial Services Commission, this guideline outlines security considerations across all development stages—from data collection to model testing—and includes a practical checklist for AI chatbot services.
A verification system issued by the Financial Services Commission for AI-driven credit scoring models, together with a security guideline for the use of AI in the financial sector. The verification system assesses whether credit bureaus have made a reasonable selection of algorithms and variables for their credit scoring models and whether those models are statistically significant. The security guideline identifies security issues to consider when developing an AI-based model and provides a security checklist for AI chatbots, including suggestions on data management and processing methods, model design techniques, and security verification methods for countering specific threats such as data contamination, personal information leaks, and attacks on AI models.
Guideline issued by the Ministry of Food and Drug Safety, designed to assist in evaluating the safety and effectiveness of medical devices using generative AI and to support their commercialization. The guideline provides case examples of what qualifies as generative AI medical devices and instructions on preparing approval applications and required submission materials.
Guiding Principles jointly released by the Ministry of Food and Drug Safety and Singapore’s Health Sciences Authority to facilitate the development and assessment of machine learning-enabled medical devices and ensure they meet rigorous standards for safety and effectiveness.