South Korea Launches 'World's First' AI Laws Amid Controversy Over Regulation, Industry Growth and Citizen Protection.
The South Korean government has introduced an ambitious set of laws governing Artificial Intelligence (AI), dubbed the "world's first" comprehensive regulations on AI. The new legislation aims to promote industry growth while ensuring citizen protection against potential risks posed by rapidly advancing technologies. However, critics argue that the law does not go far enough in safeguarding individuals and may create uncertainty for companies.
Under the Act, which took effect last Thursday, companies providing AI services are required to add digital watermarks to clearly artificial outputs such as cartoons or artwork, and visible labels for realistic deepfakes. High-impact AI systems used for medical diagnosis, hiring, and loan approvals must also undergo risk assessments and document their decision-making processes.
The law sets a high threshold for extremely powerful AI models, with government officials acknowledging that no current models worldwide meet the standard. Companies violating the rules face fines of up to 30 million won (£15,000), but the government has promised a grace period of at least a year before penalties are imposed.
Industry players have expressed frustration with the new legislation, citing concerns over competitiveness and regulatory burden. Local tech startups claim that the law goes too far in limiting their innovation potential, while civil society groups argue that it does not provide sufficient protection for people harmed by AI systems.
The push for regulation has unfolded amidst growing global unease over artificially created media and automated decision-making. South Korea accounts for 53% of all global deepfake pornography victims, according to a recent report, highlighting the need for more effective measures to combat this issue.
Critics note that the law was drafted before the latest crisis surrounding AI-generated sexual imagery, and earlier provisions have been accused of prioritising industry interests over citizen protection. The new legislation has sparked debate about the balance between promoting innovation and ensuring public safety in the rapidly evolving field of AI.
Experts say South Korea's approach to regulation is distinct from other jurisdictions, opting for a more flexible framework that relies on trust-based promotion and regulation rather than strict risk assessment models.