AI Trend

Editorial: No Need to Rush Korea’s AI Act

Dong-A Ilbo | Updated 2025.12.22
Photo: Stonefilm held the 2026 AI Media Summit on December 15, 2025 (Source: IT동아)
With the enforcement of the “Framework Act on the Development of Artificial Intelligence and the Establishment of Trust-based Foundation (AI Framework Act)” scheduled for January 22 next year, controversy is mounting. If implemented as planned, Korea will become the first country in the world to fully introduce AI regulations. However, criticism is rising that even the basic concepts and definitions remain unclear, despite the law being just a month away from coming into force. Moreover, the European Union (EU), which served as a benchmark when drafting the law, is postponing full implementation of its own AI legislation amid concerns that it could hinder the development of the AI industry.

The Ministry of Science and ICT has issued a legislative notice for the enforcement decree of the AI Framework Act, with the law set to take effect on January 22 next year. The law defines the “three fundamental principles of AI ethics” as human dignity, the public good of society, and the purposiveness of technology. It also classifies AI that can significantly affect or pose risks to human life, physical safety, or fundamental rights as “high-impact AI,” separate from lower-risk “general AI,” and imposes stronger obligations and liabilities on the former.

Despite the law’s imminent enforcement, most domestic AI companies say they do not know whether their AI systems fall under the category of “high-impact AI,” which is the main target of regulation, or, if they do, how they should prepare. In a survey of more than 100 domestic AI startups, 98% of companies responded that they were not prepared to comply with obligations such as “designation, registration, and verification of high-impact AI” and “watermarking requirements for AI-generated outputs.”

A further complication is that the global regulatory landscape has changed completely since the National Assembly passed the bill at the end of December last year. The EU, which had taken the lead in driving regulation, implemented only parts of its “AI Act” in February this year. However, last month the European Commission postponed the application date of its rules on “high-risk AI,” the concept on which Korea’s “high-impact AI” category was modeled, from August next year to the end of 2027. The move is intended to avoid hampering European companies competing with US AI big tech firms. US President Donald Trump has stated that excessive AI regulations being introduced at the state level are becoming an obstacle to innovation and that the federal government will establish a rational AI regulatory framework.

Fostering the AI industry and ensuring the safe development of AI are both goals that must not be abandoned. However, if regulation races ahead with a sole focus on safety, achieving the national objective of becoming one of the “top three AI powers” will become difficult. Excessive regulation is particularly damaging for domestic AI startups that lack sufficient talent and capital. The enforcement timeline of the AI Framework Act needs to be adjusted again in line with global trends.

AI-translated with ChatGPT. Provided as is; original Korean text prevails.