Breaking down the country’s bold move to regulate artificial intelligence—and why some in the industry are pushing back
This piece draws on recent reporting from Reuters and Korean media outlets about South Korea’s AI Basic Act, which went into effect on January 22, 2026.
South Korea just made history by rolling out what it’s calling the world’s first comprehensive set of laws to regulate artificial intelligence. The AI Basic Act took effect on January 22, 2026—ahead of the full rollout of Europe’s much-discussed AI Act—with a clear message: the country wants to lead in AI while keeping risks in check.
The government says the rules will build trust in the technology, especially as tools like ChatGPT and deepfake generators become part of everyday life. But not everyone’s celebrating. Startups and smaller companies are already complaining that the requirements are vague and could slow down innovation at a time when South Korea is trying to compete with the U.S. and China.
What the Law Actually Requires
The rules focus on two main areas: “high-impact” AI systems that could seriously affect people’s lives, and generative AI that can create realistic content.
For high-impact uses—think AI screening loans and credit, managing transport systems, running healthcare diagnostics, or overseeing nuclear plants and drinking-water production—companies have to make sure there’s always a human in the loop. Someone needs to be able to step in and override the system if things go wrong.
Generative AI gets its own set of obligations. If a product or service uses it and the output might be hard to tell apart from real content, companies have to label it clearly as AI-made. Users also need advance notice that they’re dealing with AI-generated content. This is aimed squarely at problems like deepfakes, misinformation, and fake ads, all of which have been popping up more often.
Penalties aren’t huge compared to Europe—fines top out around 30 million won (about $20,400) for things like skipping the labeling requirement. But the government is giving everyone breathing room: there’s at least a one-year grace period before any administrative fines kick in, and officials say they might extend it if the industry needs more time.
To soften the blow, the Ministry of Science and ICT plans to set up guidance platforms and a dedicated support center to help companies figure out compliance.
Why South Korea Moved So Fast
The country has big ambitions in AI. Leaders want South Korea to crack the global top three alongside the U.S. and China. It’s already home to heavyweights like Samsung and Naver, and the government has poured money into chips, data centers, and research.
But rapid growth has come with worries. Deepfakes have hit politics hard—fake audio and video targeting candidates during elections—and everyday scams using AI voices or images are on the rise. Public trust was taking a hit, and officials decided waiting wasn’t an option.
President Lee Jae Myung put it plainly: the goal is to “maximise the industry’s potential through institutional support, while pre-emptively managing anticipated side effects.” In other words, grow fast but don’t let things get out of hand.
The law went through plenty of consultation with companies before passing late last year. Still, some details were left open, and now that it’s live, the real work of filling in the blanks begins.
The Pushback from Startups
Not everyone is on board. Lim Jung-wook, co-head of the Startup Alliance, summed up the mood for many smaller players: “There’s a bit of resentment—why do we have to be the first to do this?”
Startups say the language in the law is too vague. Without clear guidelines on exactly what counts as “high-impact” or how detailed labeling has to be, companies might play it safe and dial back ambitious projects just to avoid trouble. That could mean less experimentation and slower growth at the exact moment South Korea wants its homegrown firms to shine.
Being first also means there’s no roadmap to copy. Europe has been debating its AI Act for years and is rolling it out slowly through 2027. The U.S. has mostly stuck to voluntary guidelines and light-touch executive orders. China has some rules but focuses more on state control than broad industry mandates.
South Korean startups worry they’ll bear the compliance costs while bigger foreign competitors face fewer hurdles in the short term.
How It Stacks Up Against Europe and Elsewhere
Europe’s AI Act is the one everyone compares to. It’s risk-based like South Korea’s—banning some uses outright and demanding strict oversight of high-risk ones—but the fines are far steeper, up to 7% of global turnover for serious violations.
South Korea’s approach feels lighter in punishment but heavier on speed. By getting rules on the books now, the country hopes to shape how companies build AI from the ground up rather than retrofitting later.
The U.S. has largely let industry self-regulate, with the Biden administration having pushed safety-testing pledges from big players and some state-level laws popping up. Critics say that’s too hands-off and risks falling behind in setting global standards.
China has moved on specific issues like deepfakes and algorithmic recommendations but keeps tight central control over content and data.
South Korea is trying to thread the needle: firm enough to protect people, flexible enough to let companies keep innovating.
What Happens Next
The grace period buys time. Over the next year or more, the government will flesh out details through secondary regulations and the promised support programs. Companies will test the boundaries—what kind of human oversight is enough? How visible does labeling need to be?
If startups’ fears play out, we might see slower product launches or talent moving abroad. On the flip side, clear rules could attract investment from firms that want predictable markets and public trust.
Public pressure will matter too. As deepfake incidents keep making headlines, support for tougher measures could grow. Or if the economy feels the pinch, calls to loosen up might get louder.
The Bigger Picture
South Korea’s move puts pressure on other countries. If it works—boosting safety without killing growth—it could become a model for mid-sized economies that want influence in tech without the market dominance of the U.S. or China’s state power.
For now, the country’s betting that getting ahead on rules will pay off long-term. Startups might grumble about being the guinea pigs, but the government insists the support measures will keep everyone in the game.
Whether this turns into a real edge in the global AI race or just another layer of red tape is the question everyone will be watching in 2026 and beyond.