EU reaches agreement on world’s first AI legislation

2023. 12. 11. 11:36


[Image source: Midjourney]
European Union (EU) lawmakers agreed on Friday on the world’s first comprehensive law to regulate artificial intelligence (AI), as AI development led by major technology companies in the United States accelerates rapidly.

The AI Act “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field,” the European Parliament said in a statement. “The rules establish obligations for AI based on its potential risks and level of impact.”

The AI law focuses on prohibiting AI that could threaten citizens’ rights and democracy, with exceptions for law enforcement agencies, setting guardrails for general purpose AI (GPAI), and supporting innovation for small and mid-size enterprises.

Guardrails for GPAI are particularly expected to act as significant constraints on big tech, requiring compliance with transparency requirements.

“These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training,” the European Parliament said.

For high-impact GPAI models with systemic risks, the EU demanded even stronger compliance measures, requiring adherence to model evaluation, system evaluation, risk mitigation measures and security testing, on top of reporting serious incidents to the EU Commission, ensuring cybersecurity, and reporting on energy efficiency.

For its part, the EU promised extensive support for small AI companies operating in Europe.

It “wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain,” and “to this end, the agreement promotes so-called regulatory sandboxes and real-world-testing, established by national authorities to develop and train innovative AI before placement on the market.”

In the meantime, the law is expected to pose significant challenges for big tech companies’ businesses.

Companies like Microsoft Corp., which is developing its Copilot service by integrating OpenAI’s GPT models, and Google LLC, which is enhancing AI-based services with the release of Gemini, may need to create separate services tailored for the EU, resulting in additional development costs.

The United States and United Kingdom are investigating Microsoft and OpenAI for potential anti-trust violations.

OpenAI, the developer of leading AI models including ChatGPT, had been outside the regulatory scope until now, but the recent ousting and return of Sam Altman as CEO has shifted perceptions, placing the company under regulatory scrutiny.

The European and U.S. governments have already announced investigations into potential antitrust law violations related to Microsoft’s investment in OpenAI.

In the United States, an executive order signed in October 2023 focuses on AI training, reporting to the federal government, and guidelines for watermarks on AI-generated content.

Korea is viewed as lagging behind the United States, but ahead of the EU technologically, and aims to establish minimal regulations necessary without impeding growth. But legislation for promoting AI industries and ensuring safety has faced challenges at the National Assembly.

The content industry demands immediate protection of intellectual property rights, while the AI industry fears a loss of growth momentum.

“Korea should promote AI industry growth based on corporate self-regulation and address arising issues for improvement,” Naver Cloud head of AI Innovation Ha Jung-woo said.

Experts noted that Korea should still prioritize industry promotion although regulations for high-risk areas need to be considered.

The United Nations is actively working to come up with a broad framework for regulating AI by August 2024, according to Ko Hak-soo, chairman of the Personal Information Protection Commission and a member of the United Nations high-level advisory body on AI.

Earlier this month, Ko visited the UN headquarters in New York to attend the first offline meeting of the UN AI body, which was established at the end of October 2023 as per the proposal of UN Secretary General António Guterres.

Ko is the only member from Korea among the 39 AI experts.

“There are recent AI regulation documents from major G7 countries, such as the Hiroshima AI Process and the White House AI Executive Order,” he said, stating that “red teaming” and “watermark” were notable regulatory proposals in both documents.

Red teaming involves AI service developers verifying and reporting issues on their own before launch, while watermarking is a system in which an AI embeds a mark indicating that content was created by AI, allowing it to be filtered visually or algorithmically.
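To illustrate the algorithmic-filtering idea behind watermarking, the toy Python sketch below tags text with an invisible zero-width character sequence and later detects it. The specific marker scheme (`ZERO_WIDTH_TAG`) is an assumption chosen for illustration only; real providers use far more robust statistical or cryptographic methods.

```python
# Toy illustration of AI-content watermarking: embed an invisible mark,
# then detect it algorithmically. The zero-width marker below is an
# assumed scheme for demonstration, not any provider's actual method.

ZERO_WIDTH_TAG = "\u200b\u200c\u200b"  # invisible zero-width characters (assumed marker)

def watermark(text: str) -> str:
    """Append an invisible marker identifying the text as AI-generated."""
    return text + ZERO_WIDTH_TAG

def is_ai_generated(text: str) -> bool:
    """Detect the marker, enabling algorithmic filtering of AI content."""
    return ZERO_WIDTH_TAG in text

sample = watermark("This paragraph was produced by a language model.")
print(is_ai_generated(sample))        # True
print(is_ai_generated("Human text"))  # False
```

A trivial scheme like this is easy to strip (copying as plain text can drop the marker), which is why the detailed discussions Ko mentions center on watermarks that survive editing and reformatting.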

“The term watermark has been mentioned but detailed discussions have yet to take place,” Ko said.

Copyright © 매일경제 & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training prohibited.
