EU reaches agreement on world’s first AI legislation
The AI Act “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field,” the European Parliament said in a statement. “The rules establish obligations for AI based on its potential risks and level of impact.”
The AI law focuses on prohibiting AI that could threaten citizens’ rights and democracy, with exceptions for law enforcement agencies, setting guardrails for general purpose AI (GPAI), and supporting innovation for small and mid-size enterprises.
The guardrails for GPAI in particular are expected to act as significant constraints on big tech, as they impose transparency requirements on model developers.
“These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training,” the European Parliament said.
For high-impact GPAI models with systemic risks, the EU demanded even stronger compliance measures, requiring adherence to model evaluation, system evaluation, risk mitigation measures and security testing, on top of reporting serious incidents to the EU Commission, ensuring cybersecurity, and reporting on energy efficiency.
For its part, the EU promised extensive support for small AI companies operating in Europe.
The Parliament “wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain,” and “to this end, the agreement promotes so-called regulatory sandboxes and real-world-testing, established by national authorities to develop and train innovative AI before placement on the market.”
The law will nonetheless weigh on big tech companies, which are likely to face significant compliance challenges in their businesses.
Companies like Microsoft Corp., which is developing its Copilot service by integrating OpenAI’s GPT, and Google LLC, which is enhancing AI-based services with Gemini’s release, may need to create separate services tailored for the EU, resulting in additional development costs.
The United States and United Kingdom are also investigating Microsoft and OpenAI for potential antitrust violations.
OpenAI, the developer of leading AI models including ChatGPT, had been outside the regulatory scope until now, but the recent ousting and return of Sam Altman as CEO has shifted the perception, placing the company under regulatory scrutiny.
The European and U.S. governments have already announced investigations into potential antitrust law violations related to Microsoft’s investment in OpenAI.
The United States issued an executive order in October 2023 that focuses on AI training, reporting requirements to the federal government, and guidelines for watermarks on AI-generated content.
Korea is viewed as lagging behind the United States but ahead of the EU technologically, and aims to establish only the minimum regulation necessary without impeding growth. But legislation to promote the AI industry while ensuring safety has stalled at the National Assembly.
The content industry demands immediate protection of intellectual property rights, while the AI industry fears a loss of growth momentum.
“Korea should promote AI industry growth based on corporate self-regulation and address arising issues for improvement,” Naver Cloud head of AI Innovation Ha Jung-woo said.
Experts noted that Korea should still prioritize industry promotion although regulations for high-risk areas need to be considered.
The United Nations is actively working to come up with a broad framework for regulating AI by August 2024, according to Ko Hak-soo, chairman of the Personal Information Protection Commission and a member of the United Nations high-level advisory body on AI.
Earlier this month, Ko visited the UN headquarters in New York to attend the first in-person meeting of the UN AI body, which was established at the end of October 2023 at the proposal of UN Secretary General António Guterres.
Ko is the only member from Korea among the 39 AI experts.
“There are recent AI regulation documents from major G7 countries, such as the Hiroshima AI Process and the White House AI Executive Order,” he said, stating that “red teaming” and “watermark” were notable regulatory proposals in both documents.
Red teaming involves AI service developers verifying and reporting issues on their own before launch, while watermarking is a system in which the AI embeds a mark indicating that an image was created by AI, allowing such content to be visually or algorithmically filtered.
“The term watermark has been mentioned but detailed discussions have yet to take place,” Ko said.
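To illustrate the idea of algorithmic filtering, the sketch below embeds an invisible tag in the least significant bits of an image’s pixel values and checks for it later. This is a hypothetical toy scheme for explanation only, not any vendor’s actual watermark; real AI watermarks are designed to survive compression and editing.

```python
# Toy least-significant-bit (LSB) watermark: hypothetical, for illustration.
# Production AI watermarks use far more robust statistical or
# frequency-domain techniques than this sketch.

TAG = b"AI-GENERATED"  # assumed marker string, chosen for this example


def embed(pixels: list[int], tag: bytes = TAG) -> list[int]:
    """Hide `tag` bit by bit in the lowest bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = pixels[:]
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite the lowest bit
    return out


def detect(pixels: list[int], tag: bytes = TAG) -> bool:
    """Re-read the low bits and compare them against the expected tag."""
    n = len(tag) * 8
    if n > len(pixels):
        return False
    bits = [p & 1 for p in pixels[:n]]
    recovered = bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(len(tag))
    )
    return recovered == tag


image = [200, 17, 34, 99] * 40   # fake 8-bit grayscale pixel values
marked = embed(image)
print(detect(marked))            # True
print(detect(image))             # False
```

A filter as simple as this breaks as soon as the image is re-encoded, which is one reason the detailed discussions Ko mentions are still needed.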
Copyright © 매일경제 & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training are prohibited.