LG joins global front countering deepfake content: Sources

March 14, 2024, 08:54

[Photo by MK DB]
A South Korean company was among the 23 technology companies that have formed a global united front to prevent the spread of artificial intelligence (AI) deepfakes, which can be used to manipulate information, sources said on Wednesday.

Maeil Business Newspaper has learned that LG AI Research is the only Korean company in the AI Elections Accord, an agreement created by major global tech companies to address AI deepfakes.

The coalition includes companies such as OpenAI, Microsoft Corp., Google LLC, Meta, X (formerly Twitter), TikTok, Anthropic, and Adobe Inc.

These influential companies leading AI development are both directly and indirectly responsible for the distribution of generative AI content.

Analysts suggest that the coalition’s influence could play a strong role in the development and standardization of international deepfake countermeasures.

Among Asian companies, only LG AI Research and Japanese security company Trend Micro are included. Formation of the coalition was reportedly led by Microsoft.

Overseas tech companies reportedly think highly of LG’s efforts in developing its own generative AI model EXAONE and participating in the UNESCO Business Council for Ethics of AI.

The 23 tech companies announced a technology agreement to address the deceptive use of AI, with a focus on preventing the side effects of deepfakes, during the Munich Security Conference in February 2024.

While some argue that the agreement lacks enforceability, the potential for concrete follow-up measures and implementation caught the public’s attention.

The 23 companies plan to collaborate initially on developing technology to minimize the risks of AI-generated content related to elections. Discussions are expected to be active on technical standards for identifying and interpreting AI-generated content.

[Graphics by Song Ji-yoon and Lee Eun-joo]
Experts suggest that topics such as applying invisible watermarks to services could also be on the agenda. Current industry-standard options include the C2PA invisible watermark, led by Microsoft and Adobe, and SynthID, developed by Google DeepMind.
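To make the concept concrete, here is a minimal, hypothetical Python sketch of the general idea behind invisible watermarks: a label is hidden in the least-significant bits of pixel values and recovered later. This is a toy illustration only, not the actual C2PA or SynthID schemes, which are far more robust; the "AI-GEN" label is an invented example.

```python
# Toy illustration of the invisible-watermark idea: hide a bit string in
# the least-significant bits (LSBs) of pixel values and recover it later.
# NOT the C2PA or SynthID algorithms; for concept only.

WATERMARK = "AI-GEN"  # hypothetical label marking AI-generated content

def embed(pixels: list[int], mark: str) -> list[int]:
    """Write each bit of `mark` into the LSB of successive pixel values."""
    bits = [int(b) for ch in mark.encode() for b in f"{ch:08b}"]
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the mark bit
    return out

def extract(pixels: list[int], length: int) -> str:
    """Read `length` bytes of the hidden mark back out of the pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode(errors="replace")

pixels = list(range(48, 96))               # stand-in for grayscale image data
marked = embed(pixels, WATERMARK)
assert extract(marked, len(WATERMARK)) == WATERMARK  # mark survives a round trip
```

Production schemes must also survive compression, cropping, and re-encoding, which is precisely where joint standardization efforts come in.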

“We understand that AI accord companies have reached an agreement to use robust proof methods against deepfake content,” an industry official said. “Discussions are expected to focus on jointly developing new source technologies, aligning with international standards while labeling work is done individually by each company.”
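As a rough sketch of what such per-company labeling might look like in practice, the following Python snippet builds a simplified provenance record binding a content hash to its generator. The field names are hypothetical and do not follow the actual C2PA schema; this only illustrates the labeling idea described above.

```python
# Minimal sketch of per-company content labeling, loosely inspired by
# provenance manifests; field names are hypothetical, not the C2PA schema.
import hashlib
import json
from datetime import datetime, timezone

def label_content(content: bytes, generator: str) -> str:
    """Return a provenance record that ties a content hash to its generator."""
    manifest = {
        "generator": generator,                       # e.g. the AI model or service used
        "created": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    return json.dumps(manifest, indent=2)

print(label_content(b"<image bytes>", "example-image-model"))
```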

For their part, Korean companies are also accelerating efforts to counter harmful deepfake content.

Naver, Kakao, and SK Communications adopted a joint declaration in March 2024 to prevent malicious deepfake use during elections, and Kakao also announced that it would introduce invisible watermark technology to its AI image generation model, Karlo.

This move responds to the December 2023 amendment of the Public Official Election Act, which prohibits the production, editing, distribution, screening, and posting of AI-based deepfakes targeting voters during the campaign period for the general elections on April 10, 2024.

Since February 28, Naver has displayed a warning label at the top of search results when keywords associated with harmful deepfake content are entered on its portal.
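A hypothetical sketch of such a keyword-triggered warning is shown below; the keyword list and message are invented for illustration and are not Naver's actual implementation.

```python
# Hypothetical sketch of a keyword-triggered search warning, in the spirit
# of Naver's label; the keywords and message here are invented examples.
DEEPFAKE_KEYWORDS = {"deepfake", "face swap"}  # illustrative only

def warning_banner(query: str) -> str | None:
    """Return a warning label if the query touches flagged deepfake terms."""
    if any(kw in query.lower() for kw in DEEPFAKE_KEYWORDS):
        return "Notice: producing or distributing illegal deepfake content is punishable by law."
    return None

print(warning_banner("celebrity deepfake video"))
```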

Naver is also conducting long-term research and development aimed at adopting the global technical standard for verifying the source of content and at securing technology to detect invisible metadata and AI-generated content.

Copyright © 매일경제 & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training are prohibited.
