Social media platforms enhance fake content measures

October 16, 2023, 11:33

[Image source: Pixabay]
Meta Platforms Inc., operator of the social networking platforms Facebook and Instagram, is strengthening measures against the spread of fake news related to the Israel-Palestine war. The move comes after the platforms were accused of allowing fake news about the conflict to spread.

Meta announced on Friday local time that it would enhance policies to block violent content and fake news. The company has since established a special operations center to respond to violent and explicit content, staffed with experts proficient in Hebrew and Arabic.

According to Meta, it has removed more than 795,000 pieces of content or marked them as disturbing. It has also made specific Instagram hashtags that violate its policies unsearchable, and users who have violated policies in the past will face restrictions on using Facebook and Instagram Live.

The moves come after the European Union last week demanded that major social media platforms detail the concrete measures they are taking to prevent fake news.

The EU's Digital Services Act, in force since August 2023, requires social media platforms to prevent the distribution of fake news and violent content.

Under the law, social media platforms must promptly remove harmful and illegal content and prevent its spread; failure to comply may result in fines of up to 6 percent of annual global revenue.

X, formerly Twitter, also responded to the EU demand, stating that it deleted tens of thousands of pieces of content or labeled them as potentially misleading.

Alongside the platforms' responses, defenses against artificial intelligence (AI) deepfakes are being developed, but both their development and their adoption remain slow.

OpenAI, the creator of ChatGPT, released a classifier intended to detect AI-generated text in February 2023, and Intel introduced its FakeCatcher technology, which determines whether a video is a deepfake, in November 2022.

But the industry believes that such deepfake detection technology is not yet accurate enough to reliably determine authenticity, as advertisements featuring virtual humans are now readily accessible and fake videos continue to circulate.

According to analysis firm Immersion Research, the global virtual human market is expected to grow at an average annual rate of 36.4 percent from 2021, reaching $527.5 billion by 2030.

Copyright © 매일경제 & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training prohibited.
