Defending against a tsunami of deepfakes
Photographic “deepfakes” of American superstar Taylor Swift went viral on X, formerly Twitter, over the weekend. Deepfakes are videos, audio or images fabricated with AI tools to make people appear to do or say things they never did. Although X has a policy against synthetic or manipulated media on its platform, it took more than 17 hours to remove the fake images of Swift, by which time they had been viewed over 47 million times.
A week earlier, a doctored phone message mimicking President Joe Biden circulated in New Hampshire, telling voters not to waste their vote in the Democratic primary. The bogus message spread quickly, shortly before the primary. Deepfakes can be produced easily with generative AI technology. We must build defenses against a deepfake tsunami.
The Korean legislature last December passed a revision to the Public Official Election Act outlawing the use of deepfakes for campaigning within 90 days of election day. The law on sexual violence also provides grounds to punish the posting and circulation of deepfake-based explicit material. Whether these provisions will be enough to sanction fabricated content that can be made within minutes is questionable.
Obscene material and fake campaign content generated with AI-enabled deepfake tools can go viral on social media in seconds, while removal, blocking and regulatory action by the authorities can come too late. Deepfakes drowned out facts in last year’s elections in Turkey and Slovakia.
Fortunately, the tech industry is moving quickly to filter deepfakes. Intel has introduced an AI-enabled real-time deepfake detector. Big Tech giants like Google and Microsoft are using AI tools to digitally watermark fabricated images so that consumers can better identify misinformation. Platform operators must employ these tools to combat the spread.
X announced it would create a team to monitor sexually explicit material. The platform is grappling with the embarrassment after Elon Musk sacked content moderation staff when he acquired Twitter in October 2022. Korean platform operators must also take heed of the danger.
But the response must not go overboard. Governments are vying for leadership in AI technology and its applications, and deepfake technology itself is not evil. The late Song Hae, the legendary host of the National Singing Contest from 1988 until his death in 2022, was able to “appear” on a TV drama thanks to deepfake technology. The technology is being actively employed in broadcasting, entertainment and games, and its synergy will only grow when combined with augmented reality and virtual reality capabilities. Side effects should be controlled so that new technologies can be adopted for the good of society.