AI’s advanced voice-cloning capabilities create realistic deepfakes, raising concerns about misinformation.
As deepfake content infiltrates mainstream channels, misinformation is rampant.
A convincing artificial intelligence voice, closely resembling that of former President Barack Obama, emerged on TikTok in late August, addressing allegations surrounding the death of his former chef.
“I am deeply saddened by the tragic loss of Tafari Campbell [misspelled in the video as Talafi Cambell],” fake Obama says in the video, “who was not just an employee, but a valued member of our extended family.”
The (eerily convincing) voice is the result of new tools allowing AI to clone and manipulate real voices with ease and humanlike mimicry, commonly known as “deepfakes.”
Since the release of these tools in 2017, AI-generated voices have been employed in videos to propagate disinformation, as exemplified by the fake Obama video identified by NewsGuard, The New York Times reported.
TikTok CEO Shou Zi Chew listens to questions from U.S. representatives during his testimony at a Congressional hearing on TikTok in Washington, DC on March 23rd, 2023. Nathan Posner/Anadolu Agency | Getty Images.
The NewsGuard report released late last month found that the deepfake Obama video was one of 17 in a network of accounts generating and posting fake news content. Thirteen of the 17 accounts were discovered to be deceptively branded to resemble mainstream news outlets, TV shows, and well-known TikTok accounts, blurring the lines between what is real and AI-generated for an unsuspecting scroller.
Platforms like TikTok are now grappling with the challenge of flagging and labeling AI-generated content. Although TikTok has introduced tools for labeling such material, the issue remains a concern as new content is rapidly posted across platforms, including YouTube, Instagram, and Facebook.
While motives may vary, AI-generated content can serve as a means for bad actors to manipulate public opinion and spread falsehoods.
EU Calls on Meta and X to Take Action Against Misinformation
On Wednesday, European regulator Thierry Breton penned a letter to Mark Zuckerberg, CEO of Meta, urging him to be “vigilant” in combating disinformation on his company’s platforms amidst the ongoing Israel-Hamas conflict, CNBC reported.
“After the terrorist attacks by Hamas on Israel on Saturday, we quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation,” a Meta spokesperson told the outlet. “Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact-checkers in the region to limit the spread of misinformation.”
Breton sent a letter to X owner Elon Musk the day before, stating that there were “indications” that groups were sharing misinformation and content of a “violent and terrorist” nature concerning the Israel-Hamas conflict on the platform.
X CEO Linda Yaccarino responded to the letter on Thursday on X, stating that the platform is taking action to remove or label “tens of thousands” of pieces of content that could constitute misinformation.
“In response to the recent terrorist attack on Israel by Hamas, we’ve redistributed resources and refocused internal teams who are working around the clock to address this rapidly evolving situation,” the CEO added.
Since the attack by Hamas on Saturday, Reuters has fact-checked and identified at least 15 videos labeled as occurring during the current conflict that, in reality, either predated the attacks or occurred in entirely different parts of the world.
Reuters also found that a video of CNN journalists seeking shelter from rockets near the Israel-Gaza border was fabricated and manipulated with a voiceover instructing them to stage their coverage, giving a false impression that CNN orchestrated the attack.