
Photo by Clint Patterson on Unsplash
Google Reports Over 250 Complaints Of AI-Generated Deepfake Terrorism Content
Google shared a report with Australian authorities revealing that its artificial intelligence tool, Gemini, received over 250 complaints globally alleging it was used to generate deepfake terrorist content, and more than 80 complaints regarding suspected AI-generated child abuse material.
In a Rush? Here are the Quick Facts!
- Google reported over 250 global complaints that Gemini AI was used to generate deepfake terrorist content, and more than 80 concerning child abuse material.
- The data was submitted to Australia’s eSafety Commission under the country’s Online Safety Act.
- Australian authorities warn that AI safety measures must improve as platforms struggle to detect harmful content.
The information was handed to Australia’s online safety watchdog, the eSafety Commission, after tech companies, including Meta, Telegram, Reddit, X, and Google, received notices in March 2024 requiring them to report under Australia’s Online Safety Act.
eSafety published an official document on Thursday raising concerns about AI safety. Google’s report covered user reports received from April 1, 2023, to February 29, 2024.
“It (Google) received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist material or activity generated by Gemini, the company’s own generative AI, and 86 user reports of suspected AI-generated child sexual exploitation and abuse material,” said eSafety Commissioner Julie Inman Grant. “This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated.”
According to Reuters, a Google spokesperson said the company does not allow illegal activities. “The number of Gemini user reports we provided to eSafety represent the total global volume of user reports, not confirmed policy violations,” the spokesperson told Reuters via email.
Inman Grant described the report as providing “world-first insights” into how tech companies are managing the “online proliferation of terrorist and violent extremist material.”
The Commissioner also highlighted how platforms like Facebook, WhatsApp, and Telegram have failed to detect live-streamed terrorist content. Inman Grant cited the March 2019 Christchurch attack, in which a white supremacist gunman killed 51 people at two mosques in Christchurch, New Zealand, as an example of how extremist and deadly attacks have been live-streamed without the content being detected or removed.
A few days ago, another report shared by eSafety revealed that children can easily bypass the age verification systems used by the most popular social media platforms.