The Alphabet-owned tech giant also said it had received dozens of user reports warning that its AI program, Gemini, was being used to create child abuse material, according to the Australian eSafety Commission.
Since OpenAI's ChatGPT exploded into the public consciousness in late 2022, regulators around the world have called for better guardrails so AI can't be used to enable terrorism, fraud, deepfake pornography and other abuse.
“This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated,” eSafety Commissioner Julie Inman Grant said in a statement.
Google did not say how many of the complaints it had verified, according to the regulator.
“We are committed to expanding on our efforts to help keep Australians safe online,” the spokesperson said by email.
Google used hash-matching - a system of automatically matching newly uploaded images against already-known images - to identify and remove child abuse material made with Gemini.
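For illustration only, the sketch below shows the basic idea of hash-matching under a simplified assumption: each uploaded file is reduced to a SHA-256 digest and checked against a set of digests of already-known images. The KNOWN_HASHES set and the function names are hypothetical, and real-world systems (including Google's) typically rely on perceptual hashing that tolerates re-encoding and resizing rather than exact cryptographic digests; this is not a description of Google's actual implementation.

```python
# Minimal sketch of hash-matching against a database of known images.
# Assumes exact-match SHA-256 digests; production systems generally use
# perceptual hashes instead. KNOWN_HASHES is a hypothetical placeholder.
import hashlib
from pathlib import Path

KNOWN_HASHES = {
    # Hex digests of already-known images would be loaded from a database.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_image(path: Path) -> bool:
    """True if the file's digest matches an already-known image."""
    return file_digest(path) in KNOWN_HASHES

if __name__ == "__main__":
    import sys
    for name in sys.argv[1:]:
        flagged = is_known_image(Path(name))
        print(f"{name}: {'match - flag for review' if flagged else 'no match'}")
```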
The regulator has fined Telegram and Twitter, later renamed X, for what it called shortcomings in their reports. X has lost one appeal against its A$610,500 ($382,000) fine but plans to appeal again. Telegram also plans to challenge its fine.