The biggest tech companies operating in Australia have fallen short in preventing the proliferation of child sexual exploitation material, Australia’s eSafety Commissioner has found, with search engine giant Google issued with a formal warning and social media company Twitter (subsequently rebranded as ‘X’) fined $610,500.
The latest report from Australia’s national independent regulator for online safety compiles responses from each of the tech companies – Twitter/X, Google, TikTok, Twitch and Discord, each of which was issued with a legal notice in February under Australia’s Online Safety Act – about the measures they had in place to combat child sexual abuse.
Under the Act’s new transparency powers, the tech companies are required to provide the regulator with information about the measures they have in place to tackle such material. However, two providers, Twitter/X and Google, were found not to have complied with the notices given to them, with both failing to “adequately respond to a number of questions in their respective notices”, the report revealed.
Following this, the eSafety Commissioner issued Google with a formal warning, notifying the firm of its failure to comply. The company was found to have given a number of generic responses to specific questions and to have provided aggregated information where specific answers were sought.
Twitter/X’s non-compliance was found to be more serious, with the company failing to provide any response to certain questions, the regulator said.
In some instances, Twitter/X left sections entirely blank or provided responses that were ‘otherwise incomplete and/or inaccurate’, particularly with regard to the measures it had in place to detect child sexual exploitation in livestreams and the tools and technologies it used to detect such material.
Having been issued with an infringement notice for $610,500, Twitter/X now has less than 30 days to request the withdrawal of the notice or to pay the penalty, the regulator said.
If Twitter/X chooses not to pay the penalty, the Commissioner may take further action.
“We really can’t hope to have any accountability from the online industry in tackling this issue without meaningful transparency, which is what these notices are designed to surface,” eSafety Commissioner Julie Inman Grant said.
“What we are talking about here are serious crimes playing out on these platforms committed by predatory adults against innocent children and the community expects every tech company to be taking meaningful action.
“Importantly, next year we will have industry codes and standards in place which work hand-in-hand with these Basic Online Safety Expectations transparency powers to ensure companies are living up to these responsibilities to protect children.”
The eSafety Commissioner’s first report featured Apple, Meta, Microsoft, Skype, Snap, WhatsApp and Omegle, uncovering serious shortfalls in how these companies were tackling the issue.
Some of the key findings of the eSafety Commissioner’s latest report included:
- While YouTube, TikTok and Twitch are taking steps to detect child sexual exploitation in livestreams, Discord is not, saying such detection would be ‘prohibitively expensive’. Twitter/X did not provide the information required.
- TikTok and Twitch use language analysis technology to detect child sexual exploitation and abuse (CSEA) activity, such as sexual extortion, across all parts of their services, whereas Discord does not use any such detection technology at all. Twitter/X uses these tools on public content but not on direct messages. Google uses the technology on YouTube but not on Chat, Gmail, Meet and Messages.
- Google (with the exception of its search service) and Discord are not blocking links to known child sexual exploitation material, despite the availability of URL databases from expert organisations such as the UK-based Internet Watch Foundation.
- YouTube, TikTok and Twitch are using technology to detect grooming, whereas Twitter/X, Discord and Google’s other services (Meet, Chat, Gmail and Messages) are not.
- Google is not using its own technology to detect known child sexual exploitation videos on some of its services – Gmail, Chat and Messages.
- In the three months after Twitter/X’s change in ownership in October 2022, its proactive detection of child sexual exploitation material fell from 90 per cent to 75 per cent. The company said its proactive detection rate had since improved in 2023.
- For Discord and Twitch, which are partly community-moderated services, professional safety staff are not automatically notified when a volunteer moderator identifies child sexual exploitation and abuse material.
- Median response times to user reports of child sexual exploitation material vary significantly: TikTok says it responds within 5 minutes for public content, Twitch takes 8 minutes and Discord takes 13 hours for direct messages, while Twitter/X and Google did not provide the information required.
- There is also significant variation in the languages covered by content moderators. Google said it covers at least 71 languages and TikTok 73; by comparison, Twitter/X said it covered only 12, Twitch reported 24 and Discord 29. This means some of the top five non-English languages spoken at home in Australia are not covered by default by Twitter/X, Discord and Twitch moderators. This is particularly important for harms like grooming or hate speech, which require context to identify.