eSafety Commissioner probes tech companies over spread of extremist material online

The eSafety Commissioner has written to six major tech companies with Australian operations, requiring them to report on their efforts to protect online users from “terrorist and violent extremist material and activity”.

Google, Meta, Twitter/X, WhatsApp, Telegram and Reddit have all been issued notices under powers in the Online Safety Act, requiring them to answer several detailed questions about how they detect, monitor and remove harmful online content from their platforms.

The dissemination of this type of content drew the attention of communities and regulators worldwide after the 2019 terrorist attacks in Christchurch, New Zealand, and Halle, Germany, and, more recently, the 2022 attack in Buffalo, New York. These attacks demonstrated how social media can be exploited by violent extremists to push their violent agendas.

“We remain concerned about how extremists weaponise technology like live-streaming, algorithms and recommender systems, and other features to promote or share this hugely harmful material,” eSafety Commissioner, Julie Inman Grant, said.

“We are also concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to cause harm.

“Earlier this month the UN-backed Tech against Terrorism reported that it had identified users of an Islamic State forum comparing the attributes of Google’s Gemini, ChatGPT, and Microsoft’s Copilot.

“The tech companies that provide these services have a responsibility to ensure that these features and their services cannot be exploited to perpetrate such harm and that’s why we are sending these notices to get a look under the hood at what they are and are not doing.”

Inman Grant said her office has continued to receive reports of harmful material being re-shared online, including material from the 2019 Christchurch attack.

A 2022 OECD report, Transparency reporting on terrorist and violent extremist content online, ranked Telegram first for the presence of terrorist and violent extremist material on its platform, followed by Google’s YouTube in second and Twitter/X in third. Meta-owned Facebook and Instagram ranked fourth and fifth.

“It’s no coincidence we have chosen these companies to send notices to as there is evidence that their services are exploited by terrorists and violent extremists. We want to know why this is and what they are doing to tackle the issue,” Inman Grant said.

“Transparency and accountability are essential for ensuring the online industry is meeting the community’s expectations by protecting their users from these harms. Also, understanding proactive steps being taken by platforms to effectively combat TVEC [terrorist and violent extremist content] is in the public and national interest.

“That’s why transparency is a key pillar of the Global Internet Forum to Counter Terrorism and the Christchurch Call, global initiatives that many of these companies are signed up to. And yet we do not know the answer to many of these basic questions.

“And, disappointingly, none of these companies have chosen to provide this information through the existing voluntary framework – developed in conjunction with industry – provided by the OECD. This shows why regulation, and mandatory notices, are needed to truly understand the true scope of challenges, and opportunities.”

The companies issued with notices have until the beginning of May to respond.