Twitter warns Aus Govt of foreign interference via social media


Social media giant Twitter has conceded that users on its network have the potential to significantly disrupt the operations of government and industry, as malicious state-sponsored actors and misinformation campaigns proliferate on the platform.

The likelihood and risk of future disruption were highlighted in a submission by the San Francisco-based social media giant to the Australian Senate’s Select Committee on Foreign Interference Through Social Media, which held a public hearing last Monday.

The inquiry took online submissions from Twitter alongside those of security experts representing the Government, higher education, and the private sector.

Last year, the Australian Senate established the Select Committee to examine the risks of foreign interference posed to Australia by state-sponsored cyber actors.

The Committee was chaired by Labor’s Jenny McAllister, with Liberal Senator Jim Molan as Deputy Chair, alongside Senators Kimberley Kitching and David Van.

The inquiry exposed the increasing exploitation of social media by foreign state actors to spread misinformation and confuse public discourse, both in Australia and overseas.

According to Twitter, the spectrum of state actor activities – from so-called “white propaganda” originating from self-declared ‘agents of the state’ to higher-level messaging and coordinated activities from state-controlled cyber actors – has become more pervasive in recent months.

This includes state-backed actors creating fake accounts and leveraging both human proxies and automated bots to spread disinformation.

Additionally, it said, proxy-generated content was being spread farther and wider – both inadvertently and deliberately – by negligent media companies.

According to research from the University of Southern California and Indiana University, up to 15 per cent of Twitter accounts were bots rather than people.

With upwards of 319 million active users on Twitter each month, this translates to nearly 48 million bot accounts on the platform.
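For readers who want to check the arithmetic behind that estimate, the minimal sketch below simply reproduces it in Python; the inputs are the figures quoted above, not data supplied by Twitter or the researchers, and the variable names are illustrative only.

```python
# Back-of-the-envelope check of the bot estimate cited above.
# Inputs are the article's quoted figures, not official Twitter data.
monthly_active_users = 319_000_000   # reported monthly active users
bot_share_upper_bound = 0.15         # upper estimate from the USC/Indiana research

estimated_bots = monthly_active_users * bot_share_upper_bound
print(f"Estimated bot accounts: {estimated_bots / 1e6:.1f} million")
# Output: Estimated bot accounts: 47.9 million – the "nearly 48 million" cited above
```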

The proliferation of fake accounts has proved troubling for Twitter, which has struggled to grow its user base in the face of growing competition from Facebook, Instagram, Snapchat, and others.

Twitter’s submission to the parliamentary inquiry, delivered by the company’s director of public policy Kara Hinesley, acknowledged the growing threat posed by hostile cyber actors in the distribution of media content.

“As we have seen in other policy areas, this issue is a challenge where domestic media actors distribute the contents of a hack through their reporting.”

Interference in the affairs of other nations by state and non-state actors is by no means a new phenomenon. However, new technologies and social channels have changed how this interference can be carried out, amplifying the instigators’ messaging.

Twitter’s microblogging service, which places a 280-character limit on messages, or ‘tweets’, makes it easy for propagandists to distil their messaging, automate bulk sending, and spread unsubstantiated claims.

Twitter also revealed evidence that tools used to spread disinformation – once considered “the domain of a small number of state-sponsored actors” – have effectively been ‘commercialised’, with companies offering services to spread disinformation, engage in engineered conversations and, ultimately, “manipulate discourse”.

According to Twitter, “this commercialisation is being exploited by hostile actors”, making it difficult to spot where the messaging originated.

“The monetisation of misinformation risks further obscuring the commercially motivated domestic actors from foreign-supported ones, highlighting the need for a broad approach to tackling this issue.”

Artificial amplification of content, a common strategy among disinformation campaigners, involves actions intended to make an account or concept appear more popular or controversial, sometimes through “inauthentic engagements” – with legitimate users effectively drawn into ‘fake’ discussions with state proxies.

This more coordinated activity was marked by efforts to artificially influence conversations, the use of multiple and often deceptive fake accounts, or a combination of both, Twitter observed.

Tracking state-backed activity

Twitter said that, as part of stepped-up efforts to detect and remove fake news content, it has since 2018 provided “greater clarity on state-backed foreign information operations” and on the suspect content being removed from the service.

Twitter has since created a comprehensive archive of tweets and media flagged as being part of state-backed operations.

“The archive is the largest of its kind in the industry, and now includes more than 160 million Tweets and more than eight terabytes of media,” Twitter said.

In a fast-changing social media environment, Twitter noted that foreign interference was an unavoidable part of a globalised communications landscape.

“While the activity may be visible on social media, it is just one part of the information ecosystem,” the submission said.

“A response that looks at what happens on social media in isolation risks neglecting the wider challenge posed by foreign information operations.”

Recently, in response to growing concerns around the spread of misinformation at the top levels of government, Twitter also took steps to label some tweets from US President Donald Trump with a notification informing users that he had “violated the service’s policy against abusive behavior (sic)”.

The company said it had also taken steps to remove paid political advertising.