eSafety Commissioner hails CBA’s anti-abuse tech


The Commonwealth Bank’s (CBA’s) efforts to stamp out technology-facilitated abuse have received high praise from eSafety Commissioner Julie Inman Grant, who singled out the bank’s anti-abuse AI tool as a model of best practice.

Speaking at the ASIC/UTS co-hosted AI Regulators Symposium on Tuesday evening, Inman Grant praised CBA's in-house AI abuse-detection tool, which was released to all financial institutions as open-source code late last year, and the bank's willingness to share the capability freely.

The tool is designed to scan transaction activity, and in particular the free-text messages attached to transactions, to identify patterns and instances of financial abuse.
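CBA's published code is not reproduced here; as a rough, illustrative sketch of the general approach described above, the Python below flags transaction messages that match abusive language patterns and surfaces repeat sender-recipient pairs. The `Transaction` fields, the `ABUSIVE_PATTERNS` list and the two-hit threshold are all assumptions for illustration, not CBA's actual implementation.

```python
import re
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical pattern list for illustration only; a production system
# would use a trained classifier and a vetted lexicon, not hard-coded regexes.
ABUSIVE_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bworthless\b", r"\bnobody wants\b", r"\byou'll never\b")
]


@dataclass
class Transaction:
    sender: str
    recipient: str
    amount: float
    message: str  # free-text message attached to the payment


def is_abusive(message: str) -> bool:
    """Return True if the message matches any abusive pattern."""
    return any(p.search(message) for p in ABUSIVE_PATTERNS)


def flag_abuse(transactions):
    """Group flagged messages by sender-recipient pair, so that repeated
    low-value 'message-only' payments surface as a pattern, not one-offs."""
    hits = defaultdict(list)
    for tx in transactions:
        if is_abusive(tx.message):
            hits[(tx.sender, tx.recipient)].append(tx)
    # Repeated hits between the same pair are stronger evidence of abuse
    # than a single flagged message (threshold of 2 is an assumption).
    return {pair: txs for pair, txs in hits.items() if len(txs) >= 2}


if __name__ == "__main__":
    sample = [
        Transaction("a", "b", 0.01, "you're worthless"),
        Transaction("a", "b", 0.01, "nobody wants you around"),
        Transaction("c", "d", 50.00, "rent for May"),
    ]
    for (sender, recipient), txs in flag_abuse(sample).items():
        print(f"{sender} -> {recipient}: {len(txs)} flagged messages")
```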

“When we were talking to women who were experiencing technology-facilitated abuse, those who, for instance, were receiving child support payments with microaggressions within [online transaction] messages, the Commonwealth Bank took it seriously.”

“They used our ‘Safety by Design’ framework and open-sourced that technology for use by banks all over the world,” Inman Grant said.

She added: “I want to call out best practice, because there’s some really good stuff that’s being done here and we want to share these solutions too.

“There’s so much work to be done and we want to be partners with industry in doing this.”

She also revealed that the eSafety Commissioner's office is working with industry partners to develop a dedicated system for identifying technology-facilitated sexual harassment in the workplace.

Inman Grant, who is serving her second five-year term as eSafety chief, urged businesses to make better use of their "vast troves" of data to detect and remove suspicious or abusive content in online channels.

She further questioned why companies were not turning their finely tuned marketing technologies, such as targeted advertising tools, to detecting and stamping out abusive, sexual or hateful content, instead leaving the "burden on users" to do so.

“I often ask why these companies aren’t using similar technologies that are available to them to detect hateful speech or to detect sexualised comments,” she said.

“They’re not really embracing and utilising the promise of technology to safeguard people, or [instead they’re] using it for profit incentives to drive engagement.”