
The Commonwealth Bank (CBA) has released details on an artificial intelligence (AI) model it has used to root out abusive messages sent via online financial transactions, with Australia’s biggest bank revealing the technology was used to block more than 100,000 transactions in just three months.
The new model was developed in-house by CommBank’s AI Labs team – the bank’s dedicated artificial intelligence research and development arm.
According to the bank, the new tech has enabled it to “proactively identify instances of technology-facilitated abuse, a targeted form of domestic and family violence”.
“The AI model complements the Bank’s automatic block filter that was implemented last year across its digital banking channels to stop transaction descriptions that include threatening, harassing or abusive language.”
The AI identifies potentially abusive messages – which, CBA notes, often contain offensive language – sent through transaction descriptions made via the CommBank app and the NetBank online banking portal.
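CBA has not published how its model works internally, but a screening pipeline of this kind is often described as pairing a hard keyword block with a softer model-based score that routes borderline messages to human review. The Python sketch below is purely illustrative of that two-stage pattern; the word lists, function names, scoring heuristic and thresholds are hypothetical and do not come from CBA.

```python
import re

# Hypothetical blocklist for illustration; CBA has not published its filter terms.
BLOCKLIST = {"threat", "hurt", "kill"}

def hard_block(description: str) -> bool:
    """Stage 1: reject a description outright if it contains a blocklisted word."""
    words = set(re.findall(r"[a-z']+", description.lower()))
    return bool(words & BLOCKLIST)

def abuse_score(description: str) -> float:
    """Stage 2: stand-in for a trained classifier's probability of abuse.

    A real system would score the text with a model trained on labelled
    transaction descriptions; here a crude word-ratio heuristic is used
    purely to make the sketch runnable.
    """
    words = re.findall(r"[a-z']+", description.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BLOCKLIST | {"scared", "watching"})
    return min(1.0, hits / len(words) * 3)

def screen(description: str, review_threshold: float = 0.5) -> str:
    """Route a transaction description: block, queue for manual review, or allow."""
    if hard_block(description):
        return "blocked"
    if abuse_score(description) >= review_threshold:
        return "manual review"
    return "allowed"

if __name__ == "__main__":
    for msg in ["rent for may", "i am watching you"]:
        print(msg, "->", screen(msg))
```

The manual-review branch in this toy pipeline mirrors the process the bank describes below, in which flagged senders are escalated to human reviewers rather than being blocked automatically.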
Within a three-month period, from 1 May to 31 July 2021, the AI detected, and helped to automatically filter and block, more than 100,000 transactions containing “offensive content”, the bank said.
“Of those instances, the new AI model detected 229 unique senders of potentially serious abuse, which were then manually reviewed to determine severity and the appropriate action required from the Bank.”
Justin Tsuei, CBA’s general manager of community and customer vulnerability, said the new model’s use of AI and machine learning techniques has enabled the bank to deliver “a more targeted and proactive response than ever before”.
“It builds on the work we have already done to fortify our digital channels from being used to perpetrate technology-facilitated abuse, including updating our Acceptable Use Policy and implementing an automatic block on offensive language being used in transaction descriptions,” Tsuei said.
He noted that the AI model not only allows the bank to “proactively detect possible instances of abuse in transaction descriptions, but… do so at an incredible scale”.
Once abuse is detected in transactions, CBA notes that, depending on its severity, the bank can “de-link” the victim-survivor’s bank account from PayID, preventing perpetrators from using their email address, mobile number or ABN to send victim-survivors abusive transactions; set up new “safe accounts” for victim-survivors; and, in escalated cases, send perpetrators a warning letter or, in extreme cases, have their banking relationships “terminated” for breach of CBA’s Acceptable Use Policy.
Speaking about the bank’s dedicated AI Labs team in a podcast earlier this year, CBA chief data scientist Dan Jermyn said the team places the bank “at the cutting edge of what’s happening in AI, with an eye to making sure that we are developing capabilities that can then actually be used within the bank and across the group”.