
Australian Securities and Investments Commission (ASIC) chair Joe Longo has quietly expressed concerns over deficiencies in Australian regulators' capacity to effectively regulate emerging artificial intelligence (AI) technologies.
Joining a featured regulator-only panel session at the ASIC/UTS co-hosted AI Regulators Symposium on Tuesday evening, Longo acknowledged that Australia’s regulatory bodies currently lack the “tools” and know-how, as well as the legislative firepower, to effectively regulate AI.
Longo called for a comprehensive reskilling and digital literacy uplift across Australia’s regulatory bodies to ensure compliance standards for AI technologies meet world’s best practice.
“The technologies we’re trying to regulate will require people who understand those technologies well enough to be able to understand those issues,” Longo said.
“And from a regulator’s perspective, some of the skills [we] need to actually do enforcement and regulation in this area are scarce.”
Longo, himself a highly credentialled lawyer by training, urged Australian regulators and their staff to uplift their science skillsets, and particularly their data science skillsets, to meet this challenge.
“Most regulators are dominated by lawyers, so we’re needing to retrain and give people some exposure to make them digitally literate.”
He acknowledged that, “from an evidence perspective, I’m not so sure we have all the tools”.
As well, he said, Australia’s legal framework demands “new thinking on how to establish corporate liability for machine-facilitated legal breaches.
“As a society, we never really got [corporate liability] right, frankly.”
Longo added: “Trying to prosecute a corporation in Australia is very rare and very difficult.”
He said that all stakeholders, “whether they be a superannuation fund, a bank, an insurer, or a director of a company, must make it their business to understand these technologies”.
“You have to be curious. You have to know where your data is and understand the supply chain well enough to know where your vulnerabilities are.
“It’s simply unacceptable to say, ‘The machine did it! I didn’t understand how the algorithm worked’.
“I think if we allow that kind of thinking to be acceptable then we’ve lost the game.”
Longo also rejected calls for the creation of a dedicated regulator to oversee AI – at least for now.
“I think our current regulatory architecture could probably do the job in the short term. Five years from now, who knows.”
AI a profound challenge to Australia’s national sovereignty
During the panel session, Longo also expressed concerns over AI’s potential to fundamentally undermine Australia’s national sovereignty, noting that these closed-source, patent-protected, and capital-intensive technologies (including the recently emerged large language models ChatGPT, Llama, and Copilot) are being developed almost exclusively overseas by big tech corporations.
“The explosive impact [of AI technologies] over the last 18 months, particularly with the release of ChatGPT… does pose a profound challenge to national sovereignty.
“At the bottom, these technologies are basically being developed by the US and China. And vast amounts of capital are required to do so.”
He noted that historically, academia “led the way” with machine learning developments. However, over recent years, the development and innovation of these technologies have been increasingly captured by the private sector.
“Trust in the economy is also going down because people are worried about the implications of this technology and whether it’s safe or not,” he said.
“From a regulatory perspective and policy perspective, that should worry all of us because my basic position on getting through life is trust.
“Every day you trust hundreds of strangers – when, say, going to the doctor, taking public transport, or talking to an insurance company – most of the time we take at face value what we’re told.
“What AI is beginning to teach a lot of people is that you can’t have that trust. That poses a big challenge for regulators.”
He concluded: “The challenge to national sovereignty is equalled only by the erosion of free will, which is essentially what [we’re facing] in interacting with these technologies.
“That’s a huge problem for us.”