The Financial Services Union (FSU) has overwhelmingly backed the recommendations made by a Senate Select Committee advising the Federal Government on actions it should take to address risks and opportunities related to artificial intelligence (AI).
The Committee laid out legislative and regulatory action points for the Federal Government, aimed not only at addressing risks but also at promoting the safe development and adoption of AI technologies by Australian entities.
The recommendations backed by the FSU include a call for the Government to introduce dedicated whole-of-economy legislation to regulate high-risk uses of AI. The Union also welcomed a recommendation it had pushed for: that workers and stakeholder organisations (including unions and peak bodies) be consulted when addressing the impact of AI implementations on workplaces.
“This is a major step towards proper AI regulation”, the FSU declared.
The inquiry received more than 240 submissions, including responses from the Insurance Council of Australia and the Financial Services Council (which called for a “cutting of red tape” where industry-specific, risk-mitigation regulation for AI already exists), big tech stakeholders Google, Microsoft, Meta, and OpenAI, and Federal Government agencies the ASD and CSIRO, as well as the FSU. Six separate public hearings were held on the matter, beginning in May this year and concluding in September.
The 12 recommendations in the Final Report for proposed action by the Australian Government include:
- The adoption of a principles-based approach to defining high-risk AI uses, supplemented by a non-exhaustive list of explicitly defined high-risk AI uses.
- The creation of a non-exhaustive list of high-risk AI uses which explicitly includes general-purpose AI models, such as large language models (LLMs).
- An increase in financial and non-financial support to grow Australia’s sovereign AI capability.
- A clarification of the final definition of high-risk AI so that it clearly includes the use of AI that impacts on the rights of people at work.
- The extension and application of the existing work health and safety legislative framework to the workplace risks posed by the adoption of AI.
- An assurance that workers, worker organisations, employers and employer organisations are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.
- The implementation of the Privacy Act review’s recommendations pertaining to automated decision-making, including Proposal 19.3 to introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.
- The implementation of recommendation 17.2 of the Robodebt Royal Commission: ‘to establish a body, or expand an existing body, with the power to monitor and audit automated decision-making processes with regard to their technical aspects and their impact in respect of fairness, the avoiding of bias, and client usability’.
- The adoption of a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.