DTA pilots AI assurance framework with select depts., agencies


The Digital Transformation Agency (DTA) has commenced a pilot of a new artificial intelligence (AI) assurance framework with select Australian Government departments and agencies.

The information collected through the pilot, from departments and agencies at varying stages of their AI journey, will guide the DTA’s recommendations to the government on the range of AI assurance environments it will need to navigate in the future.

The pilot also follows the release of the DTA’s policy for the responsible use of AI in government, and will work towards establishing the federal government as a “leader in the safe, ethical and responsible use of AI”.

“Hearing from these diverse agency perspectives is invaluable for us,” Lucy Poole, General Manager of Strategy, Planning and Performance, said.

“Their insights will help refine the assurance framework and guidance to ensure they work effectively in different contexts. Our guidance is iterative. It is meant to change and adapt based on the shifting AI landscape within the APS.

“The framework and guidance are subject to amendments based on feedback from pilot participants and other stakeholders.”

The draft assurance framework will see agencies undertake a ‘threshold assessment’ that asks them to assess the impact of their AI use cases against Australia’s existing AI Ethics Principles, weighing up the risks and benefits of AI-driven solutions compared with non-AI alternatives.

“This draft does not represent a final Australian Government position on AI assurance,” Poole said.

“We want agencies to carefully consider viable alternatives. For instance, non-AI services could be more cost-effective, secure, or dependable.

“Evaluating these options will help agencies understand the advantages and limitations of implementing AI. This enables them to make a better-informed decision on whether to move forward with their planned use case.”

The draft framework seeks to help agencies identify, navigate and mitigate risks arising from AI use at each stage of the ‘AI lifecycle’, by asking them to document the measures in place to ensure their AI use is responsible.

The full assessment will consider all eight of Australia’s AI Ethics Principles, including:

  • Fairness. Agencies are to reflect on potential biases arising from training data that is incomplete, unrepresentative, or reflective of societal prejudices. AI models may reproduce these biases, generating misleading or unfair outputs, insights, or recommendations that may disproportionately impact some groups.
  • Reliability and safety. Our draft framework and guidance provide suggestions for how agencies should consider ensuring the reliable and safe delivery and use of AI systems. We particularly focus on data suitability, Indigenous data governance, AI model procurement, testing, monitoring, and preparedness to intervene or disengage.
  • Privacy protection and security. Privacy protection, data minimisation, and security under Australian regulations are vital to the development and roll-out of AI services. These solutions must comply with the Australian Privacy Principles, use privacy-enhancing technologies, and undergo mandatory privacy impact assessments for high-risk projects.
  • Transparency and explainability. Our resources highlight the need to consult diverse stakeholders, maintain public visibility, document AI systems, and disclose AI interactions and outputs. They also provide guidance on offering appropriate explanations and maintaining reliable records.

“By recognising and addressing public concern regarding AI, the policy aims to strengthen trust through transparency, accountability, and responsible implementation,” Poole said.

“This is achieved through mandatory transparency statements and appointing accountable officials for AI. Our goal is to provide a unified approach for government agencies to engage with AI confidently. It establishes baseline requirements for governance, assurance, and transparency, removing barriers to adoption and encouraging safe use for public benefit.”

The DTA confirmed this framework is intended to “complement and strengthen – not replace or duplicate” frameworks, legislation and practices that are already in place to guide the government’s use of AI.