DTA releases new policy for responsible AI use in Govt


The Digital Transformation Agency (DTA) has released a new policy to inform the whole-of-government responsible use of artificial intelligence (AI).

The policy, which comes into effect on 1 September, follows significant progress over the last 12 months in improving the Australian Public Service's (APS') approach to emerging AI technologies, including major stakeholder consultations and the recently concluded AI in Government Taskforce, co-led by the DTA and the Department of Industry, Science and Resources (DISR).

To maintain consistency with other whole-of-government digital policies, the new policy will apply to all non-corporate Commonwealth entities (NCEs), with exceptions for the defence portfolio and several organisations in the national intelligence community (NIC), including the Australian Signals Directorate (ASD), the Australian Security Intelligence Organisation (ASIO) and certain functions within the Australian Transaction Reports and Analysis Centre (AUSTRAC), the Australian Federal Police (AFP), the Department of Home Affairs and the Department of Defence.

Built on the existing ‘enable, engage and evolve’ framework, the policy informs the APS on how to:

  • “embrace the benefits of AI by engaging with it confidently, safely and responsibly;
  • strengthen public trust through enhanced transparency, governance and risk assurance; and
  • adapt over time by embedding a forward-learning approach to changes in both technology and policy environments.”

“This policy will ensure the Australian Government demonstrates leadership in embracing AI to benefit Australians,” Lucy Poole, General Manager for Strategy, Planning, and Performance, said.

“Engaging with AI in a safe, ethical and responsible way is how we will meet community expectations and build public trust.”

According to the DTA, agencies will need to:

  • “safely engage with AI to enhance productivity, decision-making, policy outcomes and government service delivery by establishing clear accountabilities for its adoption and use;
  • identify accountable officials and provide them to the DTA within 90 days of the policy effect date;
  • use proportional, targeted risk mitigation and ensure their use of AI is transparent and explainable to the public [to protect Australians from harm];
  • publish a public transparency statement outlining their approach to adopting and using AI within 6 months of the policy effect date; and
  • [maintain] flexibility and adaptability… to accommodate technological advances, requiring ongoing review and evaluation of AI uses, and embedding feedback mechanisms throughout government.”

The DTA has also published a standard for accountable officials (AOs) to understand what is required and how to support their agencies’ implementation of the policy, as well as to ensure they foster a culture that “balances risk management and innovation” and contributes to cross-government collaboration.

“We’re encouraging AOs to be the primary point of partnership and cooperation inside their agency and between others,” Poole said.

“They connect the appropriate internal areas to responsibilities under the policy, collect information and drive agency participation in cross-government activities.

“Whole-of-government forums will continue to support a coordinated integration of AI into our workplaces and track current and emerging issues.”

The DTA also said it would publish a standard to guide agencies in creating AI transparency statements that, in Poole’s words, use “clear, plain language and avoid technical jargon”. Statements should cover:

  • “intentions for why it uses or is considering adoption of AI;
  • categories of use where there may be direct public interaction without a human intermediary;
  • governance, processes or other measures to monitor the effectiveness of deployed AI systems;
  • compliance with applicable legislation and regulation; and
  • efforts to protect the public against negative impacts.”