CSIRO report reveals how SMEs can build responsible AI

The vast majority of businesses in Australia (82 per cent) believe they practise artificial intelligence (AI) responsibly, yet fewer than a quarter have actual measures in place to ensure alignment with responsible AI practices, according to the latest Australian Responsible AI Index.

The report, released by Australia’s National AI Centre (coordinated by the CSIRO) and developed by Fifth Quadrant in partnership with Gradient Institute, looks at how businesses can effectively implement the Australian Government’s eight AI Ethics Principles.

According to National AI Centre director Stela Solar, many Australian businesses see the commercial opportunities of AI but struggle to navigate the fast-changing environment and meet expectations.

At the same time, many small and medium enterprises (SMEs) cited trust, privacy, security, data quality and skills as the top roadblocks to AI projects, and their ability to innovate with AI correlated with how quickly they could earn the trust of their communities.

On top of that, various companies pointed to a lack of appropriate checks and balances, which can lead to unintended consequences and, potentially, reputational damage to their brands.

Bill Simpson-Young, chief executive of Gradient Institute, said that even though Responsible AI practices, resources and standards would keep evolving at a fast pace, this should not distract organisations from implementing practices that are known to be effective today.

“For example, when an AI system is engaging with people, informing users of an AI’s operation builds trust and empowers them to make informed decisions. Transparency for impacted individuals could be as simple as informing the user when they are interacting with an AI system,” he said.

“While it is broadly accepted that fairness is important, what constitutes fair outcomes or fair treatment is open to interpretation and highly contextual. What constitutes a fair outcome can depend on the harms and benefits of the system and how impactful they are.

“It is the role of the system owner to consult relevant affected parties, domain and legal experts and system stakeholders to determine how to contextualise fairness to their specific AI use case. The report helps organisations address these challenges,” he said.
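
To make the transparency practice Simpson-Young describes concrete, the following is a minimal sketch of how a chat interface might disclose AI involvement before its first response. The function names and disclosure wording are hypothetical illustrations, not drawn from the report.

```python
# A minimal, hypothetical sketch of the transparency practice described above:
# informing users up front that they are interacting with an AI system.
# The interface, names and wording are illustrative assumptions, not from the report.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "You can ask to be transferred to a human at any time."
)

def start_chat_session(send_message) -> None:
    """Begin a session by disclosing AI involvement before any other output."""
    send_message(AI_DISCLOSURE)  # disclosure is the first message the user sees
    send_message("How can I help you today?")

if __name__ == "__main__":
    # In a real system, send_message would post to the chat UI rather than print.
    start_chat_session(print)
```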

The report, ‘Implementing Australia’s AI Ethics Principles: A selection of Responsible AI practices and resources’, addresses each of the eight Australian AI Ethics Principles by:

  • suggesting key practices a business can cultivate to promote the principle
  • pointing to resources that help carry out the selected practices
  • suggesting alternative courses of action where there are gaps in existing resources.

The principles are:

  • human, societal and environmental wellbeing
  • human-centred values
  • fairness
  • privacy protection and security
  • reliability and safety
  • transparency and explainability
  • contestability
  • accountability

“Even though the tools will keep on evolving at a fast pace, it should be noted that many practices are likely to stay relevant and new practices will emerge. This suggests that it is advisable that organisations invest in developing their culture and governance processes so as to eventually elevate Responsible AI to a level of standard routine – in a way that is agnostic to the particular choice of tools or resources required for execution,” the report reads.

“The need to retire practices or create new ones will eventually arise, but this should not distract organisations from the task of instituting and developing practices that are today known to be effective and are likely to continue to be for the foreseeable future.”