The Australian Human Rights Commission (AHRC) and the Actuaries Institute have released new guidance to help insurers and actuaries comply with federal anti-discrimination laws when adopting artificial intelligence (AI) technologies, specifically to price or underwrite insurance products.
The Guidance outlines and explains the relevant laws, and offers strategies to help insurers adopting AI address algorithmic bias and avoid discriminatory outcomes. It also gives practical tips to help insurers minimise the risk of a successful discrimination claim arising from the use of AI to price risk.
While both the AHRC and the Actuaries Institute, the sector’s peak body, recognise the transformational impact of AI within the insurance industry, which “promises faster and smarter decision-making”, misuse of the technology, however inadvertent, could erode consumer trust, breach ethical norms, and fall foul of anti-discrimination laws.
The pair argue that the existing laws governing AI’s use in Australia are complex and opaque, leaving insurers and risk assessors, who are rapidly adopting the technology, without clear guidance.
“Australia’s anti-discrimination laws are long-standing but there is limited guidance and case law available to practitioners,” said Elayne Grace, chief executive of the Actuaries Institute.
“The complexity arising from differing anti-discrimination legislation in Australia at the federal, state and territory levels compounds the challenges facing actuaries, and may reflect an opportunity for reform.”
Grace noted that intersecting megatrends, including the proliferation of ‘big data’, increased use and power of AI and algorithmic decision-making, and growing and changing consumer awareness and expectations about what is ‘fair’, have also complicated the matter for risk assessors.
“Actuaries seek to responsibly leverage the potential benefits of these digital megatrends for the consumer, society and business. To do so with confidence, however, requires authoritative guidance to make the law clear,” Grace said.
AI is seeing rapid adoption within the insurance sector, not only for front-line customer services (in the form of the increasingly ubiquitous ‘chatbot’), but also in claims evaluation, fraud detection, price and product customisation, and in the provision of ‘proactive care’ in the health and life insurance space, which takes advantage of ‘big data’ generated through telematics and IoT sensors.
In the realm of risk pricing (which the Guidance specifically addresses), an AI system could be used to, for instance, create a dynamic pricing structure for insurance products. In this case, cheaper policies may be offered to customers assessed as low-risk, while high-risk policyholders are set on a different premium model calculated from, say, a user’s specific behaviours or customer profile.
While an attractive proposition for both the insurance sector and customers, without proper controls, such a system could introduce unintended biases that unfairly favour or disfavour certain users (based on, say, their ethnicity, disability, or gender).
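How such unintended bias can creep in is easy to sketch. In the hypothetical example below (all names, features, and figures are invented for illustration, not drawn from the Guidance), a pricing rule never sees a protected attribute directly, yet still produces systematically higher premiums for one group because it relies on a proxy feature that correlates with that attribute:

```python
import random
import statistics

random.seed(0)

def make_customer():
    # Protected attribute (e.g. membership of a demographic group).
    group = random.choice(["A", "B"])
    # Proxy feature: group B customers are assumed, for this sketch, to be
    # far more likely to live in "band 2" postcodes.
    band = 2 if random.random() < (0.8 if group == "B" else 0.2) else 1
    return {"group": group, "band": band}

def quote_premium(customer):
    # The pricing rule uses only the proxy, never the protected attribute.
    base = 500.0
    return base * (1.5 if customer["band"] == 2 else 1.0)

customers = [make_customer() for _ in range(10_000)]

# Outcome audit: compare average quoted premiums by protected attribute.
avg_by_group = {
    g: statistics.mean(quote_premium(c) for c in customers if c["group"] == g)
    for g in ("A", "B")
}
print(avg_by_group)
```

Group B pays markedly more on average even though the model never used "group" as an input: the postcode band carries the bias through indirectly. An outcome audit of this kind, comparing results across protected attributes, is one common control for surfacing such indirect discrimination before a product ships.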
According to a global PwC survey conducted last year, 25 per cent of insurance companies reported business-wide adoption of AI in 2021, up from 18 per cent in the previous year.
Professional services firm and AI specialist Genpact, in its own industry survey, found that 87 per cent of insurance carriers worldwide are investing more than $5 million in AI-related technologies each year.
“With AI increasingly being used by businesses to make decisions that may affect people’s basic rights, it is essential that we have rigorous protections in place to ensure the integrity of our anti-discrimination laws,” said Human Rights Commissioner Lorraine Finlay.
“But without adequate safeguards, there is the possibility that algorithmic bias might cause people to suffer discrimination due to characteristics such as age, race, disability, or sex,” she said.
She added: “This Guidance Resource, prepared in conjunction with the Actuaries Institute, provides practical guidance for insurers on complying with the various federal anti-discrimination laws when using AI”.
The guidance comes in response to a 2021 AHRC report, led by then-Commissioner Ed Santow, which examined the human rights impacts of new and emerging technologies, including AI-informed decision-making.