New Zealand financial services firms’ initial interest in developing proprietary artificial intelligence (AI) platforms has failed to materialise “as anticipated”, with firms increasingly shifting their focus to procuring third-party solutions, a new report by the Financial Markets Authority (FMA) has revealed.
The report, based on an industry-wide survey gauging New Zealand financial firms’ adoption of AI, reveals a decidedly cautious approach to the implementation and use of these technologies.
Of the 13 regulated entities surveyed – including deposit takers, insurers, asset managers, and financial advice providers – at least nine were using AI technologies in one or more aspects of their operations.
However, the use of these technologies has generally been limited to risk identification, assessment and mitigation, with few use cases identified in wider business practices.
“We recognise that security and risk management are fundamental to these organisations’ operations. In essence, it makes sense that providers are being cautious in their approach, and it is good that they are,” the FMA wrote.
Broken down by industry, insurers and financial advice providers were primarily seeking to leverage AI technologies to improve operational efficiency and customer outcomes. Deposit takers reported mostly using AI in risk management (and notably fraud detection), a use case less evident among insurers and financial advice providers.
The FMA overall noted a decidedly “cautious approach to generative AI”, with simpler machine learning tools – a precursor to more advanced deep learning systems – the prevailing ‘artificial intelligence’ use case among respondents.
A number of respondents “explicitly mentioned” that they are still ‘focusing on how to [use generative AI] safely’, the regulator said.
While off-the-shelf AI solutions are currently favoured over in-house developments, the FMA did note one respondent’s “certain degree of success” in building a proprietary web-based ChatGPT analogue.
Development of the large language model (LLM) system was, the firm reported to the FMA, “motivated by an interest in keeping internal data away from third-party platforms of unknown risk”.
“Another benefit is the model’s ability to learn from internal training data, with a bespoke tool being a better fit for them than an off-the-shelf one,” it added.
Among the current AI use cases highlighted by respondents were:
- Off-the-shelf tools such as Microsoft Copilot and GitHub are being trialled to assist teams with efficiency, documentation and knowledge support in product development across the engineering SDLC, for research, and as an alternative means of verifying code reviews.
- Machine learning tools such as Darktrace, Databricks and Hugging Face are being used in security and fraud detection to identify patterns, monitor and track behaviours, build predictive models, determine product pricing, and support personalisation.
- Web automation tools such as Miro and Zapier are being used to understand customer behaviours, automate credit decision-making, scrape documents, automate standard-form documents, experiment with GenAI-generated code, and automate testing.
- Security and detection tools such as Darktrace, Egress and FRISS are applying self-learning AI to detect and respond to cyber threats, using machine learning algorithms to model normal behaviour and flag anomalies or deviations (a minimal sketch of this approach follows the list).
- Customer service tools such as chatbots are being used to summarise interactions, respond to consumer needs, provide chatbot search options, and draft documents in a customer-friendly way.
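Neither the FMA report nor the respondents published implementation details, but the “learn normal behaviour, flag deviations” approach the security tools above describe can be illustrated with a minimal unsupervised anomaly-detection sketch. The feature names, data, and thresholds below are hypothetical, not drawn from any surveyed firm:

```python
# Minimal sketch of unsupervised anomaly detection: fit a model on
# "normal" behaviour, then flag deviations. All data here is synthetic
# and the per-session features are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [logins_per_hour, megabytes_sent]
normal_sessions = rng.normal(loc=[5, 20], scale=[1, 5], size=(1000, 2))

# Learn what "normal" looks like; contamination is the assumed anomaly rate.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_sessions)

# Score unseen sessions: predict() returns 1 for normal, -1 for anomalous.
new_sessions = np.array([
    [5.2, 19.0],    # typical behaviour
    [40.0, 950.0],  # sharp deviation, likely flagged
])
print(model.predict(new_sessions))  # e.g. [ 1 -1]
```

Commercial tools build far richer behavioural baselines than this, but the underlying pattern is the same: no labelled fraud examples are needed, only a model of what ordinary activity looks like.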
Future adoption of AI is largely motivated by an interest in improving customer outcomes and operational efficiencies. Risk management solutions and improved data analytics were also cited as compelling reasons for adoption.
Fraud detection, the FMA noted, was less frequently considered as a future motivation for adoption “given it is already being employed by responding organisations”.
Commenting on the report, FMA chief economist Stuart Johnson said the survey sought “to understand both the benefits and the risks to inform more oversight”.
“AI is a transformative technology, and application is evolving at pace. As financial services integrate AI, understanding its potential and limitations is crucial. Our findings emphasise the need for a balanced approach to harness AI’s benefits while addressing governance and risk concerns.
“We are aware that AI adoption in financial services offers significant opportunities but also presents emerging risks. Our research shares how institutions are beginning to navigate these challenges and the importance of robust governance structures.
“Our focus on AI governance and risk management reflects our commitment to ensuring that financial innovations are introduced responsibly. By understanding current practices and future directions, we aim to support a balanced regulatory approach.”
He added that the regulator remains “technology-neutral and pro-innovation”.
“We believe that New Zealanders should have access to the same technological advancements as those in other countries. We want to see firms leverage AI to improve consumers’ experiences in financial markets and services.”
The FMA will host a roundtable on 1 October 2024 with the study participants to further explore how AI and GenAI are being used in New Zealand’s financial services and how firms are managing risks.