Benefits of AI will outweigh risks for FSIs – if used right

Artificial intelligence FS-ISAC white paper

While fears that artificial intelligence (AI) could spark a new cyber arms race are in many ways justified, financial services institutions (FSIs) that deploy these technologies properly can, on balance, reap net benefits – both as an efficiency-boosting tool and as a powerful counteroffensive weapon against next-generation hackers, according to global cyber intelligence-sharing group the FS-ISAC.

The financial services industry’s dedicated cyber intelligence-sharing community has released a “first-of-its-kind” whitepaper series detailing an extensive array of standards, tactics, and guidance to support FSIs in safely adopting AI technologies across various business functions.

Developed by the FS-ISAC’s AI Risk Working Group, the series of six separate whitepapers outlines practical frameworks and tactics that financial services firms “can customise to their size, needs, and risk appetites according to each relevant function in the institution”.

These include a breakdown of today’s adversarial AI threat landscape (including ‘AI poisoning’ and deepfakes), key considerations when building AI into cyber defences, and principles for responsible AI. The papers also provide best-practice policies for using generative AI technologies, taking account of the increasingly observed phenomenon of AI “hallucinations”, where large language models (LLMs) produce incoherent, incorrect, or even prejudiced outputs presented as ‘facts’.

The papers also explore ethical problem/solution scenarios arising from improper applications of AI technologies, including a generative AI bot producing “racially homogenous advertising images”, an investor chatbot that makes inappropriate comments, and a bank unintentionally using biased outputs from an AI mortgage-decisioning system, ultimately affecting the fairness of decisions made by front-line loan officers.

Designed as “additive resources”, the whitepapers, the FS-ISAC notes, have been developed with expertise from government agencies, standards bodies, academic researchers, and financial services partners including the FSSCC and BPI/BITS, and draw on NIST’s AI Risk Management Framework.

“While AI promises breakthroughs in the financial services industry, there are a plethora of risk factors that the sector needs to be aware of, both when integrating AI into internal processes as well as building cyber defences against threat actors utilizing AI tools,” said Michael Silverman, the FS-ISAC’s vice president of strategy and innovation.

“It is integral to operational safety and the very foundation of trust in the financial services industry that the sector aligns on how to counteract the risks that AI poses.”

Benjamin Dynkin, chair of the FS-ISAC’s AI Risk Working Group, said the papers “provide point-in-time guidance on using AI securely, responsibly, and effectively, while offering tangible steps the sector can take to counteract the rising risks associated with AI”.

The six whitepapers are:

  1. Adversarial AI Frameworks: Taxonomy, Threat Landscape, and Control Frameworks: Defines and maps the existing threats associated with AI and characterizes the types of attacks and vulnerabilities this new technology presents to the financial services industry, as well as security controls that can be used to address those risks.
  2. Building AI into Cyber Defenses: Highlights financial services’ key considerations and use cases for leveraging AI in cybersecurity and risk technology.
  3. Combating Threats and Reducing Risks Posed by AI: Outlines the mitigation approaches necessary to combat the external and internal cyber threats and risks posed by AI.
  4. Responsible AI Principles: Examines the principles and practices that ensure the ethical development and deployment of AI in alignment with industry standards.
  5. Generative AI Vendor Evaluation and Qualitative Risk Assessment: A customizable tool designed to help financial services organizations assess, select, and make informed decisions about generative AI vendors while managing associated risks.
  6. Framework of Acceptable Use Policy for External Generative AI: Guidelines to assist financial services organizations in developing an acceptable use policy when incorporating external generative AI into security programs.