The FS-ISAC, the global financial services industry’s cyber intelligence collective, has released first-of-its-kind guidance to help senior executives and board members understand and gauge the threat of deepfake technologies.
The Deepfake Taxonomy, part of the FS-ISAC’s Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks report [pdf], defines and details emerging risks and immediate threats posed to financial services organisations by deepfake technologies.
“Deepfakes – synthetic media generated using advanced artificial intelligence – have become increasingly sophisticated, enabling threat actors to impersonate executives, employees, and customers to bypass conventional security measures,” the FS-ISAC writes.
The term ‘deepfake’ is a portmanteau of “deep learning” and “fake”, with the former referring to the generative AI (GenAI) models used to produce these synthetic media. Deepfake videos can replace a person’s likeness with someone else’s in existing images or footage, for example, while deepfaked audio can mimic a person’s voice with striking accuracy, the FS-ISAC notes.
The threats posed by deepfakes to financial services organisations are manifold and, at their worst, fundamentally undermine trust in the financial system.
“By exploiting the human element of trust that underpins financial transactions and decision-making processes, deepfakes allow cyber criminals to defraud financial institutions and their customers, steal money and data, and sow confusion and disinformation.”
A Deloitte report, cited by the FS-ISAC, projects that losses from deepfake and other AI-generated frauds could reach US$40 billion (AU$61 billion) in the US alone by 2027.
According to a recent survey, at least one in 10 businesses has reported facing threats from deepfakes.
Examples of deepfake threats include the use of a deepfaked voice to trick voice recognition software designed to verify a caller’s identity.
Another attack vector targets senior members of a business, using deepfake technologies to convincingly mimic an employee’s appearance or voice and thereby breach an internal security barrier. By mimicking a senior executive, the same method could also be used to spread misinformation about a business.
The FS-ISAC’s Deepfake Taxonomy, designed for non-cyber experts (future reports will target cyber experts in this space, the group confirmed), outlines financial institutions’ risks, including information security breaches, market manipulation, direct fraud against customers and clients, and reputational harm from disinformation campaigns.
The report is also designed to help financial firms determine which deepfake threat categories pose the greatest risk to them and identify implementable controls to mitigate those risks.
“The potential damage of deepfakes goes well beyond the financial costs to undermining trust in the financial system itself,” said Michael Silverman, chief strategy and innovation officer at the FS-ISAC.
“To address this, organisations need to adopt a comprehensive security strategy that promotes a culture of vigilance and critical thinking to stay ahead of these evolving threats.”