The Digital Platform Regulators Forum (DP-REG), a working group of Australian industry regulators, has released its third working paper, outlining the impacts of multimodal foundation models (MFMs), a type of generative artificial intelligence (genAI).
Group members, the Australian Competition and Consumer Commission (ACCC), the Australian Communications and Media Authority (ACMA), the eSafety Commissioner (eSafety) and the Office of the Australian Information Commissioner (OAIC), investigated MFMs and how their use may affect each regulator's role.
MFMs can process and output multiple data types, including text, images, audio and video, based on analysis of large datasets, and can also combine these formats.
DP-REG acknowledged the strong potential for MFMs to be adopted by consumers and businesses on a broad scale, presenting both opportunities and risks for the existing AI landscape. The group said the working paper is intended to complement ongoing government initiatives on AI.
“MFMs perform as supercharged AI creators. Give them a text prompt, and they can create an image to match. Feed them audio, and they might generate a corresponding video. Provide a picture, ask them to describe it, and they can provide a text description,” the working paper said.
“These capabilities could open many opportunities for consumer and business adoption across various industries – from generating personalised content experiences to new ways of creating music and images.
“Many of the risks associated with MFMs are similar to the limitations considered by DP-REG members in our examination of large language models (LLMs) – for example, the potential to produce unexpected outputs or outputs that are inaccurate or harmful. Although MFMs present potential opportunities for consumers and businesses, they also have the potential to amplify risks.
“The ability to generate multiple types of content, such as images, audio and video, also raises concerns about scams and deceptive practices, the spread of misinformation and disinformation, the generation of harmful content, and loss of control over personal information.”
DP-REG said the introduction of MFMs could give rise to several interconnected issues, including deepfakes and their impact on online safety, privacy, misinformation, consumer protection and trust in the digital economy, as well as the collection of personal information and online scams.
The working paper also raised concerns that first emerged with the broader rise of AI, such as the difficulty of distinguishing between genuine and AI-generated material, the use of AI to “spread and amplify” misinformation, terrorist propaganda and scams, and the challenge of regulating and enforcing rules around such a fast-moving technology.
“The Australian Government is considering proposed reforms that could enhance the ability of regulators to tackle issues related to consumer protection, competition, privacy, online safety, and misinformation and disinformation,” the working paper said.
“Both in Australia and around the world, this includes work to address the issues posed by artificial intelligence (AI). Internationally, regulators and policymakers have introduced dedicated AI legislation, self-regulatory principles and governance frameworks.
“The Australian Government is also investing in developing policies and capabilities that support the safe and responsible adoption and use of AI technology. This includes funding to support industry analytical capabilities, and coordination of AI policy development, regulation and engagement activities across government.
“These efforts also include reviewing and strengthening existing regulation in healthcare, consumer and copyright law. This working paper aims to complement and inform these broader government initiatives. DP-REG members will continue to apply our existing frameworks and engage with Government on these issues to ensure the digital economy is a safe, trusted, fair, innovative and competitive space.”