
In a world first, researchers from the CSIRO have developed a set of techniques to effectively ‘immunise’ machine learning algorithms against adversarial attacks.
As machine learning (ML) technologies increasingly pervade Australia’s public agencies and private enterprises, the CSIRO has acknowledged the potential vulnerability of ML algorithms to adversarial attack, in which malicious actors introduce small, carefully designed changes to input data that trick an algorithm into false conclusions or incorrect predictions.
According to Dr Richard Nock, machine learning group leader at CSIRO’s Data61, the agency’s dedicated data research hub, attackers can fool algorithmic models into misclassifying images, for example, simply by overlaying an image with seemingly benign ‘noise’.
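To make the idea concrete, the sketch below shows one well-known way such a noise overlay can be generated – the fast gradient sign method (FGSM) from the research literature. The model, inputs and distortion budget here are hypothetical placeholders for illustration, not details from the CSIRO work.

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Overlay an image with small, gradient-aligned 'noise' that is barely
    visible to a human but can flip the model's prediction (hypothetical
    model and inputs; epsilon bounds how strong the distortion may be)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    noisy_image = image + epsilon * image.grad.sign()
    return noisy_image.clamp(0, 1).detach()
```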
A 2017 study, for instance, showed that such attacks can trick a machine learning model into mistaking a stop sign for a speed sign – an outcome Dr Nock fears “could have disastrous effects in the real world”.
Another study, published last March by China’s Tencent Keen Security Lab, revealed how adversaries, using similar attack methods, could push a self-driving Tesla Model S onto the wrong side of the road.
To combat such attacks, the CSIRO has developed a new counter-adversarial technique that functions much like a vaccination: exposing the model to a weak form of the adversary – such as small modifications to a set of training images – to create a more ‘difficult’ training dataset that the model must overcome, building resistance to future attacks.
“When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks,” Dr Nock said in a statement.
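In the research literature, this ‘small dose of distortion’ approach is known as adversarial training. Below is a minimal sketch of one training step, reusing the hypothetical fgsm_attack helper from the earlier example to supply the distortion.

```python
def vaccinated_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One 'vaccinated' training step: distort the clean batch slightly,
    then let the model learn from the harder examples."""
    distorted = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Train on the distorted batch so the model builds resistance.
    loss = F.cross_entropy(model(distorted), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The trade-off is well documented: training on distorted batches typically costs a little accuracy on clean data in exchange for markedly better resistance to the same class of attack.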
The researchers’ mathematical ‘vaccine’, presented this month at the International Conference on Machine Learning in Long Beach, California, purportedly builds in the worst possible adversarial examples, enabling trained algorithmic models to withstand even extreme attacks.
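The paper’s construction of those worst-case examples is mathematical, but a standard way the wider field approximates a ‘worst possible’ perturbation is projected gradient descent (PGD), which searches iteratively for the distortion that hurts the model most. The sketch below illustrates that general idea only; it is not the CSIRO researchers’ actual method.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, image, label, epsilon=0.03, step=0.007, iters=10):
    """Iteratively search, within an epsilon-sized budget, for the distortion
    that maximises the model's loss - a stronger attack than one FGSM step."""
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        with torch.no_grad():
            adv = adv + step * adv.grad.sign()
            # Project back so the total distortion never exceeds the budget.
            adv = original + (adv - original).clamp(-epsilon, epsilon)
            adv = adv.clamp(0, 1)
    return adv.detach()
```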
Adrian Turner, Chief Executive of CSIRO’s Data61, has lauded the significance of these research outcomes, which show potential not only to open a new line of study within the growing field of adversarial machine learning but also to safeguard the future use of artificial intelligence (AI).
“Artificial intelligence and machine learning can help solve some of the world’s greatest social, economic and environmental challenges, but that can’t happen without focused research into these technologies,” Turner said.
Data61’s AI ‘vaccine’ follows a government-backed public consultation on a nationwide AI ethics framework, which wrapped up last month.
Turner has been vocal about Australia’s push to develop sovereign capabilities in ethical AI, citing the technology’s potential to drive competitive advantage in industries, including finance, mining, and energy, whilst delivering a $315 billion boost to Australia’s economy – if implemented with the right approach.