Artificial intelligence algorithms are rapidly becoming part of everyday life. Many systems that require strong security are already or will soon be underpinned by machine learning. These systems include facial recognition, banking, military targeting applications, robots and self-driving cars, to name a few.
This raises an important question: How secure are these machine learning algorithms against malicious attacks?
In an article published today in Nature Machine Intelligence, a University of Melbourne colleague and I discuss potential solutions to vulnerabilities in machine learning models.
We propose that integrating quantum computing into these models may lead to new algorithms with strong resilience to adversarial attacks.
Risk of data manipulation attacks
Machine learning algorithms are highly accurate and efficient at many tasks. They are especially useful for classifying and identifying image features. However, they are also highly vulnerable to data manipulation attacks, which can pose a serious security risk.
Data manipulation attacks, which involve highly sophisticated tampering with image data, can be launched in several ways. One way is to mix corrupted data into the dataset used to train the algorithm, causing it to learn things it shouldn’t.
Manipulated data can also be injected during the testing phase (after training is complete) if the AI system continues to train the underlying algorithms while in use.
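To make this concrete, below is a toy sketch in Python (illustrative only, not taken from our paper) of one of the simplest poisoning strategies, label flipping, in which an attacker silently changes the labels on a small fraction of training examples so the algorithm learns from corrupted data. All names and numbers here are made up for illustration.

```python
# Toy illustration of training-data poisoning by label flipping.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 tiny 8x8 "images" with binary labels.
X = rng.normal(size=(200, 8 * 8))
y = rng.integers(0, 2, size=200)

def poison_labels(y, fraction=0.1, rng=rng):
    """Flip the labels of a randomly chosen subset of training examples."""
    y_poisoned = y.copy()
    n_poison = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    return y_poisoned

y_poisoned = poison_labels(y, fraction=0.1)
print("labels changed:", int((y != y_poisoned).sum()))  # 20 of 200
```

Even modest amounts of this kind of corruption can measurably degrade a model’s accuracy, and more sophisticated poisoning can be far harder to detect.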
Such attacks can even be carried out in the physical world. Someone could put a sticker on a stop sign to trick a self-driving car’s AI into reading it as a speed limit sign. Or, on the front lines, armed forces might wear uniforms designed to fool AI-based drones into identifying them as landscape features.
Either way, the consequences of data manipulation attacks can be severe. For example, a self-driving car that relies on a compromised machine learning algorithm could incorrectly predict that there are no humans on the road when there are.
How quantum computing can help
In our article, we describe how integrating quantum computing with machine learning could produce secure algorithms called quantum machine learning models.
These algorithms are carefully designed to exploit special quantum properties, allowing them to find patterns in image data that cannot be easily manipulated. The result is a resilient algorithm that is safe even against powerful attacks. It would also remove the need for the expensive “adversarial training” currently used to teach algorithms how to counter such attacks.
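For readers curious what such an attack looks like in practice, here is a simplified, hypothetical sketch of the fast gradient sign method (FGSM), one well-known way of crafting adversarial examples, applied to a toy logistic-regression classifier. It is not our quantum approach, just an illustration of the kind of manipulation that adversarial training is designed to counter.

```python
# Simplified sketch of an FGSM-style adversarial perturbation against a
# toy logistic-regression "image" classifier (all values are synthetic).
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a 64-pixel image classifier.
w = rng.normal(size=64)
b = 0.0

x = rng.normal(size=64)   # a clean input image (flattened)
y = 1.0                   # its true label

# Gradient of the cross-entropy loss with respect to the *input* x:
# dL/dx = (prediction - label) * w  for logistic regression.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

print("prediction on clean input:     %.3f" % sigmoid(w @ x + b))
print("prediction on perturbed input: %.3f" % sigmoid(w @ x_adv + b))
```

In adversarial training, perturbed examples like `x_adv` are generated repeatedly during training and mixed back into the dataset, which is a large part of why the procedure is so computationally expensive.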
In addition, quantum machine learning has the potential to enable faster algorithm training and more accurate learning functions.
So how does it work?
Today’s classical computers work by storing and processing information as “bits”, or binary digits, the smallest units of data a computer can process. Classical computers, which follow the laws of classical physics, represent each bit as either a 0 or a 1.
Quantum computing, on the other hand, follows the principles used in quantum physics. Information in a quantum computer is stored and processed as quantum bits (qubits) that can exist simultaneously as 0, 1, or a combination of both. A quantum system that exists in more than one state at the same time is said to be in a superposition state. Quantum computers allow us to design clever algorithms that take advantage of this property.
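As a rough illustration (a toy classical simulation, not a real quantum computation), the state of a single qubit can be written as a vector of two amplitudes. Applying a Hadamard gate to a qubit that starts as 0 puts it into an equal superposition, where a measurement returns 0 or 1 with equal probability:

```python
# Minimal state-vector sketch of a single qubit in superposition.
import numpy as np

ket0 = np.array([1.0, 0.0])                  # the |0> state
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)     # Hadamard gate

psi = H @ ket0                               # |psi> = (|0> + |1>) / sqrt(2)
probabilities = np.abs(psi) ** 2             # measurement probabilities

print("amplitudes:   ", psi)            # [0.7071, 0.7071]
print("probabilities:", probabilities)  # [0.5, 0.5] -- 0 and 1 equally likely
```

A register of n qubits is described by 2ⁿ such amplitudes, which is part of what gives quantum algorithms room to explore patterns that are out of reach for classical computers.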
However, while there are great potential benefits to using quantum computing to secure machine learning models, it can also be a double-edged sword.
On the one hand, quantum machine learning models could provide important security for many sensitive applications. On the other hand, quantum computers could be used to generate powerful adversarial attacks, easily fooling even state-of-the-art conventional machine learning models.
Moving forward, we need to think seriously about how best to secure our systems; an adversary who gains access to an early quantum computer would pose a significant security threat.
Limits to overcome
Current evidence suggests that quantum machine learning is still years away from becoming a reality due to the limitations of the current generation of quantum processors.
Today’s quantum computers are relatively small (less than 500 qubits) and have high error rates. Errors can occur for several reasons, including imperfect fabrication of qubits, errors in control circuitry, and loss of information due to interaction with the environment (called “quantum decoherence”).
Still, we have seen significant advances in quantum hardware and software over the past few years. According to recent quantum hardware roadmaps, quantum devices manufactured in the next few years are expected to contain hundreds to thousands of qubits.
These devices should be able to run powerful quantum machine learning models that help protect a wide range of industries that rely on machine learning and AI tools.
Around the world, governments and the private sector alike are increasing their investment in quantum technology.
The Australian government this month launched a National Quantum Strategy aimed at growing the country’s quantum industry and commercializing quantum technologies. Australia’s quantum industry could be worth around A$2.2 billion by 2030, according to CSIRO.