Google DeepMind has developed an AI model that could improve the performance of quantum computers by correcting errors more effectively than any existing method, bringing these devices a step closer to broader use.
Quantum computers perform calculations on quantum bits, or qubits, which are units of information that can store multiple values at the same time, unlike classical bits, which hold either a 0 or a 1. These qubits, however, are fragile and prone to mistakes when disturbed by factors like environmental heat or a stray cosmic ray.
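To make that distinction concrete, here is a minimal sketch in Python (an illustration, not anything from the research): a qubit's state is a pair of amplitudes rather than one definite value, and even a small disturbance skews the odds of each measurement outcome.

```python
# A classical bit is 0 or 1; a qubit's state is a pair of complex
# amplitudes over |0> and |1>, so it carries more than one value at once.
import numpy as np

bit = 1                                    # classical: one definite value
qubit = np.array([1, 1]) / np.sqrt(2)      # equal superposition of |0> and |1>

# Measuring collapses the qubit; each outcome's probability is |amplitude|^2.
print(np.abs(qubit) ** 2)                  # [0.5 0.5]

# "Fragile": a small stray rotation (e.g. from heat) skews those odds.
theta = 0.1
noise = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
print(np.abs(noise @ qubit) ** 2)          # no longer an even 50/50 split
```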
To correct these mistakes, researchers can group qubits together to form a so-called logical qubit, in which some of the qubits are used for computation while others are reserved for error detection. The readings from the latter qubits must be interpreted, often by a classical algorithm, to work out how to correct the errors, a process called decoding. This is a difficult task, and it is closely tied to the overall error-correction capacity of a quantum computer, which, in turn, dictates its ability to run useful real-world tasks.
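The article doesn't detail the codes involved, but a toy example shows what decoding means in principle. The sketch below uses the three-qubit repetition code, the simplest error-correcting code: two parity checks play the role of the error-detection qubits, and the decoder is just a lookup table from their readings (the syndrome) to a correction. Decoders for real codes, such as the surface code, face a vastly harder version of this same mapping.

```python
# Toy illustration of "decoding": the three-qubit repetition code.
# One logical bit is stored as three physical bits; two parity checks
# (standing in for the error-detection qubits) yield a syndrome that
# locates any single bit-flip without reading the data directly.

def syndrome(bits):
    """Parity of neighbouring bits, as the check measurements would report."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# The decoder is a rule mapping each possible syndrome to a correction.
DECODER = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip bit 0 back
    (1, 1): 1,     # flip bit 1 back
    (0, 1): 2,     # flip bit 2 back
}

def decode_and_correct(bits):
    flip = DECODER[syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1
    return bits

# A cosmic ray flips the middle bit of a logical "1" (encoded as 1, 1, 1):
print(decode_and_correct([1, 0, 1]))  # -> [1, 1, 1]
```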
Now, Johannes Bausch at Google DeepMind and his colleagues have developed an artificial intelligence model, called AlphaQubit, that can decode these errors more accurately and more quickly than any existing algorithm.
“Designing a decoder for quantum error correction code is, if you’re interested in very, very high accuracy, highly non-trivial,” Bausch told journalists at a press briefing on 2 November. “AlphaQubit learns this high-accuracy decoding task without a human to actively design the algorithm for it.”
To train AlphaQubit, Bausch and his team used a transformer neural network, the same technology that powers their Nobel prize-winning protein-structure-prediction AI, AlphaFold, and large language models like ChatGPT, to learn how the readings from error-detecting qubits correspond to errors on the qubits. They first trained the model on data from a simulation of what the errors would look like, before fine-tuning it on real-world data from Google's Sycamore quantum computing chip.
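DeepMind's actual training code isn't given in the article, so the following is only a rough sketch of that two-stage recipe, written with a generic transformer in PyTorch; the model, the toy shapes and the placeholder data generator are all assumptions for illustration, standing in for a proper noise simulator and for Sycamore measurement records.

```python
# Sketch of the two-stage recipe: pretrain a transformer decoder on
# plentiful simulated syndrome data, then fine-tune on scarcer hardware
# data. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

ROUNDS, CHECKS = 25, 8  # syndrome rounds x check qubits (toy sizes)

class ToyDecoder(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.embed = nn.Linear(CHECKS, d_model)   # one token per round
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)         # did the logical qubit flip?

    def forward(self, syndromes):                 # (batch, ROUNDS, CHECKS)
        h = self.encoder(self.embed(syndromes))
        return self.head(h.mean(dim=1)).squeeze(-1)

def train(model, batches, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for syndromes, labels in batches:
        opt.zero_grad()
        loss_fn(model(syndromes), labels).backward()
        opt.step()

def fake_batches(n):  # placeholder for a real noise simulator / chip data
    return [(torch.randint(0, 2, (32, ROUNDS, CHECKS)).float(),
             torch.randint(0, 2, (32,)).float()) for _ in range(n)]

model = ToyDecoder()
train(model, fake_batches(100), lr=1e-3)  # stage 1: abundant simulated data
train(model, fake_batches(5), lr=1e-4)    # stage 2: fine-tune on hardware data
```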
In experiments on a small number of qubits on the Sycamore chip, Bausch and his team found that AlphaQubit makes 6 per cent fewer errors than the next-best algorithm, a tensor-network decoder. But tensor networks become increasingly slow as quantum computers get bigger, so they can't scale to future machines, whereas simulations suggest AlphaQubit can maintain its speed at larger sizes, making it a promising tool as these computers grow, says Bausch.
“It’s tremendously exciting,” says Scott Aaronson at the University of Texas at Austin. “It’s been clear for a while that decoding and correcting the errors quickly enough, in a fault-tolerant quantum computation, was going to push classical computing to the limit also. It’s also become clear that for just about anything classical computers do involving optimisation or uncertainty, you can now throw machine learning at it and they might do it better.”