The promise is certainly there, but so is the hype. In 2022, for instance, Haim Israel, managing director of research at Bank of America, declared that quantum computing will be “bigger than fire and bigger than all the revolutions that humanity has seen.” Even among scientists, a slew of claims and vicious counterclaims has made it a hard field to assess.
Ultimately, though, assessing our progress in building useful quantum computers comes down to one central factor: whether we can handle the noise. The delicate nature of quantum systems makes them extremely vulnerable to the slightest disturbance, whether that’s a stray photon created by heat, a random signal from surrounding electronics, or a physical vibration. This noise wreaks havoc, generating errors or even stopping a quantum computation in its tracks. It doesn’t matter how big your processor is, or what the killer applications might turn out to be: unless noise can be tamed, a quantum computer will never surpass what a classical computer can do.
For many years, researchers thought they might just have to make do with noisy circuitry, at least in the near term—and many hunted for applications that might do something useful with that limited capacity. The hunt hasn’t gone particularly well, but that may not matter now. In the last couple of years, theoretical and experimental breakthroughs have enabled researchers to declare that the problem of noise might finally be on the ropes. A combination of hardware and software strategies is showing promise for suppressing, mitigating, and cleaning up quantum errors. It’s not an especially elegant approach, but it does look as if it could work—and sooner than anyone expected.
“I’m seeing much more evidence being presented in defense of optimism,” says Earl Campbell, vice president of quantum science at Riverlane, a quantum computing company based in Cambridge, UK.
Even the hard-line skeptics are being won over. University of Helsinki professor Sabrina Maniscalco, for example, researches the impact of noise on computations. A decade ago, she says, she was writing quantum computing off. “I thought there were really fundamental issues. I had no certainty that there would be a way out,” she says. Now, though, she is working on using quantum systems to design improved versions of light-activated cancer drugs that are effective at lower concentrations and can be activated by a less harmful form of light. She thinks the project is just two and a half years from success. For Maniscalco, the era of “quantum utility”—the point at which, for certain tasks, it makes sense to use a quantum rather than a classical processor—is almost upon us. “I’m actually quite confident about the fact that we will be entering the quantum utility era very soon,” she says.
Putting qubits in the cloud
This breakthrough moment comes after more than a decade of creeping disappointment. Throughout the late 2000s and the early 2010s, researchers building and running real-world quantum computers found them to be far more problematic than the theorists had hoped.
To some people, these problems seemed insurmountable. But others, like Jay Gambetta, were unfazed.
A quiet-spoken Australian, Gambetta has a PhD in physics from Griffith University, on Australia’s Gold Coast. He chose to go there in part because it allowed him to feed his surfing addiction. But in July 2004, he wrenched himself away and skipped off to the Northern Hemisphere to do research at Yale University on the quantum properties of light. Three years later (by which time he was an ex-surfer thanks to the chilly waters around New Haven), Gambetta moved even further north, to the University of Waterloo in Ontario, Canada. Then he learned that IBM wanted to get a little more hands-on with quantum computing. In 2011, Gambetta became one of the company’s new hires.