Quantum machine learning (QML) is a research field that sits at the intersection of quantum computing and machine learning. The term refers to two complementary directions of study: using quantum computers to speed up or improve classical machine learning algorithms, and using classical machine learning techniques to analyze and control quantum systems. By exploiting quantum mechanical phenomena such as superposition, entanglement, and interference, QML researchers aim to develop algorithms that could outperform their classical counterparts on certain computational tasks. As of 2026, no practical quantum advantage for a real-world machine learning problem has been demonstrated, though theoretical and experimental progress continues at a rapid pace.
Understanding quantum machine learning requires a foundation in the principles of quantum computation. Classical computers store and process information in bits, each of which takes a value of either 0 or 1. Quantum computers instead use quantum bits, or qubits, which obey the rules of quantum mechanics and exhibit properties that have no classical analogue.
A qubit is the basic unit of quantum information. Unlike a classical bit, a qubit can exist in a superposition of the states |0> and |1> simultaneously. Mathematically, the state of a single qubit is written as |psi> = alpha|0> + beta|1>, where alpha and beta are complex numbers whose squared magnitudes sum to 1. When the qubit is measured, it collapses to |0> with probability |alpha|^2 or to |1> with probability |beta|^2. Physical implementations of qubits include superconducting transmon circuits (used by Google and IBM), trapped ions (used by IonQ and Quantinuum), neutral atoms (used by QuEra), and photonic systems (explored by Xanadu and PsiQuantum).
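The Born-rule measurement statistics described above can be simulated classically. The sketch below uses illustrative amplitudes alpha = 3/5 and beta = 4/5 (chosen so the squared magnitudes sum to 1); repeated simulated measurements recover |beta|^2 as an empirical frequency.

```python
import random

# Amplitudes for a hypothetical unequal superposition:
# |alpha|^2 = 0.36, |beta|^2 = 0.64, so normalization holds: 0.36 + 0.64 = 1.
alpha = complex(3 / 5, 0)
beta = complex(4 / 5, 0)

def measure(alpha, beta, shots=10_000, seed=0):
    """Simulate repeated measurement via the Born rule: each shot collapses
    the qubit to 0 with probability |alpha|^2, else to 1."""
    rng = random.Random(seed)
    p0 = abs(alpha) ** 2
    return sum(1 for _ in range(shots) if rng.random() >= p0)  # count of 1 outcomes

ones = measure(alpha, beta)
frequency = ones / 10_000  # empirically close to |beta|^2 = 0.64
```

Note that a single measurement reveals only one bit; the amplitudes themselves can only be estimated from many repeated preparations and measurements.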
Superposition allows a quantum system to exist in multiple configurations at once. For a register of n qubits, the system can represent a superposition of all 2^n possible bitstrings simultaneously. This exponential state space is one of the properties that gives quantum computers their theoretical power. Each additional qubit doubles the number of states that can be held in superposition, so a 50-qubit register can represent over 10^15 amplitudes at once. However, superposition alone does not guarantee a speedup; useful computation requires carefully orchestrated interference to amplify correct answers and suppress incorrect ones.
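A uniform superposition over all 2^n bitstrings is created by applying a Hadamard gate to each qubit of the all-zeros register. The following minimal statevector sketch (for n = 3, small enough to simulate directly) makes the exponential amplitude count explicit:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

def uniform_superposition(n):
    """Start from |0...0> and apply a Hadamard to each of the n qubits."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    U = H
    for _ in range(n - 1):
        U = np.kron(U, H)  # tensor product builds the n-qubit operator
    return U @ state

psi = uniform_superposition(3)
# psi holds 2^3 = 8 amplitudes, each equal to 1/sqrt(8): an equal
# superposition of all bitstrings from 000 to 111
```

The same construction at n = 50 would require storing over 10^15 amplitudes, which is exactly why classical simulation of generic quantum states becomes infeasible.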
Entanglement is a uniquely quantum correlation between two or more qubits. When qubits are entangled, measuring one qubit instantly determines information about the others, regardless of the physical distance between them. Albert Einstein famously referred to this phenomenon as "spooky action at a distance." In quantum algorithms, entanglement is used to create correlations that cannot be efficiently simulated by classical computers. Two-qubit gates such as CNOT or CZ are the standard operations used to create entanglement in quantum circuits.
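The canonical entangled state, the Bell state (|00> + |11>)/sqrt(2), is produced by a Hadamard followed by a CNOT. A minimal statevector sketch:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I = np.eye(2)
# CNOT with qubit 0 as control, qubit 1 as target (basis order 00, 01, 10, 11)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0  # |00>
bell = CNOT @ (np.kron(H, I) @ state)  # Hadamard on qubit 0, then CNOT

probs = np.abs(bell) ** 2
# probs is [0.5, 0, 0, 0.5]: only 00 and 11 ever occur, so the two
# measurement outcomes are perfectly correlated
```

The zero probability of the 01 and 10 outcomes is the signature of entanglement here: each qubit's marginal outcome is random, but the pair is perfectly correlated.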
Quantum computation proceeds by applying a sequence of quantum gates to a set of qubits. Single-qubit gates (such as the Hadamard gate, Pauli-X, Pauli-Y, and Pauli-Z) manipulate individual qubits, while two-qubit gates create entanglement between pairs. A quantum circuit is a sequence of these gates, analogous to a classical logic circuit. At the end of the circuit, qubits are measured to extract classical information. The depth of a circuit (the number of sequential gate layers) and the total gate count are critical metrics because noise accumulates with each operation.
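Circuit depth can be computed by greedily packing gates into layers, where gates acting on disjoint qubits may share a layer. The sketch below uses a hypothetical 5-gate circuit to illustrate the difference between gate count and depth:

```python
# A circuit as a list of (gate_name, qubits) pairs (hypothetical example).
circuit = [("H", (0,)), ("H", (1,)), ("CNOT", (0, 1)),
           ("X", (2,)), ("CZ", (1, 2))]

def depth(circuit):
    """Number of sequential layers: each gate goes in the earliest layer
    after the last layer that touched any of its qubits."""
    last = {}  # qubit -> index of the last layer acting on it
    d = 0
    for _, qubits in circuit:
        layer = max(last.get(q, -1) for q in qubits) + 1
        for q in qubits:
            last[q] = layer
        d = max(d, layer + 1)
    return d

gate_count = len(circuit)       # 5 gates in total
circuit_depth = depth(circuit)  # 3 layers: {H, H, X}, then CNOT, then CZ
```

On noisy hardware, depth is often the more important metric, since it determines how long qubits must maintain coherence.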
Quantum machine learning encompasses a range of algorithmic strategies. The most actively studied approaches include variational quantum circuits, quantum kernel methods, and quantum neural networks.
Variational quantum circuits (VQCs), also called parameterized quantum circuits (PQCs), are the most widely used framework for near-term quantum machine learning. A VQC consists of three stages: (1) a data encoding step (also called a feature map) that maps classical input data x into a quantum state, (2) a parameterized ansatz circuit with tunable rotation angles theta that transforms the encoded state, and (3) a measurement step that produces an output used to compute a loss function. A classical optimizer (such as gradient descent, COBYLA, or SPSA) then updates the parameters theta to minimize the loss. This hybrid quantum-classical optimization loop is the foundation of most NISQ-era QML algorithms.
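The three stages can be illustrated with a toy one-qubit model simulated classically: RX(x) encodes the input, RY(theta) is the ansatz, and the Pauli-Z expectation is the output. The input x = 0.4, the target value -0.8, and the learning rate are illustrative choices, not from the text; gradients use the parameter-shift rule.

```python
import numpy as np

def RX(a):
    return np.array([[np.cos(a / 2), -1j * np.sin(a / 2)],
                     [-1j * np.sin(a / 2), np.cos(a / 2)]])

def RY(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

Z = np.diag([1.0, -1.0])

def expval_z(x, theta):
    """(1) encode x with RX, (2) apply the ansatz RY(theta), (3) measure <Z>."""
    psi = RY(theta) @ (RX(x) @ np.array([1.0, 0.0]))
    return float(np.real(np.conj(psi) @ (Z @ psi)))

def loss(theta, x=0.4, target=-0.8):
    return (expval_z(x, theta) - target) ** 2

# Hybrid loop: the "quantum" evaluations of expval_z feed a classical
# gradient-descent update; the gradient of <Z> w.r.t. theta comes from
# the parameter-shift rule: (f(theta + pi/2) - f(theta - pi/2)) / 2.
x, target, theta, lr = 0.4, -0.8, 0.1, 0.4
for _ in range(200):
    shift = (expval_z(x, theta + np.pi / 2) - expval_z(x, theta - np.pi / 2)) / 2
    grad = 2 * (expval_z(x, theta) - target) * shift
    theta -= lr * grad
# loss(theta) is now close to zero
```

On real hardware, each call to expval_z would be a batch of circuit executions whose measurement outcomes are averaged, and the classical optimizer would run on a conventional computer.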
VQCs are used in variational quantum eigensolvers (VQE) for chemistry simulations, the quantum approximate optimization algorithm (QAOA) for combinatorial optimization, and various classification and regression tasks. The approach is attractive for current hardware because the circuits can be kept relatively shallow, reducing the impact of noise. However, VQCs face the serious challenge of barren plateaus, where gradients of the loss function vanish exponentially as the number of qubits or circuit layers increases, making training impractical for large systems.
Quantum kernel methods adapt classical kernel methods (such as support vector machines) to quantum computing. The central idea is to use a quantum circuit as a feature map that embeds classical data into a high-dimensional quantum Hilbert space. The inner products between quantum states in this space define a kernel function, which can then be used with a classical support vector machine or other kernel-based algorithm.
In a quantum support vector machine (QSVM), a quantum processor evaluates the kernel matrix by preparing pairs of data-encoded quantum states and measuring their overlap. This kernel matrix is then passed to a classical optimizer that finds the optimal separating hyperplane. The potential advantage comes from the ability to access feature spaces that may be intractable to compute classically.
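The kernel construction can be sketched with a hypothetical one-qubit angle-encoding feature map, |phi(x)> = RY(x)|0>, simulated classically; each kernel entry is the squared overlap (fidelity) of two encoded states:

```python
import numpy as np

def feature_map(x):
    """Hypothetical one-qubit angle-encoding feature map: |phi(x)> = RY(x)|0>."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    """Kernel entry = squared overlap |<phi(x1)|phi(x2)>|^2 of encoded states."""
    return float(abs(feature_map(x1) @ feature_map(x2)) ** 2)

X = [0.0, 0.5, 1.2, 3.0]  # illustrative 1-D data points
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
# K is symmetric with unit diagonal; on hardware each entry would come from
# an overlap measurement, and K is then handed to a classical SVM
# (e.g. scikit-learn's SVC(kernel="precomputed")).
```

For this simple map the kernel has a closed form, cos^2((x1 - x2)/2), so nothing quantum is gained; proposed quantum feature maps use multi-qubit entangling circuits precisely so that no such classical shortcut exists.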
In 2021, researchers showed that all supervised quantum machine learning models that use data-encoding quantum circuits are mathematically equivalent to kernel methods. This insight, established by Maria Schuld and colleagues, unified the theoretical understanding of many QML algorithms under a single framework. IBM researchers also provided a mathematical proof that, for certain specially constructed datasets, quantum kernels can achieve classification accuracy that no efficient classical kernel can match. However, for generic real-world datasets, quantum kernel methods have not yet demonstrated a clear advantage over classical approaches.
The term quantum neural network (QNN) is used broadly to describe parameterized quantum circuits that are trained in a manner analogous to classical neural networks. Despite the name, QNNs differ significantly from classical neural networks in their architecture and mathematical structure. A QNN typically consists of layers of parameterized single-qubit rotations interleaved with entangling two-qubit gates. The output is obtained by measuring one or more qubits and interpreting the expectation values as predictions.
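The layered rotation-plus-entangler structure can be sketched as a two-qubit statevector simulation; the three-layer depth and the random parameters below are illustrative choices:

```python
import numpy as np

def RY(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # Pauli-Z on qubit 0

def qnn(thetas):
    """Layered ansatz: per-layer RY rotations on both qubits, then a CNOT
    entangler; output is the expectation value <Z> of qubit 0."""
    psi = np.zeros(4)
    psi[0] = 1.0
    for t0, t1 in thetas:
        psi = CNOT @ (np.kron(RY(t0), RY(t1)) @ psi)
    return float(psi @ Z0 @ psi)  # state stays real, so <Z0> = psi^T Z0 psi

rng = np.random.default_rng(0)
out = qnn(rng.uniform(0, 2 * np.pi, size=(3, 2)))  # a prediction in [-1, 1]
```

Training proceeds exactly as in the variational-circuit loop above: the parameters thetas are adjusted by a classical optimizer to minimize a loss on the measured expectation values.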
Several QNN architectures have been proposed, including quantum convolutional neural networks (QCNNs), which translate the convolution-and-pooling structure of classical CNNs into circuits whose active qubit count shrinks layer by layer, and quantum reservoir computing, which uses the dynamics of a fixed quantum system as a feature-generating reservoir.
A major open question is whether QNNs can offer any advantage over classical neural networks for practical tasks. Research published in Nature Physics in 2025 showed that outputs of certain QNN architectures based on Haar-random unitaries converge to Gaussian processes in the limit of large Hilbert space dimension, which places fundamental constraints on their expressivity. The table below summarizes the principal QML approaches, their hardware requirements, and their key challenges.
| Approach | Description | Quantum resource | Typical algorithm | Key challenge |
|---|---|---|---|---|
| Variational quantum circuits (VQCs) | Parameterized circuits optimized via classical-quantum hybrid loop | Gate-based NISQ device | VQE, QAOA, variational classifiers | Barren plateaus; noise accumulation |
| Quantum kernel methods | Quantum feature maps used to compute kernel matrices for classical SVM | Gate-based NISQ device | QSVM, quantum kernel estimation | Kernel concentration for large qubit counts |
| Quantum neural networks (QNNs) | Layered parameterized circuits trained like neural networks | Gate-based NISQ device | QCNN, quantum reservoir computing | Barren plateaus; limited expressivity proofs |
| Quantum Boltzmann machines | Quantum Hamiltonian-based energy models for generative learning | Quantum annealer or gate-based | QBM training, quantum sampling | Efficient training of full quantum Hamiltonians |
| Quantum generative adversarial networks | Generator and/or discriminator replaced by quantum circuits | Gate-based NISQ device | QGAN for distribution learning | Mode collapse; measurement overhead |
| Quantum principal component analysis | Exponentially fast extraction of principal components from quantum states | Fault-tolerant quantum computer | qPCA (Lloyd, Mohseni, Rebentrost 2014) | Requires fault tolerance and quantum RAM |
| HHL-based algorithms | Quantum linear systems solver applied to ML subroutines | Fault-tolerant quantum computer | Quantum least-squares fitting, quantum SVM (Rebentrost et al.) | Requires quantum RAM; dequantized by Tang |
| Quantum approximate optimization | Hybrid algorithm for combinatorial optimization problems | Gate-based NISQ device | QAOA (Farhi, Goldstone, Gutmann 2014) | Performance vs. classical heuristics unclear |
The term NISQ (Noisy Intermediate-Scale Quantum) was coined by Caltech physicist John Preskill in a 2018 paper titled "Quantum Computing in the NISQ era and beyond." Preskill defined NISQ devices as quantum processors containing roughly 50 to a few hundred qubits that are not yet capable of full fault-tolerant operation. As of 2026, all commercially available quantum computers fall into the NISQ category.
NISQ devices face several fundamental limitations that directly impact quantum machine learning: qubit counts limited to the tens or low hundreds, two-qubit gate error rates on the order of 0.1-1%, coherence times measured in microseconds, and the absence of full quantum error correction.
These limitations mean that current quantum machine learning experiments are restricted to small problem sizes (typically fewer than 30 qubits) and shallow circuit depths. Error mitigation techniques can partially compensate for noise but introduce measurement overhead ranging from 2x to 10x or more.
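One widely used mitigation technique, zero-noise extrapolation, can be sketched with a toy noise model: the circuit is run at several deliberately amplified noise levels, and the results are extrapolated back to the zero-noise limit. The decay rate and "true" value below are illustrative, not measured figures.

```python
import numpy as np

true_value = 0.90  # hypothetical noiseless expectation value (unknown in practice)

def noisy_expval(scale, eps=0.05):
    """Toy noise model: the measured signal decays exponentially as the
    circuit's noise is deliberately amplified by `scale` (e.g. gate folding)."""
    return true_value * np.exp(-eps * scale)

scales = np.array([1.0, 2.0, 3.0])  # run the circuit at amplified noise levels
values = noisy_expval(scales)

# Log-linear (exponential) extrapolation back to the zero-noise limit:
slope, intercept = np.polyfit(scales, np.log(values), 1)
mitigated = float(np.exp(intercept))  # estimate of the scale-0 value
# Overhead: three circuit evaluations instead of one, which is where the
# measurement overhead of error mitigation comes from.
```

Real implementations must also contend with shot noise in each estimate, which is why practical overheads are often much larger than the 3x of this idealized sketch.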
In December 2024, Google Quantum AI announced Willow, a 105-qubit superconducting processor that achieved two notable milestones. First, Willow demonstrated below-threshold quantum error correction using the surface code: its logical error rate was suppressed by a factor of 2.14 with each increase of the code distance by two (from 3 to 5 and from 5 to 7), reaching 0.143% error per cycle for a distance-7 code on 101 qubits. This was the first time any quantum processor had definitively demonstrated below-threshold performance for the surface code. Second, Willow completed a random circuit sampling (RCS) benchmark in under 5 minutes that Google estimated would take the world's fastest classical supercomputer approximately 10^25 years.
Willow's key specifications include single-qubit gate errors of 0.035%, two-qubit (CZ) gate errors of 0.33%, measurement errors of 0.77%, and T1 coherence times approaching 100 microseconds, a roughly 5x improvement over Google's previous-generation processors.
IBM has published a detailed roadmap for scaling quantum computers toward practical utility, targeting near-term quantum advantage by the end of 2026 and the first large-scale, fault-tolerant quantum computer by 2029. Future iterations of its Nighthawk processor are expected to support up to 7,500 gates by the end of 2026 and 10,000 gates in 2027.
In October 2019, Google published results in Nature claiming quantum supremacy using Sycamore, a 53-qubit superconducting processor. The experiment involved random circuit sampling: Sycamore generated one million samples from a random quantum circuit in about 200 seconds, a task Google estimated would take the most powerful classical supercomputer approximately 10,000 years. This claim was contested, and by 2024, improvements in classical tensor network simulation algorithms reduced the estimated classical simulation time significantly. Nonetheless, the Sycamore experiment marked an important symbolic milestone for the field.
Several open-source software libraries provide tools for developing and running quantum machine learning algorithms.
PennyLane is an open-source Python framework for quantum differentiable programming developed by Xanadu. Released under the Apache License 2.0, PennyLane enables researchers to build, optimize, and differentiate quantum circuits using techniques borrowed from classical deep learning, including automatic differentiation and backpropagation. PennyLane integrates with classical machine learning frameworks such as PyTorch, JAX, and TensorFlow, and can execute circuits on backends from multiple hardware providers including IBM, Google, IonQ, Rigetti, and Amazon Braket. Its design philosophy centers on treating quantum circuits as differentiable programs, making it particularly well-suited for variational quantum algorithms and hybrid quantum-classical workflows.
Qiskit is an open-source SDK developed by IBM for building, optimizing, and executing quantum workloads. The Qiskit Machine Learning library provides high-level abstractions for quantum kernels and quantum neural networks, with interfaces to classical machine learning libraries such as scikit-learn and NumPy. It also includes a dedicated connector to PyTorch for neural network-based algorithms. As of version 0.7, Qiskit Machine Learning is co-maintained by IBM and the Hartree Centre (part of the UK Science and Technology Facilities Council). Qiskit's transpiler prepares workloads for execution on IBM's quantum hardware and supports error mitigation techniques built into the runtime.
Cirq is a Python library developed by Google Quantum AI for writing, manipulating, and optimizing quantum circuits. First announced in July 2018, Cirq is specifically designed for NISQ-era computation, providing fine-grained control over circuit construction and noise-aware compilation. Cirq is used to run experiments on Google's quantum processors and also supports backends from IonQ, Pasqal, Rigetti, and Alpine Quantum Technologies. Google also provides the Quantum Virtual Machine, a noise-model-based simulator that replicates the behavior of actual Google hardware.
TensorFlow Quantum (TFQ) is a library for hybrid quantum-classical machine learning that integrates with both TensorFlow and Cirq. Developed by Google in collaboration with the University of Waterloo, X, and Volkswagen, TFQ provides primitives for constructing quantum ML models using the Keras API, with support for automatic differentiation of quantum circuits. It was designed for exploring quantum data and developing quantum algorithms within the TensorFlow ecosystem.
| Framework | Developer | Language | Key features | License |
|---|---|---|---|---|
| PennyLane | Xanadu | Python | Quantum differentiable programming; integrates with PyTorch, JAX, TensorFlow | Apache 2.0 |
| Qiskit ML | IBM | Python | Quantum kernels, QNNs; integrates with scikit-learn, PyTorch | Apache 2.0 |
| Cirq | Google Quantum AI | Python | Low-level circuit control; noise-aware compilation; Quantum Virtual Machine | Apache 2.0 |
| TensorFlow Quantum | Google / U. Waterloo | Python | Keras integration; hybrid quantum-classical models; autodiff for circuits | Apache 2.0 |
The central promise of quantum machine learning is that quantum computers could solve certain ML problems exponentially faster than classical computers. Several theoretical results support this possibility, though each comes with important caveats.
The Harrow-Hassidim-Lloyd (HHL) algorithm, proposed in 2009, solves systems of linear equations in time O(log(N)), compared to O(N) for the best classical methods, where N is the dimension of the system. Because many machine learning algorithms (such as least-squares regression, principal component analysis, and support vector machines) reduce to linear algebra, HHL appeared to offer a path to exponential quantum speedups for ML. However, the HHL algorithm requires several strong assumptions: the coefficient matrix must be sparse and well-conditioned, the data must be loaded into quantum memory (requiring quantum random access memory, or qRAM, which does not exist at scale), and the output is a quantum state from which only limited classical information can be extracted per measurement.
Quantum computers can natively sample from probability distributions defined by quantum circuits. This has motivated research into quantum generative models, where a parameterized quantum circuit learns to approximate a target probability distribution. Theoretical results suggest that certain quantum distributions (such as those produced by instantaneous quantum polynomial-time, or IQP, circuits) cannot be efficiently sampled by classical computers under standard complexity-theoretic assumptions. However, identifying real-world data distributions where quantum generative models offer a practical advantage over classical alternatives remains an open challenge.
Grover's algorithm provides a quadratic speedup for unstructured search, reducing the number of queries from O(N) to O(sqrt(N)). While this is provably optimal for unstructured search, the quadratic (rather than exponential) speedup means that for most practical problem sizes, the overhead of running a quantum computer may negate the theoretical advantage. Furthermore, NISQ devices cannot implement Grover's algorithm reliably due to the deep circuits it requires.
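Grover's algorithm is small enough to simulate exactly for N = 4, where a single iteration of the oracle and diffusion operator boosts the marked item's probability to 1. A minimal statevector sketch (the marked index is an arbitrary choice):

```python
import numpy as np

n = 2
N = 2 ** n      # search space of N = 4 items
marked = 3      # index of the marked item (illustrative choice)

# Start in the uniform superposition over all N basis states
psi = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked amplitude
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: reflect all amplitudes about their mean
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

# One Grover iteration suffices for N = 4 (roughly (pi/4) * sqrt(N) in general)
psi = diffusion @ (oracle @ psi)
probs = np.abs(psi) ** 2  # probability of measuring the marked item is now 1
```

For larger N the marked probability after the optimal number of iterations approaches but does not reach 1, and the O(sqrt(N)) query count is where the quadratic speedup appears.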
The quantum machine learning community has faced significant scrutiny regarding the practical relevance of claimed speedups.
In 2018, Ewin Tang, then an 18-year-old undergraduate at the University of Texas at Austin, proved that a classical algorithm could match the performance of the quantum recommendation algorithm proposed by Kerenidis and Prakash in 2016. This quantum algorithm had been considered one of the strongest candidates for an exponential speedup in QML. Tang showed that by using l2-norm sampling (a technique from randomized linear algebra), a classical computer could produce recommendations in time O(poly(k) log(mn)), only polynomially slower than the quantum algorithm, where m x n is the matrix dimension and k is the rank.
Tang's work initiated an entire subfield of "dequantization" research. Follow-up results demonstrated classical algorithms with comparable performance for quantum principal component analysis, quantum supervised clustering, quantum low-rank stochastic regression, and quantum linear systems solving. The core insight is that many claimed quantum speedups arise from implicit assumptions about data access (specifically, the availability of qRAM) rather than from fundamental computational barriers. When classical algorithms are given equivalent data access (in the form of sample-and-query access to data structures supporting l2-norm sampling), the exponential gap often narrows to a polynomial one.
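The l2-norm (length-squared) sampling primitive at the heart of these dequantized algorithms is simple to state: sample a row index with probability proportional to that row's squared norm. A minimal numpy sketch on a small random matrix (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))  # a small data matrix (hypothetical)

# l2-norm row sampling: the sample-and-query data access that dequantized
# classical algorithms assume, mirroring what qRAM-based state preparation
# would give a quantum algorithm.
row_norms_sq = np.sum(A ** 2, axis=1)
probs = row_norms_sq / row_norms_sq.sum()

def sample_row():
    """Draw a row index i with probability ||A_i||^2 / ||A||_F^2."""
    return rng.choice(len(A), p=probs)

samples = [sample_row() for _ in range(5000)]
# Empirical frequencies approximate `probs`; randomized linear algebra built
# on this primitive can approximate low-rank operations in time that depends
# on the rank rather than the full matrix dimensions.
```

The key observation is that maintaining the data structure that supports this sampling is itself a nontrivial preprocessing cost, which is the classical counterpart of the quantum algorithms' qRAM assumption.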
A 2022 review in Nature Reviews Physics by Chia, Gilyen, and colleagues formalized a robust dequantization framework for the quantum singular value transformation, showing that a wide class of QML algorithms based on this primitive can be matched classically up to polynomial overhead.
Computer scientist Scott Aaronson of the University of Texas at Austin has been a prominent voice urging caution about QML speedup claims. In his 2015 paper "Read the Fine Print," Aaronson highlighted that many quantum ML algorithms assume the existence of qRAM, ignore the cost of data loading, or provide speedups only in contrived settings. Aaronson and others have argued that for QML to deliver genuine value, researchers must carefully specify the computational model and ensure fair comparisons with classical baselines.
As of early 2026, the consensus in the research community is that no quantum machine learning algorithm has demonstrated a practical advantage over classical methods on a real-world dataset. Demonstrations of quantum speedup have been confined to synthetic benchmarks, specially constructed datasets, or tasks with no direct practical application (such as random circuit sampling). Key researchers including Aram Harrow and Scott Aaronson have emphasized that the path to practical quantum advantage in ML remains uncertain. Quantum kernel methods face the problem of kernel concentration, where kernel matrices approach the identity matrix as qubit counts grow beyond roughly 10, degrading model performance. Variational methods face barren plateaus that limit scalability. And fault-tolerant algorithms that promise exponential speedups (such as HHL-based methods) require hardware capabilities that remain years or decades away.
Despite the absence of demonstrated practical advantage, QML research targets several application domains where quantum effects could eventually prove beneficial, including quantum chemistry and materials simulation, combinatorial optimization, generative modeling of complex probability distributions, and the characterization and control of quantum systems themselves.
The field of quantum machine learning is in a phase of rapid theoretical development but limited practical impact. Several trends define the current landscape:
Hybrid quantum-classical approaches dominate. Nearly all current QML experiments use hybrid loops where a classical computer handles optimization while a quantum processor executes parameterized circuits. This approach is dictated by the limitations of NISQ hardware.
Error correction is advancing. Google's Willow chip demonstrated below-threshold surface code error correction in 2024, a necessary step toward fault-tolerant quantum computing. IBM's roadmap targets fault tolerance by 2029. These milestones, while important, are still far from enabling the large-scale, error-corrected quantum computers that fault-tolerant QML algorithms require.
Classical ML continues to advance rapidly. The explosive growth of classical deep learning, particularly large language models and diffusion models, has raised the bar for any quantum approach to demonstrate superiority. Classical hardware (GPUs, TPUs) and algorithms are improving at a pace that makes it difficult for quantum approaches to keep up.
Dequantization has narrowed the gap. The theoretical quantum speedup for many ML tasks has been reduced from exponential to polynomial by classical dequantization results, diminishing the expected returns from quantum approaches to these specific problems.
New frontiers are emerging. Research into quantum advantages for learning properties of quantum systems, quantum error correction decoding, and quantum-enhanced sensing represents promising directions that may yield practical benefits before general-purpose QML.
The global quantum computing market was estimated at $1.8 billion to $3.5 billion in 2025, with projections indicating growth to approximately $5.3 billion by 2029. While quantum machine learning represents only a fraction of this market, it remains one of the most actively researched application areas. The path from current NISQ experiments to practical quantum advantage in ML will likely require significant advances in error correction, qubit quality, and algorithm design.