ORNL Research

Adiabatic Quantum Support Vector Machines

This page summarizes the ORNL internship work and paper results comparing classical SVM training with an adiabatic quantum approach on D-Wave hardware.

Internship context: Oak Ridge National Laboratory (May 2023 - Aug 2023), with prior baseline work in Jun 2022 - Aug 2022.

Publication

Paper: Adiabatic Quantum Support Vector Machines

DOI: 10.1007/s42484-025-00280-6

Note: the hosted PDF is the accepted manuscript (author version), not the Springer version of record.

Problem and method

The project tested whether SVM training can be accelerated by mapping the optimization to a quadratic unconstrained binary optimization (QUBO) formulation and solving it with adiabatic quantum annealing.

Evaluation compared three approaches: a classical Scikit-learn SVM, D-Wave quantum annealing, and simulated annealing. Metrics included classification accuracy and end-to-end compute time, covering the QUBO conversion, embedding, hardware access, and solve steps.
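As a point of reference for the classical side of that comparison, the baseline can be sketched as below: train a linear Scikit-learn SVM and record test accuracy plus wall-clock training time. The dataset sizes, seed, and hyperparameters here are illustrative, not the paper's experimental configuration.

```python
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Illustrative linearly separable synthetic data (not the paper's datasets).
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true >= 0, 1, -1)   # labels in {-1, +1}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Time only the fit, mirroring the "solve" portion of the comparison.
t0 = time.perf_counter()
clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
train_time = time.perf_counter() - t0

acc = clf.score(X_te, y_te)
print(f"test accuracy={acc:.3f}, train time={train_time * 1e3:.1f} ms")
```

The same accuracy/time bookkeeping, extended with conversion and embedding timers, applies to the annealing-based solvers.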

Theoretical framing in the paper: classical SVM training scales as O(N^3) in the number of training samples N, while the quantum formulation is analyzed as O(N^2) when the precision of the multipliers is fixed.

Key findings

Feature-scale speedup: 3.5x to 4.5x. Quantum SVM training was faster than the classical baseline on very high-feature synthetic datasets.

Peak feature-scaling result: 3.69x faster, observed at 8,388,608 features in the feature-scaling experiment.

Point-scaling result: 4.48x faster, observed at 52 training points in the point-scaling experiment.

Accuracy profile: near parity on many datasets. Quantum results were often close to classical results and matched on several synthetic and Iris pairings.

Accuracy snapshot

Dataset                  | Classical  | Quantum    | Notes
Synthetic random         | 100% test  | 100% test  | All approaches found strong separating hyperplanes.
Iris Setosa vs Virginica | 100% test  | 100% test  | No meaningful gap between approaches.
WBC                      | 95.0% test | 93.1% test | Quantum remained competitive.
Digits 0 vs 1            | 99.2% test | 97.5% test | Simulated annealer failed to find viable solutions.
Lambeq                   | 100% test  | 88.7% test | Harder NLP task with a larger accuracy gap.

Figure walkthrough

Hyperplane comparison on positive synthetic data
Comparison of hyperplanes defined by the support vectors found by Scikit-learn (+), simulated annealing (- -), and D-Wave (---) on positive synthetic data (blue and green circles).
Hyperplane comparison on negative synthetic data
Comparison of hyperplanes defined by the support vectors found by Scikit-learn (+), simulated annealing (- -), and D-Wave (---) on negative synthetic data (blue and green circles).
Hyperplane comparison on random synthetic data
Random synthetic example: hyperplanes produced by Scikit-learn, simulated annealing, and D-Wave quantum SVM are closely aligned.
Feature scaling with smaller feature counts
Feature-scaling (smaller feature range): runtime decomposition of classical SVM vs quantum SVM components (preprocessing, embedding, and access).
Feature scaling with very large feature counts
Feature-scaling (large feature range): at multi-million features, the quantum path overtakes the classical baseline in compute time.
Scaling with number of training points
Point-scaling study: quantum SVM achieved up to 4.48x speedup at 52 training points and 1.72x at 54 points.