ORNL — Quantum ML: SVM Speedups & Scaling
Implemented Support Vector Machine (SVM) techniques on quantum hardware with end‑to‑end evaluation, demonstrating 3.5–4.5× speedups over classical baselines and 11.1× scaling on a leading supercomputer.
Context
- Software Engineering Intern, Oak Ridge National Laboratory (May–Aug 2023).
- Prior internship (Jun–Aug 2022) established classical baselines and accuracy targets.
Problem
- Evaluate whether quantum hardware can accelerate practical classification workloads.
- Design fair comparisons vs. classical pipelines while controlling for I/O and orchestration overhead.
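A fair comparison hinges on timing compute separately from I/O and orchestration. The sketch below is illustrative only (the `stage` and `compute_speedup` names are mine, not the actual ORNL harness): each pipeline phase is timed independently so that overhead never inflates, or masks, the reported speedup.

```python
import time
from contextlib import contextmanager

# Accumulated wall time per pipeline stage (hypothetical structure).
timings: dict[str, float] = {}

@contextmanager
def stage(name: str):
    """Time one pipeline stage so I/O and orchestration overhead can be
    reported separately from kernel compute."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

def compute_speedup(classical_s: float, quantum_s: float) -> float:
    """Speedup of the quantum pipeline over the classical baseline,
    compute time only, with overheads excluded from both sides."""
    return classical_s / quantum_s

# Usage: wrap each phase of the run.
with stage("io"):
    data = list(range(1000))          # stand-in for dataset loading
with stage("compute"):
    total = sum(x * x for x in data)  # stand-in for kernel evaluation
```

Reporting speedup from the `compute` stage alone, while publishing the `io` numbers alongside it, keeps the comparison honest about where time actually goes.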
Role & Stack
- Python for ML orchestration; quantum SDK/hardware integrations; HPC job submission & telemetry.
Architecture & Key Decisions
- Kernel choices to minimize circuit depth while preserving separability.
- Batching and parallelization strategies to achieve the reported 11.1× scaling.
Impact & Metrics
- 3.5–4.5× speedups on target workloads with 11.1× scaling.
- Reported publicly in a research preprint.
Code Highlights
- Sanitized snippets showing dataset loaders, circuit builders, and evaluators.
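The shape of those three pieces can be sketched as follows. This is a hedged stand-in, not the actual sanitized code: `load_dataset`, `kernel`, `gram_matrix`, and `evaluate` are hypothetical names, and the kernel below is a classical surrogate for the state-overlap quantity a quantum kernel estimates; the real circuit builder would construct and execute circuits through the quantum SDK.

```python
import math

def load_dataset() -> tuple[list[list[float]], list[int]]:
    """Tiny inline dataset standing in for the real loaders."""
    X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
    y = [0, 1, 0, 1]
    return X, y

def kernel(x1: list[float], x2: list[float]) -> float:
    """Classical surrogate for a fidelity-style quantum kernel,
    |<phi(x1)|phi(x2)>|^2: 1.0 for identical points, decaying with distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-d2)

def gram_matrix(X: list[list[float]]) -> list[list[float]]:
    """Precomputed kernel (Gram) matrix handed to the SVM solver."""
    return [[kernel(xi, xj) for xj in X] for xi in X]

def evaluate(K: list[list[float]], y: list[int], test_idx: int) -> int:
    """Toy evaluator: predict the label of the training point with the
    highest kernel value (nearest neighbor in kernel space)."""
    scores = [(K[test_idx][j], y[j]) for j in range(len(y)) if j != test_idx]
    return max(scores)[1]
```

In a quantum-kernel SVM, the hardware's job ends at the Gram matrix; a classical solver consumes it as a precomputed kernel, which is also what makes stage-by-stage speedup accounting clean.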
What I’d Do Next
- Error‑mitigation comparisons; hybrid classical‑quantum scheduling to reduce queue latency.
Links
- Research preprint: arXiv:2401.12485