Section 1
τ-chrono v0.2 on QuTech Tuna-9 superconducting processor. Petz recovery + Bayesian composition. Zero training circuits. 4,096 shots per data point.
Section 2
Every number sourced from published papers or vendor documentation. We show exactly where competitors beat us today.
| Approach | Method | Best Improvement | Training Data | Hardware | Open Source | Scales to 100Q? |
|---|---|---|---|---|---|---|
| ML Noise Model (arXiv:2509.12933) | Bayesian ML optimization | 50-65% Hellinger | ~4,000 circuits | IBM 4-9Q | No | No |
| Adaptive Bayesian PEC (arXiv:2404.13269) | Bayesian tracking + PEC | 42% accuracy gain | Moderate | IBM 5Q | No | No |
| Q-fid (2025) | LSTM neural network | 24.7x RMSE reduction | 700K parameters | IBM | Partial | No |
| Q-CTRL Fire Opal (commercial product) | AI pulse optimization | up to 9,000x* | Proprietary | IBM + IonQ | No | Unclear |
| IBM Sparse PL + PEC (Qiskit Runtime) | Pauli-Lindblad learning | ~25% at 100Q+ | Per-layer learning | IBM 127Q | Yes (Qiskit) | Yes |
| Mitiq ZNE (Unitary Fund) | Zero noise extrapolation | 18-24x error reduction | None | Multi-platform | Yes | Yes |
| τ-chrono v0.2 (this project) | Petz recovery map (applied to noise prediction) | +46% at depth 50 | 0 circuits | QuTech Tuna-9 | Yes (MIT) | Yes |
* Q-CTRL's reported 9,000x improvement factor reflects a large relative gain from a very challenging baseline. Different metrics can yield very different numbers for the same underlying result. We report absolute improvement to facilitate direct comparison.
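The footnote's point about metric choice can be made concrete with hypothetical numbers (these are illustrative only, not Q-CTRL's actual data):

```python
# Illustrative only: hypothetical error rates, not any vendor's real data.
baseline_error = 0.90      # error rate on a deliberately hard baseline
mitigated_error = 0.0001   # error rate after mitigation

relative_gain = baseline_error / mitigated_error   # roughly 9,000x
absolute_gain = baseline_error - mitigated_error   # roughly 0.9

print(f"relative: {relative_gain:.0f}x, absolute: {absolute_gain:.4f}")
```

The same underlying result reads as a four-order-of-magnitude "improvement factor" or as a sub-1.0 absolute error reduction, which is why the table above reports absolute improvement.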
Section 3
How our physics-based approach compares across circuit depths. ML surpasses us around depth 30, but only after ~4,000 training circuits; here we compare the trade-offs.
Section 4
Understanding the theoretical limits of each approach is more important than today's numbers.
**Pure physics (current).** Mathematical limit from Bayesian saturation: the Petz recovery map gives the optimal retrodiction, but single-qubit composition bounds eventually saturate.

**Empirical, data-hungry.** Keeps improving with more training data and learns hardware-specific correlations that physics models miss, but requires retraining after every calibration cycle.

**Realistic budget constraint.** Most teams cannot afford 4,000 calibration circuits. With realistic budgets, ML performance drops below our physics baseline.

**Our planned approach.** Multiple enhancement strategies are under development, projected to significantly exceed the current ceiling while maintaining the core advantages.
Section 5
For unstructured learning, ML needs O(4^n) training circuits as qubit count grows. Physics needs O(poly(n)). This is the entire thesis.
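The scaling gap can be illustrated with a back-of-envelope count. The polynomial exponent below is purely illustrative (the source does not commit to a specific one); only the 4^n term for unstructured learning comes from the text:

```python
# Back-of-envelope: training circuits needed vs. qubit count.
# 4**n for unstructured channel learning; n**3 stands in for a
# generic poly(n) physics-model cost (illustrative exponent).
for n in (2, 5, 10, 20):
    unstructured_ml = 4 ** n
    physics_model = n ** 3
    print(f"n={n:2d}  unstructured ML ~ {unstructured_ml:.2e}  physics ~ {physics_model}")
```

Already at 20 qubits the unstructured count exceeds 10^12 circuits, far beyond any realistic calibration budget.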
Section 6
We have a clear iteration path to push beyond the current ceiling while preserving our core advantages.
Section 7
The fundamental problem: a quantum channel on n qubits lives in a 4^n-dimensional operator space. For unstructured noise learning this cost is information-theoretic, so no algorithm can avoid it. Structured ML methods can partially mitigate it, but face diminishing returns as correlations grow.
The Petz recovery insight: The Petz recovery map (Petz, 1986) is the unique Bayesian retrodiction functor (Parzygnat & Buscemi, 2023). Combined with the strengthened DPI (Junge, Renner, Sutter, Wilde & Winter, 2018), it provides the mathematically canonical decomposition of how noise propagates through a circuit. We applied this theoretical framework to practical circuit noise prediction. This gives us the correct structure "for free" — we do not need to learn it from data.
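The Petz map has a standard closed form: for a channel N with adjoint N† and a full-rank reference state σ, R_{σ,N}(X) = σ^{1/2} N†(N(σ)^{-1/2} X N(σ)^{-1/2}) σ^{1/2}. A minimal NumPy sketch of this formula (function names such as `petz` and `msqrt` are ours, not from the τ-chrono codebase), checking the textbook property R_{σ,N}(N(σ)) = σ on an amplitude-damping channel:

```python
import numpy as np

def msqrt(m, power=0.5):
    # Matrix power via eigendecomposition; assumes a Hermitian PSD input,
    # and full rank when a negative power is requested.
    w, v = np.linalg.eigh(m)
    return (v * np.clip(w, 0.0, None) ** power) @ v.conj().T

def apply_channel(kraus, rho):
    # N(rho) = sum_k K rho K†
    return sum(K @ rho @ K.conj().T for K in kraus)

def adjoint_channel(kraus, x):
    # N†(X) = sum_k K† X K
    return sum(K.conj().T @ x @ K for K in kraus)

def petz(kraus, sigma, rho):
    # R_{sigma,N}(rho) = sigma^{1/2} N†( N(sigma)^{-1/2} rho N(sigma)^{-1/2} ) sigma^{1/2}
    ns_inv_half = msqrt(apply_channel(kraus, sigma), -0.5)
    sqrt_sigma = msqrt(sigma)
    return sqrt_sigma @ adjoint_channel(kraus, ns_inv_half @ rho @ ns_inv_half) @ sqrt_sigma

# Amplitude-damping channel with decay probability gamma = 0.3.
g = 0.3
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]]),
         np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])]
sigma = np.diag([0.7, 0.3])  # full-rank reference state

recovered = petz(kraus, sigma, apply_channel(kraus, sigma))
print(np.allclose(recovered, sigma))  # the reference state is recovered exactly
```

Trace preservation of the Petz map follows from Tr[σ N†(A)] = Tr[N(σ) A], so the sketch defines a valid channel whenever N(σ) is full rank.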
The path forward: By using physics to capture the exponential structure, the remaining problem is reduced from 4^n dimensions to O(poly(n)) residual parameters. We have a clear roadmap to push beyond the current ~50% ceiling while preserving our core advantages: zero training data, polynomial scaling, and provable bounds.
Hardware access generously provided by QuTech through the Quantum Inspire platform.