Evan Peters
e6peters.bsky.social
Postdoc (UWaterloo/Perimeter). I study quantum information and machine learning.
This technique also works for surface code memory experiments, where training on data generated at 10x the device error rate is optimal (and even 20x still beats both training on data sampled at the device's true error rate and minimum-weight perfect matching, MWPM!). Oddly, the optimal training rate coincides with where MWPM breaks down:
June 9, 2025 at 1:42 PM
This behavior holds for repetition codes (across hyperparameters, initializations, and architectures), as the theory predicts:
June 9, 2025 at 1:42 PM
I put out a preprint on machine learning for QEC decoders, with some theory about importance sampling for decoding. Takeaway: you can robustly improve ML decoder performance by training on data generated at error rates higher than the device's, sometimes 10x higher! arxiv.org/abs/2505.22741
June 9, 2025 at 1:42 PM
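The thread's takeaway can be illustrated with a toy numpy experiment (this is NOT the paper's decoder or its importance-sampling scheme; all parameters and the lookup-table "model" here are illustrative): for a small repetition code under i.i.d. bit flips, estimate the most likely logical class per syndrome from data sampled at an elevated error rate, then evaluate at the device's (lower) rate. Sampling at the higher rate covers rare syndromes that data at the true device rate almost never produces.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                           # distance-5 repetition code (toy example)
p_train, p_eval = 0.20, 0.05    # elevated training rate vs. "device" rate

def sample(p, n):
    """Sample i.i.d. bit-flip errors; return (syndromes, labels).
    Syndrome bits are parities of adjacent qubits; the label is the first
    error bit, which fixes the logical class once the syndrome is known."""
    e = (rng.random((n, d)) < p).astype(np.int64)
    s = e[:, :-1] ^ e[:, 1:]                   # d-1 syndrome bits
    return s, e[:, 0]

def syndrome_index(s):
    # Pack syndrome bits into an integer index.
    return s @ (1 << np.arange(s.shape[1]))

# "Train": estimate the majority logical class per syndrome from samples
# drawn at the elevated rate p_train (a lookup table as a stand-in for a
# trained model; the higher rate covers rare syndromes far better).
s_tr, y_tr = sample(p_train, 50_000)
idx = syndrome_index(s_tr)
n_syn = 1 << (d - 1)
ones  = np.bincount(idx, weights=y_tr, minlength=n_syn)
total = np.bincount(idx, minlength=n_syn)
table = (ones > total / 2).astype(np.int64)    # majority label per syndrome

# Evaluate at the device's true error rate.
s_te, y_te = sample(p_eval, 100_000)
acc = (table[syndrome_index(s_te)] == y_te).mean()
print(f"accuracy at p={p_eval}: {acc:.4f}")
```

For this code, the Bayes-optimal decision per syndrome is the same at any sub-50% error rate, so training at the elevated rate loses nothing while giving many more samples of the informative, rare syndromes; the paper's theory makes the analogous statement precise for learned decoders.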