Evan Peters
@e6peters.bsky.social
Postdoc (UWaterloo/Perimeter). I study quantum information and machine learning.
Please reach out with any questions/comments!
June 9, 2025 at 1:42 PM
this technique also works for surface code memory experiments, where training on data with 10x the true error rate is optimal (but even 20x is still better than training on data sampled from the device at its true error rate, and better than MWPM!). The optimum (weirdly) coincides with where MWPM breaks down:
June 9, 2025 at 1:42 PM
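A minimal sketch of that kind of experiment, assuming stim and pymatching (neither library is named in the post) and illustrative parameters (TRUE_P, SCALE are placeholders, not values from the thread): sample surface-code memory-experiment training data at an inflated error rate, and score an MWPM baseline at the device's true rate.

```python
import numpy as np
import stim
import pymatching

TRUE_P = 0.002   # hypothetical "true" device error rate (illustrative)
SCALE = 10       # train on data generated at SCALE * TRUE_P

def memory_circuit(p, distance=3, rounds=3):
    """Rotated surface-code memory experiment with circuit-level noise of strength p."""
    return stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=rounds,
        after_clifford_depolarization=p,
        before_round_data_depolarization=p,
        before_measure_flip_probability=p,
        after_reset_flip_probability=p,
    )

# Training data sampled at the inflated error rate (detector outcomes + logical flips).
train_circuit = memory_circuit(SCALE * TRUE_P)
train_dets, train_obs = train_circuit.compile_detector_sampler().sample(
    shots=100_000, separate_observables=True
)
# ...feed (train_dets, train_obs) to whatever learned decoder is being trained...

# MWPM baseline, decoded on data sampled at the true error rate.
eval_circuit = memory_circuit(TRUE_P)
dets, obs = eval_circuit.compile_detector_sampler().sample(
    shots=100_000, separate_observables=True
)
matching = pymatching.Matching.from_detector_error_model(
    eval_circuit.detector_error_model(decompose_errors=True)
)
predictions = matching.decode_batch(dets)
mwpm_logical_error_rate = np.mean(np.any(predictions != obs, axis=1))
print(f"MWPM logical error rate: {mwpm_logical_error_rate:.4f}")
```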
This behavior holds for repetition codes (across hyperparameters, initializations, architectures), which the theory predicts:
June 9, 2025 at 1:42 PM
so begins a revolution in inline formatting. My submission is exp(-β H) / Tr[exp(-β H)] :D
February 1, 2025 at 6:21 PM
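(Reading the submission as the Gibbs/thermal state, written out in display form:)

```latex
\rho_\beta = \frac{e^{-\beta H}}{\operatorname{Tr}\!\left[e^{-\beta H}\right]}
```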
Reposted by Evan Peters
Quantum Computing Stack Exchange has the potential to be a resource of great value, but the only source for its value is its users (not unlike Bluesky). So I hope some will consider participating, whether by asking or answering questions or just giving some upvotes.
November 28, 2024 at 4:57 PM