Johannes Jakob Meyer
@jjmeyer.bsky.social
Quantum Information @ FU Berlin. Imprint on www.johannesjakobmeyer.com
I also wanted to add a shout-out to Álvaro, Thomas and Jan who put out a work with similar mindset today: arxiv.org/abs/2509.21308. It turns out that our results mostly complement each other quite nicely, so please have a look there as well!
Efficient Quantum Measurements: Computational Max- and Measured Rényi Divergences and Applications
Quantum information processing is limited, in practice, to efficiently implementable operations. This motivates the study of quantum divergences that preserve their operational meaning while faithfull...
arxiv.org
September 26, 2025 at 1:00 PM
A big thanks to my coauthors Asad, Jacopo, Lorenzo, Sofiene and Jens.
September 26, 2025 at 1:00 PM
This stands in stark contrast to how computational information theory has been approached to date, with a focus on single-shot statements. These are of course more precise, but also much harder to manipulate and to build intuition for.
September 26, 2025 at 1:00 PM
We believe that our definition has a nice level of abstraction that captures the essence of information theory under computational constraints while at the same time having the look and feel of unbounded information theory.
September 26, 2025 at 1:00 PM
I invite you to discover many exciting things in the paper, like computationally measured divergences and a computational Stein's Lemma or a computational Rains bound on efficiently distillable entanglement.
September 26, 2025 at 1:00 PM
Another highlight is that we can obtain separations in hypothesis testing, both between bounded and unbounded observers and between classical and quantum observers. This means there exist classical distributions that can be tested with access to a quantum computer but not classically.
September 26, 2025 at 1:00 PM
We can use the computational relative entropy as a mother quantity. We exemplify this by defining a computational entropy S̲(ρₙ) := −D̲(ρₙ‖𝕀) and showing that it quantifies the best rate at which we can compress a state under complexity restrictions -- just like the entropy in the unbounded setting.
September 26, 2025 at 1:00 PM
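For readers who want to see why this mirrors the unbounded case: there, the identity S(ρ) = −D(ρ‖𝕀) is immediate from the definition of the relative entropy, since log 𝕀 = 0:

```latex
-D(\rho \,\|\, \mathbb{I})
= -\operatorname{Tr}\!\left[\rho\left(\log\rho - \log\mathbb{I}\right)\right]
= -\operatorname{Tr}\!\left[\rho\log\rho\right]
= S(\rho).
```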
For example, we obtain a computational version of Pinsker's inequality, which looks exactly like the unbounded relation, except that the involved quantities carry underscores, marking them as computational quantities.
September 26, 2025 at 1:00 PM
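For reference, the unbounded (quantum) Pinsker inequality, with D measured in bits, reads

```latex
D(\rho \,\|\, \sigma) \;\ge\; \frac{1}{2\ln 2}\,\big\|\rho - \sigma\big\|_1^2 ,
```

and, per the post, the computational statement has the same shape with the underscored quantities substituted in; see the paper for the precise version.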
One of our main contributions is to turn this process of "polynomial regularization" into a rigorous notion. It allows us to reap the benefits of regularizing to obtain simpler expressions that closely resemble their counterparts from unbounded information theory.
September 26, 2025 at 1:00 PM
Of course, talking about polynomial scaling only makes sense when there is a scaling parameter. Therefore, all our results pertain to families of quantum states indexed by some parameter n. This is often the number of qubits, but need not be.
September 26, 2025 at 1:00 PM
We propose the following remedy: we define the computational relative entropy D̲(ρₙ‖σₙ) as the best error exponent in asymmetric hypothesis testing when restricted to polynomially many copies and polynomial-time tests, taking the polynomials to be arbitrarily large.
September 26, 2025 at 1:00 PM
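Schematically -- the notation here is my paraphrase, not the paper's exact definition -- writing β_ε(·‖·; C) for the optimal type-II error at type-I error ε over tests of complexity at most C, the idea is

```latex
\underline{D}(\rho_n \,\|\, \sigma_n) \;\sim\; \sup_{\text{poly.\ } p,\,q}\;
\frac{1}{p(n)}\left(-\log \beta_{\varepsilon}\!\big(\rho_n^{\otimes p(n)} \,\big\|\, \sigma_n^{\otimes p(n)};\; q(n)\big)\right).
```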
But there is a catch! The connection between hypothesis testing and the relative entropy only holds if arbitrary tests can be performed. The optimal tests, however, could be exponentially complex to implement. We also need a large number of copies of the state to get sufficient statistics.
September 26, 2025 at 1:00 PM
The reason the relative entropy and its derived quantities appear so often in information theory is that the relative entropy quantifies the best achievable error exponent in asymmetric hypothesis testing, a primitive to which many important information processing tasks can be reduced.
September 26, 2025 at 1:00 PM
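The unbounded statement in the background here is the quantum Stein's lemma (Hiai–Petz, Ogawa–Nagaoka): for any fixed type-I error ε ∈ (0,1), the optimal type-II error β_ε satisfies

```latex
\lim_{n \to \infty} \; -\frac{1}{n} \log \beta_{\varepsilon}\!\big(\rho^{\otimes n} \,\big\|\, \sigma^{\otimes n}\big) \;=\; D(\rho \,\|\, \sigma).
```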
The relative entropy D(ρ‖σ) is a fundamental quantity in classical and quantum information theory. It appears in many important theorems and serves as a mother quantity from which other important measures are derived -- for example, mutual information and entropy.
September 26, 2025 at 1:00 PM
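To make D(ρ‖σ) = Tr[ρ(log ρ − log σ)] concrete, here is a minimal numerical sketch (not from the paper; the function name is mine) that evaluates it for small density matrices with NumPy:

```python
import numpy as np

def relative_entropy(rho, sigma, eps=1e-15):
    """Quantum relative entropy D(rho || sigma) in bits.

    Assumes supp(rho) is contained in supp(sigma); eigenvalues are
    clipped at `eps` for numerical stability.
    """
    # Matrix logarithms via eigendecomposition (rho, sigma are Hermitian).
    vals_r, vecs_r = np.linalg.eigh(rho)
    vals_s, vecs_s = np.linalg.eigh(sigma)
    log_rho = vecs_r @ np.diag(np.log2(np.clip(vals_r, eps, None))) @ vecs_r.conj().T
    log_sigma = vecs_s @ np.diag(np.log2(np.clip(vals_s, eps, None))) @ vecs_s.conj().T
    # D(rho || sigma) = Tr[rho (log rho - log sigma)]
    return float(np.real(np.trace(rho @ (log_rho - log_sigma))))

# For commuting (diagonal) states this reduces to the classical KL divergence:
rho = np.diag([0.5, 0.5])
sigma = np.diag([0.75, 0.25])
print(relative_entropy(rho, sigma))  # ~0.2075 bits
```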
I think the best bit is that they have a guy named "Clifford Mapp" on their team!
April 20, 2025 at 6:27 PM
Very nice! Michael Wolf also has some very good newer ones: mediatum.ub.tum.de/download/170...
March 27, 2025 at 9:31 PM
Context please
January 18, 2025 at 12:18 PM
I guess that particular myth was formulated so that you can debunk it. I never heard anyone claim that error mitigation cannot be used, just that it is not a scalable solution to the problem.
January 14, 2025 at 8:20 AM
We should all recall once in a while that we as a community are in a hamster wheel of our own making. This means we can also just choose to be nicer and more helpful to each other.
December 9, 2024 at 8:53 AM
That said, one-line rejects should not happen. At the end of the day, we've all been on the other side and been justifiably annoyed by bad reviews.
December 9, 2024 at 8:53 AM
Therefore, it's more of an answer to the question: "how likely is this to cross the bar relative to the other submissions in your batch?". In that way, "weak reject" doesn't mean the submission is not good science, but just that there is so much other good stuff out there.
December 9, 2024 at 8:53 AM
Something that is important for context – and that I only realized after seeing the process from the inside – is that the scores are not meant to judge the quality of the submission. The task of the PC is to make a program.
December 9, 2024 at 8:53 AM