Ken Wharton
@kenwharton.bsky.social
650 followers 1K following 230 posts
Physics professor at San Jose State. Quantum Foundations. Big fan of space and time, and also many things therein.
Reposted by Ken Wharton
Great news for Michael Berry who has received the Isaac Newton medal of the IOP. What I absolutely love about his work is that it cuts across so many different areas of physics - from the foundations of quantum physics to understanding the rainbow. physicsworld.com/a/theoretica...
Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize – Physics World
Berry recognized for his contributions in mathematics and theoretical physics over a 60-year career
physicsworld.com
Interesting question! As I see it, many different causal models are consistent with E&M. (The causation isn't in the bare equations; those are just correlations.) Different cases evidently call for different causal models (controlling particles with lasers, controlling fields with charges, etc.)
Who knows? No one knows the right ontological model of what is "really" happening when we're not looking. My point is that if you are just looking at absorption phenomena, and trying to model them using classical E&M, there is motivation to consider causal models with future inputs/constraints.
So, if I'm consistent in my application of causal logic, every time I see a photoelectric effect I should infer that I should use final-field inputs (at least in part) to explain the observations. Using final-field causal inputs is actually a lot simpler than ditching classical E&M entirely.
But here's the thing: this 'time-reversed movie' I mentioned doesn't just happen when I play an emission event backwards. The same situation appears in real life, every time an array of atoms actually absorbs a photon! That's the photoelectric effect, an empirical fact.
If I took a movie of this and played it in reverse, despite the apparent advanced field, I'd reach the same conclusion. The way to explain a convergence of the field (onto one particle) would be to use a *final* field input. (Final in the time-reversed movie, meaning "initial" originally.)
Okay, now, replace the charges with a bunch of excited atoms, in some metastable state. One atom decays. A classical E&M model is still pretty good here. A similar initial-field-input causal model is needed to explain the pattern of where the field can eventually be detected.
Yes, exactly. If you use a causal model with a different field input, you won't get the retarded solution. And notice it's not a Cauchy problem! The particle input isn't all at the beginning, it's a full worldline. (And if I told you I shook the particle with a laser, that's another deal entirely.)
Now, I want to causally model this experiment, in this region of spacetime (without zooming out to the whole universe!), recovering the observed retarded fields from my model output. What model “inputs” should I therefore use? (Think about both field inputs and 4-current inputs.)
This will be fun! :-) First, a warm-up exercise. A bunch of charged particles sit on a plane. We decide to shake particle X. There are many solutions to the eqns. of E&M which are consistent with the worldline of X. But we generally see the EM fields of one particular solution (the retarded field).
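For reference, the freedom being pointed at in that warm-up can be written in textbook notation (a gloss, not a quote from the thread): the general solution of Maxwell's equations for a given 4-current J splits as

\begin{aligned}
A^\mu(x) &= A^\mu_{\mathrm{in}}(x) + \int d^4x'\, G_{\mathrm{ret}}(x-x')\, J^\mu(x') \\
         &= A^\mu_{\mathrm{out}}(x) + \int d^4x'\, G_{\mathrm{adv}}(x-x')\, J^\mu(x')
\end{aligned}

Both lines describe the same field. The "causal model" is the choice of which free field (A_in or A_out) to treat as an input alongside the worldline of X; the observed retarded field corresponds to the input choice A_in = 0.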
If you decide my argument works, start a new thread and ask me about the analogous case of microscopic classical E&M. The empirical evidence for causation in that case sometimes points in a very different direction!
That’s not circular logic: that’s using empirical observations to draw conclusions. Sure, it doesn’t go through if you think that there’s no causation, just bare correlations. But then, wouldn’t the empirical success of causal models in this context be evidence that we should use them? ;-)
So your questions really come down to this: Why do classical causal models with past inputs work, and why do models with future inputs fail? If our universe can be causally modeled, I can see only one possible answer: some relevant external “inputs” to our universe really do lie in our past.
When we model thermodynamical processes in some region, we set the inputs at the beginning, and almost without trying we find successful models which generally explain the observations. Sure, you could try different causal models, putting the inputs at the end. But those models fail, empirically.
Causal models are asymmetric, by definition. In any causal model, we treat the causes as special inputs, imagining that we can set them to anything we want. Then we compute the result for everything else, calculating the “effects” of the causes as we counterfactually tweak the inputs.
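To make "special inputs, computed effects" concrete, here is a minimal toy sketch in Python; the damped-oscillator dynamics and the choice of inputs are an invented illustration, not anything from the posts:

def run_model(inputs, steps=200, dt=0.05):
    # the designated "causes": freely settable initial position and velocity
    x, v = inputs
    xs = []
    for _ in range(steps):
        a = -x - 0.3 * v            # the bare equations (a damped oscillator)
        x, v = x + v * dt, v + a * dt
        xs.append(x)
    return xs                       # the computed "effects"

baseline = run_model((1.0, 0.0))
tweaked = run_model((1.0, 2.0))     # counterfactually tweak one input
print(baseline[-1], tweaked[-1])    # downstream effects change with the inputs

The asymmetry lives entirely in which variables get treated as inputs; the equations inside the loop are the same either way.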
We observe that at some point in the distant past the universe had a very smooth/uniform energy density. Any non-random explanation of this fact will in turn explain all of those "problems". They're all causally downstream of this Past Hypothesis (PH), and therefore aren't really problems; the only problem is the PH.
That part of the problem is essentially resolved by the PH. Normally “the problem of time” shows up when trying to get QM and GR to play nice together. The root problem, IMHO, is that GR is set in spacetime, and QM isn’t. And the 3D space -> configuration space move in QM doesn’t work in 4D.
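In symbols, the configuration-space point (again a gloss, not a quote): a single particle has a wavefunction psi(x, t) defined over space and time, but N particles have

\Psi(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N, t) : \mathbb{R}^{3N} \times \mathbb{R} \to \mathbb{C}

a function on 3N-dimensional configuration space plus time, not on 4D spacetime, and that construction has no equally natural spacetime-based analogue.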
Look at some of the computational work on this topic and I think you'll see the issue has been solved beyond any doubt. A simulation can ensure perfect time-symmetry, and the explanation of the 2nd law is entirely attributable to the (coarse-grained) boundary conditions (a toy version is sketched after the link below). arxiv.org/abs/cond-mat...
Causality is an effect
Using symmetric boundary conditions at separated times, I show analytically that both the time ordering of (macroscopic) causality and the direction of entropy increase follow from these boundary cond...
arxiv.org
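For anyone who wants a miniature version of that kind of demonstration: below is a toy model (a Kac ring, not the model in the linked paper) whose dynamics are exactly time-reversible, and where the coarse-grained entropy grows away from whichever time boundary carries the low-entropy constraint.

import numpy as np

rng = np.random.default_rng(0)
N = 2000
markers = rng.random(N) < 0.1        # fixed, randomly placed "scattering" edges
spins = np.ones(N, dtype=int)        # low-entropy boundary condition: all spins up

def step(s):
    # each spin moves one site around the ring, flipping if it crosses a marked edge;
    # the map is exactly invertible, so the dynamics are time-symmetric
    return np.roll(np.where(markers, -s, s), 1)

def coarse_entropy(s):
    # entropy of the single macrovariable "fraction of up spins"
    p = min(max((s == 1).mean(), 1e-12), 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

traj = [spins]
for _ in range(500):
    traj.append(step(traj[-1]))

entropies = [coarse_entropy(s) for s in traj]
print(entropies[0], entropies[-1])   # ~0 at the constrained boundary, near ln 2 far from it
# Reading this same trajectory backwards is an equally valid solution of the
# reversible dynamics, now with the low-entropy constraint at the *final* time
# and entropy growing toward the past: the arrow follows the boundary condition.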
But what if it's not a *complete* boundary constraint? What if the Big Bang constraint is something like a smooth energy density with a random/unconstrained phase? Then one gets a boundary-explanation without having to "stipulate all facts". You don't even have to break the uncertainty principle.
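One concrete way to write a partial constraint of that sort (an illustration in Fourier language, not a quote from the post): fix the amplitudes of the energy-density modes but not their phases,

\tilde\rho(\mathbf{k}, t_{\mathrm{BB}}) = R(|\mathbf{k}|)\, e^{i\theta_{\mathbf{k}}},
\qquad R \text{ fixed (smooth, nearly uniform density)}, \qquad
\theta_{\mathbf{k}} \sim \mathrm{Uniform}[0, 2\pi).

The boundary then constrains some degrees of freedom while leaving the rest as free/random data, so it can explain without stipulating every microscopic fact.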
Suppose someone blindfolds me and puts me next to a glacier, where I can feel a temperature gradient. Even if I don’t believe there’s an “arrow of space”, shouldn’t I still be allowed to hypothesize that there’s some thermal reservoir off to one side acting as a boundary constraint?
This gets into the question of whether boundary constraints can serve as explanations in their own right. I think they can. From an interventionist-causation perspective, I can always ask the counterfactual question “what if that boundary were different, or absent?” forums.fqxi.org/d/3139-funda...
Fundamental is Non-Random by Ken Wharton - QSpace Forums
forums.fqxi.org
In your branch, or in all branches? 😉
What you find “compelling” here, I suspect, is that such language ties into your causal reasoning, not a temporal process. So my advice is to keep those distinct. The interventionist view of causation works perfectly fine in a block universe, no additional time dimension required. (See Price's book)
Possibility spaces are very useful for making predictions; just ask any poker player. Those possibility spaces might seem, to a player, to be “evolving” as the player Bayesian-updates on new information. But they’re not real in an ontic sense. There really is just one factual card situation.
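A minimal illustration of the poker point in code (the specific cards and the "information learned" are invented for the example): the possibility space shrinks as we update, while the factual card never changes.

from fractions import Fraction

ranks = list(range(2, 15))                     # 2..10, J=11, Q=12, K=13, A=14
suits = "shdc"
deck = [(r, s) for r in ranks for s in suits]

our_card = (14, "s")                           # we hold the ace of spades
# prior possibility space: uniform over the 51 cards the opponent might hold
space = {c: Fraction(1, 51) for c in deck if c != our_card}

def update(space, likelihood):
    # ordinary Bayesian updating: reweight by the likelihood, renormalize
    posterior = {c: p * likelihood(c) for c, p in space.items()}
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items() if p > 0}

# we learn the hidden card is a heart...
space = update(space, lambda c: Fraction(1 if c[1] == "h" else 0))
# ...and later that it outranks a ten
space = update(space, lambda c: Fraction(1 if c[0] > 10 else 0))

print(len(space), sum(space.values()))         # 4 equally likely possibilities, total 1

Nothing about the deck "evolved" here; only the bookkeeping of what the player knows did.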