Christopher W. Lynn
@chriswlynn.bsky.social
Statistical physics of the brain 🧠 & other complex systems 🦠 | Asst Prof of Physics & QBio at Yale

X: @ChrisWLynn
Lab: lynnlab.yale.edu/
Got to meet one of my heroes today. John Hopfield broke down barriers for generations of physicists:

“Physics at its best is a point of view for understanding the totality of man and the Universe.”
November 14, 2025 at 8:34 PM
In hippocampal neural dynamics, the maximum-irreversibility coarse-graining uncovers a large-scale loop of flux in neural space that is directly driven by the animal's movement in physical space.
June 5, 2025 at 6:17 PM
In chemical oscillators, the maximum-irreversibility coarse-graining picks out the macroscopic loops of flux that dominate the dynamics.
June 5, 2025 at 6:17 PM
Across a range of living systems, this maximum-irreversibility coarse-graining uncovers key biological functions.

For example, in models of kinesin (a motor protein that ships cargo inside your cells), we can derive simplified dynamics without losing any irreversibility.
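As a toy sketch of what retaining irreversibility under coarse-graining means (my construction, not the paper's kinesin model or algorithm; all numbers are made up): take a biased six-state ring as a cartoon of a motor-protein cycle, lump it into three macrostates in every possible way, and keep the lumping whose coarse-grained chain produces the most entropy.

```python
# Toy sketch: a biased six-state ring as a cartoon of a motor-protein
# cycle (NOT the paper's kinesin model or algorithm; numbers are made up).
# Brute-force the lumping into 3 macrostates that keeps the most
# entropy production (EP).
import itertools
import numpy as np

n, k = 6, 3
p_fwd = 0.8                      # forward bias drives a net clockwise flux

P = np.zeros((n, n))             # discrete-time transition matrix
for i in range(n):
    P[i, (i + 1) % n] = p_fwd
    P[i, (i - 1) % n] = 1 - p_fwd

def steady_state(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def entropy_production(P, pi):
    # EP per step: sum over ordered pairs of flux * log(flux / reverse flux)
    s = 0.0
    for x in range(len(pi)):
        for y in range(len(pi)):
            a, b = pi[x] * P[x, y], pi[y] * P[y, x]
            if a > 0 and b > 0:
                s += a * np.log(a / b)
    return s

def lump(P, pi, labels):
    # Coarse-grain by pooling steady-state flux between macrostates.
    m = max(labels) + 1
    pic = np.zeros(m)
    Pc = np.zeros((m, m))
    for x in range(n):
        pic[labels[x]] += pi[x]
        for y in range(n):
            Pc[labels[x], labels[y]] += pi[x] * P[x, y]
    return Pc / pic[:, None], pic

pi = steady_state(P)
best = max(
    (lab for lab in itertools.product(range(k), repeat=n)
     if len(set(lab)) == k),
    key=lambda lab: entropy_production(*lump(P, pi, lab)),
)
print("micro EP:", round(entropy_production(P, pi), 3))
print("best 3-state lumping:", best,
      "coarse EP:", round(entropy_production(*lump(P, pi, best)), 3))
```

For this ring the best lumping is the contiguous one, and it retains only part of the microscopic entropy production; the point of the kinesin result is that for suitable models and coarse-grainings, none of it needs to be lost.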
June 5, 2025 at 6:17 PM
When living systems burn energy, they drive irreversible dynamics and produce entropy.

Under coarse-graining, the apparent irreversibility can only decrease.

This means that -- at every level of description -- there's a unique coarse-graining with maximum irreversibility.
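In symbols (one standard convention from stochastic thermodynamics; notation mine, not necessarily the paper's): for a stationary Markov process with steady-state distribution $\pi$ and transition rates $W_{xy}$, the entropy production rate is

$$
\dot{S} \;=\; \frac{1}{2}\sum_{x,y}\big(\pi_x W_{xy}-\pi_y W_{yx}\big)\,\ln\frac{\pi_x W_{xy}}{\pi_y W_{yx}} \;\ge\; 0.
$$

Lumping microstates into macrostates pools the fluxes $\pi_x W_{xy}$, and the log-sum inequality then gives $\dot{S}_{\text{coarse}} \le \dot{S}$: coarse-graining can only hide irreversibility. So at any fixed number of macrostates, the lumping that retains the most entropy production is the maximum-irreversibility coarse-graining referred to above.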
June 5, 2025 at 6:17 PM
Biology consumes energy at the microscale to power functions across all scales: from proteins and cells to entire populations of animals.

Led by @qiweiyu.bsky.social and @mleighton.bsky.social, we study how coarse-graining can help bridge this gap 👇🧵

arxiv.org/abs/2506.01909
June 5, 2025 at 6:17 PM
We review emerging applications, which range from neuroscience and biology to machine learning and engineering...
May 16, 2025 at 5:14 PM
Starting only with the minimum description length (MDL) principle, we show that the optimal details provide as much information about the data as possible while remaining maximally random with respect to all unobserved details.

This "minimax entropy" principle was proposed 25 years ago but remains largely unexplored.
May 16, 2025 at 5:14 PM
When constructing models of the world, we aim for good compressions: models that are as accurate as possible with as few details as possible. But which details should we include in a model?

An answer lies in the "minimax entropy" principle 👇
arxiv.org/abs/2505.01607
May 16, 2025 at 5:14 PM
Moreover, the inferred connection weights are (1) sparse, (2) heavy-tailed, (3) balanced, and (4) directed -- all key features observed in synaptic wiring between neurons.
April 16, 2025 at 1:49 PM
With only a small number of direct inputs (and no interactions between them), we are able to predict complex higher-order dependencies on multiple inputs.
April 16, 2025 at 1:49 PM
This means that real neurons are closely approximated *quantitatively* by the first artificial neuron, proposed by McCulloch and Pitts in 1943.
April 16, 2025 at 1:49 PM
In the mouse hippocampus and visual cortex, we find that direct dependencies capture 90% of a neuron's activity. This leaves only 10% for interactions between inputs and inherent noise.
April 16, 2025 at 1:49 PM
We decompose the computation of a neuron into three components (a toy sketch follows the list):
1. Direct dependencies on inputs (as in artificial neurons)
2. Indirect dependencies (which require interactions between inputs)
3. Inherent stochasticity (which doesn't depend on inputs)
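One simple way to operationalize this three-way split (my toy construction, not the paper's exact estimator): measure, in bits, how much of the output entropy a best-fit linear-sum model explains (direct), how much of the remaining input dependence it misses (interactions), and how much is irreducible conditional entropy (noise).

```python
# Toy operationalization (mine, not the paper's exact estimator) of the
# three-way split for a binary neuron with 3 binary inputs: direct
# (linear-sum) dependence, interaction effects, and inherent noise.
import itertools
import numpy as np

n = 3
X = np.array(list(itertools.product([0, 1], repeat=n)), float)  # 8 patterns

# Hypothetical ground-truth neuron: mostly a weighted sum of inputs,
# plus a small pairwise interaction between inputs 0 and 1.
w_true = np.array([2.0, -1.0, 1.5])
p_spike = 1 / (1 + np.exp(-(X @ w_true + 1.2 * X[:, 0] * X[:, 1] - 1.0)))

p_x = np.full(len(X), 1 / len(X))   # uniform input distribution
p_y = p_x @ p_spike                  # marginal firing probability

def H2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

H_y = H2(p_y)                # total output entropy
H_noise = p_x @ H2(p_spike)  # inherent stochasticity, H(y | x)

# Fit the best linear-sum (McCulloch-Pitts-style) model by gradient
# descent on the expected log-loss.
w, b = np.zeros(n), 0.0
for _ in range(5000):
    q = 1 / (1 + np.exp(-(X @ w + b)))
    g = p_x * (q - p_spike)          # gradient wrt the logits
    w -= 2.0 * (X.T @ g)
    b -= 2.0 * g.sum()
q = np.clip(1 / (1 + np.exp(-(X @ w + b))), 1e-12, 1 - 1e-12)
xent = -p_x @ (p_spike * np.log2(q) + (1 - p_spike) * np.log2(1 - q))

direct = H_y - xent        # dependence captured by the linear sum
total = H_y - H_noise      # all input dependence, I(y; x)
print(f"direct {direct:.3f} bits, interactions {total - direct:.3f} bits, "
      f"noise {H_noise:.3f} bits (total output entropy {H_y:.3f})")
```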
April 16, 2025 at 1:49 PM
Our understanding of neural computation -- both in the brain and artificial networks -- is founded on an assumption: that neurons fire in response to a linear sum of their inputs.

We systematically test this assumption 👇 arxiv.org/abs/2504.08637
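In symbols, the assumption under test (notation mine) is that a neuron's firing depends on its inputs $x_1, \dots, x_n$ only through a weighted sum,

$$
P(\text{spike} \mid x) \;\approx\; f\Big(\sum_i w_i x_i + b\Big),
$$

for some fixed nonlinearity $f$; the original McCulloch-Pitts neuron is the special case where $f$ is a hard threshold.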
April 16, 2025 at 1:49 PM