William Gilpin
@wgilpin.bsky.social
asst prof at UT Austin physics interested in chaos, fluids, & biophysics.

https://www.wgilpin.com/
This work was inspired by amazing recent work on transients by the dynamical systems community: analogue k-SAT solvers, slowdowns in gradient descent during neural network training, and chimera states in coupled oscillators. (12/N)
June 16, 2025 at 5:30 PM
For the Lotka-Volterra case, optimal coordinates are the right singular vectors of the species interaction matrix. You can experimentally estimate these with O(N) operations using Krylov-style methods: perturb the ecosystem, and see how it reacts. (11/N)
June 16, 2025 at 5:30 PM
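A minimal sketch of the perturb-and-observe idea: power iteration on AᵀA touches A only through matrix-vector products, each one standing in for a perturbation-response experiment on the ecosystem. The random 50×50 matrix here is a toy stand-in, not real interaction data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
A = rng.standard_normal((N, N))  # stand-in for a species interaction matrix

def matvec(v):
    # In an experiment, applying A to v corresponds to perturbing the
    # ecosystem along v and observing the instantaneous response.
    return A @ v

def rmatvec(v):
    return A.T @ v

# Power iteration on A^T A: each step needs only one perturbation-response
# pair in each direction, never forming A^T A explicitly.
v = rng.standard_normal(N)
for _ in range(500):
    v = rmatvec(matvec(v))
    v /= np.linalg.norm(v)

# Compare against the leading right singular vector from a full SVD
_, _, Vt = np.linalg.svd(A)
overlap = abs(v @ Vt[0])
print(f"overlap with true leading singular vector: {overlap:.4f}")
```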
This variation influences how we reduce the dimensionality of biological time series. With non-reciprocal interactions (like predator-prey), PCA won’t always separate timescales. The optimal dimensionality-reducing variables (“ecomodes”) should precondition the linear problem (10/N)
June 16, 2025 at 5:30 PM
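A toy illustration of why nonreciprocal (non-normal) interactions matter here: the eigenvector basis of a non-normal matrix can be nearly degenerate, while the right-singular-vector basis is always orthonormal. The 2×2 matrix below is hypothetical:

```python
import numpy as np

# A nonreciprocal (non-normal) interaction matrix: strong one-way coupling,
# as when a predator affects prey far more than vice versa.
A = np.array([[-1.0, 10.0],
              [ 0.0, -2.0]])

# Eigenvector basis: the two eigenvectors are nearly parallel,
# so the basis is badly conditioned.
eigvals, V = np.linalg.eig(A)
basis_cond = np.linalg.cond(V)

# Right singular vectors: an orthonormal basis (condition number 1).
U, s, Vt = np.linalg.svd(A)
svd_cond = np.linalg.cond(Vt)

print(basis_cond, svd_cond)
```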
As a consequence of ill-conditioning, large ecosystems become excitable: small changes cause huge differences in how they approach equilibrium. Using the FLI, a metric invented by astrophysicists to study planetary orbits, we see caustics indicating variation in the solution path (9/N)
June 16, 2025 at 5:30 PM
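A rough sketch of an FLI computation: the Fast Lyapunov Indicator is the running maximum of the log tangent-vector norm along a trajectory, obtained by integrating the variational equations alongside the flow. The Lorenz system is used here as a generic chaotic stand-in, not the ecosystem model from the paper:

```python
import numpy as np

def lorenz_ext(state, sigma=10.0, rho=28.0, beta=8/3):
    # State = (x, y, z, vx, vy, vz): the flow plus a tangent vector
    # evolved under the Jacobian (variational equations).
    x, y, z, vx, vy, vz = state
    J = np.array([[-sigma, sigma, 0.0],
                  [rho - z, -1.0, -x],
                  [y, x, -beta]])
    dv = J @ np.array([vx, vy, vz])
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z, *dv])

def fli(x0, v0, t_max=20.0, dt=0.01):
    """Fast Lyapunov Indicator: running max of log tangent-vector norm."""
    state = np.concatenate([x0, v0])
    best = -np.inf
    for _ in range(int(t_max / dt)):
        # One RK4 step of the extended (flow + tangent) system
        k1 = lorenz_ext(state)
        k2 = lorenz_ext(state + 0.5*dt*k1)
        k3 = lorenz_ext(state + 0.5*dt*k2)
        k4 = lorenz_ext(state + dt*k3)
        state = state + dt*(k1 + 2*k2 + 2*k3 + k4)/6
        best = max(best, np.log(np.linalg.norm(state[3:])))
    return best

val = fli(np.array([1.0, 1.0, 1.0]), np.array([1.0, 0.0, 0.0]))
print(f"FLI = {val:.2f}")
```

A chaotic orbit gives a large FLI (tangent vectors grow exponentially); sweeping initial conditions and plotting the FLI is what reveals the caustic-like structure.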
How would hard optimization problems arise in nature? I used genetic algorithms to evolve ecosystems towards supporting more biodiversity, and they became more ill-conditioned—and thus more prone to supertransients. (8/N)
June 16, 2025 at 5:30 PM
So ill-conditioning isn’t just something numerical analysts care about. It’s a physical property that measures computational complexity, which translates to super long equilibration times in large biological networks with trophic overlap (7/N)
June 16, 2025 at 5:30 PM
More precisely: the expected equilibration time of a random Lotka-Volterra system scales with the condition number of the species interaction matrix. The scaling matches the expected scaling of the solvers that your computer uses to do linear regression (6/N)
June 16, 2025 at 5:30 PM
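The solver-scaling claim can be sketched with a plain conjugate-gradient loop: on synthetic SPD matrices, the iteration count grows with the condition number, roughly like √κ. This toy is illustrative only, not the paper's experiment:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=10000):
    """Plain conjugate gradient; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

def spd_with_condition(kappa, n=200, seed=0):
    # SPD test matrix with eigenvalues spread geometrically from 1 to kappa
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q @ np.diag(np.geomspace(1.0, kappa, n)) @ Q.T

b = np.ones(200)
_, easy = conjugate_gradient(spd_with_condition(10.0), b)
_, hard = conjugate_gradient(spd_with_condition(1e4), b)
print(easy, hard)  # the ill-conditioned system needs far more iterations
```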
We can think of ecological dynamics as an analogue constraint satisfaction problem. As the problem becomes more ill-conditioned, the ODEs describing the system take longer to “solve” the problem of who survives and who goes extinct (5/N)
June 16, 2025 at 5:30 PM
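A minimal sketch of the "analogue solver" view: at an interior equilibrium of generalized Lotka-Volterra dynamics, dx_i/dt = x_i (r_i - (Ax)_i) = 0 means A x* = r, so integrating the ODE "solves" the linear system. The 2-species parameters below are toy values chosen so the equilibrium is stable:

```python
import numpy as np

# Lotka-Volterra: dx_i/dt = x_i * (r_i - (A x)_i). At an interior
# equilibrium the dynamics have "solved" the linear system A x* = r.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # symmetric positive definite -> stable
x_star = np.array([1.0, 2.0])       # target equilibrium
r = A @ x_star                      # growth rates consistent with x_star

x = np.array([0.1, 3.0])            # initial abundances
dt = 0.01
for _ in range(20000):
    x = x + dt * x * (r - A @ x)    # forward Euler integration

print(x)  # approaches x_star = A^{-1} r
residual = np.linalg.norm(x - np.linalg.solve(A, r))
```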
But is equilibrium even relevant? In high dimensions, stable fixed points might not be reachable in finite time. Supertransients arise when unstable solutions trap dynamics for increasingly long durations. E.g., pipe turbulence is supertransient (laminar flow is globally stable) (4/N)
June 16, 2025 at 5:30 PM
Dynamical systems are linear near fixed points, so May used random matrix theory to show large random ecosystems are usually unstable. The biodiversity we see in the real world requires finer-tuned structure from selection, niches, and the like that recovers stability (3/N)
June 16, 2025 at 5:30 PM
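May's criterion can be checked numerically in a few lines: for a community matrix -I + B with iid entries of standard deviation σ, the circular law puts the rightmost eigenvalue near σ√N - 1, so stability flips at σ√N = 1. A toy check, not May's original calculation:

```python
import numpy as np

def max_real_eigenvalue(sigma, N=200, seed=0):
    """Community matrix -I + B with B ~ iid N(0, sigma^2).
    May's criterion: stable iff sigma * sqrt(N) < 1 (circular law)."""
    rng = np.random.default_rng(seed)
    B = sigma * rng.standard_normal((N, N))
    M = -np.eye(N) + B
    return np.max(np.linalg.eigvals(M).real)

weak = max_real_eigenvalue(0.5 / np.sqrt(200))    # sigma*sqrt(N) = 0.5
strong = max_real_eigenvalue(2.0 / np.sqrt(200))  # sigma*sqrt(N) = 2.0
print(weak, strong)  # weak coupling: stable (< 0); strong: unstable (> 0)
```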
A celebrated result in mathematical biology is Robert May’s “stability vs complexity” tradeoff. In large biological networks, we can’t possibly measure all N^2 interactions among N species, genes, neurons, etc. What is our null hypothesis for their behavior? (2/N)
June 16, 2025 at 5:30 PM
Does stability matter in biology? My article on the cover of this month’s @PLOSCompBiol explores how large ecosystems develop supertransients, a manifestation of computational hardness (1/N)

doi.org/10.1371/jour...
June 16, 2025 at 5:30 PM
The attention architecture allows the model to handle much higher-dimensional inputs at test time than it ever saw during training, so we asked it to forecast two chaotic PDE (a fluid flow and the KS equation). Not bad, given that the model has never seen a PDE before (6/7)
May 22, 2025 at 4:49 AM
We fed the model mixes of pure frequencies & measured its response. The activations lit up in complex patterns, indicating nonlinear resonance & mode-mixing, akin to triad interactions visible in turbulent bispectra. Compare these activations to Arnold webs in N-body chaos (5/7)
May 22, 2025 at 4:49 AM
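The mode-mixing signature is easy to reproduce in a toy model: drive a quadratic nonlinearity with two pure tones and sum- and difference-frequency peaks appear, the same triads a bispectrum detects in turbulence. This is an illustration, not the probing protocol used on the model:

```python
import numpy as np

# Drive a quadratic nonlinearity with two pure tones; the quadratic term
# produces sum- and difference-frequency peaks (triad interactions).
fs = 1000                     # sample rate (Hz)
t = np.arange(0, 10, 1/fs)
f1, f2 = 30.0, 47.0
u = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
y = u + 0.5 * u**2            # weakly nonlinear response

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1/fs)

def power_at(f):
    # Magnitude at the FFT bin nearest frequency f
    return spectrum[np.argmin(np.abs(freqs - f))]

# Peaks at f1+f2 = 77 Hz and f2-f1 = 17 Hz, absent from the linear input;
# 40 Hz is a quiet reference frequency.
print(power_at(f1 + f2), power_at(f2 - f1), power_at(40.0))
```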
We find a scaling law relating performance and the number of chaotic systems. Even if we control for the total number of training timepoints, more pretraining ODEs improve the model.
May 22, 2025 at 4:49 AM
What is the generalization signal that lets a pretrained model handle unseen chaotic systems? After training, attention rollouts show recurrence maps and Toeplitz matrices, suggesting the model learns to implement complex numerical integration strategies to extend the context (5/8)
May 22, 2025 at 4:49 AM
Panda beats pure time-series foundation models at zero-shot forecasting unseen dynamical systems. That means that the model sees a snippet of an unseen chaotic system as context, and autonomously continues the dynamics (no weights are updated) (4/8)
May 22, 2025 at 4:49 AM
We made a novel chaotic systems dataset for pretraining by taking 135 hand-curated chaotic ODE (e.g. Lorenz, Rossler, etc.) and mutating/recombining their ODE, selecting for chaoticity (3/8)
May 22, 2025 at 4:49 AM
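A heavily simplified sketch of "selecting for chaoticity": mutate the parameters of a known chaotic ODE and keep mutants whose largest-Lyapunov estimate improves (Benettin-style two-trajectory method, forward Euler). The actual dataset construction mutates and recombines the equations themselves; this toy only perturbs Lorenz parameters:

```python
import numpy as np

def lorenz(x, p):
    s, r, b = p
    return np.array([s*(x[1] - x[0]), x[0]*(r - x[2]) - x[1], x[0]*x[1] - b*x[2]])

def lyapunov_estimate(p, t_max=20.0, dt=0.01, d0=1e-8):
    """Benettin-style largest-Lyapunov estimate from two nearby trajectories."""
    x = np.array([1.0, 1.0, 1.0])
    y = x + np.array([d0, 0.0, 0.0])
    total, n = 0.0, int(t_max / dt)
    for _ in range(n):
        x = x + dt * lorenz(x, p)    # forward Euler, fine for a toy sketch
        y = y + dt * lorenz(y, p)
        d = np.linalg.norm(y - x)
        if not np.isfinite(d) or d == 0.0:
            return -np.inf           # trajectory blew up: not a usable mutant
        total += np.log(d / d0)
        y = x + (y - x) * (d0 / d)   # renormalize the separation
    return total / (n * dt)

# Toy "evolve for chaoticity" loop: mutate parameters, keep improvements
rng = np.random.default_rng(0)
params = np.array([10.0, 28.0, 8/3])   # start from the classic Lorenz system
best = lyapunov_estimate(params)
for _ in range(20):
    trial = params + 0.5 * rng.standard_normal(3)
    lle = lyapunov_estimate(trial)
    if lle > best:
        params, best = trial, lle
print(best)  # best Lyapunov estimate found (positive = chaotic)
```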
Heroic effort co-led by UT PhD students Jeff Lai & Anthony Bao, who implemented a new channel-attention architecture combining PatchTST, Takens embeddings, & Extended Dynamic Mode Decomp. They trained the whole thing on AMD GPUs! (2/8)
May 22, 2025 at 4:49 AM
We present Panda: a foundation model for nonlinear dynamics pretrained on 20,000 chaotic ODE discovered via evolutionary search. Panda zero-shot forecasts unseen ODE best-in-class, and can forecast PDE despite having never seen them during training (1/8)
arxiv.org/abs/2505.13755
May 22, 2025 at 4:49 AM
The method works pretty well on real-world time series from ecosystems, gene regulatory networks, & turbulence. I also applied it to dynamical systems with interesting recurrence properties (due to unstable periodic orbits), such as coupled oscillators that synchronize (6/n)
January 15, 2025 at 5:41 PM
If we think about recurrences as defining a graph among timepoints, this problem becomes partitioning the graph. We can make an analogy between this process and discovering the trapping regions (like vortices) that inhibit transport in turbulent fluid flows (5/n)
January 15, 2025 at 5:41 PM
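A toy version of the recurrence-graph partition: treat timepoints as nodes, link any two whose states are within ε (a recurrence), then read off connected components. The two-regime signal below is synthetic:

```python
import numpy as np

# Synthetic signal that visits two regimes (stand-in for trapping regions)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 100),   # regime A
                    rng.normal(5.0, 0.1, 100)])  # regime B

eps = 1.0
adjacency = np.abs(x[:, None] - x[None, :]) < eps  # recurrence matrix

# Label connected components of the recurrence graph with a simple BFS
labels = -np.ones(len(x), dtype=int)
current = 0
for i in range(len(x)):
    if labels[i] >= 0:
        continue
    stack = [i]
    labels[i] = current
    while stack:
        j = stack.pop()
        for k in np.flatnonzero(adjacency[j]):
            if labels[k] < 0:
                labels[k] = current
                stack.append(k)
    current += 1

print(current)  # number of components = number of recovered regimes
```

Real data needs a more robust partition (noisy series produce false recurrences that bridge components), which is where the graph-partitioning machinery comes in.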
The problem is that real time series are noisy and incomplete. So they tend to exhibit lots of false recurrences, disrupting the delicate structure of recurrence networks (4/n)
January 15, 2025 at 5:41 PM
In 2004 Tim Sauer showed that when two dynamical systems have nonreciprocal coupling, anytime the downstream child recurs, the parent did too. But the opposite won’t necessarily be true (3/n)
January 15, 2025 at 5:41 PM
How can we pull information about unseen driving forces out of our measurements? Time series often repeat themselves. These repeated motifs provide clues to the existence of a common driving signal (2/n)
January 15, 2025 at 5:41 PM