Maxine 💃🏼
@maxine.science
🔬 Looking at the brain’s “dark matter”
🤯 Studying how minds change
👩🏼‍💻 Building science tools

🦋 ♾️ 👓
🌐 maxine.science
At an extremely abstract nonsense level, bicomodules in Poly directly generalize linear operator theory—hence the connection to quantum mechanics, etc., that we see with “operators” for LLM semantics, properly defined functorially.
January 13, 2026 at 4:39 PM
@gracekind.net this is to me a much better formalism for the “multiverse” link you had www.math3ma.com/blog/languag...
Language, Statistics, & Category Theory, Part 3
Welcome to the final installment of our mini-series on the new preprint
www.math3ma.com
January 13, 2026 at 4:37 PM
This series I always refer people to as a more general introduction to some concepts in the context of LLMs www.math3ma.com/blog/languag...
Language, Statistics, & Category Theory, Part 1
In the previous post I mentioned a new preprint that John Terilla, Yiannis Vlassopoulos, and I recently posted on the arXiv. In it, we ask a question motivated by the recent successes of the world's b...
www.math3ma.com
January 13, 2026 at 4:35 PM
I like this one! I got to have wonderfully deep conversations with Nelson and especially David over the years, and I think this was a very good stepping stone toward something practical for a computer implementation. github.com/ToposInstitu...
GitHub - ToposInstitute/poly
Contribute to ToposInstitute/poly development by creating an account on GitHub.
github.com
January 13, 2026 at 4:34 PM
To me this underlies a tremendous amount of the stagnation in the field: you can demonstrate X a million different ways if, every time, you are actually describing your own personal X. My experience over ~15 years in various fields of brain/mind science is that we almost entirely talk past each other.
January 13, 2026 at 4:31 PM
All of the above, in my view! I would be hard pressed to find a term of art about aspects of mind/cognition/etc. (that is, descriptors that aren’t at the level of structural features of the biology, chemistry, physics, etc.) that has any of 1–4 adequately in a common language. +
January 13, 2026 at 4:31 PM
My contentions are:

A. we presently lack all of these, and
B. applied CT as a mathematical framework supplies all of these, where X is an aspect of experiential structure.
January 13, 2026 at 2:46 PM
For sure! To do that in the process of science, we need

1. a shared understanding of what X is,
2. a shared understanding of what does or does not constitute empirical evidence of X,
3. a way to make predictions of Y entailed by X, and
4. a way to communicate that unambiguously. +
January 13, 2026 at 2:46 PM
3 year prediction:

Structuralism’s natural mathematical formalism—applied category theory—is completely mainstream and ubiquitous.
January 13, 2026 at 2:36 PM
Structuralism is the only answer to the question of “Why can AI do X?” for all X. It’s because these algorithms converge on an internal image of the structure of X, up to problem-suitable weak equivalence (that is, the Platonic representation hypothesis). +
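One way to probe that convergence claim empirically (a minimal sketch, not from the post; the toy data and the `linear_cka` helper are illustrative) is linear centered kernel alignment (CKA), a standard similarity measure between two models’ representations: representations that carry the same structure up to rotation score 1, while unrelated representations score near 0.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two feature matrices (rows = stimuli)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Xc.T @ Yc, ord="fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, ord="fro") * np.linalg.norm(Yc.T @ Yc, ord="fro")
    return float(num / den)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 10))            # the shared "structure of X"
rotation, _ = np.linalg.qr(rng.normal(size=(10, 10)))

model_a = latent                               # one model's representation
model_b = latent @ rotation                    # same structure, different basis
unrelated = rng.normal(size=(200, 10))         # a model with no shared structure

cka_same = linear_cka(model_a, model_b)        # ≈ 1.0: identical up to rotation
cka_diff = linear_cka(model_a, unrelated)      # near 0: nothing shared
```

Linear CKA is invariant to orthogonal changes of basis, which is exactly the kind of “weak equivalence” under which two models can be said to have learned the same structure.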
January 13, 2026 at 2:36 PM
Literally this morning!—
The “structural turn” in the study of consciousness (i.e., setting aside a couple key philosophical roadblocks to acknowledge common relational features of experience) enables a totally new mathematical formalism—applied category theory—for formalizing observables. +
January 13, 2026 at 2:32 PM
In previous paradigmatic moments in science, the emergence of a shared mathematical formalism has enabled compact, precise communication, serving as the basis for coordinated investigation.

I am extremely bullish on the ability of applied CT to serve as that formalism and drive the new paradigm.
January 13, 2026 at 1:57 PM
The “structural turn” in the study of consciousness (i.e., setting aside a couple key philosophical roadblocks to acknowledge common relational features of experience) enables a totally new mathematical formalism—applied category theory—for formalizing observables. +
January 13, 2026 at 1:57 PM
After watching, it reminds me of how pivotal I think the work of @johanneskleiner.bsky.social and others from the contemporary mathematical consciousness science field is going to be, particularly for framing a new empirical framework around subjective experience. +
January 13, 2026 at 1:57 PM
and then never ever set foot near an MRI again.
January 13, 2026 at 1:33 PM
^^ I work across the room from the Simplex folks—they’re expanding the empirics of the fractal structure learned implicitly in transformer models.
January 13, 2026 at 1:29 PM
ah grok’s down.

the answer I wanted is “yeh”.
January 13, 2026 at 1:27 PM
That is:

“I am the thing that could learn how to predict the future of the thing shaped like me in a world shaped like what would make me-shaped things.”
January 13, 2026 at 1:23 PM
It has to be, for the system to persist (i.e., be more common, by Darwinism): self-world prediction is an evolutionarily dominant strategy, in a game-theoretic sense.
January 13, 2026 at 1:22 PM
The trick for embodied systems is there is a natural choice of boundary (“me”), which is precisely the one that is suited to regularizing the learning problem via a “SSM”. +
January 13, 2026 at 1:22 PM
Right: a stateful operator

T(world, state)

is just a stateless operator

T( (world ⊕ state) )

if you change the reference frame of the world-self boundary.
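A minimal Python sketch of that reframing (the toy dynamics and the names `stateful_step`/`stateless_step` are illustrative, not from the post): bundling the state into the input turns a stateful operator into a stateless one on the product space, and the two rollouts produce identical outputs.

```python
def stateful_step(world: float, state: float) -> tuple[float, float]:
    """T(world, state): output plus an updated carried state (toy dynamics)."""
    output = world + state
    new_state = 0.5 * state + world
    return output, new_state

def stateless_step(combined: tuple[float, float]) -> tuple[float, float]:
    """T(world ⊕ state): the same operator, with the boundary moved so the
    state is just another coordinate of the input."""
    world, state = combined
    output = world + state
    new_state = 0.5 * state + world
    return output, new_state

worlds = [1.0, 2.0, 3.0]

# Stateful rollout: the operator carries the state internally.
state, outs_stateful = 0.0, []
for w in worlds:
    o, state = stateful_step(w, state)
    outs_stateful.append(o)

# Stateless rollout: the environment carries the state across the boundary.
state, outs_stateless = 0.0, []
for w in worlds:
    o, state = stateless_step((w, state))
    outs_stateless.append(o)

assert outs_stateful == outs_stateless == [1.0, 3.0, 5.5]
```

The only thing that changed between the two versions is which side of the world-self boundary the state lives on, which is the point of the post.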
January 13, 2026 at 1:17 PM
But I’m not surprised that if you throw enough compute at the larger class you will do a crappy-but-impressive job at searching, cause you can reframe one problem into the other class.
January 13, 2026 at 1:12 PM
I mean, fwiw, I’m long SSMs because I do conjecture they factorize the search problem in a useful way when you have a prior on how much structure your system has (again a consequence of the work in the referenced paper, imo). +
January 13, 2026 at 1:12 PM
The thing that’s far more interesting here (the subject of the paper) is that self-prediction implies a massive structural constraint on an operator.
January 13, 2026 at 1:01 PM