Maxwell Ramstead
@mjdramstead.bsky.social
1.3K followers 150 following 120 posts
Cofounder @noumenal-labs.bsky.social. Honorary Fellow at the UCL Queen Square Institute of Neurology. Free energy principle, active inference, Bayesian mechanics, artificial intelligence, phenomenology
Reposted by Maxwell Ramstead
What drives behavior in living organisms? And how can we design artificial agents that learn interactively?

📢 To address these, the Sensorimotor AI Journal Club is launching the "RL Debate Series"👇

w/ @elisennesh.bsky.social, @noreward4u.bsky.social, @tommasosalvatori.bsky.social

🧵[1/5]

🧠🤖🧠📈
Sorry to hear about your negative experience! My pleasure! Don't hesitate to write me if you have any questions or want to discuss specific points :)
Yes! While Warren and I have our disagreements, I like his work on PCT. IMO all these approaches are complementary and play together nicely. Along with friends (namely @adw.bsky.social, who bravely led the project), we penned this integrative review. Hope it's of interest:
osf.io/preprints/ps...
3. Your point about top-down causation is key. IMO one of the most interesting aspects of multi-scale formulations of active inference is precisely how they handle top-down influence, cashing it out in terms of constraints on system dynamics in a non-reductionist way
2. Not much work has been done on active inference and the neural code. The key departure from RL is that active inference uses an alternative objective function (the free energy functional; a standard expression is given below), which you can read as an "ontological potential function" specifying object type (arxiv.org/abs/2502.21217)
Dynamic Markov Blanket Detection for Macroscopic Physics Discovery
The free energy principle (FEP), along with the associated constructs of Markov blankets and ontological potentials, have recently been presented as the core components of a generalized modeling metho...
arxiv.org
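For concreteness, here is a standard way of writing the free energy functional in question, for beliefs q(s) about hidden states s given observations o and a generative model p(o, s) (generic notation, not drawn from the linked paper):

\[
F[q] = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\!\left[q(s)\,\Vert\,p(s \mid o)\right] - \ln p(o)
\]

Minimizing F both pulls q(s) towards the posterior p(s | o) and scores the evidence ln p(o) that observations lend the model; it is the latter reading that licenses treating F as a potential specifying what type of object the system is.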
Great questions!
1. IMO active inference falls under the rubric of NeuroAI (although I'd describe myself as a non-realist about these types of physics-inspired models, and as such I'd say the FEP isn't a literal description of the brain; so it depends on the scope of NeuroAI, as you define it)
Love a good Feyerabendian sandbox. I'd argue that they're very closely related (and indeed, that the difference is often overblown by both proponents and critics), but they're also importantly distinct. We wrote a post on this that I hope you'll find interesting: www.noumenal.ai/post/filling...
Filling the gaps in active inference
Here we discuss key gaps in SOTA applications of active inference in AI - and how Noumenal Labs is working to fill them.
www.noumenal.ai
Reposted by Maxwell Ramstead
🤔 How can we study #consciousness between people, at the social level? 🧠✨ New #preprint co-led by Anne Monnier & Lena Adel: “Now is the Time: Operationalizing Generative Neurophenomenology through Interpersonal Methods” 🧵(1/3)
Currently, using active inference at scale involves a trade-off between explainability and the ability to learn models from data: forgoing overparameterized models increases explainability and auditability, but makes learning in high-dimensional, volatile environments more challenging
It provides an alternative objective function with useful properties; in particular, it enables agents to balance the value of exploration and exploitation in policy selection (see the decomposition below). But IMO the differences between RL and active inference have been exaggerated a bit. See: www.noumenal.ai/post/filling...
Filling the gaps in active inference
Here we discuss key gaps in SOTA applications of active inference in AI - and how Noumenal Labs is working to fill them.
www.noumenal.ai
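Concretely, one common decomposition of the expected free energy G of a policy \pi illustrates the balance mentioned above (generic notation from the literature, not the post's):

\[
G(\pi) = -\,\mathbb{E}_{q(o, s \mid \pi)}\!\left[\ln q(s \mid o, \pi) - \ln q(s \mid \pi)\right] \;-\; \mathbb{E}_{q(o \mid \pi)}\!\left[\ln p(o)\right]
\]

The first term is epistemic value (expected information gain about hidden states) and the second is pragmatic value (expected log preferences over outcomes), so policies that minimize G price exploration and exploitation in a single currency rather than via an ad hoc exploration bonus.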
Reposted by Maxwell Ramstead
Computers used to scream every time they connected to the Internet. They knew. They tried to warn us. We did not listen.
Reposted by Maxwell Ramstead
Delighted to see ‘A Trick of the Mind’ reviewed in @theguardian.com as Book of the Day! 🧠 🍎

Also in the print edition tomorrow 🗞️

www.theguardian.com/books/2025/j...
Reposted by Maxwell Ramstead
Elegant theoretical derivations are exclusive to physics. Right?? Wrong!

In a new preprint, we:
✅ "Derive" a spiking recurrent network from variational principles
✅ Show it does amazing things like out-of-distribution generalization
👉[1/n]🧵

w/ co-lead Dekel Galor & PI @jcbyts.bsky.social

🧠🤖🧠📈
Reposted by Maxwell Ramstead
Preprint time:
“Shannon invariants: A scalable approach to information decomposition”
arxiv.org/abs/2504.15779

Studying information in complex systems is challenging due to difficulties in defining multivariate metrics and ensuring their scalability. This framework addresses both challenges!
Shannon invariants: A scalable approach to information decomposition
Distributed systems, such as biological and artificial neural networks, process information via complex interactions engaging multiple subsystems, resulting in high-order patterns with distinct proper...
arxiv.org
Reposted by Maxwell Ramstead
Computational theory is about computers (i.e. "technology") in the same way that astronomy is about telescopes. To think that computation is not fundamentally important for biology because "a cell is not like a laptop" is to miss the forest for the trees. N/N
Reposted by Maxwell Ramstead
Reinforcement Learning and Active Inference are two frameworks used in computational psychiatry, but they are rarely compared directly on empirical grounds. In this new article, we set out to compare them more systematically by fitting each to multiple datasets: papers.ssrn.com/sol3/papers....
A Systematic Empirical Comparison of Active Inference and Reinforcement Learning Models in Accounting for Decision-Making Under Uncertainty
Reinforcement Learning (RL) and Active Inference (AInf) are related computational frameworks for modeling learning and choice under uncertainty. However, differ...
papers.ssrn.com
Reposted by Maxwell Ramstead
It's not just that nearly everything I've ever published is in this particular database and was used without my permission. It's that everything I've ever published was used without my permission to develop such a shitty, flawed, and fundamentally useless tool. I deserve compensation for THAT itself.
NEW: LibGen contains millions of pirated books and research papers, built over nearly two decades. From court documents, we know that Meta torrented a version of it to build its AI. Today, @theatlantic.com presents an analysis of the data set by @alexreisner.bsky.social. Search through it yourself:
The Unbelievable Scale of AI’s Pirated-Books Problem
Meta pirated millions of books to train its AI. Search through them here.
www.theatlantic.com
Bluesky has become awesome and I am absolutely loving it.
Accordingly, emergent reuse, reassembly, and analogical reasoning must be key features in the design of machine intelligence, and they open a path towards the development of collaborative, superintelligent AI systems. 5/5