Manuel Baltieri
@manuelbaltieri.bsky.social
Chief Researcher at Araya, Tokyo. #ALife, #AI, embodied and enactive #cognition. Information, control and applied category theory for cognitive science.

https://manuelbaltieri.com/
We then show how this yields a Bayesian filtering interpretation for a reasoner: a controller modelling its environment can be understood as performing Bayesian filtering on it.
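(For concreteness, the classical non-diagrammatic filtering recursion this refers to, in my notation rather than the paper's string-diagrammatic one: with hidden states $x_t$, observations $y_t$, transition kernel $p(x_t \mid x_{t-1})$ and likelihood $p(y_t \mid x_t)$,
predict: $p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1}$,
update: $p(x_t \mid y_{1:t}) \propto p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})$.)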

14/
March 5, 2025 at 2:31 AM
Firstly, we show that the definition of a model between two autonomous systems can be “reversed” to build a “possibilistic” version of the internal model principle.

13/
March 5, 2025 at 2:31 AM
After a reasonably self-contained overview of string diagrams for Markov categories, and some definitions including Bayesian inference/filtering and their parametrised and conjugate-prior versions, we dive into the main result, showing two main things.
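(A concrete, discrete and non-diagrammatic version of the Bayesian inversion condition at play, again in my notation: given a prior $\pi(x)$ and a channel $f(y \mid x)$, a Bayesian inverse is a channel $f^\dagger_\pi(x \mid y)$ making the two joints agree, $f^\dagger_\pi(x \mid y)\, p(y) = f(y \mid x)\, \pi(x)$ for all $x, y$, where $p(y) = \sum_x f(y \mid x)\, \pi(x)$.)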

12/
March 5, 2025 at 2:31 AM
Our focus here is mostly technical and has to do almost entirely with control theory, but considering where the conversation started on the other platform, I hope this will also have an impact in the cognitive and life sciences.

10/
March 5, 2025 at 2:31 AM
The internal model principle is arguably one of the most influential outputs of control theory, claiming, at its core, that if a controller regulates a plant against disturbances from the environment, it does so by implementing a model of the environment.
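(The textbook instance, in my words rather than the paper's: in the classical Francis–Wonham setting, robustly rejecting disturbances generated by an exosystem $\dot w = S w$ with zero steady-state error requires the controller to incorporate a copy of $S$; e.g. rejecting constant disturbances, $\dot w = 0$, requires an integrator in the loop, which is precisely a copy of that generator.)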

8/
March 5, 2025 at 2:31 AM
We define “models” for non-autonomous (fully observable) systems, generalising the original definition for autonomous systems (though we focus on the latter). We think of this as generalising aspects of lumpability, state aggregation, coarse-grainings, dynamical consistency, etc.
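(A minimal non-categorical sketch of the kind of condition meant here, in my notation: a surjection $q : X \to Y$ exhibits $(Y, g)$ as a model/coarse-graining of a system $(X, f)$ when $q \circ f = g \circ q$, so that trajectories in $X$ project to trajectories in $Y$; lumpability and state aggregation are stochastic analogues of this commuting condition.)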

7/
March 5, 2025 at 2:31 AM
In the first part, we review and reformulate the “internal model principle” from control theory (at least, one of its versions) in a more modern language heavily inspired by categorical systems theory (www.davidjaz.com/Papers/Dynam..., github.com/mattecapu/ca...).

5/
March 5, 2025 at 2:31 AM
This looks fantastic, and goes on the must-read pile for 2025
February 25, 2025 at 11:43 PM
This looks 🔥🔥
January 31, 2025 at 7:49 AM
All #LLMs are wrong.
Some more than others: according to Gemini, eigenvalues with a positive real part imply stability. #mathsky
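(For the record: a linear system $\dot x = A x$ is asymptotically stable iff every eigenvalue of $A$ has negative real part. Scalar counterexample to Gemini's claim: $\dot x = \lambda x$ with $\lambda > 0$ gives $x(t) = x_0 e^{\lambda t}$, which blows up for any $x_0 \neq 0$.)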
December 10, 2024 at 5:52 AM
Me: “Wouldn’t it be nice to know what it means for two processes to be equal?”

“Don’t open that door” screams anyone who’s ever looked into the semantics of process theories…
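(The standard cautionary example, in CCS-style notation of my choosing: $a.(b + c)$ and $a.b + a.c$ have exactly the same traces, yet they are not bisimilar, since after performing $a$ the first can still choose between $b$ and $c$ while the second has already committed; whether they count as “equal” depends entirely on which semantics you adopt.)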

#compsky #mathsky
December 4, 2024 at 2:37 PM
I know what we need next: another neuro-brain-something starter pack with the same names for a bit of visibility, and another ml-nonsense-but-I-think-it’s-funny one with my best buddies, because I don’t like starter packs

#neuroskyence #mlsky
November 24, 2024 at 1:47 AM
To accompany all those Bluesky science/academia #starterpacks, let's share introductions to the literature of our different fields.

Here are the 12 main types of scientific papers on active inference (inspired by the amazing @xkcd.com). #activeinference #freeenergyprinciple #neuroskyence
November 17, 2024 at 11:18 PM
Sources and integration are discussed by examining work on reinforcement learning (RL) more closely, with the goal of drawing a comparison between artificial and natural agents in causal tasks.

11/
November 12, 2024 at 7:29 AM
e.g. identifying key causal features and/or causal relationships among objects, like the fact that the shape of a ball makes a difference to whether it bounces, but its colour does not.
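A throwaway Python sketch of that shape-vs-colour point as a toy structural causal model with interventions (my illustration, not code from the paper):

import random

def sample_ball(shape=None, colour=None):
    """Sample one ball; fixing shape/colour mimics an intervention do(...)."""
    shape = shape if shape is not None else random.choice(["sphere", "cube"])
    colour = colour if colour is not None else random.choice(["red", "blue"])
    p_bounce = 0.9 if shape == "sphere" else 0.1  # shape is a cause of bouncing
    return random.random() < p_bounce             # colour never enters the mechanism

def p_bounce_under(**do):
    # Estimate P(bounce) under an intervention by Monte Carlo sampling.
    return sum(sample_ball(**do) for _ in range(10_000)) / 10_000

print(p_bounce_under(shape="sphere"))  # ~0.9: intervening on shape changes the outcome
print(p_bounce_under(shape="cube"))    # ~0.1
print(p_bounce_under(colour="red"))    # ~0.5: intervening on colour leaves it unchanged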

9/
November 12, 2024 at 7:29 AM
causal agents can be placed on an explicitness spectrum depending on how much environmental causal structure they can acquire, ...

8/
November 12, 2024 at 7:29 AM
By proposing a distinction between weak and strong disentanglement approaches, ...

7/
November 12, 2024 at 7:29 AM
The paper proposes a computational framework for causal cognition in natural and artificial agents, drawing from recent work in causal machine learning (in part based on developments of Markov categories in applied category theory) and reinforcement learning.

#machinelearning #MLsky

2/
November 12, 2024 at 7:29 AM
Am I misunderstanding this part maybe?
September 19, 2023 at 1:05 AM