Joachim W Pedersen
@joachimwpedersen.bsky.social
620 followers 460 following 27 posts
Bio-inspired AI, meta-learning, evolution, self-organization, developmental algorithms, and structural flexibility. Postdoc @ ITU of Copenhagen. https://scholar.google.com/citations?user=QVN3iv8AAAAJ&hl=en
Pinned
In deep learning research, we often categorize meta-learning approaches as either gradient-based or black-box meta-learning. In my PhD thesis, I argued that it can sometimes be useful to classify approaches based on how the outer-loop optimization affects the inner-loop optimization.
Looking forward to this!
We're excited to announce the final program of the @alife2025.bsky.social SONI session
which will host a panel discussion with @blaiseaguera.bsky.social, @risi.bsky.social, @emilydolson.bsky.social & Sidney Pontes-Filho

Check out the full program: sites.google.com/view/soni-al...

See you in Kyoto ⛩️
Reposted by Joachim W Pedersen
Introducing The Darwin Gödel Machine

sakana.ai/dgm

The Darwin Gödel Machine is a self-improving agent that can modify its own code. Inspired by evolution, we maintain an expanding lineage of agent variants, allowing for open-ended exploration of the vast design space of such self-improving agents.
Reposted by Joachim W Pedersen
“Continuous Thought Machines”

Blog → sakana.ai/ctm

Modern AI is powerful, but it's still distinct from human-like flexible intelligence. We believe neural timing is key. Our Continuous Thought Machine is built from the ground up to use neural dynamics as a powerful representation for intelligence.
New submission deadline: April 2nd!
So there is still some time to put together interesting thoughts on Evolving Self-Organisation!

Also: We are very fortunate to have the great Risto Miikkulainen as the keynote speaker at the workshop!

Can't wait to see you all there! 🤩🙌
#Evolution #Gecco #ALife
Join us for the Evolving Self-Organisation workshop at #GECCO this year! Great chance to submit your favourite ideas concerning self-organisation processes and evolution, and how they interact.
Relevant for Alifers #ALife and anyone interested in #evolution, #self-organisation, and #ComplexSystems.
We're excited to announce the first Evolving Self-organisation workshop at GECCO 2025!

Submission deadline: March 26, 2025

More information: evolving-self-organisation-workshop.github.io
Very satisfying to see one's code run on actual real-world robots and not just in simulation.
Check out the paper here:
arxiv.org/pdf/2503.12406
www.youtube.com/watch?v=jnoa...
Bio-Inspired Plastic Neural Nets that continually adapt their own synaptic strengths can make for extremely robust locomotion policies!
Trained exclusively in simulation, the plastic networks transfer easily to the real world, even under a variety of additional out-of-distribution (OOD) situations. (A generic plasticity-rule sketch follows below.)
[IROS25] Bio-Inspired Plastic Neural Nets for Zero-Shot Out-of-Distribution Generalization in Robots
YouTube video by Worasuchad Haomachai
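To illustrate the general idea of a plastic network, here is a minimal sketch of a layer whose weights keep changing online via a Hebbian-style ABCD rule, a common formulation in the plastic-networks literature. This is an illustrative assumption, not the specific rule or architecture from the IROS25 paper, and all names (PlasticLayer, the coefficients A-D) are hypothetical.

# Generic sketch of a plastic layer: weights are updated online by a
# Hebbian-style ABCD rule. Not the paper's method; names are illustrative.
import numpy as np

class PlasticLayer:
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic strengths
        # Plasticity coefficients; in practice these would be meta-optimized
        # (e.g., evolved) in simulation.
        self.A, self.B, self.C, self.D, self.eta = 0.1, 0.01, 0.01, 0.0, 0.05

    def forward(self, x):
        y = np.tanh(self.W @ x)
        # Hebbian-style online update from pre- and post-synaptic activity.
        dW = self.A * np.outer(y, x) + self.B * x + self.C * y[:, None] + self.D
        self.W += self.eta * dW
        return y

# The layer keeps adapting its own weights at every forward pass,
# even after training/evolution has finished.
layer = PlasticLayer(n_in=4, n_out=2)
obs = np.ones(4)
print(layer.forward(obs))

The key point is that the weight update runs on every forward pass, so adaptation continues after deployment.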
Remember that 4-page submissions of early results are also welcome!

Also, does anyone know if #GECCO has an official 🦋 account? I cannot seem to find it...
Both 4-pagers of early research as well as 8-page papers with more substantial results are welcome!
Very cool! And great aesthetics as well 🙌 😊
Reposted by Joachim W Pedersen
Ever wish you could coordinate thousands of units in games such as StarCraft through natural language alone?

We are excited to present our HIVE approach, a framework and benchmark for LLM-driven multi-agent control.
With all the research coming from Sakana AI, this figure needs to be updated fast! direct.mit.edu/isal/proceed...

#LLM #ALife #ArtificialIntelligence
Reposted by Joachim W Pedersen
Transformer²: Self-adaptive LLMs

arxiv.org/abs/2501.06252

Check out the new paper from Sakana AI (@sakanaai.bsky.social). We show the power of an LLM that can self-adapt its weights to its environment!
Reposted by Joachim W Pedersen
We have put together a starter pack of researchers and representatives from ITU on Bluesky. Meet them here 👇
go.bsky.app/E8WJwXS
Reposted by Joachim W Pedersen
Can Dynamic Neural Networks boost Computer Vision and Sensor Fusion?
We are very happy to share this awesome collection of papers on the topic!
Reposted by Joachim W Pedersen
If microchip ~= silicon
then AGI ~= huge pile of sand
Reposted by Joachim W Pedersen
Neural Attention Memory Models are evolved to optimize the performance of Transformers by actively pruning the KV cache memory. Surprisingly, we find that NAMMs are able to zero-shot transfer their performance gains across architectures, input modalities and even task domains! arxiv.org/abs/2410.13166
An Evolved Universal Transformer Memory

sakana.ai/namm/

Introducing Neural Attention Memory Models (NAMMs), a new kind of neural memory system for Transformers that not only boosts their performance and efficiency but is also transferable to other foundation models without any additional training!
3) Optimizer optimization: Think hyperparameter tuning, e.g., of the learning rate. The search within the inner-loop is altered.

We use meta-learning to achieve improved inner-loop optimization, so it is well worth considering exactly how our double-loop setup achieves this! (A toy sketch of this category follows below.)
#meta-learning #deeplearning #ai
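As a toy illustration of category 3 (not from the thesis), the outer loop below only searches over an inner-loop hyperparameter, the learning rate, while the inner-loop task, loss, and starting point stay fixed; the quadratic task and the name inner_loop_sgd are assumptions made for this sketch.

# Toy sketch of "optimizer optimization": the outer loop tunes an
# inner-loop hyperparameter (the learning rate). Illustrative only.
import numpy as np

def inner_loop_sgd(lr, steps=20, seed=0):
    # Short inner-loop optimization of a fixed quadratic loss with plain SGD.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=2)                # inner-loop parameters
    target = np.array([3.0, -1.0])
    for _ in range(steps):
        grad = 2 * (w - target)           # gradient of ||w - target||^2
        w = w - lr * grad
    return np.sum((w - target) ** 2)      # final inner-loop loss

# Outer loop: evaluate candidate learning rates and keep the best one.
candidates = [0.001, 0.01, 0.1, 0.5]
best_lr = min(candidates, key=inner_loop_sgd)
print("best inner-loop learning rate:", best_lr)

Only how the inner-loop search moves through its (unchanged) loss landscape is affected.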
1) Starting point optimization: Think MAML. Move the initial point of the inner-loop search to a better place, so that learning is quick.
2) Loss landscape optimization: Think neural architecture search. The loss landscape(s) of the inner-loop are transformed.
This can be thought of independently of which optimizer is being used in the inner-loop.
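For contrast, here is a minimal first-order, MAML-style sketch of category 1 on a scalar toy problem: the inner-loop optimizer and loss are untouched, and the outer loop only moves the shared initialization. The task distribution and all variable names are illustrative assumptions, not the thesis' setup.

# Toy first-order MAML-style sketch: the outer loop learns a good
# starting point w0 for a one-step inner loop. Illustrative only.
import numpy as np

def grad(w, offset):
    return 2 * (w - offset)               # gradient of the per-task loss (w - offset)^2

inner_lr, outer_lr = 0.1, 0.01
w0 = 0.0                                   # meta-learned starting point
rng = np.random.default_rng(0)

for _ in range(500):                       # outer loop over sampled tasks
    offset = rng.uniform(-2.0, 2.0)        # sample a task
    w_adapted = w0 - inner_lr * grad(w0, offset)   # one inner-loop step
    # First-order meta-gradient: move w0 to lower the post-adaptation loss.
    w0 = w0 - outer_lr * grad(w_adapted, offset)

print("meta-learned initialization:", w0)

After meta-training, w0 sits where a single inner-loop gradient step does well across the sampled tasks.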
In any meta-learning approach, the outer-loop optimization will transform the inner-loop optimization process in at least one of three ways, and often in a combination of these three.
In deep learning research, we often categorize meta-learning approaches as either gradient-based or black-box meta-learning. In my PhD thesis, I argued that it can sometimes be useful to classify approaches based on how the outer-loop optimization affects the inner-loop optimization.
Reposted by Joachim W Pedersen
Like 130,000 others, I made a starter pack. This one is for people working on or with evolutionary computation in its many forms: genetic algorithms, genetic programming, evolution strategies.

If you would like to be added, or want to suggest someone else, message me or reply to this post.