Dane Carnegie Malenfant
@dvnxmvlhdf5.bsky.social
550 followers 460 following 80 posts
MSc. @mila-quebec.bsky.social and @mcgill.ca in the LiNC lab Fixating on multi-agent RL, Neuro-AI and decisions Ēka ē-akimiht https://danemalenfant.com/
Pinned
dvnxmvlhdf5.bsky.social
I am presenting this work at the @cocomarl-workshop.bsky.social, part of @rl-conference.bsky.social, on Tuesday (: I also have a generalized correction term for n arbitrary agents (it is like walking a tree for the order of gradients) for which I am looking for thoughts, validation, or critiques.
dvnxmvlhdf5.bsky.social
Preprint Alert 🚀

Multi-agent reinforcement learning (MARL) often assumes that agents know when other agents cooperate with them. But for humans, this isn’t always the case. For example, Plains Indigenous groups used to leave resources for others to use at effigies called Manitokan.
1/8
Manitokan are images set up where one can bring a gift or receive a gift. 1930s Rocky Boy Reservation, Montana, Montana State University photograph. Colourized with AI
Reposted by Dane Carnegie Malenfant
ent3c.bsky.social
In 2018, Charles Murray challenged me to a bet: "We will understand IQ genetically—I think most of the picture will have been filled in by 2025—there will still be blanks—but we’ll know basically what’s going on." It's now 2025, and I claim a win. I write about it in The Atlantic.
Your Genes Are Simply Not Enough to Explain How Smart You Are
Seven years ago, I took a bet with Charles Murray about whether we’d basically understand the genetics of intelligence by now.
www.theatlantic.com
Reposted by Dane Carnegie Malenfant
napaaqtuk.bsky.social
I am seeing a lot of people reposting Lakota Man today bc it's Indigenous Peoples Day. A reminder that he is not well liked amongst many (most?) Natives on social media. Many of us blocked him a long time ago. Some of the reasons why are in this article.

www.dailydot.com/irl/lakotama...
Who is LakotaMan, the user behind one of the most popular Native American accounts on X?
John Martin is adored by white X users—but infamous among Native and Indigenous communities.
www.dailydot.com
Reposted by Dane Carnegie Malenfant
saskajanet.bsky.social
Hard to believe I’m watching snow fall now after a day like yesterday. Beautiful walk full of amazing nature encounters in the oldest migratory bird sanctuary in North America. Saw a few snow geese 😂. I’ll share more after I get through the photos! 🌿 #birds #prairie
Large buffalo rubbing stone in the foreground of a wide landscape shot of prairie and blue sky. Thousands of snow geese on a prairie lake. Tall metal cut-out sign in front of a wooden rail fence with brown prairie and blue sky behind. Sign marks the head of the Grasslands Nature Trail in Last Mountain Lake National Wildlife Area. Brown grass stretches to the horizon under a blue sky with wispy clouds. There is a fence on the horizon.
dvnxmvlhdf5.bsky.social
Here is my plan to make Bluesky more fun and active:
Cat
Reposted by Dane Carnegie Malenfant
dvnxmvlhdf5.bsky.social
7/8 The takeaway for the public: training choices like entropy regularization can make systems more robust, meaning fewer restarts and less costly retraining when the world shifts. Your learning systems become more durable and efficient.
dvnxmvlhdf5.bsky.social
6/8 To make it more visually fun, I teamed up with the Société des arts technologiques sat.qc.ca to create an experience: using the open-source Ossia Score, we rendered particle clouds, audio, and 3D transforms in real time while the agents learned. ossia.io
dvnxmvlhdf5.bsky.social
5/8
Both agents must unlearn and relocate the reward peak. The entropy-max agent stays a bit uncertain and keeps exploring, so it detects the shift faster and adapts sooner.
dvnxmvlhdf5.bsky.social
4/8
To communicate this to a general audience and the #art community, I built a minimal task: two Gaussian bandits. One agent optimizes with entropy; the other doesn’t. Mid-training, the reward distribution jumps.
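A minimal sketch of that two-bandit setup (my own reconstruction, not the project's actual code; the arm means, noise scale, learning rate, and entropy coefficient are all illustrative choices):

```python
# Two-armed Gaussian bandit with a mid-training reward shift.
# One softmax (policy-gradient) agent runs plain REINFORCE; the
# other adds an entropy bonus that keeps its policy stochastic.
import math
import random

def policy_entropy(probs):
    """Shannon entropy of an action distribution."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

def run(entropy_coef, steps=4000, shift_at=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0, 0.0]            # softmax preferences over the two arms
    means = [1.0, 0.0]            # arm 0 starts out better
    post_shift_correct = 0
    for t in range(steps):
        if t == shift_at:
            means = [0.0, 1.0]    # the reward peak jumps to arm 1
        # Softmax policy over the preferences.
        m = max(prefs)
        exps = [math.exp(p - m) for p in prefs]
        probs = [e / sum(exps) for e in exps]
        a = 0 if rng.random() < probs[0] else 1
        r = rng.gauss(means[a], 0.5)
        # Gradient of H(pi) with respect to each preference.
        h = policy_entropy(probs)
        ent_grad = [-probs[i] * (math.log(probs[i] + 1e-12) + h)
                    for i in range(2)]
        # REINFORCE update plus the entropy bonus.
        for i in range(2):
            ind = 1.0 if i == a else 0.0
            prefs[i] += lr * (r * (ind - probs[i]) + entropy_coef * ent_grad[i])
        if t >= shift_at and a == 1:
            post_shift_correct += 1
    # Fraction of post-shift pulls on the new best arm.
    return post_shift_correct / (steps - shift_at)

greedy = run(entropy_coef=0.0)   # collapses onto arm 0, slow to adapt
entmax = run(entropy_coef=0.5)   # stays stochastic, tracks the shift
```

The entropy-regularized agent keeps meaningful probability on the worse arm, so after the shift it notices the new peak quickly, while the greedy agent has committed and barely samples it.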
dvnxmvlhdf5.bsky.social
3/8
By training systems this way, agents should handle non-stationary changes better. Yet outside research circles, “AI” means only LLMs or generative models; RL, by contrast, remains largely unknown to the public.
dvnxmvlhdf5.bsky.social
2/8
I proposed a reinforcement-learning (RL) demo: add a maximum-entropy term to increase the longevity of systems in a non-stationary environment. This is well known to the RL research community: openreview.net/forum?id=PtS...
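For context, the maximum-entropy objective being referenced is usually written as follows (standard notation from the max-entropy RL literature, not taken from the linked paper):

```latex
J(\pi) \;=\; \mathbb{E}_{\tau \sim \pi}\!\left[ \sum_{t} \gamma^{t} \Big( r(s_t, a_t) \;+\; \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big) \right]
```

The coefficient \(\alpha\) trades reward against policy entropy \(\mathcal{H}\): a larger \(\alpha\) keeps the policy stochastic, which is exactly what preserves exploration when the environment shifts.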
(photo by Félix Bonne-Vie)
dvnxmvlhdf5.bsky.social
1/8
A month ago I wrapped a 4-month project with MUTEK Forum’s AI Ecologies Lab, led by Sarah Mackenzie: the research arm of Montréal’s 25-year-old electronic music festival. The theme: why entropy can make AI more resilient. Event: ra.co/events/2206981
ra.co
dvnxmvlhdf5.bsky.social
My eye colour apparently changed after 6 years
Reposted by Dane Carnegie Malenfant
eugenevinitsky.bsky.social
We're finally out of stealth: percepta.ai
We're a research / engineering team working together in industries like health and logistics to ship ML tools that drastically improve productivity. If you're interested in ML and RL work that matters, come join us 😀
Percepta | A General Catalyst Transformation Company
Transforming critical institutions using applied AI. Let's harness the frontier.
percepta.ai
dvnxmvlhdf5.bsky.social
I am on one transformer paper from 3 years ago and ICLR flooded my bids with RLVR & RLHF :S
Reposted by Dane Carnegie Malenfant
charlottevolk.bsky.social
9. We hypothesized that the efficacy of the learning curricula depends on how many distinct, useful visual features the brain recruits to solve the task - curricula which lead learners to rely on fewer, more essential visual features will result in better generalization.
Reposted by Dane Carnegie Malenfant
charlottevolk.bsky.social
5. In this study, we leveraged ANNs to develop a mechanistic predictive theory of learning generalization in humans. Specifically, we wanted to understand the role of **learning curriculum**, and develop a theory of how curriculum affects generalization.
Reposted by Dane Carnegie Malenfant
charlottevolk.bsky.social
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
Reposted by Dane Carnegie Malenfant
michellecyca.com
so fucking tiresome to get emails like this whenever I write about residential school history, truly. people who believe that graves don't exist if they can't see the bodies with their own two eyes possess the critical thinking skills of a baby playing peekaboo.
I hope you are doing well. 

I read your article on the Walrus on the reality of the current state on implementation of recommendations from that TRC. It truly is unfortunate that implementing these recommendations isn't proceeding with alacrity. 

One are that is confusing for me is the truth around the Kamloops mass grave site. In your article, you state, "discovery of unmarked graves on the grounds of the former Kamloops Indian Residential School". However, follow up work hasn't found any mass graves. I have tried to find primary sources on discovery of actual mass graves without success. 

Can you please share primary sources on this?  I have spoken to others who state that though ground penetrating radar found some suggestions of graves, follow up digging did not find any actual graves. 

Appreciate any help you can provide. Thank you.