Michael Beyeler
@mbeyeler.bsky.social
👁️🧠🖥️🧪🤖 Associate Professor in @ucsb-cs.bsky.social and Psychological & Brain Sciences at @ucsantabarbara.bsky.social. PI of @bionicvisionlab.org.
#BionicVision #Blindness #LowVision #VisionScience #CompNeuro #NeuroTech #NeuroAI
Grateful to the organizing team: @mariusschneider.bsky.social, @jingpeng.bsky.social, Y Hou, L Herbelin, J Canzano, @spencerlaveresmith.bsky.social.

👏🙏🙌 Special thanks to MS, YH, JP for daily work behind the scenes (at the expense of their own research). The challenge would not exist without them!
November 26, 2025 at 7:56 PM
Next: Join our NeurIPS workshop on Dec 7, 2025, 11 AM to 2 PM PT on Zoom!

Hear from top competitors and our 3 keynote speakers:
- @sinzlab.bsky.social
- @ninamiolane.bsky.social
- @crisniell.bsky.social

More info: robustforaging.github.io/workshop

#NeurIPS2025 #Neuroscience #AI
November 26, 2025 at 7:56 PM
Top teams:

🥇 371333_HCMUS_TheFangs (ASR 0.968, MSR 0.940, Score 0.954)
🥈 417856_alluding123 (ASR 0.864, MSR 0.650, Score 0.757)
🥉 366999_pingsheng-li (ASR 0.802, MSR 0.670, Score 0.736)

Full leaderboard: robustforaging.github.io/leaderboard/

#NeurIPS2025 #Neuroscience #AI
Robust Foraging Competition
Can your AI visually navigate better than a mouse?
robustforaging.github.io
November 26, 2025 at 7:56 PM
Thank you so much for this tip! Infuriating change
October 13, 2025 at 3:02 AM
Good eye! You’re right, my spicy summary skipped over the nuance. Color was a free-form response, which we later binned into 4 categories for modeling. Chance level isn’t 25% but is adjusted for class imbalance (majority-class frequency). Definitely preliminary re: “perception”, but it beats stimulus-only!
September 27, 2025 at 11:53 PM
Thanks! I hear you, that thought has crossed my mind, too. But IP & money have already held this field back too long... This work was funded by public grants, and our philosophy is to keep data + code open so others can build on it. Still, watch us get no credit & me eat my words in 5-10 years 😅
September 27, 2025 at 11:48 PM
Together, this argues for closed-loop visual prostheses:

📡 Record neural responses
⚡ Adapt stimulation in real time
👁️ Optimize for perceptual outcomes

This work was only possible through a tight collaboration between 3 labs across @ethz.ch, @umh.es, and @ucsantabarbara.bsky.social!
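
For the curious, here’s a toy sketch of what one closed-loop iteration could look like in code. Everything in it is hypothetical: the record/stimulate stand-ins and the inverse model are placeholders, not a real device API or our actual pipeline.

```python
# Toy sketch of one closed-loop iteration (all names and interfaces are
# hypothetical stand-ins, not a real device API or our actual pipeline).
import numpy as np

rng = np.random.default_rng(0)
record = lambda: rng.normal(size=96)                 # stand-in: read 96 recording channels
stimulate = lambda params: None                      # stand-in: deliver stimulation
invert_model = lambda target, baseline: 0.1 * (target - baseline)  # stand-in inverse model

target_pattern = np.ones(96)                         # desired evoked activity pattern

for trial in range(5):
    baseline = record()                              # record the pre-stimulus brain state
    stim = invert_model(target_pattern, baseline)    # adapt stimulation to the current state
    stimulate(stim)
    evoked = record()                                # measure the outcome...
    error = np.mean((evoked - target_pattern) ** 2)  # ...and score it, to refine the model
    print(f"trial {trial}: error vs target = {error:.2f}")
```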
September 27, 2025 at 2:52 AM
And here’s the kicker: 🚨

If you try to predict perception from stimulation parameters alone, you’re basically at chance.

But if you use neural responses, suddenly you can decode detection, brightness, and color with high accuracy.
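
If you want to play with the idea, here’s an illustrative scaffold (synthetic placeholders, not our analysis code) that fits the same cross-validated classifier to stimulation parameters vs. recorded responses. With the random data below both sit at chance; the gap only appears once you plug in real recordings.

```python
# Illustrative scaffold (synthetic placeholders, not the study's analysis code):
# decode a perceptual report from stimulation parameters alone vs. from
# recorded neural responses, using the same cross-validated classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 600
stim_params = rng.normal(size=(n_trials, 20))   # placeholder stimulation features
neural_resp = rng.normal(size=(n_trials, 96))   # placeholder evoked responses
detected = rng.integers(0, 2, size=n_trials)    # placeholder yes/no perceptual report

for name, X in [("stimulation only", stim_params), ("neural responses", neural_resp)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, detected, cv=5).mean()
    print(f"{name}: {acc:.2f} decoding accuracy")
```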
September 27, 2025 at 2:52 AM
We pushed further: Could we make V1 produce new, arbitrary activity patterns?

Yes ... but control breaks down the farther you stray from the brain’s natural manifold.

Still, our methods required lower currents and evoked more stable percepts.
September 27, 2025 at 2:52 AM
Prediction is only step 1. We then inverted the forward model with 2 strategies:

1️⃣ Gradient-based optimizer (precise, but slow)
2️⃣ Inverse neural net (fast, real-time)

Both shaped neural responses far better than the conventional 1-to-1 mapping.
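
A minimal sketch of strategy 1️⃣, assuming a trained forward model is available as a callable; the sizes, names, and regularizer below are illustrative. Strategy 2️⃣ would instead train a second network that maps target responses straight to stimulation parameters, trading a bit of precision for real-time speed.

```python
# Minimal sketch of gradient-based inversion (illustrative, not the published
# code): search for stimulation parameters whose predicted response matches a
# target neural activity pattern, under a small penalty on current amplitude.
import torch

def invert_by_gradient(forward_model, target_response, baseline_activity,
                       n_steps=500, lr=0.05):
    stim = torch.zeros(1, 96, requires_grad=True)   # start from zero stimulation
    opt = torch.optim.Adam([stim], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        pred = forward_model(stim, baseline_activity)
        loss = (torch.nn.functional.mse_loss(pred, target_response)
                + 1e-3 * stim.abs().mean())         # keep currents low
        loss.backward()
        opt.step()
    return stim.detach()

# Usage with a stand-in linear forward model (96 stim params + 96 baseline channels):
toy_forward = torch.nn.Linear(192, 96)
forward_fn = lambda stim, base: toy_forward(torch.cat([stim, base], dim=-1))
best_stim = invert_by_gradient(forward_fn, torch.randn(1, 96), torch.randn(1, 96))
```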
September 27, 2025 at 2:52 AM
We trained a deep neural network (“forward model”) to predict neural responses from stimulation and baseline brain state.

💡 Key insight: accounting for pre-stimulus activity drastically improved predictions across sessions.

This makes the model robust to day-to-day drift.
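
For readers who like code: a minimal sketch of what such a forward model could look like. The architecture, layer sizes, and channel counts are assumptions for illustration, not the network from the paper.

```python
# Minimal sketch of a forward model (illustrative assumptions, not the paper's
# architecture): predict evoked responses from stimulation parameters plus the
# pre-stimulus (baseline) brain state.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    def __init__(self, n_stim_params=96, n_channels=96, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_stim_params + n_channels, hidden),  # stim params + baseline state
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_channels),                  # predicted evoked response
        )

    def forward(self, stim_params, baseline_activity):
        return self.net(torch.cat([stim_params, baseline_activity], dim=-1))

# One training step on placeholder data: minimize MSE against recorded responses.
model = ForwardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
stim, baseline, recorded = (torch.randn(32, 96) for _ in range(3))
opt.zero_grad()
loss = nn.functional.mse_loss(model(stim, baseline), recorded)
loss.backward()
opt.step()
```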
September 27, 2025 at 2:52 AM
Many in #BionicVision have tried to map stimulation → perception, but cortical responses are nonlinear and drift day to day.

So we turned to 🧠 data: >6,000 stimulation-response pairs collected over 4 months in a blind volunteer, letting a model learn the rules directly from the data.
September 27, 2025 at 2:52 AM
Curious, though: many of the orgs leading this effort don’t seem to be on @bsky.app yet… Would love to see more #Blind, #Accessibility, and #DisabilityJustice voices here!
August 31, 2025 at 12:49 AM