Damien Teney
@damienteney.bsky.social
Research Scientist @ Idiap Research Institute. @idiap.bsky.social
Adjunct lecturer @ Australian Institute for ML. @aimlofficial.bsky.social
Occasionally cycling across continents.
https://www.damienteney.info
Coming up at ICML: 🤯Distribution shifts are still a huge challenge in ML. There's already a ton of algorithms to address specific conditions. So what if the challenge was just selecting the right algorithm for the right conditions?🤔🧵
July 7, 2025 at 4:51 PM
Reposted by Damien Teney
OMG I can confirm this ... tested by @mbsariyildiz.bsky.social on our new upcoming work (vision/robotics). Thanks @damienteney.bsky.social the effect is real 😍

arxiv.org/abs/2505.20802
June 24, 2025 at 7:43 AM
⬇️ "Do We Always Need the Simplicity Bias? Looking for Optimal Inductive Biases in the Wild" arxiv.org/abs/2503.10065
Thought-provoking work by @damienteney.bsky.social et al. looking for optimal inductive biases (activation functions) in the wild (beyond image classification: regression, tabular, algorithmic, shortcut learning).
ReLU works well on avg, but you can find completely different activations. #cvpr2025
June 14, 2025 at 1:26 PM
Coming up this week (oral @cvprconference.bsky.social):
Do We Always Need the Simplicity Bias?
We take another step to understand why/when neural nets generalize so well. ⬇️🧵
June 7, 2025 at 11:19 AM
What does/will happen when the model learns (from experience or historical data) that accepted papers contain grand claims and bold numbers >SOTA? 💡 Hint: it's not "doing rigorous science".
Intology releases Zochi, the Artificial Scientist with state-of-the-art contributions accepted in ICLR 2025 workshops.

With a standardized automated reviewer, Zochi’s papers score an average of 7.67 compared to other publicly available papers generated by AI systems that score between 3 and 4.
June 1, 2025 at 10:12 AM
Leaner Transformers: More Heads, Less Depth

H. Saratchandran, @damienteney.bsky.social , S. Lucey
June 1, 2025 at 10:09 AM
Seen on the other place... 😕🤷
Any advice for getting more ML and fewer bunnies/politics in my Bluesky feed?
April 22, 2025 at 7:14 PM
This ⬇️ is also great advice for writing a paper! 👌
But start it one *month* before the deadline.
Creating your slides at the last minute is a bad idea for multiple reasons, IMHO. Not only is it generally bad to rush things, but having a first draft ready a week before your presentation means you'll keep revisiting it mentally while showering, cycling, driving, cooking ...
April 10, 2025 at 4:21 PM
"Deep learning does not require rethinking generalization"
If you enjoyed our work on inductive biases (eg the Neural Redshift arxiv.org/abs/2403.02241), you'll love this paper that rigorously articulates "soft inductive biases" & how they explain supposedly-mysterious behaviors of neural nets.
My new paper "Deep Learning is Not So Mysterious or Different": arxiv.org/abs/2503.02113. Generalization behaviours in deep learning can be intuitively understood through a notion of soft inductive biases, and formally characterized with countable hypothesis bounds! 1/12
March 10, 2025 at 2:29 AM
This is as useful as telling us what the authors had for breakfast on the day of the experiments. 🤷
February 27, 2025 at 10:54 PM
Reposted by Damien Teney
One thing strikes me about AI, especially generative AI.

What should we make of a techno-scientific innovation whose promotion in the media comes mostly from entrepreneurs, columnists, and politicians, while the scientists in the field are far more measured?
February 9, 2025 at 8:51 PM
"Attention" in attention layers. How about sum-product layers? Key-query products? ... Neural attention has little to do with human attention. And the intuitive baggage of the name probably constrains our thinking about how transformers work. (1/2)
If you could fix one☝🏻 piece of terminology in your field, what would it be?

I’ll go first👇🏻(replying to myself like it’s normal)
December 11, 2024 at 8:59 AM
💡 Just learned about a useful shorthand notation for "expectation". It seems common among physicists, but I can't remember coming across it before. With an example use-case below ⬇️
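(The original example was posted as an image; assuming the notation in question is the physicists' angle-bracket convention for expectation, a minimal sketch:)

```latex
% Angle brackets as a shorthand for expectation, common in physics:
\langle X \rangle \equiv \mathbb{E}[X]
% Example use-case: variance written compactly.
\mathrm{Var}(X) = \langle X^2 \rangle - \langle X \rangle^2
```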
December 8, 2024 at 9:49 AM
Reviewing ML papers? 💡
If you feel that experiments are missing, ask yourself: are the additional results likely to affect the central message of the paper/nullify its main claims? If not, it's probably a nice suggestion (eg additional comparisons, datasets) but not a reason for rejection by itself.
November 25, 2024 at 8:35 AM
PSA: Can we use more bar charts in ML papers?
I can't recall the last time I wanted to compare dozens of numbers in a table to two decimal places. A visualization makes it much clearer whether claimed differences are significant.
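As an illustration, a minimal matplotlib sketch (the numbers are hypothetical, purely for demonstration): with error bars, it is immediately visible when differences between methods fall within noise.

```python
import matplotlib.pyplot as plt

# Hypothetical results: mean accuracy (%) and std. dev. across seeds.
methods = ["Baseline", "Method A", "Method B"]
means = [71.2, 72.0, 72.3]
stds = [0.6, 0.5, 0.7]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(methods, means, yerr=stds, capsize=4)
ax.set_ylabel("Accuracy (%)")
ax.set_ylim(68, 75)  # zoom on the relevant range so gaps are visible
ax.set_title("Overlapping error bars: differences may be noise")
fig.tight_layout()
plt.show()
```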
November 24, 2024 at 9:06 AM
Writing tips! This should be mandatory reading for every PhD student 👇 We'll all benefit from it as readers.
For those who missed this post on the-network-that-is-not-to-be-named, I made public my "secrets" for writing a good CVPR paper (or any scientific paper). I've compiled these tips of many years. It's long but hopefully it helps people write better papers. perceiving-systems.blog/en/post/writ...
November 20, 2024 at 5:01 PM