Jean-Sébastien Giroux
@jsgiroux.bsky.social
I love learning new things and acquiring new skills.
DL Research Scientist and future CEO, leveraging AI to mitigate climate change, develop Quantum computing algorithms & understand the brain.
PS: does anyone have suggestions for other great quantum physics foundation papers?
(6/6)
November 26, 2025 at 5:41 PM
How many Einsteins have we lost because they listened to others and were evaluated by metrics that weren't designed for them in the first place?

Paper: inters.org/files/einste...
(5/6)
November 26, 2025 at 5:41 PM
In 1905, he published 3 of the most influential papers in Physics.

Knowing this, the quote
"If you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid" takes on its full meaning.
(4/6)
November 26, 2025 at 5:41 PM
He must have faced so much criticism and judgment from his family, friends, colleagues…
If you still doubt this, apply for PhDs for 4 straight years at top schools, and after you get rejected everywhere, tell everyone you are going all in on your research projects by yourself.
(3/6)
November 26, 2025 at 5:41 PM
No professor or university believed in his ability to do great research. He was rejected everywhere he applied from 1901 to 1905.
Still, he believed in his ideas so much that he decided to go all in on them.
(2/6)
November 26, 2025 at 5:41 PM
An AGI should be able to do inference on out-of-distribution data. I would argue that the current performance of our best models partly reflects having seen the test data somewhere in their training set - data leakage.
I would love to hear your thoughts on this! (4/4)
November 20, 2025 at 1:47 AM
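The leakage concern above can be made concrete with a rough check: fingerprint every training sample and count how many test samples already appear verbatim in the training set. This is a minimal sketch with illustrative names; real benchmark audits also need fuzzy near-duplicate matching, not just exact hashes.

```python
import hashlib

def sample_hash(text: str) -> str:
    """Stable fingerprint of one raw text sample."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def leakage_fraction(train_samples, test_samples):
    """Fraction of test samples that appear verbatim in the training set."""
    train_hashes = {sample_hash(s) for s in train_samples}
    hits = sum(sample_hash(s) in train_hashes for s in test_samples)
    return hits / len(test_samples)

# Example: one of the two test samples also sits in the training set.
frac = leakage_fraction(["the sky is blue", "water boils"],
                        ["the sky is blue", "grass is green"])
print(frac)  # 0.5
```

Exact-match hashing only catches the crudest leakage; paraphrased or tokenized duplicates require embedding- or n-gram-based detection.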
We optimize the model with gradient descent, minimizing a chosen loss function to obtain weights that are optimal for that specific dataset and that generalize well to the validation set, which has a similar data distribution. (3/4)
November 20, 2025 at 1:47 AM
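A minimal sketch of the optimization loop described above, assuming a plain least-squares model and full-batch gradient descent (function and parameter names are illustrative, not from any specific paper):

```python
import numpy as np

def fit_linear(X, y, lr=0.1, steps=500):
    """Minimize the MSE loss ||Xw - y||^2 / n by full-batch gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = (2.0 / n) * X.T @ (X @ w - y)  # gradient of the MSE loss
        w -= lr * grad                        # step along the negative gradient
    return w

# Recover known weights from noiseless data.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_w = np.array([2.0, -1.0])
w = fit_linear(X, X @ true_w)
```

The same loop, with a different loss and model, is what deep learning frameworks run under the hood; "generalizing to the validation set" then means this minimizer also scores well on held-out data drawn from the same distribution.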
We are using generative AI algorithms trained on as much good-quality data as possible. So in theory the model holds as much information about the world as it has been trained on - which is a lot! (2/4)
November 20, 2025 at 1:47 AM
Here is the official Artificial Intelligence for the Earth Systems journal link!

journals.ametsoc.org/view/journal...
November 18, 2025 at 8:53 PM
We should create a new research direction - Maximizing researcher contributions to humanity.

I’m sure a lot of great researchers have awesome ideas for that!
November 11, 2025 at 1:30 PM
I think Canada needs a clearer definition of what “top talent” means before prioritizing recruitment from abroad.
Many of us already here are struggling to find opportunities - not because of a lack of talent.
Policies like this only push young researchers to leave and do great research elsewhere.
November 7, 2025 at 2:23 AM
As a beginner myself, I highly recommend this French video from one of the most exceptional professors in the field to get started with QC: youtu.be/XkGaitu3EbI?...
Quantique - Le début d'un temps nouveau
YouTube video by Coeur des sciences - UQAM
November 2, 2025 at 3:54 AM
I take a lot of pride in this paper since it was done as an undergraduate student - nothing is impossible with hard work.

I love research, and I truly do it out of pure passion. I feel alive when collaborating with highly talented people and being around those who understand and challenge my ideas.
October 15, 2025 at 2:19 PM
3. We showed that a model trained only with the predictand achieved similar performance to a model trained with multiple predictors, challenging prevailing beliefs in our field.
October 15, 2025 at 2:19 PM
2. We created a model that incorporates learned interpolation directly within the network instead of performing it beforehand. By reducing the model inputs by a factor of 16 (4× downscaling), we showed that learned interpolation achieves performance equal to that of traditional interpolation.
October 15, 2025 at 2:19 PM
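The idea of learned interpolation inside the network can be illustrated with a toy sketch, assuming a single learnable factor×factor kernel that expands each coarse pixel (the real model learns its upsampling jointly with the rest of the network; names here are hypothetical):

```python
import numpy as np

def upsample_learned(coarse, kernel):
    """Expand each coarse pixel into a patch weighted by a learned kernel.

    With kernel = np.ones((f, f)) this reduces to traditional
    nearest-neighbour interpolation done before the model; making
    the kernel trainable lets the network pick a better upsampling
    than any fixed scheme.
    """
    return np.kron(coarse, kernel)  # Kronecker product = per-pixel patch expansion

coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
up = upsample_learned(coarse, np.ones((4, 4)))  # 4x upsampling
print(up.shape)  # (8, 8)
```

Feeding the network the 2×2 coarse field instead of the 8×8 interpolated one is exactly the 16× input reduction mentioned above.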
We decided to challenge these ideas by:
1. Comparing models trained on multiple sparse and nearby domains. We showed that using sparse domains yields performance equal to or better than that of nearby domains.
We also demonstrated that adding more domains keeps performance unchanged or improves it.
October 15, 2025 at 2:19 PM
I will never retire from research.
I truly do it for fun, and I would even pay just to do it with talented people.
September 27, 2025 at 5:08 AM