Andrew Ilyas
@aifi.bsky.social
Machine Learning | Stein Fellow @ Stanford Stats (current) | Assistant Prof @ CMU (incoming) | PhD @ MIT (prev)

https://andrewilyas.com
Really big thanks to the organizers for the invitation & for putting together such a fun workshop.

My talk: simons.berkeley.edu/talks/andrew...

The paper: arxiv.org/abs/2503.13751

Joint work with @logn.bsky.social, Benjamin Chen, Axel Feldmann, Billy Moses, and @aleksmadry.bsky.social
April 10, 2025 at 9:34 PM
After another very lively poster session, our final talk of the day from @coallaoh.bsky.social - who is talking about the interactions between ML, attribution, and humans!
December 15, 2024 at 12:41 AM
Our second-to-last talk of the day - Robert Geirhos on “how do we make attribution easy?”
December 14, 2024 at 10:36 PM
One great poster session (and lunch) later - Baharan Mirzasoleiman on data selection for large language models!
December 14, 2024 at 10:22 PM
After some amazing contributed talks, we now have a panel moderated by @sadhika.bsky.social - with @coallaoh.bsky.social, Baharan Mirzasoleiman, and Robert Geirhos!
December 14, 2024 at 7:32 PM
Next up, @sanmikoyejo.bsky.social on predicting downstream properties of language models!
December 14, 2024 at 6:14 PM
You might be looking for smoothed analysis (en.wikipedia.org/wiki/Smoothe...)? It kind of interpolates between worst-case and average-case analysis: there's no distribution over problem instances you have to specify, but it also ignores "brittle" worst-case instances. Explains, e.g., why the simplex algorithm is usually fast in practice (paper: arxiv.org/abs/cs/0111050)
November 27, 2024 at 12:07 AM
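For concreteness, a rough sketch of the definition behind that recommendation (notation mine, not from the thread): the smoothed complexity of an algorithm A with running time T_A takes the worst case over inputs, but averages the running time over a small Gaussian perturbation of each input,

\[
  C_A^{\mathrm{smooth}}(n, \sigma)
  \;=\;
  \max_{x \in \mathbb{R}^n,\ \lVert x \rVert \le 1}\;
  \mathbb{E}_{g \sim \mathcal{N}(0, \sigma^2 I_n)}
  \bigl[\, T_A(x + g) \,\bigr],
\]

so an instance only counts as "hard" if it stays hard under slight random perturbation. Taking \sigma \to 0 recovers worst-case analysis, while large \sigma pushes it toward average-case analysis.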