Joel Lehman
@joelbot3000.bsky.social
ML researcher, co-author Why Greatness Cannot Be Planned. Creative+safe AI, AI+human flourishing, philosophy; prev OpenAI / Uber AI / Geometric Intelligence
2) Open-endedness: the field that rhymes most w/ unknown unknowns -- it explicitly aims to endlessly generate them. We believe OE algos can simultaneously aim toward robustness to them

Related to @jeffclune's AI-GAs, @_rockt, @kenneth0stanley, @err_more, @MichaelD1729, @pyoudeyer
January 24, 2025 at 4:00 PM
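One concrete OE algorithm in this spirit is novelty search, which selects for behaviors unlike anything seen before rather than for a fixed objective. A minimal sketch with toy 1-D behaviors (all names and parameters here are illustrative, not from the paper):

```python
import random

def novelty(behavior, archive, k=3):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")  # nothing seen yet: maximally novel
    dists = sorted(abs(behavior - b) for b in archive)
    k = min(k, len(dists))
    return sum(dists[:k]) / k

def novelty_search(init, mutate, behave, steps=200, threshold=0.05):
    """Endlessly accumulate behaviors unlike anything seen before."""
    archive, current = [], init
    for _ in range(steps):
        child = mutate(current)
        b = behave(child)
        if novelty(b, archive) > threshold:
            archive.append(b)  # a new "unknown" has been generated
            current = child    # step from it to keep exploring
    return archive

# Toy run: genomes are floats, behavior is the genome itself.
random.seed(0)
archive = novelty_search(
    init=0.0,
    mutate=lambda g: g + random.gauss(0, 0.1),
    behave=lambda g: g,
)
```

Because selection rewards divergence from the archive rather than progress on a goal, the loop keeps producing things it has never seen -- the "endlessly generate unknown unknowns" property in miniature.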
1) Artificial Life: relative to its grand aspirations to recreate life's tapestry digitally, ALife is underappreciated. Scaling + creativity may uncover novel robust neural architectures

See work done by @risi1979 @drmichaellevin @hardmaru @BertChakovsky @sina_lana + many others
January 24, 2025 at 4:00 PM
Paradigms like meta-learning ("learning how to learn") are exciting and seem like potential solutions. But they still assume a (meta-)frozen world, and need not incentivize learning how to deal w/ the unknown (paper has more on other paradigms).
January 24, 2025 at 4:00 PM
This isn't a dig at LLMs, which are amazing but still interestingly fragile at times. Generalization of big NNs is great, but the underlying assumption is train world = test world = static. The paper argues NN generalization does not directly target robustness to an open, unknown future.
January 24, 2025 at 4:00 PM
Contrasting evolution with machine learning helps highlight the blind spot: a "dumb" algo w/ no gradients or formalisms can yet create much more open-world robustness. In hindsight it makes sense: if an algo implicitly denies a problem's existence, why would it best solve it?
January 24, 2025 at 4:00 PM
Evolution, like science or VC, can be seen as making many diverse bets that future experiments may invalidate (diversify-and-filter). Organisms able to persist through many unexpected shocks are lindy, i.e. likely to persist through more. D&F can be integrated into ML methods.
January 24, 2025 at 4:00 PM
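The diversify-and-filter loop can be sketched in a few lines (purely illustrative; `perturb` and `survives` are hypothetical stand-ins for environmental shocks and survival tests, not anything from the paper):

```python
import random

def diversify_and_filter(population, perturb, survives, n_shocks=5):
    """Keep only candidates that persist through a series of unexpected shocks.

    population: list of candidate solutions (the diverse bets)
    perturb:    draws a random environmental shock
    survives:   tests whether a candidate handles a given shock
    """
    for _ in range(n_shocks):
        shock = perturb()
        # Filter: bets invalidated by the new shock are discarded.
        population = [c for c in population if survives(c, shock)]
    return population

# Toy usage: candidates are "tolerance" levels; shocks are random severities.
random.seed(0)
candidates = [random.uniform(0, 1) for _ in range(100)]      # diversify
survivors = diversify_and_filter(
    candidates,
    perturb=lambda: random.uniform(0, 0.8),
    survives=lambda c, shock: c >= shock,                    # filter
)
```

The survivors are lindy in exactly the thread's sense: having persisted through past shocks, they are statistically more likely to persist through future ones, without any gradient or foresight involved.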
Interestingly, evolution's products = remarkably robust. Invasive species evolve in one habitat, dominate another. Humans zero-shot generalize from US driving to the UK (i.e. w/o any UK data) -- still a big challenge for AI. How does evolution do it, w/o gradients or foresight?
January 24, 2025 at 4:00 PM
Most open-world AI (like LLMs) relies on "anticipate-and-train": collect as much diverse data as possible, in anticipation of everything the model might later encounter. This often works! But training assumes a static, frozen world, which leads to fragility in new situations.
January 24, 2025 at 4:00 PM
Economics papers were a bit different in the 80s?

From "Let's Take the Con out of Econometrics" by Edward Leamer, >3k citations
December 9, 2024 at 10:05 PM