Pat Alt
@patalt.org
Trustworthy AI, Counterfactual Explanations, Open-Source Software and other PhD things at TU Delft.

@julialang.org developer @taija.org

Website: www.patalt.org
Haven’t read the full paper, but in my mind this is just an inevitable consequence of extremely high degrees of freedom, and MI simply operates in that context
June 12, 2025 at 12:21 PM
I don’t think multiplicity of explanations is necessarily problematic; in fact, it may often be desirable, e.g. in the context of algorithmic recourse. But it’s definitely important to be transparent about it when interpreting and communicating results in MI and XAI more broadly
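For concreteness, here is a minimal toy sketch (plain Julia, a hypothetical linear classifier with made-up feature values; not any particular recourse method) of what multiplicity looks like in recourse: two distinct counterfactuals that are both valid, so the individual can choose whichever is easier to act on.

```julia
# Toy illustration of recourse multiplicity: two different counterfactuals
# that both flip the model's prediction for the same factual instance.
using LinearAlgebra

σ(z) = 1 / (1 + exp(-z))

# Hypothetical linear model p(y = 1 | x) = σ(w'x + b); think of the two
# features as e.g. income and savings in a credit-scoring setting.
w, b = [1.0, 1.0], -1.0
predict(x) = σ(dot(w, x) + b)

# Factual instance, currently denied (p < 0.5).
x = [-1.0, -1.0]

# Simple gradient-based search: ascend p(y = 1 | x′) and stop as soon as the
# counterfactual is valid. Different starting perturbations cross the
# decision boundary at different points.
function counterfactual(x0; η=0.05, maxsteps=10_000)
    x′ = copy(x0)
    for _ in 1:maxsteps
        predict(x′) ≥ 0.5 && break
        p = predict(x′)
        x′ .+= η .* (p * (1 - p)) .* w   # ∇ₓ′ σ(w'x′ + b) = p(1 − p) w
    end
    return x′
end

cf1 = counterfactual(x .+ [1.0, 0.0])   # nudge feature 1 first
cf2 = counterfactual(x .+ [0.0, 1.0])   # nudge feature 2 first

# Both are valid (p ≥ 0.5) and comparably costly, but prescribe different
# actions: that is multiplicity working in the individual's favour.
println("cf1 = $cf1  (p = $(round(predict(cf1), digits=2)))")
println("cf2 = $cf2  (p = $(round(predict(cf2), digits=2)))")
```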
June 12, 2025 at 12:17 PM
I’m avoiding actual eye contact at all costs
May 23, 2025 at 4:04 PM
I did use RCall.jl back then to extend Plots.jl functions with ggplot2 (incredible scenes) and even those monstrosities still work, so props to #rstats I guess.
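For anyone curious, the general pattern looks roughly like this (a sketch, not my original code; it assumes R and the ggplot2 package are installed, and the data frame is made up for illustration):

```julia
# Handing Julia data to R via RCall.jl and rendering it with ggplot2.
using RCall, DataFrames

df = DataFrame(x = randn(100), y = randn(100))

# The $-interpolation passes the Julia DataFrame into R; RCall converts
# it to an R data.frame under the hood.
R"""
library(ggplot2)
p <- ggplot($df, aes(x = x, y = y)) +
  geom_point() +
  theme_minimal()
print(p)
"""
```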
May 22, 2025 at 12:50 PM
I've had little time for #julialang dev work in recent weeks as I've been wrapping up my thesis. Can't wait to get back to it soon and DifferentiationInterface.jl will be one of the first places to look at.
May 20, 2025 at 8:24 AM
... but not my area of expertise, I'm afraid, so just thinking out loud
May 14, 2025 at 1:47 PM
hmm I guess you're thinking of something along the lines of probing activations (see e.g. arxiv.org/abs/2404.14082), but that just maps from learned representations to some output. Honestly, the best I can think of for attribution is membership inference attacks: www.cs.cornell.edu/~shmat/shmat...
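For a sense of what probing means in practice, here is a rough sketch (plain Julia, synthetic stand-in activations, ordinary logistic regression; not the paper's setup):

```julia
# Fit a linear probe that maps frozen hidden activations to a concept label.
# All dimensions and data below are made up for illustration.
using Random, Statistics

Random.seed!(42)
d, n = 16, 200                      # activation width, number of examples

# Stand-in for activations extracted from a frozen model, plus binary
# labels for some concept of interest.
H = randn(n, d)
w_true = randn(d)
y = Float64.(H * w_true .> 0)

σ(z) = 1 / (1 + exp(-z))

# Logistic-regression probe trained by plain gradient descent.
w = zeros(d)
η = 0.1
for _ in 1:1_000
    p = σ.(H * w)
    w .-= η .* (H' * (p .- y)) ./ n   # gradient of mean cross-entropy
end

acc = mean((σ.(H * w) .> 0.5) .== (y .> 0.5))
println("probe accuracy: $acc")
# High accuracy says the concept is linearly decodable from the activations;
# as noted above, that is a map from representations to outputs, not an
# attribution back to training data.
```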
May 14, 2025 at 1:47 PM
In all seriousness, I’ve learned a lot from the work of @mmitchell.bsky.social and others in her field, and I’ve also learned a lot from Hard Fork. There are disagreements, but I feel there are also certain overlaps, and you and Kevin have a fantastic platform to discuss them using more than 300 characters.
May 2, 2025 at 4:28 AM
I happen to know a great podcast where this conversation could be continued 👀
May 2, 2025 at 4:28 AM
Assuming it can be solved, and assuming hallucinations become less of an issue (o4-mini 👀), there is still a very valid question about how environmentally sustainable this is vis-à-vis traditional search (and the evidence has been pretty damning, e.g. techwontsave.us/episode/229_...)
[Link: Generative AI is a Climate Disaster w/ Sasha Luccioni – Tech Won’t Save Us (techwontsave.us)]
April 26, 2025 at 5:50 AM
Hard Fork did a good episode on this a while ago when Google’s AI summaries still recommended people eat rocks. How sustainable is it to essentially take away revenue from your own suppliers? Maybe this can be solved, but I’m not convinced it serves us or Google well in the long term.
April 26, 2025 at 5:50 AM
I’ve been positively surprised by Brave’s AI summaries lately, because they induce me to click on links to multiple sources. That helps with one major concern: diminishing incentives for folks to actually freely supply the content you’re going to just AI summarize.
April 26, 2025 at 5:50 AM