Eli Chien
@elichien.bsky.social
Incoming assistant professor at National Taiwan University. Postdoc at Georgia Tech. Ph.D. from the University of Illinois. Focus on privacy + graph learning. #MachineUnlearning #DifferentialPrivacy #DP #GNN #LLM

Homepage: https://sites.google.com/view/eli-chien/home
I don't recall many papers getting retracted from NeurICMLR or having errata after being published. This is really sad and unfortunate.

That being said, feel free to let me know if there's any error in my work. I will really appreciate the comments. 4/n, n=4.
August 16, 2025 at 4:20 AM
New students and researchers keep rediscovering that some "famous" papers are wrong (in the best case...) after wasting tons of time, yet they still have to cite or compare against these works since they're well-cited or published in NeurICMLR. How does that even make sense? 3/n
August 16, 2025 at 4:20 AM
Examples: critical errors in papers, awful reproducibility, and, worst of all, intentional lying/cheating. These researchers still earn plenty of citations and nice jobs, and have not been "punished" in terms of their reputation. 2/n
August 16, 2025 at 4:20 AM
Preprint: arxiv.org/abs/2412.08559

Stay tuned for the GitHub code and our updated version (we have some new results!).

I also want to thank my friends @jyhong.bsky.social, Chulin Xie, Ayush Sekhari, and Martin Pawelczyk for their helpful discussions and clarifications of their work! 2/n, n=2.
Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning
Large Language Models are trained on extensive datasets that often contain sensitive, human-generated information, raising significant concerns about privacy breaches. While certified unlearning appro...
arxiv.org
May 1, 2025 at 2:08 PM
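To make the core point of the preprint concrete, here is a toy audit: run a loss-threshold membership-inference test separately on a minority-only canary set instead of only averaging over the whole forget set. This is just an illustrative Python sketch under my own assumptions, not the paper's exact evaluation, and all names (loss_based_mia_auc, losses_forget_minority, etc.) are placeholders.

import numpy as np

def loss_based_mia_auc(member_losses, nonmember_losses):
    # Toy loss-threshold membership inference: lower loss => predicted member.
    # Returns the AUC of that score (ties ignored); higher AUC means more leakage.
    scores = np.concatenate([-np.asarray(member_losses), -np.asarray(nonmember_losses)])
    labels = np.concatenate([np.ones(len(member_losses)), np.zeros(len(nonmember_losses))])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Audit leakage on the average forget set and on a minority-only canary set separately;
# a population-level average can mask the higher risk on the minority subset.
# auc_avg      = loss_based_mia_auc(losses_forget_avg, losses_holdout_avg)
# auc_minority = loss_based_mia_auc(losses_forget_minority, losses_holdout_minority)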
The last one is crazy 🤣🤣🤣
April 14, 2025 at 3:21 AM
I would like to thank Pan Li, Olgica Milenkovic, Kamalika Chaudhuri, and Cho-Jui Hsieh for their help during my job search. I also appreciate the help from all my friends who offered suggestions or discussed the situation with me! (I can't list everyone due to the space limit.) 3/3
April 9, 2025 at 7:54 PM
I will keep working on trustworthy/regulatable AI, especially on privacy, machine unlearning, and AI copyright issues. Feel free to let me know if you want to collaborate in the future! Also, I wish the best of luck to my friends who are still on the job market now. It is a really tough year :( 2/3
April 9, 2025 at 7:54 PM
I believe so, but I will have to wait until Monday to know. I will DM you the Zoom link if there is one!
March 23, 2025 at 9:19 PM
Thanks for sharing! We are actually writing something related to this. Will probably cite this post :p
March 12, 2025 at 10:54 PM
Preprint: arxiv.org/abs/2410.01068

We will update it with more related work and make the changes promised during the rebuttal soon.

I am now cooking something more exciting along this line of work with my collaborators. Hope to share it with everyone soon :p
Convergent Privacy Loss of Noisy-SGD without Convexity and Smoothness
We study the Differential Privacy (DP) guarantee of hidden-state Noisy-SGD algorithms over a bounded domain. Standard privacy analysis for Noisy-SGD assumes all internal states are revealed, which lea...
arxiv.org
January 22, 2025 at 7:39 PM
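For readers unfamiliar with the setup: "hidden-state Noisy-SGD over a bounded domain" roughly means the update below, where only the final iterate is released and the intermediate iterates stay hidden. This is only a minimal sketch under my own assumptions (grad_fn, clip, sigma, and radius are illustrative names/parameters), not the analysis in the paper.

import numpy as np

def noisy_sgd(grad_fn, w0, steps, lr=0.1, clip=1.0, sigma=1.0, radius=10.0, rng=None):
    # Minimal Noisy-SGD sketch: clip the per-step gradient, add Gaussian noise,
    # then project back onto an L2 ball of the given radius (the bounded domain).
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        g = grad_fn(w)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))            # gradient clipping
        w = w - lr * (g + sigma * clip * rng.standard_normal(w.shape))  # Gaussian noise
        norm = np.linalg.norm(w)
        if norm > radius:                                               # projection step
            w = w * (radius / norm)
    return w  # hidden-state setting: only this final iterate is revealed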