Lingjun Zhao
@lingjunz.bsky.social
93 followers 47 following 6 posts
NLP PhD student @UMD. Study how to make visual-language models more trustworthy and useful for humans. Website: http://lingjunzhao.github.io
📄 Paper: arxiv.org/abs/2505.19299
💻 Code: github.com/lingjunzhao/PE…
🙏 Huge thanks to my advisor @haldaume3.bsky.social and everyone who shared insights!
🚨 New #EMNLP2025 (main) paper!
LLMs often produce inconsistent explanations (62–86%), hurting faithfulness and trust in explainable AI.
We introduce PEX consistency, a measure of explanation consistency,
and show that optimizing it via DPO improves faithfulness by up to 9.7%.
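(Not from the paper, just a rough illustration: if you're wondering what "optimizing PEX consistency via DPO" could look like in code, here's a minimal sketch. It assumes you sample several explanations per prompt, rank them with some consistency score as a stand-in for PEX, and feed the best/worst pair into the standard DPO loss. The function names, data layout, and pairing scheme are illustrative assumptions, not the released implementation.)

```python
# Illustrative sketch only: preference pairs ranked by an (assumed) explanation-
# consistency score, trained with the standard DPO objective.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logps_chosen, policy_logps_rejected,
             ref_logps_chosen, ref_logps_rejected, beta=0.1):
    """Standard DPO loss over summed token log-probs of chosen/rejected responses."""
    chosen_rewards = beta * (policy_logps_chosen - ref_logps_chosen)
    rejected_rewards = beta * (policy_logps_rejected - ref_logps_rejected)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

def build_preference_pairs(samples, consistency_score):
    """For each prompt, prefer the sampled explanation with the highest
    consistency score (stand-in for PEX); the lowest-scoring one is rejected."""
    pairs = []
    for prompt, candidates in samples:  # candidates: list of generated explanations
        ranked = sorted(candidates, key=consistency_score, reverse=True)
        pairs.append((prompt, ranked[0], ranked[-1]))  # (prompt, chosen, rejected)
    return pairs
```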
Reposted by Lingjun Zhao
What should Machine Translation research look like in the age of multilingual LLMs?

Here’s one answer from researchers across NLP/MT, Translation Studies, and HCI.
"An Interdisciplinary Approach to Human-Centered Machine Translation"
arxiv.org/abs/2506.13468
An Interdisciplinary Approach to Human-Centered Machine Translation
Machine Translation (MT) tools are widely used today, often in contexts where professional translators are not present. Despite progress in MT technology, a gap persists between system development and...
arxiv.org
Reposted by Lingjun Zhao
Do you like trivia? Can you spot when AI is feeding you BS? Or can you make AIs turn themselves inside out? Then on June 14 at College Park (or June 21 online), we have a competition for you.
[Image: QANTA logo, "Question Answering is not a Trivial Activity", showing humans and computers competing on a buzzer]
Super thankful for my wonderful collaborators: @pcascanteb.bsky.social @haldaume3.bsky.social Mingyang Xie, Kwonjoon Lee
We introduce a super simple yet effective strategy to improve video-language alignment (+18%): add hallucination correction in your training objective👌
Excited to share our accepted paper at ACL: Can Hallucination Correction Improve Video-language Alignment?
Link: arxiv.org/abs/2502.15079
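(A toy sketch of the idea, not the paper's recipe: "add hallucination correction to the training objective" could mean pairing the usual alignment/captioning loss with a term that trains the model to recover the faithful caption from a hallucinated one. The batch fields, the assumed HuggingFace-style model interface returning .loss when labels are given, and the loss weighting below are all assumptions for illustration.)

```python
# Illustrative sketch: multi-task objective = alignment loss + hallucination-correction loss.
def training_step(model, batch, lambda_corr=1.0):
    # Standard video-language alignment / captioning loss on clean captions.
    align_loss = model(video=batch["video"],
                       text=batch["caption"],
                       labels=batch["caption_ids"]).loss

    # Correction loss: given a caption with an injected hallucinated detail,
    # train the model to produce the faithful caption instead.
    corr_loss = model(video=batch["video"],
                      text=batch["hallucinated_caption"],
                      labels=batch["caption_ids"]).loss

    return align_loss + lambda_corr * corr_loss
```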
For the ACL ARR review, I’ve heard complaints about the workload—some reviewers have 16 papers. Even though I only need to write 1 rebuttal and respond to 4, it still feels substantial. For those managing more (thank you!), it can be difficult to thoroughly engage with every rebuttal.
Reposted by Lingjun Zhao
There is a new version of the Research Plan for NIST's AI Safety Consortium (AISIC) in response to EOs. I did a diff.

Out: safety, responsibility, sociotechnical, fairness, working w fed agencies, authenticating content, watermarking, RN of CBRN, autonomous replication, ctrl of physical systems
>
[Images: pages 1–3 of the diff]
Reposted by Lingjun Zhao
This is my first time serving as an AC for a big conference.

Just read this great work by Goyal et al. arxiv.org/abs/2411.11437

I'm optimizing for high coverage and low redundancy—assigning reviewers based on relevant topics or affinity scores alone feels off. Seniority and diversity matter!
Causal Effect of Group Diversity on Redundancy and Coverage in Peer-Reviewing
A large host of scientific journals and conferences solicit peer reviews from multiple reviewers for the same submission, aiming to gather a broader range of perspectives and mitigate individual biase...
arxiv.org