Navita Goyal
@navitagoyal.bsky.social
PhD student @umdcs, Member of @ClipUmd lab | Earlier @AdobeResearch, @IITRoorkee
This is a great use case of linear erasure! It's always exciting to see interesting applications of these techniques :)
September 24, 2025 at 6:45 PM
Congrats! 🎉 Very excited to follow your lab's work
August 19, 2025 at 9:50 PM
Congratulations and welcome to Maryland!! 🎉
May 30, 2025 at 4:30 PM
This option is available on the menu (three dots) next to the comment/repost/like section. I only see this when I am in the Discover feed, though, not on my regular feed.
April 27, 2025 at 4:16 PM
This is called going above and beyond for the job assigned to you.
February 26, 2025 at 1:28 AM
Our paper studying over-reliance in claim verification with LLM assistance: arxiv.org/abs/2310.12558

Re mitigation: we find that showing users contrastive explanations—reasoning both why a claim may be true and why it may be false—helps counter over-reliance to some extent.
Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong
Large Language Models (LLMs) are increasingly used for accessing information on the web. Their truthfulness and factuality are thus of great interest. To help users make the right decisions about the ...
arxiv.org
February 25, 2025 at 4:43 PM
Reposted by Navita Goyal
The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features

Despite hopes that explanations improve fairness, we see that when biases are hidden behind proxy features, explanations may not help.

Navita Goyal, Connor Baumler +al IUI’24
hal3.name/docs/daume23...
December 9, 2024 at 11:41 AM
Reposted by Navita Goyal
Large Language Models Help Humans Verify Truthfulness—Except When They Are Convincingly Wrong

Should one use chatbots or web search to fact check? Chatbots help more on avg, but people uncritically accept their suggestions much more often.

by Chenglei Si +al NAACL’24

hal3.name/docs/daume24...
December 3, 2024 at 9:31 AM
🙋‍♀️
November 20, 2024 at 11:31 AM