Tejas Srinivasan
@tejassrinivasan.bsky.social
CS PhD student at USC. Former research intern at AI2 Mosaic. Interested in human-AI interaction and language grounding.
We show that adapting AI behavior to user trust levels (showing AI explanations during moments of low trust and counter-explanations during high trust) effectively mitigates inappropriate reliance and improves decision accuracy! These improvements are also seen with other intervention strategies.
February 27, 2025 at 6:00 PM
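In code terms, the trust-adaptive behavior above reduces to a simple policy over an estimated trust level. Here is a minimal Python sketch, assuming a scalar trust estimate in [0, 1]; the thresholds, names, and the in-between default are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of a trust-adaptive intervention policy like the one
# described in the post above. All names and threshold values here are
# illustrative assumptions, not taken from the paper.

LOW_TRUST = 0.3   # assumed cutoff below which trust counts as "low"
HIGH_TRUST = 0.7  # assumed cutoff above which trust counts as "high"

def choose_intervention(estimated_trust: float) -> str:
    """Pick the assistant's behavior from the user's estimated trust level.

    Low trust risks under-reliance, so the assistant shows an explanation
    supporting its recommendation; high trust risks over-reliance, so it
    shows a counter-explanation arguing against the recommendation.
    """
    if estimated_trust < LOW_TRUST:
        return "explanation"          # nudge the user toward the advice
    if estimated_trust > HIGH_TRUST:
        return "counter-explanation"  # nudge the user to double-check
    return "recommendation-only"      # assumed default behavior in between

if __name__ == "__main__":
    for trust in (0.1, 0.5, 0.9):
        print(f"trust={trust:.1f} -> {choose_intervention(trust)}")
```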
In two decision-making tasks, we find that low and high user trust levels worsen under-reliance and over-reliance on AI recommendations, respectively 💀💀💀
Can the AI assistant do something differently when user trust is low/high to prevent such inappropriate reliance? Yes!
February 27, 2025 at 5:59 PM
People are increasingly relying on AI assistance, but *how* they use AI advice is influenced by their trust in the AI, which the AI is typically blind to. What if it weren't?
We show that adapting AI assistants' behavior to user trust mitigates under- and over-reliance!
arxiv.org/abs/2502.13321
February 27, 2025 at 5:56 PM