Aditya Vashistha
@imadityav.bsky.social
Assistant Professor at Cornell. Research in HCI4D, Social Computing, Responsible AI, and Accessibility. https://www.adityavashistha.com/
Thank you to all our participants, co-organizers, student volunteers, funders, and partners who made this possible. And to Joy Ming for the beautiful visual summaries.
May 23, 2025 at 4:00 PM
Our conversations spanned:
🔷 Meaningful use cases of AI in high-stakes global settings
🔷 Interdisciplinary methods across computing and humanities
🔷 Partnerships between academia, industry, and civil society
🔷 The value of local knowledge, lived experiences, and participatory design
May 23, 2025 at 4:00 PM
Over three days, we explored what it means to design and govern pluralistic and humanistic AI technologies — ones that serve diverse communities, respect cultural contexts, and center social well-being. The summit was part of the Global AI Initiative at Cornell.
May 23, 2025 at 4:00 PM
This was a week of reflection, new ideas, and a renewed sense of urgency to design AI systems that serve marginalized communities globally. Can't wait for what's next.
May 2, 2025 at 1:06 AM
Pragnya Ramjee presented work (with Mohit Jain at MSR India) on deploying LLM tools for community health workers in India. In collaboration with Khushi Baby, we show how thoughtful AI design can (and cannot) bridge critical informational gaps in low-resource settings.
dl.acm.org/doi/10.1145/...
ASHABot: An LLM-Powered Chatbot to Support the Informational Needs of Community Health Workers | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
May 2, 2025 at 1:06 AM
Ian René Solano-Kamaiko presented our study on how algorithmic tools are already shaping home care work—often invisibly. These systems threaten workers’ autonomy and safety, underscoring the need for stronger protections and democratic AI governance.
dl.acm.org/doi/10.1145/...
"Who is running it?" Towards Equitable AI Deployment in Home Care Work | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
May 2, 2025 at 1:06 AM
Joy Ming presented our award-winning paper on designing advocacy tools for home care workers. In this work, we unpack tensions between individual and collective goals and highlight how to use data responsibly in frontline labor organizing.
dl.acm.org/doi/10.1145/...
Exploring Data-Driven Advocacy in Home Health Care Work | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
May 2, 2025 at 1:06 AM
Dhruv Agarwal presented our cross-cultural study on AI writing tools and their Western-centric biases. We found that AI suggestions disproportionately benefit American users and subtly nudge Indian users toward Western writing norms—raising concerns about cultural homogenization.
dl.acm.org/doi/10.1145/...
AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
May 2, 2025 at 1:06 AM
Sharon Heung presented our work on personalizing moderation tools to help disabled users manage ableist content online. We showed how users want control over filtering and framing—while also expressing deep skepticism toward AI-based moderation.
dl.acm.org/doi/10.1145/...
"Ignorance is not Bliss": Designing Personalized Moderation to Address Ableist Hate on Social Media | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
May 2, 2025 at 1:06 AM
www.fastcompany.com/91324551/cha...

Kudos to Dhruv Agarwal for leading this work, and what a fun collaboration with @informor.bsky.social!
Why writing with ChatGPT makes you sound like an American
A new study from Cornell University suggests some large generative AI models have a Western bias that strips away cultural nuance.
www.fastcompany.com
May 2, 2025 at 12:41 AM
As these tools become more common, it’s critical to ask: Whose voice is being amplified—and whose is being erased? www.theatlantic.com/technology/a...
The Great Language Flattening
Chatbots learned from human writing. Now it’s their turn to influence us.
www.theatlantic.com
May 2, 2025 at 12:41 AM
Huge congratulations to Mahika Phutane for leading this work, and Ananya Seelam for her contributions!

We’re thrilled to share this at ACM FAccT 2025.

Read the full paper: lnkd.in/eCsAupvK
April 12, 2025 at 8:57 PM
Our findings make a clear case: AI moderation systems must center disabled people’s expertise, especially when defining harm and safety.

This isn’t just a technical problem—it’s about power, voice, and representation.
April 12, 2025 at 8:57 PM
Disabled participants frequently described these AI explanations as “condescending” or “dehumanizing.”

The models reflect a clinical, outsider gaze—rather than lived experience or structural understanding.
April 12, 2025 at 8:57 PM
AI systems often underestimate ableism—even in clear-cut cases of discrimination or microaggressions.

And when they do explain their decisions? The explanations are vague, euphemistic, or moralizing.
April 12, 2025 at 8:57 PM
We studied how AI systems detect and explain ableist content—and how that compares to judgments from 130 disabled participants.

We also analyzed explanations from 7 major LLMs and toxicity classifiers. The gaps are stark.
April 12, 2025 at 8:57 PM