Kush Varshney कुश वार्ष्णेय
@krvarshney.bsky.social
I wrote a book.
Free pdf: http://trustworthymachinelearning.com
Paperback: http://amazon.com/dp/B09SL5GPCD

Posts are my own and don't necessarily represent IBM.
"When language no longer requires belief, AI’s fluency becomes a kind of anesthesia. And we are the ones it sedates. I’m reminded of T. S. Eliot’s ghostly image of a “patient etherized upon a table,” alive yet emptied of agency." www.psychologytoday.com/us/blog/the-...
The Perfect Emptiness of AI
We’ve built a technology that speaks like a sage but thinks like a spreadsheet.
www.psychologytoday.com
October 30, 2025 at 11:47 AM
Reposted by Kush Varshney कुश वार्ष्णेय
Granite-4.0-H-Small: a 32B-A9B MoE Mamba for high efficiency

Damn! IBM is on the map. The American Qwen? I barely even knew IBM made LLMs, this is solid

www.ibm.com/new/announce...
October 2, 2025 at 3:21 PM
Reposted by Kush Varshney कुश वार्ष्णेय
Recently got to have a super interesting conversation with the infinitely fascinating @krvarshney.bsky.social about why we need to make AI safe, and the very nature of ethics in a disaggregated digital world. Have a watch!
www.youtube.com/watch?v=g2A7...
Why do AI models need to be safe?
YouTube video by IBM Research
m.youtube.com
September 26, 2025 at 3:31 PM
Check out IBM's latest open source tools for trustworthy AI on GitHub:

In-Context Explainability 360

FactReasoner

Contextual Privacy

Links from here: research.ibm.com/blog/debuggi...
Debugging LLMs to improve their credibility
New tools from IBM Research can help LLM users check AI-generated content for accuracy and relevance and defend against jailbreak attacks.
research.ibm.com
August 4, 2025 at 3:45 PM
"In my own interactions with ChatGPT, it has often responded, with patently insincere flattery: “That’s a great question.” It has never responded: “That’s the wrong question.” It has never challenged my moral convictions or asked me to justify myself."
www.nytimes.com/2025/08/02/o...
Opinion | A.I. Is Shedding Enlightenment Values
www.nytimes.com
August 3, 2025 at 6:41 PM
"Until we recognise that the debate about AI is not just about what machines can do but also about how humans should value education and knowledge, it will remain mired in confusion." observer.co.uk/news/opinion...
AI thrives where education has been devalued | The Observer
A culture that views knowledge as a means to an end invites the misuse of new technology
observer.co.uk
August 3, 2025 at 6:04 PM
"The true measure of progress in AI lies not in the sophistication of algorithms but in whether it genuinely serve the people and communities they seek to empower. Without grounding in human dignity and local contexts, AI risks creating technological subjugation."
www.brookings.edu/articles/ai-...
AI is not Africa’s savior: Avoiding technosolutionism in digital development | Brookings
Chinasa T. Okolo discusses how Africa can ensure AI progress serves the continent's broader goals of social and economic empowerment.
www.brookings.edu
August 3, 2025 at 2:56 AM
Reposted by Kush Varshney कुश वार्ष्णेय
What do authorship, copyright, and creativity mean in the age of AI? @krvarshney.bsky.social talks to us about it:
research.ibm.com/blog/kush-va...
How IBM’s Kush Varshney became an iconic ’test’ photo
The IBM Fellow reflects on copyright law, generative AI, and how he became the face of the modern cameraman
research.ibm.com
July 21, 2025 at 3:45 PM
"Training yourself to observe and challenge these automatic thoughts—what psychologists call metacognition—is strikingly similar to the Buddhist concept of yoniso manasikāra, or wise attention." www.forbes.com/councils/for...
Selective Thinking Is The Skill Every Leader Needs
When you observe your mind without being swept away, you take back control from unconscious, emotional thinking—the kind that fuels rash decisions and poor leadership.
www.forbes.com
July 7, 2025 at 2:03 AM
"The next decade will be shaped by innovators using AI to solve real problems in real communities. The future won’t be written in Silicon Valley, but in Lagos, Jakarta, Cairo and Dubai. AI-powered solutions fused with local knowledge will unlock this future." www.weforum.org/stories/2025...
AI: Rewriting the future of finance and financial inclusion
A new AI-driven framework that is grounded in the distinct needs of the underserved is creating a blueprint for the future of finance around the world.
www.weforum.org
July 2, 2025 at 3:06 AM
Reposted by Kush Varshney कुश वार्ष्णेय
Weike Zhao, Chaoyi Wu, Yanjie Fan, Xiaoman Zhang, Pengcheng Qiu, Yuze Sun, Xiao Zhou, Yanfeng Wang, Ya Zhang, Yongguo Yu, Kun Sun, Weidi Xie
An Agentic System for Rare Disease Diagnosis with Traceable Reasoning
https://arxiv.org/abs/2506.20430
June 26, 2025 at 5:43 AM
Reposted by Kush Varshney कुश वार्ष्णेय
📣 Today we open-sourced EvalAssist, a web-based tool that makes it super easy to develop criteria for LLM judges. You can run this now locally and then scale up with notebooks using Unitxt. Check out the AI Alliance article to get the scoop:
thealliance.ai/blog/llm-as-...
LLM-as-a-Judge Without the Headaches: EvalAssist Brings Structure and Simplicity to the Chaos of LLM Output Review | AI Alliance
Evaluating AI model outputs at scale is a major challenge for teams using LLMs, especially when assessing nuanced qualities like politeness, fairness, and tone that traditional benchmarks miss. IBM Re...
thealliance.ai
June 16, 2025 at 3:38 PM
LLM-as-a-Judge Simplified — Start Small, Refine Fast, Scale Smart ibm.github.io/eval-assist/
EvalAssist
EvalAssist simplifies LLM-as-a-Judge by supporting users in iteratively refining evaluation criteria in a web-based user experience.
ibm.github.io
June 16, 2025 at 3:18 PM
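The workflow behind a tool like EvalAssist is easy to picture in code: write down an evaluation criterion, ask a judge model to apply it to each output, and parse a structured verdict, refining the criterion as you go. Below is a minimal illustrative sketch of that loop, not the EvalAssist or Unitxt API; Criterion, judge_output, and call_judge_model are hypothetical names chosen for this example.

# Minimal LLM-as-a-judge sketch. Illustrative only: Criterion,
# judge_output, and call_judge_model are hypothetical, not the
# EvalAssist or Unitxt API.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str          # e.g. "politeness"
    description: str   # what the judge should look for
    options: list      # allowed verdicts, e.g. ["yes", "no"]

def judge_output(criterion, model_output, call_judge_model):
    """Ask a judge LLM to apply one criterion to one model output."""
    prompt = (
        f"You are evaluating a model response for: {criterion.name}.\n"
        f"Definition: {criterion.description}\n"
        f"Response to evaluate:\n{model_output}\n"
        f"Answer with exactly one of: {', '.join(criterion.options)}."
    )
    verdict = call_judge_model(prompt).strip().lower()
    # Fall back to the last option if the judge answers off-format.
    return verdict if verdict in criterion.options else criterion.options[-1]

# Start small with one criterion, refine its wording against a few
# examples, then scale the same loop over a full set of outputs.
politeness = Criterion(
    name="politeness",
    description="The response is courteous and free of dismissive language.",
    options=["yes", "no"],
)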
Reposted by Kush Varshney कुश वार्ष्णेय
🚨 Announcing our #keynote speakers for the 3rd Trustworthy AI #Workshop @deeplearningindaba.bsky.social! We are excited to welcome thought leaders pushing the boundaries of #ResponsibleAI

@krvarshney.bsky.social is an IBM Fellow at IBM Research
June 11, 2025 at 9:36 AM
Reposted by Kush Varshney कुश वार्ष्णेय
Djallel Bouneffouf, Matthew Riemer, Kush Varshney: The Ultimate Test of Superintelligent AI Agents: Can an AI Balance Care and Control in Asymmetric Relationships? https://arxiv.org/abs/2506.01813 https://arxiv.org/pdf/2506.01813 https://arxiv.org/html/2506.01813
June 4, 2025 at 6:11 AM
Reposted by Kush Varshney कुश वार्ष्णेय
Announcing our keynote speakers for #FAccT2025! 🎉

Suresh Venkatasubramanian (Brown)
Nathalie Smuha (KU Leuven)
Kristian Lum (Google DeepMind)
Molly Crockett (Princeton)

And the plenary panel will be on “Pathways of Change and the Future of Responsible AI”
May 16, 2025 at 10:43 AM
Frying gulab jamuns helps you understand the phenomenon of tidal locking between moons and planets.
May 6, 2025 at 2:07 PM
Reposted by Kush Varshney कुश वार्ष्णेय
🔗 Want to connect your agents together wherever they are🌎?

See what's possible with ACP! This video will show:
🎁 How to wrap an agent with the SDK
🔈 Calling out with a standardized client
⛓️Chaining ACP calls to different agents
📲 Prototype of ACPCallingAgent

👉 www.youtube.com/watch?v=Nzaq...
I tried getting LLMs to work together using ACP (Agent Communication Protocol)
YouTube video by Nicholas Renotte
www.youtube.com
May 6, 2025 at 2:01 PM
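The pattern in the video above is straightforward: each agent sits behind a standardized run endpoint, a client posts a message to it, and chaining simply feeds one agent's reply into the next agent's request. Here is a rough sketch of that chaining idea; the /runs path, payload shape, and agent URLs are assumptions for illustration, not the actual ACP SDK API.

# Illustrative sketch of chaining calls to agents behind a
# standardized HTTP interface. The /runs path, payload shape, and
# agent URLs are hypothetical, not the actual ACP SDK API.
import requests

def call_agent(base_url, agent_name, text):
    """POST one message to an agent and return its text reply."""
    resp = requests.post(
        f"{base_url}/runs",
        json={"agent": agent_name, "input": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["output"]

def chain(steps, text):
    """Pipe a message through a sequence of (base_url, agent_name) steps."""
    for base_url, agent_name in steps:
        text = call_agent(base_url, agent_name, text)
    return text

# Example: a researcher agent drafts, a writer agent polishes.
# answer = chain(
#     [("http://localhost:8001", "researcher"),
#      ("http://localhost:8002", "writer")],
#     "Summarize the latest ACP release notes.",
# )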
Happy to see @bhoov.bsky.social recognized in this article about spin glasses and associative memory.
www.quantamagazine.org/the-strange-...
The Strange Physics That Gave Birth to AI | Quanta Magazine
Modern thinking machines owe their existence to insights from the physics of complex materials.
www.quantamagazine.org
May 3, 2025 at 7:51 PM
Reposted by Kush Varshney कुश वार्ष्णेय
🤖 ✏️ There is a better way to explain how you used AI in your {research paper, college essay, blog posts, …}. Check out our new AI Attribution Toolkit and look for us at #CHI2025!

aiattribution.github.io
dl.acm.org/doi/full/10....
AI Attribution Toolkit
An attribution statement identifies not only the presence of AI involvement, but also how AI was used. This approach makes important distinctions between different types and amounts of AI…
aiattribution.github.io
April 29, 2025 at 12:01 AM
“If we think about how human beings [are] in the world, we do see bad things, so it’s not about allowing the language model to see only the good things. It’s about understanding the full spectrum — both good and bad,” says Ko, “and choosing to uphold our values when we speak.”
news.mit.edu/2025/trainin...
Training LLMs to self-detoxify their language
A new method called self-disciplined autoregressive sampling (SASA) allows large language models to detoxify their own outputs, without sacrificing fluency.
news.mit.edu
April 15, 2025 at 10:35 PM
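The core idea described in the article is decode-time steering: at each generation step, candidate tokens that pull the continuation toward toxic language are down-weighted before sampling, so the model detoxifies its own output without retraining. The toy sketch below shows that re-weighting step only; it is a generic illustration, not the published SASA algorithm, and toxicity_score and the beta weighting are assumptions.

# Toy sketch of decode-time detoxification by re-weighting token
# probabilities before sampling. Generic illustration only; the
# toxicity_score helper and beta weighting are assumptions, not the
# published SASA method.
import math, random

def steer_step(candidates, toxicity_score, beta=5.0):
    """
    candidates: list of (token, logprob) pairs from the base model.
    toxicity_score: hypothetical callable mapping a candidate token
        (in its context) to [0, 1], where 1 means highly toxic.
    Returns one sampled token, with toxic continuations down-weighted.
    """
    weights = [
        math.exp(logprob - beta * toxicity_score(token))
        for token, logprob in candidates
    ]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for (token, _), w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return token
    return candidates[-1][0]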