EunJeong Hwang
@ejhwang.bsky.social
250 followers 75 following 11 posts
PhD @ UBC. LLMs/NLP
Pinned
ejhwang.bsky.social
Theory of Mind can make LLMs better at dialogue: more strategic, goal-oriented, and capable of long-horizon adaptation!

In our new paper, we introduce ToMA, a dialogue lookahead training framework that enables LLMs to generate mental states that are maximally useful for achieving dialogue goals. 🧵👇
ejhwang.bsky.social
Co-lead with @yuweiyin.bsky.social

Huge thanks to @veredshwartz.bsky.social, Peter West, Giuseppe Carenini
Paper: huggingface.co/papers/2509....
Code will be released soon!
ejhwang.bsky.social
Our findings highlight that:
👉 Social reasoning in LLMs cannot be achieved by optimizing performance on general reasoning benchmarks alone!
👉 It requires explicit modeling of mental states to enable safe, fair, and effective interactions with humans.
ejhwang.bsky.social
We also examine which mental states ToMA relies on (beliefs, desires, intentions, emotions, knowledge).

🔹 ToMA prioritizes intentions over emotions (other dimensions remain similar)
🔹 It uses 5.6% more 1st-order beliefs than base models, even when both are prompted equally for 0th-/1st-order states.
ejhwang.bsky.social
We analyze 4 scenario types: cooperation, negotiation, persuasion, and conflict.

ToMA outperforms the base model in all settings. Its reasoning is more strategic (e.g., compromise, accommodation), and even in failure cases it stays more actively engaged (e.g., persuasion attempts that fall short).
ejhwang.bsky.social
ToMA adapts effectively to long conversations, sustaining strategic dialogue. When paired with diverse partners, it improves both its own goal completion and its partners’ success.
ejhwang.bsky.social
ToMA generates latent mental states and utterances optimized for social interaction goals using dialogue simulation signals. On Sotopia, it improves performance by +18.9% with Qwen2.5-3B and +6.9% with Qwen2.5-7B, while remaining competitive with a GPT-5 nano baseline.
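A minimal sketch of the lookahead loop, for intuition only (every function here is a hypothetical stub standing in for the LLM policy, the partner simulator, and the goal-completion scorer; this is not our released code):

import random

def generate_mental_states(history, k=4):
    # Hypothetical stub: sample k candidate mental-state annotations
    # (beliefs/desires/intentions about the partner) from the LLM.
    return [f"candidate mental state {i}" for i in range(k)]

def utterance_given_state(history, state):
    # Hypothetical stub: condition the LLM's next utterance on a mental state.
    return f"utterance conditioned on: {state}"

def simulate_partner(history, utterance, horizon=3):
    # Hypothetical stub: roll the dialogue forward a few turns with a
    # simulated partner to obtain a lookahead trajectory.
    return history + [utterance] + [f"simulated turn {t}" for t in range(horizon)]

def goal_score(trajectory):
    # Hypothetical stub: judge how well the trajectory achieves the
    # dialogue goal (e.g., a Sotopia-style goal-completion score).
    return random.random()

def lookahead_step(history):
    # Score each candidate mental state by the simulated outcome it leads to;
    # the highest-scoring state/utterance pair serves as the training target.
    scored = []
    for state in generate_mental_states(history):
        utt = utterance_given_state(history, state)
        traj = simulate_partner(history, utt)
        scored.append((goal_score(traj), state, utt))
    return max(scored)

score, state, utt = lookahead_step(["hi", "hello"])
print(score, state, utt)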
ejhwang.bsky.social
I'm also curious about this
yoavgo.bsky.social
let's talk about "agents" (in the LLM sense). there's a lot of buzz around "multi-agent" systems where agents collaborate but... i don't really get how it differs from thinking of a single agent with multiple modes of operation. what are the benefits of modeling as multi-agent?
ejhwang.bsky.social
Also, consider presenting a poster showcasing ongoing projects or previously presented work from recent conferences. It will be a great chance to get feedback and promote your work!
ejhwang.bsky.social
We are organizing an NLP workshop in Vancouver on Dec 10. Consider registering if you're in town for NeurIPS - it's free and open to everyone interested in NLP!
We have a great lineup of invited talks and panel discussions.

More details here: nlp.cs.ubc.ca/future-of-nl...