Thomas Hueber
@thueber.bsky.social
CNRS research scientist (directeur de recherche) - Grenoble Alpes Univ. (France) - automatic speech and language processing, assistive tech, speech and language development
👏 Congratulations to the whole team, and especially to Marc-Antoine Georges and Marvin Lavechin!

This work has been conducted at GIPSA-lab (@cnrs.fr / Grenoble Alpes University) and is supported by the MIAI AI Cluster institute.
November 5, 2025 at 8:49 AM
🧠
Why this matters

This work contributes to the development of computational models that learn acoustic, articulatory, and linguistic structure with minimal supervision and can be used to study the mechanisms underlying speech acquisition in children.
2) From perception to production
How acoustic invariance facilitates articulatory learning in a self-supervised vocal imitation model
📍 Gather Session 1 — 5 Nov 2025 @ 08:00 (online)
🔗 Full text: arxiv.org/abs/2509.05849
🎧 Demo & code: marvinlvn.github.io/projects/fro...
DevAI&Speech involves researchers and engineers from GIPSA-lab (CNRS, Université Grenoble Alpes), Laboratoire de Psychologie et de NeuroCognition (Mathilde Fort), Tampere University, Inria, and Atos (6/6)
July 2, 2025 at 11:39 AM
📢 Several fully funded PhD positions will be announced soon — but feel free to reach out already if you’re interested! (5/6)
🤖 embedding SpeechLMs in our humanoid robots and training them through natural interaction with humans
👶 better understanding the underlying mechanisms of speech acquisition through experimental studies involving parents, children, and robots at Grenoble’s Babylab. (4/6)
Key goals include:
🧠 integrating knowledge of biomechanics in SpeechLMs
👁️ enabling SpeechLMs with multimodal input/output processing (3/6)
We’ll be developing Speech Language Models (SpeechLMs) that learn like children do: through multimodal sensory input (audio, images) and interactive experiences with both their speech production system and social environment. (2/6)