ElevenLabs (@elevenlabs.io) · elevenlabs.io
Our mission is to make content universally accessible in any language and voice.
voice-chat-03 is a rich multimodal chat interface with state management built in. Pass your ElevenLabs Agent ID as a prop and ship it.

Try the demo: ui.elevenlabs.io/blocks#voice...
October 8, 2025 at 9:24 AM
transcriber-01 is an open-source voice dictation component you can drop into any web app.

Try the demo: ui.elevenlabs.io/blocks#trans...
October 8, 2025 at 9:23 AM
Introducing ElevenLabs UI - open-source components for AI audio & voice agents.

• 22 components & examples for chat interfaces, transcription, music, and more
• Fully customizable
• MIT licensed
October 8, 2025 at 9:23 AM
We're launching a comprehensive video series of Agent Tutorials that will take you from beginner to expert in building, deploying, customizing, and testing conversational AI agents with ElevenLabs.
September 23, 2025 at 3:52 PM
Last week @bolt.new brought the World's Largest Hackathon winners to New York to see their apps featured in Times Square.

We invited @serg.tech, the Voice AI Challenge winner, to our office to share their experience of adding a conversational cooking agent to their app.
August 27, 2025 at 3:32 PM
Introducing the ElevenLabs Kotlin SDK

Add conversational agents to your Android apps in minutes.

Demo and docs links below:
August 26, 2025 at 2:40 PM
Introducing the ElevenLabs React Native SDK

Build cross-platform conversational AI agents for iOS and Android in minutes. WebRTC and first-class @expo.dev support built in.

Docs links below:
August 6, 2025 at 2:08 PM
Introducing the ElevenLabs Swift SDK 2.0 & Voice UI Starter Kit

Add Conversational AI to your visionOS, macOS, and iOS apps in minutes.

Built with:
• ElevenLabs Swift SDK 2.0 (powered by WebRTC)
• SwiftUI + drop-in Xcode template / components

Docs and repo links below:
July 29, 2025 at 2:34 PM
Introducing Eleven v3 (alpha) - the most expressive Text to Speech model ever.

Supporting 70+ languages, multi-speaker dialogue, and audio tags such as [excited], [sighs], [laughing], and [whispers].

Now in public alpha and 80% off in June.
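The audio tags above go inline in the request text itself. A minimal sketch of how such a request body might be assembled; the endpoint shape and the `model_id` value are assumptions, so check the API reference for the exact identifiers:

```python
# Hedged sketch: embedding Eleven v3 audio tags ([excited], [whispers],
# [laughing], etc.) directly in the input text. The model_id "eleven_v3"
# is an assumption; consult the current docs for the real value.
text = (
    "[excited] We just shipped the new release! "
    "[whispers] Don't tell anyone yet. [laughing]"
)
payload = {
    "text": text,
    "model_id": "eleven_v3",  # assumed identifier
}
```

The tags are plain bracketed markers in the text, so no extra request fields are needed to use them.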
June 5, 2025 at 6:17 PM
Give wind.surf a voice using the ElevenLabs MCP server.

@asjes.dev walks you through configuring and using our MCP server inside Windsurf.
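For reference, a Windsurf MCP configuration entry for the ElevenLabs server might look like the sketch below; the server name, the `uvx elevenlabs-mcp` command, and the placeholder API key are assumptions, so follow the walkthrough and docs for exact values:

```json
{
  "mcpServers": {
    "ElevenLabs": {
      "command": "uvx",
      "args": ["elevenlabs-mcp"],
      "env": { "ELEVENLABS_API_KEY": "<your-api-key>" }
    }
  }
}
```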
May 9, 2025 at 6:10 AM
Introducing the open-source Next.js Audio Starter Kit

Add Text to Speech, Speech to Text, Sound Effects and Conversational AI to your product in minutes.

Built with:
• ElevenLabs SDK
• Next.js + shadcn/ui
• Tailwind CSS v4

Repo link below:
May 7, 2025 at 2:51 PM
The future of streaming is multilingual.

Gaia uses ElevenLabs to dub their original series into new languages—cutting production time by 25% and costs by 10%.

They started with trailers. Now they're localizing entire series into Spanish and German.
May 6, 2025 at 2:04 PM
Meet KUBI, the conversational robot barista and receptionist at secondspace.dev.

As the first point of contact for members, KUBI plays a key role in creating a warm and engaging experience.

That's why Second Space chose Conversational AI — to add a unique, friendly touch.
April 4, 2025 at 8:02 AM
We've added native, low-latency RAG to Conversational AI — enabling your voice agents to access and use large knowledge bases in real time.
March 27, 2025 at 4:50 PM
Conversational AI now supports automatic language detection and switching.
March 25, 2025 at 4:31 PM
We're in Taiwan this week filming a developer story about KUBI, the conversational robot barista at secondspace.dev.

Are you building an exciting project that we should feature? Let us know below.
March 18, 2025 at 3:53 PM
Voice speed controls are now available in ElevenLabs TTS, Studio, Conversational AI, and our API.

Pacing completely changes the delivery of the spoken word.

You can now control pace down to the word level, giving you more control over the expressiveness of your dialogue.
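A minimal sketch of what a TTS request body with a pacing control might look like; the `speed` field name, its placement inside `voice_settings`, and its range are assumptions, so verify them against the API reference:

```python
import json

# Hedged sketch: a text-to-speech request body with a pacing control.
# The "speed" field is an assumption based on the announcement
# (<1.0 = slower, >1.0 = faster is also assumed).
payload = {
    "text": "Welcome back. Let's pick up where we left off.",
    "model_id": "eleven_multilingual_v2",
    "voice_settings": {
        "stability": 0.5,
        "similarity_boost": 0.75,
        "speed": 0.9,  # assumed field name and range
    },
}
body = json.dumps(payload)  # serialized request body
```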
March 7, 2025 at 4:25 AM
Introducing Scribe — the most accurate Speech to Text model.

It achieves the highest accuracy on benchmarks, making it the leading model for English, Spanish, Italian, and many more languages. It supports 99 languages, speaker diarization, character-level timestamps, and non-speech events such as laughter.
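A hedged sketch of the request parameters such a transcription call might take; the field names (`model_id`, `diarize`, `timestamps_granularity`, `tag_audio_events`) are assumptions mapped from the features listed above, so confirm them in the API docs:

```python
# Hedged sketch: form fields for a Scribe speech-to-text request.
# All field names below are assumptions derived from the announced
# features, not confirmed API parameters.
form = {
    "model_id": "scribe_v1",                # assumed model identifier
    "diarize": "true",                      # speaker diarization
    "timestamps_granularity": "character",  # character-level timestamps
    "tag_audio_events": "true",             # non-speech events, e.g. laughter
}
```

The audio file itself would be attached separately as a multipart upload alongside these fields.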
February 27, 2025 at 3:55 AM