Inception Labs
@inceptionlabs.bsky.social
150 followers · 1 following · 7 posts
Pioneering a new generation of LLMs.
On Copilot Arena, developers consistently prefer Mercury's generations. It ranks #1 on speed and #2 on quality. Mercury is the fastest code LLM on the market.
We achieve over 1000 tokens/second on NVIDIA H100s. Blazing fast generations without specialized chips!
Mercury Coder diffusion large language models match the performance of frontier speed-optimized models like GPT-4o Mini and Claude 3.5 Haiku while running up to 10x faster.
We are excited to introduce Mercury, the first commercial-grade diffusion large language model (dLLM)! dLLMs push the frontier of intelligence and speed with parallel, coarse-to-fine text generation.
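As a rough illustration of the coarse-to-fine idea (a minimal sketch, not Mercury's actual implementation, which is not public): a masked-diffusion decoder starts from a fully masked sequence and, at each denoising step, commits the most confident positions in parallel until none remain. The `toy_denoiser` below is a hypothetical stand-in that returns random scores in place of a real dLLM.

```python
import numpy as np

MASK = -1          # sentinel id for a masked position
VOCAB_SIZE = 32    # toy vocabulary
SEQ_LEN = 16       # toy sequence length
NUM_STEPS = 4      # denoising steps: coarse (few tokens) -> fine (all tokens)

rng = np.random.default_rng(0)

def toy_denoiser(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for a diffusion LLM: returns per-position logits over the vocab.
    A real dLLM would condition on the prompt and the partially filled sequence."""
    return rng.normal(size=(len(tokens), VOCAB_SIZE))

def coarse_to_fine_decode(seq_len: int = SEQ_LEN, num_steps: int = NUM_STEPS) -> np.ndarray:
    tokens = np.full(seq_len, MASK, dtype=int)          # start fully masked
    for step in range(num_steps):
        logits = toy_denoiser(tokens)                   # one parallel forward pass
        confidence = logits.max(axis=-1)                # per-position confidence
        confidence[tokens != MASK] = -np.inf            # never revisit filled slots
        remaining = int((tokens == MASK).sum())
        # unmask an increasing share of positions each step (coarse -> fine)
        k = int(np.ceil(remaining / (num_steps - step)))
        chosen = np.argsort(confidence)[-k:]            # most confident masked slots
        tokens[chosen] = logits[chosen].argmax(axis=-1) # commit those tokens in parallel
    return tokens

print(coarse_to_fine_decode())
```

Because each step fills many positions with a single forward pass, the number of model calls is fixed by the step schedule rather than by the sequence length, which is where the speed advantage over token-by-token autoregressive decoding comes from.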
A new generation of LLMs . . . coming soon . . .