openai.com/index/why-la...
If you are around and want to give feedback or learn more about the AI SDK, send me a DM.
Completely revamped, type-safe AI chat is a game changer imo. I don't know any other framework that has full-stack support for this. Hope it will enable you to build the next generation of AI applications.
Introducing type-safe chat, agentic loop controls, data parts, speech generation and transcription, Zod 4 support, global provider, and raw request access.
An entirely new foundation with LanguageModelV2, a rearchitected useChat, new agentic controls and more.
Play with the alpha release and help us improve it.
Disclaimer: not yet ready for production use or for migrating existing projects.
You can define custom UI data parts and stream updates to them from the server:
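A rough sketch of what that could look like, assuming a UI message stream writer on the server and a custom `data-weather` part (names are illustrative and may differ in the alpha):

```ts
// Sketch only - the exact alpha API may differ. Assumes a server-side UI
// message stream writer that can emit typed data parts alongside the text.
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';

export async function POST(req: Request) {
  const stream = createUIMessageStream({
    execute: async ({ writer }) => {
      // stream a custom 'data-weather' part in a loading state
      writer.write({
        type: 'data-weather',
        id: 'weather-1',
        data: { city: 'Berlin', status: 'loading' },
      });

      // ...later, stream an update for the same part (same id) once loaded
      writer.write({
        type: 'data-weather',
        id: 'weather-1',
        data: { city: 'Berlin', status: 'ready', temperature: 21 },
      });
    },
  });

  return createUIMessageStreamResponse({ stream });
}
```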
ChatStore synchronizes chat write operations and manages streaming state. You can use it directly or through framework integrations like useChat.
ChatTransport makes backend integrations more flexible, allowing for client-only usage.
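Conceptually the split looks roughly like this (the interface shapes below are illustrative, not the shipped API): the transport decides where submissions go, the store owns messages and streaming state.

```ts
// Conceptual sketch - ChatStore/ChatTransport shapes here are illustrative,
// not the actual alpha API.
type UIMessage = { id: string; role: 'user' | 'assistant'; parts: unknown[] };

// A transport decides where submissions go: an HTTP API route, a client-only
// model running in the browser, a mock for tests, ...
interface ChatTransport {
  submit(messages: UIMessage[]): AsyncIterable<UIMessage>;
}

// The store owns the messages and streaming state and synchronizes writes,
// so useChat (or another framework integration) only subscribes to changes.
interface ChatStore {
  readonly messages: UIMessage[];
  readonly status: 'ready' | 'streaming' | 'error';
  sendMessage(message: UIMessage): Promise<void>;
  subscribe(listener: () => void): () => void;
}
```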
We recommend storing UI messages, not model messages, if your application uses useChat.
This ensures that the UI state can be restored correctly and makes it easier to integrate backends other than streamText (e.g. LangChain).
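For example, a sketch of a route handler that persists UI messages when the response finishes and converts them to model messages for the LLM call. `saveChat` is a hypothetical persistence helper, and the exact conversion/callback names may differ in the alpha:

```ts
// Sketch only: persist UI messages on finish, convert them to model messages
// for the LLM call. saveChat is a hypothetical persistence helper.
import { convertToModelMessages, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { saveChat } from './chat-store'; // hypothetical helper

export async function POST(req: Request) {
  const { chatId, messages } = await req.json(); // UI messages from the client

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages), // UI messages -> model messages
  });

  return result.toUIMessageStreamResponse({
    originalMessages: messages,
    // store the UI messages, so the chat can be restored as the user saw it
    onFinish: ({ messages: finalMessages }) => saveChat(chatId, finalMessages),
  });
}
```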
UI messages in AI SDK 5 will have a generic metadata property (instead of specific properties like createdAt).
This lets you send and show the message metadata that's important in your application:
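A sketch of the idea - the metadata fields below are examples, and the exact typing mechanism may differ in the alpha:

```ts
// Sketch: a per-app metadata type attached to UI messages. The fields are
// examples - the point is the generic metadata slot instead of fixed
// properties like createdAt.
import type { UIMessage } from 'ai';

type MyMetadata = {
  createdAt: number;    // epoch millis
  model: string;        // which model produced the message
  totalTokens?: number;
};

type MyUIMessage = UIMessage<MyMetadata>;

function formatMessageInfo(message: MyUIMessage): string {
  if (!message.metadata) return '';
  // metadata is typed, so access is checked at compile time
  const { model, createdAt, totalTokens } = message.metadata;
  return `${model} · ${new Date(createdAt).toLocaleTimeString()} · ${totalTokens ?? '?'} tokens`;
}
```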
What you display to the user (UI messages) is different from what you want to send to the LLM (model messages).
UI messages contain additional information such as custom app data and metadata; tool calls might be omitted; and so on.
This is my current vision of how UI messages could look - feedback welcome.
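To make the distinction from the previous point concrete, here is a rough sketch of the two shapes (field names are illustrative):

```ts
// Rough sketch of the two message shapes - illustrative only.

// What the UI renders and persists: rich parts, app data, metadata.
type SketchUIMessage = {
  id: string;
  role: 'user' | 'assistant' | 'system';
  metadata?: Record<string, unknown>; // app-specific metadata
  parts: Array<
    | { type: 'text'; text: string }
    | { type: 'reasoning'; text: string }
    | { type: `data-${string}`; id?: string; data: unknown } // custom data parts
    | { type: 'tool-invocation'; toolName: string; input: unknown; output?: unknown }
  >;
};

// What is sent to the LLM: provider-oriented content, without app data or
// metadata, possibly with tool calls omitted or summarized.
type SketchModelMessage = {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | Array<{ type: 'text'; text: string }>;
};
```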
Our planned timeline:
- alpha 1st half of May
- beta 2nd half of May
- GA in June
Workflows have a deterministic number of steps after which they will finish with a solution (of varying quality).
Agents run in a loop, and it is unclear when and if they will finish with a solution.
The problem of deciding when to stop an agent run is a variant of the halting problem: there will always be agent runs for which, given their current state and history, it cannot be known whether they will terminate with an answer or not.
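Because termination cannot be decided in general, practical agent loops impose explicit stopping conditions. A generic sketch of the pattern (not the AI SDK API):

```ts
// Generic sketch of an agent loop with explicit stop conditions - not the
// AI SDK API, just the pattern the argument above leads to.
type Step = { toolCalls: unknown[]; text: string };

async function runAgent(
  runStep: (history: Step[]) => Promise<Step>,
  { maxSteps = 10 }: { maxSteps?: number } = {},
): Promise<Step[]> {
  const history: Step[] = [];

  while (history.length < maxSteps) {
    const step = await runStep(history);
    history.push(step);

    // heuristic "done" signal: the model stopped calling tools
    if (step.toolCalls.length === 0) break;
  }

  // maxSteps is the safety net: we cannot know in general whether the loop
  // would ever stop on its own, so we bound it.
  return history;
}
```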
really digging the class-based APIs (`chat = new Chat(...)` etc) that are popping up in the Svelte ecosystem. Classes are great! They feel solid, tangible. I missed them.
Thanks to Elliot and @rich-harris.dev for this huge update!
Access the full provider response when using generateText and generateObject
When you use generateText or generateObject, you can now access the full JSON response body and use additional data:
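For example, a sketch where the raw body is assumed to be exposed as `response.body` (the exact property path may differ in the alpha):

```ts
// Sketch: read the raw provider response after a generateText call.
// The exact property path may differ in the alpha; assumed to be response.body.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a haiku about type safety.',
});

// the provider's JSON response body, for fields the SDK does not surface directly
console.log(result.response.body);
// raw request details are available too (e.g. for debugging/logging)
console.log(result.request.body);
```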