If you are around and want to give feedback or learn more about the AI SDK send me a DM.
You can define custom UI data parts and stream updates to them from the server:
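A minimal sketch of the server side, based on the `createUIMessageStream` API from the AI SDK 5 previews — names were still in flux at this point, so treat the identifiers and the part shape as assumptions:

```typescript
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';

export async function POST(req: Request) {
  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      // stream an initial custom data part...
      writer.write({
        type: 'data-weather', // custom part type, prefixed with "data-"
        id: 'weather-1',
        data: { city: 'Berlin', status: 'loading' },
      });
      // ...and later update the same part by reusing its id
      writer.write({
        type: 'data-weather',
        id: 'weather-1',
        data: { city: 'Berlin', status: 'ready', temperature: 18 },
      });
    },
  });
  return createUIMessageStreamResponse({ stream });
}
```

On the client, parts of type `data-weather` would then show up (and update in place) on the message's `parts` array.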
ChatStore synchronizes chat write operations and manages streaming state. You can use it directly or through framework integrations like useChat.
ChatTransport makes backend integrations more flexible, allowing for client-only usage.
We recommend storing UI messages, not model messages, if your application has a useChat component.
This ensures that the UI state can be correctly restored and makes it easier to integrate backends other than streamText (e.g. Langchain).
UI messages in AI SDK 5 will have a generic metadata property (instead of specific properties like createdAt).
This lets you send and show the message metadata that's important in your application:
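A sketch of what that could look like in practice. The `UIMessage` shape here is a simplified stand-in for illustration, not the SDK's actual type — the point is that `metadata` is generic and app-defined:

```typescript
// App-defined metadata: whatever your application cares about
type MyMetadata = { createdAt: number; model: string; totalTokens?: number };

// Simplified stand-in for the envisioned UI message shape
type UIMessage<METADATA = unknown> = {
  id: string;
  role: 'user' | 'assistant';
  metadata?: METADATA;
  parts: Array<{ type: 'text'; text: string }>;
};

const message: UIMessage<MyMetadata> = {
  id: 'msg-1',
  role: 'assistant',
  metadata: { createdAt: Date.now(), model: 'gpt-4o', totalTokens: 123 },
  parts: [{ type: 'text', text: 'Hello!' }],
};
```

Because `metadata` is typed via the generic parameter, the UI can render fields like `message.metadata.model` with full type safety.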
What you display to the user (UI messages) is different from what you want to send to the LLM (model messages).
UI messages contain additional information such as custom app data and metadata; tool calls might be omitted, etc.
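To make the distinction concrete, here is a hypothetical conversion sketch (the types and the function are simplified stand-ins, not the SDK's API): going from UI messages to model messages strips metadata and custom data parts and keeps only what the model needs.

```typescript
// Simplified stand-in types for illustration
type UIMessage = {
  role: 'user' | 'assistant';
  metadata?: unknown;
  parts: Array<
    | { type: 'text'; text: string }
    | { type: 'data-custom'; data: unknown }
  >;
};

type ModelMessage = { role: 'user' | 'assistant'; content: string };

function toModelMessages(messages: UIMessage[]): ModelMessage[] {
  return messages.map((m) => ({
    role: m.role,
    // drop metadata and custom data parts; keep text content only
    content: m.parts
      .filter((p): p is { type: 'text'; text: string } => p.type === 'text')
      .map((p) => p.text)
      .join(''),
  }));
}
```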
This is my current vision of what UI messages could look like - feedback welcome.
🧪 structured outputs with tool calling using generateText (experimental)
You can create structured outputs with generateText using the experimental_output setting. The object is available in the experimental_output result property.
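A short example of the experimental API (model choice and schema are illustrative):

```typescript
import { generateText, Output } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { experimental_output } = await generateText({
  model: openai('gpt-4o'),
  // structured output via tool calling, defined with a Zod schema
  experimental_output: Output.object({
    schema: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

// experimental_output is typed according to the schema
console.log(experimental_output.name, experimental_output.ingredients);
```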
Would love feedback!
Using TogetherAI models has become even easier with the new provider for the AI SDK:
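For example (the model id below is one of many hosted on Together AI; swap in any model you like):

```typescript
import { togetherai } from '@ai-sdk/togetherai';
import { generateText } from 'ai';

const { text } = await generateText({
  // example model id; any chat model hosted on Together AI works
  model: togetherai('meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'),
  prompt: 'What is the capital of France?',
});

console.log(text);
```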
🆕 useObject hook
🆕 useAssistant hook
🔨 useChat update fixes for message annotations
Here is an example page that leverages useObject:
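Roughly, such a page might look like this — the `/api/notifications` route and the schema are assumptions for illustration, and the hook is imported under its experimental name:

```typescript
'use client';

import { experimental_useObject as useObject } from 'ai/react';
import { z } from 'zod';

// schema shared with the (assumed) /api/notifications route
const notificationSchema = z.object({
  notifications: z.array(
    z.object({ name: z.string(), message: z.string() }),
  ),
});

export default function Page() {
  const { object, submit, isLoading } = useObject({
    api: '/api/notifications',
    schema: notificationSchema,
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>
      {/* the partial object streams in; fields may be undefined mid-stream */}
      {object?.notifications?.map((n, i) => (
        <p key={i}>
          <strong>{n?.name}</strong>: {n?.message}
        </p>
      ))}
    </div>
  );
}
```

Note the optional chaining throughout: while the object is streaming, any field can still be `undefined`.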
You can, for example, pass it into Cursor prompts, or use it to build dedicated LLM-based code generators for the AI SDK.
You can now download the AI SDK docs as txt from:
sdk.vercel.ai/llms.txt
🆕 access step messages in tools
You can now access the message that triggered the tool call in the tool execution options. This is particularly useful for multi-step executions with sequential tool calls.
Documentation: sdk.vercel.ai/docs/ai-sdk-...
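A sketch of the feature (tool name, schema, and prompt are illustrative): the second argument of `execute` carries the execution options, including the step's messages.

```typescript
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  maxSteps: 5, // allow multi-step, sequential tool calls
  tools: {
    searchDocs: tool({
      description: 'Search the documentation',
      parameters: z.object({ query: z.string() }),
      // execution options expose the messages that led to this tool call
      execute: async ({ query }, { messages, toolCallId }) => {
        console.log(`tool call ${toolCallId} triggered by`, messages.at(-1));
        return `results for ${query}`;
      },
    }),
  },
  prompt: 'Find information about streaming.',
});
```

In later steps, `messages` includes the earlier tool calls and results, so a tool can adapt its behavior based on what already happened.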
🆕 grok-vision-beta image support
You can now use image content parts in your messages with the xAI provider:
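For example (the image URL is a placeholder):

```typescript
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: xai('grok-vision-beta'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image.' },
        // image content part: a URL, base64 string, or Uint8Array
        { type: 'image', image: new URL('https://example.com/photo.jpg') },
      ],
    },
  ],
});

console.log(text);
```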
🆕 Streaming for OpenAI o1-series models
The OpenAI provider v1.0.1 supports streaming reasoning model responses:
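For example, streaming from o1-mini works like any other model:

```typescript
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = streamText({
  model: openai('o1-mini'),
  prompt: 'How many "r"s are in the word "strawberry"?',
});

// print text deltas as they arrive
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```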
🆕 predicted outputs support
You can use predicted outputs via the provider metadata:
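Roughly like this — the property name reflects the experimental provider-metadata API at the time of this release and may differ in newer versions:

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// the code you expect to come back mostly unchanged (placeholder)
const existingCode = 'interface User { id: number; username: string }';

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: `Replace the "username" property with "email":\n${existingCode}`,
  // the prediction is passed through OpenAI provider metadata
  experimental_providerMetadata: {
    openai: {
      prediction: { type: 'content', content: existingCode },
    },
  },
});
```

Predicted outputs can substantially reduce latency when most of the response is known in advance, e.g. when editing existing code.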
🆕 Update to Cohere v2 API & tool calling
You can now use tools with the Cohere provider.
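A short example (model id, tool, and stubbed result are illustrative):

```typescript
import { cohere } from '@ai-sdk/cohere';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: cohere('command-r-plus'),
  tools: {
    weather: tool({
      description: 'Get the weather for a city',
      parameters: z.object({ city: z.string() }),
      // stubbed result for illustration; call a real API here
      execute: async ({ city }) => ({ city, temperature: 18 }),
    }),
  },
  prompt: 'What is the weather in Berlin?',
});
```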
🆕 PDF input support
You can send PDF inputs to claude-3-5-sonnet-20241022 as file parts:
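For example (the file path is a placeholder):

```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';
import fs from 'node:fs';

const { text } = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Summarize this document.' },
        {
          // file content part: raw PDF bytes plus the mime type
          type: 'file',
          data: fs.readFileSync('./document.pdf'),
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});

console.log(text);
```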