
Vercel AI SDK 5 Migration Guide 2026

PkgPulse Team

TL;DR

Vercel AI SDK 5 (released July 2025) is a major architectural overhaul. The biggest changes: UIMessage and ModelMessage are now separate types, streaming uses SSE natively (no custom protocol), tools use inputSchema/outputSchema instead of parameters/result, and a new Agent class wraps generateText for agentic loops. Most codebases require 2–4 hours of migration. Automated codemods handle the easy parts.

Key Takeaways

  • Released July 31, 2025 — v5 is a stable, production release; v6 followed in late 2025 with further additions
  • Two message types: UIMessage (client state) vs ModelMessage (what goes to the LLM) — conversion is now explicit
  • SSE-first streaming replaces the custom streaming protocol — simpler to debug, native browser support
  • New tool API: inputSchema + outputSchema instead of parameters + result
  • Agent class: Lightweight wrapper around generateText with stopWhen and prepareStep for agentic loop control
  • Framework parity: Vue, Svelte, and Angular now have the same hooks as React (useChat, useCompletion)
  • Codemod available: Run npx @ai-sdk/codemod@latest migrate to automate most changes

Why v4 → v5 Is a Breaking Change

Vercel AI SDK v3 and v4 used a custom streaming protocol (StreamingTextResponse, experimental_StreamData) that worked around browser SSE limitations. By 2025, those limitations were gone — all major environments support SSE natively.

v5 rips out the custom protocol entirely and replaces it with standard SSE. This is architecturally cleaner but breaks existing streaming implementations.

The useChat hook's message type also changed significantly. In v4, messages had a single content: string | ContentPart[] shape. In v5, there's a distinction between UIMessage (what your React component stores and renders) and ModelMessage (what gets sent to the LLM). This enables better streaming UI with more complex content types.
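To make the split concrete, here's a toy sketch of the two shapes and a conversion between them. These are simplified, hypothetical types for illustration — the SDK's real UIMessage/ModelMessage definitions carry more fields (tool parts, attachments, metadata):

```typescript
// Simplified, hypothetical shapes — NOT the SDK's real types.
type ToyUIMessage = {
  id: string;                                  // client-side identity for React keys
  role: 'user' | 'assistant';
  parts: { type: 'text'; text: string }[];     // rich, renderable parts
};

type ToyModelMessage = {
  role: 'user' | 'assistant';
  content: string;                             // flat content the LLM consumes
};

// Toy stand-in for convertToModelMessages(): drop UI-only fields,
// flatten text parts into a single content string.
function toModelMessages(ui: ToyUIMessage[]): ToyModelMessage[] {
  return ui.map(m => ({
    role: m.role,
    content: m.parts
      .filter(p => p.type === 'text')
      .map(p => p.text)
      .join(''),
  }));
}
```

The point of the sketch: the conversion is lossy on purpose — UI-only state never reaches the model.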


Installation

npm install ai@5

The core package is still just ai. Provider packages stay the same:

npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google

Installing ai@5 updates only the core runtime. Provider packages (@ai-sdk/openai, @ai-sdk/anthropic, @ai-sdk/google) need to be updated separately — the major version bump in ai requires updated provider packages that match the v5 API. Run npm install ai@5 @ai-sdk/openai@latest @ai-sdk/anthropic@latest @ai-sdk/google@latest to update everything together.

After installation, TypeScript will immediately flag incompatible types — Message[] where UIMessage[] is expected, and uses of StreamingTextResponse that no longer exist. These compile errors are your migration roadmap. Starting from the compile errors and working outward is often faster than reading the migration guide end-to-end.

For projects using a monorepo (Turborepo, pnpm workspaces), update ai in all packages that import from it simultaneously. A mismatch between ai@4 in one package and ai@5 in another will cause type errors when the packages share message types across API boundaries.


Migration by Feature

1. Message Types: UIMessage vs ModelMessage

v4:

import { Message } from 'ai';

const [messages, setMessages] = useState<Message[]>([]);
// Message had: id, role, content (string | ContentPart[])

v5:

import { UIMessage } from 'ai';

const [messages, setMessages] = useState<UIMessage[]>([]);
// UIMessage has: id, role, parts (array of content parts)
// ModelMessage is what you send to the LLM (different shape)

Converting between types:

import { convertToModelMessages } from 'ai';

// When calling generateText/streamText from a server action:
const modelMessages = convertToModelMessages(uiMessages);

const result = await streamText({
  model: openai('gpt-4o'),
  messages: modelMessages,
});

This explicit conversion replaces v4's implicit handling. It adds a line of code but makes it clear where the boundary between client state and LLM input is.

2. Streaming: SSE Replaces Custom Protocol

v4 (route.ts):

import { StreamingTextResponse, streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return new StreamingTextResponse(result.textStream);
}

v5 (route.ts):

import { streamText, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  // toUIMessageStreamResponse() returns standard SSE
  return result.toUIMessageStreamResponse();
}

Key difference: StreamingTextResponse is gone. Use result.toUIMessageStreamResponse() instead. Note also that streamText is no longer awaited in v5 — it returns immediately and streams lazily.
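Because the v5 stream is plain SSE, a few lines of standard string handling are enough to inspect it — no SDK-specific decoder. A minimal sketch (the JSON payload shape here is illustrative, not the SDK's exact event schema):

```typescript
// Minimal SSE line parser: extracts the payload of each `data:` line.
// Real AI SDK streams carry JSON events; the payload shapes here are illustrative.
function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split('\n')
    .filter(line => line.startsWith('data: '))
    .map(line => line.slice('data: '.length))
    .filter(payload => payload !== '[DONE]'); // skip the common stream terminator
}
```

You can paste raw text/event-stream bodies from DevTools into a function like this when debugging a broken stream.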

v5 (useChat hook — the hook moved to @ai-sdk/react and no longer manages input state for you):

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}</strong>
          {/* In v5, render m.parts instead of m.content */}
          {m.parts.map((part, i) =>
            part.type === 'text' ? <p key={i}>{part.text}</p> : null
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input value={input} onChange={e => setInput(e.target.value)} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

The big UI change: render m.parts instead of m.content. Parts is an array of typed content objects ({ type: 'text', text: '...' }, { type: 'tool-call', ... }, etc.).

3. Tool Calling: New Schema API

v4:

import { tool } from 'ai';
import { z } from 'zod';

const getWeather = tool({
  description: 'Get weather for a city',
  parameters: z.object({
    city: z.string(),
  }),
  execute: async ({ city }) => {
    return { temp: 72, condition: 'sunny' };
  },
});

v5:

import { tool } from 'ai';
import { z } from 'zod';

const getWeather = tool({
  description: 'Get weather for a city',
  inputSchema: z.object({        // was: parameters
    city: z.string(),
  }),
  outputSchema: z.object({       // was: nothing (return type was inferred)
    temp: z.number(),
    condition: z.string(),
  }),
  execute: async ({ city }) => {
    return { temp: 72, condition: 'sunny' };
  },
});

parameters → inputSchema. The addition of explicit outputSchema enables better type safety and allows tools to be used in Arazzo-style workflows where downstream steps consume tool outputs.
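The value of an explicit output contract can be sketched without the SDK: a tool whose declared output is checked at the boundary fails loudly instead of leaking a bad shape downstream. This is a hand-rolled toy validator, not the SDK's zod-based validation:

```typescript
// Toy stand-in for outputSchema validation — illustrative only.
type WeatherOutput = { temp: number; condition: string };

function isWeatherOutput(value: unknown): value is WeatherOutput {
  const v = value as Record<string, unknown>;
  return typeof v === 'object' && v !== null &&
    typeof v.temp === 'number' && typeof v.condition === 'string';
}

// Validate at the boundary, so consumers get a trusted, typed value.
function runWeatherTool(city: string): WeatherOutput {
  const raw: unknown = { temp: 72, condition: 'sunny' }; // pretend tool result
  if (!isWeatherOutput(raw)) {
    throw new Error(`getWeather returned an invalid shape for ${city}`);
  }
  return raw;
}
```

With zod in the real SDK, the schema plays both roles at once: the TypeScript type and the runtime check.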

Dynamic tools (new in v5):

// Tools can now be defined at runtime without static schema
const dynamicTool = tool({
  description: 'Call any endpoint',
  inputSchema: getSchemaFromDatabase(),   // Dynamic schema
  execute: async (input) => { /* ... */ },
});

4. The New Agent Class

v5 introduces a lightweight Agent class for agentic loops — the pattern where you repeatedly call an LLM until it stops using tools:

v4 (manual agentic loop):

let messages = initialMessages;
let shouldContinue = true;

while (shouldContinue) {
  const result = await generateText({
    model: openai('gpt-4o'),
    messages,
    tools: myTools,
  });

  messages = [...messages, ...result.responseMessages];

  if (result.finishReason === 'stop' || result.toolCalls.length === 0) {
    shouldContinue = false;
  }
}

v5 (Agent class):

import { Agent, stepCountIs } from 'ai';

const agent = new Agent({
  model: openai('gpt-4o'),
  tools: myTools,
  // Upper bound on steps; the loop also ends when the model stops calling tools
  stopWhen: stepCountIs(10),
  prepareStep: ({ stepNumber, messages }) => {
    // Modify settings or messages between steps
    return {};
  },
});

const result = await agent.generate({
  messages: initialMessages,
});

The Agent class isn't magic — it's the same loop pattern, just packaged. The benefit is that stopWhen and prepareStep make the control flow explicit and testable.
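That control flow can be simulated without any LLM at all: a step function runs until a stop predicate fires, with a prepare hook between iterations. This is a toy sketch of the pattern — the SDK's actual stopWhen/prepareStep signatures differ in detail:

```typescript
// Toy agentic loop: illustrates the stopWhen/prepareStep control flow,
// not the SDK's real Agent API.
type Step = { toolCalls: string[]; output: string };

function runLoop(
  steps: Step[],                                // pre-scripted "model" turns
  stopWhen: (step: Step) => boolean,
  prepareStep: (step: Step) => Step = s => s,   // hook between iterations
): string[] {
  const outputs: string[] = [];
  for (const raw of steps) {
    const step = prepareStep(raw);
    outputs.push(step.output);
    if (stopWhen(step)) break;                  // explicit, testable stop condition
  }
  return outputs;
}

// Same predicate as the article's example: stop when no tools were called.
const outputs = runLoop(
  [
    { toolCalls: ['getWeather'], output: 'calling tool' },
    { toolCalls: [], output: 'final answer' },
  ],
  step => step.toolCalls.length === 0,
);
```

Because the predicate is a plain function, you can unit-test your stop condition in isolation from the model.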

5. Provider Registry (New)

v5 adds a global provider registry so models can be referenced by string:

import { createProviderRegistry } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';

const registry = createProviderRegistry({
  openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  anthropic: createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY }),
});

// Reference models by string anywhere in your app
const result = await generateText({
  model: registry.languageModel('openai:gpt-4o'), // default separator is ':'
  prompt: 'Hello',
});

This is especially useful for apps that let users choose their LLM provider at runtime.
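The registry's string convention is easy to picture as a keyed lookup. A toy sketch, assuming the SDK's default ':' separator (which is configurable) — the real createProviderRegistry resolves to full SDK model instances, not plain objects:

```typescript
// Toy provider registry: maps 'provider:model-id' strings to factories.
// Illustrative only — not the SDK's implementation.
type ModelFactory = (modelId: string) => { provider: string; modelId: string };

function createToyRegistry(providers: Record<string, ModelFactory>) {
  return {
    languageModel(id: string) {
      const sep = id.indexOf(':');
      if (sep === -1) throw new Error(`expected 'provider:model', got '${id}'`);
      const provider = id.slice(0, sep);
      const modelId = id.slice(sep + 1);
      const factory = providers[provider];
      if (!factory) throw new Error(`unknown provider '${provider}'`);
      return factory(modelId);
    },
  };
}
```

The useful property is that the string can come from anywhere — a user setting, a feature flag, a database row — without the call site knowing which provider package backs it.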


Using the Codemod

Vercel provides an official codemod for mechanical changes:

npx @ai-sdk/codemod@latest migrate

The codemod handles:

  • StreamingTextResponse → result.toUIMessageStreamResponse()
  • parameters → inputSchema in tool definitions
  • Some import path updates

It doesn't handle:

  • The UIMessage / ModelMessage split (requires understanding your data flow)
  • Rendering m.parts vs m.content in UI components
  • Custom streaming protocol implementations

Run the codemod first, then manually address the message type changes.

After the Codemod: Manual Steps

The mechanical migration is straightforward; the semantic migration requires understanding your application's message flow. The UIMessage / ModelMessage split is the core concept you need to internalize before touching message-handling code.

In every route that receives messages from the client (typically a Next.js App Router server action or API route), you now need to explicitly convert:

// v4 — messages passed directly
const result = await streamText({
  model: openai('gpt-4o'),
  messages,
})

// v5 — explicit conversion required
const result = streamText({
  model: openai('gpt-4o'),
  messages: convertToModelMessages(messages), // required in v5
})

In every React component that renders messages, you need to change message.content to iterating over message.parts. A message in v5 is an array of parts rather than a single string:

// v4
{messages.map(m => <div key={m.id}>{m.content}</div>)}

// v5
{messages.map(m => (
  <div key={m.id}>
    {m.parts.map((part, i) => {
      if (part.type === 'text') return <span key={i}>{part.text}</span>
      if (part.type === 'tool-call') return <ToolCallUI key={i} {...part} />
      return null
    })}
  </div>
))}

This change is necessary for applications that use tool calls — the tool call results now appear as parts in the message rather than as a separate toolInvocations array on the message object.

The codemod handles the route changes but not the UI rendering changes. Budget time to update all message-rendering components after running it.

v4 Features with Direct v5 Equivalents

Several v4 features changed names or locations:

  • StreamingTextResponse → result.toUIMessageStreamResponse()
  • createStreamableUI → streamUI (experimental in v4, stable in v5)
  • generateText options: maxTokens → maxOutputTokens; maxSteps replaced by stopWhen
  • tool() helper: parameters → inputSchema, add outputSchema for typed returns

For teams using ai/rsc (React Server Components streaming), the API is largely stable but imports have moved. The streamUI function is the primary RSC streaming primitive in v5, replacing the experimental createStreamableUI from v4.

Testing the Migration

After running the codemod and manually updating message rendering, the most effective way to verify the migration is to test the streaming behavior in browser DevTools. Open the Network tab and start a chat conversation. You should see a request with a response of content type text/event-stream, with individual data: lines appearing as the model streams its response.

If you see a 500 error or a response that's not content type text/event-stream, the route conversion is incomplete. The most common cause is a missing convertToModelMessages() call or an outdated import path.

Testing tool calling requires more deliberate setup: create a prompt that reliably triggers one of your tools, then verify the tool result appears correctly in the message parts array. A good test is a tool that returns a structured object — confirm that the object appears as a tool-result part in the message rather than as formatted text.
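That round-trip assertion can be made on plain data: check that the assistant message's parts array contains a tool-result part carrying the structured object. The part shapes below are illustrative — the SDK's real part types are richer:

```typescript
// Illustrative part shapes — NOT the SDK's real part type union.
type Part =
  | { type: 'text'; text: string }
  | { type: 'tool-result'; toolName: string; output: unknown };

// Find the structured output of a named tool in a message's parts array.
function findToolResult(parts: Part[], toolName: string): unknown | undefined {
  const part = parts.find(
    (p): p is Extract<Part, { type: 'tool-result' }> =>
      p.type === 'tool-result' && p.toolName === toolName,
  );
  return part?.output;
}
```

In a real test you would run this against the messages array produced by your route, asserting the object survived as structured data rather than formatted text.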

For applications with extensive test coverage using Jest or Vitest, the AI SDK provides a MockLanguageModelV2 test helper (matching v5's LanguageModelV2 spec) that works correctly with v5's streaming model. The test helpers were updated for v5 to produce UIMessage-compatible output when used with useChat in unit tests. If your existing tests use the v4 test helpers, they need updating alongside the application code.

The Vercel AI SDK vs OpenAI SDK vs Anthropic SDK 2026 comparison is the companion article for teams evaluating whether to use the Vercel AI SDK at all vs building directly on vendor SDKs. The migration complexity covered here applies to the AI SDK specifically; if you decide the abstraction isn't worth it, that article covers the alternatives. For most teams building TypeScript LLM applications in 2026, the AI SDK's abstractions are worth the migration cost — the provider portability, standardized streaming, and typed tool calling API are genuinely valuable for teams building at scale.


Migration Checklist

  • Run npx @ai-sdk/codemod@latest migrate
  • Update messages from Message[] to UIMessage[]
  • Add convertToModelMessages() calls in server routes before passing to streamText/generateText
  • Replace StreamingTextResponse with result.toUIMessageStreamResponse()
  • Update tool definitions: parameters → inputSchema, add outputSchema if needed
  • Update UI rendering: m.content → m.parts.map(part => ...)
  • Replace manual agentic loops with new Agent() if applicable
  • Test streaming in browser DevTools (Network tab → check for text/event-stream content type)
  • Verify tool calls round-trip correctly end-to-end

v5 vs v6: What Changed Next

v6 (released late 2025) added:

  • Tool call outputs can now be arrays of content parts, not just strings
  • Realtime API call support
  • Additional dev tools from Vercel Ship AI 2025

If you're migrating now, migrate to v5 first, then v6 is mostly additive.

The distinction between v5 and v6 is important for planning your migration timeline. v5 is the foundational architectural change — the UIMessage/ModelMessage split, standard SSE, the new tool API. v6 builds on v5's foundations with new capabilities rather than changing the existing architecture. Teams that complete the v5 migration can adopt v6 features incrementally as they need them, rather than facing another breaking change.

The Realtime API support in v6 deserves special mention. The OpenAI Realtime API enables bidirectional audio streaming with LLMs — the kind of real-time voice conversation capability that powers voice assistants. v6's Realtime support in the AI SDK means teams can build voice-to-voice features without low-level WebSocket management. If your roadmap includes voice interaction, v6 is the version to target.

For most teams in early 2026, v5 should be the migration target. v6's additive features don't require migrating from v5 — they simply extend what v5 already provides. The breaking changes were concentrated in v5; v6 was designed to be adopted incrementally.


Packages Impacted

Package            v4 Name     v5 Status
ai                 ai          Same name, major version bump
@ai-sdk/openai     same        Updated, compatible
@ai-sdk/anthropic  same        Updated, compatible
@ai-sdk/google     same        Updated, compatible
ai/react           ai/react    Moved to @ai-sdk/react; useChat renders parts instead of content
ai/svelte          ai/svelte   Moved to @ai-sdk/svelte; now at feature parity with React
ai/vue             ai/vue      Moved to @ai-sdk/vue; now at feature parity with React

Ecosystem & Community

The Vercel AI SDK has grown into the de facto standard for TypeScript LLM applications. The npm package ai has over 1 million weekly downloads as of early 2026, reflecting how thoroughly it's been adopted for everything from chatbot widgets to automated content pipelines. Vercel maintains it as a strategic priority — it's foundational to their platform's AI features and several of their major customers rely on it for production workloads.

The provider ecosystem is broad. Beyond the core OpenAI, Anthropic, and Google providers, there are community-built providers for Mistral, Groq, AWS Bedrock, Azure OpenAI, and dozens of smaller LLM services. The provider interface is well-specified enough that new providers appear quickly after new models launch.

Community support is strong through the Vercel Discord and GitHub Discussions. The SDK team is responsive to issues and the migration from v4 to v5 was accompanied by extensive documentation, example repositories, and recorded walkthroughs. For a major version change, the transition support was above average.

Real-World Adoption

Vercel AI SDK v5 is in production across thousands of applications. Startups building AI-powered tools — customer support bots, code assistants, document analysis pipelines — commonly use it as their primary LLM integration layer. The Next.js integration is particularly seamless: the App Router's server actions pattern pairs naturally with streamText and generateText.

Larger enterprises are using it for internal tooling. The provider registry feature introduced in v5 is especially valuable in enterprise settings where different teams may want different LLM providers, or where the LLM provider needs to be configurable at runtime based on user permissions or cost considerations.

The migration from v4 to v5 has been somewhat slow for established production applications — the m.content to m.parts change in UI components requires touching every chat message rendering component, which in large applications can mean dozens of files. Teams running v4 in production without complex requirements often defer migration until they need a v5-only feature.

Developer Experience Deep Dive

v5's TypeScript experience is substantially better than v4. The UIMessage and ModelMessage type split eliminates a class of runtime errors where developers accidentally passed UI-shaped messages to the LLM API. The explicit convertToModelMessages() call is verbose but makes the boundary visible in code review.

The streaming debugging experience is dramatically improved. v4's custom streaming protocol was opaque in browser DevTools — the Network tab showed a binary-ish stream that didn't clearly map to response chunks. v5's standard SSE shows clean data: prefixed event lines in DevTools, making it easy to verify that streaming is working correctly and trace which part of the response arrived when.

Tool calling development is more ergonomic with outputSchema. In v4, tool return types were inferred from the execute function, which meant TypeScript type errors in tool consumers were sometimes cryptic. In v5, the explicit outputSchema is the source of truth for tool output types, and validation errors at runtime produce clear error messages.

Migration Guide: Common Pitfalls

The most common migration mistake is updating the server route but forgetting to update the client component. After converting StreamingTextResponse to result.toUIMessageStreamResponse(), the streaming works but the chat UI breaks because m.content is now undefined — only m.parts exists. Run a codebase-wide search for m.content and message.content after running the codemod.

The second most common issue is forgetting convertToModelMessages() in server routes. Without it, streamText receives UIMessage[] which has a different shape than expected, causing either TypeScript errors or runtime failures. Every route that accepts messages from useChat needs the conversion call.

For applications that implement custom streaming consumers (WebSocket adapters, SSE polyfills), the migration is more involved. The v5 SSE format has a different event schema than v4's custom protocol. Existing custom parsers need to be rewritten to consume standard SSE events.

Final Verdict 2026

Migrate to AI SDK v5 if you're starting a new project or building on top of existing v4 code that needs v5 features like the Agent class or provider registry. The architectural improvements — standard SSE, explicit message type conversion, typed tool schemas — are genuine improvements that make AI applications easier to debug and maintain.

For stable v4 applications in production without active development, the migration cost (2–4 hours for most apps, potentially more for large chat UIs) may not be justified until you need a v5-specific feature. v4 continues to work for now, but it won't receive fixes indefinitely.

The Vercel AI SDK is now mature enough that v5 should be the starting point for any new JavaScript LLM integration project in 2026.


Recommendations

Migrate now if:

  • You're building a new project — start with v5, don't inherit v4 patterns
  • Your app uses tool calling heavily — inputSchema/outputSchema is strictly better
  • You want native SSE debugging in browser DevTools

Plan 2–4 hours if:

  • You have a working v4 app — the codemod handles 60–70% of mechanical changes; message type updates are manual
  • Your UI renders message.content in many places — each needs updating to message.parts

Understanding the v5 Architecture

The core architectural insight in AI SDK v5 is the separation of UI state from LLM input. In v4, the same message array served both purposes — it was the state that React rendered and the input that went to the API. This conflation caused issues: UI-specific fields (rendering state, client IDs) polluted the LLM input, and LLM responses sometimes contained structured data that didn't render well in generic string-based UI components.

v5 solves this with two distinct message types. UIMessage represents what the user interface stores and renders — it can contain rich content parts like images, attachments, and AI-generated UI. ModelMessage is what actually goes to the LLM — a normalized form with the specific structure the model expects. The convertToModelMessages() function converts between them.

This architecture cleanly separates concerns. Your React component manages UIMessage[] with all its rendering metadata. Your server route calls convertToModelMessages(uiMessages) to produce the clean LLM input. The LLM's response comes back as a ModelMessage which the streaming response converts into UI state updates. Each layer has a clear responsibility.

The Streaming Model in v5

v4's custom streaming protocol (StreamingTextResponse, experimental_StreamData) was a workaround for the fact that standard SSE didn't support all the data types the SDK needed to stream. By 2025, the SSE specification had evolved and the workarounds were no longer necessary.

v5's standard SSE streaming means every HTTP debugging tool works correctly — curl, browser DevTools, Postman all display the streaming response as readable text. In v4, the response was a binary-ish stream that required SDK-specific tooling to interpret. The move to standard SSE is a meaningful quality-of-life improvement for debugging production streaming issues.

Tool Calling and Agents in v5

Tool calling in v4 required defining a parameters Zod schema and an execute function. The tool's return type was inferred from the execute function signature. In v5, both input and output are explicitly typed: inputSchema (what the LLM sends to the tool) and outputSchema (what the tool returns to the LLM) are separate schemas.

The explicit output schema improves type safety in the tool calling pipeline. In v4, tools that returned complex objects required careful type assertions downstream. In v5, the outputSchema is the contract — TypeScript validates that your execute function returns something that matches, and the AI SDK validates at runtime that it does.

The Agent class introduced in v5 addresses the most common pattern in agentic AI applications: a loop where the LLM runs, potentially calls tools, and continues until a condition is met. Before v5, teams wrote custom loops with generateText and maxSteps. v5's Agent class codifies this pattern with stopWhen callbacks and prepareStep hooks for modifying context between iterations.

Production Considerations

For production AI applications, v5 introduces important infrastructure changes. The provider registry (added in v5) allows runtime selection of LLM providers — useful for A/B testing models, implementing fallback providers when one is rate-limited, or routing requests to different models based on user tier. Applications that previously hardcoded openai('gpt-4o') can now configure providers dynamically.

The standard SSE format also simplifies edge deployment. AI SDK v4's custom streaming protocol required careful handling in Cloudflare Workers, Vercel Edge Functions, and other edge runtimes where the response body had to be manually constructed. v5's standard SSE works correctly with the standard Response constructor in all edge environments.
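A hand-rolled illustration of why standard SSE "just works" at the edge: the whole response is the web-standard Response plus a ReadableStream, with no runtime-specific plumbing. The event payload shape here is illustrative, not the AI SDK's actual schema (which result.toUIMessageStreamResponse() produces for you):

```typescript
// Sketch: constructing an SSE response with only web-standard APIs
// (available in Node 18+, Cloudflare Workers, and other edge runtimes).
function toySSEResponse(textChunks: string[]): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      for (const text of textChunks) {
        // Illustrative payload shape, not the SDK's real event schema.
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify({ type: 'text-delta', text })}\n\n`)
        );
      }
      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    },
  });
  return new Response(body, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    },
  });
}
```

Nothing here is Vercel-specific or Node-specific, which is exactly what makes the v5 streaming format portable across edge runtimes.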

Methodology

  • Sources: Vercel AI SDK v5 announcement (vercel.com/blog/ai-sdk-5, July 2025), official migration guides at ai-sdk.dev, Callstack technical breakdown, VoltAgent explainer
  • Date: March 2026

Comparing AI SDK providers? See Vercel AI SDK vs OpenAI SDK vs Anthropic SDK 2026.

Building AI agents? See AI SDK vs LangChain JavaScript 2026 — when the AI SDK's Agent class is enough vs when you need full orchestration.

For a broader view of the JavaScript AI ecosystem, see best AI LLM libraries JavaScript 2026. The AI SDK pairs naturally with Hono or Elysia for API routes — see Hono vs Elysia 2026 for the backend framework comparison.

The AI SDK v5 migration is worth doing before your codebase grows — the unified message format and improved streaming primitives become increasingly valuable as you add more model providers and streaming endpoints.
