
AI SDK vs LangChain: Which to Use in 2026

PkgPulse Team

The npm package named ai has 2.8 million weekly downloads. The langchain package has 1.3 million. These two numbers tell an incomplete story — because they represent fundamentally different positions on the complexity spectrum of AI development. One is a polished library for common patterns; the other is a framework for the problems that libraries can't solve.

TL;DR

The ai package (Vercel AI SDK) is the right default for most JavaScript AI applications in 2026 — especially anything with a React or Next.js frontend. The langchain package (and its ecosystem of @langchain/core, @langchain/openai, etc.) is the right choice when you're building complex RAG systems, stateful agents, or need LangChain's 200+ integrations. For many teams, the answer is both.

Key Takeaways

  • ai package: 2.8M weekly downloads, 38K GitHub stars, Vercel-backed, React-first
  • langchain package: 1.3M weekly downloads, though @langchain/core reaches roughly 28M monthly downloads across the ecosystem
  • ai provides generateText, streamText, generateObject, useChat, useCompletion — covers 80% of use cases
  • LangChain provides chains, agents, RAG primitives, memory, 200+ integrations — covers the remaining 20%
  • AI SDK bundle: ~15-60 kB gzipped; LangChain bundle: ~380 kB (full), ~101 kB (@langchain/core)
  • AI SDK natively supports edge runtimes; LangChain does not
  • @ai-sdk/langchain bridge package makes them interoperable

Package Names and Ecosystem

First, let's clarify the package landscape since it's confusing:

Vercel AI SDK:

  • ai — Core SDK with React hooks and provider-agnostic API
  • @ai-sdk/openai — OpenAI provider
  • @ai-sdk/anthropic — Anthropic provider
  • @ai-sdk/google — Google Generative AI provider
  • @ai-sdk/langchain — LangChain bridge

LangChain JavaScript:

  • langchain — High-level chains, document loaders, text splitters
  • @langchain/core — Base types, interfaces, LCEL
  • @langchain/openai — OpenAI integration
  • @langchain/community — Community integrations (vector stores, loaders, etc.)
  • @langchain/langgraph — Agent orchestration framework

The Core Difference: Level of Abstraction

This is the key insight that determines which package to use:

ai package is at the API abstraction level: it standardizes how you call LLM APIs, handles streaming, and provides React primitives. It doesn't care about document processing, memory management, or agent orchestration.

langchain is at the application pattern level: it provides pre-built implementations of common AI application patterns — RAG pipelines, conversational memory, tool-using agents, document processing chains. It assumes you'll need these things and builds opinionated scaffolding.

Side-by-Side: Common Tasks

Task 1: Simple chat with streaming

ai package:

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4o'),
  messages: [{ role: 'user', content: 'Hello!' }],
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

langchain:

import { ChatOpenAI } from '@langchain/openai';
import { HumanMessage } from '@langchain/core/messages';

const model = new ChatOpenAI({ model: 'gpt-4o' });
const stream = await model.stream([new HumanMessage('Hello!')]);

for await (const chunk of stream) {
  process.stdout.write(chunk.content as string);
}

Winner for this task: ai — cleaner API, less boilerplate.

Task 2: Structured output extraction

ai package:

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    email: z.string().email(),
  }),
  prompt: 'Extract contact info from: John Smith, 34, john@example.com',
});
// Fully typed: { name: string, age: number, email: string }

langchain:

import { ChatOpenAI } from '@langchain/openai';
import { z } from 'zod';

const model = new ChatOpenAI({ model: 'gpt-4o' });
const structured = model.withStructuredOutput(
  z.object({
    name: z.string(),
    age: z.number(),
    email: z.string().email(),
  })
);

const result = await structured.invoke('Extract contact info from: John Smith, 34, john@example.com');

Winner for this task: Roughly equal — both have excellent Zod-based structured output.

Task 3: RAG (Retrieval-Augmented Generation)

ai package:

// You'd need to implement retrieval yourself, or use a vector store client directly.
// (queryEmbedding and question are assumed to be computed elsewhere.)
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { Pinecone } from '@pinecone-database/pinecone';

const pinecone = new Pinecone();
const index = pinecone.index('my-docs');
const results = await index.query({ vector: queryEmbedding, topK: 5 });
const context = results.matches.map(m => m.metadata.text).join('\n');

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: `Answer based on context:\n${context}\n\nQuestion: ${question}`,
});

langchain:

import { ChatOpenAI } from '@langchain/openai';
import { OpenAIEmbeddings } from '@langchain/openai';
import { PineconeStore } from '@langchain/pinecone';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'Answer based on context: {context}'],
  ['human', '{input}'],
]);

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain: await createStuffDocumentsChain({ llm: new ChatOpenAI(), prompt }),
});

const result = await chain.invoke({ input: question });

Winner for this task: langchain — dramatically less setup code, handles chunking, embedding, and retrieval as first-class concerns.

Task 4: Multi-step agent with tools

ai package:

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  maxSteps: 10,
  tools: {
    search: tool({
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => searchWeb(query),
    }),
    calculate: tool({
      parameters: z.object({ expression: z.string() }),
      execute: async ({ expression }) => evaluate(expression),
    }),
  },
  prompt: 'What is the market cap of Apple divided by Microsoft?',
});

langchain (with LangGraph):

import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';
import { DynamicTool } from '@langchain/core/tools';

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: 'gpt-4o' }),
  tools: [
    new DynamicTool({ name: 'search', description: 'Search the web', func: searchWeb }),
    new DynamicTool({ name: 'calculate', description: 'Evaluate a math expression', func: evaluate }),
  ],
});

const result = await agent.invoke({
  messages: [{ role: 'user', content: 'What is the market cap of Apple divided by Microsoft?' }],
});

Winner for this task: ai for simple agents; langchain/LangGraph for stateful or multi-agent scenarios.

Feature Matrix

| Feature | ai package | langchain |
| --- | --- | --- |
| Provider-agnostic API | Yes (25+ providers) | Yes (200+ integrations) |
| React hooks | Yes (useChat, useCompletion) | No (manual) |
| Edge runtime | Yes | No |
| Streaming | Yes (first-class) | Yes (via callbacks) |
| Structured output | Yes (generateObject) | Yes (withStructuredOutput) |
| RAG primitives | No | Yes (comprehensive) |
| Document loaders | No | Yes (50+) |
| Text splitters | No | Yes |
| Vector store integration | No | Yes (20+ stores) |
| Conversational memory | Basic | Comprehensive |
| Agent orchestration | Basic (maxSteps) | Full (LangGraph) |
| Observability | No | LangSmith |
| Bundle size | Small | Large |

When the Answer Is Both

Many production systems use both packages in different layers:

Frontend (React/Next.js)
  └── ai package (useChat, useCompletion, streaming)

Backend API (Node.js)
  └── LangChain (RAG pipeline, document processing, agent orchestration)
  └── @ai-sdk/langchain (bridge for streaming results to frontend)

The @ai-sdk/langchain package makes this integration seamless:

import { toAIStream } from '@ai-sdk/langchain';
import { NextResponse } from 'next/server';

// In your Next.js API route
export async function POST(req: Request) {
  const { messages } = await req.json();

  // Use LangChain for complex processing
  const langchainStream = await ragChain.stream({ input: messages.at(-1).content });

  // Stream to AI SDK compatible format for useChat hook
  return new NextResponse(toAIStream(langchainStream));
}

Ecosystem & Community

The Vercel AI SDK has grown remarkably fast since its initial release. Vercel's position as the leading Next.js hosting platform means the AI SDK has direct access to millions of Next.js developers. The team ships updates rapidly — support for new model providers typically arrives within days of a model's public release. The provider ecosystem now covers OpenAI, Anthropic, Google, Mistral, Amazon Bedrock, Azure OpenAI, and 20+ more. The GitHub repository is among the most actively maintained AI libraries in the JavaScript ecosystem.

LangChain's JavaScript ecosystem has matured significantly since its initial port from Python. The early reputation for being "just a Python port with rough edges" has been addressed — @langchain/core is a well-designed TypeScript library that doesn't require understanding the Python version to use effectively. LangSmith, the observability platform from LangChain Inc., has become a genuine competitive advantage for production AI applications that need to trace, debug, and evaluate LLM pipelines. The LangChain community is large and active, with regular contributions to the community integrations package.

Real-World Adoption

The Vercel AI SDK is the dominant choice for consumer-facing AI features in web applications. Any company building a chatbot interface, an AI writing assistant, an image generation tool, or any streaming text UI on React has likely standardized on the ai package. Vercel's own AI templates (v0.dev, the AI chat template) use it. The SDK's React hooks are responsible for the consistent, well-implemented streaming chat UI that has become the standard pattern for AI web applications.

LangChain is the dominant choice for backend AI pipelines and enterprise knowledge management systems. Any company building a document Q&A system, a customer support agent with knowledge base access, or an AI workflow automation tool is likely using LangChain for its RAG primitives and agent orchestration. The Python version of LangChain powers thousands of production systems, and teams migrating those systems to JavaScript use langchain to preserve familiar patterns.

The "use both" architecture is common at companies building sophisticated AI products. The frontend chat interface uses the ai package's useChat hook for streaming. The backend uses LangChain to retrieve relevant documents, format prompts, and manage conversation memory. The @ai-sdk/langchain bridge connects them, streaming LangChain's output in the format the AI SDK's frontend hooks expect.

Developer Experience Deep Dive

The ai package's developer experience is optimized for React and Next.js developers. The useChat hook encapsulates everything needed for a streaming chat interface — message state, loading indicators, error handling, and the streaming connection — in a single hook call. The generateObject function with Zod schema inference is the most ergonomic structured output API in any language, AI or otherwise. TypeScript IntelliSense for AI SDK functions is excellent, and the error messages are descriptive.

LangChain's developer experience has a steeper learning curve. The LCEL (LangChain Expression Language) pipe syntax for composing chains is powerful but requires understanding the concept before it feels natural. The class-based API (new ChatOpenAI(), new PineconeStore()) is familiar to enterprise JavaScript developers but feels verbose compared to functional APIs. LangSmith integration, while not required, dramatically improves the debugging experience for complex pipelines — seeing every step of a multi-document RAG pipeline traced and visualized is genuinely valuable.

Both libraries have excellent TypeScript support. The AI SDK ships its own types and uses Zod for schema validation. LangChain's TypeScript types are comprehensive, though some community integrations have weaker typing than the core library. For TypeScript-first development, either is a solid choice.

Performance in Production

Both packages are used in large-scale production applications. The ai package has an edge in:

  • First-response latency (lower time to first token, particularly under load)
  • Cold start time (no heavy initialization)
  • Memory footprint

LangChain has advantages in:

  • Throughput for complex pipelines (built-in batching, parallelism)
  • Cache efficiency (document and result caching)
  • Retry and fallback handling

For edge deployments (Cloudflare Workers, Vercel Edge Functions), the AI SDK is the only viable choice — LangChain's dependency on Node.js APIs means it cannot run in edge runtimes without significant shimming. The AI SDK's streaming implementation is designed specifically for the edge execution model: no persistent connections, no Node.js streams, standard Web Streams API throughout.

Migration Guide

From bare fetch + OpenAI SDK to Vercel AI SDK: This migration is straightforward. Replace openai.chat.completions.create() with streamText() or generateText(). Add useChat to your React component to handle streaming state. The migration typically takes a few hours for a simple chatbot and benefits immediately from the improved streaming UX.

From LangChain Python to LangChain JavaScript: The class names and method signatures are similar, but the JavaScript version is not a 1:1 port. Most Python LangChain patterns have JavaScript equivalents in @langchain/core and langchain. The main differences are in how streaming is handled (JavaScript uses async iterators; Python uses callbacks) and in the ecosystem of integrations (Python has more). The migration is typically straightforward for simple pipelines and requires more adaptation for complex chains.

From LangChain to AI SDK (partial): Many teams migrate their React frontend from manual LangChain streaming to useChat while keeping LangChain on the backend. This is usually the right move — the AI SDK's React integration is more polished than LangChain's frontend story, and the @ai-sdk/langchain bridge makes the integration clean.

Decision Framework

Start with ai (Vercel AI SDK) if:

  • Building a React or Next.js application
  • Your AI features are chat, completion, or structured extraction
  • You want to switch providers without rewriting code
  • Edge runtime is a requirement
  • Team is new to AI development and wants low complexity
  • Bundle size matters

Start with langchain if:

  • Building a RAG pipeline with document ingestion
  • You need conversational memory with built-in persistence
  • Multi-step agent orchestration is required (use LangGraph)
  • You need LangSmith for observability and debugging
  • Migrating from Python LangChain to JavaScript
  • You need integrations with 20+ vector databases or 50+ document loaders

Use both if:

  • Your app has a real-time chat frontend AND a complex backend processing pipeline
  • You want AI SDK's React hooks with LangChain's RAG capabilities
  • You need to iterate on frontend quickly while backend complexity grows

The Verdict

In 2026, the ai package should be your default starting point for JavaScript AI development. It's simpler, faster, has better TypeScript support, and handles the vast majority of AI feature requirements elegantly. Add langchain packages when you hit the ceiling — when you need RAG, complex agents, or observability that ai doesn't provide.

The ecosystem has matured to the point where these two packages are more complementary than competitive, and the bridge package makes integrating them practical.

Compare on PkgPulse

See live download trends and bundle size comparisons for ai vs langchain on PkgPulse.


Evaluating New Providers and Model Selection

One practical consideration that doesn't get enough attention is model selection strategy. The ai package's provider-agnostic design means switching from openai('gpt-4o') to anthropic('claude-opus-4-5') or google('gemini-2.0-flash') is a one-line change. This makes A/B testing different models or falling back to a secondary provider on rate limits trivially easy.
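That fallback pattern needs only a little glue code. Below is a minimal, provider-agnostic sketch; the `Generate` functions are stand-ins for real calls such as `generateText({ model: openai('gpt-4o'), ... })`, and no actual provider is contacted:

```typescript
// Minimal sketch of a provider-fallback helper. Each Generate function stands
// in for a real call like generateText({ model: openai('gpt-4o'), ... });
// the names here are illustrative, not library APIs.
type Generate = (prompt: string) => Promise<string>;

async function generateWithFallback(
  providers: Generate[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const generate of providers) {
    try {
      return await generate(prompt); // first provider that succeeds wins
    } catch (err) {
      lastError = err; // e.g. a 429 rate-limit error; try the next provider
    }
  }
  throw lastError; // all providers failed; surface the last error
}
```

In practice you would catch only retryable errors (429s, timeouts) and rethrow everything else immediately.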

LangChain's model abstraction works similarly in principle — new ChatOpenAI() swaps to new ChatAnthropic() with minimal code change. In practice, LangChain's provider integrations are more varied in quality. The OpenAI and Anthropic integrations are well-maintained. Community integrations for newer providers can lag behind API changes by weeks. The AI SDK's provider packages (@ai-sdk/openai, @ai-sdk/anthropic) are typically updated within hours of model releases.

For teams that want to compare different LLM APIs systematically — comparing Claude, GPT-4o, Gemini, and Mistral across the same prompts — the AI SDK's unified interface makes this significantly easier. See the Gemini API vs Claude API vs Mistral API comparison for how these models stack up on common tasks.

Production Observability Patterns

Debugging production AI applications requires more than console logs. LangChain has a clear advantage here with LangSmith, its first-party tracing and evaluation platform. LangSmith gives you a trace view of every chain step, the input and output at each node, token counts, and latency breakdown. For RAG pipelines where you need to understand why a particular chunk was retrieved — or wasn't — this visibility is invaluable.

The AI SDK doesn't ship with its own observability. However, it integrates with standard JavaScript observability tooling. OpenTelemetry tracing works with the AI SDK using wrapper patterns, and services like Langfuse, Helicone, and Braintrust provide AI-specific observability that works with any provider API. The AI SDK's onChunk callback for streaming responses gives you hooks to emit custom telemetry events. For a full comparison of LLM observability options, see Langfuse vs LangSmith vs Helicone LLM observability 2026.
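The same idea can be sketched without committing to any vendor's API: wrap any async-iterable text stream (for example, the `textStream` from `streamText()` or a LangChain `.stream()` output) and emit a single telemetry event when the stream completes. The wrapper below is illustrative, not an API from either library:

```typescript
// Hypothetical telemetry wrapper for any async-iterable text stream.
// It passes chunks through unchanged and reports stats when the stream ends.
interface StreamStats {
  chunks: number;
  chars: number;
  msToFirstChunk: number | null; // time-to-first-token proxy
}

async function* withTelemetry(
  stream: AsyncIterable<string>,
  onDone: (stats: StreamStats) => void
): AsyncGenerator<string> {
  const start = Date.now();
  const stats: StreamStats = { chunks: 0, chars: 0, msToFirstChunk: null };
  for await (const chunk of stream) {
    if (stats.msToFirstChunk === null) stats.msToFirstChunk = Date.now() - start;
    stats.chunks += 1;
    stats.chars += chunk.length;
    yield chunk; // forward the chunk to the consumer unchanged
  }
  onDone(stats); // one telemetry event per completed stream
}
```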

Token Management and Cost Control

Token counting and cost management are practical concerns for any production AI application. The AI SDK exposes token usage data in both generateText() and streamText() responses via usage.promptTokens and usage.completionTokens. This makes it straightforward to log costs, implement user-level quotas, or abort requests that exceed budget thresholds.
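Turning that usage object into a dollar figure is straightforward. The sketch below assumes the `promptTokens`/`completionTokens` field names mentioned above; the per-million-token prices are placeholders, not real model rates:

```typescript
// Sketch: convert the usage object returned by generateText()/streamText()
// into a dollar estimate. Pricing values are illustrative placeholders.
interface Usage {
  promptTokens: number;
  completionTokens: number;
}

interface Pricing {
  inputPerMTok: number;  // USD per 1M prompt tokens
  outputPerMTok: number; // USD per 1M completion tokens
}

function estimateCostUSD(usage: Usage, pricing: Pricing): number {
  return (
    (usage.promptTokens / 1_000_000) * pricing.inputPerMTok +
    (usage.completionTokens / 1_000_000) * pricing.outputPerMTok
  );
}
```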

LangChain provides similar token visibility with its getTokens() utility and through LangSmith's cost tracking. LangSmith's per-run cost breakdown is particularly useful for RAG pipelines where prompt length varies significantly based on retrieved context length. Seeing that a specific RAG chain costs $0.04 per query rather than $0.004 — because a document retriever is returning too many tokens — is the kind of insight that immediately changes production behavior.

For applications serving many users with varied AI feature usage, both libraries integrate with the same billing patterns: log usage per user, aggregate by time period, apply rate limits based on token consumption. The AI SDK's cleaner per-request usage data makes this slightly simpler to implement.
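The quota side of that pattern can be sketched as a small in-memory tracker. This is a hypothetical illustration; a production system would back it with Redis or a database and reset counts per billing window:

```typescript
// Hypothetical in-memory per-user token quota tracker.
class TokenQuota {
  private used = new Map<string, number>();
  private limit: number;

  constructor(limit: number) {
    this.limit = limit;
  }

  // Record usage and report whether the user is still within quota.
  record(userId: string, tokens: number): boolean {
    const total = (this.used.get(userId) ?? 0) + tokens;
    this.used.set(userId, total);
    return total <= this.limit;
  }

  remaining(userId: string): number {
    return Math.max(0, this.limit - (this.used.get(userId) ?? 0));
  }
}
```

A request handler would call `record()` with the usage data from each completed generation and reject further requests once it returns false.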

Streaming Architecture Considerations

The way streaming is handled architecturally differs in ways that matter for production deployments. The AI SDK is built around Web Streams, the same standard used by browsers, edge runtimes, and Bun. This means AI SDK streaming works correctly in Cloudflare Workers, Vercel Edge Functions, and any environment that supports the ReadableStream API. The streamText() result exposes its output as proper Web ReadableStreams (via properties like textStream) that plug directly into Response objects.

LangChain's streaming uses Node.js-style async iterators and callback patterns that predate the Web Streams standard. Most LangChain chains can be made to stream, but getting the output into a Web Streams-compatible format for an HTTP response requires additional adapter code — which is exactly what @ai-sdk/langchain's toAIStream() bridge provides. For serverless functions that need to stream LangChain output to a browser, this bridge is essential.
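For intuition, here is roughly what such an adapter does: pull chunks from the async iterator and enqueue them into a Web `ReadableStream`. This is a hand-rolled illustration of the concept, not the actual `@ai-sdk/langchain` implementation:

```typescript
// Sketch: convert a LangChain-style async iterator of text chunks into a
// Web ReadableStream suitable for an HTTP Response body.
function iteratorToReadableStream(
  iterable: AsyncIterable<string>
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  const iterator = iterable[Symbol.asyncIterator]();
  return new ReadableStream({
    // pull() is called whenever the consumer is ready for more data
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(encoder.encode(value));
      }
    },
    // If the consumer disconnects, let the iterator clean up
    cancel() {
      iterator.return?.();
    },
  });
}
```

The resulting stream drops straight into a `Response`, e.g. `new Response(iteratorToReadableStream(stream))`.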

The practical implication: if your AI application needs to stream responses in environments beyond Node.js — Next.js App Router routes, Cloudflare Workers, Deno Deploy — the AI SDK's streaming architecture is more portable.

Related: Best AI and LLM libraries for JavaScript in 2026 · Hono RPC vs tRPC vs ts-rest type-safe APIs · Best Next.js auth solutions in 2026
