Best Vector Database Clients for JavaScript 2026
TL;DR
pgvector (via Prisma/Drizzle) wins if you already have Postgres. Pinecone wins if you want zero infrastructure and fast time-to-production. Qdrant wins for self-hosted performance at scale. In 2026, vector search has become infrastructure as common as full-text search — your choice comes down to whether you want a managed service (Pinecone, Weaviate Cloud) or self-hosted (Qdrant, pgvector). The JavaScript SDKs are all solid; the differentiator is the database itself.
Key Takeaways
- pgvector: free, runs in your existing Postgres, ~500K npm installs (via pg/drizzle), HNSW indexing
- Pinecone: managed, zero-ops, $0 free tier (1 index, 100K vectors), best managed DX
- Qdrant: best performance/price ratio for self-hosted, strong filtering, @qdrant/js-client-rest ~50K downloads
- Weaviate: GraphQL-first, built-in vectorization, weaviate-ts-client ~80K downloads
- For RAG: pgvector via Prisma is sufficient up to ~10M vectors; Qdrant/Pinecone beyond that
Why Vector Databases Matter in 2026
The explosion of LLM-powered applications has made vector search a first-class infrastructure concern. Whether you're building a RAG (Retrieval Augmented Generation) pipeline, a semantic search feature, a recommendation engine, or a knowledge base, you need a way to store high-dimensional embedding vectors and query them by similarity efficiently.
Traditional relational databases are not designed for this. A brute-force similarity scan over millions of 1536-dimensional OpenAI embeddings is prohibitively slow. Vector databases use specialized index structures — HNSW (Hierarchical Navigable Small World graphs), IVF (Inverted File Index), or product quantization — to find approximate nearest neighbors in milliseconds.
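To make "similarity" concrete: cosine similarity measures the angle between two embedding vectors, and a brute-force search scores every stored vector against the query. A minimal TypeScript sketch (the function names are illustrative) shows both the metric and why it doesn't scale — this is O(n·d) per query, which is exactly what HNSW and IVF indexes exist to avoid:

```typescript
// Cosine similarity: dot product divided by the product of magnitudes.
// 1 = same direction, 0 = orthogonal, -1 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute force: score every vector, sort, take the top k.
// Fine for a few thousand vectors; prohibitive for millions.
function bruteForceTopK(query: number[], corpus: number[][], k: number): number[] {
  return corpus
    .map((vec, i) => ({ i, score: cosineSimilarity(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(r => r.i);
}
```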
For JavaScript developers in 2026, the four main options are: pgvector (an extension for your existing Postgres), Pinecone (a fully managed cloud service), Qdrant (an open-source engine you self-host), and Weaviate (a semantic search engine with built-in vectorization). Each occupies a different point in the tradeoff space between cost, operational complexity, feature richness, and performance.
This guide covers each option in depth — the JavaScript SDK, practical code examples including storing and querying OpenAI embeddings, and honest guidance on which fits which situation.
pgvector (Postgres Extension)
Best for: teams already on Postgres, modest scale (<10M vectors), cost optimization
pgvector is a Postgres extension that adds a native vector column type and similarity search operators. It's not a separate database — it runs inside your existing Postgres instance. If you're on Supabase, Neon, or any modern managed Postgres provider, pgvector is already available. You just need to enable it.
The appeal is obvious: zero additional infrastructure, SQL filtering power over your existing data, JOIN support with your application tables, and all the operational tooling you already have (backups, monitoring, migrations). The tradeoff is that for very large collections or highly concurrent workloads, a dedicated vector engine will outperform it.
HNSW (Hierarchical Navigable Small World) indexing was added to pgvector in version 0.5.0 and is now the recommended index type for approximate nearest neighbor search. It builds a layered graph structure that dramatically speeds up queries compared to the default IVF index, at the cost of higher memory usage and slower build times.
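For reference, the raw SQL form of that index, assuming a documents table with an embedding vector column, looks like this (m and ef_construction are shown at pgvector's defaults):

```sql
-- HNSW index for cosine distance; higher m = better recall, more memory
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops)
WITH (m = 16, ef_construction = 64);
```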
# Enable extension in Postgres:
CREATE EXTENSION IF NOT EXISTS vector;
# Or via Supabase/Neon (already available — just run the CREATE EXTENSION)
// With Drizzle ORM (recommended):
// npm install drizzle-orm pg @types/pg
import { pgTable, text, jsonb, index, vector, timestamp } from 'drizzle-orm/pg-core';
import { sql } from 'drizzle-orm';
export const documents = pgTable('documents', {
  id: text('id').primaryKey().default(sql`gen_random_uuid()`),
  content: text('content').notNull(),
  embedding: vector('embedding', { dimensions: 1536 }), // OpenAI text-embedding-3-small
  metadata: jsonb('metadata'),
  createdAt: timestamp('created_at').defaultNow(),
}, (table) => ({
  // HNSW index for fast approximate nearest-neighbor search:
  embeddingIndex: index('embedding_hnsw_idx').using(
    'hnsw',
    table.embedding.op('vector_cosine_ops'),
  ),
}));
// Insert with embedding from OpenAI:
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';
async function insertDocument(content: string, metadata: Record<string, unknown>) {
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: content,
});
return db.insert(documents).values({
content,
embedding,
metadata,
});
}
// Similarity search with Drizzle using cosine distance:
import { cosineDistance } from 'drizzle-orm';

async function semanticSearch(query: string, limit = 10) {
  const { embedding: queryEmbedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: query,
  });
  // cosine distance (lower = more similar); similarity = 1 - distance
  const distance = cosineDistance(documents.embedding, queryEmbedding);
  return db
    .select({
      id: documents.id,
      content: documents.content,
      similarity: sql<number>`1 - (${distance})`,
    })
    .from(documents)
    .orderBy(distance)
    .limit(limit);
}
With Prisma, vector support requires raw SQL for the similarity query since Prisma doesn't natively generate the <=> cosine distance operator syntax. You define the column via Unsupported("vector(1536)") in your schema and use $queryRaw for similarity searches. The Drizzle approach is cleaner for vector workloads.
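A sketch of that Prisma pattern, assuming a `Document` model whose embedding column is declared as `Unsupported("vector(1536)")` in schema.prisma. The `searchSimilar` and `toVectorLiteral` helpers are illustrative, not Prisma API; the client is typed structurally here just to keep the sketch self-contained:

```typescript
type Row = { id: string; content: string; similarity: number };

// pgvector accepts '[0.1,0.2,...]'-style literals cast with ::vector;
// JSON.stringify of a number array produces exactly that shape.
function toVectorLiteral(embedding: number[]): string {
  return JSON.stringify(embedding);
}

// `prisma` is your instantiated PrismaClient.
async function searchSimilar(
  prisma: { $queryRawUnsafe: (query: string, ...values: unknown[]) => Promise<Row[]> },
  queryEmbedding: number[],
  limit = 10,
): Promise<Row[]> {
  const vec = toVectorLiteral(queryEmbedding);
  return prisma.$queryRawUnsafe(
    `SELECT id, content, 1 - (embedding <=> $1::vector) AS similarity
     FROM "Document"
     ORDER BY embedding <=> $1::vector
     LIMIT $2`,
    vec,
    limit,
  );
}
```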
For hybrid search (combining vector similarity with keyword relevance), pgvector supports combining <=> distances with Postgres full-text search ts_rank using a weighted scoring formula. This is manual but fully possible within SQL. It won't match the native hybrid search implementations in Qdrant or Weaviate, but for most RAG use cases it's sufficient.
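The weighted-SQL formula works, but vector distances and ts_rank scores live on different scales, which makes the weights fiddly. An alternative is to run the vector query and the full-text query separately and merge ranks application-side with reciprocal rank fusion (RRF), a standard rank-merging technique. A minimal sketch over ordered lists of document ids (the `rrfMerge` helper is illustrative):

```typescript
// Reciprocal Rank Fusion: each list contributes 1 / (k + rank) per document.
// k (commonly 60) damps the dominance of top-ranked items.
function rrfMerge(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// Usage: merge a vector-search ranking with a full-text ranking, keep top 10:
// const merged = rrfMerge([vectorResultIds, keywordResultIds]).slice(0, 10);
```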
Pinecone
Best for: managed zero-ops vector search, rapid prototyping, <100M vectors
Pinecone is the most developer-friendly fully managed vector database. You create an index, upsert vectors with metadata, and query — no infrastructure to provision, no Docker to run, no tuning required. The free tier is genuinely useful: one index, up to 100,000 vectors, and enough throughput for development and small production workloads.
The @pinecone-database/pinecone SDK is well-designed. Type safety is solid, the API surface is small, and the documentation is excellent. Pinecone's serverless architecture (launched 2024) means you're only charged for storage and operations rather than always-on compute, which makes the economics competitive for workloads with irregular query patterns.
Metadata filtering is a first-class feature. You can filter by exact match, range, or set membership before the similarity search executes, which keeps your result set relevant without post-filtering.
npm install @pinecone-database/pinecone
import { Pinecone } from '@pinecone-database/pinecone';
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
// Create an index (serverless, us-east-1):
await pc.createIndex({
name: 'documents',
dimension: 1536,
metric: 'cosine',
spec: {
serverless: {
cloud: 'aws',
region: 'us-east-1',
},
},
});
// Connect to existing index:
const index = pc.index('documents');
// Upsert vectors with metadata:
async function upsertDocuments(docs: Array<{ id: string; content: string; category: string }>) {
  // generateEmbeddings: your batch embedding helper (e.g. embedMany from the AI SDK)
  const embeddings = await generateEmbeddings(docs.map(d => d.content));
  const records = docs.map((doc, i) => ({
    id: doc.id,
    values: embeddings[i],
    metadata: {
      content: doc.content,
      category: doc.category,
      timestamp: Date.now(),
    },
  }));
  // Pinecone recommends upserting in batches of around 100:
  for (let i = 0; i < records.length; i += 100) {
    await index.upsert(records.slice(i, i + 100));
  }
}
// Query with metadata filtering:
async function search(query: string, topK = 10, category?: string) {
const queryEmbedding = await generateEmbedding(query);
const result = await index.query({
vector: queryEmbedding,
topK,
includeMetadata: true,
// Metadata filter — only return results from a specific category:
filter: category ? { category: { $eq: category } } : undefined,
});
return result.matches.map(m => ({
id: m.id,
score: m.score,
content: m.metadata?.content as string,
category: m.metadata?.category as string,
}));
}
Pinecone's main weakness is that you're locked into a managed service with pricing that scales linearly. For teams that expect to store billions of vectors or run extremely high query volumes, self-hosted alternatives become cost-competitive. But for teams that want to ship fast without managing infrastructure, Pinecone removes all the friction.
Qdrant
Best for: self-hosted, high performance, complex filtering, >10M vectors
Qdrant is an open-source vector database written in Rust, which gives it excellent performance characteristics and low memory overhead. The @qdrant/js-client-rest package provides a clean TypeScript interface to the REST API. There's also a gRPC client (@qdrant/js-client-grpc) for lower latency in high-throughput scenarios.
The standout feature is Qdrant's filtering system. Where Pinecone offers basic metadata filters, Qdrant's payload filter system supports arbitrarily nested boolean conditions, range queries, geo-distance filters, and full-text matching on payload fields. For applications where filtering is complex — "find documents similar to this query that are tagged with 'technical', created after 2025-01-01, by users in this list" — Qdrant handles it elegantly.
Qdrant also has native hybrid search support via sparse vectors (SPLADE model output), which you can combine with dense vector search for better recall.
# Start Qdrant locally with Docker:
docker run -p 6333:6333 -p 6334:6334 \
-v $(pwd)/qdrant_storage:/qdrant/storage:z \
qdrant/qdrant
npm install @qdrant/js-client-rest
import { QdrantClient } from '@qdrant/js-client-rest';
const client = new QdrantClient({ url: 'http://localhost:6333' });
// Create collection with HNSW config:
await client.createCollection('documents', {
vectors: {
size: 1536,
distance: 'Cosine',
on_disk: true, // Offload to disk for large collections
},
optimizers_config: {
memmap_threshold: 100000, // Memory-map segments with >100K vectors
},
hnsw_config: {
m: 16, // Number of edges per node (higher = better recall, more memory)
ef_construct: 100, // Construction time accuracy
},
});
// Create payload index for fast filtering:
await client.createPayloadIndex('documents', {
field_name: 'category',
field_schema: 'keyword',
});
// Upsert with rich payload:
await client.upsert('documents', {
wait: true,
points: docs.map((doc, i) => ({
id: i, // Must be integer or UUID
vector: embeddings[i],
payload: {
content: doc.content,
category: doc.category,
date: doc.date,
tags: doc.tags,
authorId: doc.authorId,
},
})),
});
// Search with complex nested filter:
const result = await client.search('documents', {
vector: queryEmbedding,
limit: 10,
score_threshold: 0.7,
filter: {
must: [
{ key: 'category', match: { value: 'technical' } },
{ key: 'date', range: { gte: '2025-01-01' } },
],
should: [
{ key: 'tags', match: { any: ['javascript', 'typescript'] } },
],
},
with_payload: true,
});
Qdrant Cloud (the managed offering) is also available if you want the performance and feature set without running your own infrastructure. The pricing is competitive with Pinecone for equivalent scale.
Weaviate
Best for: semantic search over large document collections, built-in vectorization, GraphQL power users
Weaviate is the most opinionated of the four options. It ships with a module system that handles vectorization automatically: you configure a vectorizer (OpenAI, Cohere, HuggingFace, or local models), and Weaviate calls the embedding API for you when you insert or query data. You never explicitly manage embedding vectors in application code.
The trade-off is complexity. Weaviate's schema is more involved to set up, the GraphQL query language has a learning curve, and the TypeScript SDK has historically been harder to use than Pinecone's or Qdrant's. The newer weaviate-client v3 SDK (replacing weaviate-ts-client) is a significant improvement.
Weaviate also supports hybrid search (BM25 keyword + vector) natively, making it well-suited for search applications where users might query with short keywords rather than natural language.
# Start Weaviate locally:
docker run -p 8080:8080 -p 50051:50051 \
-e ENABLE_MODULES='text2vec-openai' \
-e OPENAI_APIKEY=$OPENAI_API_KEY \
cr.weaviate.io/semitechnologies/weaviate:latest
npm install weaviate-client
import weaviate from 'weaviate-client';
const client = await weaviate.connectToLocal();
// Create a class with automatic OpenAI vectorization:
await client.collections.create({
name: 'Document',
vectorizers: weaviate.configure.vectorizer.text2VecOpenAI({
model: 'text-embedding-3-small',
}),
properties: [
{ name: 'content', dataType: weaviate.configure.dataType.TEXT },
{ name: 'category', dataType: weaviate.configure.dataType.TEXT },
{ name: 'createdAt', dataType: weaviate.configure.dataType.DATE },
],
});
const documents = client.collections.get('Document');
// Insert — Weaviate calls OpenAI automatically:
await documents.data.insertMany(docs.map(doc => ({
content: doc.content,
category: doc.category,
createdAt: doc.createdAt,
})));
// Semantic search (nearText — Weaviate generates query embedding):
const result = await documents.query.nearText(['machine learning tutorial'], {
limit: 10,
filters: documents.filter.byProperty('category').equal('technical'),
returnProperties: ['content', 'category'],
});
// Hybrid search (BM25 keyword + vector):
const hybridResult = await documents.query.hybrid('react hooks performance', {
limit: 10,
alpha: 0.75, // 0 = pure BM25, 1 = pure vector
});
Weaviate's GraphQL API remains available for more complex queries, but the new v3 SDK largely abstracts it away. The built-in vectorization is the feature most worth considering: if you want semantic search without managing an embeddings pipeline, Weaviate handles it in one place.
Package Health Comparison
| | pgvector | Pinecone | Qdrant | Weaviate |
|---|---|---|---|---|
| npm package | pg / drizzle-orm | @pinecone-database/pinecone | @qdrant/js-client-rest | weaviate-client |
| Weekly downloads | ~500K (pg) | ~200K | ~50K | ~80K |
| Hosting | Self/Supabase/Neon | Managed only | Self or Cloud | Self or Cloud |
| Free tier | Unlimited (Postgres) | 1 index, 100K vectors | Self-hosted free | Self-hosted free |
| Scale limit | ~100M (practical) | Billions | Billions | Billions |
| Filtering | Full SQL | Metadata filters | Complex nested | GraphQL / filters |
| Hybrid search | Manual (SQL) | Via sparse vectors | Yes (native) | Yes (native) |
| Multi-vector | No | Yes (namespaces) | Yes (named vectors) | Yes |
| Auto-vectorize | No | No | No | Yes (modules) |
| SDK TypeScript quality | Excellent (Drizzle) | Good | Good | Improving (v3) |
| Best fit | Existing Postgres | Fast time-to-prod | Self-hosted scale | Built-in embedding |
When to Choose
Existing Postgres → pgvector
If your application already runs on Postgres — whether Supabase, Neon, Railway, or your own instance — pgvector is the zero-friction path. You get vector search without adding another database, another service to monitor, or another set of credentials to manage. For collections under about 10 million vectors with moderate query throughput, pgvector with HNSW indexing performs well and SQL filtering over your existing tables is a genuine advantage. Drizzle ORM's vector type support makes the developer experience particularly smooth.
Fastest time to production → Pinecone
If you're building an MVP, a demo, or a product where you want to ship in days rather than weeks, Pinecone removes every operational decision. Sign up, get an API key, create an index, start upserting. The free tier is large enough for real development. The metadata filtering covers the majority of production query patterns. For teams without dedicated infrastructure expertise, Pinecone is the pragmatic choice.
Self-hosted with complex filtering → Qdrant
When you need control over where your data lives, want to avoid per-query vendor costs at scale, or have filtering requirements that go beyond simple metadata equality checks, Qdrant is the best self-hosted option. Its Rust core gives it excellent performance per dollar of compute, and its payload filter system is genuinely powerful for applications with multi-dimensional filtering needs. Qdrant Cloud is also an option if you want managed Qdrant.
Semantic search with automatic vectorization → Weaviate
When you want to minimize embedding pipeline management, Weaviate's module system handles it. You configure which embedding model to use, insert raw text objects, and Weaviate calls the embedding API automatically. Combined with native hybrid search, this makes Weaviate compelling for document search applications where the emphasis is on search quality rather than infrastructure tuning.
Choosing an Embedding Model
The vector database client is only half the equation. You also need to choose an embedding model that converts text into the numerical vectors the database stores. The model determines vector dimensionality (which must match your index configuration) and heavily influences retrieval quality.
OpenAI text-embedding-3-small (1536 dimensions) is the default for most JavaScript/Node.js projects. It's fast, cost-effective ($0.02 per million tokens), and well-tested. The newer text-embedding-3-large (3072 dimensions) scores better on retrieval benchmarks but costs 6.5x more ($0.13 per million tokens) and doubles index storage.
Cohere embed-v3 (1024 dimensions) is competitive with OpenAI on retrieval benchmarks and supports multilingual text natively — useful for applications serving non-English content. Cohere also provides an input_type parameter for distinguishing between search queries and documents, which improves relevance.
Local models via Ollama (nomic-embed-text, 768 dimensions) are worth considering for privacy-sensitive data or cost-sensitive high-volume workloads. Running embeddings locally eliminates per-token API costs at the expense of slower throughput and infrastructure requirements.
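A minimal sketch of calling Ollama's local embeddings endpoint with plain fetch, assuming Ollama is running on its default port and nomic-embed-text has been pulled (`embedLocal` is an illustrative helper, not an Ollama SDK function):

```typescript
// Hypothetical helper; requires a local Ollama daemon on port 11434
// with `ollama pull nomic-embed-text` already done.
async function embedLocal(text: string, model = 'nomic-embed-text'): Promise<number[]> {
  const res = await fetch('http://localhost:11434/api/embeddings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt: text }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const { embedding } = (await res.json()) as { embedding: number[] };
  return embedding; // 768 dimensions for nomic-embed-text
}
```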
For most teams starting out: use OpenAI text-embedding-3-small with whichever vector database fits your infrastructure preferences. The embedding model can be swapped later — though you'll need to re-embed your entire corpus when you switch.
Indexing Strategies and Performance Tuning
All four vector databases use approximate nearest neighbor algorithms because exact search over millions of high-dimensional vectors is computationally prohibitive. Understanding the tradeoff between recall and search speed helps with production tuning.
HNSW (used by pgvector, Qdrant, and optionally Weaviate) builds a layered graph. The m parameter controls how many edges each node has — higher values improve recall at the cost of memory. The ef_construct parameter controls build-time accuracy. Default values work for most use cases; tune ef at query time (not build time) if you need better recall without rebuilding the index.
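In pgvector, for example, that query-time knob is the hnsw.ef_search setting (default 40). Raising it trades latency for recall with no index rebuild:

```sql
SET hnsw.ef_search = 100;          -- session-wide

BEGIN;
SET LOCAL hnsw.ef_search = 200;    -- scoped to this transaction only
-- run your similarity query here
COMMIT;
```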
IVF (Inverted File Index, available in pgvector) partitions vectors into clusters and searches only the nearest clusters. It uses less memory than HNSW but has lower recall for the same number of comparisons. IVF is more appropriate when memory is severely constrained.
Pinecone handles all index management internally. You configure the metric (cosine, dotproduct, euclidean) at index creation and Pinecone tunes the rest automatically.
For RAG workloads with collections under 1 million vectors, default HNSW settings with cosine similarity will give excellent performance. Tuning becomes important when you're optimizing for the last 2-3% of recall at high query volumes.
RAG Implementation Patterns
Regardless of which vector database you choose, the RAG (Retrieval Augmented Generation) implementation pattern is similar. The differences are in the specific SDK calls.
The standard flow is: chunk documents into passages of 200–512 tokens, generate embeddings for each chunk, store them in the vector DB, then at query time embed the user's question and retrieve the top-K most similar chunks to include as context in the LLM prompt.
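The chunking step is the part most often hand-rolled. A naive word-window chunker with overlap captures the shape of it (`chunkText` is an illustrative helper; production pipelines count tokens with a real tokenizer rather than words, but the windowing logic is the same):

```typescript
// Fixed-size word windows with overlap, so context isn't cut mid-thought
// at chunk boundaries.
function chunkText(text: string, chunkSize = 200, overlap = 40): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += chunkSize - overlap) {
    chunks.push(words.slice(start, start + chunkSize).join(' '));
    if (start + chunkSize >= words.length) break; // last window reached the end
  }
  return chunks;
}
```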
pgvector is well-supported by LangChain.js and LlamaIndex — both have built-in vector store adapters. Pinecone and Qdrant also have official integrations. If you're using Vercel AI SDK, the ai package's embed() function works with any of these storage backends.
For production RAG systems, the choice of vector database is often less important than chunking strategy, embedding model quality, and prompt engineering. Start with whichever database fits your existing infrastructure.