
Best WebSocket Libraries for Node.js in 2026

PkgPulse Team

TL;DR

ws for raw performance; Socket.io for full-featured real-time apps. ws (~80M weekly downloads) is the minimal, fast WebSocket implementation — Node.js's de facto standard. Socket.io (~10M downloads) wraps ws with rooms, namespaces, auto-reconnect, and fallback to HTTP long-polling. uWebSockets.js is the raw performance king — handles 10x the connections of ws with lower latency. Pick based on whether you need Socket.io's features or raw throughput.

Key Takeaways

  • ws: ~80M weekly downloads — minimal, fast, what most WebSocket libs are built on
  • Socket.io: ~10M downloads — rooms, namespaces, auto-reconnect, HTTP fallback
  • uWebSockets.js — highest throughput, C++ bindings, handles roughly 10x the connections of ws
  • Socket.io v4+ — HTTP/2, WebTransport support, CORS built-in
  • Ably/Pusher — managed WebSocket services (no infra to maintain)

The WebSocket Protocol Landscape in 2026

WebSockets provide full-duplex communication between browser and server over a persistent TCP connection. Unlike HTTP request-response, WebSockets allow the server to push data to clients at any time without the client polling. This makes them the right transport for chat applications, live dashboards, collaborative editing, and multiplayer games.

The library question in 2026 is about which abstraction layer you need, not which protocol. All the options below implement the same WebSocket protocol — they differ in what they build on top of it.

Raw WebSockets (ws, uWebSockets.js) implement the protocol and nothing else. You get onopen, onmessage, onclose and onerror. Room management, reconnection logic, presence tracking, and message acknowledgment are your responsibility to build.

Socket.io builds those abstractions on top of ws. Rooms, namespaces, auto-reconnect, event-based messaging, and HTTP long-polling fallback come included. The tradeoff is that Socket.io requires both client and server to use the Socket.io library — you can't connect a raw WebSocket client to a Socket.io server.

Managed services (Ably, Pusher, Liveblocks) move the WebSocket infrastructure to a hosted service. Your application connects via HTTP to trigger events and subscribes via the provider's SDK. These are the right choice for serverless deployments where maintaining persistent WebSocket connections isn't possible.


ws (Raw, Fast)

// ws — WebSocket server
import { WebSocketServer, WebSocket } from 'ws';
import http from 'http';

const server = http.createServer();
const wss = new WebSocketServer({ server });

wss.on('connection', (ws, req) => {
  const clientId = crypto.randomUUID();  // global Web Crypto (Node 19+)
  console.log(`Client connected: ${clientId}`);

  ws.on('message', (data) => {
    const message = JSON.parse(data.toString());

    // Broadcast to all connected clients
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify({
          from: clientId,
          ...message,
        }));
      }
    });
  });

  ws.on('close', () => {
    console.log(`Client disconnected: ${clientId}`);
  });

  ws.on('error', (error) => {
    console.error(`WebSocket error from ${clientId}:`, error);
  });

  ws.send(JSON.stringify({ type: 'connected', clientId }));
});

server.listen(8080, () => console.log('WS server on :8080'));

// ws — client with reconnection
class ReconnectingWebSocket {
  private ws: WebSocket | null = null;
  private reconnectAttempts = 0;
  private maxReconnectDelay = 30000;

  constructor(private url: string) {
    this.connect();
  }

  private connect() {
    this.ws = new WebSocket(this.url);

    this.ws.onopen = () => {
      console.log('Connected');
      this.reconnectAttempts = 0;
    };

    this.ws.onclose = () => {
      const delay = Math.min(
        1000 * Math.pow(2, this.reconnectAttempts++),
        this.maxReconnectDelay
      );
      console.log(`Disconnected. Reconnecting in ${delay}ms...`);
      setTimeout(() => this.connect(), delay);
    };

    this.ws.onmessage = (event) => {
      this.onMessage(JSON.parse(event.data));
    };
  }

  send(data: object) {
    if (this.ws?.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify(data));
    }
  }

  onMessage(data: any) {
    // Override in subclass
  }
}

ws is a foundational package — it's installed as a dependency in hundreds of other packages because it's the reliable, minimal WebSocket implementation for Node.js. The 80M weekly downloads include all the indirect installs through Socket.io, Vitest, and other tools that use WebSockets internally.

Using ws directly makes sense when you're building a protocol on top of WebSockets, need maximum performance control, or want a minimal dependency footprint. The raw WebSocketServer gives you everything you need to build custom real-time protocols without Socket.io's conventions.

The reconnection example above illustrates why Socket.io's built-in reconnection is valuable. Implementing correct exponential backoff with jitter, handling reconnection during active sends, and managing message queuing during disconnects is non-trivial. Socket.io handles this correctly; ws leaves it to you.
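To make the jitter point concrete, here is a minimal "full jitter" delay calculator — a standalone sketch (the function name and defaults are illustrative, not from any library):

```javascript
// "Full jitter" exponential backoff: pick a random delay in
// [0, min(cap, base * 2^attempt)]. Randomizing the whole range spreads
// reconnects out, so a server restart doesn't trigger a synchronized
// thundering herd of clients all retrying at the same instant.
function backoffDelay(attempt, base = 1000, cap = 30000) {
  const ceiling = Math.min(cap, base * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
}

// attempt 0 → 0..1s, attempt 3 → 0..8s, attempt 10+ → capped at 0..30s
```

The ReconnectingWebSocket example above uses deterministic exponential delays; swapping in a jittered delay like this is the usual production refinement.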

ws integrates cleanly with Node.js's HTTP server, Express, Fastify, and Hono. The wss.handleUpgrade() pattern lets you share a port between HTTP API and WebSocket connections — important for deployments where you can't expose multiple ports.


Socket.io (Full-Featured)

// Socket.io — server with rooms and namespaces
import { Server } from 'socket.io';
import { createServer } from 'http';
import express from 'express';

const app = express();
const httpServer = createServer(app);

const io = new Server(httpServer, {
  cors: {
    origin: 'https://app.example.com',
    methods: ['GET', 'POST'],
  },
  transports: ['websocket', 'polling'],  // Automatic fallback
});

// Namespace: /chat
const chatNS = io.of('/chat');

chatNS.on('connection', (socket) => {
  console.log(`User connected: ${socket.id}`);

  socket.on('join-room', (roomId) => {
    socket.join(roomId);
    socket.to(roomId).emit('user-joined', { userId: socket.id });
  });

  socket.on('send-message', ({ roomId, message }) => {
    socket.to(roomId).emit('new-message', {
      from: socket.id,
      message,
      timestamp: Date.now(),
    });
  });

  // Acknowledge pattern (request/response over WebSocket)
  socket.on('ping', (data, callback) => {
    callback({ pong: true, serverTime: Date.now() });
  });

  socket.on('disconnect', (reason) => {
    console.log(`User ${socket.id} disconnected: ${reason}`);
  });
});

httpServer.listen(3000);

// Socket.io — client (browser)
import { io } from 'socket.io-client';

const socket = io('https://api.example.com/chat', {
  reconnectionDelayMax: 10000,
  auth: {
    token: localStorage.getItem('authToken'),
  },
});

socket.on('connect', () => {
  console.log('Connected:', socket.id);
  socket.emit('join-room', 'general');
});

socket.on('new-message', ({ from, message, timestamp }) => {
  renderMessage({ from, message, timestamp });
});

// Send with acknowledgment
socket.emit('ping', { test: true }, (response) => {
  console.log('Server response:', response);
});

// Socket.io — middleware (authentication)
io.use(async (socket, next) => {
  const token = socket.handshake.auth.token;
  try {
    const user = await verifyJWT(token);
    socket.data.user = user;
    next();
  } catch (err) {
    next(new Error('Authentication failed'));
  }
});

Socket.io's rooms abstraction is the reason most chat and collaborative applications use it rather than raw WebSockets. A "room" is a named group that sockets can join and leave dynamically. Broadcasting to a room with io.to(roomId).emit(...) reaches all current room members regardless of which server they're connected to (when using the Redis adapter). This is the fundamental primitive for implementing channels, groups, and shared document spaces.

The acknowledgment pattern (socket.emit('ping', data, callback)) is a Socket.io feature with no raw-WebSocket equivalent, and it suits request-response interactions better than pure pub/sub. The client's callback fires when the server handler invokes its callback argument, giving request-response semantics over WebSocket without a separate HTTP round trip.

HTTP long-polling fallback is Socket.io's hidden value for enterprise environments. Some corporate networks block WebSocket connections. Socket.io's automatic fallback to HTTP long-polling (controlled by the transports option; by default Socket.io starts on polling and upgrades to WebSocket) ensures the application works even in restricted networks. Users on those networks see slightly higher latency, but functionality is preserved.


uWebSockets.js (High Performance)

// uWebSockets.js — 10x throughput of ws, C++ bindings
import { App, SHARED_COMPRESSOR } from 'uWebSockets.js';

const app = App({});

app.ws('/chat/:room', {
  compression: SHARED_COMPRESSOR,
  maxPayloadLength: 16 * 1024,
  idleTimeout: 60,

  // Route parameters are not copied into user data automatically —
  // an upgrade handler must pass them through res.upgrade().
  upgrade(res, req, context) {
    res.upgrade(
      { room: req.getParameter(0) || 'general' },  // capture :room
      req.getHeader('sec-websocket-key'),
      req.getHeader('sec-websocket-protocol'),
      req.getHeader('sec-websocket-extensions'),
      context
    );
  },

  open(ws) {
    ws.subscribe(ws.getUserData().room);
  },

  message(ws, message, isBinary) {
    const room = ws.getUserData().room;
    app.publish(room, message, isBinary);
  },

  close(ws, code, message) {
    console.log('Client disconnected:', code);
  },
})
.listen(9001, (token) => {
  if (token) console.log('Listening to port 9001');
});

uWebSockets.js is for applications where WebSocket throughput is the bottleneck. The C++ implementation handles 10x more concurrent connections than ws with significantly lower per-connection memory overhead. The built-in pub/sub system (ws.subscribe(room), app.publish(room, message)) provides room-like functionality without Socket.io's full feature set.

The primary use cases are: high-concurrency game servers, market data distribution (thousands of clients receiving frequent price updates), and collaborative applications at scale. For a typical chat application with hundreds or low thousands of concurrent connections, the performance difference between ws and uWebSockets.js is irrelevant. At tens of thousands of connections with frequent messages, it matters significantly.

The tradeoff is API complexity and ecosystem. uWebSockets.js uses a different API than the standard ws interface, doesn't integrate with Express/Fastify the same way, and has a smaller ecosystem of documentation and community examples.


Performance Comparison

| Library             | Connections/sec | Latency (p99) | Memory / 1K clients |
| ------------------- | --------------- | ------------- | ------------------- |
| uWebSockets.js      | ~100K           | ~1ms          | ~60MB               |
| ws                  | ~10K            | ~5ms          | ~300MB              |
| Socket.io (ws)      | ~5K             | ~8ms          | ~400MB              |
| Socket.io (polling) | ~2K             | ~50ms         | ~500MB              |

Benchmarks are approximate. Real numbers depend on message size and hardware.


Security Considerations

WebSocket servers need the same security attention as HTTP APIs:

Authentication: WebSocket connections authenticate during the handshake. Socket.io supports passing tokens in the auth object (socket.handshake.auth.token). For ws, use query parameters or cookies during the initial HTTP upgrade request, and verify credentials before accepting the connection.
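A sketch of that handshake-time check for ws, using only node:http (verifyToken is a hypothetical stand-in for real JWT or session validation; carrying the token in the query string is one of several transport choices):

```javascript
import http from 'http';

// Hypothetical check — substitute real JWT or session validation.
function verifyToken(token) {
  return token === 'demo-secret';
}

// Pull a token out of the upgrade request's query string.
function tokenFromUpgrade(req) {
  return new URL(req.url ?? '/', 'http://localhost').searchParams.get('token');
}

const server = http.createServer();

server.on('upgrade', (req, socket, head) => {
  if (!verifyToken(tokenFromUpgrade(req))) {
    // Reject before the WebSocket handshake completes.
    socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n');
    socket.destroy();
    return;
  }
  // Accepted — hand off to wss.handleUpgrade(req, socket, head, cb) here.
});
```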

Input validation: Messages arriving over WebSocket are untrusted user input. Validate message schemas, enforce size limits (maxPayloadLength in uWebSockets.js), and sanitize content before broadcasting.

Rate limiting: WebSocket connections allow clients to send messages at high frequency. Implement per-connection rate limiting to prevent abuse. Socket.io doesn't include rate limiting — add it in middleware or use a separate rate limiter.
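One common shape for that per-connection limiter is a token bucket — sketched here as a hypothetical helper (neither ws nor Socket.io ships one; the class name and defaults are our own):

```javascript
// Token bucket: allows bursts up to `capacity` messages, refilled at
// `refillPerSec`. The clock is injectable so the logic is testable.
class TokenBucket {
  constructor(capacity, refillPerSec, now = Date.now) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity;
    this.now = now;
    this.last = now();
  }

  // Returns true if the message may proceed, false if rate-limited.
  allow() {
    const t = this.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / 1000) * this.refillPerSec
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Usage sketch inside a ws connection handler:
//   const bucket = new TokenBucket(20, 5);
//   ws.on('message', (data) => {
//     if (!bucket.allow()) return ws.close(1008, 'rate limited');
//     handle(data);
//   });
```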

CORS: The browser's CORS policy doesn't apply to WebSocket connections — the browser sends the Origin header on the handshake but doesn't enforce it. Validate the Origin header on the server to prevent cross-site WebSocket hijacking.
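An Origin allow-list check is a few lines (the allowed set here is an assumption); with ws it can run in the 'upgrade' handler or via the verifyClient option:

```javascript
// Allow-list of origins permitted to open WebSocket connections.
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

// Check the Origin header of an upgrade request; missing or unknown
// origins are rejected.
function originAllowed(req) {
  return ALLOWED_ORIGINS.has(req.headers.origin);
}

// e.g. with ws:
//   new WebSocketServer({ server, verifyClient: (info) => ALLOWED_ORIGINS.has(info.origin) })
```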


When to Choose

| Scenario                                  | Pick                         |
| ----------------------------------------- | ---------------------------- |
| Chat app, notifications, presence         | Socket.io                    |
| Need rooms + namespaces out of the box    | Socket.io                    |
| HTTP long-polling fallback for mobile     | Socket.io                    |
| Building a protocol on top of WebSockets  | ws                           |
| 10K+ concurrent connections, low latency  | uWebSockets.js               |
| Massively multiplayer game server         | uWebSockets.js               |
| Managed service (no infra)                | Ably or Pusher               |
| Next.js server actions real-time          | Ably (serverless-friendly)   |
| Corporate network WebSocket restrictions  | Socket.io (polling fallback) |

Horizontal Scaling and Infrastructure Considerations

WebSocket connections are stateful — a client is connected to a specific server instance. When you run multiple server instances (which you will in any production deployment with horizontal scaling), this creates a fundamental challenge: a message broadcast from one server instance only reaches clients connected to that instance.

The Sticky Sessions Approach

The simplest solution is sticky sessions: configure your load balancer to route all WebSocket traffic from a given client to the same server instance throughout the session. NGINX's ip_hash directive and AWS ALB's sticky session cookies both implement this. The client always connects to the same instance, so broadcasts reach all clients on that instance.
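A minimal NGINX sketch of that setup (hostnames and ports are placeholders); note the Upgrade/Connection headers, which WebSocket proxying requires regardless of stickiness:

```nginx
upstream ws_backend {
    ip_hash;                 # same client IP always maps to the same instance
    server app1:8080;
    server app2:8080;
}

server {
    listen 80;

    location /ws {
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;                  # required for Upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                # keep idle sockets open
    }
}
```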

Sticky sessions break down under two conditions: server instance failure (clients on the failed instance disconnect) and load imbalance (sessions aren't distributed evenly, leaving some instances overloaded). For applications where connection loss is acceptable (most cases), sticky sessions are the lowest-complexity scaling solution.

The Adapter Approach (Socket.io)

Socket.io's adapter system solves horizontal scaling properly. The @socket.io/redis-adapter uses Redis pub/sub to propagate events across all server instances. When one instance calls io.emit('update', data), it publishes to Redis, and all other instances subscribe to that channel and forward to their connected clients:

import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';

const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();
await Promise.all([pubClient.connect(), subClient.connect()]);

io.adapter(createAdapter(pubClient, subClient));

This enables true horizontal scaling: any instance can broadcast to all connected clients regardless of which instance they're connected to. The trade-off is Redis as an additional infrastructure dependency. For applications already using Redis for caching or rate limiting, this cost is marginal. For applications with no Redis, it adds operational complexity.

The @socket.io/postgres-adapter provides the same capability using PostgreSQL's LISTEN/NOTIFY mechanism — no Redis required if you're already running Postgres. Performance is lower than Redis (PostgreSQL's pub/sub is not optimized for high-throughput messaging) but adequate for applications under moderate load.

The ws Library at Scale

The ws library provides no built-in scaling mechanism — it's a WebSocket server, not a complete real-time application framework. Scaling ws-based applications requires implementing your own coordination:

  • Redis pub/sub for event propagation (same as Socket.io's adapter, but implemented manually)
  • A connection registry that maps user/room identifiers to server instances
  • Explicit handling of instance failure and reconnection

This is more work than Socket.io's adapter but gives complete control over the coordination mechanism. Teams building custom real-time protocols on top of ws often implement a lightweight event bus for cross-instance communication rather than adopting a full framework.
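A sketch of that manual coordination (function names are our own; the Redis bridge assumes the node-redis client and is defined but not invoked here):

```javascript
const OPEN = 1; // WebSocket.OPEN

// Local fan-out: deliver one raw payload to every open client on this
// instance. Returns the number of clients actually reached.
function fanOut(clients, raw) {
  let sent = 0;
  for (const client of clients) {
    if (client.readyState === OPEN) {
      client.send(raw);
      sent += 1;
    }
  }
  return sent;
}

// Cross-instance bridge (sketch): every instance subscribes to a shared
// channel; publishing there reaches clients on all instances.
async function attachRedisBridge(wss, createClient) {
  const pub = createClient();
  const sub = pub.duplicate();
  await Promise.all([pub.connect(), sub.connect()]);
  await sub.subscribe('broadcast', (raw) => fanOut(wss.clients, raw));
  return {
    broadcast: (msg) => pub.publish('broadcast', JSON.stringify(msg)),
  };
}
```

This is essentially what Socket.io's Redis adapter does internally, minus the room bookkeeping.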

uWebSockets.js in Production

uWebSockets.js's extreme throughput (benchmarked at 10-15x more connections per server than ws) is most relevant for gaming, financial data feeds, and high-frequency notification systems. Its C++ implementation keeps connection state outside V8's heap, which reduces garbage-collection pressure and the GC-pause latency spikes that appear at high connection counts.

The operational trade-off: uWebSockets.js is maintained by a single author, not a large open-source community. The documentation is sparse, and integration with common Node.js middleware patterns requires more effort. Teams choosing uWebSockets.js typically do so after profiling and confirming that ws or Socket.io becomes a bottleneck — not speculatively.

Testing WebSocket Code

Testing WebSocket handlers requires running an actual server (in-process or on a test port). Unlike HTTP, you can't mock WebSocket behavior as cleanly with static responses. A practical pattern for Vitest or Jest:

// Start the server in beforeAll, close it in afterAll
import { createServer } from 'http';
import { Server } from 'socket.io';
import { io as ioc } from 'socket.io-client';

let server, io, clientSocket;

beforeAll(async () => {
  server = createServer();
  io = new Server(server);
  await new Promise((resolve) => server.listen(0, resolve));
  const port = server.address().port;
  clientSocket = ioc(`http://localhost:${port}`);
  await new Promise((resolve) => clientSocket.on('connect', resolve));
});

afterAll(() => {
  clientSocket.close();
  io.close(); // also closes the underlying HTTP server
});

test('broadcasts message to room', async () => {
  // The server-side socket must join the room, or the broadcast won't reach it
  for (const socket of io.of('/').sockets.values()) socket.join('test-room');

  const received = new Promise((resolve) => clientSocket.on('message', resolve));
  io.to('test-room').emit('message', { text: 'hello' });
  const msg = await received;
  expect(msg.text).toBe('hello');
});

This pattern works with both Socket.io and ws. The key is using actual TCP connections rather than mocking the socket layer — WebSocket behavior (reconnection, acknowledgments, event ordering) is complex enough that mocking introduces false confidence.

Frequently Asked Questions

When does Socket.io make sense over a managed service like Ably?

Socket.io makes sense when you need to run WebSocket infrastructure on your own servers — on-premise deployments, regulated industries with data residency requirements, or when your application's WebSocket traffic is high enough that managed service costs become a significant budget item. For most applications starting with real-time features, a managed service (Ably, Pusher) reduces operational overhead substantially. Socket.io's value is control and portability, not simplicity.

How does WebSocket performance compare to Server-Sent Events (SSE)?

Server-Sent Events (SSE) are the right choice for unidirectional push from server to client — news feeds, notifications, status updates. SSE works over standard HTTP/1.1, requires no special server configuration, and scales with standard HTTP infrastructure including Vercel's Edge Network. WebSockets are necessary when you need bidirectional communication — the client sends events to the server with low latency, not just receiving. For most notification-style use cases, SSE is simpler and scales more naturally. For chat, collaboration, and gaming, WebSockets are necessary.
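For contrast, a minimal SSE endpoint needs nothing beyond node:http — no handshake, no library (the /events path and the 1-second interval are arbitrary choices):

```javascript
import http from 'http';

// Format one SSE frame: a "data:" line plus a blank-line terminator.
function sseFrame(data) {
  return `data: ${JSON.stringify(data)}\n\n`;
}

const server = http.createServer((req, res) => {
  if (req.url === '/events') {
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    });
    const timer = setInterval(() => {
      res.write(sseFrame({ ts: Date.now() })); // push-only: server → client
    }, 1000);
    req.on('close', () => clearInterval(timer));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// server.listen(3000); browsers consume this with: new EventSource('/events')
```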

Can I use WebSockets with Next.js deployed on Vercel?

Vercel's serverless architecture doesn't support persistent WebSocket connections in standard API routes — connections close after the function completes. Options for WebSockets with Next.js include Vercel's experimental WebSocket support (currently in beta), using a separate WebSocket server (a long-running Node.js process on Railway or Fly.io that your Next.js app connects to), or using a managed real-time service (Ably, Pusher) that handles the persistent connection layer outside your Next.js deployment.

What's the difference between Socket.io rooms and namespaces?

Namespaces provide isolated communication channels within a single Socket.io server — a /chat namespace and a /notifications namespace can coexist, each with its own event handlers and room organization. Rooms within a namespace group clients for targeted broadcasts. The typical pattern: namespaces separate major application features; rooms group users within a feature (a chat room, a collaborative document session). Rooms are lightweight (no new socket connections required) and can be created and destroyed dynamically as users join and leave.

Compare WebSocket library package health on PkgPulse. Related: Best Realtime Libraries 2026, Hono vs Elysia 2026, and Best Node.js Logging Libraries 2026.
