
Best Node.js Logging Libraries 2026

PkgPulse Team

TL;DR

Pino for production Node.js services. Winston for flexible multi-destination logging. Morgan for Express HTTP request logging. The logging landscape has converged around structured JSON logging to stdout, with Pino as the performance leader. Choose based on whether you need raw speed, flexibility, or simplicity.

Key Takeaways

  • Winston: ~15M weekly downloads — largest install base
  • Pino: ~9M downloads — fastest, used by Fastify
  • Morgan: ~4M downloads — HTTP request logging for Express
  • Bunyan: ~2M downloads — original JSON logger, mostly legacy
  • Structured JSON to stdout is the production standard in 2026

The Libraries

Pino — Fastest

Pino was built with one philosophy: logging should never become your performance bottleneck. The team behind Fastify created Pino, and their approach is fundamentally different from other loggers. Rather than serializing and writing synchronously inside your event loop, Pino pushes JSON to stdout using asynchronous writes, offloading the actual I/O to the operating system and keeping your Node.js event loop free for request handling. In high-throughput APIs where you're logging thousands of events per second, this distinction matters enormously — Pino is 5-8x faster than Winston in benchmarks, and that gap represents real CPU time that your application can use for business logic instead of string serialization.

The child logger pattern is Pino's killer feature for production systems. When a request comes in, you create a child logger with the requestId bound, and every log line in that request automatically includes it. This enables powerful downstream querying: in Grafana Loki or Elasticsearch, you can filter all logs for a specific request in milliseconds.

import pino from 'pino';
const logger = pino({ level: 'info' });

logger.info({ requestId: '123', userId: 'u_456' }, 'Request processed');
// Output: {"level":30,"time":1709900000000,"requestId":"123","userId":"u_456","msg":"Request processed"}

// Child logger for request scoping (req and err come from the route handler)
const reqLogger = logger.child({ requestId: req.id });
reqLogger.info('Processing');
reqLogger.error({ err }, 'Failed');

Best for: High-throughput APIs and production services. 5-8x faster than Winston in benchmarks, and used by Fastify. See our detailed Pino vs Winston comparison for a deeper dive.

Winston — Most Flexible

Winston's design philosophy centers on transports — the idea that a logger should be able to write to multiple destinations simultaneously without changing the application code. Where Pino says "write to stdout and let the infrastructure handle routing," Winston says "the logger itself can route to files, databases, external services, and the console in parallel." This makes Winston genuinely powerful for scenarios where you need logs going to different destinations at different levels — errors to PagerDuty, all logs to a file, and info-level events to your analytics service.

The cost of this flexibility is performance. Winston's transport system involves synchronous operations and complex format pipelines, which is why it's roughly 6x slower than Pino under load. For most applications that aren't handling thousands of requests per second, this difference is negligible. But for high-throughput services, you'll feel it.

Winston's ecosystem is rich: winston-cloudwatch, winston-mongodb, winston-datadog, and dozens of other transports are maintained by the community. If you need to plug into an obscure log destination, Winston likely has a transport for it.

const winston = require('winston');
const logger = winston.createLogger({
  transports: [
    new winston.transports.Console({ format: winston.format.json() }),
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    // Community: winston-cloudwatch, winston-mongodb, winston-datadog
  ],
});

logger.info('Server started', { port: 3000 });

Best for: Multiple log destinations, custom transports, complex formatting pipelines.

Morgan — HTTP Request Logging

Morgan occupies a specific niche: it is an Express middleware that logs HTTP request/response data, not a general-purpose application logger. This distinction matters because Morgan and Pino/Winston solve different problems — you'll often use them together. Morgan handles the access log layer (who requested what, when, with what status code and timing), while Pino or Winston handles your application-level events (what your business logic did, what errors occurred, what data was processed).

In production, Morgan is most useful when configured to output JSON format and piped through a general logger, so all log lines share the same format and destination. The combined Apache format is useful for human-readable development logs, but in production you want structured data.

const morgan = require('morgan');
app.use(morgan('combined')); // Apache combined log format
// Or: 'tiny', 'dev', custom format
// 'tiny' → GET /api/users 200 1250 - 45.1 ms

// JSON format for production:
app.use(morgan((tokens, req, res) => JSON.stringify({
  method: tokens.method(req, res),
  url: tokens.url(req, res),
  status: parseInt(tokens.status(req, res)),
  responseTime: parseFloat(tokens['response-time'](req, res)),
})));

Best for: Express HTTP request logging. Use alongside Pino or Winston for application logs.

Bunyan — Legacy but Stable

Bunyan was the first popular Node.js JSON logger and established many conventions that Pino later improved upon. It introduced the idea of structured JSON logging with serializers, child loggers, and level-based filtering that are now standard across the ecosystem. However, Bunyan has been in maintenance mode for years — active development has effectively stopped, and Pino does everything Bunyan does while being significantly faster.

const bunyan = require('bunyan');
const logger = bunyan.createLogger({ name: 'myapp' });
// JSON output, similar to Pino but slower and less maintained

Legacy choice. If you're on a Bunyan codebase, the migration to Pino is straightforward since the API patterns are similar.


Performance Benchmark

100,000 log calls benchmark:

Library    | Time    | Relative
-----------|---------|----------
Pino       | 450ms   | 1x (baseline)
Bunyan     | 1,100ms | 2.4x
Winston    | 2,800ms | 6.2x

Why Pino is faster:
✓ Async stdout writes
✓ Minimal serialization
✓ No synchronous I/O
✓ Log-stream processing in worker threads, outside the Node.js event loop

Production Logging Architecture

Node.js app (Pino/Winston)
  ↓ JSON to stdout
Log collector (Fluentd, Vector, Filebeat)
  ↓ Forward + enrich
Log aggregator (Elasticsearch, Loki, CloudWatch)
  ↓ Index and store
Dashboard (Kibana, Grafana, CloudWatch Logs)
  ↓ Query and alert

All modern loggers output JSON to stdout — the collection and storage layer is separate.


Recommendations by Use Case

Scenario                           | Recommended Library
-----------------------------------|--------------------
High-throughput API (Fastify/Hono) | Pino (built-in)
Express REST API                   | Pino + pino-http
HTTP request logging               | Morgan
Multiple destinations needed       | Winston
Legacy codebase                    | Keep what you have
New project (any)                  | Pino

Package Health

Understanding the health of these packages helps you make a long-term decision, not just a performance one. A package with declining maintenance or cratering downloads is a liability even if it technically works today.

Package | Weekly Downloads | Last Release        | GitHub Stars | Bundle Size
--------|------------------|---------------------|--------------|------------
winston | ~15M             | Active (2025)       | 22K+         | ~100KB
pino    | ~9M              | Active (2026)       | 13K+         | ~50KB
morgan  | ~4M              | Stable (maintained) | 7.5K+        | ~10KB
bunyan  | ~2M              | Stagnant (2021)     | 7K+          | ~80KB

Pino's download trajectory is the most interesting story here — it grew from roughly 3M weekly downloads in 2022 to 9M in 2026, driven by Fastify adoption and the broader shift toward performance-conscious backend development. Winston's 15M downloads remain large but include a massive installed base of legacy applications; new project adoption has been shifting toward Pino.

Bunyan's stagnation is the cautionary tale. It still works, but the last meaningful release was years ago, and its issues list has hundreds of open items. If you're starting a new project, Bunyan is not a realistic choice.


Structured Logging in Production

The logging ecosystem in 2026 has fully converged on structured JSON logging as the production standard. This shift happened for concrete reasons: when logs are structured JSON, your log aggregation platform can index individual fields, build dashboards from log data, and trigger alerts based on specific field values. An unstructured log line like "Error: connection timeout for user 12345" requires regex parsing to extract useful information. A structured line {"level":"error","msg":"connection timeout","userId":"12345","latencyMs":5000} is immediately queryable.
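The contrast is mechanical — extracting the same field from each style:

```javascript
const unstructured = 'Error: connection timeout for user 12345';
const structured =
  '{"level":"error","msg":"connection timeout","userId":"12345","latencyMs":5000}';

// Unstructured: brittle regex parsing to recover the field
const fromRegex = /user (\d+)/.exec(unstructured)?.[1];

// Structured: direct field access after a JSON parse
const record = JSON.parse(structured);

console.log(fromRegex, record.userId, record.latencyMs); // 12345 12345 5000
```

Log aggregators do the equivalent of that JSON parse once at ingest, then index every field.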

What structured logging enables in practice:

  • Field-based filtering: Show all log lines where userId = "12345" across your entire service fleet, instantly
  • Alerting on specific errors: Alert when errorCode = "DB_TIMEOUT" exceeds 10 occurrences per minute
  • Performance tracking: Average responseTimeMs over a sliding window to detect degradation
  • Correlation with traces: Join log lines to distributed traces using traceId and spanId

What you should never log in any structured field: passwords, tokens, credit card numbers, social security numbers, or any other PII. Even in supposedly internal logs, these values create compliance liability and security risk. Use Pino's redact option or a Winston custom format to scrub sensitive fields before they hit stdout.

// Pino redaction — scrub sensitive fields before logging
const logger = pino({
  redact: {
    paths: ['password', 'token', 'authorization', '*.creditCard'],
    censor: '[REDACTED]',
  },
});

logger.info({ email: 'user@example.com', password: 'secret123' }, 'Login attempt');
// Output: {"email":"user@example.com","password":"[REDACTED]","msg":"Login attempt"}

Child Loggers and Request Context

One of the most powerful patterns in production logging is binding context to a child logger and passing it through the request lifecycle. Without this, every log line must manually include the requestId, userId, or traceId — easy to forget and inconsistent. With child loggers, you bind the context once at the start of a request and all subsequent logs automatically inherit it.

import pino from 'pino';
import crypto from 'node:crypto';
import { AsyncLocalStorage } from 'node:async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();
const baseLogger = pino({ level: 'info' });

// Middleware: create child logger bound to this request
app.use((req, res, next) => {
  const requestId = req.headers['x-request-id'] || crypto.randomUUID();
  const childLogger = baseLogger.child({
    requestId,
    traceId: req.headers['x-trace-id'],
    userId: req.user?.id,
  });
  
  // Store in AsyncLocalStorage for access anywhere in call stack
  asyncLocalStorage.run({ logger: childLogger }, next);
});

// Helper to get logger anywhere in the async call stack
export function getLogger() {
  return asyncLocalStorage.getStore()?.logger ?? baseLogger;
}

// In a service function called from the route handler:
async function fetchUserData(userId: string) {
  const logger = getLogger(); // Gets the request-scoped child logger
  logger.info({ userId }, 'Fetching user data');
  // All logs include requestId/traceId from the parent context
}

The AsyncLocalStorage pattern is important here. Without it, you'd need to thread the logger through every function call as a parameter — which works but is verbose and easy to forget. AsyncLocalStorage propagates the logging context automatically through async boundaries, including across await calls, Promises, and timers.


Log Aggregation Setup

Choosing a logger is only half the equation. The other half is where logs go after they leave your process. The production-standard architecture is: application logs to stdout → log collector reads stdout → collector forwards to aggregator → aggregator indexes and makes searchable.

Grafana Loki is increasingly popular for teams already using Grafana for metrics, because it allows you to correlate logs with metrics on the same dashboard. Here's a minimal Docker Compose setup:

# docker-compose.yml — Loki + Promtail for local log aggregation
version: '3.8'
services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml

  promtail:
    image: grafana/promtail:2.9.0
    volumes:
      - /var/log:/var/log
      - ./promtail-config.yaml:/etc/promtail/config.yaml
    command: -config.file=/etc/promtail/config.yaml
    depends_on:
      - loki

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
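
The compose file mounts ./promtail-config.yaml; a minimal version (label names and paths are illustrative) looks like:

```yaml
# promtail-config.yaml — tail local log files and push them to Loki
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml  # tracks how far each file has been read

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: node-app
    static_configs:
      - targets: [localhost]
        labels:
          job: node-app
          __path__: /var/log/*.log
```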

Datadog offers the most polished managed experience with the winston-datadog or pino-datadog-transport packages sending logs directly from your process:

// Pino with Datadog via a pino v7+ transport target
import pino from 'pino';

const logger = pino(
  { level: 'info' },
  pino.transport({
    target: 'pino-datadog-transport', // ships logs from a worker thread
    options: {
      // Option names follow the pino-datadog-transport docs
      ddClientConf: { authMethods: { apiKeyAuth: process.env.DD_API_KEY } },
      service: 'my-api',
    },
  })
);

AWS CloudWatch is the default choice for teams already deployed on AWS. The winston-cloudwatch community transport handles batching and retry logic. For Pino, the pino-cloudwatch transport provides equivalent functionality.

The key architectural principle: your application code should not know or care where logs go. Pino's "log to stdout" philosophy makes this cleanest — swapping from CloudWatch to Loki is an infrastructure change with no code changes. Winston's transport approach ties the destination decision to application code, which is why migrating a Winston app to a new log destination requires code changes.


Compare all logging library health scores on PkgPulse. For a head-to-head deep dive, see Pino vs Winston 2026. Explore all packages in the PkgPulse directory.
