
Testcontainers Node.js vs Docker Compose 2026

PkgPulse Team

TL;DR

Testcontainers-node is the modern choice for Node.js integration tests that need real databases — it programmatically spins up Docker containers per test suite, ensures isolation, and tears down automatically. Docker Compose is still valid for stable, pre-shared environments and when your whole team shares a development stack. For greenfield projects with per-PR CI and isolated test runs, testcontainers wins on ergonomics.

Key Takeaways

  • Testcontainers-node (v10+) starts a fresh container per test file — full isolation, no shared state
  • Docker Compose is a fixed environment — faster startup, but shared state across test runs
  • CI performance: Testcontainers adds 5-15s container startup per suite but eliminates flakiness from shared DB state
  • @testcontainers/postgresql, @testcontainers/mysql, @testcontainers/redis — typed module system as of v10
  • Vitest + testcontainers: Use globalSetup for container lifecycle; Jest works the same way
  • Best for: Teams doing per-PR CI, microservices with complex DB setup, any test that requires a real DB

The Integration Testing Problem

Unit tests with mocks lie. The ORM generates a different query than you expect. The migration runs correctly locally but fails in production because you tested against a mock, not a real PostgreSQL. The Redis EXPIRE semantics differ from your in-memory fake.

Integration tests that hit real databases are the gold standard — but they're painful to manage:

Traditional problems:
├── Shared dev database gets polluted between runs
├── Docker Compose setup is a prerequisite (devs forget to run it)
├── CI environments need pre-provisioned databases
├── Test isolation requires careful data setup/teardown
└── Parallel test runs conflict on shared state

Testcontainers solves this by making containers a first-class test primitive.


Testcontainers-Node: Real Containers in Code

npm install testcontainers
npm install @testcontainers/postgresql  # typed module

Basic PostgreSQL Setup

import { PostgreSqlContainer, StartedPostgreSqlContainer } from "@testcontainers/postgresql";
import { drizzle } from "drizzle-orm/node-postgres";
import { migrate } from "drizzle-orm/node-postgres/migrator";
import { eq } from "drizzle-orm";
import { describe, it, beforeAll, afterAll, expect } from "vitest";
import { users } from "./schema"; // your drizzle table definitions

describe("UserRepository", () => {
  let container: StartedPostgreSqlContainer;
  let db: ReturnType<typeof drizzle>;

  beforeAll(async () => {
    // Testcontainers starts a fresh PostgreSQL 16 container
    container = await new PostgreSqlContainer("postgres:16-alpine")
      .withDatabase("testdb")
      .withUsername("testuser")
      .withPassword("testpass")
      .start();

    // Connect to the real container
    db = drizzle(container.getConnectionUri());

    // Run real migrations against the real database
    await migrate(db, { migrationsFolder: "./drizzle" });
  }, 60_000); // allow up to 60s for container start on slow CI

  afterAll(async () => {
    await container.stop();
  });

  it("creates and retrieves a user", async () => {
    const [user] = await db
      .insert(users)
      .values({ email: "test@example.com", name: "Test User" })
      .returning();

    const found = await db.select().from(users).where(eq(users.id, user.id));

    expect(found[0].email).toBe("test@example.com");
  });
});

This is real PostgreSQL. Real transactions. Real constraint checks. Real RETURNING clauses. No mocking, no faking.

Multiple Containers

Testcontainers composes naturally:

import { PostgreSqlContainer, StartedPostgreSqlContainer } from "@testcontainers/postgresql";
import { RedisContainer, StartedRedisContainer } from "@testcontainers/redis";
import { Network, StartedNetwork } from "testcontainers";
import { drizzle } from "drizzle-orm/node-postgres";
import { createClient } from "redis";
import { describe, it, beforeAll, afterAll, expect } from "vitest";
import { UserCachingService } from "./user-caching-service"; // the service under test

describe("CachingService", () => {
  let pg: StartedPostgreSqlContainer;
  let redis: StartedRedisContainer;
  let network: StartedNetwork;

  beforeAll(async () => {
    // Create a shared network for inter-container communication
    network = await new Network().start();

    [pg, redis] = await Promise.all([
      new PostgreSqlContainer("postgres:16-alpine")
        .withNetwork(network)
        .withNetworkAliases("postgres")
        .start(),
      new RedisContainer("redis:7-alpine")
        .withNetwork(network)
        .withNetworkAliases("redis")
        .start(),
    ]);
  });

  afterAll(async () => {
    await Promise.all([pg.stop(), redis.stop()]);
    await network.stop();
  });

  it("caches user data in Redis after DB query", async () => {
    const db = drizzle(pg.getConnectionUri());
    const redisClient = createClient({ url: redis.getConnectionUrl() });
    await redisClient.connect();

    const service = new UserCachingService(db, redisClient);
    await service.getUser("user-123"); // miss: hits DB, writes cache
    await service.getUser("user-123"); // hit: reads from Redis

    const cached = await redisClient.get("user:user-123");
    expect(JSON.parse(cached!)).toMatchObject({ id: "user-123" });
  });
});

Available Modules (v10+)

| Package                       | Container                           |
| ----------------------------- | ----------------------------------- |
| @testcontainers/postgresql    | PostgreSQL 9.6–16                   |
| @testcontainers/mysql         | MySQL 5.7–8.x                       |
| @testcontainers/mongodb       | MongoDB 4.x–7.x                     |
| @testcontainers/redis         | Redis 6–7                           |
| @testcontainers/kafka         | Apache Kafka                        |
| @testcontainers/elasticsearch | Elasticsearch 7–8                   |
| @testcontainers/localstack    | AWS service mocks (S3, SQS, SNS)    |
| @testcontainers/minio         | S3-compatible object storage        |
| @testcontainers/chromium      | Chromium browser (for visual tests) |

Docker Compose: The Established Approach

Docker Compose defines your test infrastructure as a YAML file:

# docker-compose.test.yml
version: "3.9"
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

Your test setup script:

#!/bin/bash
# scripts/test.sh
docker compose -f docker-compose.test.yml up -d --wait
npm test
docker compose -f docker-compose.test.yml down

You can also wait for readiness inside the test runner itself, via a Vitest globalSetup:

// vitest.config.ts
export default defineConfig({
  test: {
    globalSetup: "./tests/setup/docker-wait.ts",
  },
});

// tests/setup/docker-wait.ts
import { execSync } from "child_process";

export async function setup() {
  // Poll until PostgreSQL accepts connections (compose started it already).
  // Note: pg_isready must be installed on the host (ships with postgresql-client).
  let retries = 10;
  while (retries > 0) {
    try {
      execSync("pg_isready -h localhost -p 5432");
      return;
    } catch {
      retries--;
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
  throw new Error("PostgreSQL did not become ready within 10s");
}

Docker Compose Strengths

Pre-started environment — The database is running before any test starts. No per-test startup time.

Shared across test files — All test files connect to the same database. Good for sequential test runs where you want state to persist.

Works for local dev too — docker compose up serves both development and testing.

Familiar tooling — Every backend developer knows Docker Compose.


Head-to-Head Comparison

| Dimension                     | Testcontainers                              | Docker Compose                     |
| ----------------------------- | ------------------------------------------- | ---------------------------------- |
| Setup                         | Code-first, colocated with tests            | YAML file + shell scripts          |
| Isolation                     | Fresh container per suite (default)         | Shared across all tests            |
| Startup time                  | +5-15s per suite                            | One-time startup before tests      |
| Total CI time (10 test files) | ~2-3 min (parallel containers)              | ~1-2 min (single shared DB)        |
| State isolation               | ✅ Automatic                                | ❌ Manual (transactions, truncate) |
| Parallel test runs            | ✅ No port conflicts                        | ⚠️ Need unique ports or DB names   |
| Container version per test    | ✅ Yes (postgres:16 in one, :15 in another) | ❌ One version for all             |
| Dependencies                  | Docker daemon                               | Docker daemon + docker compose     |
| Learning curve                | Low (it's just JavaScript)                  | Low (YAML is familiar)             |
| Monorepo support              | ✅ Each package gets its own container      | ⚠️ Complex port management         |

Performance: Real Numbers

The key question: does testcontainers add unacceptable overhead?

Container startup time by image size:

| Image              | First pull        | Subsequent start |
| ------------------ | ----------------- | ---------------- |
| postgres:16-alpine | 30-60s (download) | 3-5s             |
| redis:7-alpine     | 15-30s (download) | 1-2s             |
| mongo:7            | 45-90s (download) | 3-6s             |

After the first run, Docker caches images locally and in CI cache. Per-run cost is 3-5s per container.

For 10 test suites needing PostgreSQL:

  • Testcontainers (parallel): ~5s each, started concurrently, so ~5s wall-clock overhead
  • Testcontainers (sequential): ~5s × 10 = ~50s overhead
  • Docker Compose (shared): 5s one-time startup = 5s overhead
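The arithmetic generalizes: parallel suites pay roughly one startup of wall-clock time, while sequential suites pay it once per suite. A throwaway helper to estimate it (the 5s figure is this article's measurement, not a guarantee):

```typescript
// Estimates container-startup overhead (seconds of wall-clock time) for a CI run.
export function containerOverheadSeconds(
  suites: number,
  startupSec: number,
  parallel: boolean,
): number {
  // Parallel suites start containers concurrently, so overhead ≈ one startup;
  // sequential suites pay the startup cost once per suite.
  return parallel ? startupSec : suites * startupSec;
}

containerOverheadSeconds(10, 5, true);  // parallel: 5s wall-clock
containerOverheadSeconds(10, 5, false); // sequential: 50s wall-clock
```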

CI caching strategy to minimize container pull time:

# .github/workflows/test.yml
- name: Cache Docker images
  uses: ScribeMD/docker-cache@0.5.0
  with:
    key: docker-${{ runner.os }}-${{ hashFiles('**/package.json') }}

Testcontainers with Vitest: Production Setup

For a realistic production setup with Vitest:

// vitest.config.ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // globalSetup runs once for the entire test run, before any workers start
    globalSetup: ["./tests/global-setup.ts"],
    // Each test file gets its own worker (isolation)
    pool: "forks",
    poolOptions: {
      forks: {
        singleFork: false, // parallel forks
      },
    },
  },
});

// tests/global-setup.ts
import { PostgreSqlContainer, StartedPostgreSqlContainer } from "@testcontainers/postgresql";
import { execSync } from "child_process";

let container: StartedPostgreSqlContainer;

export async function setup() {
  container = await new PostgreSqlContainer("postgres:16-alpine")
    .withDatabase("testdb")
    .start();

  // Workers are forked after globalSetup, so they inherit this env var
  process.env.TEST_DATABASE_URL = container.getConnectionUri();

  // Run migrations once against the fresh container
  execSync("npx drizzle-kit migrate", {
    env: { ...process.env, DATABASE_URL: container.getConnectionUri() },
  });
}

export async function teardown() {
  await container?.stop();
}

Test files then connect through a small shared helper:

// tests/shared/db.ts
import { drizzle } from "drizzle-orm/node-postgres";
import * as schema from "@/db/schema";

// Connects to container started in globalSetup
export function getTestDb() {
  return drizzle(process.env.TEST_DATABASE_URL!, { schema });
}

// Helper to reset state between tests
export async function resetTestDb(db: ReturnType<typeof getTestDb>) {
  await db.delete(users);
  await db.delete(organizations);
  // order matters for FK constraints
}

When to Choose Each

Choose Testcontainers when:

  • CI runs multiple PRs in parallel (no port conflicts)
  • You want migration testing (run against fresh schema every time)
  • Test suites need different database versions
  • You're building a library that must test against multiple PostgreSQL versions
  • You want colocated test infrastructure (no external YAML)

Choose Docker Compose when:

  • Your test suite is sequential and simple
  • You want a shared dev environment (docker compose up for both coding and testing)
  • Team is already heavily invested in Compose-based tooling
  • You want full control over the running services (attach, inspect, persist data)

Use both together:

# docker-compose.yml (for development only — databases stay running)
services:
  postgres:
    image: postgres:16-alpine
    # ... development config

# In tests, use testcontainers for ephemeral test containers
# This way dev has persistent DB, tests have isolated DB

Practical Patterns

Transaction Rollback for Fast Isolation

Instead of stopping/starting containers between tests, wrap each test in a transaction:

import { beforeEach, afterEach, describe, it } from "vitest";
import { db } from "./db";
import { orders } from "@/db/schema";

describe("OrderService", () => {
  // The transaction must stay open for the whole test. Returning from the
  // db.transaction() callback would commit it, so instead we park the callback
  // on a pending promise and reject it in afterEach to trigger the rollback.
  let tx!: Parameters<Parameters<typeof db.transaction>[0]>[0];
  let rollback!: (e: Error) => void;
  let txDone: Promise<unknown>;

  beforeEach(async () => {
    await new Promise<void>((txReady) => {
      txDone = db
        .transaction(async (transaction) => {
          tx = transaction;
          txReady(); // transaction is open — let the test run
          // hold the transaction open until afterEach rejects this promise
          await new Promise((_, reject) => (rollback = reject));
        })
        .catch(() => {}); // the rollback rejection is expected
    });
  });

  afterEach(async () => {
    rollback(new Error("rollback"));
    await txDone; // wait for the transaction to actually close
  });

  it("creates order with items", async () => {
    const [order] = await tx.insert(orders).values({ userId: "u1" }).returning();
    // ... test with tx ...
    // afterEach rolls the entire transaction back — no data persists
  });
});

Transaction rollback is 10-100x faster than truncating tables.
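When a hard reset is unavoidable (for example, tests that must commit), one TRUNCATE statement covering every table sidesteps FK ordering entirely. A sketch: buildTruncateSql is a hypothetical helper, and the drizzle wiring in the comment is an assumption about your setup.

```typescript
// Builds a single TRUNCATE statement covering all listed tables.
// CASCADE clears dependent rows so foreign-key order doesn't matter;
// RESTART IDENTITY resets sequences for deterministic IDs across runs.
export function buildTruncateSql(tables: string[]): string {
  return `TRUNCATE TABLE ${tables.join(", ")} RESTART IDENTITY CASCADE`;
}

// Assumed usage with drizzle's raw-SQL escape hatch:
//   await db.execute(sql.raw(buildTruncateSql(["order_items", "orders", "users"])));
```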

LocalStack for AWS Integration Tests

import { LocalstackContainer, StartedLocalStackContainer } from "@testcontainers/localstack";
import { S3Client, CreateBucketCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { describe, it, beforeAll, afterAll } from "vitest";

describe("S3UploadService", () => {
  let localstack: StartedLocalStackContainer;
  let s3: S3Client;

  beforeAll(async () => {
    localstack = await new LocalstackContainer("localstack/localstack:3")
      .withEnvironment({ SERVICES: "s3,sqs" }) // restrict to the services under test
      .start();

    s3 = new S3Client({
      endpoint: localstack.getConnectionUri(),
      region: "us-east-1",
      credentials: { accessKeyId: "test", secretAccessKey: "test" },
      forcePathStyle: true,
    });

    await s3.send(new CreateBucketCommand({ Bucket: "test-bucket" }));
  });

  afterAll(async () => {
    await localstack.stop();
  });

  it("uploads and retrieves files", async () => {
    await s3.send(new PutObjectCommand({
      Bucket: "test-bucket",
      Key: "test.txt",
      Body: "Hello, world!",
    }));

    // verify retrieval...
  });
});

Ecosystem & Community

Testcontainers is a polyglot project — it originated in the Java ecosystem and has first-class implementations for Java, Go, Python, .NET, and Node.js. This cross-language pedigree means the core concepts are battle-tested against far more production use cases than a Node.js-only tool would accumulate. The testcontainers-node package sees around 250K weekly npm downloads, a figure that understates actual usage, since many enterprise shops adopted Testcontainers in Java first and spread that infrastructure knowledge across language teams.

The module ecosystem in v10+ is a significant improvement over earlier versions. The typed modules (@testcontainers/postgresql, @testcontainers/redis) provide documented, type-safe APIs with sensible defaults and health check logic baked in. Before v10, you had to configure health checks yourself, which was a common source of flaky tests.

Docker Compose has effectively infinite ecosystem breadth — any Docker image works. The trade-off is that everything requires manual configuration. There are no typed wrappers or health check helpers.

Testing Strategies and the Role of Integration Tests

The question of whether to use testcontainers or Docker Compose sits within the broader question of how much integration testing your project needs. Unit tests with mocked databases are faster and more isolated but cannot catch bugs that emerge from real database behavior: query plan differences between mock and real engines, transaction isolation levels, index effectiveness, and schema migration correctness.

Integration tests with real databases catch a class of bugs that unit tests systematically miss. ORM behavior differences are the most common example — an ORM query that looks correct in TypeScript may generate inefficient SQL that causes timeout issues under load, or may have subtle differences in how NULL handling works compared to your mock assumptions. Running against a real PostgreSQL container exposes these issues in CI rather than in production.

The testcontainers approach pairs naturally with a layered testing strategy: unit tests (mocked, no containers), integration tests (testcontainers for database layer), and end-to-end tests (testcontainers or Docker Compose for full stack). For teams building REST APIs with Hono, Express, or Fastify, the integration test layer using testcontainers is particularly valuable for testing route handlers against real database behavior without spinning up the full application stack.

One underappreciated benefit of testcontainers over mocking: it catches regressions when you upgrade your database version. If your application runs on PostgreSQL 15 and you upgrade to PostgreSQL 16, running your integration test suite against the new version before deploying to production will surface any query compatibility issues. Docker image tags make this trivially easy — change postgres:15-alpine to postgres:16-alpine in your test setup and run the suite.

Real-World Adoption

Testcontainers has seen substantial adoption in companies that take integration testing seriously, particularly fintech and API-platform backends, typically for testing database migration correctness and service boundary contracts. In the Node.js ecosystem specifically, it's common in projects using Drizzle ORM, Prisma, or TypeORM where real database behavior is critical to validate — ORMs have subtle query generation differences that only manifest against actual databases.

Docker Compose remains the dominant choice in teams with established DevOps practices. Many companies have invested years in Docker Compose-based local development environments, and the incremental cost of using those same services for testing is low. The separation between "my dev environment" and "test environment" is often intentionally blurred.

Migration Guide

Migrating from Docker Compose to testcontainers is straightforward for most Node.js projects. The most important step is installing and verifying Docker is available on your CI runners. GitHub Actions' ubuntu-latest runners include Docker, so for most teams this is not a blocker.

Start by picking one test file that uses a database and converting it to testcontainers. Run it in CI to validate image caching is working. Once the single file is running cleanly, convert the rest. A monorepo with ten packages that each need PostgreSQL typically takes half a day to fully migrate. The main pitfall is timeout configuration — CI runners can be slow to pull images, so setting beforeAll timeouts to 60 seconds is critical.

Common migration errors: forgetting to call container.stop() in afterAll (causes container leaks), not waiting for the container's health check before running migrations (causes connection errors), and hardcoding ports instead of using container.getConnectionUri() (causes port conflicts in parallel runs).

Final Verdict 2026

For teams building Node.js applications with real database dependencies, testcontainers is the superior testing tool in 2026. The isolation guarantees eliminate an entire category of flaky CI failures, the typed module system removes configuration boilerplate, and the ability to run migration tests against a fresh schema on every CI run catches schema drift bugs that Docker Compose shared environments miss entirely.

Docker Compose remains valid for local development where persistence is desired, and for teams with sequential test suites where the startup overhead of testcontainers doesn't buy proportional isolation value. Use both: Docker Compose for your persistent dev database, testcontainers for your CI integration tests.

Developer Experience Deep Dive

The developer experience gap between testcontainers and Docker Compose is most apparent when onboarding new engineers to a project. With Docker Compose, a new developer must know to run docker compose up before running tests, and they must know which compose file to use (often there are multiple — one for development, one for tests, one for CI). Missing this step causes confusing connection errors that are hard to diagnose without context.

Testcontainers eliminates this category of onboarding friction entirely. Clone the repo, run npm test, and the containers start automatically. There's no separate infrastructure step. This matters more than it sounds: onboarding friction compounds across every new developer, every machine migration, and every fresh CI runner.

The TypeScript experience with testcontainers v10+ is genuinely pleasant. The typed module system means your IDE autocompletes container configuration options and the TypeScript compiler catches misconfigurations before runtime. Knowing that PostgreSqlContainer accepts .withDatabase(), .withUsername(), and .withPassword() without consulting documentation — because IntelliSense shows you — is a small but meaningful quality-of-life improvement.

Docker Compose's YAML-based configuration has its own DX advantages. The declarative format is readable and easy to understand at a glance. Multiple developers on a team can read and understand a docker-compose.yml regardless of their JavaScript proficiency — it's more universally readable than testcontainers code. For teams where DevOps and backend engineers collaborate on infrastructure configuration, YAML is often the shared language.

Performance Considerations in Large Monorepos

The performance calculation for testcontainers changes significantly in monorepo contexts. A monorepo with 20 packages, each with integration tests requiring PostgreSQL, presents a different challenge than a single application.

With Docker Compose, you have a choice: one shared database for all packages (fast startup, state isolation problems) or separate databases per package (configuration complexity, port management). Neither option is entirely satisfying. Teams often end up with a compromise: separate databases plus manually managed cleanup scripts that are fragile and frequently break.

With testcontainers in a Vitest workspace, each package can start its own container without interfering with any other package. The containers run in parallel, each on a randomly assigned port, and each tears down independently after its test suite completes. The total wall-clock time for parallel execution is only as long as the slowest package's tests plus container startup — not the sum of all test times.
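In a Vitest workspace this takes almost no configuration; a sketch assuming a packages/* layout (the glob is an assumption about your repo):

```typescript
// vitest.workspace.ts (repo root)
import { defineWorkspace } from "vitest/config";

// Each matched package runs with its own vitest.config.ts, and therefore its
// own globalSetup and its own container, all in parallel on random ports.
export default defineWorkspace(["packages/*"]);
```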

The Docker image caching story in CI is crucial for testcontainers performance in large monorepos. GitHub Actions, CircleCI, and most modern CI platforms support Docker layer caching. Once postgres:16-alpine is cached on the CI runner, subsequent container starts take 3-5 seconds regardless of how many packages use it. Without caching, the first pull on a cold runner can take 30-60 seconds per image — unacceptable for large test suites.


Methodology

  • Tested testcontainers v10.x with Vitest 3.x and Jest 29.x on Node.js 22
  • Measured container startup times on GitHub Actions (ubuntu-latest) with Docker cache
  • Reviewed testcontainers-node GitHub issues for common pain points
  • Compared CI timing across 20 test suites in a monorepo (each needs PostgreSQL)
  • Tested LocalStack integration with AWS SDK v3

See how popular testing packages compare on PkgPulse — download trends, GitHub activity, bundle sizes.

For more on JavaScript testing frameworks, see best JavaScript testing frameworks 2026.

If you're deciding on your test runner, bun:test vs node:test vs Vitest covers the runner comparison in depth.

For API mocking libraries that pair with integration test setups, see best API mocking libraries 2026.

The 2026 JavaScript Stack Cheatsheet

One PDF: the best package for every category (ORMs, bundlers, auth, testing, state management). Used by 500+ devs. Free, updated monthly.