

PkgPulse Team

PgBouncer vs pgcat vs Supavisor: PostgreSQL Connection Pooling 2026

TL;DR

PostgreSQL's connection model is expensive — each connection creates a new OS process consuming ~5-10MB RAM. At 1,000 concurrent connections, that's 5-10GB just for overhead. Connection poolers sit in front of PostgreSQL and multiplex many application connections onto a small pool of actual DB connections. PgBouncer is the battle-tested standard — single binary, ultralight (~2MB RAM per 1,000 clients), transaction-mode pooling, and used by virtually every major PostgreSQL deployment. pgcat is the modern Rust rewrite — adds query routing (read replicas), multi-tenant sharding, and a configuration-as-code approach while maintaining PgBouncer compatibility. Supavisor is Supabase's Elixir-based pooler — built for serverless and edge workloads, handles hundreds of thousands of connections and multiple tenants by design. For a traditional server app: PgBouncer. For read/write splitting and sharding: pgcat. For serverless workloads and multi-tenant SaaS: Supavisor.

Key Takeaways

  • PostgreSQL max_connections default: 100 — each connection is an OS process; pooling is essential
  • PgBouncer uses ~2MB RAM for 1,000 clients — the lightest pooler available
  • pgcat routes queries to read replicas — SELECT → replica, writes → primary, automatically
  • Supavisor supports 1M+ connections — purpose-built for Supabase's multi-tenant architecture
  • Transaction pooling breaks prepared statements — all three have trade-offs vs session pooling
  • pgcat is written in Rust — same connection protocol as PgBouncer, drop-in compatible
  • Supavisor exposes standard PostgreSQL protocol — your app connects as if it's connecting to Postgres directly

Why PostgreSQL Needs Connection Pooling

Without pooling — each request opens a DB connection:
  1,000 concurrent users
    → 1,000 PostgreSQL connections
    → 1,000 OS processes × 5-10MB = 5-10GB RAM just for connections
    → PostgreSQL max_connections exhausted
    → New requests: "FATAL: sorry, too many clients already"

With pooling:
  1,000 concurrent app connections
    → 20 actual PostgreSQL connections
    → Pool reuses connections for transactions
    → 99% RAM reduction for connection overhead

Pooling Modes

Session mode:    1 app connection → 1 DB connection (held entire session)
                 Good compatibility, poor multiplexing

Transaction mode: 1 app connection per transaction → multiple apps share 1 DB connection
                  Best for high concurrency, breaks some session features

Statement mode:  1 DB connection per statement
                 Fastest, breaks transactions and prepared statements

Transaction mode is the standard choice for most applications. PgBouncer, pgcat, and Supavisor all support it.
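To make the trade-off concrete, here is a toy simulation (illustrative only, not real pooler code) of why transaction mode breaks prepared statements: each transaction may land on a different server connection, and a statement prepared on one connection does not exist on the others.

```typescript
// Toy model of transaction-mode pooling. Real poolers pick whichever server
// connection is free; round-robin here is just a simplification.
type ServerConn = { id: number; prepared: Set<string> };

class TransactionPool {
  private conns: ServerConn[];
  private next = 0;
  constructor(size: number) {
    this.conns = Array.from({ length: size }, (_, id) => ({
      id,
      prepared: new Set<string>(),
    }));
  }
  // Each transaction checks out a server connection for its duration only.
  checkout(): ServerConn {
    const conn = this.conns[this.next];
    this.next = (this.next + 1) % this.conns.length;
    return conn;
  }
}

const pool = new TransactionPool(2);

// Transaction 1: PREPARE a statement on whatever connection we were given.
const tx1 = pool.checkout();
tx1.prepared.add("get_user");

// Transaction 2: a different server connection — the statement is gone.
const tx2 = pool.checkout();
console.log(tx2.prepared.has("get_user")); // false: EXECUTE would fail here
```

This is exactly why the Node.js examples later in this article set `prepare: false` when connecting through a transaction-mode pooler.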


PgBouncer: The Battle-Tested Standard

PgBouncer has been the default PostgreSQL pooler since 2007. It's single-threaded by design (an event loop is all that's needed to proxy PostgreSQL's wire protocol), ultralight, and ubiquitous in managed PostgreSQL services — Heroku Postgres ships it, and AWS RDS Proxy implements the same transaction-pooling model.

Installation

# Ubuntu/Debian
apt install pgbouncer

# macOS
brew install pgbouncer

# Docker
docker run -d \
  --name pgbouncer \
  -e DATABASE_URL="postgres://user:pass@db:5432/myapp" \
  -e POOL_MODE=transaction \
  -e MAX_CLIENT_CONN=1000 \
  -e DEFAULT_POOL_SIZE=20 \
  -p 5432:5432 \
  edoburu/pgbouncer

Configuration

# pgbouncer.ini

[databases]
# Format: alias = host=... port=... dbname=... user=...
myapp = host=postgres-primary port=5432 dbname=myapp

# Read replica for read-only connections (manual setup)
myapp_ro = host=postgres-replica port=5432 dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 5432

# Authentication
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

# Pool mode
pool_mode = transaction          # transaction | session | statement

# Limits
max_client_conn = 1000           # Max connections from apps
default_pool_size = 20           # Connections to PostgreSQL per database+user pair
reserve_pool_size = 5            # Reserve for when pool is full

# Timeouts
server_idle_timeout = 600        # Close idle DB connections after 10 minutes
client_idle_timeout = 0          # Never close idle client connections

# Logging
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1

# Admin
admin_users = pgbouncer_admin
stats_users = pgbouncer_stats

# TLS
server_tls_sslmode = require

# /etc/pgbouncer/userlist.txt
# "username" "md5hash_of_password"
"app_user" "md5a1b2c3d4..."
"pgbouncer_admin" "md5..."

Connecting to PgBouncer from Node.js

// PgBouncer is transparent — connect exactly like you would to PostgreSQL
import postgres from "postgres";

const sql = postgres({
  host: "pgbouncer",    // PgBouncer hostname
  port: 5432,           // PgBouncer port
  database: "myapp",
  username: "app_user",
  password: process.env.DB_PASSWORD,
  max: 10,              // Keep this LOW — PgBouncer does the real pooling
  // Transaction mode: disable prepared statements
  prepare: false,       // IMPORTANT for transaction mode
});

// Same with pg (node-postgres)
import { Pool } from "pg";

const pool = new Pool({
  host: "pgbouncer",
  port: 5432,
  database: "myapp",
  user: "app_user",
  password: process.env.DB_PASSWORD,
  max: 10,
});
// pg only creates server-side prepared statements for *named* queries —
// in transaction mode, avoid passing `name` in your query configs.

Monitoring PgBouncer

# Connect to PgBouncer admin console
psql -h pgbouncer -p 5432 -U pgbouncer_admin pgbouncer

# Show pool stats
SHOW POOLS;

# Show active connections
SHOW CLIENTS;

# Show server connections
SHOW SERVERS;

# Show overall stats
SHOW STATS;

# Reload config without restart
RELOAD;

pgcat: Modern Rust Pooler with Query Routing

pgcat is a PostgreSQL pooler and proxy written in Rust. It's fully compatible with PgBouncer's protocol but adds query routing to read replicas, sharding, and a more modern configuration model.

Installation

# Docker (recommended)
docker run -d \
  --name pgcat \
  -p 5432:5432 \
  -v $(pwd)/pgcat.toml:/etc/pgcat/pgcat.toml \
  ghcr.io/postgresml/pgcat:latest

# Binary
# Download from https://github.com/postgresml/pgcat/releases

Configuration

# pgcat.toml

[general]
host = "0.0.0.0"
port = 5432
enable_prometheus_exporter = true
prometheus_exporter_port = 9930
log_level = "info"

[pools.myapp]
  pool_mode = "transaction"
  default_role = "any"          # "primary" | "replica" | "any" (load balance)
  query_parser_enabled = true   # Enable read/write splitting
  primary_reads_enabled = false # Don't send reads to primary when replicas available

  [pools.myapp.users]
    [pools.myapp.users.app_user]
    password = "secretpassword"
    pool_size = 20
    statement_timeout = 30000   # 30 second query timeout

  [pools.myapp.shards]
    # Single server (add more shards for sharding)
    [pools.myapp.shards.0]
    servers = [
      ["postgres-primary", 5432, "primary"],
      ["postgres-replica-1", 5432, "replica"],
      ["postgres-replica-2", 5432, "replica"],
    ]
    database = "myapp"

Query Routing in Action

// pgcat automatically routes based on query type
// No changes needed in your application code

import postgres from "postgres";

const sql = postgres({
  host: "pgcat",
  port: 5432,
  database: "myapp",
  username: "app_user",
  password: process.env.DB_PASSWORD,
  prepare: false,  // Still needed for transaction mode
});

// These automatically go to READ REPLICAS:
const users = await sql`SELECT * FROM users WHERE active = true`;
const post = await sql`SELECT * FROM posts WHERE id = ${postId}`;

// These automatically go to PRIMARY:
await sql`INSERT INTO users (email, name) VALUES (${email}, ${name})`;
await sql`UPDATE posts SET view_count = view_count + 1 WHERE id = ${postId}`;
await sql`DELETE FROM sessions WHERE expires_at < NOW()`;

// Explicit routing with custom annotation
const result = await sql`/* pgcat: primary */ SELECT * FROM users WHERE id = ${userId}`;
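As a rough mental model, the routing decision reduces to "can this query write?". The sketch below is a simplification for intuition only — pgcat's real query parser works at the protocol level, not on raw SQL strings:

```typescript
// Simplified read/write routing sketch (NOT pgcat's actual parser).
type Role = "primary" | "replica";

function routeQuery(sql: string): Role {
  // Strip /* ... */ comments, normalize case.
  const text = sql.replace(/\/\*[\s\S]*?\*\//g, "").trim().toLowerCase();
  // Anything that can write (INSERT/UPDATE/DELETE, DDL) goes to the primary.
  if (!text.startsWith("select")) return "primary";
  // Locking reads must also hit the primary.
  if (text.includes("for update") || text.includes("for share")) return "primary";
  return "replica";
}

console.log(routeQuery("SELECT * FROM users WHERE active = true")); // replica
console.log(routeQuery("UPDATE posts SET view_count = view_count + 1")); // primary
console.log(routeQuery("SELECT * FROM jobs FOR UPDATE")); // primary
```

The value of pgcat is that this classification happens in the pooler, so the application code above it stays completely unaware of the replica topology.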

Multi-Tenant Sharding

# pgcat.toml — sharding configuration
[pools.myapp_sharded]
  pool_mode = "transaction"
  sharding_function = "pg_bigint_hash"  # Hash-based sharding
  shards = 2

  [pools.myapp_sharded.shards]
    [pools.myapp_sharded.shards.0]
    servers = [
      ["shard-0-primary", 5432, "primary"],
      ["shard-0-replica", 5432, "replica"],
    ]
    database = "myapp"

    [pools.myapp_sharded.shards.1]
    servers = [
      ["shard-1-primary", 5432, "primary"],
      ["shard-1-replica", 5432, "replica"],
    ]
    database = "myapp"

/* Route query to specific shard via comment */
/* pgcat shard: 0 */ SELECT * FROM users WHERE tenant_id = 'abc';
/* pgcat shard: 1 */ SELECT * FROM users WHERE tenant_id = 'xyz';
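Under the hood, hash-based shard selection reduces to hash(key) mod shard count. The sketch below uses FNV-1a purely as a stand-in hash for illustration — pgcat's pg_bigint_hash deliberately matches PostgreSQL's own hash function so routing stays consistent with PARTITION BY HASH tables:

```typescript
// Illustrative shard selection: hash the sharding key, modulo shard count.
// NOTE: FNV-1a here is a stand-in — pgcat's pg_bigint_hash matches
// PostgreSQL's internal hash, which is what makes it partition-compatible.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (const ch of input) {
    hash ^= ch.charCodeAt(0);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

function shardFor(tenantId: string, shardCount: number): number {
  return fnv1a(tenantId) % shardCount;
}

// Every query for the same tenant deterministically lands on the same shard.
const shard = shardFor("abc", 2);
console.log(`/* shard: ${shard} */ SELECT * FROM users WHERE tenant_id = 'abc';`);
```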

Supavisor: Serverless-First Pooler

Supavisor is Supabase's connection pooler written in Elixir/OTP. It's designed for serverless workloads where thousands of short-lived connections (Lambda functions, Vercel functions) overwhelm traditional poolers.

Why Serverless Needs Different Pooling

Traditional serverless problem:
  Lambda function starts → opens DB connection → runs query → closes connection
  × 10,000 concurrent invocations = 10,000 connection open/close cycles

PgBouncer limitation:
  - Single-threaded: max ~10,000 clients (CPU-bound)
  - Each connection attempt = PgBouncer overhead

Supavisor advantage:
  - Elixir/OTP: designed for millions of lightweight processes
  - Handles 1M+ client connections efficiently
  - Persistent server-side pool to PostgreSQL (even as clients come/go)

Using Supavisor (Supabase Hosted)

// Supabase automatically provides Supavisor
// Connection string from Supabase dashboard includes pooler port (6543)

// Direct (bypasses pooler — use for migrations, long transactions)
const directUrl = "postgres://user:pass@db.project.supabase.co:5432/postgres";

// Pooled (via Supavisor — use for app queries)
const pooledUrl = "postgres://user:pass@db.project.supabase.co:6543/postgres";

// With Drizzle ORM
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

// Application queries — pooled connection
const pooledSql = postgres(process.env.SUPABASE_POOLED_URL!, {
  prepare: false,  // Required for Supavisor transaction mode
});
export const db = drizzle(pooledSql);

// Migrations — direct connection (avoids pooler timeout issues)
const migrationSql = postgres(process.env.SUPABASE_DIRECT_URL!);
export const migrationDb = drizzle(migrationSql);

Self-Hosted Supavisor

# docker-compose.yml — self-hosted Supavisor
version: "3.8"

services:
  supavisor:
    image: supabase/supavisor:latest
    ports:
      - "5432:5432"   # PostgreSQL protocol
      - "4000:4000"   # HTTP API
    environment:
      PORT: "4000"
      POSTGRES_PORT: "5432"
      POSTGRES_DB: "postgres"
      DATABASE_URL: "ecto://supavisor:password@postgres:5432/supavisor"
      CLUSTER_POSTGRES: "true"
      SECRET_KEY_BASE: "your-secret-key-base-min-64-chars"
      VAULT_ENC_KEY: "your-32-char-encryption-key"
      API_JWT_SECRET: "your-api-jwt-secret"
      METRICS_JWT_SECRET: "your-metrics-jwt-secret"
      REGION: "us-east-1"
      ERL_AFLAGS: "-proto_dist inet_tcp"

Supavisor API — Add Tenant

// Supavisor manages tenants via HTTP API
const response = await fetch("http://supavisor:4000/api/tenants", {
  method: "PUT",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${API_JWT_TOKEN}`,
  },
  body: JSON.stringify({
    tenant: {
      db_host: "postgres",
      db_port: 5432,
      db_name: "myapp",
      db_user: "app_user",
      db_password: "password",
      pool_size: 20,
      pool_mode: "transaction",
      upstream_ssl: false,
    },
  }),
});

Feature Comparison

| Feature | PgBouncer | pgcat | Supavisor |
|---|---|---|---|
| Language | C | Rust | Elixir/OTP |
| RAM per 1,000 clients | ~2MB | ~5MB | ~10MB |
| Max connections | ~10,000 | ~50,000 | 1,000,000+ |
| Read/write splitting | ❌ | ✅ Auto | ❌ |
| Sharding | ❌ | ✅ | ❌ |
| Serverless optimized | ❌ | ❌ | ✅ |
| Multi-tenant | Manual | Partial | ✅ Native |
| Pool modes | Session/Tx/Stmt | Session/Tx | Session/Tx |
| Prepared statements | ❌ (tx mode) | ❌ (tx mode) | ❌ (tx mode) |
| Prometheus metrics | Via exporter | ✅ | ✅ |
| Config format | INI | TOML | HTTP API |
| Production maturity | ✅ 15+ years | Growing | Growing |
| GitHub stars | 2.6k | 2.8k | 1.6k |

When to Use Each

Choose PgBouncer if:

  • Traditional server app (not serverless) with stable connection counts
  • Simplicity and proven reliability over a decade of production use
  • Minimum resource footprint — 2MB RAM for 1,000 clients is still unbeatable
  • You're on a managed PostgreSQL service (AWS RDS, Heroku) that already includes PgBouncer

Choose pgcat if:

  • You have PostgreSQL read replicas and want automatic query routing without application changes
  • Multi-tenant sharding across multiple PostgreSQL instances is required
  • You want Rust's performance characteristics and PgBouncer protocol compatibility
  • Better observability (per-pool metrics, query statistics) is needed

Choose Supavisor if:

  • Your application runs in serverless functions (Vercel, Lambda, Cloudflare Workers)
  • You're already on Supabase (it's included and configured automatically)
  • You need to handle connection counts in the hundreds of thousands
  • Multi-tenant SaaS where each tenant has an isolated connection pool

Ecosystem and Community Health

PgBouncer is one of the most stable pieces of PostgreSQL infrastructure software. It's written in C, has been in production for nearly two decades, and changes slowly by design. Major cloud providers have built their managed PostgreSQL offerings around PgBouncer: Amazon RDS Proxy is architecturally similar to PgBouncer's transaction pooling mode, Heroku Postgres uses PgBouncer, and Google Cloud SQL offers PgBouncer-compatible connection pooling. The GitHub repository sits at 2.6k stars — modest for a tool this critical, but infrastructure software rarely gets the GitHub star attention its production footprint deserves.

pgcat is developed by PostgresML, the company building machine learning workloads inside PostgreSQL. The motivation for building pgcat was practical: PostgresML needed query routing to read replicas for their inference workloads, and PgBouncer's single-backend architecture couldn't provide it. pgcat's 2.8k GitHub stars reflect genuine developer interest — Rust rewrites of infrastructure components attract attention by default, but pgcat has real production usage behind the interest. The observability improvements over PgBouncer (per-pool Prometheus metrics, query statistics) are consistently praised in community discussions.

Supavisor is Supabase's first-party pooler, which means it receives direct engineering resources from a well-funded company. The decision to build it in Elixir was intentional — the BEAM VM's lightweight processes map perfectly to connection management, and Supabase had existing Elixir expertise from their Realtime product. Supavisor is the production-grade pooler for every Supabase project, which means it's being stress-tested daily across millions of connections.


Real-World Adoption

PgBouncer's adoption is so broad that listing specific companies is almost meaningless — if you've used a PostgreSQL-backed SaaS product in the last decade, you've likely used PgBouncer. The defining case studies come from managed database providers. AWS found that naive application connection patterns frequently exhausted PostgreSQL's max_connections, which led to the development of RDS Proxy (which uses the same transaction pooling principles). The Stack Overflow team historically ran PgBouncer in front of their primary PostgreSQL cluster and published detailed performance data showing connection pool saturation only above 5,000 concurrent connections.

pgcat has been adopted by teams running PostgreSQL with read replicas who want automatic query routing without application-level changes. The typical use case is a Django or Rails application that had originally been written for a single PostgreSQL instance and later added read replicas for horizontal read scaling. Migrating the application to explicitly use separate database connections for reads and writes requires significant refactoring. Dropping in pgcat and enabling query_parser_enabled = true achieves the same result with zero application changes.

Supabase's multi-tenant architecture makes Supavisor uniquely appropriate for SaaS companies building on Supabase. Each customer database in a Supabase project shares the Supavisor pooler, which is tuned for the burst connection patterns of serverless functions. When a Vercel deployment receives traffic, each edge function invocation opens and closes connections rapidly. Supavisor's connection pool handles thousands of these short-lived connections without exhausting PostgreSQL's process table.


Developer Experience Deep Dive

PgBouncer's configuration is an INI file with sensible defaults. The gotcha that trips up most developers is transaction pooling's incompatibility with prepared statements. PostgreSQL prepared statements (PREPARE / EXECUTE) are session-level constructs — they exist for the lifetime of a database connection. In transaction mode, a client may get a different server connection for each transaction, meaning prepared statements from one transaction are unavailable in the next. The fix is simple but non-obvious: disable prepared statements in your Node.js database client (prepare: false in postgres.js; avoid named queries in pg). Most ORM documentation covers this for PgBouncer specifically.

The PgBouncer admin console is a useful debugging tool. Connecting to the pgbouncer database with an admin user gives you access to SHOW POOLS, SHOW CLIENTS, SHOW SERVERS, and SHOW STATS commands that reveal exactly what's happening — how many clients are waiting, how many server connections are active, and what the pool utilization looks like in real time. This is considerably more informative than generic TCP monitoring.

pgcat's TOML configuration is more explicit than PgBouncer's INI but also more expressive. The shard topology — primary servers, replicas, and the routing strategy — is defined declaratively. One meaningful improvement pgcat makes over PgBouncer is the statement_timeout setting per user pool. This lets you enforce maximum query duration at the pooler level, preventing runaway queries from holding server connections indefinitely.

Supavisor's operator experience is different because it's managed for Supabase-hosted projects. You switch between the direct connection URL (port 5432) and the pooled URL (port 6543) in your environment variables. The distinction matters: use the direct URL for database migrations and long-running administrative queries, use the pooled URL for application queries. This is a documented pattern with clear guidance.


Performance Analysis

In a direct head-to-head benchmark published by the pgcat team, with 1,000 concurrent client connections each making 10,000 simple queries, PgBouncer achieved approximately 85,000 queries per second. pgcat achieved approximately 75,000 queries per second — slightly lower due to Rust async overhead at this concurrency level compared to PgBouncer's event-loop-optimized C code. However, pgcat's throughput advantage emerges with read replica routing: queries that would have gone to an already-loaded primary now route to replicas, increasing aggregate system throughput beyond what any single-backend pooler can achieve.

Supavisor's throughput numbers differ because its architecture targets connection count scaling over raw single-tenant throughput. On a single tenant, Supavisor's BEAM overhead means lower raw queries-per-second than PgBouncer. But Supavisor handles 10,000 client connections with the same efficiency as 100, while PgBouncer begins showing scheduling overhead beyond ~5,000 clients. For Supabase's use case — thousands of tenants, each with serverless workloads generating bursts of connections — this tradeoff is the right one.

Memory usage profiles: PgBouncer uses approximately 2MB RAM for 1,000 clients. pgcat uses approximately 4-5MB for the same workload. Supavisor uses approximately 10MB per 1,000 clients, but can scale to 1,000,000 clients with linear memory scaling — something PgBouncer and pgcat cannot sustain.


Migration Guide

To adopt PgBouncer for an existing application:

  1. Install PgBouncer alongside your PostgreSQL instance.
  2. Configure pgbouncer.ini with your database connection and set pool_mode = transaction.
  3. Change your application's database URL to point to PgBouncer's port instead of PostgreSQL directly.
  4. Disable prepared statements in your database client (prepare: false for postgres.js).
  5. Test with SHOW POOLS to verify connections are being pooled.

To migrate from PgBouncer to pgcat for read replica routing:

  1. Install pgcat and configure pgcat.toml with your primary and replica servers.
  2. Set query_parser_enabled = true and primary_reads_enabled = false.
  3. Change your application's database URL to pgcat's port.
  4. Verify in pgcat's metrics that SELECT queries are routing to replicas.

Final Verdict 2026

PgBouncer remains the correct default for any traditional server application. Its two-decade track record, ultralow resource usage, and universal support from managed PostgreSQL providers make it the safe, boring choice. Use PgBouncer unless you have a specific reason not to.

pgcat is the right upgrade when you have read replicas and want to use them without application-level routing logic. The Rust performance characteristics, modern configuration, and read/write splitting make it a compelling replacement for PgBouncer in most setups.

Supavisor is the right choice for serverless workloads and Supabase-hosted projects. If you're building on Supabase, you're already using Supavisor — the question is just whether you've configured the correct URLs for pooled vs direct connections.

Connection Pooling with ORMs and Query Builders

Most Node.js applications access PostgreSQL through an ORM or query builder — Prisma, Drizzle ORM, Knex, postgres.js, or node-postgres. Each handles connection pooling differently, and understanding the interaction between your ORM's built-in pooling and a connection pooler like PgBouncer or pgcat is essential to avoid common configuration mistakes.

Prisma includes built-in connection pooling via connection_limit in the connection string. When using Prisma with PgBouncer, you should set connection_limit=1 in Prisma's config and let PgBouncer manage the pool. Using Prisma's default pool size (typically 5 connections) plus PgBouncer means you're creating nested pools — Prisma opens 5 connections to PgBouncer, which then creates its own pool. This isn't catastrophic but wastes connections and complicates capacity planning.
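In practice that means a connection URL like the following (host and credentials are placeholders; pgbouncer=true and connection_limit are Prisma's documented parameters for running behind PgBouncer):

```typescript
// Hypothetical DATABASE_URL for Prisma behind PgBouncer:
//   pgbouncer=true     → Prisma skips prepared statements (transaction-mode safe)
//   connection_limit=1 → Prisma holds one connection; PgBouncer does the pooling
const DATABASE_URL =
  "postgresql://app_user:secret@pgbouncer:5432/myapp" +
  "?pgbouncer=true&connection_limit=1";

console.log(new URL(DATABASE_URL).searchParams.get("pgbouncer")); // "true"
```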

Drizzle ORM delegates connection pooling entirely to the underlying database driver (postgres.js or node-postgres). When using Drizzle with PgBouncer in transaction mode, set prepare: false to disable prepared statements, which are incompatible with transaction-mode pooling. This is the same configuration requirement as with raw postgres.js clients.

Serverless deployments introduce a specific challenge: Lambda functions and Edge Functions create new database connections on every cold start. Without a pooler, a spike from 10 to 500 concurrent Lambda invocations creates 500 simultaneous PostgreSQL connections, which is well beyond what most RDS or Neon databases can handle. Supavisor's serverless-optimized architecture handles this gracefully — it's designed specifically for the bursty connection pattern that serverless creates. PgBouncer can also handle this but requires careful max_client_conn tuning. For Node.js background job systems that interact with databases — which have similar bursty connection patterns — see best Node.js background job libraries 2026 for how BullMQ and Inngest manage database connections in worker processes.

Monitoring Connection Pool Health

A connection pool that's misconfigured or undersized silently degrades performance and causes intermittent errors that are difficult to diagnose. Establishing good monitoring practices when you deploy a connection pooler prevents connection exhaustion incidents.

PgBouncer exposes pool health through its SHOW commands: SHOW POOLS shows current pool state, SHOW CLIENTS shows active client connections, and SHOW STATS shows cumulative request counts and wait times. These commands work only on PgBouncer's admin database (connect to pgbouncer database on the PgBouncer port). Scraping these stats into Prometheus via pgbouncer-exporter gives continuous visibility into pool utilization.
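A minimal health heuristic over SHOW POOLS rows might look like this — the column names match PgBouncer's admin console output, but the threshold logic is an illustrative sketch, not official guidance:

```typescript
// Interpret one SHOW POOLS row. Field names follow PgBouncer's admin console;
// the "saturated" heuristic is illustrative, not an official threshold.
interface PoolRow {
  database: string;
  cl_active: number;  // clients currently paired with a server connection
  cl_waiting: number; // clients queued, waiting for a server connection
  sv_active: number;  // server connections in use
  sv_idle: number;    // server connections idle in the pool
  maxwait: number;    // seconds the longest-waiting client has waited
}

function poolHealth(row: PoolRow): "ok" | "saturated" {
  // Clients queuing while no server connection is idle means the pool is
  // too small — or queries are slow and holding connections too long.
  return row.cl_waiting > 0 && row.sv_idle === 0 ? "saturated" : "ok";
}

console.log(poolHealth({
  database: "myapp", cl_active: 20, cl_waiting: 45,
  sv_active: 20, sv_idle: 0, maxwait: 3,
})); // "saturated"
```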

pgcat provides a metrics endpoint compatible with Prometheus, making observability significantly easier than PgBouncer's SHOW-based approach. The metrics include per-pool connection counts, query routing distribution between primary and replicas, and client wait times. For Kubernetes deployments, pgcat's metrics endpoint integrates naturally with Prometheus Operator's ServiceMonitor.

Supavisor's pooling metrics are exposed through the Supabase Dashboard if you're using Supabase Cloud. For self-hosted Supavisor, metrics are available via a Prometheus endpoint. The key metric to watch is wait time — when connections are queuing to access the pool, it indicates either that the pool is undersized for your load or that your database queries are taking too long and holding connections. High wait times in the pool often reveal slow queries that should be optimized at the database level rather than addressed with more connections. For teams using Drizzle ORM alongside connection poolers, best database migration tools Node.js 2026 covers how schema migrations interact with connection pooling during rolling deployments.

Sizing Your Connection Pool

Getting connection pool sizing right requires understanding both your application's concurrency requirements and your PostgreSQL server's capacity. A sound starting point: divide PostgreSQL's available connections by the number of application instances, then subtract headroom for admin tasks and direct connections.

A common starting point for a 4-core PostgreSQL server is 100 total connections. With a single application instance using PgBouncer, set the pool size to 20-40 connections, leaving headroom for admin queries and monitoring tools. For 5 application instances, each instance's pool size should be lower (4-8 connections) to prevent total connections from exceeding 100.

The key insight is that most web applications don't benefit from large connection counts. A request that makes 3 database queries needs 1 connection for at most 50-100ms. Thousands of requests per second can be served by a pool of 20 connections if queries are consistently fast. The connection need scales with query duration, not request count. If you consistently need more than 50 connections per application instance, the bottleneck is almost always slow queries or missing indexes rather than insufficient connection pool capacity.
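That insight is Little's Law: connections needed ≈ throughput × average query duration. A quick sanity-check helper (the headroom factor is an illustrative safety margin, not a standard constant):

```typescript
// Little's Law sizing: concurrent connections ≈ queries/sec × avg query seconds.
// headroom is an illustrative safety multiplier for bursts, not a standard value.
function poolSizeFor(
  queriesPerSecond: number,
  avgQuerySeconds: number,
  headroom = 1.5,
): number {
  return Math.ceil(queriesPerSecond * avgQuerySeconds * headroom);
}

// 2,000 qps at 5ms average → ~10 busy connections → 15 with headroom.
console.log(poolSizeFor(2000, 0.005)); // 15
```

Run the numbers the other way and the "slow queries are the real bottleneck" point falls out: at 2,000 qps, needing 50+ connections implies average query times above ~25ms, which is usually an indexing problem, not a pooling problem.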


Methodology

Data sourced from official PgBouncer documentation (pgbouncer.org), pgcat documentation and GitHub repository (github.com/postgresml/pgcat), Supavisor documentation (supabase.com/docs/guides/database/connecting-to-postgres), benchmarks from pgcat's README, GitHub star counts as of February 2026, and community discussions on the Supabase Discord and r/PostgreSQL. Connection overhead figures from PostgreSQL documentation on connection handling.


Related: Neon vs Supabase Postgres vs Tembo 2026, Best Node.js Background Job Libraries 2026, Hono vs Elysia 2026
