
Supabase + Drizzle ORM in serverless: the `prepare: false` problem nobody warns you about

We deployed Proofly on Vercel with Supabase Postgres and Drizzle ORM, and our API routes started throwing PgBouncer errors in production. Here's what causes it, why it only shows up under load, and the exact configuration that fixed it.

TL;DR

Supabase's Supavisor connection pooler (port 6543) uses transaction mode, which doesn't support prepared statements. Drizzle ORM sends prepared statements by default. Set `prepare: false` in your postgres-js client config. Also cap `max: 1` connections per Vercel function instance or you'll exhaust your pooler's connection limit under traffic.

We deployed Proofly on Vercel with Supabase Postgres. For the first few weeks in production, everything worked. Then we started getting errors in our Vercel function logs:

prepared statement "s0" already exists

And occasionally:

prepared statement "s1" does not exist

These errors were intermittent, never reproducible locally, and always in API routes that did simple database reads. The queries themselves were correct. The schema was correct. Our local setup — Postgres running directly without a pooler — worked fine.

It took us longer than we'd like to admit to understand what was happening.

What prepared statements are and why they break

When a PostgreSQL client sends the same query repeatedly, it can "prepare" it: the database parses and plans the query once, names it (something like s0), and then the client just sends the name plus parameters on subsequent executions. This is faster than sending the full SQL every time.

Postgres stores prepared statements in the session — the long-lived connection between client and server. If the client uses the same session for multiple queries, prepared statements work perfectly.

The problem is Supabase's Supavisor pooler in transaction mode (port 6543). Transaction mode doesn't give you a persistent session. Instead, it multiplexes many application requests through a smaller pool of real Postgres connections. When your request comes in, Supavisor assigns you a connection, your transaction runs, and then the connection is returned to the pool for someone else.

If your client prepared statement s0 on connection A, and then later tries to use s0 — but Supavisor assigns it to connection B — Postgres on connection B has never heard of s0. Hence: prepared statement "s0" does not exist.

And the mirror image: if your client is handed a connection where another client session already prepared its own s0, the fresh PREPARE collides with the leftover one. Hence: prepared statement "s0" already exists.
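A toy model makes both failure modes concrete. This is not Supavisor's real code — just a sketch where each backend connection keeps its own prepared-statement namespace and transactions are assigned round-robin:

```typescript
// Toy model of transaction-mode pooling: each backend connection has its
// own set of prepared-statement names, and every new transaction may be
// assigned a different backend.
type Backend = { id: string; prepared: Set<string> };

class ToyTransactionPooler {
  private backends: Backend[];
  private next = 0;

  constructor(size: number) {
    this.backends = Array.from({ length: size }, (_, i) => ({
      id: `conn-${i}`,
      prepared: new Set<string>(),
    }));
  }

  // Round-robin: each transaction gets whichever backend is next in line.
  acquire(): Backend {
    const b = this.backends[this.next % this.backends.length];
    this.next++;
    return b;
  }
}

function prepare(b: Backend, name: string): string | null {
  if (b.prepared.has(name)) return `prepared statement "${name}" already exists`;
  b.prepared.add(name);
  return null; // success
}

function execute(b: Backend, name: string): string | null {
  if (!b.prepared.has(name)) return `prepared statement "${name}" does not exist`;
  return null; // success
}

const pool = new ToyTransactionPooler(2);

// Transaction 1: prepare and execute s0 on the same backend — works.
const t1 = pool.acquire();
prepare(t1, "s0");
execute(t1, "s0"); // succeeds: same backend, statement is cached

// Transaction 2: the client still believes s0 exists, but this
// transaction lands on the other backend, which has never seen it.
const t2 = pool.acquire();
console.log(execute(t2, "s0")); // prepared statement "s0" does not exist

// Transaction 3: back on the first backend, re-preparing s0 collides
// with the copy left behind by transaction 1.
const t3 = pool.acquire();
console.log(prepare(t3, "s0")); // prepared statement "s0" already exists
```

With a real session (one client, one long-lived connection) neither error can occur, which is exactly why local development never reproduces it.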

The fix: disable prepared statements

The postgres npm package (which drizzle-orm/postgres-js uses) enables prepared statements by default. One option turns it off:

// lib/db/index.ts
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import * as schema from "./schema";

const client = postgres(process.env.DATABASE_URL!, {
  prepare: false,   // ← this is the fix
  max: 1,
  idle_timeout: 20,
  max_lifetime: 60 * 30,
  connect_timeout: 10,
});

export const db = drizzle(client, { schema });

prepare: false tells the postgres client to stop using named prepared statements (queries fall back to unnamed statements that don't persist across transactions). This is slightly slower per query because the database has to parse and plan on every execution — but it's the correct mode for transaction-mode connection pooling.

The performance cost is real but small. For the kinds of queries a typical SaaS API route runs — a few simple SELECT and INSERT statements — the plan-cache benefit of prepared statements is negligible compared to the round-trip latency to Supabase. We measured no perceptible difference before and after this change.

The second problem: connection exhaustion

The prepared statement error was the one that showed up in logs. The second problem was quieter: slow response times under traffic spikes that we initially blamed on query performance.

The cause: postgres defaults to max: 10 connections per client instance. In serverless, each function invocation is potentially a fresh Node.js process. At peak load, Vercel could spin up dozens of function instances simultaneously, each with max: 10, each trying to connect to Supavisor.

Supabase's free tier allows 60 simultaneous connections to the pooler. 10 instances × 10 connections = 100. We were hitting the ceiling and getting connection queuing, which showed up as slow API responses.
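The arithmetic is worth making explicit, since it's the whole failure mode. A back-of-the-envelope check (the 60-connection limit is the free-tier figure from this post; check your own plan's limits):

```typescript
// Total connections hitting the pooler = concurrent function
// instances × `max` connections per postgres client.
function poolerConnections(instances: number, maxPerClient: number): number {
  return instances * maxPerClient;
}

const POOLER_LIMIT = 60; // Supabase free tier, per the text above

console.log(poolerConnections(10, 10)); // 100 — over the limit, requests queue
console.log(poolerConnections(50, 1));  // 50 — under the limit even at 50 instances
```

The default max: 10 is sensible for a long-running server, where one process multiplexes many requests. In serverless, the process count scales with traffic, so the multiplier belongs in the platform, not the client.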

The fix is also in the client config:

postgres(process.env.DATABASE_URL!, {
  prepare: false,
  max: 1,             // one connection per serverless instance
  idle_timeout: 20,   // release the socket after 20s of idle
  max_lifetime: 60 * 30, // recycle connections every 30 min
  connect_timeout: 10,
});

With max: 1, each function instance holds at most one real connection. 50 active instances means 50 connections — well within the free-tier limit. The connection is released after 20 seconds of idle time (idle_timeout), and connections are recycled every 30 minutes (max_lifetime) to avoid issues with stale sockets.

The idle_timeout is important for serverless: if your function runs a query and then sits idle for 20 seconds before exiting, you don't want it holding a connection the whole time. The 20-second timeout releases it promptly.

Development setup: don't use the pooler

In local development, we use a local Postgres instance without any pooler. The prepare: false setting still works — it's just slower than it needs to be locally since prepared statements would be fine without a pooler.

The difference is the DATABASE_URL you point at:

# .env.local — direct connection (no pooler), for development and migrations
DATABASE_URL=postgresql://postgres:password@db.xxxxx.supabase.co:5432/postgres

# Production — Supavisor transaction mode (port 6543)
DATABASE_URL=postgresql://postgres.xxxxx:password@aws-0-region.pooler.supabase.com:6543/postgres
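If you'd rather not eat the (small) local slowdown, one option is to infer the mode from the connection string itself. This is a sketch resting on one assumption: port 6543 always means the transaction-mode pooler, and the port is explicit in the URL, as in the examples above:

```typescript
// Enable prepared statements only when DATABASE_URL points at a direct
// connection; disable them when it targets the Supavisor pooler (6543).
function usesTransactionPooler(databaseUrl: string): boolean {
  return new URL(databaseUrl).port === "6543";
}

const url = process.env.DATABASE_URL ?? "postgresql://postgres:password@localhost:5432/postgres";

// Then in the client config: prepare: !usesTransactionPooler(url)
console.log(usesTransactionPooler(url));
```

We didn't bother with this at Proofly — prepare: false everywhere is simpler and the local cost is unmeasurable — but it's there if the asymmetry bugs you.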

For Drizzle migrations, we always use the direct connection (port 5432), not the pooler. Drizzle Kit's migration commands need session-mode behavior. Running a migration through a transaction-mode pooler can fail in surprising ways if the pooler switches connections mid-migration.
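One way to wire this up — assuming a recent Drizzle Kit with `defineConfig`, and noting that DATABASE_URL_UNPOOLED is our own naming convention, not a Drizzle built-in — is to point drizzle.config.ts at the direct connection:

```typescript
// drizzle.config.ts — migrations always run over the direct (session-mode)
// connection on port 5432, never through the transaction-mode pooler.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./lib/db/schema.ts",
  out: "./drizzle",
  dbCredentials: {
    // Fall back to DATABASE_URL if no separate migration URL is set.
    url: process.env.DATABASE_URL_UNPOOLED ?? process.env.DATABASE_URL!,
  },
});
```

The runtime client keeps reading DATABASE_URL (the pooled URL), so the two paths can't get crossed by accident.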

The global client instance

In development with Next.js, hot-reloading creates new module instances on every code change. Without careful handling, this means a new postgres client and new connections on every hot reload — you'll exhaust your local connection limit in minutes.

The pattern to avoid this:

declare global {
  var __prooflyPg: ReturnType<typeof postgres> | undefined;
}

const client =
  global.__prooflyPg ??
  postgres(connectionString, {
    prepare: false,
    max: 1,
    idle_timeout: 20,
    max_lifetime: 60 * 30,
    connect_timeout: 10,
  });

if (process.env.NODE_ENV !== "production") {
  global.__prooflyPg = client;
}

export const db = drizzle(client, { schema });

In production, each serverless instance is an isolated Node.js process that loads this module once, so the global trick isn't needed — module scope already gives you one client per instance. In development, the global persists across hot reloads and prevents connection accumulation.

Summary: the two settings that matter

If you're running Drizzle ORM with Supabase Postgres on Vercel or another serverless platform:

postgres(process.env.DATABASE_URL!, {
  prepare: false,  // required for Supavisor transaction mode (port 6543)
  max: 1,          // one connection per serverless instance to avoid pool exhaustion
})

That's it. Both errors — the prepared statement collision and the connection exhaustion — go away. The rest of the config (idle_timeout, max_lifetime, connect_timeout) is quality-of-life for production stability.

The thing that makes this hard to discover is that it only shows up under real load in production. Your local Postgres and your staging database with one or two concurrent users will work fine with the defaults. The errors appear when you have enough traffic to spin up multiple concurrent function instances — exactly the moment you least want to be debugging a database driver configuration issue.

Frequently asked

What's the difference between Supabase's port 5432 and port 6543?

Port 5432 is a direct connection to the Postgres database — no pooler. This is what you use for local development and migrations. Port 6543 goes through Supavisor, Supabase's connection pooler, in transaction mode. Transaction mode multiplexes many application "connections" through a small pool of actual Postgres connections. The limitation is that transaction-mode pooling can't maintain prepared-statement caches between transactions, because different transactions may run on different actual connections.

Why does `max: 1` help in serverless? Won't that make queries slower?

Serverless functions run in isolated instances. Each function instance establishes its own connection to the pooler. With `max: 1`, each instance holds at most one real connection at a time. If you had `max: 10` and Vercel spun up 50 function instances under load, you'd have up to 500 simultaneous connections hitting Supavisor — which has its own connection limit. With `max: 1`, 50 instances means 50 connections. Queries don't get slower because each request is typically one query and the function is done.

Should I use port 5432 or 6543 for Drizzle migrations?

Port 5432 (direct connection, no pooler) for migrations. Drizzle Kit needs session-mode behavior to run migrations reliably, and the direct connection port gives it that. Set DATABASE_URL to the session-mode URL (port 5432) for migration commands and the transaction-mode URL (port 6543) for application runtime. Many teams use two separate environment variables: DATABASE_URL (runtime) and DATABASE_URL_UNPOOLED (migrations).