PostgreSQL
The most advanced open-source relational database. Transactions, constraints, JSON support, full-text search, and complex data models.

Through TenantsDB, each tenant gets their own PostgreSQL database with the same schema, deployed from a blueprint. The proxy handles routing, TLS, query logging, and settings enforcement. Your app connects with any standard PostgreSQL driver. No SDK needed.

The proxy supports the full PostgreSQL wire protocol including extended query protocol, prepared statements, and streaming results. Control workspaces give your application a managed backend database with full DDL access. Tenant workspaces track schema changes as blueprints for deployment.


Connect to Control Workspace
Your application's backend database. Users, billing, config. Full DDL and DML access, no blueprints.
Shell
psql "postgresql://tdb_2abf90d3:[email protected]:5432/controlplane_workspace?sslmode=require"

Control mode workspaces accept all DDL immediately. No blueprint versioning, no deployment step. Schema changes take effect as soon as you run them. Use this for your application's own tables that are not per-tenant.
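Because control mode skips blueprints entirely, running DDL is just executing a statement over any standard driver. A minimal sketch using psycopg2 (the `run_ddl` helper is illustrative, not part of TenantsDB; the URL is the control workspace connection string above):

```python
def run_ddl(conn, statement):
    """Execute one DDL statement and commit it.

    Control mode accepts all DDL immediately: the change is live as
    soon as the transaction commits, with no blueprint version and no
    deployment step.
    """
    with conn.cursor() as cur:
        cur.execute(statement)
    conn.commit()

# Usage with any standard driver, e.g. psycopg2:
#   conn = psycopg2.connect(
#       "postgresql://tdb_2abf90d3:[email protected]:5432/"
#       "controlplane_workspace?sslmode=require"
#   )
#   run_ddl(conn, "CREATE TABLE app_settings (key TEXT PRIMARY KEY, value TEXT)")
```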


Connect to Tenant Workspace
Where you design and iterate on your tenant schema. DDL changes are tracked as versioned blueprints.
Shell
psql "postgresql://tdb_2abf90d3:[email protected]:5432/myapp_workspace?sslmode=require"

Every CREATE TABLE, ALTER TABLE, or other DDL statement you run here is captured as a blueprint version. Deploy it to all tenants with tdb deployments create --blueprint myapp --all.


Connect to Tenant Databases
Isolated production databases for your customers. CRUD only. DDL is blocked.
acme
Shell
psql "postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
globex
Shell
psql "postgresql://tdb_2abf90d3:[email protected]:5432/myapp__globex?sslmode=require"
Same credentials, different database name. Each tenant's data is fully isolated. DDL statements return a clear error: "DDL not allowed on tenant databases - use workspace mode".

Build Schema
Connect to your tenant workspace and create tables. Every DDL change is tracked as a blueprint version.
SQL
CREATE TABLE accounts (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    balance DECIMAL(15,2) DEFAULT 0,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO accounts (name, email, balance) VALUES
  ('Alice', '[email protected]', 1000),
  ('Bob', '[email protected]', 2000);
Only DDL statements (CREATE TABLE, ALTER TABLE, etc.) are tracked as blueprint changes. DML statements (INSERT, UPDATE, DELETE) run in the workspace only and are not deployed to tenants.

You can also import an existing schema from another database or use a template. See tdb workspaces schema --help for all options.


ORM & Drivers
Copy-paste examples for every supported ORM and driver. Each example connects to a tenant database. For workspace or control connections, swap the database name.
Sequelize (Node.js)
JavaScript
const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize(
  'postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require'
);

// Define models
const Account = sequelize.define('Account', {
  name: { type: DataTypes.STRING(255), allowNull: false },
  email: { type: DataTypes.STRING(255), allowNull: false, unique: true },
  balance: { type: DataTypes.DECIMAL(15, 2), defaultValue: 0 },
}, { tableName: 'accounts', timestamps: true });

// Query
const accounts = await Account.findAll();
await Account.create({ name: 'Alice', email: '[email protected]', balance: 1000 });
Install
npm install sequelize pg pg-hstore
Sequelize uses the pg driver internally. The connection URL works as-is with ?sslmode=require.
SQLAlchemy (Python)
Python
from sqlalchemy import create_engine, Column, Integer, String, Numeric, DateTime
from sqlalchemy.orm import declarative_base, Session
from datetime import datetime, timezone

Base = declarative_base()

class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(255), nullable=False)
    email = Column(String(255), unique=True, nullable=False)
    balance = Column(Numeric(15, 2), default=0)
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))

engine = create_engine(
    "postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
)

# Query
with Session(engine) as s:
    accounts = s.query(Account).all()
    s.add(Account(name='Alice', email='[email protected]', balance=1000))
    s.commit()
Install
pip install sqlalchemy psycopg2-binary
The connection URL works as-is. PostgreSQL's ?sslmode=require is handled natively by psycopg2.
Prisma (Node.js)

Prisma connects through node-postgres (pg) via the @prisma/adapter-pg package. This is the same driver-adapter pattern used with managed Postgres platforms such as Neon and Supabase.

Schema
prisma/schema.prisma
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["driverAdapters"]
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Account {
  id        Int      @id @default(autoincrement())
  name      String   @db.VarChar(255)
  email     String   @unique @db.VarChar(255)
  balance   Decimal  @default(0) @db.Decimal(15, 2)
  createdAt DateTime @default(now()) @map("created_at")

  @@map("accounts")
}
Environment
.env
DATABASE_URL="postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
Client Setup
JavaScript
const { PrismaClient } = require('@prisma/client')
const { PrismaPg } = require('@prisma/adapter-pg')
const { Pool } = require('pg')

const pool = new Pool({ connectionString: process.env.DATABASE_URL })
const adapter = new PrismaPg(pool)
const prisma = new PrismaClient({ adapter })

// Query
const accounts = await prisma.account.findMany()
await prisma.account.create({
  data: { name: 'Alice', email: '[email protected]', balance: 1000 }
})
Install
npm install prisma @prisma/client @prisma/adapter-pg pg
Define your schema in the tenant workspace using SQL, then deploy via blueprints. Prisma Client handles all runtime CRUD through the adapter. Run npx prisma generate after changes to your schema.prisma file.
Go (pgx)
Go
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/jackc/pgx/v5"
)

func main() {
    ctx := context.Background()
    conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
    if err != nil {
        panic(err)
    }
    defer conn.Close(ctx)

    // Query
    rows, err := conn.Query(ctx, "SELECT id, name, balance FROM accounts")
    if err != nil {
        panic(err)
    }
    defer rows.Close()
    for rows.Next() {
        var id int
        var name string
        var balance float64
        if err := rows.Scan(&id, &name, &balance); err != nil {
            panic(err)
        }
        fmt.Printf("%d: %s ($%.2f)\n", id, name, balance)
    }
    if rows.Err() != nil {
        panic(rows.Err())
    }
}
Environment
DATABASE_URL="postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
Install
go get github.com/jackc/pgx/v5
ORMs like GORM use pgx under the hood. If pgx works, GORM works. The connection URL is the same.

Proxy Behavior
PostgreSQL-specific details about how the proxy handles your queries.
Wire Protocol

The proxy streams results row-by-row using the native PostgreSQL wire protocol. Memory usage stays flat even for large result sets. Extended query protocol and prepared statements are fully supported.
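To benefit from that streaming, the client must also avoid buffering the full result set. A sketch of incremental consumption with DB-API `fetchmany` batches (the `iter_rows` helper and batch size are illustrative; note that with psycopg2 a named, server-side cursor is needed to keep the driver itself from buffering all rows):

```python
def iter_rows(cursor, batch_size=500):
    """Yield rows one at a time, fetching from the server in batches.

    Combined with the proxy's row-by-row streaming, this keeps client
    memory flat even for very large result sets.
    """
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            return
        yield from rows

# Usage:
#   cur = conn.cursor("big_read")  # named cursor (psycopg2 server-side)
#   cur.execute("SELECT id, name, balance FROM accounts")
#   for row in iter_rows(cur):
#       process(row)
```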

Settings Enforcement

The proxy enforces max_rows_per_query, query_timeout_ms, and max_connections at the proxy level. These are configured per workspace and apply to all tenants using that blueprint.

Bulk Data

For bulk imports, use tdb workspaces import-full which connects directly to the source database, splits data by routing field, and creates tenants automatically. Multi-row INSERT statements work through the proxy for ongoing batch operations.

PostgreSQL COPY protocol is not supported through the proxy. Use the import endpoint or INSERT statements instead.
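Since COPY is unavailable through the proxy, ongoing batch loads use multi-row INSERTs. A sketch that chunks rows and builds a parameterized multi-row statement (the helpers are illustrative; the `accounts` columns come from the schema above, and the chunk size is arbitrary):

```python
def chunked(rows, size):
    """Split rows into lists of at most `size` for batched INSERTs."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def multirow_insert_sql(table, columns, row_count):
    """Build 'INSERT INTO t (a, b) VALUES (%s, %s), (%s, %s), ...'."""
    placeholders = "(" + ", ".join(["%s"] * len(columns)) + ")"
    values = ", ".join([placeholders] * row_count)
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values}"

# Usage with a DB-API cursor:
#   for batch in chunked(rows, 1000):
#       sql = multirow_insert_sql("accounts", ["name", "email", "balance"], len(batch))
#       cur.execute(sql, [v for row in batch for v in row])
```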
Limits

Standard PostgreSQL limits apply. TOAST handles large column values, and the proxy does not add any size restrictions on top of what PostgreSQL enforces. There is no proxy-level packet size limit for PostgreSQL.