Through TenantsDB, each tenant gets their own PostgreSQL database with the same schema, deployed from a blueprint. The proxy handles routing, TLS, query logging, and settings enforcement. Your app connects with any standard PostgreSQL driver. No SDK needed.
The proxy supports the full PostgreSQL wire protocol including extended query protocol, prepared statements, and streaming results. Control workspaces give your application a managed backend database with full DDL access. Tenant workspaces track schema changes as blueprints for deployment.
```bash
psql "postgresql://tdb_2abf90d3:[email protected]:5432/controlplane_workspace?sslmode=require"
```
Control mode workspaces accept all DDL immediately. No blueprint versioning, no deployment step. Schema changes take effect as soon as you run them. Use this for your application's own tables that are not per-tenant.
```bash
psql "postgresql://tdb_2abf90d3:[email protected]:5432/myapp_workspace?sslmode=require"
```
Every CREATE TABLE, ALTER TABLE, or other DDL statement you run here is captured as a blueprint version. Deploy it to all tenants with tdb deployments create --blueprint myapp --all.
```bash
psql "postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
psql "postgresql://tdb_2abf90d3:[email protected]:5432/myapp__globex?sslmode=require"
```
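The two connections above differ only in the database name, which encodes the blueprint and the tenant as blueprint__tenant. A minimal sketch of building a per-tenant DSN in Python; the helper name and the host proxy.example.com are assumptions for illustration, not TenantsDB APIs:

```python
from urllib.parse import quote

# Hypothetical helper: the tenant databases in the examples are named
# "<blueprint>__<tenant>" (e.g. myapp__acme). This assembles a standard
# PostgreSQL URL for a given tenant; any driver can consume it.
def tenant_dsn(user: str, password: str, host: str, blueprint: str, tenant: str) -> str:
    db = f"{blueprint}__{tenant}"
    return (
        f"postgresql://{quote(user)}:{quote(password)}@{host}:5432/"
        f"{db}?sslmode=require"
    )

print(tenant_dsn("tdb_2abf90d3", "secret", "proxy.example.com", "myapp", "acme"))
# postgresql://tdb_2abf90d3:secret@proxy.example.com:5432/myapp__acme?sslmode=require
```

Percent-encoding the credentials keeps passwords with special characters from breaking the URL.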
Tenant databases reject schema changes with the error: DDL not allowed on tenant databases - use workspace mode. Run DDL in the workspace instead:

```sql
CREATE TABLE accounts (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    balance DECIMAL(15,2) DEFAULT 0,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO accounts (name, email, balance) VALUES
    ('Alice', '[email protected]', 1000),
    ('Bob', '[email protected]', 2000);
```
DDL statements (CREATE TABLE, ALTER TABLE, etc.) are tracked as blueprint changes. DML statements (INSERT, UPDATE, DELETE) run in the workspace only and are not deployed to tenants.
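The capture rule above can be sketched as a rough keyword classifier. This is an illustration only; the real proxy parses statements properly, and the keyword sets here are assumptions:

```python
# Illustration of the capture rule: DDL is tracked as a blueprint change,
# DML stays in the workspace. The real proxy uses a proper SQL parser.
DDL_KEYWORDS = {"CREATE", "ALTER", "DROP"}
DML_KEYWORDS = {"INSERT", "UPDATE", "DELETE"}

def capture_target(sql: str) -> str:
    """Return where a workspace statement's effect lands."""
    keyword = sql.lstrip().split(None, 1)[0].upper()
    if keyword in DDL_KEYWORDS:
        return "blueprint"       # tracked and deployable to tenants
    if keyword in DML_KEYWORDS:
        return "workspace-only"  # runs here, never deployed
    return "other"

print(capture_target("CREATE TABLE accounts (id SERIAL)"))  # blueprint
print(capture_target("INSERT INTO accounts VALUES (1)"))    # workspace-only
```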
You can also import an existing schema from another database or use a template. See tdb workspaces schema --help for all options.
```js
const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize(
  'postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require'
);

// Define models
const Account = sequelize.define('Account', {
  name: { type: DataTypes.STRING(255), allowNull: false },
  email: { type: DataTypes.STRING(255), allowNull: false, unique: true },
  balance: { type: DataTypes.DECIMAL(15, 2), defaultValue: 0 },
}, { tableName: 'accounts', timestamps: true });

// Query
const accounts = await Account.findAll();
await Account.create({ name: 'Alice', email: '[email protected]', balance: 1000 });
```
```bash
npm install sequelize pg pg-hstore
```
Sequelize uses the pg driver internally. The connection URL works as-is with ?sslmode=require.

```python
from datetime import datetime

from sqlalchemy import create_engine, Column, Integer, String, Numeric, DateTime
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(255), nullable=False)
    email = Column(String(255), unique=True, nullable=False)
    balance = Column(Numeric(15, 2), default=0)
    created_at = Column(DateTime, default=datetime.utcnow)

engine = create_engine(
    "postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
)

# Query
with Session(engine) as s:
    accounts = s.query(Account).all()
    s.add(Account(name='Alice', email='[email protected]', balance=1000))
    s.commit()
```
```bash
pip install sqlalchemy psycopg2-binary
```
?sslmode=require is handled natively by psycopg2.
Prisma connects through node-postgres (pg) via the @prisma/adapter-pg package. This driver-adapter pattern is the same one used for managed database platforms such as Neon, PlanetScale, and Supabase.
```prisma
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["driverAdapters"]
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Account {
  id        Int      @id @default(autoincrement())
  name      String   @db.VarChar(255)
  email     String   @unique @db.VarChar(255)
  balance   Decimal  @default(0) @db.Decimal(15, 2)
  createdAt DateTime @default(now()) @map("created_at")

  @@map("accounts")
}
```
```bash
DATABASE_URL="postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
```
```js
const { PrismaClient } = require('@prisma/client')
const { PrismaPg } = require('@prisma/adapter-pg')
const { Pool } = require('pg')

const pool = new Pool({ connectionString: process.env.DATABASE_URL })
const adapter = new PrismaPg(pool)
const prisma = new PrismaClient({ adapter })

// Query
const accounts = await prisma.account.findMany()
await prisma.account.create({
  data: { name: 'Alice', email: '[email protected]', balance: 1000 }
})
```
```bash
npm install prisma @prisma/client @prisma/adapter-pg pg
```
Run npx prisma generate after changes to your schema.prisma file.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	// Query
	rows, err := conn.Query(ctx, "SELECT id, name, balance FROM accounts")
	if err != nil {
		panic(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id int
		var name string
		var balance float64
		if err := rows.Scan(&id, &name, &balance); err != nil {
			panic(err)
		}
		fmt.Printf("%d: %s ($%.2f)\n", id, name, balance)
	}
}
```
```bash
DATABASE_URL="postgresql://tdb_2abf90d3:[email protected]:5432/myapp__acme?sslmode=require"
```
```bash
go get github.com/jackc/pgx/v5
```
The proxy streams results row-by-row using the native PostgreSQL wire protocol. Memory usage stays flat even for large result sets. Extended query protocol and prepared statements are fully supported.
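On the client side, the same flat-memory behavior comes from consuming results in batches rather than materializing the whole set. A sketch of the pattern using a stub standing in for any DB-API cursor (e.g. psycopg2); the stub and helper are illustrations, not TenantsDB APIs:

```python
# Consume a large result set in fixed-size batches so the client never
# holds the full set in memory; the proxy streams rows the same way on
# the wire.
def iter_rows(cursor, batch_size=500):
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            return
        yield from batch

class StubCursor:
    """Fake DB-API cursor over an in-memory list, for demonstration."""
    def __init__(self, rows):
        self._rows = rows
        self._pos = 0

    def fetchmany(self, n):
        batch = self._rows[self._pos:self._pos + n]
        self._pos += n
        return batch

cur = StubCursor([(i, f"name-{i}") for i in range(1200)])
total = sum(1 for _ in iter_rows(cur, batch_size=500))
print(total)  # 1200
```

With a real driver, pair this with a server-side (named) cursor so the driver also fetches lazily.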
The proxy enforces max_rows_per_query, query_timeout_ms, and max_connections. These settings are configured per workspace and apply to all tenants using that blueprint.
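Because max_connections is a shared budget for the workspace, each application replica's pool should be sized so the replicas together stay under the cap. A sketch of that arithmetic; the function and example numbers are assumptions, not TenantsDB settings:

```python
# Divide the workspace's connection budget across app replicas, keeping a
# little headroom for admin/psql sessions. Hypothetical helper for sizing
# a driver's connection pool against the proxy's max_connections limit.
def pool_size_per_replica(max_connections: int, replicas: int, headroom: int = 2) -> int:
    usable = max_connections - headroom
    return max(1, usable // replicas)

print(pool_size_per_replica(max_connections=100, replicas=4))  # 24
```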
For bulk imports, use tdb workspaces import-full which connects directly to the source database, splits data by routing field, and creates tenants automatically. Multi-row INSERT statements work through the proxy for ongoing batch operations.
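For ongoing batch writes, a single parameterized multi-row INSERT keeps round trips low. A sketch of composing one; the helper is an illustration (any DB-API driver's execute() accepts the resulting SQL and flattened parameter list):

```python
# Build one parameterized multi-row INSERT for batch writes through the
# proxy. %s placeholders keep values out of the SQL text.
def build_multirow_insert(table, columns, rows):
    placeholders = ", ".join(
        "(" + ", ".join(["%s"] * len(columns)) + ")" for _ in rows
    )
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES {placeholders}"
    params = [v for row in rows for v in row]
    return sql, params

sql, params = build_multirow_insert(
    "accounts", ["name", "email"], [("Alice", "a@x"), ("Bob", "b@x")]
)
print(sql)     # INSERT INTO accounts (name, email) VALUES (%s, %s), (%s, %s)
print(params)  # ['Alice', 'a@x', 'Bob', 'b@x']
```

Note that table and column names are interpolated directly here, so they must come from trusted code, not user input.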
COPY protocol is not supported through the proxy; use the import endpoint or INSERT statements instead. Otherwise, standard PostgreSQL limits apply: TOAST handles large column values, and the proxy adds no size restrictions beyond what PostgreSQL enforces. There is no proxy-level packet size limit.