If you're building a SaaS product, every customer needs their own database. At 10 customers, you can manage that manually. At 100 or 1,000, it becomes your biggest infrastructure problem: provisioning, schema migrations, connection management, backups, isolation. All multiplied by every customer you onboard.
TenantsDB handles all of it. You design your schema in a workspace, and TenantsDB deploys it to every tenant as an isolated database. Each tenant gets their own database, their own data, their own connection string. Add a column in development, deploy it to all tenants with one command.
One API to provision databases across PostgreSQL, MySQL, MongoDB, and Redis. One CLI to manage every tenant. One proxy that routes your application to the right database automatically. That's it.
Think of it like version control for databases. A workspace is your working branch where you make changes. A blueprint is a commit, a versioned snapshot of your schema. A tenant is a deployed instance running that version.
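The analogy maps cleanly onto a small data model. A hypothetical sketch (illustrative only, not TenantsDB's actual internals) of how the three concepts relate:

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """A versioned snapshot of a workspace schema -- like a commit."""
    version: int
    schema_sql: str

@dataclass
class Workspace:
    """The working branch: schema changes land here, then get snapshotted."""
    name: str
    blueprints: list = field(default_factory=list)

    def commit(self, schema_sql: str) -> Blueprint:
        bp = Blueprint(version=len(self.blueprints) + 1, schema_sql=schema_sql)
        self.blueprints.append(bp)
        return bp

@dataclass
class Tenant:
    """A deployed instance pinned to one blueprint version."""
    name: str
    blueprint: Blueprint

ws = Workspace("myapp")
v1 = ws.commit("CREATE TABLE users (id SERIAL PRIMARY KEY);")
acme = Tenant("acme", blueprint=v1)
print(acme.blueprint.version)  # -> 1
```

Deploying a new blueprint would then amount to repointing each tenant at a newer version.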
Most multi-tenant solutions force you to choose between shared tables (one database, tenant_id column on every table) or managing separate databases yourself. Shared tables break isolation. Managing databases yourself doesn't scale.
TenantsDB gives you real database-per-tenant isolation with the simplicity of a single schema. You define your tables, indexes, and constraints in a workspace. TenantsDB versions that schema as a blueprint and deploys it identically to every tenant database. Need to add a column? Change it in the workspace, deploy to all tenants with one command.
```
# 1. Connect to your workspace and make changes
ALTER TABLE users ADD COLUMN phone TEXT;

# 2. Check what changed
$ tdb workspaces diff myapp

# 3. Deploy to every tenant
$ tdb deployments create --blueprint myapp --all
```
Start every tenant on L1 Shared: it's instant and cost-effective. When a customer upgrades to a premium plan or needs guaranteed performance, migrate them to L2 Dedicated with a single command. No data loss, no connection string changes.
```
# Create on shared (default)
$ tdb tenants create --name acme --blueprint myapp

# Upgrade to dedicated when they go enterprise
$ tdb tenants migrate acme --level 2 --blueprint myapp
```
Settings are defined per workspace and enforced automatically for every tenant that uses it. All values default to 0 (no limit); you configure only what you need.
Query settings:

| Setting | Type | Default | Description |
|---|---|---|---|
| query_timeout_ms | int | 0 | Kill queries that exceed this duration (milliseconds). Prevents runaway queries from consuming resources. |
| max_rows_per_query | int | 0 | Maximum rows returned per query. The proxy truncates results and includes a warning. GridFS collections are excluded automatically. |
| max_connections | int | 0 | Maximum concurrent proxy connections per tenant. New connections are rejected with a clear error when the limit is reached. |
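As an illustration of the truncate-and-warn semantics described for max_rows_per_query, here is a hedged sketch; the function name and settings shape are assumptions for illustration, not TenantsDB's API:

```python
def enforce_row_limit(rows, settings):
    """Apply max_rows_per_query as the table above describes:
    0 means no limit; otherwise truncate the result and attach a warning."""
    limit = settings.get("max_rows_per_query", 0)
    if limit and len(rows) > limit:
        return rows[:limit], f"result truncated to {limit} rows"
    return rows, None

rows, warning = enforce_row_limit(list(range(25)), {"max_rows_per_query": 10})
# rows now holds 10 items and warning is set; with the default of 0,
# results pass through untouched
```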
Redis workspaces have their own settings:

| Setting | Type | Default | Description |
|---|---|---|---|
| default_ttl | int | 0 | Default expiration (seconds) applied to keys that don't have an explicit TTL set. |
| max_keys | int | 0 | Maximum number of keys allowed per tenant. |
| patterns | map | - | Pattern-specific TTL rules. Each pattern (e.g., session:*) can have its own TTL and an enforced flag that overrides user-set TTLs. |
```
# View current settings
$ tdb workspaces settings myapp --json

# Update settings via API
$ curl -X POST https://api.tenantsdb.com/workspaces/myapp/settings \
  -H "Authorization: Bearer $API_KEY" \
  -d '{"query_timeout_ms": 5000, "max_rows_per_query": 10000}'

# Redis settings via API
$ curl -X POST https://api.tenantsdb.com/workspaces/cache/settings \
  -H "Authorization: Bearer $API_KEY" \
  -d '{"default_ttl": 3600, "max_keys": 100000}'
```
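To make the pattern rules concrete, here is an illustrative sketch of how a pattern TTL could resolve against user-set and default TTLs. The function and settings shape are hypothetical, but the precedence follows the table above: enforced patterns override user TTLs, and default_ttl fills in when nothing else is set.

```python
from fnmatch import fnmatch

def effective_ttl(key, user_ttl, settings):
    """Resolve the TTL for a key (hypothetical logic, for illustration):
    1. an enforced pattern rule always wins;
    2. a non-enforced pattern applies only when the user set no TTL;
    3. otherwise keep the user TTL, else fall back to default_ttl
       (0 = no default)."""
    for pattern, rule in settings.get("patterns", {}).items():
        if fnmatch(key, pattern):
            if rule.get("enforced") or user_ttl is None:
                return rule["ttl"]
    if user_ttl is not None:
        return user_ttl
    return settings.get("default_ttl", 0) or None

settings = {
    "default_ttl": 3600,
    "patterns": {"session:*": {"ttl": 1800, "enforced": True}},
}
effective_ttl("session:abc", 9999, settings)  # enforced pattern wins -> 1800
effective_ttl("user:1", None, settings)       # falls back to default -> 3600
```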
When you migrate a tenant to dedicated infrastructure, TenantsDB uses native database replication to copy data while your application continues running normally. The actual downtime during cutover is typically under 2 seconds.
| Database | Mechanism | Notes |
|---|---|---|
| PostgreSQL | Logical replication | Publication on source, subscription on target. Streams row-level changes in real time. |
| MySQL | Binlog replication | Standard source→replica setup with per-database filtering. |
| MongoDB | Change streams | Watches all operations on the source database and applies them to the target. |
| Redis | Dump & restore | Redis does not support per-database replication. Migrations require a brief interruption while data is transferred. |
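The PostgreSQL row above ("publication on source, subscription on target") refers to standard logical replication. A minimal sketch of what such a setup looks like in plain PostgreSQL, with illustrative names; TenantsDB manages this for you:

```sql
-- On the source (shared) server: publish the tenant database's tables
CREATE PUBLICATION tenant_migration FOR ALL TABLES;

-- On the target (dedicated) server: subscribe and start streaming changes
CREATE SUBSCRIPTION tenant_migration_sub
  CONNECTION 'host=source-host dbname=acme user=replicator'
  PUBLICATION tenant_migration;
```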
| Migration | Zero Downtime | Command |
|---|---|---|
| Shared → Dedicated | ✓ | tdb tenants migrate acme --level 2 --blueprint myapp |
| Region change | ✓ | tdb tenants migrate acme --level 2 --blueprint myapp --region us-east |
| Dedicated → Shared | ✓ | tdb tenants migrate acme --level 1 --blueprint myapp |
You can poll the tenant status to track progress. During a zero-downtime migration, the status moves through migrating_sync (data is replicating, queries work normally) → migrating (brief cutover, ~2 seconds) → ready. The CLI shows a live progress spinner automatically.
```
# Check migration status
$ tdb tenants get acme
Status: migrating_sync

# ... seconds later ...
Status: ready
```
Every query through TenantsDB passes through TCP connection handling, wire-protocol parsing, authentication, tenant routing, connection pooling, database execution, and response encoding. Total platform overhead across the four engines is 1.33 to 2.43ms at p50.
| Database | Direct p50 | Proxy p50 | Overhead | QPS (1 tenant) |
|---|---|---|---|---|
| PostgreSQL | 0.82ms | 2.23ms | +1.41ms | 2,039 |
| MySQL | 1.01ms | 2.34ms | +1.33ms | 1,776 |
| MongoDB | 1.45ms | 3.32ms | +1.87ms | 1,467 |
| Redis | 0.66ms | 3.09ms | +2.43ms | 1,260 |
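The Overhead column is simply the proxy p50 minus the direct p50:

```python
# p50 latencies from the table above (milliseconds): (direct, proxy)
benchmarks = {
    "PostgreSQL": (0.82, 2.23),
    "MySQL":      (1.01, 2.34),
    "MongoDB":    (1.45, 3.32),
    "Redis":      (0.66, 3.09),
}
overhead = {db: round(proxy - direct, 2)
            for db, (direct, proxy) in benchmarks.items()}
# {'PostgreSQL': 1.41, 'MySQL': 1.33, 'MongoDB': 1.87, 'Redis': 2.43}
```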
At 100 concurrent tenants under sustained mixed read/write workloads, the proxy maintains fair scheduling across all engines. No tenant gets starved regardless of how many tenants share the infrastructure.
| Database | Aggregate QPS | p50 | p95 | Fairness | Errors |
|---|---|---|---|---|---|
| PostgreSQL | 3,926 | 12.81ms | 62.08ms | 1.6x | 0 |
| MySQL | 2,460 | 11.66ms | 98.81ms | 2.1x | 0 |
| MongoDB | 1,467 | 43.06ms | 98.00ms | 1.2x | 0 |
| Redis | 1,195 | 94.41ms | 187.85ms | 1.0x | 0 |
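The exact definition of the Fairness column isn't spelled out here; a common way to report fairness is the ratio of the fastest tenant's throughput to the slowest's, where 1.0x means every tenant gets an identical share. A sketch under that assumption, with made-up per-tenant numbers:

```python
def fairness_ratio(tenant_qps):
    """Fastest-to-slowest throughput ratio across tenants
    (assumed metric; 1.0 = perfectly even)."""
    return round(max(tenant_qps) / min(tenant_qps), 1)

# Hypothetical per-tenant QPS samples, not from the benchmark above
fairness_ratio([40, 38, 41, 39])  # near-even -> 1.1
fairness_ratio([60, 30, 45, 50])  # one tenant lagging -> 2.0
```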
Under extreme noisy neighbor pressure (9 tenants running 45 concurrent writers), all engines maintain sub-17ms latency on L1 Shared. For latency-critical workloads, L2 Dedicated eliminates noisy neighbor impact entirely.
All database connections are encrypted with TLS, terminated at the edge for PostgreSQL, MySQL, MongoDB, and Redis alike. Your application connects using standard connection strings with TLS enabled; no additional configuration is required.
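In practice that means your driver's normal TLS flags work unchanged. Illustrative connection strings (hostnames and credentials are placeholders):

```shell
# PostgreSQL -- require TLS in the connection string
psql "postgresql://app_user@acme.db.example.com:5432/acme?sslmode=require"

# MySQL
mysql --host=acme.db.example.com --ssl-mode=REQUIRED

# MongoDB
mongosh "mongodb://acme.db.example.com:27017/acme?tls=true"

# Redis -- the rediss:// scheme enables TLS
redis-cli -u "rediss://acme.db.example.com:6380"
```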
Tenant databases are protected against unauthorized schema changes. DDL statements (CREATE, ALTER, DROP, TRUNCATE) are blocked at the proxy level on all tenant database connections. Schema changes can only reach tenant databases through the blueprint deployment system. This prevents accidental drift and ensures every tenant runs the same schema version.
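A hedged sketch of the idea: the real proxy inspects statements at the wire-protocol level, but a first-keyword check captures the DDL-blocking behavior described above. Function name and shape are illustrative assumptions:

```python
import re

# Statement types blocked on tenant connections, per the text above
BLOCKED_DDL = ("CREATE", "ALTER", "DROP", "TRUNCATE")

def is_blocked_ddl(sql: str) -> bool:
    """Return True if the statement's leading keyword is DDL."""
    first = re.split(r"\s+", sql.strip(), maxsplit=1)[0].upper()
    return first in BLOCKED_DDL

is_blocked_ddl("ALTER TABLE users ADD COLUMN phone TEXT")  # True -> rejected
is_blocked_ddl("SELECT * FROM users")                      # False -> passed through
```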
The platform enforces rate limits at two layers. IP-based rate limiting protects against DDoS and connection flooding. Authentication tracking detects brute force attempts and temporarily bans offending IPs after repeated failures. Both layers apply to API and proxy connections.
| Protection | Scope | Description |
|---|---|---|
| TLS encryption | All connections | Database and API connections encrypted in transit. Standard TLS with automatic certificate management. |
| DDL blocking | Tenant databases | Schema changes blocked on tenant connections. Changes deploy only through blueprints. |
| Rate limiting | API and proxy | Per-IP connection rate limits with automatic ban on repeated violations. |
| Brute force protection | All endpoints | Temporary IP ban after repeated authentication failures. |
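One standard way to implement the per-IP connection rate limiting described above is a token bucket per source IP. A minimal illustrative sketch (not the platform's actual implementation; rate and burst values are arbitrary):

```python
import time
from collections import defaultdict

class IPRateLimiter:
    """Token-bucket limiter keyed by source IP: each IP earns `rate`
    tokens per second up to `burst`, and each connection spends one."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.buckets = defaultdict(lambda: [burst, time.monotonic()])

    def allow(self, ip: str) -> bool:
        tokens, last = self.buckets[ip]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[ip] = [tokens - 1, now]
            return True
        self.buckets[ip] = [tokens, now]
        return False

limiter = IPRateLimiter(rate=1.0, burst=3)
[limiter.allow("203.0.113.7") for _ in range(4)]  # [True, True, True, False]
```

A temporary ban, as described for brute force protection, would be one more per-IP timestamp checked before the bucket.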
TenantsDB supports four database engines. That usually means four query languages: SQL dialects for Postgres and MySQL, MongoDB's query API, and Redis commands. OmniQL eliminates that. It's a universal query language that compiles to native commands for each backend. You write one syntax, the engine translates it instantly.
OmniQL is not an ORM or a wrapper. It's a compiler. Every query is parsed into an AST, validated for correctness, then translated to the native dialect of your target database. The output runs at native speed with zero overhead.
```
// You write OmniQL
:GET User WHERE id = 42

// PostgreSQL
SELECT * FROM "users" WHERE "id" = 42;

// MySQL
SELECT * FROM `users` WHERE `id` = 42;

// MongoDB
db.users.find({ id: 42 });

// Redis
HGETALL users:42
```
DDL works the same way. Define a table once in OmniQL, and it translates to the correct native structure for each database.
```
// You write OmniQL
:CREATE TABLE User WITH id:AUTO, name:STRING:NOTNULL, email:STRING:UNIQUE

// PostgreSQL
CREATE TABLE "users" (
  "id" SERIAL PRIMARY KEY,
  "name" VARCHAR(255) NOT NULL,
  "email" VARCHAR(255) UNIQUE
);

// MySQL
CREATE TABLE `users` (
  `id` INT AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR(255) NOT NULL,
  `email` VARCHAR(255) UNIQUE
);

// MongoDB
db.createCollection("users")
```
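A toy illustration of the compile-to-native idea: one logical lookup emitted in each backend's dialect. The real compiler parses a full AST; this sketch only handles equality on a single column and is not OmniQL's implementation:

```python
def translate_get(table: str, key: str, value):
    """Emit the equivalent of `:GET <table> WHERE <key> = <value>`
    in each backend's native syntax (toy version, equality only)."""
    return {
        "postgresql": f'SELECT * FROM "{table}" WHERE "{key}" = {value};',
        "mysql": f"SELECT * FROM `{table}` WHERE `{key}` = {value};",
        "mongodb": f"db.{table}.find({{ {key}: {value} }});",
        "redis": f"HGETALL {table}:{value}",
    }

translate_get("users", "id", 42)["redis"]  # 'HGETALL users:42'
```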
Different data has different needs. Financial transactions belong in PostgreSQL. User sessions belong in Redis. Product catalogs work best in MongoDB. With TenantsDB, a single tenant can have databases across all four, each with its own blueprint, its own isolation level, and its own connection string.
```
# Create workspaces for each database type
$ tdb workspaces create --name orders --database PostgreSQL --mode tenant
$ tdb workspaces create --name cache --database Redis --mode tenant

# Create a tenant with both
$ tdb tenants create --name acme --blueprint orders
$ tdb tenants create --name acme --blueprint cache
```