Continuous Migration
pgbranch includes a built-in continuous migration engine that copies a PostgreSQL database from one server to another using logical replication. It handles schema extraction, initial data snapshot, and real-time WAL streaming in a single coordinated process.
This is designed for small-to-medium databases where you need a reliable, resumable sync between two PostgreSQL instances — whether you’re moving between cloud providers, consolidating databases, or setting up a read replica outside your primary provider.
When to Use This
Continuous migration fits a specific set of problems:
- Provider migration — moving from Neon to PlanetScale Postgres, Supabase to RDS, or any hosted PostgreSQL to another
- Cross-region sync — copying a database to a different region or availability zone
- Environment seeding — keeping a staging database in sync with production data
- Provider evaluation — running a target database in parallel before cutting over
For databases under 50 GB, the full migration (schema + snapshot + streaming) typically completes the initial sync in minutes, then keeps the target up to date in real time until you’re ready to cut over.
How It Works
The migration runs in three phases, each building on the previous one:
```
Source Database                                    Target Database
┌──────────────┐                                   ┌──────────────┐
│              │   Phase 1: Schema                 │              │
│ Tables       │ ────────────────────────────────▶ │ Tables       │
│ Indexes      │   DDL extraction                  │ Indexes      │
│ Constraints  │   and application                 │ Constraints  │
│ Enums        │                                   │ Enums        │
│ Sequences    │                                   │ Sequences    │
│              │                                   │              │
│              │   Phase 2: Snapshot               │              │
│ All rows     │ ────────────────────────────────▶ │ All rows     │
│              │   COPY protocol                   │              │
│              │   (consistent point)              │              │
│              │                                   │              │
│ New writes   │   Phase 3: Streaming              │ Applied      │
│ (WAL)       │ ────────────────────────────────▶ │ in real      │
│              │   INSERT/UPDATE/DELETE            │ time         │
└──────────────┘                                   └──────────────┘
```

Phase 1: Schema
pgbranch extracts the full schema from the source — tables, columns, indexes, constraints, enums, sequences, and functions. It applies them to the target in dependency order:
- Sequences and enums first
- Tables without foreign keys
- Indexes and non-FK constraints
- Foreign key constraints last
This ordering avoids reference errors during application. Primary keys are included in the initial table creation.
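As a sketch, the staged ordering above amounts to a simple sort key. The stage table and object-type names below are illustrative, not pgbranch's internal API:

```python
# Illustrative stage numbers for dependency-ordered schema application.
STAGE = {
    "sequence": 0,          # sequences and enums first
    "enum": 0,
    "table": 1,             # tables (with primary keys) next
    "index": 2,             # indexes and non-FK constraints
    "check_constraint": 2,
    "foreign_key": 3,       # foreign keys last
}

def apply_order(objects):
    """Sort schema objects so nothing references a missing dependency."""
    return sorted(objects, key=lambda o: STAGE[o["type"]])

ddl = [
    {"type": "foreign_key", "name": "orders_user_id_fkey"},
    {"type": "table", "name": "orders"},
    {"type": "sequence", "name": "orders_id_seq"},
    {"type": "index", "name": "orders_created_at_idx"},
]
print([o["name"] for o in apply_order(ddl)])
# → ['orders_id_seq', 'orders', 'orders_created_at_idx', 'orders_user_id_fkey']
```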
Phase 2: Snapshot
The initial data copy uses PostgreSQL’s COPY protocol for bulk transfer. Before copying, pgbranch creates a replication slot on the source. This marks a consistent point-in-time — any writes that happen after this point are captured by WAL streaming in Phase 3.
Tables are copied in foreign-key dependency order (topological sort) so that parent rows exist before child rows referencing them.
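The dependency ordering is a topological sort of the foreign-key graph. A minimal sketch using Kahn's algorithm (the table names and FK map are illustrative):

```python
from collections import deque

def copy_order(tables, fkeys):
    """Topologically sort tables so FK parents are copied before children.

    fkeys maps each child table to the set of parent tables it references.
    """
    deps = {t: set(fkeys.get(t, ())) for t in tables}
    ready = deque(t for t in tables if not deps[t])
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for child, parents in deps.items():
            if t in parents:
                parents.discard(t)
                if not parents and child not in order and child not in ready:
                    ready.append(child)
    if len(order) != len(tables):
        raise ValueError("FK cycle detected; cyclic tables need special handling")
    return order

print(copy_order(
    ["orders", "users", "order_items"],
    {"orders": {"users"}, "order_items": {"orders"}},
))
# → ['users', 'orders', 'order_items']
```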
For a 5 GB database on a fast connection, the snapshot phase typically takes 2-5 minutes.
Phase 3: WAL Streaming
Once the snapshot completes, pgbranch begins streaming changes from the replication slot. Every INSERT, UPDATE, DELETE, and TRUNCATE on the source is applied to the target in real time.
The stream continues indefinitely until you stop it. This gives you a live, continuously updated copy of your source database — run it for minutes or days while you validate the target.
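To make the apply step concrete, here is a simplified sketch that renders decoded change records as parameterized SQL. The `change` dict shape is invented for this example; real pgoutput messages carry relation metadata and typed tuples:

```python
def to_sql(change):
    """Render a decoded logical replication change as a parameterized SQL string."""
    table, op = change["table"], change["op"]
    if op == "insert":
        cols = ", ".join(change["new"])
        vals = ", ".join(f"%({c})s" for c in change["new"])
        return f"INSERT INTO {table} ({cols}) VALUES ({vals})"
    if op == "update":
        sets = ", ".join(f"{c} = %({c})s" for c in change["new"])
        return f"UPDATE {table} SET {sets} WHERE {change['key']} = %({change['key']})s"
    if op == "delete":
        return f"DELETE FROM {table} WHERE {change['key']} = %({change['key']})s"
    if op == "truncate":
        return f"TRUNCATE {table}"
    raise ValueError(f"unsupported op: {op}")

print(to_sql({"op": "insert", "table": "public.users",
              "new": {"id": 1, "email": "a@b.c"}}))
# → INSERT INTO public.users (id, email) VALUES (%(id)s, %(email)s)
```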
Configuration
Create a YAML configuration file:
```yaml
source:
  host: ep-cool-frost-123456.us-east-2.aws.neon.tech
  port: 5432
  database: myapp
  user: myapp_owner
  password: ${NEON_PASSWORD}
  sslmode: require

target:
  host: pg-planetscale-abc.us-east-1.aws.connect.psdb.cloud
  port: 5432
  database: myapp
  user: myapp_admin
  password: ${TARGET_PASSWORD}
  sslmode: require

tables:
  - "*"

slot_name: pgbranch_migrate
publication_name: pgbranch_pub
```

Configuration Reference
| Field | Description | Default |
|---|---|---|
| source.host | Source PostgreSQL hostname | — |
| source.port | Source port | 5432 |
| source.database | Source database name | — |
| source.user | Source user (needs replication privileges) | — |
| source.password | Source password | — |
| source.sslmode | SSL mode (disable, require, prefer) | prefer |
| target.host | Target PostgreSQL hostname | — |
| target.port | Target port | 5432 |
| target.database | Target database name | — |
| target.user | Target user (needs write privileges) | — |
| target.password | Target password | — |
| target.sslmode | SSL mode | prefer |
| tables | List of tables or ["*"] for all | — |
| slot_name | Replication slot name | migrate_slot |
| publication_name | Publication name | migrate_pub |
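The example configuration uses `${NEON_PASSWORD}`-style placeholders. pgbranch's exact expansion rules aren't documented here, but a minimal sketch of this kind of environment-variable substitution looks like:

```python
import os
import re

_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env(text):
    """Replace ${VAR} placeholders with environment values; fail loudly if unset."""
    def sub(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _VAR.sub(sub, text)

os.environ["NEON_PASSWORD"] = "s3cret"   # for demonstration only
print(expand_env("password: ${NEON_PASSWORD}"))
# → password: s3cret
```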
Table Selection
You can migrate all tables or a specific subset:
```yaml
# All user tables (excludes system catalogs)
tables:
  - "*"
```
```yaml
# Specific tables
tables:
  - public.users
  - public.orders
  - public.products
```
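A sketch of the name normalization these forms imply, with bare names falling back to the public schema (the `qualify` helper is hypothetical, not part of pgbranch):

```python
def qualify(name, default_schema="public"):
    """Split a table entry into (schema, table), defaulting to the public schema."""
    schema, sep, table = name.partition(".")
    return (schema, table) if sep else (default_schema, name)

print(qualify("public.orders"))  # → ('public', 'orders')
print(qualify("users"))          # → ('public', 'users')
```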
```yaml
# Tables default to public schema if not specified
tables:
  - users
  - orders
```

Running a Migration
Full Migration (Snapshot + Streaming)
```sh
pgbranch migrate --config migrate.yaml
```

This runs all three phases. After the snapshot completes, WAL streaming continues until you press Ctrl+C. The target stays in sync with the source in real time.
Schema Only
```sh
pgbranch migrate --config migrate.yaml --schema-only
```

Copies the schema without any data. Useful for validating that the target provider supports all your schema objects before running a full migration.
Snapshot Only (No Streaming)
```sh
pgbranch migrate --config migrate.yaml --snapshot-only
```

Copies schema and data but stops after the snapshot. No WAL streaming. Good for one-time copies where you don’t need continuous sync.
Progress Tracking
In a terminal, pgbranch displays an interactive progress view:
```
Continuous Migration
Phase: snapshot

public.users     ████████████████████   12,450 / 12,450    3,200 rows/s
public.orders    ████████████░░░░░░░░   89,301 / 145,000   5,100 rows/s
public.products  ░░░░░░░░░░░░░░░░░░░░   pending

Elapsed: 00:01:34
```

During streaming:
```
Continuous Migration
Phase: streaming

LSN: 0/1A3B4C50
Total ops: 4,521    Inserts: 3,102    Updates: 1,204    Deletes: 215
Throughput: 142 ops/s    Uptime: 00:31:52

Press Ctrl+C to stop
```

In non-interactive environments (CI, logs), it outputs plain text with timestamps.
Resumability
Migrations are fully resumable. pgbranch writes a checkpoint file alongside your config:
```
migrate.yaml
pgbranch_migrate.checkpoint.json
```

The checkpoint tracks:
- Which tables have been copied
- How many rows were transferred per table
- The last confirmed WAL position (LSN)
- Whether schema has been applied
If the migration is interrupted — network failure, process killed, machine restart — rerun the same command. pgbranch picks up where it left off:
- Skips schema application if already done
- Skips completed tables during snapshot
- Resumes WAL streaming from the last confirmed position
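A sketch of how a rerun might interpret the checkpoint. The JSON field names here are hypothetical; pgbranch's actual checkpoint format may differ:

```python
import json

# Hypothetical checkpoint contents, mirroring the fields described above.
checkpoint = json.loads("""{
  "schema_applied": true,
  "tables_done": ["public.users"],
  "rows_copied": {"public.users": 12450, "public.orders": 89301},
  "last_lsn": "0/1A3B4C50"
}""")

def resume_plan(cp, all_tables):
    """Decide what a rerun needs to do, given a checkpoint."""
    plan = []
    if not cp["schema_applied"]:
        plan.append("apply schema")
    for table in all_tables:
        if table not in cp["tables_done"]:
            plan.append(f"copy {table}")
    plan.append(f"stream from {cp['last_lsn']}")
    return plan

print(resume_plan(checkpoint, ["public.users", "public.orders"]))
# → ['copy public.orders', 'stream from 0/1A3B4C50']
```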
Error Handling and Reconnection
WAL streaming handles transient failures automatically:
- Connection drops — reconnects with exponential backoff (1s, 2s, 4s… up to 30s, max 10 retries)
- Network timeouts — sends standby heartbeats every 10 seconds to prevent slot timeout
- Transaction safety — rolls back any in-flight transaction before reconnecting
Only transient errors (connection reset, EOF, network failures) trigger retries. Schema errors, permission issues, and data conflicts fail immediately with a clear error message.
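The reconnect schedule described above (1s, 2s, 4s, capped at 30s, up to 10 attempts) can be sketched as:

```python
def backoff_delays(base=1.0, cap=30.0, retries=10):
    """Exponential backoff schedule: 1s, 2s, 4s, ... capped at 30s."""
    return [min(base * 2 ** i, cap) for i in range(retries)]

print(backoff_delays())
# → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0, 30.0, 30.0, 30.0]
```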
Provider-Specific Notes
Section titled “Provider-Specific Notes”Neon (Source)
Enable logical replication in your Neon project settings. The connection string uses the pooled endpoint by default — for replication, use the direct (non-pooled) connection string:
```yaml
source:
  host: ep-cool-frost-123456.us-east-2.aws.neon.tech
  sslmode: require
```

Supabase (Source)
Enable logical replication under Database > Replication in the Supabase dashboard. Use the direct connection (port 5432), not the pooled connection (port 6543):
```yaml
source:
  host: db.xxxxxxxxxxxx.supabase.co
  port: 5432
  sslmode: require
```

Amazon RDS / Aurora (Source)
Set the rds.logical_replication parameter to 1 in your parameter group and reboot the instance:
```yaml
source:
  host: mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com
  port: 5432
  sslmode: require
```

Google Cloud SQL (Source)
Enable the cloudsql.logical_decoding flag in your instance settings:
```yaml
source:
  host: /cloudsql/project:region:instance
  sslmode: disable
```

Any Provider (Target)
The target needs no special configuration beyond a user with write permissions. Standard PostgreSQL connections work.
Cutover Checklist
When you’re ready to switch your application to the target database:
- Verify data — spot-check row counts and recent records on the target
- Stop writes to source — pause your application or set the source to read-only
- Wait for stream to drain — watch the streaming view until operations drop to zero
- Stop the migration — press Ctrl+C or send SIGINT
- Update connection strings — point your application to the target database
- Restart your application — verify everything works against the target
- Clean up — the replication slot and publication on the source can be dropped
Limitations
Section titled “Limitations”PostgreSQL Only
Both source and target must be PostgreSQL. This is not a heterogeneous migration tool — it uses PostgreSQL’s native logical replication protocol.
Logical Replication Requirements
- Source must have wal_level = logical
- Source must have available replication slots (max_replication_slots)
- Tables without primary keys or replica identity cannot stream UPDATE/DELETE operations
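The UPDATE/DELETE restriction follows PostgreSQL's replica identity rules. A simplified mirror of that logic, keyed on pg_class.relreplident values:

```python
def can_stream_dml(relreplident, has_primary_key):
    """Simplified mirror of PostgreSQL's replica identity rules for UPDATE/DELETE.

    relreplident is pg_class.relreplident: 'd' default (uses the primary key),
    'n' nothing, 'f' full row, 'i' a designated unique index.
    """
    if relreplident == "d":
        return has_primary_key
    return relreplident in ("f", "i")

assert can_stream_dml("d", has_primary_key=True)        # PK, default identity
assert not can_stream_dml("d", has_primary_key=False)   # no PK: cannot stream
assert can_stream_dml("f", has_primary_key=False)       # REPLICA IDENTITY FULL
```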
Schema Changes
DDL changes on the source after the migration starts are not replicated. If you add columns or tables to the source during streaming, stop the migration and re-run it.
Large Object and Extension Support
Large objects (lo type) are not replicated through logical replication. PostgreSQL extensions must be installed on the target manually before running the migration.
Performance
Section titled “Performance”| Database Size | Snapshot Time | Streaming Latency |
|---|---|---|
| 100 MB | ~10 seconds | < 1 second |
| 1 GB | ~30 seconds | < 1 second |
| 5 GB | ~2 minutes | < 1 second |
| 10 GB | ~5 minutes | < 1 second |
| 50 GB | ~25 minutes | < 1 second |
Times depend on network bandwidth between source and target. Streaming latency is the delay between a write on the source and its application on the target under normal load.
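As a back-of-envelope check, the table implies an effective transfer rate of roughly 35-45 MB/s. A hypothetical estimator assuming ~40 MB/s:

```python
def estimate_snapshot_minutes(db_gb, effective_mb_per_s=40.0):
    """Rough snapshot time estimate from an assumed effective transfer rate."""
    return db_gb * 1024 / effective_mb_per_s / 60

print(round(estimate_snapshot_minutes(5), 1))   # ~2 minutes, matching the table
```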
Next Steps
- Installation guide — install pgbranch
- How pgbranch works — understand template databases
- CLI Reference — full command reference