Continuous Migration

pgbranch includes a built-in continuous migration engine that copies a PostgreSQL database from one server to another using logical replication. It handles schema extraction, initial data snapshot, and real-time WAL streaming in a single coordinated process.

This is designed for small-to-medium databases where you need a reliable, resumable sync between two PostgreSQL instances — whether you’re moving between cloud providers, consolidating databases, or setting up a read replica outside your primary provider.

Continuous migration fits a specific set of problems:

  • Provider migration — moving from Neon to PlanetScale Postgres, Supabase to RDS, or any hosted PostgreSQL to another
  • Cross-region sync — copying a database to a different region or availability zone
  • Environment seeding — keeping a staging database in sync with production data
  • Provider evaluation — running a target database in parallel before cutting over

For databases under 50 GB, the full migration (schema + snapshot + streaming) typically completes the initial sync in minutes, then keeps the target up to date in real time until you’re ready to cut over.

The migration runs in three phases, each building on the previous one:

```
 Source Database                                 Target Database
┌──────────────┐                                ┌──────────────┐
│              │   Phase 1: Schema              │              │
│  Tables      │───────────────────────────────▶│  Tables      │
│  Indexes     │   DDL extraction               │  Indexes     │
│  Constraints │   and application              │  Constraints │
│  Enums       │                                │  Enums       │
│  Sequences   │                                │  Sequences   │
│              │                                │              │
│              │   Phase 2: Snapshot            │              │
│  All rows    │───────────────────────────────▶│  All rows    │
│              │   COPY protocol                │              │
│              │   (consistent point)           │              │
│              │                                │              │
│  New writes  │   Phase 3: Streaming           │  Applied     │
│  (WAL)       │───────────────────────────────▶│  in real     │
│              │   INSERT/UPDATE/DELETE         │  time        │
└──────────────┘                                └──────────────┘
```

pgbranch extracts the full schema from the source — tables, columns, indexes, constraints, enums, sequences, and functions. It applies them to the target in dependency order:

  1. Sequences and enums first
  2. Tables without foreign keys
  3. Indexes and non-FK constraints
  4. Foreign key constraints last

This ordering avoids reference errors during application. Primary keys are included in the initial table creation.

The initial data copy uses PostgreSQL’s COPY protocol for bulk transfer. Before copying, pgbranch creates a replication slot on the source. This marks a consistent point-in-time — any writes that happen after this point are captured by WAL streaming in Phase 3.

Tables are copied in foreign-key dependency order (topological sort) so that parent rows exist before child rows referencing them.
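The dependency ordering can be sketched as a topological sort over the foreign-key graph. A minimal illustration using Python's standard-library `graphlib` (the table names and dependency map below are made up for the example):

```python
from graphlib import TopologicalSorter

# Hypothetical FK map: each table lists the parent tables it references.
fk_deps = {
    "order_items": {"orders", "products"},
    "orders": {"users"},
    "products": set(),
    "users": set(),
}

# static_order() yields every node after all of its predecessors, so
# parent tables are copied before the child tables that reference them.
copy_order = list(TopologicalSorter(fk_deps).static_order())
print(copy_order)
```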

For a 5 GB database on a fast connection, the snapshot phase typically takes 2-5 minutes.

Once the snapshot completes, pgbranch begins streaming changes from the replication slot. Every INSERT, UPDATE, DELETE, and TRUNCATE on the source is applied to the target in real time.

The stream continues indefinitely until you stop it. This gives you a live, continuously updated copy of your source database — run it for minutes or days while you validate the target.

Create a YAML configuration file:

```yaml
# migrate.yaml
source:
  host: ep-cool-frost-123456.us-east-2.aws.neon.tech
  port: 5432
  database: myapp
  user: myapp_owner
  password: ${NEON_PASSWORD}
  sslmode: require

target:
  host: pg-planetscale-abc.us-east-1.aws.connect.psdb.cloud
  port: 5432
  database: myapp
  user: myapp_admin
  password: ${TARGET_PASSWORD}
  sslmode: require

tables:
  - "*"

slot_name: pgbranch_migrate
publication_name: pgbranch_pub
```
| Field | Description | Default |
| --- | --- | --- |
| `source.host` | Source PostgreSQL hostname | |
| `source.port` | Source port | `5432` |
| `source.database` | Source database name | |
| `source.user` | Source user (needs replication privileges) | |
| `source.password` | Source password | |
| `source.sslmode` | SSL mode (`disable`, `require`, `prefer`) | `prefer` |
| `target.host` | Target PostgreSQL hostname | |
| `target.port` | Target port | `5432` |
| `target.database` | Target database name | |
| `target.user` | Target user (needs write privileges) | |
| `target.password` | Target password | |
| `target.sslmode` | SSL mode | `prefer` |
| `tables` | List of tables, or `["*"]` for all | |
| `slot_name` | Replication slot name | `migrate_slot` |
| `publication_name` | Publication name | `migrate_pub` |
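Two behaviors the table implies are `${VAR}` expansion in password fields and per-connection defaults. pgbranch's own config parser is internal, so treat this as an illustrative sketch of those two rules only:

```python
import os
import re

# Defaults implied by the table above (illustrative, not pgbranch source).
CONN_DEFAULTS = {"port": 5432, "sslmode": "prefer"}

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders from the environment; fail loudly if unset."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return re.sub(r"\$\{(\w+)\}", sub, value)

os.environ["NEON_PASSWORD"] = "s3cret"  # demo value only

# Explicit fields override the defaults; missing ones fall back to them.
source = {**CONN_DEFAULTS,
          "host": "ep-cool-frost-123456.us-east-2.aws.neon.tech",
          "password": expand_env("${NEON_PASSWORD}"),
          "sslmode": "require"}
print(source["port"], source["sslmode"])
```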

You can migrate all tables or a specific subset:

```yaml
# All user tables (excludes system catalogs)
tables:
  - "*"
```

```yaml
# Specific tables
tables:
  - public.users
  - public.orders
  - public.products
```

```yaml
# Tables default to the public schema if no schema is given
tables:
  - users
  - orders
```
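The default-schema rule is simple enough to sketch directly (the helper name is mine, not pgbranch's):

```python
def qualify(table: str, default_schema: str = "public") -> str:
    # Bare table names are treated as living in the public schema;
    # names that already carry a schema are left untouched.
    return table if "." in table else f"{default_schema}.{table}"

print([qualify(t) for t in ["users", "orders", "analytics.events"]])
```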
```sh
pgbranch migrate --config migrate.yaml
```

This runs all three phases. After the snapshot completes, WAL streaming continues until you press Ctrl+C. The target stays in sync with the source in real time.

```sh
pgbranch migrate --config migrate.yaml --schema-only
```

Copies the schema without any data. Useful for validating that the target provider supports all your schema objects before running a full migration.

```sh
pgbranch migrate --config migrate.yaml --snapshot-only
```

Copies schema and data but stops after the snapshot. No WAL streaming. Good for one-time copies where you don’t need continuous sync.

In a terminal, pgbranch displays an interactive progress view:

```
Continuous Migration
Phase: snapshot

public.users     ████████████████████  12,450 / 12,450    3,200 rows/s
public.orders    ████████████░░░░░░░░  89,301 / 145,000   5,100 rows/s
public.products  ░░░░░░░░░░░░░░░░░░░░  pending

Elapsed: 00:01:34
```

During streaming:

```
Continuous Migration
Phase: streaming

LSN:        0/1A3B4C50
Total ops:  4,521
  Inserts:  3,102
  Updates:  1,204
  Deletes:  215
Throughput: 142 ops/s
Uptime:     00:31:52

Press Ctrl+C to stop
```

In non-interactive environments (CI, logs), it outputs plain text with timestamps.
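The `LSN` shown in the streaming view is a 64-bit WAL position written as two hex halves separated by `/`. A small sketch of how such values compare numerically (the helper name is mine):

```python
def lsn_to_int(lsn: str) -> int:
    # "0/1A3B4C50" -> high 32 bits, "/", low 32 bits of the WAL position.
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

# Later WAL positions always compare greater as integers.
print(lsn_to_int("0/1A3B4C50") < lsn_to_int("1/0"))
```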

Migrations are fully resumable. pgbranch writes a checkpoint file alongside your config:

```
migrate.yaml
pgbranch_migrate.checkpoint.json
```

The checkpoint tracks:

  • Which tables have been copied
  • How many rows were transferred per table
  • The last confirmed WAL position (LSN)
  • Whether schema has been applied

If the migration is interrupted — network failure, process killed, machine restart — rerun the same command. pgbranch picks up where it left off:

  • Skips schema application if already done
  • Skips completed tables during snapshot
  • Resumes WAL streaming from the last confirmed position
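The resume logic above can be sketched as: reload the checkpoint, then subtract confirmed work from the plan. The field names and file contents here are hypothetical; the real checkpoint layout is pgbranch's own:

```python
import json
import os
import tempfile

# Hypothetical checkpoint contents mirroring the list above.
checkpoint = {
    "schema_applied": True,
    "tables_done": {"public.users": 12450, "public.orders": 145000},
    "last_lsn": "0/1A3B4C50",
}

path = os.path.join(tempfile.mkdtemp(), "pgbranch_migrate.checkpoint.json")
with open(path, "w") as f:
    json.dump(checkpoint, f)

# On restart: reload state and skip work that is already confirmed done.
with open(path) as f:
    state = json.load(f)

all_tables = ["public.users", "public.orders", "public.products"]
remaining = [t for t in all_tables if t not in state["tables_done"]]
print(remaining, state["last_lsn"])
```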

WAL streaming handles transient failures automatically:

  • Connection drops — reconnects with exponential backoff (1s, 2s, 4s… up to 30s, max 10 retries)
  • Network timeouts — sends standby heartbeats every 10 seconds to prevent slot timeout
  • Transaction safety — rolls back any in-flight transaction before reconnecting

Only transient errors (connection reset, EOF, network failures) trigger retries. Schema errors, permission issues, and data conflicts fail immediately with a clear error message.
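The retry schedule described above (doubling from 1s, capped at 30s, at most 10 attempts) works out to the following delays; a minimal sketch:

```python
def backoff_delays(base: float = 1.0, cap: float = 30.0, retries: int = 10) -> list[float]:
    # Doubles the wait after each failed attempt and caps it at 30s.
    delays, delay = [], base
    for _ in range(retries):
        delays.append(min(delay, cap))
        delay *= 2
    return delays

print(backoff_delays())
# [1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0, 30.0, 30.0, 30.0]
```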

Enable logical replication in your Neon project settings. The connection string uses the pooled endpoint by default — for replication, use the direct (non-pooled) connection string:

```yaml
source:
  host: ep-cool-frost-123456.us-east-2.aws.neon.tech
  sslmode: require
```

Enable logical replication under Database > Replication in the Supabase dashboard. Use the direct connection (port 5432), not the pooled connection (port 6543):

```yaml
source:
  host: db.xxxxxxxxxxxx.supabase.co
  port: 5432
  sslmode: require
```

Set the rds.logical_replication parameter to 1 in your parameter group and reboot the instance:

```yaml
source:
  host: mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com
  port: 5432
  sslmode: require
```

Enable the cloudsql.logical_decoding flag in your instance settings:

```yaml
source:
  host: /cloudsql/project:region:instance
  sslmode: disable
```

The target needs no special configuration beyond a user with write permissions. Standard PostgreSQL connections work.

When you’re ready to switch your application to the target database:

  1. Verify data — spot-check row counts and recent records on the target
  2. Stop writes to source — pause your application or set the source to read-only
  3. Wait for stream to drain — watch the streaming view until operations drop to zero
  4. Stop the migration — press Ctrl+C or send SIGINT
  5. Update connection strings — point your application to the target database
  6. Restart your application — verify everything works against the target
  7. Clean up — the replication slot and publication on the source can be dropped
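For step 1, a spot check can be as simple as comparing per-table row counts collected from each side (for example with `SELECT count(*)`). A sketch with made-up numbers:

```python
# Per-table row counts as you might collect them from each database;
# the values here are invented for the example.
source_counts = {"public.users": 12450, "public.orders": 145000}
target_counts = {"public.users": 12450, "public.orders": 144998}

# Report any table whose target count differs (or is missing entirely).
mismatches = {t: (source_counts[t], target_counts.get(t))
              for t in source_counts
              if target_counts.get(t) != source_counts[t]}
print(mismatches)
```

A non-empty result during cutover usually just means the stream has not fully drained yet; re-check after operations drop to zero.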

Both source and target must be PostgreSQL. This is not a heterogeneous migration tool — it uses PostgreSQL’s native logical replication protocol.

  • Source must have wal_level = logical
  • Source must have available replication slots (max_replication_slots)
  • Tables without primary keys or replica identity cannot stream UPDATE/DELETE operations

DDL changes on the source after the migration starts are not replicated. If you add columns or tables to the source during streaming, stop the migration and re-run it.

Large objects (lo type) are not replicated through logical replication. PostgreSQL extensions must be installed on the target manually before running the migration.

| Database Size | Snapshot Time | Streaming Latency |
| --- | --- | --- |
| 100 MB | ~10 seconds | < 1 second |
| 1 GB | ~30 seconds | < 1 second |
| 5 GB | ~2 minutes | < 1 second |
| 10 GB | ~5 minutes | < 1 second |
| 50 GB | ~25 minutes | < 1 second |

Times depend on network bandwidth between source and target. Streaming latency is the delay between a write on the source and its application on the target under normal load.
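The snapshot times above are roughly consistent with a sustained transfer rate in the tens of MB/s. A back-of-envelope sketch (the 35 MB/s figure is illustrative, not a measured number, and ignores the per-table startup overhead that dominates small databases):

```python
def snapshot_minutes(size_gb: float, mb_per_s: float = 35.0) -> float:
    # time = size / throughput
    return size_gb * 1024 / mb_per_s / 60

for size in (1, 5, 10, 50):
    print(f"{size} GB -> ~{snapshot_minutes(size):.1f} min")
```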