[Bucardo-general] new Bucardo installation with a 1TB DB

Lucas Possamai drum.lucas at gmail.com
Wed Feb 6 03:39:18 UTC 2019


We're moving our EC2 Postgres 1TB DB to RDS and decided to use Bucardo, as
it supports multiple PG versions.
We have a PG 9.2 cluster, and instead of upgrading it, we'll use Bucardo to
replicate to a PG 9.6, then use DMS or pg_dump to restore it in RDS.
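For the final load into RDS, a minimal pg_dump/pg_restore sketch could look like
the following; the hostnames, database name, and user are placeholders, not
taken from this setup:

```shell
# Dump the 9.6 slave in custom format (placeholders: hosts, db, user).
pg_dump -Fc -h pg96-slave.internal -U postgres -d mydb -f mydb.dump

# Restore into the RDS endpoint; --no-owner avoids role mismatches on RDS,
# and --jobs parallelizes the restore of a large database.
pg_restore -h mydb.example.us-east-1.rds.amazonaws.com -U postgres \
    -d mydb --no-owner --jobs=4 mydb.dump
```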

Because of the size of the current 9.2 DB, I can't stop the application from
writing to the DB for the whole time it takes to pg_dump it and restore it on
the new Bucardo slave.

So, I thought I would do something like this:

   1. Install Bucardo and add large tables to a pushdelta sync
   2. Copy the tables to the new server (e.g. with pg_dump)
   3. Start up Bucardo and let it catch up (i.e. copy all row changes since
   step 2)
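A rough sketch of those three steps with the Bucardo CLI might look like this.
The connection details and table names are placeholders, and the pushdelta
sync type is Bucardo 4 syntax; in Bucardo 5 the equivalent is a sync with one
source and one target database, so check the syntax against your installed
version:

```shell
# 1. Register both databases and the large tables, then create a one-way
#    (master -> slave) sync. Bucardo installs its delta-tracking triggers
#    on the master at this point, so changes start being recorded.
bucardo add db master dbname=mydb host=pg92-master user=bucardo
bucardo add db slave  dbname=mydb host=pg96-slave  user=bucardo
bucardo add table public.big_table1 public.big_table2 db=master herd=bigtables
bucardo add sync mysync source=bigtables targetdb=slave type=pushdelta

# 2. Copy the existing data while the sync is still stopped; any rows
#    modified during the copy are already being tracked by the triggers.
pg_dump -h pg92-master -t public.big_table1 -t public.big_table2 mydb \
    | psql -h pg96-slave mydb

# 3. Start Bucardo; it replays the rows changed since the triggers
#    were installed in step 1.
bucardo start
```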

The steps for the above would be pretty much like the ones in this article,
if I'm not mistaken.

My questions are:

   1. Is this the right approach? Do you have any other suggestions?
   2. I'll have the following: pg-9.2 master --> bucardo instance (with the
   bucardo DB only) --> pg-9.6 slave from bucardo
      1. When doing the pg_dump, I only need to restore it on the "pg-9.6
      slave from bucardo" instance, correct? The bucardo DB does not store the
      replicated data itself, right?
