[Bucardo-general] new Bucardo installation with a 1TB DB
drum.lucas at gmail.com
Wed Feb 6 20:18:18 UTC 2019
On Thu, Feb 7, 2019 at 3:09 AM David Christensen <david at endpoint.com> wrote:
> > We're moving our EC2 Postgres 1TB DB to RDS and decided to use Bucardo,
> as it supports multiple PG versions.
> > We have a PG 9.2 cluster, and instead of upgrading it, we'll use Bucardo
> to replicate to a PG 9.6, then use DMS or pg_dump to restore it in RDS.
> > Because of the size of the current 9.2 DB, I cannot stop the application
> from writing to the DB while I do a pg_dump and restore it on the new
> Bucardo slave.
> > So, I thought I would do something like this:
> > • Install Bucardo and add large tables to a pushdelta sync
> > • Copy the tables to the new server (e.g. with pg_dump)
> > • Start up Bucardo and catch things up (e.g. copy all row changes
> since step 2)
> > Steps for the above would be pretty much like this article, if I'm not mistaken.
> > My questions are:
> > • Is it the right approach? Do you guys have any other suggestions?
> Yeah, as long as you’re doing a one-way sync (master -> target) it’s
> sufficient to start capturing the deltas (generally by creating a sync with
> autokick off), then pg_dump the data, then kick the sync / set autokick
> until you’re caught up, then cut over the app as needed.
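The workflow described above could be sketched roughly like this with the Bucardo CLI. All database names, hostnames, table names, and the sync name are placeholders, and the exact options vary between Bucardo versions (Bucardo 4 used an explicit pushdelta sync type; Bucardo 5 expresses the same one-way flow via source/target roles in `dbs=`):

```shell
# Register the source (9.2 master) and target (9.6 slave) with Bucardo.
# Hosts, database names, and users below are placeholders.
bucardo add db source_db dbname=app host=pg92.example.com user=bucardo
bucardo add db target_db dbname=app host=pg96.example.com user=bucardo

# Add the large tables and group them into a relgroup.
bucardo add table public.big_table1 public.big_table2 db=source_db relgroup=big_tables

# Create the one-way sync with autokick off, so deltas accumulate on the
# source but are not yet replayed on the target.
bucardo add sync big_sync relgroup=big_tables dbs=source_db:source,target_db:target autokick=0

# Start Bucardo so the triggers begin capturing changes on the source.
bucardo start

# Copy the existing data while changes queue up.
pg_dump -h pg92.example.com -Fc -t big_table1 -t big_table2 app \
  | pg_restore -h pg96.example.com -d app

# Replay the accumulated deltas, then let the sync run continuously.
bucardo kick big_sync
bucardo update sync big_sync autokick=1
```

Once the sync is keeping up, the application can be cut over to the target.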
First of all, thanks for your reply!
That would be pushdelta <https://bucardo.org/pushdelta/>, correct?
> > • I'll have the following: pg-9.2 master --> bucardo instance
> (with the bucardo DB only) --> pg-9.6 slave from bucardo
> > • When doing the pg_dump, I only need to restore it on the
> "pg-9.6 slave from bucardo" instance, correct? the bucardo DB does not
> store the data?
> Right, “bucardo” database holds meta-information about the syncs, sync
> history, etc, but no user data.
> > Thanks!