[Bucardo-general] Bucardo sync onetimecopy=2 skipping many tables
Adam McQuistan
adam.mcquistan at thecodinginterface.com
Wed Oct 14 03:33:27 UTC 2020
Hello all,
I have a fairly simple install and configuration of Bucardo, with the
intention of replicating a PostgreSQL 10 instance to another PostgreSQL 10
instance (actually Aurora in PostgreSQL mode) in order to move off a
particular cloud platform (Heroku) and onto AWS RDS (Aurora).
Setup:
1) I dump the source schema (removing every constraint and secondary index)
and use that to build the new target DB's schema
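For reference, the constraint-stripping part of step 1 can be sketched like
this (the heredoc stands in for a real `pg_dump --schema-only` output file,
and all table/index names are made up for illustration):

```shell
# Sample stand-in for a `pg_dump --schema-only` output file.
cat > schema.sql <<'EOF'
CREATE TABLE public.users (
    id integer NOT NULL,
    email text
);
CREATE INDEX users_email_idx ON public.users USING btree (email);
ALTER TABLE ONLY public.users
    ADD CONSTRAINT users_pkey PRIMARY KEY (id);
EOF

# Drop whole CREATE INDEX / ALTER TABLE statements (through their
# terminating semicolon) so the target schema carries no secondary
# indexes or constraints during the initial copy. Note this also drops
# other ALTER TABLE statements (e.g. column defaults), so review the
# result before loading it into the target.
awk '
  /^(CREATE( UNIQUE)? INDEX|ALTER TABLE)/ { skip = 1 }
  !skip { print }
  skip && /;[[:space:]]*$/ { skip = 0 }
' schema.sql > schema_noconstraints.sql

cat schema_noconstraints.sql
```

The stripped file is then loaded into the target with `psql -f`.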
2) add source db to bucardo
bucardo add db herokudevdb dbhost=ec2-11-22-33-44.compute-1.amazonaws.com
dbport=5432 dbname=thedbname dbuser=thedbuser dbpass=thedbpasswd
3) add target db to bucardo
bucardo add db auroradevdb dbhost=dev-db-url.rds.amazonaws.com dbport=5432
dbname=thedbname dbuser=thedbuser dbpass=thedbpasswd
4) add all tables
bucardo add all tables db=herokudevdb --herd=herokudevherd --verbose
5) create a dbgroup
bucardo add dbgroup herokudevmigration herokudevdb:source
auroradevdb:target --verbose
6) add a sync
bucardo add sync herokudevsync relgroup=herokudevherd
dbs=herokudevmigration onetimecopy=2 stayalive=1 kidsalive=1 autokick=1
--verbose
7) validate
bucardo validate sync herokudevsync # Validating sync herokudevsync ... OK
8) start bucardo
sudo bucardo start
9) check status
bucardo status herokudevsync
======================================================================
Last good                : Oct 14, 2020 02:57:43 (time to run: 1s)
Rows deleted/inserted    : 3 / 3
Sync name                : herokudevsync
Current state            : Good
Source relgroup/database : herokudevherd / herokudevdb
Tables in sync           : 258
Status                   : Active
Check time               : None
Overdue time             : 00:00:00
Expired time             : 00:00:00
Stayalive/Kidsalive      : Yes / Yes
Rebuild index            : No
Autokick                 : Yes
Onetimecopy              : No
Post-copy analyze        : Yes
Last error:              :
======================================================================
As you can see, there are quite a few tables (258), but only some of them
(maybe around 20) get copied, usually around 60K rows in total. My log file
shows no errors. Is there something I need to do to force that first run to
execute for a longer period so it copies all the tables?
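To pin down exactly which tables were skipped, one approach is to capture
per-table row counts from both sides and diff them. A sketch, assuming the
count files are produced as shown in the comment (the psql invocation and
all table names here are hypothetical; `n_live_tup` is only an estimate,
so use `SELECT count(*)` per table if you need exact numbers):

```shell
# On each side, a query like this (via psql -At) can produce
# sorted "table|rows" lines:
#   psql -h <host> -U thedbuser -d thedbname -At -c \
#     "SELECT relname || '|' || n_live_tup
#      FROM pg_stat_user_tables ORDER BY relname" > source_counts.txt
# Sample stand-ins for the two captured files:
cat > source_counts.txt <<'EOF'
accounts|1200
orders|56000
users|3400
EOF
cat > target_counts.txt <<'EOF'
accounts|1200
orders|0
users|3400
EOF

# Join on table name and print only the tables whose counts differ.
join -t '|' source_counts.txt target_counts.txt \
    | awk -F'|' '$2 != $3 { print $1 ": source=" $2 " target=" $3 }'
```

Any table listed in the output was missed (or only partially copied) by
the initial onetimecopy run.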
I'd appreciate any help to better understand what I am doing wrong here.
Also, assuming I can properly migrate this small dev database (around 10 GB
of data total), my ultimate goal is to move the full prod DB (around 3 TB)
with as little downtime as possible.
Thanks,
Adam