[Bucardo-general] Syncing a large table

Jeff Ross rossj at cargotel.com
Thu Jul 25 15:27:52 UTC 2019


Excellent--thanks!

Jeff

Jeff Ross
rossj at cargotel.com

On 7/25/19 6:20 AM, Mertens, B.J.H. (Bart) wrote:
> Hi Jeff,
>
> Something like this should work.
>
> bucardo add sync load_events_sync relgroup=load_events dbs=cargotel_sync,metro_sync type=pushdelta status=inactive
> pg_dump & pg_restore
> bucardo update sync load_events_sync status=active
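>
> Spelled out a bit more concretely (just a sketch -- the schema, table and
> database names below are my guesses from this thread, so adjust as needed):
>
> # dump only the big table from the source, then restore its data on the target
> pg_dump -Fc -t metro.load_events -f load_events.dump cargotel_dev
> pg_restore --data-only -d metro_sync load_events.dump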
>
> Best regards,
> Bart Mertens
>
> -----Original Message-----
> From: Bucardo-general [mailto:bucardo-general-bounces at bucardo.org] On Behalf Of Jeff Ross
> Sent: Tuesday, 23 July 2019 20:38
> To: bucardo-general at bucardo.org
> Subject: [Bucardo-general] Syncing a large table
>
> I have a pretty large table (157 million rows) that I need to add to a sync group.  I've tried multiple times now to get the group to sync but it continues to fail.  I removed that large table from the group and made another sync group with just that table.
>
> Now the original group without that large table syncs just fine--but the new sync group continues to fail.
>
> I made the sync with this:
>
> bucardo add sync load_events_sync relgroup=load_events dbs=cargotel_sync,metro_sync onetimecopy=1
>
>
> The error message I'm getting is this:
>
> [postgres at aeneas ~]$ bucardo status load_events_sync
> ======================================================================
> Last bad                 : Jul 23, 2019 10:43:40 (time until fail: 2h 11m 26s)
> Sync name                : load_events_sync
> Current state            : Bad
> Source relgroup/database : load_events / cargotel_sync
> Tables in sync           : 1
> Status                   : Active
> Check time               : None
> Overdue time             : 00:00:00
> Expired time             : 00:00:00
> Stayalive/Kidsalive      : Yes / Yes
> Rebuild index            : No
> Autokick                 : Yes
> Onetimecopy              : Yes
> Post-copy analyze        : Yes
> Last error:              : Failed : DBD::Pg::st execute failed: SSL SYSCALL error: EOF detected at /usr/local/share/perl5/Bucardo.pm line 4830. Line: 5037 Main DB state: 08000 Error: 7 DB cargotel_sync state: ? Error: none DB metro_sync state: ? Error: none  (KID 5245)
>
> This seems more like a timeout than an SSL SYSCALL error.
>
> Is it possible to pre-load that big table on the target database with
> pg_dump and then start the sync?  It appears the answer is no: reading
> Bucardo.pm around that line 4830 shows that Bucardo works hard to make
> sure the target table is empty before starting.
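>
> (Aside: if I'm reading the Bucardo docs right, that truncate-and-copy
> behaviour belongs to onetimecopy=1, while onetimecopy=2 is described as
> doing the full copy only for tables that are empty on the target, so a
> pre-loaded table would be left alone, e.g.
>
> bucardo update sync load_events_sync onetimecopy=2
>
> I haven't verified that against this version, though.)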
>
> What might I do to get this to finish?
>
> And, related: when I get this sort of failure, the size of that single
> table in the target database is greater than that of the entire source
> schema!
>
> Here's the source schema:
>
> postgres at cargotel_dev [local]# select pg_size_pretty(pg_schema_size('metro'));
>    pg_size_pretty
> ────────────────
>    101 GB
> (1 row)
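>
> (pg_schema_size() isn't a stock PostgreSQL function, just a local
> convenience wrapper -- a rough equivalent using only built-ins would be
> something like:
>
> select pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint)
>   from pg_class c
>   join pg_namespace n on n.oid = c.relnamespace
>  where n.nspname = 'metro' and c.relkind = 'r';
> )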
>
>
>
> Here's the size of the target database:
>
> metro_sync at metro_sync pgbouncer.cargotel.com# \l+
> List of databases
>
> ─[ RECORD 1 ]─────┼───────────────────────────────────────────
> Name              │ metro_sync
> Owner             │ metro_sync
> Encoding          │ UTF8
> Collate           │ en_US.UTF-8
> Ctype             │ en_US.UTF-8
> Access privileges │
> Size              │ 175 GB
> Tablespace        │ pg_default
> Description       │
>
> After dropping that load_events table, the metro_sync database is only 32 GB.
>
> So I'd appreciate some clarification on why that is.
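>
> To compare like for like I can check the single relation on each side
> (assuming the table lives in the metro schema on both):
>
> select pg_size_pretty(pg_total_relation_size('metro.load_events'));
>
> My guess is that the repeated failed copy attempts leave dead rows behind,
> so the target relation stays bloated until it's vacuumed or the table is
> dropped -- but I'd welcome confirmation.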
>
> Thanks in advance!
>
> Jeff Ross
>



