[Bucardo-general] Fwd: Syncing smaller batches

Ioana Danes ioanadanes at gmail.com
Mon Nov 2 17:03:48 UTC 2015

---------- Forwarded message ----------
From: Ioana Danes <ioanadanes at gmail.com>
Date: Mon, Nov 2, 2015 at 12:03 PM
Subject: Re: [Bucardo-general] Syncing smaller batches
To: Greg Sabino Mullane <greg at endpoint.com>

Hello Greg,

On Mon, Nov 2, 2015 at 10:54 AM, Greg Sabino Mullane <greg at endpoint.com> wrote:

> On Fri, Oct 30, 2015 at 11:47:31AM -0400, Ioana Danes wrote:
> > Hi,
> >
> > Is there a way to configure how many records to sync at a time? Something
> > similar to statement_chunk_size, but committing the sync transactions
> > with only a limited number of records rather than all the delta changes?
> No, it is all or nothing, to keep the database consistent. Is there some
> problem that doing partial records would solve for you? Maybe there is
> a different solution.
I am testing Bucardo in an environment where many clients (~1200) create
lots of small transactions (1200 per second). Each transaction has 2 inserts,
one in table 1 and one in table 2. There are also lots of other tables, but
they change rarely compared with these 2 tables.

If I enable autosync, I can only have about 520 clients connected and the
TPS drops by half. If I try to start more clients, the TPS drops to 0 and
the clients start timing out.

With autosync disabled, performance on the primary master is good up to
1200 TPS, so I created a script that kicks the sync in an endless loop,
but it has a hard time keeping up with the master. I actually created 3
syncs, one for the OTHER tables, one for table 1, and one for table 2,
just for testing. Even this way the standby master is way behind. The
sync is very slow, but I am not sure in which part; probably the one that
builds the delete statements and the COPY command. The servers are on the
same MSA and have identical configurations.
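For reference, the "endless loop" script I mentioned is roughly like the
sketch below. The sync names (sync_other, sync_table1, sync_table2) are just
placeholders for my three test syncs; the trailing "0" asks bucardo to wait
until the kick has finished before returning.

```shell
#!/bin/sh
# Sketch of the kick loop; sync names are placeholders for my setup.
# BUCARDO can be overridden (e.g. for a non-default install path).
BUCARDO="${BUCARDO:-bucardo}"

# Kick each sync once; the timeout argument "0" makes bucardo
# block until that kick has completed.
kick_all() {
    for sync in sync_other sync_table1 sync_table2; do
        "$BUCARDO" kick "$sync" 0
    done
}

# Endless loop as described above; uncomment to run:
# while :; do kick_all; done
```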

So my problem is the standby master keeping up with the primary master. I
think that is because of the volume of data being synced in one
transaction, so I wanted to try different batch sizes.

I am not planning to have both servers as masters at the same time, and I
would probably be fine with DB inconsistency for these 2 tables most of
the time. I only want to be covered in case of a server restart.

I also tried Londiste, and at the same TPS the lag is very small, but
unfortunately it does not offer multi-master replication.


> --
> Greg Sabino Mullane greg at endpoint.com
> End Point Corporation
> PGP Key: 0x14964AC8
