[Bucardo-general] Initial population of collections from large PostgreSQL tables

Greg Sabino Mullane greg at endpoint.com
Sun Apr 29 03:41:46 UTC 2012

On Fri, Apr 27, 2012 at 02:39:48PM +0100, Ali Asad Lotia wrote:
> The error reported in the mongo log is:
> Fri Apr 27 12:28:26 [conn128] recv(): message len 187968085 is too
> large187968085
> From my understanding, the amount of data we are sending to mongo in the
> query generated by the sync is too large for MongoDB to handle. If my
> understanding is correct, is there a way to get the sync to divide the sync
> up into multiple queries to mongo that don't overflow the maximum defined
> message length?

Yes. I would guess that the first problem is the mass delete we send to the 
MongoDB server. I've divided that up in the latest git push: see commit 

If it still fails, you may want to set statement_chunk_size lower. If
it *still* fails, see if you can figure out roughly where in Bucardo.pm it
is failing by looking at the last lines written to the bucardo.log file
before it blows up.
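The chunking idea is simple: instead of one giant delete whose $in list blows past MongoDB's maximum message length, issue one delete per batch of keys. Here is a minimal sketch in Python, not Bucardo's actual Perl code; the function names, the delete_many-style interface, and the default batch size are all assumptions for illustration:

```python
def chunked(items, size):
    """Yield successive slices of at most `size` elements."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def delete_in_chunks(collection, keys, chunk_size=10000):
    """Delete documents in batches rather than with one huge $in query.

    `collection` is anything exposing a pymongo-style delete_many();
    each call sends a separate, bounded-size message to the server.
    """
    deleted = 0
    for batch in chunked(keys, chunk_size):
        result = collection.delete_many({"_id": {"$in": batch}})
        deleted += result.deleted_count
    return deleted
```

Lowering the chunk size (the analogue of statement_chunk_size here) trades more round trips for smaller individual messages.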

Greg Sabino Mullane greg at endpoint.com
End Point Corporation
PGP Key: 0x14964AC8
