[Bucardo-general] Word of warning - when queries get too backed up...

Greg Sabino Mullane greg at endpoint.com
Wed Aug 4 11:30:39 UTC 2010


> Might want to see if the data can be chunked, or written to disk if
> that's really impossible because of constraints... Maybe automatically
> write any table's data to a file if it has columns of a type that can
> be large (e.g., text and bytea)? Maybe query for the primary keys
> first, then guesstimate whether to write to a temp file on disk or
> hold the data in memory? Writing to disk will perform much better for
> a large data chunk, as Perl has an annoying habit of realloc'ing in
> small increments once a hash gets large (speaking from experience on
> that one: pre-allocating chunks of memory will help, but Perl is not
> efficient at handling large data sets in a single variable, "large"
> being several hundred KB).

The data cannot be chunked, but you raise some interesting ideas. Once I've 
got some new tests written, I'm going to see if there are ways to optimize 
things, especially the swap sync. I think two things that will help are 
to only grab the primary keys on the first run, and to treat any "source wins" 
swap sync as an effective pushdelta, using the whole DELETE IN () + COPY 
strategy that pushdelta uses.
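For anyone following along: the DELETE IN () + COPY strategy means
removing the changed rows on the target by primary key and then bulk
loading the current versions from the source via COPY. A rough sketch
with DBD::Pg (the table, columns, and rows are made up; this is not
Bucardo's actual code):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=target', '', '',
        { AutoCommit => 0, RaiseError => 1 });

    ## Primary keys of rows that changed on the source (hypothetical)
    my @pks = (101, 102, 103);

    ## Step 1: delete the affected rows on the target in one statement
    my $placeholders = join ',', ('?') x @pks;
    $dbh->do("DELETE FROM mytable WHERE id IN ($placeholders)",
        undef, @pks);

    ## Step 2: bulk-load the current versions of those rows via COPY
    $dbh->do('COPY mytable (id, data) FROM STDIN');
    for my $row ([101, 'foo'], [102, 'bar'], [103, 'baz']) {
        $dbh->pg_putcopydata(join("\t", @$row) . "\n");
    }
    $dbh->pg_putcopyend();

    $dbh->commit;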

-- 
Greg Sabino Mullane greg at endpoint.com
End Point Corporation
PGP Key: 0x14964AC8