[Bucardo-general] Word of warning - when queries get too backed up...

Michelle Sullivan michelle at sorbs.net
Wed Aug 4 14:01:24 UTC 2010


Greg Sabino Mullane wrote:
>> Might want to see if the data can be chunked - or written to disk if
>> chunking is really impossible because of constraints... Maybe
>> automatically write any table's data to a file if it has columns of a
>> type that can be large (e.g. text and bytea)?  Or maybe query for the
>> primary keys first, then guesstimate whether to write to a temp file
>> on disk or keep the data in memory?  Spilling to disk will perform a
>> lot better for a large data chunk, as perl has this annoying habit of
>> realloc'ing small chunks at a time once a hash grows large (speaking
>> from experience on that one - pre-allocating chunks of memory helps,
>> but perl is not efficient at handling large data sets in a single
>> variable in memory, "large" being several hundred KB).
>>     
>
> The data cannot be chunked, but you raise some interesting ideas. Once I've 
> got some new tests written I'm going to see if there are ways to optimize 
> things, especially the swap sync. I think two things that will help are 
> to only grab the primary keys on the first run and to treat any "source wins" 
> swap sync as an effective pushdelta and do the whole DELETE IN () + COPY 
> strategy that pushdelta uses.
>
>   
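
(For anyone following along: as I read it, the DELETE IN () + COPY
apply that pushdelta does boils down to roughly the sketch below.  This
is just my understanding of it, using DBD::Pg's COPY support - the
connection details, table and column names are invented, and the real
Bucardo code path does a great deal more.)

    use strict;
    use warnings;
    use DBI;

    # Rough sketch only: real code would also need to escape values
    # properly for COPY's text format.
    my $dbh = DBI->connect('dbi:Pg:dbname=target', 'bucardo', '',
        { AutoCommit => 0, RaiseError => 1 });

    # Primary keys of the rows that changed on the source since the
    # last sync.
    my @changed_pks = (101, 102, 103);

    # Step 1: delete the stale versions of those rows on the target.
    my $placeholders = join ',', ('?') x @changed_pks;
    $dbh->do("DELETE FROM mytable WHERE id IN ($placeholders)",
             undef, @changed_pks);

    # Step 2: COPY the current source versions straight back in.
    # In real life @rows would come from a SELECT against the source.
    my @rows = ([101, 'foo'], [102, 'bar'], [103, 'baz']);
    $dbh->do('COPY mytable (id, data) FROM STDIN');
    $dbh->pg_putcopydata(join("\t", @$_) . "\n") for @rows;
    $dbh->pg_putcopyend();

    $dbh->commit;
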
That approach would be good, as I run with one server as the 'main
write' server and the other as the failover server.  In multi-master it
means a race condition at failover, but since all my data is fed in via
email or the websites, which fail nicely (i.e. email will tempfail, and
the website will report the error and prompt for a retry), I won't get
data loss.
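
On the disk-versus-memory point above, what I had in mind is roughly
the following - just a sketch, with an invented threshold and file
layout, nothing Bucardo-specific:

    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    # Arbitrary guess at a threshold: values bigger than this get
    # spilled to a temp file instead of being kept in the in-memory
    # row hash.
    my $SPILL_THRESHOLD = 100 * 1024;

    sub store_value {
        my ($row, $column, $value) = @_;
        if (defined $value and length($value) > $SPILL_THRESHOLD) {
            # Large text/bytea value: write it out and keep only the
            # filename, so the hash (and perl's reallocs) stay small.
            my ($fh, $filename) =
                tempfile('bucardo_spill_XXXX', TMPDIR => 1, UNLINK => 0);
            binmode $fh;
            print {$fh} $value;
            close $fh or die "close $filename: $!";
            $row->{$column} = { spilled => 1, file => $filename };
        }
        else {
            # Small value: keep it in memory as usual.
            $row->{$column} = { spilled => 0, data => $value };
        }
        return $row->{$column};
    }

Whatever consumes the row then streams the file back when it actually
needs the value, rather than ever holding the whole lot in one hash.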

Michelle

