[Bucardo-general] Performance option...
Greg Sabino Mullane
greg at endpoint.com
Mon Aug 23 13:55:33 UTC 2010
> -> Seq Scan on rawevidence t
> (cost=0.00..14328846.10 rows=94309651 width=638)
Yep, that's pretty ugly. Will it run faster if you force
the use of the index? (set enable_seqscan = 0)
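To try that without affecting anything else, you can disable seqscans for a single session and compare plans. This is only a sketch: the WHERE clause below is a hypothetical stand-in for your actual swap-sync query, and enable_seqscan should not be set globally.

```sql
-- Discourage the planner from choosing a sequential scan,
-- for this session only.
SET enable_seqscan = off;

-- Re-run the problem query (hypothetical predicate shown here)
-- and compare the plan and timing against the seqscan version:
EXPLAIN ANALYZE
SELECT * FROM rawevidence t WHERE t.id = 42;

-- Restore the planner default afterwards.
RESET enable_seqscan;
```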
> rawevidence is around 100G in size and it's on the end of a U320 scsi
> raid... so sequential scans take a _LONG_ time, not to mention the Bucardo
> process grows to 22G in size ;-)
>
> however... if we select, say, 100 transactions at a time it's a *LOT*
> faster and uses SIGNIFICANTLY less memory...
Do you mean looping through until you get all the rows, but still staying
inside a single transaction? I don't think that will work, as we need to be
able to check the other side (in a swap sync) to see if the rows are also
there. In other words, the only guarantee we have is that the database
is consistent at the exact point in time when we start our transaction. We
cannot go back in time to previous transactions, or chunk things up.
However...
I'm working on a new system for swap syncs that should go much faster. We
remove the left join and just grab the delta primary keys from both sides.
We compare those to build lists, and then simply do a pushdelta-like update
from source to target, and then from target to source. It's still mostly an
idea in my head, however; I've not yet found the cycles to actually test it
out.
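The idea above might look roughly like this in SQL. The tracking-table and column names here are hypothetical (Bucardo's actual delta tables vary by version and setup); the point is only the shape of the work: cheap key-only scans on each side, a diff in the Bucardo process, then pushdelta-style application.

```sql
-- Step 1: on EACH database, fetch only the changed primary keys
-- from the tracking table (hypothetical name shown):
SELECT DISTINCT rowid FROM bucardo.delta_rawevidence;

-- Step 2: the Bucardo process compares the two key lists in memory
-- to decide which rows move in which direction.

-- Step 3: apply in each direction much like a pushdelta sync,
-- e.g. for keys that changed only on the source ($1 = that key list):
DELETE FROM rawevidence WHERE id = ANY ($1);
-- ...followed by copying those rows from source to target,
-- then the same in the other direction for target-only changes.
```

This avoids the big left join over the full table, which is what forces the sequential scan in the plan quoted above.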
--
Greg Sabino Mullane greg at endpoint.com
End Point Corporation
PGP Key: 0x14964AC8