[Bucardo-general] long wait time and many rows in bucardo_delta
Brady S Edwards
brady.s.edwards at seagate.com
Sun Nov 6 15:51:45 UTC 2011
I'm currently running Bucardo 4.4.2 on PostgreSQL 9.0.2 in a two-database
multi-master environment. The RTT between the two databases is about 200ms.
Last week I stopped Bucardo ("bucardo_ctl stop"), added three columns to one
of the tables in the sync, and then restarted Bucardo.
I then performed a few hundred thousand updates on this table, of which
about three-quarters did not make it to the remote database.
There are currently about 440,000 entries in the bucardo_delta table.
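For reference, here is how I counted the backlog per table; a rough sketch only, assuming the Bucardo 4.x bucardo_delta layout where the tablename column stores the replicated table's oid:

```sql
-- Run on the source database (not the bucardo control database).
-- Counts pending delta rows per replicated table, assuming the
-- Bucardo 4.x columns: tablename (oid), rowid (text), txntime (timestamptz).
SELECT tablename::regclass AS table_name,
       count(*)            AS pending_rows,
       min(txntime)        AS oldest_change
FROM   bucardo_delta
GROUP  BY tablename
ORDER  BY pending_rows DESC;
```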
bucardo_ctl status looks like:
Days back: 3 User: bucardo Database: bucardo PID of Bucardo MCP: 22765
Name Type State PID Last_good Time I/U/D Last_bad Time
sync_xxxxx| S |WAIT:42h49m6s|25009|unknown | |
CO-Root@/var/log> bucardo_ctl status sync_xxxxx
Days back: 3 User: bucardo Database: bucardo
Sync name: sync_xxxxx
Current state: WAIT:42h 49m 16s (PID = 25009)
Source herd/database: xxxx2 / xxxx1
Target database: xxxx2
Tables in sync: 24
Last good: unknown
Last bad: 42h 49m 25s (time to run: 27h 20m 36s)
Last bad time: Nov 04, 2011 15:36:10 Target: xxx2
Latest bad reason: Controller cleaning out unended q entry
PID file: /var/run/bucardo/bucardo.ctl.sync.sync_xxxxx.pid
PID file created: Fri Nov 4 15:36:09 2011
Overdue time: 00:00:00
Expired time: 00:00:00
Stayalive: yes Kidsalive: yes
Rebuild index: 0 Do_listen: no
Ping: yes Makedelta: no
I'm wondering how I can resolve this.
I was thinking of stopping the client apps, stopping Bucardo, disabling the
triggers on the table in question, performing the updates on both databases,
removing the entries for the table in question from the bucardo_delta table,
and restarting everything.
Does this sound like a reasonable approach?
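Concretely, the steps above would look roughly like this; a sketch only, with a hypothetical table name my_table standing in for the real one, and assuming that deleting from bucardo_delta by the table's oid is safe while Bucardo is stopped:

```shell
# 1. Stop the client applications, then stop Bucardo.
bucardo_ctl stop

# 2. On each database, disable user triggers on the affected table so
#    the manual updates do not queue new deltas (my_table is hypothetical).
psql -d xxxx1 -c "ALTER TABLE my_table DISABLE TRIGGER USER;"
psql -d xxxx2 -c "ALTER TABLE my_table DISABLE TRIGGER USER;"

# 3. Perform the updates manually on both databases here.

# 4. Remove the stale delta entries for this table on each source
#    (bucardo_delta.tablename holds the table's oid in Bucardo 4.x).
psql -d xxxx1 -c "DELETE FROM bucardo_delta WHERE tablename = 'my_table'::regclass;"
psql -d xxxx2 -c "DELETE FROM bucardo_delta WHERE tablename = 'my_table'::regclass;"

# 5. Re-enable the triggers and restart everything.
psql -d xxxx1 -c "ALTER TABLE my_table ENABLE TRIGGER USER;"
psql -d xxxx2 -c "ALTER TABLE my_table ENABLE TRIGGER USER;"
bucardo_ctl start
```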