[Bucardo-general] Questions !!!

Rosser Schwarz rosser.schwarz at gmail.com
Wed May 25 18:47:32 UTC 2011


On Wed, May 25, 2011 at 8:02 AM, Johnny Leyva Suárez
<john_lesuyh at yahoo.com> wrote:
> 1- How much can the cluster grow? Or, how many slaves can be added to a sync?

I've seen accounts of people running a few tens of slaves.  The
largest group of slaves I've ever used was a little over ten or so.

If you need more slaves than a single Bucardo instance can manage
while still performing adequately, or the overhead of replication is
adversely affecting your master db's performance, you can always do
"cascaded" replication with tiers of slaves, by enabling makedelta on
the relevant goats.  The master replicates to a smaller number of
"super-slaves", which in turn replicate to their "normal" slaves.  You
could probably even do multiple tiers, though I'd watch the end-to-end
replication latency pretty closely in that kind of setup.
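
For instance, a two-tier cascade might look roughly like this (the
names are illustrative, and the exact options vary by version, so
check "bucardo_ctl help add" before copying anything):

  # Tier 1: master -> super-slave.  Enable makedelta on the goats in
  # this sync so the rows applied on the super-slave are also written
  # to its delta tables, where the next tier can pick them up.
  bucardo_ctl add sync tier1 source=app_herd targetdb=superslave1 type=pushdelta

  # Tier 2: the super-slave pushes to its own group of ordinary slaves.
  bucardo_ctl add sync tier2 source=app_herd_on_super targetdb=slave1 type=pushdelta

The key point is that makedelta on the tier-1 goats is what lets the
super-slave act as a source for tier 2.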

> 2- Is there some master-master-master...-master solution on the road-map?

Bucardo 5 is in an alpha-to-beta-ish state right now, and supports
multi-master, for values of "multi" > 2.  You can get it from the git
repo.
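
Configuration in v5 looks roughly like the following (a sketch only;
the db and table names are made up, and the alpha CLI may differ in
the details):

  # Register three databases, group the tables, and create one sync
  # in which every database acts as a source (i.e., a master).
  bucardo add db a dbname=app host=pg-a
  bucardo add db b dbname=app host=pg-b
  bucardo add db c dbname=app host=pg-c
  bucardo add table public.orders db=a relgroup=app_tables
  bucardo add sync three_masters relgroup=app_tables dbs=a:source,b:source,c:source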

> 3- How is fail-over handled? Right now there is only one instance of Bucardo;
> if this instance fails, the cluster stops working and data is lost. Is there
> something like a Bucardo cluster?

Bucardo doesn't manage fail-over; its job is simply replication.  You
could put a connection pooler in front of a pair of masters to achieve
increased availability, among other things.  There are also a number
of ways to put a "watchdog" on the Bucardo daemon/MCP so that it is
restarted automatically after a failure, or to replicate Bucardo's own
database elsewhere.  (Be careful, though; naïve
watchdog or cluster-ish implementations might try to restart Bucardo
after a manual shutdown, or in other situations where you specifically
*don't* want it running.)
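
As a deliberately dumb illustration of the watchdog idea, something
like this run from cron would do.  It assumes "bucardo_ctl status"
exits non-zero when the MCP isn't running (worth verifying on your
installation), and per the caveat above it needs to be disabled
before any intentional shutdown:

  #!/bin/sh
  # Restart the Bucardo MCP if it appears to be down.  Remember to
  # disable this job before any deliberate "bucardo_ctl stop".
  if ! bucardo_ctl status >/dev/null 2>&1; then
      bucardo_ctl start "restarted by watchdog on $(hostname)"
  fi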

All that said, please note: If Bucardo stops running, you won't *lose*
data.  Instead, all the deltas (new rows or changes to existing rows
on the master(s)) will simply accumulate in a queue-like table (or, in
v5, a queue-table per user table that's being replicated, AIUI) in the
source db.  The next time Bucardo is started, the accumulated deltas
will be replicated, and your target db(s) will catch up.
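
If you want to gauge the backlog while Bucardo is down, count the
pending deltas on the source.  In the 4.x schema they land in
bucardo.bucardo_delta (in v5 it would be the per-table delta tables
instead), so something like:

  # Rough size of the replication backlog on the master (4.x schema).
  psql -d master_db -c "SELECT count(*) FROM bucardo.bucardo_delta;"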

rls

-- 
:wq

