[Bucardo-general] Migrating to Amazon Postgres RDS

Chris Keane chris.keane at zoomius.com
Wed Mar 5 16:14:04 UTC 2014

So yesterday I sent a very enthusiastic update about running bucardo on an
Amazon AWS RDS Postgres instance. I sent it from the wrong address and the
bucardo mailing list service held it since I wasn't subscribed, which is lucky
because now I'm less enthused about bucardo on Postgres RDS.

Yesterday's email is below, but last night I ran into a problem that would
have been obvious if I'd thought about it, and now I'm scratching my head
trying to come up with achievable solutions.

The problem is, of course, that I can't have a user with the superuser role
in the RDS database. They have a pseudo-superuser with the role rds_superuser,
which is not a superuser (to be honest I'm not sure why they don't allow
superuser; since they modify the postgres source anyway, it should be
relatively easy to disable the SET commands that could interfere with their
system). Anyhoo, the issue is SET session_replication_role = 'replica', a
command permitted to superusers only.
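For reference, this is the statement bucardo issues on the target at the
start of each sync to suppress triggers and rules, and what happens when a
non-superuser (such as an rds_superuser account) tries it:

```sql
-- Issued by bucardo on the target database before applying deltas;
-- only a real superuser may change this parameter:
SET session_replication_role = 'replica';
-- On RDS this fails with something like:
-- ERROR:  permission denied to set parameter "session_replication_role"
```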


Out of curiosity I made a quick edit to Bucardo.pm to switch back to
pg_class mode just to see what would happen, but how triggers are
enabled/disabled has changed since the pg_class method was written:
pg_class.reltriggers no longer exists (there is now pg_class.relhastriggers,
along with pg_trigger.tgenabled). If I have to I'll dig into that a bit
more, but on the surface, bucardo + RDS is a no-go.

I've asked the fine folks at Amazon to either
(a) allow the rds_superuser role to issue the SET session_replication_role
command; or
(b) create a SECURITY DEFINER function, owned by a superuser, that would SET
it on our behalf.

Are there any other options? Any way you can think of to disable triggers
for a single session without session_replication_role?
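The only alternative I'm aware of is per-table rather than per-session, and
needs only table ownership rather than superuser. A sketch (public.mytable
is a hypothetical table name):

```sql
-- Disables user-defined triggers for ALL sessions on this table, not
-- just ours, so it's not a drop-in replacement for replica mode:
ALTER TABLE public.mytable DISABLE TRIGGER USER;
-- ... apply the replicated rows ...
ALTER TABLE public.mytable ENABLE TRIGGER USER;
```

Other sessions writing to the table during that window would also skip
their triggers, which is probably unacceptable for a live database.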

Hmmm. I really want this to work.


My message from yesterday:
I'm super stoked that Amazon listened and produced a Postgres version of
their RDS. I've been spending some time over the last week planning a
migration. Or more specifically, dumping the database a lot, editing the
dump, and loading it into an RDS instance.

The problem
The problem is that we, the lowly users, have no access to the postgres
superuser role. Instead, the most powerful role we have is rds_superuser,
which nominally has power but is effectively a regular user with a few
special grants. Certainly, rds_superuser has no permission to modify a
table belonging to someone else (for example, to add a trigger or an
index), or to create a table in a schema owned by another user, etc.

Our setup
We are super-heavy bucardo users. Each of our clients has a laptop that has
a replica of our central database. Their laptops are only occasionally
connected, so we've installed the bucardo server on the laptops themselves,
each with their own syncs but with make_delta set for the tables that are
shared between everyone. It works fairly well but we're starting to run
into scaling issues, so once bucardo supports syncs with many targets that
are only sporadically connected, we'll look at that. Currently we're on
bucardo 4.99.5 (although I need to do something about that).

The RDS Experience
A straight-up import of the central database fails because we can't run the
import as a real superuser, only as the fake rds_superuser. Let's call
our rds_superuser "superfred". I also created a bucardo user in the
database, also with the rds_superuser role.

Because rds_superuser isn't a real superuser, trouble starts after the
import runs
ALTER SCHEMA bucardo OWNER TO bucardo;
As soon as superfred continues with the import and starts the CREATE TABLEs
in the bucardo schema, it fails with a permission problem (since the schema
is now owned by another user).
I tried various combinations but ultimately decided that the best approach
was to comment out all the lines in the dump that contain OWNER TO bucardo,
so that bucardo owns nothing and everything is owned by superfred.
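The commenting-out can be scripted. A minimal sketch (the two-line sample
dump and the file names are hypothetical stand-ins for real pg_dump output):

```shell
# Hypothetical two-line sample standing in for the real pg_dump output:
printf '%s\n' \
  'ALTER SCHEMA bucardo OWNER TO bucardo;' \
  'CREATE TABLE public.widgets (id int);' > dump.sql

# Comment out every OWNER TO bucardo line so everything stays owned
# by superfred; other lines pass through untouched:
sed 's/^ALTER.*OWNER TO bucardo;$/-- &/' dump.sql > dump_noowner.sql
cat dump_noowner.sql
```

Then load dump_noowner.sql instead of the original dump.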

Then, right at the top of the file:
grant superfred to bucardo;

That will allow the bucardo user to act on any object owned by superfred as
if bucardo were a real superuser.
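A quick sanity check that the membership took, assuming the grant above has
been run:

```sql
-- Should return true: bucardo is now a member of superfred, so
-- ownership checks on superfred's objects succeed for bucardo too.
SELECT pg_has_role('bucardo', 'superfred', 'member');
```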

Oid vey!  There are a couple of tables in the bucardo schema that store the
oid of a replicated table. That's great until you do a dump/load into a new
database and the oids no longer match.
So I figured I needed to make a map that I could use to figure this out. In
the current production database:
set search_path = bucardo;
create table oidmap ( poid int, tname text );
insert into oidmap
  select distinct tablename::oid, tablename::regclass::text
    from bucardo_delta_targets;
insert into oidmap
  select distinct tablename::oid, tablename::regclass::text
    from bucardo_truncate_trigger
   where tablename::oid not in (select poid::oid from oidmap);
insert into oidmap
  select distinct tablename::oid, tablename::regclass::text
    from bucardo_truncate_trigger_log
   where tablename::oid not in (select poid::oid from oidmap);

Then dump and load the database into RDS. After the load is finished, in
the RDS-based db:
set search_path = bucardo;
update bucardo_delta_targets
   set tablename = (select tname::regclass::oid
                      from oidmap where poid = tablename::int)::oid;
update bucardo_truncate_trigger
   set tablename = (select tname::regclass::oid
                      from oidmap where poid = tablename::int)::oid;
update bucardo_truncate_trigger_log
   set tablename = (select tname::regclass::oid
                      from oidmap where poid = tablename::int)::oid;

I'm in the process of another load right now and it's looking the best it
has so far. Fingers crossed!

Has anyone else tried a bucardo install in AWS RDS? Any other gotchas I
haven't come across yet?

*Chris Keane* * Track Intelligence Inc *  +1 (650) 703 5523 (cell)
