[Bucardo-general] Problem when copying big table
Levani Gventsadze
lgventsadze at oroinc.com
Wed Dec 29 06:20:04 UTC 2021
Hello,
Here are the answers to your questions:
- How much RAM do you have?
We have 8GB of RAM on both machines.
- Have you seen the tables sync and syncrun in the Bucardo database? Both tables store the status of the transactions.
I didn’t quite understand this question; can you clarify what exactly you mean?
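(For reference, those two tables are Bucardo’s own bookkeeping tables. A minimal sketch of how to look at them from psql, assuming the default `bucardo` database and schema — the exact column set varies between Bucardo versions:)

```sql
-- Recent sync runs, newest first (run this inside the bucardo database)
SELECT * FROM bucardo.syncrun ORDER BY started DESC LIMIT 20;

-- Sync definitions and their current status
SELECT name, status FROM bucardo.sync;
```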
- Could you export your database from PostgreSQL without problems?
Yes, currently this is what we’re doing to fix the broken sync: we manually dump the data and import it.
- Both instances are running the same PostgreSQL version and have the same RAM and HD capacity (or almost the same).
- Did you check if time/date is the same for both Machines?
Yes, these servers are under the puppet management and we keep track of all these details.
- Did you check the Bucardo service logs? They might be located in /var/log/bucardo (check your installation); there you can see what the problem is internally.
We checked the logs but couldn’t find any useful explanations.
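(A quick way to find the first error rather than the follow-on ones, shown here on a synthetic log file so it is self-contained. The real log path, and the "value too long" failure used as the initial error, are both just placeholders for illustration; on a real system check /var/log/bucardo or your PostgreSQL log directory.)

```shell
# Build a tiny synthetic log resembling the real one.
log=$(mktemp)
cat > "$log" <<'EOF'
2021-12-23 10:17:25 UTC ERROR: value too long for type character varying(255)
2021-12-23 10:17:28 UTC ERROR: current transaction is aborted, commands ignored until end of transaction block
EOF
# The FIRST ERROR line is usually the real cause; later "transaction is
# aborted" lines are just the poisoned transaction rejecting new commands.
grep -m1 'ERROR:' "$log"
rm -f "$log"
```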
We’re using Bucardo on multiple servers, in a separate environment. Most of them are working fine, but we still experience issues, mainly on PostgreSQL version 9.6 databases. Could this be a problem related to this version?
Thank you for your time.
Regards,
Levani
From: Manu Parra <mparra at iaa.es>
Date: Sunday, 26 December 2021 at 12:28
To: Levani Gventsadze <lgventsadze at oroinc.com>
Cc: Jon Jensen <jon at endpointdev.com>, bucardo-general at bucardo.org <bucardo-general at bucardo.org>
Subject: Re: [Bucardo-general] Problem when copying big table
Hi all,
It would be good to know a few things:
- How much RAM do you have?
- Have you seen the tables sync and syncrun in the Bucardo database? Both tables store the status of the transactions.
- Could you export your database from PostgreSQL without problems?
- Are both instances running the same PostgreSQL version, with the same RAM and HD capacity (or almost the same)?
- Did you check the Bucardo service logs? They might be located in /var/log/bucardo (check your installation); there you can see what the problem is internally.
- Did you check if time/date is the same for both Machines?
Try these things and let us know.
Cheers,
Manu Parra.
On 25 Dec 2021, at 06:30, Levani Gventsadze <lgventsadze at oroinc.com> wrote:
Hello,
Thank you for your response.
I will need to dive into the logs and get more than this, but I doubt it will help, because the regular method (dumping the data and then importing it into the database) works without problems.
Regards,
Levan
From: Jon Jensen <jon at endpointdev.com>
Date: Friday, 24 December 2021 at 23:29
To: bucardo-general at bucardo.org <bucardo-general at bucardo.org>, Levani Gventsadze <lgventsadze at oroinc.com>
Subject: Re: [Bucardo-general] Problem when copying big table
Levani,
Do you have earlier logs prior to this point? It looks to me like your
ERROR: cited here is a continued transaction error state that began
earlier, and this COPY is not your actual problem. Are there other ERROR:
lines in your log prior to this?
In any case I would not expect an INSERT to work where a COPY doesn't
work. They should behave the same.
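(For context, that "current transaction is aborted" message is standard PostgreSQL behavior: once any statement fails inside an open transaction, every subsequent command is rejected until the transaction ends. A minimal psql illustration:)

```sql
BEGIN;
SELECT 1/0;   -- fails (division by zero); the transaction is now poisoned
SELECT 1;     -- ERROR: current transaction is aborted, commands ignored
              -- until end of transaction block
ROLLBACK;     -- ends the aborted transaction; normal commands work again
```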
Jon
On Fri, 24 Dec 2021, Levani Gventsadze wrote:
> Hello all,
> We are having an issue with Bucardo v5.6 when copying a big database table.
> The process gets aborted without any useful information in the log (we tried increasing the log level).
> Here’s what we get every time we try to copy that big table:
>
> < 2021-12-23 10:17:28.860 UTC > CONTEXT: COPY oro_product, line 521577
> < 2021-12-23 10:17:28.860 UTC > STATEMENT: /* Bucardo 5.6.0 */COPY public.oro_product("id","organization_id","business_unit_owner_id","primary_unit_precision_id","brand_id","inventory_status_id","attribute_family_id","sku","sku_uppercase","name","name_uppercase","created_at","updated_at","variant_fields","status","type","is_featured","is_new_arrival","pagetemplate_id","category_id","taxcode_id","manageinventory_id","highlightlowinventory_id","inventorythreshold_id","lowinventorythreshold_id","minimumquantitytoorder_id","maximumquantitytoorder_id","decrementquantity_id","backorder_id","isupcoming_id","availability_date","serialized_data","book_type_id","bn_average_rating","bn_audience_age_from","bn_audience_age_to","bn_dimension_depth","bn_dimension_height","bn_dimension_weight","bn_dimension_width","bn_retail_price","bn_bisac_format","bn_dimension_unit","bn_dimension_weight_unit","bn_tax_id","bn_company_name","bn_display_edition_description","bn_url_keywords","bn_publication_date"
,"bn_author_bio","bn_edition_number","bn_image_version","bn_number_of_pages","bn_work_id","bn_discountable_flag","bn_large_print_ind","bn_shippable_flag","bn_audience_id","bn_language_desc_id","bn_display_format_id","bn_parent_format_id","bn_lexile","bn_lexile_value","bn_series_id","bn_series_number","bn_series_title","bn_contributors") FROM STDIN
> < 2021-12-23 10:17:28.861 UTC > ERROR: current transaction is aborted, commands ignored until end of transaction block
> < 2021-12-23 10:17:28.861 UTC > STATEMENT: DEALLOCATE dbdpg_p6534_2
>
> We see this COPY command is using the Postgres COPY protocol; is there a way to use INSERT instead (I am not sure that will help, though)?
>
> Does anyone have any ideas to help us identify the problem and try to solve it?
>
>
> Thank you in advance.
>
--
Jon Jensen
End Point Corporation
https://www.endpointdev.com/
_______________________________________________
Bucardo-general mailing list
Bucardo-general at bucardo.org
https://bucardo.org/mailman/listinfo/bucardo-general