
Deal with Postgresql Error -canceling statement due to conflict with recovery- in psycopg2

Asked: 2016-07-22T05:02:55 | Author: Juan David


I'm building a reporting engine that runs a couple of long queries against a standby server and processes the results with pandas. Everything works fine, but sometimes I have issues executing those queries through a psycopg2 cursor: the query is cancelled with the following message:

ERROR: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.

I have been investigating this issue:

PostgreSQL ERROR: canceling statement due to conflict with recovery

https://www.postgresql.org/docs/9.0/static/hot-standby.html#HOT-STANDBY-CONFLICT

but all the solutions suggest fixing the issue by modifying the server's configuration. I can't make those modifications (we won the last football game against the IT guys :) ), so I want to know how to deal with this situation from a developer's perspective. Can I resolve this issue in Python code? My temporary solution is simple: catch the exception and retry all the failed queries. Maybe it could be done better (I hope so).
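For reference, the catch-and-retry approach described above can be sketched as a small wrapper. This is an illustration, not part of the original question: the helper name `run_with_retry` and the `RecoveryConflict` stand-in exception are invented for the demo. With psycopg2 2.8+ you would typically catch the driver's exception instead and check that its `pgcode` is `40001` (serialization_failure), the SQLSTATE PostgreSQL usually reports for this recovery conflict.

```python
import time

def run_with_retry(fn, retryable, attempts=3, delay=0.0):
    """Call fn(); re-run it when it raises one of the `retryable`
    exception types, up to `attempts` tries in total."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == attempts:
                raise          # out of retries: re-raise the last error
            time.sleep(delay)  # back off a little before retrying

# Stand-in for the psycopg2 failure: the first call "conflicts
# with recovery", the second succeeds.
class RecoveryConflict(Exception):
    pass

calls = {"n": 0}

def flaky_query():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RecoveryConflict("canceling statement due to conflict with recovery")
    return [("row", 1)]

result = run_with_retry(flaky_query, (RecoveryConflict,))
print(result)  # [('row', 1)] -- succeeded on the second attempt
```

With a real connection, `fn` should open a fresh cursor and re-execute the whole query (or re-run the whole transaction), since the cancelled transaction is unusable after the error.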

Thanks in advance

Author: Juan David. Reproduced under the CC 4.0 BY-SA copyright license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/38514312/deal-with-postgresql-error-canceling-statement-due-to-conflict-with-recovery-i
Laurenz Albe:

There is nothing you can do to avoid that error without changing the PostgreSQL configuration (from PostgreSQL 9.1 on, you could e.g. set hot_standby_feedback to on).

You are dealing with the error in the correct fashion – simply retry the failed transaction.
2016-09-12T13:00:05
David Jaspers:

The table data on the hot standby slave server is modified while a long-running query is running. A solution (PostgreSQL 9.1+) to make sure the table data is not modified is to suspend the replication on the slave and resume it after the query:

select pg_xlog_replay_pause();  -- suspend
select * from foo;              -- your query
select pg_xlog_replay_resume(); -- resume
2017-10-05T08:59:29
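As an illustration of this answer (not part of it), the pause/query/resume dance can be wrapped in a context manager so replay is resumed even if the query fails. The `FakeConn` stub below stands in for a real psycopg2 connection so the sketch can be exercised without a live standby; note that on PostgreSQL 10+ the functions were renamed to pg_wal_replay_pause()/pg_wal_replay_resume().

```python
from contextlib import contextmanager

@contextmanager
def replay_paused(conn):
    """Suspend WAL replay on a standby for the duration of the block.
    Uses the pre-10 function names from the answer above; on
    PostgreSQL 10+ use pg_wal_replay_pause()/pg_wal_replay_resume()."""
    cur = conn.cursor()
    cur.execute("select pg_xlog_replay_pause();")
    try:
        yield conn
    finally:
        # always resume, even if the query inside the block raised
        cur.execute("select pg_xlog_replay_resume();")

# Stand-in connection that just records the SQL it is given.
class FakeCursor:
    def __init__(self, log):
        self.log = log
    def execute(self, sql):
        self.log.append(sql)

class FakeConn:
    def __init__(self):
        self.log = []
    def cursor(self):
        return FakeCursor(self.log)

conn = FakeConn()
with replay_paused(conn):
    conn.cursor().execute("select * from foo;")

print(conn.log)
# ['select pg_xlog_replay_pause();', 'select * from foo;',
#  'select pg_xlog_replay_resume();']
```

Be aware that pausing replay typically requires elevated privileges on the standby and makes replication lag grow for as long as the query runs, so resume promptly.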
AZhao:

I recently encountered a similar error and was also in the position of not being a DBA/DevOps person with access to the underlying database settings.

My solution was to reduce the time of the query wherever possible. Obviously this requires deep knowledge of your tables and data, but I was able to solve my problem with a combination of a more efficient WHERE filter, a GROUP BY aggregation, and more extensive use of indexes.

By reducing the amount of server-side execution time and data, you reduce the chance of a rollback error occurring.

However, a rollback can still occur during your shortened window, so a comprehensive solution would also make use of some retry logic for when a rollback error occurs.

Update: A colleague implemented said retry logic as well as batching the query to make the data volumes smaller. These three solutions have made the problem go away entirely.
2019-01-14T17:58:05
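The batching idea from the update can be sketched as splitting one large scan into smaller ranges, each of which finishes well inside the standby's conflict window and can be retried individually. The helper name and the date-column scheme below are illustrative assumptions, not details from the answer.

```python
from datetime import date, timedelta

def date_batches(start, end, days):
    """Yield half-open [lo, hi) date ranges covering [start, end)."""
    lo = start
    step = timedelta(days=days)
    while lo < end:
        hi = min(lo + step, end)
        yield lo, hi
        lo = hi

# Each small range would then be run (and retried on conflict)
# as its own query, e.g.:
#   SELECT ... FROM events WHERE ts >= %s AND ts < %s
batches = list(date_batches(date(2019, 1, 1), date(2019, 1, 10), days=4))
# three ranges: Jan 1-5, Jan 5-9, Jan 9-10
```

Besides shrinking each query's runtime, batching also limits how much work is lost when one batch is cancelled: only that range needs to be re-run, not the whole report.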