I have defined a LOG table.
Then I created a lot of child tables, one per minute, in the form of:
CREATE TABLE LOG_20231209_1500 (...) INHERITS (LOG);
CREATE TABLE LOG_20231209_1501 (...) INHERITS (LOG);
CREATE TABLE LOG_20231209_1502 (...) INHERITS (LOG);
...
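For reference, with inheritance a child does not have to re-declare the parent's columns; a minimal sketch of one such child (ignoring any per-child CHECK constraints my real tables might carry):
CREATE TABLE LOG_20231209_1500 () INHERITS (LOG);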
After many years, a count(*) on LOG returns several billion rows. The number of child tables is 357299.
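For reference, the child count can be cross-checked against the pg_inherits catalog:
SELECT count(*) FROM pg_inherits WHERE inhparent = 'LOG'::regclass;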
But now it's time to take a backup, and I'd like to have each table backed up in its own file.
Unfortunately, using the following command:
pg_dump logdb -t LOG_20231209_1501 > LOG_20231209_1501.sql
raises the following error:
pg_dump: error: query failed: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
pg_dump: error: query was: LOCK TABLE public.LOG_20231209_1501 IN ACCESS SHARE MODE
even for a small table with 3000 records.
Note that a direct SELECT * FROM LOG_20231209_1501 produces results immediately. Also note that I've set max_locks_per_transaction=1024 in postgresql.conf, with no success.
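For completeness, the setting only takes effect after a full server restart (a reload is not enough); it can go in postgresql.conf or be applied via ALTER SYSTEM, roughly like this:
# postgresql.conf
max_locks_per_transaction = 1024
-- or, equivalently, from psql (restart still required):
ALTER SYSTEM SET max_locks_per_transaction = 1024;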
While I'm dumping, no one is using the database: can I make pg_dump skip locking the table and take it for granted that no modifications will happen?
PostgreSQL version is: 14.8
As suggested in the comments, a partial answer to the problem is to increase max_locks_per_transaction to (at least) the number of objects involved in the query. In my case, since the parent table has 359299 child tables, I had to set max_locks_per_transaction = 360000 and pg_dump worked.
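Roughly, the sequence that worked:
# postgresql.conf (restart required for the new value to apply)
max_locks_per_transaction = 360000
# after the restart, the single-table dump succeeds
pg_dump logdb -t LOG_20231209_1501 > LOG_20231209_1501.sql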
It is still a partial answer because pg_dump still tries to lock all the tables in order to dump the structure or data of a single one; because of this it takes up to a minute to run, even though a simple COPY tab TO 'file' CSV is immediate, which makes it impractical to dump all the tables with pg_dump.

Still, I believe the correct way - if feasible - is to make pg_dump avoid locking at all.
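As a data-only workaround, a per-table COPY (or psql's client-side \copy) locks only the one table being exported; a sketch, with the output file name as a placeholder:
psql logdb -c "\copy LOG_20231209_1501 TO 'LOG_20231209_1501.csv' CSV"
Note that this exports only the rows, not the table definition.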