One of the good practices is to create backups of your database at regular intervals. If you are using a PostgreSQL database, you can use the built-in tool pg_dump, which exports the database structure and data. If we want to dump all databases at once, we can use pg_dumpall instead.
While writing a simple bash script for this, I was getting a very strange error: pg_dump: permission denied for relation mytable – LOCK TABLE public.mytable IN ACCESS SHARE MODE. Googling around turned up a few tips on how to solve the problem, but no actual solution.
Script to dump
To make our life easier, we use a script for the whole process. It is also convenient to have a script we can later call from other processes, from build tools (a backup before upgrading) or with cron.
```bash
#!/bin/bash
# Resolve the script's own location so the log path works from anywhere.
SCRIPT=$(readlink -m "$(type -p "$0")")
BASE_DIR=$(dirname "$SCRIPT")
BASE_DIR=${BASE_DIR/tools/}
NAME=$(basename "$SCRIPT")

# Timestamp used in the backup file name.
now=$(date +"%Y_%m_%d")

# Dump the database as a tar archive (-F t).
# Note: -i (--ignore-version) is deprecated in recent PostgreSQL versions.
pg_dump -i -U databaseuser -h localhost -F t databasename > "/backups/database/database_backup_$now.tar"

TODAY=$(date +"%Y-%m-%d")
echo "$TODAY: Successful backup of database" >> "${BASE_DIR}logs/dbbackup.log"
```
When we run this, we get the permission error from above. Big problem.
Locked table problem
The problem is with permissions, and there are multiple permission layers. The first is whether we have access to the database at all. The second is whether we have access to the table, in our case the table mytable. To check it, we need to look at the structure and permissions of the table.
```shell
$ su - postgres
$ psql -d databasename
databasename=# \d+
```
The commands above will output all the tables in the database. If we check the columns, we will notice that there is an Owner column. In our case it is important that the table owner and the export user are the same; otherwise we get the permission problem.
To change the owner of the table, we need to run the command
```sql
ALTER TABLE mytable OWNER TO actual_databaseuser;
```
The command will change the table's ownership to our export user. Be sure you change the owner of every table in the selected database.
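Running ALTER TABLE by hand for every table gets tedious. A minimal sketch of automating it: a hypothetical helper function that emits one ALTER TABLE statement per table name read from stdin (the function name and structure are my own, not from the article; "databaseuser" is the export user used above). In practice you would feed it the table list from pg_tables and pipe the result back into psql.

```shell
#!/bin/bash
# Hypothetical helper: print an ALTER TABLE ... OWNER TO statement
# for every table name read from standard input.
generate_owner_sql() {
    local new_owner="$1"
    local table
    while IFS= read -r table; do
        # Skip blank lines in the input.
        [ -n "$table" ] && echo "ALTER TABLE public.${table} OWNER TO ${new_owner};"
    done
}

# Example with a hardcoded list; against a live server you would use:
#   psql -qAt -d databasename \
#        -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'" \
#     | generate_owner_sql databaseuser | psql -d databasename
printf 'mytable\nanother_table\n' | generate_owner_sql databaseuser
```

Generating the SQL as text first also lets you review the statements before executing them.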
Extra tip – cron
Of course, we don't want to waste time on tasks that can be automated. One of them is running our dbbackup.sh every week, every month, or at whatever interval you desire. To perform backups every week, we can use cron.
To add a cron job, just run
```shell
$ crontab -e
```
It will open a simple editor where you write your tasks/jobs. In our case, we will run the backup every Sunday in the morning (00:00).
```
0 0 * * 0 /home/user/dbbackup.sh >> /var/log/dbbackupcron.log 2>&1
```
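For reference, the five schedule fields of that crontab line break down like this (annotated copy of the same entry; `#` lines are crontab comments):

```
# ┌──────── minute        (0)
# │ ┌────── hour          (0)
# │ │ ┌──── day of month  (* = any)
# │ │ │ ┌── month         (* = any)
# │ │ │ │ ┌ day of week   (0 = Sunday)
# │ │ │ │ │
  0 0 * * 0 /home/user/dbbackup.sh >> /var/log/dbbackupcron.log 2>&1
```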
To make our script work with cron, we need to add one extra thing. If we run the script manually, we are asked for the password; cron cannot enter the password, so the job will fail. The usual suggestion is to create a ~/.pgpass file and add a line to it.
```
localhost:5432:databasename:databaseuser:password
```
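A minimal sketch of creating that file from the shell. One detail the tips I found often omit: libpq silently ignores .pgpass unless its permissions are 0600, so the chmod step is required. The connection values are the placeholders from above; PGPASSFILE is the standard libpq variable for overriding the file's location.

```shell
#!/bin/bash
# Create the password file non-interactively.
# PostgreSQL ignores .pgpass unless its mode is 0600, so chmod is essential.
PGPASSFILE="${PGPASSFILE:-$HOME/.pgpass}"
echo 'localhost:5432:databasename:databaseuser:password' >> "$PGPASSFILE"
chmod 600 "$PGPASSFILE"
```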
Now when cron runs the script, everything will work.