Backing up is good practice, but can you assume that your restore will work reliably when you need it? Occasional testing dramatically reduces the risk of being left without a fallback in an emergency.
Heroku has a fantastic service called Continuous Protection which progressively backs up all writes to a database over time.
This service will, for example, retain four days' worth of rollback data on the Standard plan, and there's an easy process for choosing an arbitrary rollback point to test.
Forking a Rollback Database
To test, we'll use the command-line tools to create a fork of the database at an arbitrary point in time and use the new set of credentials provided to connect to it.
We can then run some simple queries to sanity-check that the database is working as expected, and delete it when we're finished.
Creating a fork of a database with Heroku creates a new add-on, which is billable. Be sure to remove it when you're finished so that the client isn't billed for it on an ongoing basis.
We'll be using the Safewill database as an example to walk through.
First, we'll need to have the Heroku CLI Tool installed and up to date, and have logged into our account:
$ heroku auth:login
All database-related tasks are done using the pg family of commands on the heroku CLI, like this:
$ heroku pg --help
Identifying the Database
Next, let's have a look at what databases exist for our app:
$ heroku pg:info -a [app-name]
This should give us an overview of the current databases and the earliest rollback point we can use for each of them. If more than one database is listed, the one currently serving as the leader will most often be the one assigned to the DATABASE_URL config variable (listed after the database name).
Let's see which is the leader:
$ heroku config -a [app-name]
We won't include the output of that command here because it contains a few secret keys for the production environment, so try it yourself.
We want to work with the leader database, not the follower, which in our example is HEROKU_POSTGRESQL_NAVY_URL.
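If you'd rather not print the whole config (it contains production secrets), a narrower check is to fetch just the two variables and compare them; the colour here is from our example:
$ heroku config:get DATABASE_URL -a [app-name]
$ heroku config:get HEROKU_POSTGRESQL_NAVY_URL -a [app-name]
# If the two URLs match, NAVY is the attachment behind DATABASE_URL, i.e. the leader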
Creating the Fork
Now we can create (fork) a rollback database using the addons:create command:
$ heroku addons:create heroku-postgresql:standard-0 --rollback HEROKU_POSTGRESQL_YOURCOLOUR --by '3 days 7 hours 22 minutes' -a [app-name]
Replace the values for the app, colour, and rollback interval with ones that are relevant to the database you're testing; a full reference for the command and process can be found in the Heroku Dev Center documentation. You've now started the process of creating a new database. You can check its status by running the pg:wait command, which outputs the provisioning status of any new databases.
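For reference, it looks like this and simply blocks until any new databases report as available:
$ heroku pg:wait -a [app-name]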
Get Database Credentials
Once that's done, you should see the new database appear in heroku pg:info. Grab the database name and use it to get the credentials so you can connect to it:
$ heroku pg:credentials:url HEROKU_POSTGRESQL_YOURCOLOUR -a [app-name]
That should give you everything you need to log in and query the database using psql, TablePlus, or your favourite SQL tool.
Validate Records
Generally, rules for validating the data are defined as part of the design of the maintenance plan and communicated to the client in the report. It's a good idea to run a few queries to interrogate the data for validity according to what would make sense for the business and users.
Use your judgement to work out which relationships would be most likely to show issues if there were corruption or data loss. Below are some examples from the Safewill maintenance report, and the corresponding queries.
SELECT * FROM wills w WHERE status = 'APPROVED'
ORDER BY w."createdAt" DESC;
SELECT * FROM users u WHERE verified = true
ORDER BY u."createdAt" DESC;
SELECT * FROM will_meta wm
WHERE wm."willId" = (SELECT id FROM wills w
ORDER BY w."createdAt" DESC LIMIT 1);
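These checks can be run directly against the fork by opening a psql session through the Heroku CLI; the colour below is a placeholder for your fork's name:
$ heroku pg:psql HEROKU_POSTGRESQL_YOURCOLOUR -a [app-name]
Then, inside psql, a quick spot check such as a row count on one of the tables above:
SELECT count(*) FROM wills WHERE status = 'APPROVED';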
Decommission the Fork
Once you have finished testing, you can decommission the forked database:
$ heroku addons:destroy HEROKU_POSTGRESQL_YOURCOLOUR -a [app-name]
Remember to check carefully that you are decommissioning the correct database!
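If in doubt, listing the app's add-ons before and after removal is a cheap way to confirm which attachment you're destroying, and that it has actually gone:
$ heroku addons -a [app-name]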
Heroku PGBackups Databases
Heroku also offers a more traditional back-up process whereby you can schedule snapshots of the database to be taken at a particular time. Although we do sometimes set these up for peace of mind, we don't formally offer this as part of our standard maintenance packages.
If these are in place, they also need to be verified, but the process is different from the forking method used by Continuous Protection.
Checking the Schedule
First, see whether there's a schedule in place:
$ heroku pg:backups:schedules -a [app-name]
This will give you a list of any active schedules.
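If no schedule is in place and the client wants one, it can be created with the pg:backups:schedule command; the target database, time, and timezone below are placeholders to adjust:
$ heroku pg:backups:schedule DATABASE_URL --at '02:00 UTC' -a [app-name]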
Listing Backups
Next, see whether we have successful (according to Heroku) backups recorded:
$ heroku pg:backups -a [app-name]
Here's a sample from Safewill, where you can see that the daily schedule has been running against the YELLOW database:
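To drill into a single entry from that list, for example to confirm it completed and to check its size, pass the backup ID to the info subcommand:
$ heroku pg:backups:info [backup-id] -a [app-name]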
Capture a Backup
You can manually capture a back-up by running the capture command:
$ heroku pg:backups:capture [DATABASE] -a [app-name]
The DATABASE parameter is usually in the form of a colour, capitalised, such as YELLOW. Once the capture has finished, you can see it using the listing process above and download it as described below.
Download and Restore a Backup Locally
Download it to your current directory:
$ heroku pg:backups:download [backup-id] -a [app-name]
Restore it to your local server using the Postgres.app CLI tool:
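As a minimal sketch, using the flags described below and assuming the dump was downloaded as latest.dump (the CLI's default filename) with a local database called database_name already in place:
$ pg_restore -c -C -O -d database_name latest.dump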
-c: clean (drop) database objects before recreating them.
-C: create the database before restoring into it.
-O: do not output commands to set ownership of objects to match the original database.
-d: connect and restore directly into the named database.
Bear in mind that you will need to have already created a database (example above is database_name) on your local server, which you can do with the createdb shell command.
Validate Records
Perform similar checks to those described in the section above.
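For example, the same kind of spot check used against the fork can be run against the locally restored copy (assuming the database_name example above):
$ psql -d database_name -c "SELECT count(*) FROM wills WHERE status = 'APPROVED';"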
MySQL Back-ups
MySQL back-ups can be managed via the command-line tools (mysqldump and mysql) or by using phpMyAdmin, the web-based tool that comes pre-installed on most LAMP servers. You can also use GUI tools such as TablePlus.
A common workflow is to use mysqldump and mysql to back up and restore databases from production to the local environment and/or the staging site, and this can be scripted fairly easily using existing .env values for the connection.
Bear in mind that most LAMP hosts restrict database access to connections coming from the IP of the web server, so using an external CLI might not be possible unless your IP has been whitelisted. Some hosts also require you to connect to an "external" rather than an "internal" host address; with Media Temple, for example, external hostnames are prepended with external-db.
In general, privileges can differ a lot from host to host, so you may have to get a little creative in your workflow for dumping and restoring databases.
Getting a Database Dump
As phpMyAdmin is our most common set-up, we'll use it for the example workflow.
Log in to the phpMyAdmin instance on the target LAMP server; on Media Temple, for example, the URL will be at /.tools/phpMyAdmin/current/index.php
Browse to the database you're looking for in the tree browser on the left (you should see all the databases your user has access to), such as db199223_production
Click the Export tab at the top of the main window.
Choose Custom export.
Under Output choose gzipped compression.
Under Object creation options ensure Add DROP TABLE... is selected.
You can also do this via the command line with something like this:
$ ssh [user]@[host].com
# You'll be prompted for your password and then
# be logged into the remote shell
$ cd to/site/root
$ mysqldump -h internal-db.[db-host].[host].com --add-drop-table -u[user] -p[password] [database_name] > backups/database-backup-`date '+%Y.%m.%d'`.sql
# Now you'll log back out of the remote and pull
# the backup file down
$ exit
# Back to your local shell
$ rsync -rzv [user]@[host].com:~/[site-root]/backups/database\* .
# That will copy all back-up files, but you can
# specify the exact file, too
Restoring to Another Server
Let's use TablePlus for this part.
Make sure your local MySQL server is installed and running, and that you have a target database set up to import into.
Connect to your local instance and select the target database.
Choose File → Import → From SQL Dump... and select your file from the step above.
Ensure that the database has imported successfully without errors and runs properly on your local environment (see below).
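If you'd rather restore from the shell instead of a GUI, the dump pulled down earlier can be loaded with the mysql client; the user, database name, and file name below are placeholders:
# gunzip the file first if it was exported with gzip compression
$ mysql -u [user] -p [database_name] < database-backup-[date].sql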
Sanitising the Database
In order to run properly in an environment other than the one that the database dump was taken from, references to the host in the data need to be updated.
Browse to the wp_options table.
Change references to the production site in siteurl and home to refer to the local or staging site, as required (see the example query after these steps).
Check that the site functions properly on your local environment, that images load and that routing and CMS access work as expected.
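If you prefer to make the siteurl and home changes with a query rather than editing the rows by hand, here's a sketch assuming the default wp_ table prefix and a local URL of http://localhost:8888 (adjust both to suit):
UPDATE wp_options SET option_value = 'http://localhost:8888' WHERE option_name IN ('siteurl', 'home');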