Backups (Wheezy)

Disk space is plentiful these days; there is no excuse not to have an extensive backup program.

rsyncd

If you have a master-slave configuration, connected over a secure local network or a crossover cable, rsyncd can be a simpler solution than permitting rsync connections over ssh. In my paranoia, I prefer that the slave not have the slightest chance of writing to the master.

/etc/rsyncd.secrets

user1:pass1
user2:pass2

Restrict its permissions so the daemon can read it but ordinary users cannot:

chmod 640 /etc/rsyncd.secrets

/etc/default/rsync

RSYNC_ENABLE=true
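
With RSYNC_ENABLE set, Debian's init script will actually start the daemon once the config below is in place:

/etc/init.d/rsync start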

/etc/rsyncd.conf

# In my config, only two parties have access to the /docs folders: the user
# each site runs under, and the www-data group. The daemon only needs the
# gid, so there is no reason to give it a real uid.
uid = nobody
gid = www-data
max connections = 3
socket options = SO_KEEPALIVE
# www-data has read-only access anyway, but just to be sure.
read only = true
# Bind to our eth1 local ip
address = 192.168.0.1
# Only let our friend in.
hosts allow = 192.168.0.2
hosts deny = *
list = true
use chroot = true
ignore nonreadable = true
secrets file = /etc/rsyncd.secrets
# These take wildcard patterns, not bare extensions.
dont compress = *.png *.jpg *.gif *.zip *.7z *.rar
# Make sure you add an entry for this facility in rsyslog.conf
# (see the snippet after this config).
# The log file is the only way you'll find out what is really going wrong.
syslog facility = local4

[module1]
  path = /home/site1/docs
  auth users = user1
 
[module2]
  path = /home/site2/docs
  auth users = user2
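
To give those log messages their own file rather than leaving them mixed into /var/log/syslog, add a matching rsyslog entry. A minimal sketch; the drop-in filename and log path are my assumptions:

/etc/rsyslog.d/rsyncd.conf

local4.*    /var/log/rsyncd.log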

Remote User

For each module/site, I make a user on the slave server to handle the backups.
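
Creating such a user might look like this; a sketch using Debian's adduser, with the comment string my own:

adduser --disabled-password --gecos "module1 backup" user1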

/home/user1/sync.sh

#!/bin/sh
# The .rpass file contains user1's password as specified in the main server's
# rsyncd.secrets file, and nothing else.
/usr/bin/rsync -a --password-file=/home/user1/.rpass user1@192.168.0.1::module1 /home/user1/docs
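
rsync will refuse a password file that other users can read, so lock it down:

chmod 600 /home/user1/.rpass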

crontab -e

rsync is really fast; running it only once an hour is, if anything, on the conservative side.

14  *   *   *   *    /home/user1/sync.sh
17  7   *   *   *    /home/user1/backup.sh

/home/user1/backup.sh

#!/bin/sh
# If you need it. This ends up creating daily backups, rotating over the course of a week.
stamp="$(date +%a)"
file="/storage/bhomeback/docs.$stamp.tar.bz2"
if [ -e "$file" ]
then
  /bin/rm "$file"
fi
# Redirect order matters: stdout to /dev/null first, then stderr to the same
# place, to silence tar's "Removing leading `/'" chatter.
/bin/tar -cjf "$file" /home/user1/docs > /dev/null 2>&1
/bin/chmod 0640 "$file"
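
Restoring is the reverse; tar strips the leading / at create time, so extract from the root (a sketch using Monday's archive):

/bin/tar -xjf /storage/bhomeback/docs.Mon.tar.bz2 -C /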

Database Backup

I run the following twice per day from /etc/cron.d. Since the vast majority of my tables are InnoDB, --single-transaction takes a consistent snapshot without locking the tables, so the dump runs fine even against the main database server. It still eats no small amount of I/O and CPU, however, so you may prefer to run it off a slave.

dbbackup.sh

#!/bin/sh
STAMP="$(date +%a-%H)"
FILE="/storage/dbback/db-$STAMP.sql"
/usr/bin/mysqldump --all-databases --events --single-transaction > "$FILE"
# bzip2 carries the file's mode over to the .bz2 it creates.
/bin/chmod 0640 "$FILE"
/bin/bzip2 -f9 "$FILE"
/bin/chgrp dbback "$FILE.bz2"
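
Restoring one of these dumps is the reverse pipe; the stamp in the filename is one instance of the pattern above, and mysql needs credentials with rights to recreate everything:

/bin/bunzip2 -c /storage/dbback/db-Mon-07.sql.bz2 | /usr/bin/mysql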

Backup Exchange

If your data is really important to you, you will keep remote backups: different cities, and different hosting providers.

While you might start out with a VPS for this sort of thing, storing a meaningful amount of data quickly becomes expensive, and I/O-intensive tasks are not friendly to the other users on the box, even if your host doesn't keep good track of that.

A single backup of all my sites, compressed, weighs in at 25 GB. If I want to store a week of this, have room to compress/decompress as needed, and have room to grow, a VPS from a good provider costs as much as or more than a decent dedicated server.

The result is that I have a lot of ssh keys, and commands of the form

/usr/bin/sftp -P sshport userback@remotehost:/storage/somebackupfolder/something-$STAMP.sql.bz2 /storage/somebackupfolder/

Where $STAMP is a `date +%a` call as above.
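
Setting up one of those keys is the usual routine; the key filename is my assumption, and sshport/remotehost are the placeholders from the command above:

ssh-keygen -t rsa -f ~/.ssh/backup_key -N ""
# Append backup_key.pub to userback's ~/.ssh/authorized_keys on remotehost,
# then point sftp at it:
/usr/bin/sftp -i ~/.ssh/backup_key -P sshport userback@remotehost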

There are certainly prettier systems, but this works.