Note: “Easy” is always a relative concept. What is simple for one might be relatively complicated for another.
So yesterday it happened! My SD card got corrupted and my RPi3 did not want to boot (VFS kernel panic), and I could not repair the file system no matter what I tried. Luckily I could still access my ext4 partition that holds all my data. But some parts were corrupt, and which parts would those be?
Introduction
For years and years I have been making backups of my servers. Currently those are a Banana Pi with an SSD on the SATA port acting as file server, web server, DLNA server and backup pimatic (for my central heating), and my Raspberry Pi 3 running pimatic and motion.
My method requires another server or NAS to store and retrieve your backups.
The best backup is of course a full system backup: everything, every day! That uses a lot of space, especially if you want to keep a few backups, so my backup method does not do that. In a real production environment you would use full backups for the fastest “no tweaking necessary” restore, when possible using snapshot technology.
My Pimatic Setup
My system runs MiniBian, but any Linux will do for the most part. The “packages list” backup and restore scripts are Debian-based though. If you are not using some kind of Debian (MiniBian, Jessie Lite, Raspbian, etc.) you need to create your own “packages list” scripts.
I have 2 users: root and myself.
Pimatic is installed under my userid (the pimatic install guide uses a user pi which is fine as well).
All my data (not much) is under my user id (referred to below as <username>). All my general scripts are in ~/bin; this includes the backup and restore scripts, even when they are run by root.
Pimatic is under ~/pimatic/pimatic-app. My pimatic shell and python scripts are under ~/pimatic/scripts. Other pimatic “stuff” is also below ~/pimatic.
(I also use motion for motion detection on my RPi3, and it is backed up/restored at the same time using the same method. You might have other packages.)
If you want to do everything under root, which is always a bad idea, you should use /root as the directory, with /root/bin; otherwise you can stick to /home/<username>.
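A hedged side note: on most Debian-based systems the default ~/.profile already adds ~/bin to your PATH when that directory exists, so it only needs to be created once:
mkdir -p /home/<username>/bin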
This HowTo consists of a Backup part and a Restore part in 2 posts: a Backup post (this one) and a Restore post (the second one).
Backup
As said: I back up to my NAS. For that you need to have a CIFS share (Windows share), an NFS export or the like available.
As Jessie Lite had a bug where it tried to mount from fstab before the network was up, I do the mount from /etc/rc.local, as that is executed after everything is “up and running”. I don’t know whether that bug is still there, but my method is more fail-safe (I think). So in my /etc/rc.local I have, among others:
sleep 5
mount 192.168.144.140:/DataVolume/Public /media/PUBLIC -t nfs -o rw,soft,intr,rsize=8192,wsize=8192,auto
This is for an NFS mount. To make this work on your pimatic you need to do:
sudo mkdir /media/PUBLIC
sudo apt-get install cifs-utils nfs-common
Of course the mount command for your share and the local folder you mount the share on can vary. I back up to the public share of my NAS as I do not want the complication of passwords and the like, also because my NAS is not accessible from the internet.
If you use CIFS (Windows shares, aka Samba), you would use something like:
mount -t cifs -o rw,iocharset=utf8,file_mode=0777,dir_mode=0777 //192.168.144.140/Public /media/PUBLIC
or with username/password:
mount -t cifs -o rw,username=USERID,password=PASSWORD,uid=USERID,gid=USERID,iocharset=utf8,file_mode=0777,dir_mode=0777 //192.168.144.140/Public /media/PUBLIC
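One caveat: if the share did not mount, everything written below /media/PUBLIC silently ends up on the SD card instead of on the NAS. A minimal sketch of a guard you could put at the top of any script that writes there (mountpoint is part of util-linux, so it is already present on Debian):
# abort when the NAS share is not mounted
if ! mountpoint -q /media/PUBLIC; then
    echo "backup share not mounted, aborting" >&2
    exit 1
fi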
I have a general backup script, found on the internet, that I have been using for many years already. This script does a full backup on the 1st day of the month (which I disabled) and a full backup every Sunday. On the other days an incremental “what has changed” backup is made.
#!/bin/bash
# full and incremental backup script
# created 07 February 2000
# Based on a script by Daniel O'Callaghan <danny@freebsd.org>
# and modified by Gerhard Mourani <gmourani@videotron.ca>
#Change the 5 variables below to fit your computer/backup
COMPUTER=RPi3 # name of this computer
DIRECTORIES="/etc /home/<username>" # directories to backup
BACKUPDIR=/media/PUBLIC/server-backups/RPi3 # where to store the backups
TIMEDIR=/media/PUBLIC/server-backups/RPi3/last # where to store time of last backups
TAR=/bin/tar # name and location of tar
#You should not have to change anything below here
PATH=/usr/local/bin:/usr/bin:/bin
DOW=`date +%a` # Day of the week e.g. Mon
DOM=`date +%d` # Date of the Month e.g. 27
DM=`date +%d%b` # Date and Month e.g. 27Sep
TODAY=`date +%Y%m%d` # Year, Month, Day e.g. 20051026
# On the 1st of the month a permanent full backup is made
# Every Sunday a full backup is made - overwriting last Sundays backup
# The rest of the time an incremental backup is made. Each incremental
# backup overwrites last weeks incremental backup of the same name.
#
# if NEWER = "", then tar backs up all files in the directories
# otherwise it backs up files newer than the NEWER date. NEWER
# gets its date from the file written every Sunday.
## Monthly full backup
#if [ $DOM = "01" ]; then
# NEWER=""
# NOW=`date +%d-%b`
#
# # Update full backup date
# echo $NOW > $TIMEDIR/$COMPUTER-full-date
#
# $TAR $NEWER -cjf $BACKUPDIR/$COMPUTER-$TODAY.tar.bz2 $DIRECTORIES
#fi
# Weekly full backup
if [ $DOW = "Sun" ] || [ $DOW = "sun" ] || [ $DOW = "Zo" ] || [ $DOW = "zo" ]; then
NEWER=""
NOW=`date +%d-%b`
# # Update full backup date
echo $NOW > $TIMEDIR/$COMPUTER-full-date
$TAR $NEWER -cjf $BACKUPDIR/$COMPUTER-$TODAY.$DOW.tar.bz2 $DIRECTORIES
# Make incremental backup - overwrite last weeks/days
else
# Get date of last incremental backup
NEWER="--after-date `cat $TIMEDIR/$COMPUTER-incremental-date`"
$TAR $NEWER -cjvf $BACKUPDIR/$COMPUTER-$TODAY.$DOW.tar.bz2 $DIRECTORIES | tee $BACKUPDIR/$COMPUTER-$TODAY.$DOW.log
NOW=`date +%d-%b`
# Update last incremental backup date
echo $NOW > $TIMEDIR/$COMPUTER-incremental-date
fi
Note: this requires a one-time action first, namely the creation of the backup folders on your NAS (-p also creates the parent folders):
mkdir -p /media/PUBLIC/server-backups/RPi3/last
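Before trusting cron with it, make the backup script executable and give it one manual run (assuming you saved it as autobackup_system.sh in ~/bin):
chmod +x /home/<username>/bin/autobackup_system.sh
sudo /home/<username>/bin/autobackup_system.sh
ls -lh /media/PUBLIC/server-backups/RPi3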
A couple of lines in the script are important:
COMPUTER=RPi3 # name of this computer
DIRECTORIES="/etc /home/<username>" # directories to backup
BACKUPDIR=/media/PUBLIC/server-backups/RPi3 # where to store the backups
TIMEDIR=/media/PUBLIC/server-backups/RPi3/last # where to store time of last backups
TAR=/bin/tar # name and location of tar
COMPUTER determines what your backups are going to be named; the result will be something like “RPi3-20161030.Sun.tar.bz2”. DIRECTORIES determines which folders will be backed up.
Note the lines starting with $TAR $NEWER -cjf: the j means “use bzip2 to compress”, which requires bzip2 of course (sudo apt-get install bzip2). The files will end with .tar.bz2. If you want to use the traditional gzip-compressed tars, you specify $TAR $NEWER -czf and the files will end with .tar.gz or .tgz. These gzip-compressed tars will be 10% to 30% bigger.
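To check that a backup really contains what you expect, you can list the contents of the archive. A hedged example, with the file name of course matching your own COMPUTER name and date (use -tzf for gzipped tars):
tar -tjf /media/PUBLIC/server-backups/RPi3/RPi3-20161030.Sun.tar.bz2 | head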
Another important line is:
if [ $DOW = "Sun" ] || [ $DOW = "sun" ] || [ $DOW = "Zo" ] || [ $DOW = "zo" ]; then
This is for the weekly Sunday backup. Depending on your LOCALE you need to specify the abbreviation of your name for Sunday (Sonntag, dimanche, domenica, Søndag, etc.). My LOCALE is Dutch on one of my systems.
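If you are not sure which abbreviation your system uses, simply ask date itself and use the output in the if line above:
date +%a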
This backup script is copied into my own user’s ~/bin folder (as that one is backed up) but run by root from the root crontab every night at 23:41.
41 23 * * * /home/<username>/bin/autobackup_system.sh >/dev/null 2>&1
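In case you have never edited the root crontab before, a minimal sketch of how to add this line:
sudo crontab -e
# then append at the bottom:
41 23 * * * /home/<username>/bin/autobackup_system.sh >/dev/null 2>&1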
Crontabs
The above mentioned backup script will nicely back up your system config (/etc) and your user data and user(!) pimatic install (/home/<username>).
However, we also have important commands in our user crontab and in our root crontab. If we get a crash we might lose those as well. For that we use 2 simple command lines: one in the root crontab and one in the user crontab.
root crontab: 33 23 * * * /usr/bin/crontab -l > /home/<username>/root-crontab
user crontab: 33 23 * * * /usr/bin/crontab -l > /home/<username>/my-crontab
Again saved to the user folder, which is automatically backed up.
Our installed Packages
We also have a lot of installed packages which we would like to automatically restore, so let’s backup a list of our installed packages that we can use to (semi-)automatically restore them.
We use a script in the root crontab to make a backup to our user data folder (as that is automatically backed up).
The script (let’s call it backup_package_lists_repos.sh):
#!/bin/bash
# save the package selection, apt sources and repository keys
# to the user folder, so they end up in the nightly backup
dpkg_list_dir="/home/<username>/installed_packages"
mkdir -p "$dpkg_list_dir"
dpkg --get-selections > "$dpkg_list_dir/Package.list"
cp -R /etc/apt/sources.list* "$dpkg_list_dir/"
apt-key exportall > "$dpkg_list_dir/Repo.keys"
The root crontab line:
34 23 * * * /home/<username>/bin/backup_package_lists_repos.sh
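Again: make the script executable and give it one manual run to check the result:
chmod +x /home/<username>/bin/backup_package_lists_repos.sh
sudo /home/<username>/bin/backup_package_lists_repos.sh
head /home/<username>/installed_packages/Package.list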
Concluding for your crontabs:
root crontab:
33 23 * * * /usr/bin/crontab -l > /home/<username>/root-crontab
34 23 * * * /home/<username>/bin/backup_package_lists_repos.sh
41 23 * * * /home/<username>/bin/autobackup_system.sh >/dev/null 2>&1
user crontab:
33 23 * * * /usr/bin/crontab -l > /home/<username>/my-crontab
Now you have your automatic backup arranged every night.
Restore scripts
This post is about the backup, but we need to create and back up these restore scripts now, before we really need them during the restore (of course, if you have this info available for copy&paste, you do not need to do this now).
These scripts will also be saved in /home/<username>/bin.
restore_all_installed_packages.sh:
#!/bin/bash
# with sudo or as root
# restore the repository keys, apt sources and the saved package selection
dpkg_list_dir="/home/<username>/installed_packages"
apt-key add "$dpkg_list_dir/Repo.keys"
cp -R "$dpkg_list_dir"/sources.list* /etc/apt/
apt-get update
apt-get install -y dselect
dselect update
dpkg --set-selections < "$dpkg_list_dir/Package.list"
apt-get dselect-upgrade -y
restore_crontabs.sh:
#!/bin/bash
# with sudo or as root
crontab -u <username> /home/<username>/my-crontab
crontab /home/<username>/root-crontab
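These scripts only need to exist in ~/bin so they end up in the backup, but it does not hurt to make them executable right away for the day you need them:
chmod +x /home/<username>/bin/restore_all_installed_packages.sh /home/<username>/bin/restore_crontabs.sh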
Housekeeping
I do not know how much space you have to store your backups, but it is good to do some housekeeping here. So we create a script (rm_backups.sh) that does that for us:
#!/bin/bash
# Actually finds files older than "number of days" + 1
find /media/PUBLIC/server-backups/RPi3 -type f -mtime +19 -name '*.bz2' -exec rm -f {} \;
find /media/PUBLIC/server-backups/RPi3 -type f -mtime +19 -name '*.log' -exec rm -f {} \;
This will remove all .bz2 backups (replace with .tar.gz or .tgz when using gzipped backups) and the accompanying log files older than 20 days. Adapt the paths for your share.
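If you want to see what would be removed before trusting the script, run the same find without the -exec part; it then only prints the matching files:
find /media/PUBLIC/server-backups/RPi3 -type f -mtime +19 -name '*.bz2'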
So when put in the root crontab (total):
33 23 * * * /usr/bin/crontab -l > /home/<username>/root-crontab
34 23 * * * /home/<username>/bin/backup_package_lists_repos.sh
41 23 * * * /home/<username>/bin/autobackup_system.sh >/dev/null 2>&1
51 23 * * * /home/<username>/bin/rm_backups.sh
That is it for this post.