Simple Way to Back Up Files from an Ubuntu Server to Amazon S3
Tips, Tricks and Tutorials · 17 MAY 2016

You can never have too many backups, and this is a simple way of backing up files from an Ubuntu server to the Amazon S3 cloud storage system.


For this, you will obviously need an Ubuntu server, an Internet connection, and of course, an Amazon AWS account.

First things first, you’ll need to generate Amazon AWS access keys, which you do from the AWS Security Credentials page (Access Keys section) in the AWS console.

Write both of these (the Access Key ID and Secret Access Key) down somewhere safe, because you definitely don’t want to lose them. (Maybe a Google Doc might be a good idea here?)

Now head over to the S3 Management page in the AWS console, where you will need to create the new bucket (or a folder in an existing bucket) in which you want to store your backed-up files.

With your bucket created and your access details at hand, log in to your Ubuntu server and install the super useful, Amazon S3-focused s3cmd package:

sudo apt-get install s3cmd

Next, configure it by entering the requested information (your Access Key details will be needed here). Note that you have the option to encrypt the files in transit, and if you choose to do so, it is probably worth jotting down the password in that previously mentioned Google Docs file of yours!

s3cmd --configure

Run the connection test, and if everything passes, you should be good to go. You can check your current buckets by doing a directory listing with s3cmd:

s3cmd ls
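Incidentally, if you would rather create the backup bucket from the command line than through the S3 Management page, s3cmd has an mb (make bucket) command for that; the bucket name below is just the example used later in this post:

```shell
# Create a new S3 bucket from the command line.
# 'server-backup' is the example bucket name from this post.
s3cmd mb s3://server-backup
```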

You are just about there now. To do the file backup, we’ll use s3cmd’s built-in sync command. To push files to Amazon S3, the parameters are given in the order of local source first, then S3 target. So for example, if we have an S3 bucket called server-backup and want to back up our user account’s home directory to S3, the sync call would look like this:

s3cmd sync ~/* s3://server-backup
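Before pushing anything for real, it can be reassuring to preview the transfer with s3cmd’s --dry-run flag, which lists what would be uploaded without actually touching the bucket. A quick sketch, using the same example bucket:

```shell
# Preview the sync without uploading anything.
# 'server-backup' is the example bucket name from this post.
s3cmd sync --dry-run ~/* s3://server-backup
```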

You can of course get all clever and target specific folders, exclude or include files and folders using wildcard patterns, and so on (see the documentation for more). For example, to skip everything inside .svn folders, use:

s3cmd sync --exclude '*.svn/*' ~/* s3://server-backup
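Multiple --exclude options can also be stacked in a single call; here is a sketch (the extra patterns are purely illustrative, not from the original command):

```shell
# Back up the home directory while skipping version-control
# metadata and temporary files (patterns are examples only).
s3cmd sync \
  --exclude '*.svn/*' \
  --exclude '*.tmp' \
  ~/* s3://server-backup
```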

If you are happy with the sync result, then all that is left is to throw the command into a short bash script, give it execute rights, and add it to the cron scheduled tasks system. For example, create the file cron_s3_backup.sh:

nano /home/craiglotter/cron_s3_backup.sh

Add this text:

#!/bin/bash
s3cmd sync /home/craiglotter/* s3://server-backup/craiglotter/

Save, and make the file executable:

chmod +x ~/cron_s3_backup.sh

Finally, add it to the cron in the usual manner. Open the crontab for editing:

crontab -e

Add the following line for a daily backup at 07:00 every morning:

0 7 * * * bash /home/craiglotter/cron_s3_backup.sh >/dev/null 2>&1
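If you would rather keep the output around for troubleshooting instead of discarding it, point the redirection at a log file instead of /dev/null (the log path here is just an example):

```shell
# Daily at 07:00, appending all output to a log file.
0 7 * * * bash /home/craiglotter/cron_s3_backup.sh >> /home/craiglotter/s3_backup.log 2>&1
```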

Done.

About Craig Lotter

Software developer, husband and dad to two little girls. Writer behind An Exploring South African. I don't have time for myself any more.