Tag Archives: backup

Simple way to Backup Files from an Ubuntu Server to Amazon S3 Cloud Tips, Tricks and Tutorials 17 MAY 2016

You can never have too many backups, and this is a simple way of backing up files from an Ubuntu server to the Amazon S3 cloud storage system.


For this, you will obviously need an Ubuntu server, an Internet connection, and of course, an Amazon AWS account.

First things first, you’ll need to generate Amazon AWS access keys, which you do from the AWS Security Credentials page (Access Keys section) in the AWS console.

Write both of these (the Access Key ID and the Secret Access Key) down somewhere safe, because you definitely don’t want to lose them. (Maybe a Google Doc might be a good idea here?)

Now head over to the S3 Management page in the AWS console, where you will need to create a new bucket (or a folder in an existing bucket) in which to store your backed-up files.

With your bucket created, and your access details at hand, head into your Ubuntu server and install the super useful Amazon S3 targeted s3cmd package:

sudo apt-get install s3cmd

Next configure it by entering the requested information (your Access Key details will be needed here). Note, you do have the option to encrypt the files in transit, and if you choose to do so, it is probably worth your while to jot down the password in that previously mentioned Google Docs file of yours!

s3cmd --configure

Run the connection test and if everything passes, you should be good to go. You can check your current buckets by doing a directory listing with s3cmd:

s3cmd ls

You are just about there now. To do the file backup, we’ll use s3cmd’s built-in sync command. To push files to Amazon S3, we declare the parameters in the order of local files first, then target directory. So for example, if we have an S3 bucket called server-backup, and want to back up our user account’s home directory to S3, the sync call would look like this:

s3cmd sync ~/* s3://server-backup

You can of course get all clever and target specific folders, exclude or include files and folders using wildcard characters, etc. (See the documentation for more). For example, here I exclude .svn folder files using:

s3cmd sync --exclude '*.svn' ~/* s3://server-backup

If you are happy with the sync result, then all that is left is to throw the command into a short bash file, give it execute rights and add it to the cron scheduled tasks system. So for example, create the file cron_s3_backup.sh:

nano /home/craiglotter/cron_s3_backup.sh

Add this text:

#!/bin/bash
s3cmd sync /home/craiglotter/* s3://server-backup/craiglotter/

Save, and make the file executable:

chmod +x ~/cron_s3_backup.sh

Finally, add it to the cron in the usual manner. Open the crontab for editing:

crontab -e

Add the following line for a daily backup at 07:00 in the morning.

0 7 * * * bash /home/craiglotter/cron_s3_backup.sh >/dev/null 2>&1
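If you would rather keep a record of each run than throw the output away, a variant of that crontab entry (the log path here is just an example) could append everything to a log file instead:

```
0 7 * * * bash /home/craiglotter/cron_s3_backup.sh >> /home/craiglotter/s3_backup.log 2>&1
```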

Done.

Ubuntu: Simple Folder Backups with Grsync CodeUnit 30 DEC 2011

Making backups of important folders should become routine for everyone, simple as that. Back up to another internal drive, back up to a flash disk, back up to an external drive, or back up to a remote source – it doesn’t really matter, as long as you make the effort to back up on a regular basis.

Linux users will be familiar with the powerful rsync command line file and directory synchronization tool, but if you are an Ubuntu user then chances are you’re not particularly keen on messing about on the command line. So enter a great little alternative called Grsync, which is basically a graphical user interface for rsync!

In a nutshell, Grsync allows you to synchronize folders, files and make backups, while utilizing the power of the tried and tested rsync in the background to do the actual heavy lifting.

It costs nothing and is open source. It can be effectively used to synchronize local directories, and it supports remote targets as well (even though it doesn’t support browsing the remote folder). Grsync is available on a fair number of Linux flavors (like Ubuntu), as well as Windows and Mac OS X.

Note that you do need the rsync command line tool installed on your system in order for Grsync to work, but don’t fret too much about that as nowadays most distros come with it preinstalled.
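Under the hood, what Grsync builds for you is just a plain rsync invocation. As a rough sketch (the source and destination paths below are made-up demo directories, not anything Grsync mandates), a local folder backup boils down to something like this:

```shell
# Demo of the kind of rsync call Grsync effectively drives for a local backup.
# SRC/DST are hypothetical demo paths; -a preserves permissions and times,
# -v prints what gets copied. Add --delete only if you want a true mirror.
SRC="$HOME/grsync-demo-src"
DST="$HOME/grsync-demo-dst"
mkdir -p "$SRC" "$DST"
echo "important notes" > "$SRC/notes.txt"

if command -v rsync >/dev/null 2>&1; then
    rsync -av "$SRC/" "$DST/"
else
    cp -R "$SRC/." "$DST/"   # crude stand-in if rsync is not installed
fi
```

Note the trailing slash on "$SRC/" – to rsync that means “copy the folder’s contents”, rather than the folder itself.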

Feature list taken from the project site: (To be honest, it’s not a very well written list, but oh well)

...

Ubuntu Terminal: How to Back up Your Crontab File CodeUnit 24 DEC 2010

Generating a backup of a crontab file is actually pretty simple, making use of the built-in -l switch that comes with the command:

sudo crontab -l > ~/path/to/your/backup/file

See what we did there? The -l switch spits out the crontab file’s contents and we simply redirect that into a file. Easy peasy. Note that sudo crontab and crontab work on two different files, meaning you might want to back up both if you make use of the two different cron job files…
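Putting that together for both crontabs, a minimal sketch might look like this (the dated filenames are just a suggested convention, not anything crontab requires):

```shell
# Back up the current user's crontab to a dated file; the fallback covers
# the case where no crontab has been installed for this user yet.
STAMP="$(date +%Y%m%d)"
USER_BACKUP="$HOME/crontab-user-$STAMP.bak"
crontab -l > "$USER_BACKUP" 2>/dev/null || echo "# no user crontab" > "$USER_BACKUP"

# Root's crontab lives separately, so back it up too (uncomment on your server):
# sudo crontab -l > "$HOME/crontab-root-$STAMP.bak"
```

Restoring later is simply the reverse trip: crontab "$USER_BACKUP" loads the saved file back in as your crontab.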

Ubuntu: A Bash Script to Backup All MySQL Databases Running on a Server CodeUnit 28 JUL 2010

The following bash script is written to automate the process of backing up all your various MySQL databases running on either a local or remote MySQL server, using the useful mysqldump utility to do the actual backups.

What the script does is pretty simple to understand really.

First, you define all your server connections. Then it queries the server to find out which databases are currently running in the MySQL Server instance. Armed with this list, it runs through them all (ignoring the ones you specified on the ignore list) and pulls down a mysqldump of each database, gzipping it to its final backup file name.

Simple eh? So let’s see it then:

#!/bin/bash
MyUSER="mysql_user_account"
MyPASS="mysql_user_account_password"
MyHOST="localhost"

# Linux bin paths, change this if it can't be autodetected via which command
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
CHOWN="$(which chown)"
CHMOD="$(which chmod)"
GZIP="$(which gzip)"

# Backup destination directory, change this if you have some other location
DEST="/var/mysql_backups"

# Main directory where backup will be stored
MBD="$DEST/sql_dumps"

# Get hostname
HOST="$(hostname)"

# Get date in yyyymmdd-hhmmss format
NOW="$(date +"%Y%m%d-%H%M%S")"

# File to store current backup file
FILE=""

# Store list of databases
DBS=""

# DO NOT BACKUP these databases
IGGY="test"

[ ! -d "$MBD" ] && mkdir -p "$MBD" || :
# Get all database list first
DBS="$($MYSQL -u $MyUSER -h $MyHOST -p$MyPASS -Bse 'show databases')"

echo "Launching backup script at $(date)"

for db in $DBS
do
    skipdb=-1
    if [ "$IGGY" != "" ]; then
        for i in $IGGY; do
            [ "$db" == "$i" ] && skipdb=1 || :
        done
    fi

    if [ "$skipdb" == "-1" ]; then
        FILE="$MBD/$db.$MyHOST.$NOW.sql.gz"
        # do it all in one job in a pipe:
        # connect using mysqldump to the selected mysql database
        # and pipe it out to a gz file in the backup dir
        echo "Starting backup process for $db [$(date)]"
        $MYSQLDUMP --opt --compress --single-transaction -u $MyUSER -h $MyHOST -p$MyPASS $db | $GZIP -9 > $FILE
        echo "-- Complete ($FILE) [$(date)] --"
    fi
done

echo "Backup script completed execution at $(date)"

And we’re done. Nifty. (And damn useful to boot!)

Ubuntu Terminal: How to Quickly Create a SQL Dump File from a MySQL Database CodeUnit 05 JUL 2010

Backing up your MySQL database or generating a copy of it to shift around is quite a simple affair thanks to the powerful mysqldump command that ships with MySQL.

To generate a backup SQL dump, simply execute:

mysqldump -h localhost -u [MySQL user, e.g. root] -p[database password] -c --add-drop-table --add-locks --all --quick --lock-tables [name of the database] > sqldump.sql

Note the lack of a space between -p and the password! Obviously if you don’t have a password assigned, simply omit the -p switch.

And that is it, all done! :)

Note, restoring a database from a mysqldump is as simple as: mysql -u [MySQL user, e.g. root] -p[database password] < sqldump.sql

Failed Flash & Missing Backups CodeUnit 03 MAY 2010

Sigh. After years of solid and faithful service, despite the lack of care, my Transcend JetFlash 4 GB USB flash drive finally borked and said its last goodbyes. But it didn’t go quietly into the night.

No, it kicked and screamed, corrupted and declared itself write-protected. I spent hours combing the Internet, trying out various solutions, tricks and suggestions, none of which worked and none of which could get the drive back into working, usable condition.

From low-level formats to registry hacks and just plain begging and pleading, all was for naught as I finally came to the conclusion that it was dead and dusted, leaving me only with one recourse – to open it up and operate on it in the hopes of a miracle happening.

Unfortunately that was not to be as my clumsy hands sliced the top of a connector clean off its housing and brought with it the finality of the waste bin.

But losing a faithful flash drive was not the worst part of this ordeal. No, the worst part was that I, a software technician of all people, had failed myself in that I didn’t keep any backups of the important data on the drive. Not a single backup whatsoever. Important personal documents, desktop application projects in mid development, databases built up over years, all gone because I was too lazy to keep up a decent backup programme.

But the loss now behind me, I have vowed to change my ways and send out this warning to those of you out there like me – back up your data, synchronize your drives, don’t fall into complacency.

Even if it is just by using the simplicity of rsync or its graphical counterpart, grsync, schedule your backups and stick to the schedule – or, as I have now done, place a perpetual reminder in your calendar.

For you never know when the blight that is drive failure will strike again…

MySQL: How to Duplicate a Table Tips, Tricks and Tutorials 10 SEP 2009

There exists in this world a nice little SQL statement known as the SELECT INTO statement, one that works beautifully in most database systems when you want to create a backup of an existing table. In MySQL however this doesn’t work straight out of the box, which of course is a pain in the ass.

So how does one duplicate this functionality in MySQL then?

Well, funnily enough, it’s actually pretty damn simple. The CREATE TABLE IF NOT EXISTS phrase is key here, and combining this with a select statement will in fact create a copy of your existing table, taking all the data from your source table and dumping it into your newly created clone table. (Note that you can control what data gets copied into the new table by adding an appropriate WHERE clause to the SELECT statement, and control which columns get created by specifying column names in place of the asterisk.)

So for example: “CREATE TABLE IF NOT EXISTS `my_backup_table` SELECT * FROM `my_table`”

…will create the `my_backup_table` if it doesn’t exist and copy over all data currently contained in `my_table`.
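To illustrate the filtering mentioned above, a quick sketch (the column names and the date cut-off here are made up for the example) that copies only two columns of the newer rows would be:

```sql
CREATE TABLE IF NOT EXISTS `my_backup_table`
SELECT `id`, `name`
FROM `my_table`
WHERE `created_at` >= '2009-01-01';
```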

Pretty useful, no?

It is however important to note that this process does come with a few drawbacks. Firstly, attribute data like primary keys, comments, etc. get lost in the process. Also, don’t expect any associated triggers to make the trip either. Finally, certain default values like CURRENT_TIMESTAMP get converted to 0000-00-00 00:00:00 as well, just to add insult to injury.
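If you do need the keys and other attributes to survive the trip, a common MySQL workaround (sketched here with the same example table names) is to split the clone into two statements: CREATE TABLE … LIKE copies the full table definition, and INSERT … SELECT then brings over the data:

```sql
CREATE TABLE IF NOT EXISTS `my_backup_table` LIKE `my_table`;
INSERT INTO `my_backup_table` SELECT * FROM `my_table`;
```

Triggers still won’t come along, but indexes, primary keys and column defaults do.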

But then, if this backup table really is just about keeping the existing data safe, then I guess this really shouldn’t matter all that much to you in the first place! :)
