How To Back Up MySQL Databases to Object Storage with Percona on Ubuntu 16.04

  • Databases store some of the most valuable data in your infrastructure.
  • Because of this, it is important to have reliable backups to guard against data loss in the event of an accident or hardware failure.

The Percona XtraBackup backup tools provide a method of performing "hot" backups of MySQL data while the system is running. They do this by copying the data files at the filesystem level and then performing a crash recovery to achieve consistency within the dataset.

  • In a previous guide, we installed Percona's backup utilities and created a series of scripts to perform rotating local backups.
  • This works well for backing up data to a different drive or a network-mounted volume to handle problems with your database machine.

However, in most cases, data should also be backed up off-site, where it can be easily maintained and restored.

  • We can extend our previous backup system to upload our compressed, encrypted backup files to an object storage service.

We will be using DigitalOcean Spaces as an example in this guide, but the basic procedures are likely applicable to other S3-compatible object storage solutions as well.

Prerequisites

Before you begin this guide, you will need a MySQL database server configured with the local Percona backup solution outlined in our previous guide. The full set of guides you need to follow are:

  • Initial Server Setup with Ubuntu 16.04: This guide will help you configure a user account with sudo privileges and set up a basic firewall.

One of the following MySQL installation guides:

  • How To Install MySQL on Ubuntu 16.04: Uses the default package provided and maintained by the Ubuntu team.
  • How To Install the Latest MySQL on Ubuntu 16.04: Uses updated packages provided by the MySQL project.
  • How To Configure MySQL Backups with Percona XtraBackup on Ubuntu 16.04: This guide sets up a local MySQL backup solution using the Percona XtraBackup tools.

In addition to the above tutorials, you will also need to generate an access key and secret key to interact with your object storage account using the API. If you are using DigitalOcean Spaces, you can find out how to generate these credentials by following our How To Create a DigitalOcean Space and API Key guide. You will need to save both the API access key and the API secret value.

When you are finished with the previous guides, log back into your server as your sudo user to get started.

Installing the Dependencies

  • To generate our backups, we will use some Python and Bash scripts and then upload the results to remote object storage for safekeeping.
  • We will need the boto3 Python library to interact with the object storage API. We can install this with pip, Python's package manager.
  • Refresh the local package index, then install the Python 3 version of pip from Ubuntu's default repositories using apt-get by typing:
$ sudo apt-get update
$ sudo apt-get install python3-pip
  • Because Ubuntu maintains its own package life cycle, the version of pip in Ubuntu's repositories is not kept in sync with recent releases. However, we can update to a newer version of pip using the tool itself.
  • We will use sudo to install globally and include the -H flag to set the $HOME variable to a value pip expects:
$ sudo -H pip3 install --upgrade pip

Afterwards, we can install boto3 along with the pytz module, which we will use to compare times accurately using the offset-aware format that the object storage API returns:

$ sudo -H pip3 install boto3 pytz

We should now have all of the Python modules we need to interact with the object storage API.
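If you would like to confirm that both modules are importable by the system's Python 3 interpreter (an optional check, not required by the rest of this guide), you can try importing them from the command line:

$ python3 -c "import boto3, pytz; print(boto3.__version__)"

If a version number is printed without any errors, the libraries are installed correctly.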

Create an Object Storage Configuration File

  • Our backup and download scripts will need to interact with the object storage API in order to upload files and to download older backup artifacts when we need to restore.
  • They will use the access keys we generated in the prerequisites section. Rather than keeping these values in the scripts themselves, we will place them in a dedicated file that our scripts can read.
  • This way, we can share our scripts without fear of exposing our credentials, and we can lock down the credentials more heavily than the scripts themselves.
  • In the last guide, we created the /backups/mysql directory to store our backups and our encryption key. We will place the configuration file here alongside our other assets. Create a file called object_storage_config.sh:
$ sudo nano /backups/mysql/object_storage_config.sh

Inside, paste the following contents, changing the access key and secret key to the values you obtained from your object storage account and the bucket name to a unique value. Set the endpoint URL and region name to the values provided by your object storage service (we will use the values associated with DigitalOcean's NYC3 region for Spaces here):

/backups/mysql/object_storage_config.sh
#!/bin/bash

export MYACCESSKEY="my_access_key"
export MYSECRETKEY="my_secret_key"
export MYBUCKETNAME="your_unique_bucket_name"
export MYENDPOINTURL="https://nyc3.digitaloceanspaces.com"
export MYREGIONNAME="nyc3"

These lines define two environment variables called MYACCESSKEY and MYSECRETKEY to hold our access and secret keys respectively.

The MYBUCKETNAME variable defines the object storage bucket we want to use to store our backup files.

Bucket names must be universally unique, so you must choose a name that no other user has selected. Our script will check the bucket value to see whether it is already claimed by another user and automatically create it if it is available.

We export the variables we define so that any processes we call from within our scripts will have access to these values.

The MYENDPOINTURL and MYREGIONNAME variables contain the API endpoint and the specific region identifier offered by your object storage provider.

For DigitalOcean Spaces, the endpoint will be https://region_name.digitaloceanspaces.com. You can find the available regions for Spaces in the DigitalOcean control panel (at the time of this writing, only "nyc3" is available).

Save and close the file when you are finished.

Anyone who can access our API keys has complete access to our object storage account, so it is important to restrict access to the configuration file to the backup user. We can give the backup user and group ownership of the file and then revoke all other access by typing:

$ sudo chown backup:backup /backups/mysql/object_storage_config.sh
$ sudo chmod 600 /backups/mysql/object_storage_config.sh

The object_storage_config.sh file should now only be accessible to the backup user.
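If you want to verify the result (an optional check), you can list the file and confirm that the mode reads -rw------- with backup as both the owner and group:

$ sudo ls -l /backups/mysql/object_storage_config.sh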

Creating the Remote Backup Scripts

Now that we have an object storage configuration file, we can go ahead and begin creating our scripts. We will be creating the following scripts:

  • object_storage.py: This script is responsible for interacting with the object storage API to create buckets, upload files, download content, and prune older backups. Our other scripts will call this script any time they need to interact with the remote object storage account.
  • remote-backup-mysql.sh: This script backs up the MySQL databases by encrypting and compressing the files into a single artifact and then uploading it to the remote object store. It creates a full backup at the beginning of each day and then an incremental backup every hour afterwards. It automatically prunes all files from the remote bucket that are older than 30 days.
  • download-day.sh: This script allows us to download all of the backups associated with a given day. Because our backup script creates a full backup each morning and then incremental backups throughout the day, this script can download all of the assets needed to restore to any hourly checkpoint.

Along with the new scripts above, we will leverage the extract-mysql.sh and prepare-mysql.sh scripts from the previous guide to help restore our files. You can view the scripts in the repository for this tutorial on GitHub at any time. If you do not want to copy and paste the contents below, you can download the new files directly from GitHub by typing:

$ cd /tmp
$ curl -LO https://raw.githubusercontent.com/do-community/ubuntu-1604-mysql-backup/master/object_storage.py
$ curl -LO https://raw.githubusercontent.com/do-community/ubuntu-1604-mysql-backup/master/remote-backup-mysql.sh
$ curl -LO https://raw.githubusercontent.com/do-community/ubuntu-1604-mysql-backup/master/download-day.sh
  • Be sure to inspect the scripts after downloading to make sure they were retrieved successfully and that you approve of the actions they will perform. If you are satisfied, mark the scripts as executable and then move them into the /usr/local/bin directory by typing:
$ chmod +x /tmp/{remote-backup-mysql.sh,download-day.sh,object_storage.py}
$ sudo mv /tmp/{remote-backup-mysql.sh,download-day.sh,object_storage.py} /usr/local/bin
  • Next, we will set up each of these scripts and discuss them in more detail.

Create the object_storage.py Script

  • If you did not download the object_storage.py script from GitHub, create a new file in the /usr/local/bin directory called object_storage.py:
$ sudo nano /usr/local/bin/object_storage.py
  • Copy and paste the script contents into the file:
/usr/local/bin/object_storage.py
#!/usr/bin/env python3

import argparse
import os
import sys
from datetime import datetime, timedelta

import boto3
import pytz
from botocore.client import ClientError, Config
from dateutil.parser import parse

# "backup_bucket" must be a universally unique name, so choose something
# specific to your setup.
# The bucket will be created in your account if it does not already exist
backup_bucket = os.environ['MYBUCKETNAME']
access_key = os.environ['MYACCESSKEY']
secret_key = os.environ['MYSECRETKEY']
endpoint_url = os.environ['MYENDPOINTURL']
region_name = os.environ['MYREGIONNAME']


class Space():
    def __init__(self, bucket):
        self.session = boto3.session.Session()
        self.client = self.session.client('s3',
                                          region_name=region_name,
                                          endpoint_url=endpoint_url,
                                          aws_access_key_id=access_key,
                                          aws_secret_access_key=secret_key,
                                          config=Config(signature_version='s3')
                                          )
        self.bucket = bucket
        self.paginator = self.client.get_paginator('list_objects')

    def create_bucket(self):
        try:
            self.client.head_bucket(Bucket=self.bucket)
        except ClientError as e:
            if e.response['Error']['Code'] == '404':
                self.client.create_bucket(Bucket=self.bucket)
            elif e.response['Error']['Code'] == '403':
                print("The bucket name \"{}\" is already being used by "
                      "someone. Please try using a different bucket "
                      "name.".format(self.bucket))
                sys.exit(1)
            else:
                print("Unexpected error: {}".format(e))
                sys.exit(1)

    def upload_files(self, files):
        for filename in files:
            self.client.upload_file(Filename=filename, Bucket=self.bucket,
                                    Key=os.path.basename(filename))
            print("Uploaded {} to \"{}\"".format(filename, self.bucket))

    def remove_file(self, filename):
        self.client.delete_object(Bucket=self.bucket,
                                  Key=os.path.basename(filename))

    def prune_backups(self, days_to_keep):
        oldest_day = datetime.now(pytz.utc) - timedelta(days=int(days_to_keep))
        try:
            # Create an iterator to page through results
            page_iterator = self.paginator.paginate(Bucket=self.bucket)
            # Collect objects older than the specified date
            objects_to_prune = [filename['Key'] for page in page_iterator
                                for filename in page['Contents']
                                if filename['LastModified'] < oldest_day]
        except KeyError:
            # If the bucket is empty
            sys.exit()
        for object in objects_to_prune:
            print("Removing \"{}\" from {}".format(object, self.bucket))
            self.remove_file(object)

    def download_file(self, filename):
        self.client.download_file(Bucket=self.bucket,
                                  Key=filename, Filename=filename)

    def get_day(self, day_to_get):
        try:
            # Attempt to parse the date format the user provided
            input_date = parse(day_to_get)
        except ValueError:
            print("Cannot parse the provided date: {}".format(day_to_get))
            sys.exit(1)
        day_string = input_date.strftime("-%m-%d-%Y_")
        print_date = input_date.strftime("%A, %b. %d %Y")
        print("Looking for objects from {}".format(print_date))
        try:
            # create an iterator to page through results
            page_iterator = self.paginator.paginate(Bucket=self.bucket)
            objects_to_grab = [filename['Key'] for page in page_iterator
                               for filename in page['Contents']
                               if day_string in filename['Key']]
        except KeyError:
            print("No objects currently in bucket")
            sys.exit()
        if objects_to_grab:
            for object in objects_to_grab:
                print("Downloading \"{}\" from {}".format(object, self.bucket))
                self.download_file(object)
        else:
            print("No objects found from: {}".format(print_date))
            sys.exit()


def is_valid_file(filename):
    if os.path.isfile(filename):
        return filename
    else:
        raise argparse.ArgumentTypeError("File \"{}\" does not exist."
                                         .format(filename))


def parse_arguments():
    parser = argparse.ArgumentParser(
        description='''Client to perform backup-related tasks with
                     object storage.''')
    subparsers = parser.add_subparsers()

    # parse arguments for the "upload" command
    parser_upload = subparsers.add_parser('upload')
    parser_upload.add_argument('files', type=is_valid_file, nargs='+')
    parser_upload.set_defaults(func=upload)

    # parse arguments for the "prune" command
    parser_prune = subparsers.add_parser('prune')
    parser_prune.add_argument('--days-to-keep', default=30)
    parser_prune.set_defaults(func=prune)

    # parse arguments for the "download" command
    parser_download = subparsers.add_parser('download')
    parser_download.add_argument('filename')
    parser_download.set_defaults(func=download)

    # parse arguments for the "get_day" command
    parser_get_day = subparsers.add_parser('get_day')
    parser_get_day.add_argument('day')
    parser_get_day.set_defaults(func=get_day)

    return parser.parse_args()


def upload(space, args):
    space.upload_files(args.files)


def prune(space, args):
    space.prune_backups(args.days_to_keep)


def download(space, args):
    space.download_file(args.filename)


def get_day(space, args):
    space.get_day(args.day)


def main():
    args = parse_arguments()
    space = Space(bucket=backup_bucket)
    space.create_bucket()
    args.func(space, args)


if __name__ == '__main__':
    main()

This script is responsible for managing the backups within your object storage account. It can upload files, remove files, prune old backups, and download files from object storage. Rather than interacting with the object storage API directly, our other scripts will use the functionality defined here to interact with remote resources. The commands it defines are:

  • upload: Uploads to object storage each of the files passed in as arguments. Multiple files may be specified.
  • download: Downloads a single file from remote object storage, which is passed in as an argument.
  • prune: Removes every file older than a certain age from the object storage location. By default, this removes files older than 30 days. You can adjust this by specifying the --days-to-keep option when calling prune.
  • get_day: Pass in the day to download as an argument using a standard date format (wrapped in quotations if the date contains whitespace), and the tool will attempt to parse it and download all of the files from that date.

The script attempts to read the object storage credentials and bucket name from environment variables, so we will need to make sure those are populated from the object_storage_config.sh file before calling the object_storage.py script.

When you are finished, save and close the file.

Next, if you have not already done so, make the script executable by typing:

$ sudo chmod +x /usr/local/bin/object_storage.py

Now that the object_storage.py script is available to interact with the API, we can create the Bash scripts that use it to back up and download files.
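Before moving on, it may help to see the calling pattern that the rest of this guide relies on. The snippet below is only an illustrative sketch (the file path is a placeholder and this step is not required): the configuration file is sourced to export the credentials, and then one of the subcommands is invoked as the backup user:

$ sudo -u backup bash -c 'source /backups/mysql/object_storage_config.sh && /usr/local/bin/object_storage.py upload /path/to/some_file'

The remote-backup-mysql.sh and download-day.sh scripts below follow this same pattern: they source object_storage_config.sh during their sanity checks and then call the upload, prune, or get_day subcommands.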

Create the remote-backup-mysql.sh Script

  • Next, we will create the remote-backup-mysql.sh script. This will perform many of the same functions as the original backup-mysql.sh local backup script, with a more basic organizational structure (since maintaining backups on the local filesystem is not necessary) and a few additional steps to upload to object storage.
  • If you did not download the script from the repository, create and open a file called remote-backup-mysql.sh in the /usr/local/bin directory:
$ sudo nano /usr/local/bin/remote-backup-mysql.sh

Inside, paste the following script:

/usr/local/bin/remote-backup-mysql.sh
#!/bin/bash

export LC_ALL=C

days_to_keep=30
backup_owner="backup"
parent_dir="/backups/mysql"
defaults_file="/etc/mysql/backup.cnf"
working_dir="${parent_dir}/working"
log_file="${working_dir}/backup-progress.log"
encryption_key_file="${parent_dir}/encryption_key"
storage_configuration_file="${parent_dir}/object_storage_config.sh"
now="$(date)"
now_string="$(date -d"${now}" +%m-%d-%Y_%H-%M-%S)"
processors="$(nproc --all)"

# Use this to echo to standard error
error () {
    printf "%s: %s\n" "$(basename "${BASH_SOURCE}")" "${1}" >&2
    exit 1
}

trap 'error "An unexpected error occurred."' ERR

sanity_check () {
    # Check user running the script
    if [ "$(id --user --name)" != "$backup_owner" ]; then
        error "Script can only be run as the \"$backup_owner\" user"
    fi

    # Check whether the encryption key file is available
    if [ ! -r "${encryption_key_file}" ]; then
        error "Cannot read encryption key at ${encryption_key_file}"
    fi

    # Check whether the object storage configuration file is available
    if [ ! -r "${storage_configuration_file}" ]; then
        error "Cannot read object storage configuration from ${storage_configuration_file}"
    fi

    # Check whether the object storage configuration is set in the file
    source "${storage_configuration_file}"
    if [ -z "${MYACCESSKEY}" ] || [ -z "${MYSECRETKEY}" ] || [ -z "${MYBUCKETNAME}" ]; then
        error "Object storage configuration are not set properly in ${storage_configuration_file}"
    fi
}

set_backup_type () {
    backup_type="full"

    # Grab date of the last backup if available
    if [ -r "${working_dir}/xtrabackup_info" ]; then
        last_backup_date="$(date -d"$(grep start_time "${working_dir}/xtrabackup_info" | cut -d' ' -f3)" +%s)"
    else
        last_backup_date=0
    fi

    # Grab today's date, in the same format
    todays_date="$(date -d "$(date -d "${now}" "+%D")" +%s)"

    # Compare the two dates
    (( $last_backup_date == $todays_date ))
    same_day="${?}"

    # The first backup each new day will be a full backup
    # If today's date is the same as the last backup, take an incremental backup instead
    if [ "$same_day" -eq "0" ]; then
        backup_type="incremental"
    fi
}

set_options () {
    # List the xtrabackup arguments
    xtrabackup_args=(
        "--defaults-file=${defaults_file}"
        "--backup"
        "--extra-lsndir=${working_dir}"
        "--compress"
        "--stream=xbstream"
        "--encrypt=AES256"
        "--encrypt-key-file=${encryption_key_file}"
        "--parallel=${processors}"
        "--compress-threads=${processors}"
        "--encrypt-threads=${processors}"
        "--slave-info"
    )

    set_backup_type

    # Add option to read LSN (log sequence number) if taking an incremental backup
    if [ "$backup_type" == "incremental" ]; then
        lsn=$(awk '/to_lsn/ {print $3;}' "${working_dir}/xtrabackup_checkpoints")
        xtrabackup_args+=( "--incremental-lsn=${lsn}" )
    fi
}

rotate_old () {
    # Remove previous backup artifacts
    find "${working_dir}" -name "*.xbstream" -type f -delete

    # Remove any backups from object storage older than 30 days
    /usr/local/bin/object_storage.py prune --days-to-keep "${days_to_keep}"
}

take_backup () {
    find "${working_dir}" -type f -name "*.incomplete" -delete
    xtrabackup "${xtrabackup_args[@]}" --target-dir="${working_dir}" > "${working_dir}/${backup_type}-${now_string}.xbstream.incomplete" 2> "${log_file}"

    mv "${working_dir}/${backup_type}-${now_string}.xbstream.incomplete" "${working_dir}/${backup_type}-${now_string}.xbstream"
}

upload_backup () {
    /usr/local/bin/object_storage.py upload "${working_dir}/${backup_type}-${now_string}.xbstream"
}

main () {
    mkdir -p "${working_dir}"
    sanity_check && set_options && rotate_old && take_backup && upload_backup

    # Check success and print message
    if tail -1 "${log_file}" | grep -q "completed OK"; then
        printf "Backup successful!\n"
        printf "Backup created at %s/%s-%s.xbstream\n" "${working_dir}" "${backup_type}" "${now_string}"
    else
        error "Backup failure! If available, check ${log_file} for more information"
    fi
}

main
  • This script handles the actual MySQL backup procedure, controls the backup schedule, and automatically removes older backups from remote storage.
  • You can choose how many days of backups you want to keep on hand by adjusting the days_to_keep variable.
  • The local backup-mysql.sh script we used in the last article maintained separate directories for each day's backups.
  • Since we are storing backups remotely, we will only store the latest backup locally in order to minimize the disk space devoted to backups.
  • Previous backups can be downloaded from object storage as needed for restoration.
  • As with the previous script, after checking that a few basic requirements are satisfied and configuring the type of backup that should be taken, we encrypt and compress each backup into a single file archive.
  • The previous backup file is removed from the local filesystem, and any remote backups older than the value defined in days_to_keep are removed.
  • Save and close the file when you are finished. Afterwards, make sure that the script is executable by typing:
$ sudo chmod +x /usr/local/bin/remote-backup-mysql.sh

This script can be used as a replacement for the backup-mysql.sh script on this system to switch from making local backups to making remote backups.

Create the download-day.sh Script

  • Finally, download or create the download-day.sh script within the /usr/local/bin directory. This script can be used to download all of the backups associated with a given day.
  • Create the script in your text editor if you did not download it earlier:
$ sudo nano /usr/local/bin/download-day.sh
  • Inside, paste the following contents:
/usr/local/bin/download-day.sh
#!/bin/bash

export LC_ALL=C

backup_owner="backup"
storage_configuration_file="/backups/mysql/object_storage_config.sh"
day_to_download="${1}"

# Use this to echo to standard error
error () {
    printf "%s: %s\n" "$(basename "${BASH_SOURCE}")" "${1}" >&2
    exit 1
}

trap 'error "An unexpected error occurred."' ERR

sanity_check () {
    # Check user running the script
    if [ "$(id --user --name)" != "$backup_owner" ]; then
        error "Script can only be run as the \"$backup_owner\" user"
    fi

    # Check whether the object storage configuration file is available
    if [ ! -r "${storage_configuration_file}" ]; then
        error "Cannot read object storage configuration from ${storage_configuration_file}"
    fi

    # Check whether the object storage configuration is set in the file
    source "${storage_configuration_file}"
    if [ -z "${MYACCESSKEY}" ] || [ -z "${MYSECRETKEY}" ] || [ -z "${MYBUCKETNAME}" ]; then
        error "Object storage configuration are not set properly in ${storage_configuration_file}"
    fi
}

main () {
    sanity_check
    /usr/local/bin/object_storage.py get_day "${day_to_download}"
}

main

This script can be called to download all of the archives from a specific day. Since each day begins with a full backup and accumulates incremental backups throughout the rest of the day, it will download all of the relevant files necessary to restore to any hourly snapshot.

The script takes a single argument, which is a date or day. It uses Python's dateutil.parser.parse function to read and interpret the date string provided as an argument.

The function is fairly flexible and can interpret dates in a variety of formats, including relative strings like "Friday", for example. To avoid ambiguity, however, it is best to use more well-defined dates. Be sure to wrap dates in quotations if the format you wish to use contains whitespace.
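If you would like to see how a particular string will be interpreted before relying on it, you can test the parser directly (an optional check; python-dateutil is available because it is installed as a dependency of boto3):

$ python3 -c 'from dateutil.parser import parse; print(parse("Oct. 17"))'

This prints the full datetime that the string resolves to, with missing components such as the year filled in from the current date.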

When you are ready to continue, save and close the file. Make the script executable by typing:

$ sudo chmod +x /usr/local/bin/download-day.sh

We now have the ability to download backup files from object storage for a specific date when we want to restore.

Testing the Remote MySQL Backup and Download Scripts

  • Now that we have our scripts in place, we should test them to make sure they perform as expected.

Perform a Full Backup

  • Begin by calling the remote-backup-mysql.sh script as the backup user. Since this is the first time we are running this command, it should create a full backup of our MySQL database.
$ sudo -u backup remote-backup-mysql.sh

Note:

If you receive an error indicating that the bucket name you chose is already in use, you will have to select a different name. Change the value of MYBUCKETNAME in the /backups/mysql/object_storage_config.sh file and delete the local backup directory (sudo rm -rf /backups/mysql/working) so that the script can attempt a full backup with the new bucket name. When you are ready, rerun the command above to try again.

If everything goes well, you will see output similar to the following:

Output:

Uploaded /backups/mysql/working/full-10-17-2017_19-09-30.xbstream to "your_bucket_name"
Backup successful!
Backup created at /backups/mysql/working/full-10-17-2017_19-09-30.xbstream

This indicates that a full backup has been created within the /backups/mysql/working directory. It has also been uploaded to remote object storage using the bucket defined in the object_storage_config.sh file.

If we look within the /backups/mysql/working directory, we can see files similar to those created by the backup-mysql.sh script from the last guide:

$ ls /backups/mysql/working

Output:

backup-progress.log  full-10-17-2017_19-09-30.xbstream  xtrabackup_checkpoints  xtrabackup_info

The backup-progress.log file contains the output from the xtrabackup command, while xtrabackup_checkpoints and xtrabackup_info contain information about the options used, the type and scope of the backup, and other metadata.
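For example, you can peek at the checkpoints file directly (an optional check). Among other fields, it records the backup type along with from_lsn and to_lsn values; the remote backup script reads to_lsn from this file to determine where the next incremental backup should begin:

$ sudo -u backup cat /backups/mysql/working/xtrabackup_checkpoints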

Perform an Incremental Backup

Let's make a small change to our equipment table in order to create additional data not found in our first backup. We can enter a new row in the table by typing:

$ mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("sandbox", 4, "brown");'

Enter your database's administrative password to add the new record.

Now, we can take an additional backup. When we call the script again, an incremental backup should be created as long as it is still the same day as the previous backup (according to the server's clock):

$ sudo -u backup remote-backup-mysql.sh

Output:

Uploaded /backups/mysql/working/incremental-10-17-2017_19-19-20.xbstream to "your_bucket_name"
Backup successful!
Backup created at /backups/mysql/working/incremental-10-17-2017_19-19-20.xbstream

The above output indicates that the backup was created within the same local directory, and was again uploaded to object storage. If we check the /backups/mysql/working directory, we will find that the new backup is present and that the previous backup has been removed:

$ ls /backups/mysql/working

Output:

backup-progress.log  incremental-10-17-2017_19-19-20.xbstream  xtrabackup_checkpoints  xtrabackup_info

Since our files are uploaded remotely, deleting the local copy helps reduce the amount of disk space used.

Download the Backups from a Specific Day

  • Since our backups are stored remotely, we will need to pull down the remote files whenever we need to restore our data. To do this, we can use the download-day.sh script.
  • Begin by creating and then moving into a directory that the backup user can safely write to:
$ sudo -u backup mkdir /tmp/backup_archives
$ cd /tmp/backup_archives

Next, call the download-day.sh script as the backup user. Pass in the day of the archives you wish to download. The date format is fairly flexible, but it is best to be unambiguous:

$ sudo -u backup download-day.sh "Oct. 17"

If there are archives that match the date you provided, they will be downloaded to the current directory:

Output:

Looking for objects from Tuesday, Oct. 17 2017
Downloading "full-10-17-2017_19-09-30.xbstream" from your_bucket_name
Downloading "incremental-10-17-2017_19-19-20.xbstream" from your_bucket_name

Verify that the files have been downloaded to the local filesystem:

$ ls

Output:

full-10-17-2017_19-09-30.xbstream  incremental-10-17-2017_19-19-20.xbstream

The compressed, encrypted archives are now back on the server again.

Extract and Prepare the Backups

Once the files are collected, we can process them the same way we processed the local backups.

First, pass the .xbstream files to the extract-mysql.sh script as the backup user:

$ sudo -u backup extract-mysql.sh *.xbstream

This will decrypt and decompress the archives into a directory called restore. Enter that directory and prepare the files with the prepare-mysql.sh script:

$ cd restore
$ sudo -u backup prepare-mysql.sh

Output:

Backup looks to be fully prepared.  Please check the "prepare-progress.log" file
to verify before continuing.

If everything looks correct, you can apply the restored files.

First, stop MySQL and move or remove the contents of the MySQL data directory:

sudo systemctl stop mysql
sudo mv /var/lib/mysql/ /tmp/

Then, recreate the data directory and copy the backup files:

sudo mkdir /var/lib/mysql
sudo xtrabackup --copy-back --target-dir=/tmp/backup_archives/restore/full-10-17-2017_19-09-30

Afterward the files are copied, adjust the permissions and restart the service:

sudo chown -R mysql:mysql /var/lib/mysql
sudo find /var/lib/mysql -type d -exec chmod 750 {} \;
sudo systemctl start mysql

The full backup within the /tmp/backup_archives/restore directory should now be prepared. We can follow the instructions in the output to restore the MySQL data on our system.

Restore the Backup Data to the MySQL Data Directory

Before we restore the backup data, we need to move the current data out of the way.

Start by shutting down MySQL to avoid corrupting the database or crashing the service when we replace its data files:

$ sudo systemctl stop mysql

Next, we can move the current data directory to the /tmp directory. This way, we can easily move it back if the restore has problems. Since we moved the files to /tmp/mysql in the last article, we can move the files to /tmp/mysql-remote this time:

$ sudo mv /var/lib/mysql/ /tmp/mysql-remote

Next, recreate an empty /var/lib/mysql directory:

$ sudo mkdir /var/lib/mysql

Now, we can type the xtrabackup restore command that the prepare-mysql.sh command provided to copy the backup files into the /var/lib/mysql directory:

$ sudo xtrabackup --copy-back --target-dir=/tmp/backup_archives/restore/full-10-17-2017_19-09-30

Once the process completes, modify the directory permissions and ownership to ensure that the MySQL process has access:

$ sudo chown -R mysql:mysql /var/lib/mysql
$ sudo find /var/lib/mysql -type d -exec chmod 750 {} \;

When this finishes, start MySQL again and check that our data has been properly restored:

$ sudo systemctl start mysql
$ mysql -u root -p -e 'SELECT * FROM playground.equipment;'

Output:

+----+---------+-------+--------+
| id | type    | quant | color  |
+----+---------+-------+--------+
|  1 | slide   |     2 | blue   |
|  2 | swing   |    10 | yellow |
|  3 | sandbox |     4 | brown  |
+----+---------+-------+--------+
  • The data is available, which indicates that it has been successfully restored.
  • After restoring your data, it is important to go back and delete the restore directory. Future incremental backups cannot be applied to the full backup once it has been prepared, so we should remove it. Furthermore, the backup directories should not be left unencrypted on disk for security reasons:
$ cd ~
$ sudo rm -rf /tmp/backup_archives/restore
  • The next time we need clean copies of the backup directories, we can extract them again from the backup archive files.

Creating a Cron Job to Run Backups Hourly

  • We created a cron job to automatically back up our database locally in the last guide. We will set up a new cron job to take remote backups and then disable the local backup job.
  • We can easily switch between local and remote backups as necessary by enabling or disabling the cron scripts.
  • To start, create a file called remote-backup-mysql in the /etc/cron.hourly directory:
$ sudo nano /etc/cron.hourly/remote-backup-mysql
  • Inside, we will call our remote-backup-mysql.sh script with the backup user through the systemd-cat command, which allows us to log the output to journald:
/etc/cron.hourly/remote-backup-mysql
#!/bin/bash 
sudo -u backup systemd-cat --identifier=remote-backup-mysql /usr/local/bin/remote-backup-mysql.sh
  • Save and close the file when you are finished.
  • We can enable our new cron job and disable the old one by manipulating the executable permission bit on both files:
$ sudo chmod -x /etc/cron.hourly/backup-mysql
$ sudo chmod +x /etc/cron.hourly/remote-backup-mysql
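If you want to double-check which jobs cron will now pick up, you can ask run-parts to list the hourly scripts without executing them (an optional check; on stock Ubuntu the hourly jobs are run through run-parts):

$ run-parts --test /etc/cron.hourly

The remote-backup-mysql script should appear in the list, while the old backup-mysql job should not.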
  • Test the new remote backup job by executing the script manually:
$ sudo /etc/cron.hourly/remote-backup-mysql
  • Once the prompt returns, we can check the log entries with journalctl:
$ sudo journalctl -t remote-backup-mysql

Output:
-- Logs begin at Tue 2017-10-17 14:28:01 UTC, end at Tue 2017-10-17 20:11:03 UTC. --
Oct 17 20:07:17 myserver remote-backup-mysql[31422]: Uploaded /backups/mysql/working/incremental-10-17-2017_20-07-13.xbstream to "your_bucket_name"
Oct 17 20:07:17 myserver remote-backup-mysql[31422]: Backup successful!
Oct 17 20:07:17 myserver remote-backup-mysql[31422]: Backup created at /backups/mysql/working/incremental-10-17-2017_20-07-13.xbstream
  • Check back in a few hours to make sure that additional backups are being taken on schedule.

Backing Up the Encryption Key

  • One final consideration you will have to handle is how to back up the encryption key (found at /backups/mysql/encryption_key).
  • The encryption key is required to restore any of the files backed up using this process, but storing the encryption key in the same location as the database files eliminates the protection provided by encryption.
  • Because of this, it is important to keep a copy of the encryption key in a separate location so that you can still use the backup archives if your database server fails or needs to be rebuilt.
  • While a complete backup solution for non-database files is outside the scope of this article, you can copy the key to your local computer for safekeeping. To do so, view the contents of the file by typing:
$ sudo less /backups/mysql/encryption_key
  • Open a text editor on your local computer and paste the value inside. If you ever need to restore backups onto a different server, copy the contents of the file to /backups/mysql/encryption_key on the new machine, set up the system outlined in this guide, and then restore using the provided scripts.
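One optional way to confirm that the copy you saved matches the original is to compare checksums. For example, you could run the following on the server and then use an equivalent tool on your local machine to verify that the hashes match:

$ sudo sha256sum /backups/mysql/encryption_key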

Conclusion

  • In this guide, we have covered how to take hourly backups of a MySQL database and upload them automatically to a remote object storage space.
  • The system will take a full backup each morning and then hourly incremental backups afterwards, providing the ability to restore to any hourly checkpoint. Every time the backup script runs, it checks for backups in object storage that are older than 30 days and removes them.
