How To Back Up MySQL Databases to Object Storage with Percona on Ubuntu 16.04
Databases store some of the most valuable data in your infrastructure. Because of this, it is important to have reliable backups to guard against data loss in the event of an accident or hardware failure.
The Percona XtraBackup backup tools provide a method of performing "hot" backups of MySQL data while the system is running. They do this by copying the data files at the filesystem level and then performing a crash recovery to achieve consistency within the dataset.
In a previous guide, we installed Percona's backup utilities and created a series of scripts to perform rotating local backups. This works well for backing up data to a different drive or network-mounted volume to handle problems with your database machine. However, in most cases, data should be backed up off-site, where it can be easily maintained and restored.

In this guide, we will extend our previous backup system to upload our compressed, encrypted backup files to an object storage service. We will be using DigitalOcean Spaces as an example, but the basic procedures are likely applicable to other S3-compatible object storage solutions as well.
Prerequisites
Before you begin this guide, you will need a MySQL database server configured with the local Percona backup solution outlined in our previous guide. The full set of guides you need to follow are:
- Initial Server Setup with Ubuntu 16.04: This guide will help you configure a user account with sudo privileges and set up a basic firewall.
One of the following MySQL installation guides:
- How To Install MySQL on Ubuntu 16.04: Uses the default package provided and maintained by the Ubuntu team.
- How To Install the Latest MySQL on Ubuntu 16.04: Uses updated packages provided by the MySQL project.
- How To Configure MySQL Backups with Percona XtraBackup on Ubuntu 16.04: This guide sets up a local MySQL backup solution using the Percona XtraBackup tools.
In addition to the above tutorials, you will also need to generate an access key and secret key to interact with your object storage account using the API. If you are using DigitalOcean Spaces, you can find out how to generate these credentials by following our How To Create a DigitalOcean Space and API Key guide. You will need to save both the API access key and the API secret value.
When you are finished with the previous guides, log back into your server as your sudo user to get started.
Installing the Dependencies
To generate our backups, we will use Python and Bash scripts and then upload the results to remote object storage for safekeeping. We will need the boto3 Python library to interact with the object storage API. We can install it with pip, Python's package manager.

Refresh the local package index and then install the Python 3 version of pip from Ubuntu's default repositories using apt-get by typing:
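```
sudo apt-get update
sudo apt-get install python3-pip
```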
Because Ubuntu maintains its own package life cycle, the version of pip in Ubuntu's repositories is not kept in sync with recent releases. However, we can update to a newer version of pip using the tool itself.
We can use sudo to install globally and include the -H flag to set the $HOME variable to a value pip expects:
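```
sudo -H pip3 install --upgrade pip
```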
Afterward, we can install boto3 along with the pytz module, which we will use to compare times accurately using the offset-aware format that the object storage API returns:
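```
sudo -H pip3 install boto3 pytz
```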
We should now have all of the Python modules we need to interact with the object storage API.
Create an Object Storage Configuration File
Our backup and download scripts will need to interact with the object storage API in order to upload files and to download older backup artifacts when we need to restore. They will use the access keys we generated in the prerequisite section. Rather than keeping these values in the scripts themselves, we will place them in a dedicated file that our scripts can read. This way, we can share our scripts without fear of exposing our credentials, and we can lock down the credentials more heavily than the scripts themselves.

In the last guide, we created the /backups/mysql directory to store our backups and our encryption key. We will place the configuration file here alongside our other assets. Create a file called object_storage_config.sh:
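```
sudo nano /backups/mysql/object_storage_config.sh
```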
Inside, paste the following contents, changing the access key and secret key to the values you obtained from your object storage account and the bucket name to a unique value. Set the endpoint URL and region name to the values provided by your object storage service (we will use the values associated with DigitalOcean's NYC3 region for Spaces here):
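```
#!/bin/bash

export MYACCESSKEY="my_access_key"             # replace with your API access key
export MYSECRETKEY="my_secret_key"             # replace with your API secret key
export MYBUCKETNAME="your_unique_bucket_name"  # replace with a globally unique name
export MYENDPOINTURL="https://nyc3.digitaloceanspaces.com"
export MYREGIONNAME="nyc3"
```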
These lines define two environment variables called MYACCESSKEY and MYSECRETKEY to hold our access and secret keys, respectively. The MYBUCKETNAME variable defines the object storage bucket we want to use to store our backup files. Bucket names must be universally unique, so you must choose a name that no other user has selected. Our script will check the bucket value to see if it is already claimed by another user and automatically create it if it is available. We export the variables we define so that any processes we call from within our scripts will have access to these values.
The MYENDPOINTURL and MYREGIONNAME variables contain the API endpoint and the specific region identifier offered by your object storage provider. For DigitalOcean Spaces, the endpoint will be https://region_name.digitaloceanspaces.com. You can find the available regions for Spaces in the DigitalOcean control panel (at the time of this writing, only "nyc3" is available).
Save and close the file when you are finished.
Anyone who can access our API keys has complete access to our object storage account, so it is important to restrict access to the configuration file to the backup user. We can give the backup user and group ownership of the file and then revoke all other access by typing:
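```
sudo chown backup:backup /backups/mysql/object_storage_config.sh
sudo chmod 600 /backups/mysql/object_storage_config.sh
```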
Our object_storage_config.sh file should now only be accessible to the backup user.
Creating the Remote Backup Scripts
Now that we have an object storage configuration file, we can go ahead and begin creating our scripts. We will be creating the following scripts:
- object_storage.py: This script is responsible for interacting with the object storage API to create buckets, upload files, download content, and prune older backups. Our other scripts will call this script anytime they need to interact with the remote object storage account.
- remote-backup-mysql.sh: This script backs up the MySQL databases by encrypting and compressing the files into a single artifact and then uploading it to the remote object store. It creates a full backup at the beginning of each day and then an incremental backup every hour afterward. It automatically prunes all files from the remote bucket that are older than 30 days.
- download-day.sh: This script allows us to download all of the backups associated with a given day. Because our backup script creates a full backup each morning and then incremental backups throughout the day, this script can download all of the assets necessary to restore to any hourly checkpoint.
Along with the new scripts above, we will leverage the extract-mysql.sh and prepare-mysql.sh scripts from the previous guide to help restore our files. You can view the scripts in the repository for this tutorial on GitHub at any time. If you do not want to copy and paste the contents below, you can download the new files directly from GitHub by typing:
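```
cd /tmp
# Substitute the raw file URLs from this tutorial's GitHub repository;
# the <repository> path below is a placeholder, not a real URL:
curl -LO https://raw.githubusercontent.com/<repository>/object_storage.py
curl -LO https://raw.githubusercontent.com/<repository>/remote-backup-mysql.sh
curl -LO https://raw.githubusercontent.com/<repository>/download-day.sh
```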
Be sure to inspect the scripts after downloading to make sure they were retrieved successfully and that you approve of the actions they will perform. If you are satisfied, mark the scripts as executable and then move them into the /usr/local/bin directory by typing:
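```
chmod +x /tmp/{remote-backup-mysql.sh,download-day.sh,object_storage.py}
sudo mv /tmp/{remote-backup-mysql.sh,download-day.sh,object_storage.py} /usr/local/bin
```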
Next, we will set up each of these scripts and discuss them in more detail.
Create the object_storage.py Script
If you did not download the object_storage.py script from GitHub, create a new file in the /usr/local/bin directory called object_storage.py:
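```
sudo nano /usr/local/bin/object_storage.py
```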
Copy and paste the script contents into the file:
This script is responsible for managing the backups within your object storage account. It can upload files, remove files, prune old backups, and download files from object storage. Rather than interacting with the object storage API directly, our other scripts use the functionality defined here to interact with remote resources. The commands it defines are:
- upload: Uploads to object storage each of the files passed in as arguments. Multiple files may be specified.
- download: Downloads a single file from remote object storage, which is passed in as an argument.
- prune: Removes every file older than a certain age from the object storage location. By default, this removes files older than 30 days. You can adjust this by specifying the --days-to-keep option when calling prune.
- get_day: Pass in the day to download as an argument using a standard date format (using quotations if the date has whitespace in it), and the tool will attempt to parse it and download all of the files from that date.
The script attempts to read the object storage credentials and bucket name from environment variables, so we need to make sure those are populated from the object_storage_config.sh file before calling the object_storage.py script.
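As an illustration, once the configuration values are sourced, the commands above can be invoked along these lines (the file name and date here are placeholders):

```
source /backups/mysql/object_storage_config.sh

object_storage.py upload /backups/mysql/working/example.xbstream
object_storage.py download example.xbstream
object_storage.py prune --days-to-keep 30
object_storage.py get_day "2017-10-31"
```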
When you are finished, save and close the file.
Next, if you haven't already done so, make the script executable by typing:
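```
sudo chmod +x /usr/local/bin/object_storage.py
```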
Now that the object_storage.py script is available to interact with the API, we can create the Bash scripts that use it to back up and download files.
Create the remote-backup-mysql.sh Script
Next, we will create the remote-backup-mysql.sh script. This will perform many of the same functions as the original backup-mysql.sh local backup script, with a more basic organizational structure (since maintaining backups on the local filesystem is not necessary) and a few additional steps to upload to object storage.

If you did not download the script from the repository, create and open a file called remote-backup-mysql.sh in the /usr/local/bin directory:
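```
sudo nano /usr/local/bin/remote-backup-mysql.sh
```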
Inside, paste the following script:
This script handles the actual MySQL backup procedure, controls the backup schedule, and automatically removes older backups from remote storage. You can choose how many days of backups you would like to keep on hand by adjusting the days_to_keep variable.

The local backup-mysql.sh script we used in the last article maintained separate directories for each day's backups. Since we are storing backups remotely, we will only store the latest backup locally in order to minimize the disk space devoted to backups. Previous backups can be downloaded from object storage as needed for restoration.

As with the previous script, after checking that a few basic requirements are satisfied and configuring the type of backup that should be taken, we encrypt and compress each backup into a single file archive. The previous backup file is removed from the local filesystem, and any remote backups that are older than the value defined in days_to_keep are removed.
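In outline, the tail end of the script behaves along these lines (the variable names here are illustrative rather than the script's actual identifiers):

```
# Illustrative outline only; see the full script for the real logic.
source /backups/mysql/object_storage_config.sh

object_storage.py upload "${encrypted_archive}"           # push the new archive to the bucket
object_storage.py prune --days-to-keep "${days_to_keep}"  # drop remote backups past the window
rm -f "${previous_archive}"                               # keep only the newest archive locally
```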
Save and close the file when you are finished. Afterward, make sure that the script is executable by typing:
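```
sudo chmod +x /usr/local/bin/remote-backup-mysql.sh
```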
This script can be used as a replacement for the backup-mysql.sh script on this system to switch from making local backups to remote backups.
Create the download-day.sh Script
Finally, download or create the download-day.sh script within the /usr/local/bin directory. This script can be used to download all of the backups associated with a particular day.

Create the script in your text editor if you did not download it earlier:
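```
sudo nano /usr/local/bin/download-day.sh
```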
Inside, paste the following contents:
This script can be called to download all of the archives from a specific day. Since each day starts with a full backup and accumulates incremental backups throughout the rest of the day, this will download all of the relevant files necessary to restore to any hourly snapshot.

The script takes a single argument, which is a date or day. It uses Python's dateutil.parser.parse function to read and interpret the date string provided as an argument. The function is fairly flexible and can interpret dates in a variety of formats, including relative strings like "Friday", for example. To avoid ambiguity, however, it is best to use more well-defined dates. Be sure to wrap dates in quotations if the format you wish to use contains whitespace.
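For example, any of the following forms would be accepted (the dates themselves are illustrative):

```
download-day.sh 2017-10-31
download-day.sh "October 31, 2017"
download-day.sh "Oct. 31"
```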
When you are ready to continue, save and close the file. Make the script executable by typing:
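```
sudo chmod +x /usr/local/bin/download-day.sh
```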
We now have the ability to download the backup files from object storage for a specific date when we wish to restore.
Testing the Remote MySQL Backup and Download Scripts
Now that we have our scripts in place, we should test to make sure they function as expected.
Perform a Full Backup
Begin by calling the remote-backup-mysql.sh script with the backup user. Since this is the first time we are running this command, it should create a full backup of our MySQL database.
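```
sudo -u backup remote-backup-mysql.sh
```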
Note:
If you receive an error indicating that the bucket name you selected is already in use, choose a different name. Change the value of MYBUCKETNAME in the /backups/mysql/object_storage_config.sh file and delete the local backup directory (sudo rm -rf /backups/mysql/working) so that the script can attempt a full backup with the new bucket name. When you are ready, rerun the command above to try again.
If everything goes well, you may see output like the following:
Output:
This indicates that a full backup has been created within the /backups/mysql/working directory. It has also been uploaded to remote object storage using the bucket defined in the object_storage_config.sh file.

If we look within the /backups/mysql/working directory, we can see files similar to those created by the backup-mysql.sh script from the last guide:
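```
ls /backups/mysql/working
```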
Output:
The backup-progress.log file contains the output from the xtrabackup command, while xtrabackup_checkpoints and xtrabackup_info contain information about the options used, the type and scope of the backup, and other metadata.
Perform an Incremental Backup
Let's make a small change to our equipment table in order to create additional data not found in our first backup. We can enter a new row in the table by typing:
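```
# Assumes the playground.equipment example table from the earlier guides:
mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("sandbox", 4, "brown");'
```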
Enter your database's administrative password to add the new record.
Now, we can take an additional backup. When we call the script again, an incremental backup should be created as long as it is still the same day as the previous backup (according to the server's clock):
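```
sudo -u backup remote-backup-mysql.sh
```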
Output:
The above output indicates that the backup was created within the same local directory and was again uploaded to object storage. If we check the /backups/mysql/working directory, we will find that the new backup is present and that the previous backup has been removed:
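```
ls /backups/mysql/working
```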
Output:
Since our files are uploaded remotely, deleting the local copy helps to reduce the amount of disk space used.
Download the Backups from a Specific Day
Since our backups are stored remotely, we will need to pull down the remote files whenever we need to restore our data. To do this, we can use the download-day.sh script.
Begin by creating and then moving into a directory that the backup user can safely write to:
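```
sudo -u backup mkdir -p /tmp/backup_archives
cd /tmp/backup_archives
```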
Next, call the download-day.sh script as the backup user. Pass in the day of the archives you wish to download. The date format is fairly flexible, but it is best to be unambiguous:
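```
sudo -u backup download-day.sh "Oct. 31"   # substitute the day you need
```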
If there are archives that match the date you provided, they will be downloaded to the current directory:
Output:
Verify that the files have been downloaded to the local filesystem:
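```
ls
```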
Output:
The compressed, encrypted archives are now back on the server again.
Extract and Prepare the Backups
Once the files are collected, we can process them the same way we processed the local backups.
First, pass the .xbstream files to the extract-mysql.sh script using the backup user:
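```
sudo -u backup extract-mysql.sh *.xbstream
```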
This will decrypt and decompress the archives into a directory called restore. Enter that directory and prepare the files with the prepare-mysql.sh script:
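```
cd restore
sudo -u backup prepare-mysql.sh
```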
Output:
The full backup in the /tmp/backup_archives/restore directory should now be prepared. We can follow the instructions in the output to restore the MySQL data on our system.
Restore the Backup Data to the MySQL Data Directory
Before we restore the backup data, we need to move the current data out of the way. Start by shutting down MySQL to avoid corrupting the database or crashing the service when we replace its data files:
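```
sudo systemctl stop mysql
```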
Next, we can move the current data directory to the /tmp directory. This way, we can easily move it back if the restore has problems. Since we moved the files to /tmp/mysql in the last article, we can move the files to /tmp/mysql-remote this time:
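```
sudo mv /var/lib/mysql/ /tmp/mysql-remote
```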
Next, recreate an empty /var/lib/mysql directory:
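```
sudo mkdir /var/lib/mysql
```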
Now, we can type the xtrabackup restore command that the prepare-mysql.sh command provided to copy the backup files into the /var/lib/mysql directory:
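```
# Use the exact command printed by prepare-mysql.sh; it will resemble the
# following, with the timestamped directory name from your own output:
sudo xtrabackup --copy-back --target-dir=/tmp/backup_archives/restore/full-<timestamp>
```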
Once the process completes, modify the directory permissions and ownership to ensure that the MySQL process has access:
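```
sudo chown -R mysql:mysql /var/lib/mysql
sudo find /var/lib/mysql -type d -exec chmod 750 {} \;
```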
When this finishes, start MySQL again and check that our data has been properly restored:
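```
sudo systemctl start mysql
# Assumes the playground.equipment sample data from the earlier guides:
mysql -u root -p -e 'SELECT * FROM playground.equipment;'
```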
Output:
The data is available, which indicates that it has been successfully restored.

After restoring your data, it is important to go back and delete the restore directory. Future incremental backups cannot be applied to the full backup once it has been prepared, so we should remove it. Furthermore, the backup directories should not be left unencrypted on disk for security reasons:
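```
cd ~
sudo rm -rf /tmp/backup_archives/restore
```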
The next time we need clean copies of the backup directories, we can extract them again from the backup archive files.
Creating a Cron Job to Run Backups Hourly
We created a cron job to automatically back up our database locally in the last guide. We will set up a new cron job to take remote backups and then disable the local backup job. We can easily switch between local and remote backups as necessary by enabling or disabling the cron scripts.
To start, create a file called remote-backup-mysql in the /etc/cron.hourly directory:
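```
sudo nano /etc/cron.hourly/remote-backup-mysql
```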
Inside, we will call our remote-backup-mysql.sh script with the backup user through the systemd-cat command, which allows us to log the output to journald:
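```
#!/bin/bash
sudo -u backup systemd-cat --identifier=remote-backup-mysql /usr/local/bin/remote-backup-mysql.sh
```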
Save and close the file when you are finished.
We can enable our new cron job and disable the old one by manipulating the executable permission bit on each file:
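```
# The local job from the previous guide is assumed to live at
# /etc/cron.hourly/backup-mysql:
sudo chmod -x /etc/cron.hourly/backup-mysql
sudo chmod +x /etc/cron.hourly/remote-backup-mysql
```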
Test the new remote backup job by executing the script manually:
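```
sudo /etc/cron.hourly/remote-backup-mysql
```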
Once the prompt returns, we can check the log entries with journalctl:
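```
sudo journalctl -t remote-backup-mysql
```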
Check back in a few hours to make sure that additional backups are being taken on schedule.
Backing Up the Encryption Key
One final consideration you will have to handle is how to back up the encryption key (found at /backups/mysql/encryption_key). The encryption key is required to restore any of the files backed up using this process, but storing the encryption key in the same location as the database files eliminates the protection provided by encryption. Because of this, it is important to keep a copy of the encryption key in a separate location so that you can still use the backup archives if your database server fails or needs to be rebuilt.
While a complete backup solution for non-database files is outside the scope of this article, you can copy the key to your local computer for safekeeping. To do so, view the contents of the file by typing:
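```
sudo less /backups/mysql/encryption_key
```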
Open a text file on your local computer and paste the value inside. If you ever need to restore backups onto a different server, copy the contents of the file to /backups/mysql/encryption_key on the new machine, set up the system outlined in this guide, and then restore using the provided scripts.
Conclusion
In this guide, we covered how to take hourly backups of a MySQL database and upload them automatically to a remote object storage space. The system will take a full backup every morning and then hourly incremental backups afterward, providing the ability to restore to any hourly checkpoint. Every time the backup script runs, it checks for backups in object storage that are older than 30 days and removes them.