
Build a Robust S3-Powered Backup Solution for WordPress Hosted on OpenLiteSpeed Using Bash Scripts

Feature image: WordPress icon on a vault-shaped cube, with OpenLiteSpeed text in the corner.

I believe there’s no need to explain why we need proper automated backup solutions for our web servers! When it comes to WordPress, there are plenty of options. Many popular solutions involve installing plugins on WordPress that rely on WordPress cron jobs (WP-Cron) to run automatically. These plugins bundle the website files and dump the database tables using PHP capabilities.

While these plugin-based solutions work well enough in most scenarios, I’ve noticed several important limitations:

  • Backing up an application through the application itself is inherently risky! If something goes wrong with WordPress, the plugin, or the web server running them, the entire backup process fails.
  • The process heavily relies on PHP and the web server’s limits, timeouts, and configurations—and a lot can go wrong.
  • It consumes significant resources, especially with larger websites containing millions of database records and thousands of files. This can keep your web server busy with backup jobs and prevent it from properly responding to actual user requests.
  • These solutions have built-in limitations—for example, you cannot back up the web server or underlying OS configurations.
  • To restore these backups, you need to first install and set up a basic WordPress instance, install and configure the backup plugin, and then run the restore process—hoping everything goes smoothly.
  • These solutions are limited to a single website, and if you want to properly back up multiple websites on the same server, it gets more challenging.

I know there are plenty of out-of-the-box solutions for server-level backups, but why install and configure another potentially bloated application with dozens of features you’ll never use? Instead, let’s create a simple but flexible backup solution tailored specifically for OpenLiteSpeed servers hosting WordPress sites, powered by bash scripts and easily deployable with Ansible (or manually)!

Although we’re focusing on OpenLiteSpeed and MariaDB here, with some small tweaks, this solution can be adapted for other web servers like LiteSpeed Enterprise or nginx, and other database systems like MySQL.

In this backup solution, I am going to use AWS S3 to store the backups, which offers secure, scalable, and relatively cheap remote storage.

Backup on the server meme. A: Server has crashed! B: Where is the backup? A: On the server!

In this post I assume you are running a Debian-based Linux distribution on the server (like Debian, Ubuntu, etc.). If you are using another Linux distribution, you will have to adjust the commands, scripts (and the playbook) accordingly.

You can find the complete solution including all the scripts and the optional Ansible playbook in this GitHub repository.

Understanding the Backup Requirements

Before diving into our backup solution, let’s understand what we need to back up on a WordPress installation running on an OpenLiteSpeed web server.

What Needs to Be Backed Up?

A complete WordPress backup solution should cover the following critical components:

  1. Website Files: WordPress core, themes and plugins, uploads (images, videos, documents), and in short, everything in the WordPress installation directory.
  2. Database Content: All the tables and records in the database being used by WordPress, which includes WP core tables and any possible custom tables created by plugins and themes. Database users and their associated privileges should also be included in the backups with clear mappings showing which user belongs to which database.
  3. Web Server Configuration: OpenLiteSpeed configuration files, virtual host settings, and SSL certificates need to be backed up to ensure your server configuration can be restored exactly as it was.
  4. System Configuration: A list of installed packages, cron jobs, and other critical system configurations that make your server environment unique.

A good backup strategy should capture all these components in an automated, scheduled manner, store the output securely on a remote storage solution, and provide a straightforward restoration path.

The bash scripts we’re going to build are designed to achieve all these goals by:

  • Automatically detecting websites, databases, and their associated users
  • Backing up to a temporary local directory before uploading to a remote storage (AWS S3) and cleaning up the local directory after a successful backup
  • Implementing proper error handling and logging throughout the process
  • Including a dedicated restoration script that makes recovery simple and reliable

Backup Script Deep Dive

Let’s create a directory for this project so we can organize all the related files. I’m calling the directory ols-wp-backup with a sub-directory named scripts where we’ll place our backup and restore scripts.

Create a script file under ols-wp-backup/scripts/backup.sh:

ols-wp-backup/scripts/backup.sh
#!/bin/bash

CONFIG_FILE="/etc/backup-config.conf"
LSWS_CONF="/usr/local/lsws/conf/httpd_config.conf"

BACKUP_DIR="/tmp/ols_backups/backups"
LOG_DIR="/var/log/ols-backups/backups"
DATE=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="${LOG_DIR}/backup_${DATE}.log"

SITES=()
DATABASES=()

# Ensure log directory exists
mkdir -p "$LOG_DIR"

# Logging function
log() {
    local LEVEL="$1"
    local MESSAGE="$2"
    echo "$(date +"%Y-%m-%d %H:%M:%S") [$LEVEL] $MESSAGE" | tee -a "$LOG_FILE"
}

log "INFO" "Backup process started."

# Load static configuration from /etc/backup-config.conf
if [[ -f "$CONFIG_FILE" ]]; then
    source "$CONFIG_FILE"
else
    log "ERROR" "Configuration file $CONFIG_FILE not found!"
    exit 1
fi

# Validate required variables
REQUIRED_VARS=("S3_BUCKET" "S3_BACKUP_DIR" "AWS_REGION_BACKUP")

for VAR in "${REQUIRED_VARS[@]}"; do
    if [[ -z "${!VAR}" ]]; then
        log "ERROR" "Required variable '$VAR' is not set in $CONFIG_FILE"
        exit 1
    fi
done

First, we use the shebang (#!/bin/bash) to tell the system to use the bash interpreter to run our script. Then we define several important variables:

  • Paths to configuration files and directories
  • Temporary local storage for our backups
  • Log directory and current timestamp
  • Empty arrays to store our discovered websites and databases

The backup configuration file will contain three essential variables:

S3_BUCKET="s3_backup_bucket"
S3_BACKUP_DIR="s3/backup/dir"
AWS_REGION_BACKUP="aws_region_backup"

I’ll explain how to properly set up this configuration file in the deployment section later.

Our script includes a handy logging function that writes output to both the console and a timestamped log file, making troubleshooting much easier. The script starts by ensuring our log directory exists, then checks that our required config file is present and contains all the necessary variables. If anything’s missing, it logs an error and exits gracefully rather than continuing with an incomplete configuration.

Detecting Websites and Databases

Now let’s add the code to detect available websites and databases on our server:

ols-wp-backup/scripts/backup.sh
# Extract virtual host names from OpenLiteSpeed's configuration
if [[ -f "$LSWS_CONF" ]]; then
    # Extract virtualhost names
    while IFS= read -r line; do
        if [[ "$line" =~ virtualhost\ (.+)\ \{ ]]; then
            SITES+=("${BASH_REMATCH[1]}")
        fi
    done <"$LSWS_CONF"

    # Extract template members
    while IFS= read -r line; do
        if [[ "$line" =~ member\ (.+) ]]; then
            SITES+=("${BASH_REMATCH[1]}")
        fi
    done <"$LSWS_CONF"
else
    log "WARNING" "LiteSpeed configuration file not found at $LSWS_CONF"
fi

# Remove duplicates
SITES=($(echo "${SITES[@]}" | tr ' ' '\n' | sort -u | tr '\n' ' '))

# Required utilities
REQUIRED_TOOLS=("zip" "mariadb-dump" "aws" "dpkg" "crontab")

log "INFO" "Checking required utilities."
for tool in "${REQUIRED_TOOLS[@]}"; do
    if ! command -v "$tool" &>/dev/null; then
        log "ERROR" "Missing required tool: $tool. Install it and rerun the script."
        exit 1
    fi
done

# Fetch database list excluding system databases
DB_LIST=$(mariadb -N -B -e "SHOW DATABASES;" 2>>"$LOG_FILE" | grep -Ev "^(information_schema|mysql|performance_schema|sys)$")

if [[ $? -eq 0 ]]; then
    DATABASES=($DB_LIST)
else
    log "WARNING" "Failed to fetch databases from MariaDB. Ensure MariaDB is running and accessible."
fi

log "INFO" "SITES: ${SITES[*]}"
log "INFO" "DATABASES: ${DATABASES[*]}"
log "INFO" "S3_BUCKET=$S3_BUCKET"
log "INFO" "S3_BACKUP_DIR=$S3_BACKUP_DIR"
log "INFO" "AWS_REGION_BACKUP=$AWS_REGION_BACKUP"

# Create backup directory
mkdir -p "$BACKUP_DIR"
mkdir -p "$BACKUP_DIR/sites"
mkdir -p "$BACKUP_DIR/db"
mkdir -p "$BACKUP_DIR/conf"
mkdir -p "$BACKUP_DIR/sys"

This clever bit of code automatically discovers all websites configured on our OpenLiteSpeed server by parsing the main configuration file. We use two different regex patterns to capture both standard virtual hosts and template members, ensuring we don’t miss any sites.

Important Note: This script assumes that virtual host names in OpenLiteSpeed match the website domain names, and that website files for example.com are located in /var/www/example.com. If your server uses a different directory structure, you’ll need to adjust the script accordingly.
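For reference, here is roughly the kind of virtualhost entry the first regex looks for in httpd_config.conf (a simplified, hypothetical example; the directives on your server will differ):

virtualhost example.com {
  vhRoot                  /var/www/example.com
  configFile              conf/vhosts/example.com/vhconf.conf
  allowSymbolLink         1
  enableScript            1
}

The script only extracts the name right after the virtualhost keyword (example.com here), which is why that name needs to match both the site domain and its directory under /var/www.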

After collecting the site names, we perform a neat trick to remove any duplicates using a combination of text transformations: converting the array to lines, sorting uniquely, and then converting back to space-separated values.

Next, the script verifies that all required tools are available. If any essential utility is missing, it fails gracefully with a clear error message rather than proceeding with a potentially incomplete backup.

For databases, we query MariaDB directly to get a list of all databases, excluding system databases like information_schema and mysql. This approach ensures we only back up actual content databases.

The script outputs the discovered sites and databases to the log for verification, then creates the necessary directory structure for our backup files. Each category of backup data gets its own subdirectory for better organization.

Note that this script assumes it’s running as the Unix root user and that MariaDB’s root user is configured for Unix Socket authentication. That’s why we don’t need to provide username and password parameters to the mariadb and mariadb-dump commands. If your setup differs, you’ll need to adjust these commands accordingly.

Backing Up Website Files and Databases

Now let’s add the code that actually performs the backup operations:

ols-wp-backup/scripts/backup.sh
# Backup website directories
log "INFO" "Backing up website directories."
for SITE in "${SITES[@]}"; do
    ZIP_NAME="${BACKUP_DIR}/sites/${SITE}_${DATE}.zip"
    if [ -d "/var/www/$SITE" ]; then
        (cd /var/www/$SITE && zip -rq "$ZIP_NAME" .) >>"$LOG_FILE" 2>&1
        log "INFO" "Website $SITE backed up successfully: $ZIP_NAME"
    else
        log "WARNING" "Directory /var/www/$SITE does not exist, skipping."
    fi
done

# Backup OpenLiteSpeed configs
log "INFO" "Backing up OpenLiteSpeed configurations."
OLS_ZIP="${BACKUP_DIR}/conf/ols_configs_${DATE}.zip"
(cd /usr/local/lsws && zip -rq "$OLS_ZIP" "conf" "admin/conf") >>"$LOG_FILE" 2>&1
log "INFO" "OpenLiteSpeed configs backed up successfully: $OLS_ZIP"

# Backup MariaDB databases
log "INFO" "Backing up MariaDB databases."
for DB in "${DATABASES[@]}"; do
    DB_ZIP="${BACKUP_DIR}/db/${DB}_${DATE}.sql.gz"
    mariadb-dump --single-transaction --quick --lock-tables=false "$DB" | gzip >"$DB_ZIP" 2>>"$LOG_FILE"
    # Check mariadb-dump's exit status (the first command in the pipeline), not gzip's
    if [ "${PIPESTATUS[0]}" -eq 0 ]; then
        log "INFO" "Database $DB backed up successfully: $DB_ZIP"
    else
        log "ERROR" "Failed to backup database: $DB"
    fi
done

This section handles the core backup operations. For each website we discovered earlier, we create a zip archive of all files in its directory. We use a trick here by first changing into the website’s directory and then zipping everything, which creates a cleaner archive without the full path structure.

The script is smart enough to check if each website’s directory actually exists before attempting to back it up, skipping any that don’t match our expected directory structure and logging a warning.

We also include OpenLiteSpeed’s configuration files in our backup. This is particularly valuable, as these files contain all your virtual host settings, SSL configurations, rewrite rules, and other server-specific settings that would be time-consuming to recreate manually. We target both the main configuration directory and the admin panel configuration.

For the databases, we loop through each one and use mariadb-dump with some carefully chosen flags:

  • --single-transaction ensures consistency without locking tables
  • --quick helps with large tables by processing rows one at a time
  • --lock-tables=false prevents the dump from blocking other connections

Each database dump is piped directly to gzip for compression, saving disk space and making transfers faster. We also check the return status of each database backup operation to catch and log any failures.

This approach gives us a comprehensive backup of all website files, server configurations, and database content – the three pillars of any WordPress installation. Having each component separately archived makes selective restoration possible later on.
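If you want to sanity-check these dump flags on a single database before relying on the full run, something like this works (a hypothetical example assuming Unix socket authentication as root and a database named example_db):

mariadb-dump --single-transaction --quick --lock-tables=false example_db | gzip > /tmp/example_db_test.sql.gz

# Verify the archive is a valid gzip file and peek at the first statements
gzip -t /tmp/example_db_test.sql.gz && zcat /tmp/example_db_test.sql.gz | head -n 20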

Backing Up MariaDB Users and Permissions

A complete backup solution needs to include database users and their permissions. WordPress sites typically have dedicated database users with specific access rights, so we need to preserve this security structure for proper restoration:

ols-wp-backup/scripts/backup.sh
# Backup MariaDB users and privileges
log "INFO" "Backing up MariaDB users and privileges."
USERS_SQL="${BACKUP_DIR}/db/mariadb_users_${DATE}.sql"
USERS_ZIP="${BACKUP_DIR}/db/mariadb_users_${DATE}.sql.gz"
USERS_MAP="${BACKUP_DIR}/db/users_db_map_${DATE}.json"

# Create header for users SQL
echo "-- MariaDB user backup created on $(date)" >"$USERS_SQL"

# First, generate a proper JSON file for user-database mapping
echo "{" >"$USERS_MAP"
echo "  \"users\": {" >>"$USERS_MAP"

# Get all users and the databases they have access to
FIRST_USER=true
mariadb -N -B -e "
    SELECT CONCAT(
        '    \"', u.user, '@', u.host, '\": {',
        '\"plugin\": \"', u.plugin, '\",',
        '\"dbs\": [',
        IF(MAX(db.db) IS NULL, '', 
            GROUP_CONCAT(DISTINCT 
                CONCAT('\"', 
                    CASE WHEN db.db = '*' THEN 'ALL_DBS' ELSE db.db END,
                '\"')
                SEPARATOR ', '
            )
        ),
        ']',
        '}'
    )
    FROM mysql.user u
    LEFT JOIN mysql.db db ON u.user = db.user AND u.host = db.host
    WHERE u.user NOT IN ('mariadb.sys', 'root', 'mysql')
    GROUP BY u.user, u.host, u.plugin;" | while read -r line; do
    if [ "$FIRST_USER" = true ]; then
        echo "$line" >>"$USERS_MAP"
        FIRST_USER=false
    else
        echo "," >>"$USERS_MAP"
        echo "$line" >>"$USERS_MAP"
    fi
done

# Close the JSON structure
echo "  }" >>"$USERS_MAP"
echo "}" >>"$USERS_MAP"

# Add user creation statements with proper authentication methods
mariadb -N -B -e "
    SELECT CONCAT(
        'CREATE USER IF NOT EXISTS ''', user, '''@''', host, ''' ',
        CASE 
            WHEN plugin = 'mysql_native_password' THEN 
                CONCAT('IDENTIFIED WITH mysql_native_password USING ''', authentication_string, '''')
            WHEN plugin = 'unix_socket' THEN 
                'IDENTIFIED WITH unix_socket'
            WHEN plugin = 'ed25519' THEN 
                CONCAT('IDENTIFIED WITH ed25519 USING ''', authentication_string, '''') 
            WHEN plugin = 'pam' THEN 
                'IDENTIFIED WITH pam'
            ELSE
                CONCAT('IDENTIFIED WITH ', plugin, 
                      IF(authentication_string != '', 
                         CONCAT(' USING ''', authentication_string, ''''), 
                         ''))
        END, 
        ';'
    ) 
    FROM mysql.user 
    WHERE user NOT IN ('mariadb.sys', 'root', 'mysql');" >>"$USERS_SQL" 2>>"$LOG_FILE"

# Extract users and their privileges
echo "# Grants for each user" >>"$USERS_SQL"
mariadb -N -B -e "
    SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') 
    FROM mysql.user 
    WHERE user NOT IN ('mariadb.sys', 'root', 'mysql');" |
    mariadb 2>/dev/null |
    grep -v "Grants for" |
    sed 's/$/;/' >>"$USERS_SQL" 2>>"$LOG_FILE"

# Compress the users SQL file
gzip "$USERS_SQL" 2>>"$LOG_FILE"

if [ -f "$USERS_ZIP" ]; then
    log "INFO" "MariaDB users backed up successfully: $USERS_ZIP"
    log "INFO" "User-to-database mapping created: $USERS_MAP"
else
    log "ERROR" "Failed to backup MariaDB users"
fi

This sophisticated piece of code handles one of the most overlooked aspects of database backups: user permissions. Most backup solutions focus exclusively on data, but without properly restoring users and their privileges, your WordPress sites might not function correctly.

The script creates two important files:

  1. A compressed SQL file with all the commands needed to recreate users with their exact authentication methods and privileges
  2. A JSON mapping file that shows which users have access to which databases

The SQL generation is particularly smart: for completeness, it uses a CASE statement to handle different authentication plugins correctly (mysql_native_password, unix_socket, ed25519, etc.). This ensures that when users are restored, they’ll use the same authentication method they had before.

We exclude system users like ‘root’ and ‘mysql’ since these are typically managed by the operating system or database installation process. The script captures only the custom users that are relevant for application access.

The JSON mapping file will be invaluable during restoration, as it allows us to selectively restore only the users relevant to a particular database, maintaining proper security boundaries between different WordPress installations.
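To make that structure concrete, here is roughly what the generated mapping file might look like for a server hosting two WordPress sites (the user and database names are made up):

{
  "users": {
    "wp_blog@localhost": {"plugin": "mysql_native_password", "dbs": ["blog_db"]},
    "wp_shop@localhost": {"plugin": "mysql_native_password", "dbs": ["shop_db"]}
  }
}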

This approach to user backup demonstrates the advantage of a server-level backup solution over plugin-based approaches, which typically can’t access or manipulate database users at all.

Finalizing the Backup Process

Let’s wrap up our script by backing up system-level configurations and transferring everything to AWS S3:

ols-wp-backup/scripts/backup.sh
# Backup System Package List
log "INFO" "Backing up installed system packages."
dpkg --get-selections >"$BACKUP_DIR/sys/packages_${DATE}.list" 2>>"$LOG_FILE"
log "INFO" "System packages backed up successfully: $BACKUP_DIR/sys/packages_${DATE}.list"

# Backup Crontab
log "INFO" "Backing up crontab."
crontab -l >"$BACKUP_DIR/sys/crontab_${DATE}.bak" 2>>"$LOG_FILE"
log "INFO" "Crontab backed up successfully: $BACKUP_DIR/sys/crontab_${DATE}.bak"

# Upload backups to S3
log "INFO" "Uploading backups to S3."
aws s3 cp --region "$AWS_REGION_BACKUP" --recursive "$BACKUP_DIR" "s3://${S3_BUCKET}/${S3_BACKUP_DIR}/${DATE}/" >>"$LOG_FILE" 2>&1
if [ $? -eq 0 ]; then
    log "INFO" "Backup uploaded successfully to S3: s3://${S3_BUCKET}/${S3_BACKUP_DIR}/${DATE}/"
else
    log "ERROR" "Failed to upload backup to S3."
fi

# Cleanup local backups
log "INFO" "Cleaning up local backup files."
rm -rf "$BACKUP_DIR"

log "INFO" "Backup process completed successfully."

As an added bonus, our script also captures important system-level configurations that can be extremely helpful during a full server restore. By including a list of installed packages with dpkg --get-selections, we can quickly reproduce the exact same software environment on a new server.

The crontab backup is particularly useful since scheduled tasks are often overlooked in manual backups but can be critical for server operations. This simple addition helps ensure that recurring jobs like cache clearing, certificate renewals, or any custom scheduled tasks are preserved.

The final step uses the AWS CLI to recursively upload all our backup files to the specified S3 bucket. The backup is organized in a dated directory structure, making it easy to locate specific backups later. Once the upload completes successfully, we clean up the local temporary files to avoid filling up the server’s disk space.
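Once a run has completed, you can list the dated backup sets directly from the bucket; the bucket name, prefix, and region below are just the example values from the configuration file shown earlier:

aws s3 ls "s3://your-backup-bucket-name/ols-backups/" --region us-west-2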

This comprehensive approach gives us a complete backup solution that captures not just the WordPress sites and databases, but also the server environment they run in. By running this script regularly (ideally via a cron job, which we’ll set up later), you’ll have everything you need to quickly recover from various disaster scenarios – from a single website corruption to a complete server failure.

And that’s it! With this script, you have a robust, server-level backup solution that addresses all the limitations of plugin-based WordPress backups we discussed earlier. You can find the final version of the script on the GitHub repository.

Setting Up the Environment and Deploying the Solution

Now that we understand what our backup solution needs to accomplish, let’s get everything set up!

First, we need to create an S3 bucket to use as the remote storage for the backup script. It is recommended to store the backups in a different region than the one hosting your main infrastructure. For example, to create an S3 bucket named bugfloyd-websites-backup in the eu-central-1 region, run the AWS CLI command below:

aws s3 mb s3://bugfloyd-websites-backup --region eu-central-1

If you do not have the AWS CLI installed and configured locally, just go to the AWS web console and create a new bucket in the region of your choice through the web UI.

After creating the bucket, if the web server instance is not hosted on AWS, you also need to create an IAM credential (e.g. a user) and grant it access to write to this bucket. Note that if the web server is running on AWS infrastructure such as an EC2 instance, it is STRONGLY RECOMMENDED to create an IAM policy that allows S3 uploads to the backup bucket, attach it to an IAM role, and apply that role directly to the instance.
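As a rough sketch, a scoped-down policy for the bucket created above might look like the following (the policy name is just an example, and the bucket name must match yours). You would then attach it to the IAM user, or to the instance role if the server runs on AWS:

aws iam create-policy \
  --policy-name ols-backup-s3-write \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {"Effect": "Allow", "Action": "s3:ListBucket",
       "Resource": "arn:aws:s3:::bugfloyd-websites-backup"},
      {"Effect": "Allow", "Action": ["s3:PutObject", "s3:GetObject"],
       "Resource": "arn:aws:s3:::bugfloyd-websites-backup/*"}
    ]
  }'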

To set up the remote server (your web server hosting WordPress), you have two options: roll up your sleeves and SSH into your server to set things up manually, or let Ansible do the heavy lifting for you. Either way, I’ll walk you through each step.

Option 1: Manual Setup (The DIY Approach)

If you enjoy the hands-on experience (or just don’t have Ansible set up yet), here’s how to get everything running manually.

First, SSH into the remote server:

ssh username@IP

Install Required Dependencies: Make sure you have the necessary tools installed.

sudo apt update
sudo apt install zip unzip nano

I also assume mariadb-client is already installed as a part of your MariaDB setup. If not, go ahead and install it as well.

Install AWS CLI: We need AWS CLI in our backup and restore scripts to communicate with AWS S3. The latest AWS CLI version isn’t available in most distribution repositories, so let’s install it directly following the official AWS docs:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" \
  -o "/tmp/awscliv2.zip"
unzip /tmp/awscliv2.zip -d /tmp
sudo /tmp/aws/install
rm -rf /tmp/aws /tmp/awscliv2.zip

Configure AWS CLI: Set up the AWS CLI with credentials that have permission to write to your S3 bucket:

aws configure

Keep in mind that this step is only required for servers outside AWS infrastructure. If the instance is on AWS, follow the note above and set the policies at the role level, applying the role to the instance instead of manually configuring the CLI with credentials.
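A quick way to confirm the credentials can actually write to the bucket is to upload and delete a small test object (replace the bucket, prefix, and region with your own values):

# Upload a test object from stdin, then remove it
echo "connectivity test" | aws s3 cp - s3://your-backup-bucket-name/ols-backups/connectivity-test.txt --region us-west-2
aws s3 rm s3://your-backup-bucket-name/ols-backups/connectivity-test.txt --region us-west-2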

Create the Configuration File: Create the configuration file at /etc/backup-config.conf:

sudo mkdir -p /etc
sudo nano /etc/backup-config.conf

Add the following content, replacing the placeholders with your actual values:

S3_BUCKET="your-backup-bucket-name"
S3_BACKUP_DIR="ols-backups"
AWS_REGION_BACKUP="us-west-2"

Specify the target region for backups in the AWS_REGION_BACKUP value.

Create the Script Directory

sudo mkdir -p /opt/ols-backup

Copy the Backup and Restore Scripts: Copy the backup.sh and restore.sh scripts to the script directory and make them executable:

sudo nano /opt/ols-backup/backup.sh
sudo nano /opt/ols-backup/restore.sh
sudo chmod +x /opt/ols-backup/backup.sh
sudo chmod +x /opt/ols-backup/restore.sh

Set Up Log Directory

sudo mkdir -p /var/log/ols-backups

Set Up the Cron Job: Add a cron job to run the backup script daily at 3 AM (or choose your preferred time):

sudo crontab -e

And add this line:

0 3 * * * /opt/ols-backup/backup.sh

Voilà! Your backup system is now configured to run automatically every night!
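Before relying on the schedule, it’s worth confirming the cron entry is in place and doing a one-off manual run to check the log output:

# Confirm the root crontab contains the backup entry
sudo crontab -l | grep backup.sh

# Trigger a manual run and inspect the newest log file
sudo /opt/ols-backup/backup.sh
ls -lt /var/log/ols-backups/backups/ | head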

Option 2: Ansible Automation (The “I’ve Got Better Things to Do” Approach)

Meme: Hey, Got Ansible?

If you’re more of an automation enthusiast (and honestly, who isn’t these days?), our Ansible playbook will make this process a breeze. If you are new to Ansible, have a look at how it works and read the official docs before proceeding.

Prepare Your Ansible Environment

Make sure you have Ansible installed on your control machine and can connect to your target server.

Set Up the Playbook Structure

Create a sub-directory in the project’s main directory to store the configuration template:

mkdir templates

Create the file templates/backup-config.conf.j2 with the following content:

S3_BUCKET="{{ s3_backup_bucket }}"
S3_BACKUP_DIR="{{ s3_backup_dir }}"
AWS_REGION_BACKUP="{{ aws_region_backup }}"

Create a new file in the project root named playbook.yml. This is the Ansible playbook that we are going to use.

playbook.yml
- name: Setup backup script
  hosts: all
  become: true
  vars:
    s3_backup_bucket: "{{ s3_backup_bucket }}"
    s3_backup_dir: "{{ s3_backup_dir }}"
    aws_region_backup: "{{ aws_region_backup }}"

  tasks:
    - name: Install required packages
      apt:
        name:
          - zip
          - unzip
        state: present

    - name: Install AWS CLI if not present
      shell: |
        if ! command -v aws &> /dev/null; then
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "/tmp/awscliv2.zip"
          unzip /tmp/awscliv2.zip -d /tmp
          /tmp/aws/install
          rm -rf /tmp/aws /tmp/awscliv2.zip
        fi
      args:
        executable: /bin/bash

    - name: Create or replace the backup configuration file
      ansible.builtin.template:
        src: templates/backup-config.conf.j2
        dest: /etc/backup-config.conf
        owner: root
        group: root
        mode: "0644"

    - name: Create backup script directory
      file:
        path: /opt/ols-backup
        state: directory
        mode: "0755"

    - name: Deploy backup script template
      template:
        src: scripts/backup.sh
        dest: /opt/ols-backup/backup.sh
        mode: "0755"

    - name: Deploy restore script template
      template:
        src: scripts/restore.sh
        dest: /opt/ols-backup/restore.sh
        mode: "0755"

    - name: Ensure log directory exists
      file:
        path: /var/log/ols-backups
        state: directory
        mode: "0755"

    - name: Add cron job for daily backup at 3 AM
      cron:
        name: "Daily backup"
        minute: "0"
        hour: "3"
        job: "/opt/ols-backup/backup.sh"
        user: root

This playbook automatically applies all the steps above in the manual setup.

Important Note: The playbook does not configure AWS CLI credentials, as it assumes the web server is running on AWS infrastructure with proper IAM roles applied to the instance. If you’re using this outside of AWS, make sure to either update the playbook or, after running it, SSH into the server and configure the AWS CLI as described in the manual deployment section above.

Create an inventory file inventory.ini:

inventory.ini
[webservers]
your-server-ip ansible_user=your-ssh-user

Create a variables file vars.yml:

vars.yml
s3_backup_bucket: "your-backup-bucket-name"
s3_backup_dir: "ols-backups"
aws_region_backup: "us-west-2"

Run the playbook

cd ols-wp-backup
ansible-playbook -i inventory.ini playbook.yml --extra-vars "@vars.yml"

And that’s it! Ansible will install all required packages, create the necessary directories, deploy your scripts, and set up the cron job – all without you having to manually SSH into the server.
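If you want a quick sanity check after the run, an ad-hoc command against the same inventory can confirm the cron entry landed on the server (this assumes the webservers group from inventory.ini):

ansible webservers -i inventory.ini --become -m command -a "crontab -l"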

Restoring Your Backups: The Other Half of the Story

Now that we’ve built a solid backup solution, let’s explore its equally important counterpart – the restoration process! A backup is only as good as your ability to restore it when needed, so I’ve created a companion script to handle this critical task.

Meme: Morpheus from Matrix movie: What if I told you restoring is the whole point of creating backups

Let’s dive into our restore.sh script and see how it brings your data back to life when disaster strikes.

Script Setup and Configuration

The restore script begins with similar setup to our backup script:

ols-wp-backup/scripts/restore.sh
#!/bin/bash

CONFIG_FILE="/etc/backup-config.conf"

RESTORE_DIR="/tmp/ols_backups/restore"
DATE=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_DIR="/var/log/ols-backups/restore"
LOG_FILE="${LOG_DIR}/restore_${DATE}.log"

# Ensure log directory exists
mkdir -p "$LOG_DIR"

# Logging function
log() {
    local LEVEL="$1"
    local MESSAGE="$2"
    echo "$(date +"%Y-%m-%d %H:%M:%S") [$LEVEL] $MESSAGE" | tee -a "$LOG_FILE"
}

# Load configuration
if [[ -f "$CONFIG_FILE" ]]; then
    source "$CONFIG_FILE"
else
    log "ERROR" "Configuration file $CONFIG_FILE not found!"
    exit 1
fi

# Validate required variables
REQUIRED_VARS=("S3_BUCKET" "S3_BACKUP_DIR" "AWS_REGION_BACKUP")
for VAR in "${REQUIRED_VARS[@]}"; do
    if [[ -z "${!VAR}" ]]; then
        log "ERROR" "Required variable '$VAR' is not set in $CONFIG_FILE"
        exit 1
    fi
done

We establish the same configuration file and set up a separate restore directory and log file. The script uses the same logging mechanism for consistency and easy troubleshooting.

Parameter Validation

Unlike our backup script that works automatically, the restore script needs specific information about what to restore:

ols-wp-backup/scripts/restore.sh
# Ensure necessary parameters are provided
if [[ $# -ne 3 ]]; then
    log "ERROR" "Usage: $0 <website_domain> <database_name> <backup_date_time>"
    exit 1
fi

WEBSITE_DOMAIN="$1"
DATABASE_NAME="$2"
BACKUP_DATE_TIME="$3"

# Validate backup_date_time format
if ! [[ "$BACKUP_DATE_TIME" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{2}-[0-9]{2}-[0-9]{2}$ ]]; then
    log "ERROR" "Invalid backup date-time format. Expected format: YYYY-MM-DD_HH-MM-SS"
    exit 1
fi

log "INFO" "Restore process started."

# Create restore directory
mkdir -p "$RESTORE_DIR"
mkdir -p "/var/www/$WEBSITE_DOMAIN"

This script is designed for targeted restoration – you can specify exactly which website and database you want to restore from a particular backup timestamp. This selective approach is much more practical than an all-or-nothing restoration.

Restoring Website Files

The script is smart enough to check if a site’s directory already contains files:

ols-wp-backup/scripts/restore.sh
# Check if website directory is empty
if [ -n "$(ls -A /var/www/$WEBSITE_DOMAIN 2>/dev/null)" ]; then
    log "INFO" "Website directory /var/www/$WEBSITE_DOMAIN is not empty. Skipping site restore."
else
    WEBSITE_BACKUP_FILE="sites/${WEBSITE_DOMAIN}_${BACKUP_DATE_TIME}.zip"
    WEBSITE_BACKUP_PATH="s3://${S3_BUCKET}/${S3_BACKUP_DIR}/${BACKUP_DATE_TIME}/${WEBSITE_BACKUP_FILE}"

    aws s3 cp "$WEBSITE_BACKUP_PATH" "$RESTORE_DIR/" --region "$AWS_REGION_BACKUP" >>"$LOG_FILE" 2>&1

    if [[ $? -ne 0 ]]; then
        log "ERROR" "Failed to download website backup from S3."
        exit 1
    fi

    if [[ ! -f "$RESTORE_DIR/${WEBSITE_DOMAIN}_${BACKUP_DATE_TIME}.zip" ]]; then
        log "ERROR" "Website backup file not found after download."
        exit 1
    fi

    log "INFO" "Website backup downloaded. Extracting files."
    unzip -q "$RESTORE_DIR/${WEBSITE_DOMAIN}_${BACKUP_DATE_TIME}.zip" -d "/var/www/$WEBSITE_DOMAIN" >>"$LOG_FILE" 2>&1

    if [[ $? -ne 0 ]]; then
        log "ERROR" "Failed to extract website files."
        exit 1
    fi

    # Ensure correct ownership and permissions
    chown -R www-data:www-data "/var/www/$WEBSITE_DOMAIN"
    chmod -R 755 "/var/www/$WEBSITE_DOMAIN"

    log "INFO" "Website files restored to /var/www/$WEBSITE_DOMAIN with correct ownership and permissions."
fi

This prevents accidental overwrites of existing sites, a safety feature I found essential after a few too many “oops” moments in my own disaster recovery adventures.

The script first checks if the website’s directory already contains files. If it does, the restoration is skipped to avoid overwriting existing content. This is an important safety measure that prevents accidental data loss. If the directory is empty, the script downloads the website backup from the S3 bucket and verifies the download was successful. It then extracts the files to the proper website directory. After extraction, the script sets the correct ownership (www-data user and group) and permissions (755) for the web files, ensuring they’re properly accessible by the web server but still protected from unauthorized modifications.

Note: The script assumes your web server runs as the www-data user, which is common for Apache and Nginx, and OpenLiteSpeed on Debian-based systems. If your OLS web server uses a different user (like nobody or lsadm), be sure to adjust the chown command to use the appropriate username.
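If you’re not sure which user your OpenLiteSpeed instance runs as, you can check the running processes or the main configuration; the exact process name and config path may differ on your setup:

# Show the users owning the LiteSpeed worker processes
ps -eo user,comm | grep -i litespeed | sort -u

# Or check the user/group directives in the main OLS config
grep -E '^\s*(user|group)\s' /usr/local/lsws/conf/httpd_config.conf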

Restoring Database Content

Similarly, the database restoration is non-destructive:

ols-wp-backup/scripts/restore.sh
# Check if database exists and has tables
DB_EXISTS=$(mariadb -N -B -e "SHOW DATABASES LIKE '$DATABASE_NAME';")

if [[ -z "$DB_EXISTS" ]]; then
    log "INFO" "Database $DATABASE_NAME does not exist. Creating it..."
    mariadb -e "CREATE DATABASE $DATABASE_NAME;" >>"$LOG_FILE" 2>&1
    if [[ $? -ne 0 ]]; then
        log "ERROR" "Failed to create database $DATABASE_NAME."
        exit 1
    fi
fi

TABLE_COUNT=$(mariadb -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '$DATABASE_NAME';" | awk '{print $1}')

if [[ "$TABLE_COUNT" -gt 0 ]]; then
    log "INFO" "Database $DATABASE_NAME exists and has tables. Skipping database restore."
else
    DB_BACKUP_FILE="db/${DATABASE_NAME}_${BACKUP_DATE_TIME}.sql.gz"
    DB_BACKUP_PATH="s3://${S3_BUCKET}/${S3_BACKUP_DIR}/${BACKUP_DATE_TIME}/${DB_BACKUP_FILE}"

    aws s3 cp "$DB_BACKUP_PATH" "$RESTORE_DIR/" --region "$AWS_REGION_BACKUP" >>"$LOG_FILE" 2>&1

    if [[ $? -ne 0 ]]; then
        log "ERROR" "Failed to download database backup from S3."
        exit 1
    fi

    if [[ ! -f "$RESTORE_DIR/${DATABASE_NAME}_${BACKUP_DATE_TIME}.sql.gz" ]]; then
        log "ERROR" "Database backup file not found after download."
        exit 1
    fi

    log "INFO" "Database backup downloaded. Restoring database."
    gunzip -c "$RESTORE_DIR/${DATABASE_NAME}_${BACKUP_DATE_TIME}.sql.gz" | mariadb "$DATABASE_NAME" >>"$LOG_FILE" 2>&1

    if [[ $? -ne 0 ]]; then
        log "ERROR" "Failed to restore database."
        exit 1
    fi

    log "INFO" "Database $DATABASE_NAME restored successfully."
fi

The script takes a similarly cautious approach with database restoration. It first checks if the specified database exists, and creates it if needed. Then it counts the tables in the database to determine if it’s already populated.

If the database already contains tables, the script skips the restoration to prevent overwriting existing data. This is particularly useful when migrating just the website files while keeping the existing database, or when you’re setting up a testing environment with a fresh database.

If the database is empty, the script downloads the backup from S3, verifies the download was successful, and then pipes the uncompressed SQL dump directly into MariaDB. This direct piping approach is more efficient than creating intermediate files, especially for large databases.
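After a restore, a quick sanity check can confirm the data actually landed. This sketch assumes Unix socket authentication as root, a database named example_db, and the default wp_ table prefix:

# Count the restored tables
mariadb -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='example_db';"

# Spot-check a core WordPress option
mariadb example_db -e "SELECT option_value FROM wp_options WHERE option_name='siteurl';"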

Restoring Database Users

One of the most unique features of our script is the database user restoration:

ols-wp-backup/scripts/restore.sh
# Restore database users and permissions
log "INFO" "Checking for user mapping file to restore database users and permissions."

# Download the user-database mapping file
USERS_MAP_FILE="db/users_db_map_${BACKUP_DATE_TIME}.json"
USERS_MAP_PATH="s3://${S3_BUCKET}/${S3_BACKUP_DIR}/${BACKUP_DATE_TIME}/${USERS_MAP_FILE}"

aws s3 cp "$USERS_MAP_PATH" "$RESTORE_DIR/" --region "$AWS_REGION_BACKUP" >>"$LOG_FILE" 2>&1
if [[ $? -ne 0 ]]; then
    log "WARNING" "Failed to download user-database mapping file from S3. Skipping user restoration."
else
    if [[ -f "$RESTORE_DIR/users_db_map_${BACKUP_DATE_TIME}.json" ]]; then
        log "INFO" "User mapping file found. Processing database users."

        # Download the SQL statements for user creation
        USERS_SQL_FILE="db/mariadb_users_${BACKUP_DATE_TIME}.sql.gz"
        USERS_SQL_PATH="s3://${S3_BUCKET}/${S3_BACKUP_DIR}/${BACKUP_DATE_TIME}/${USERS_SQL_FILE}"

        aws s3 cp "$USERS_SQL_PATH" "$RESTORE_DIR/" --region "$AWS_REGION_BACKUP" >>"$LOG_FILE" 2>&1
        if [[ $? -ne 0 ]]; then
            log "WARNING" "Failed to download user SQL file from S3. Skipping user restoration."
        else
            # Extract users SQL file
            gunzip -f "$RESTORE_DIR/mariadb_users_${BACKUP_DATE_TIME}.sql.gz" >>"$LOG_FILE" 2>&1

            if [[ ! -f "$RESTORE_DIR/mariadb_users_${BACKUP_DATE_TIME}.sql" ]]; then
                log "ERROR" "Failed to extract user SQL file."
            else
                log "INFO" "Processing users for database: $DATABASE_NAME"

                # Use jq to extract users with access to the specified database
                # First check if jq is installed
                if ! command -v jq &>/dev/null; then
                    log "WARNING" "jq is not installed. Cannot parse JSON mapping file. Installing jq..."
                    apt-get update && apt-get install -y jq >>"$LOG_FILE" 2>&1
                    if [[ $? -ne 0 ]]; then
                        log "ERROR" "Failed to install jq. Skipping user restoration."
                        rm -f "$RESTORE_DIR/mariadb_users_${BACKUP_DATE_TIME}.sql"
                    fi
                fi

                if command -v jq &>/dev/null; then
                    # Process each user with access to the database
                    jq -r --arg db "$DATABASE_NAME" '.users | to_entries[] | select(.value.dbs | map(. == $db or . == "ALL_DBS") | any) | .key' "$RESTORE_DIR/users_db_map_${BACKUP_DATE_TIME}.json" | while read -r user_host; do
                        if [[ -n "$user_host" ]]; then
                            # Extract username and hostname
                            user=$(echo "$user_host" | cut -d'@' -f1)
                            host=$(echo "$user_host" | cut -d'@' -f2)

                            # Check if user already exists
                            USER_EXISTS=$(mariadb -N -B -e "SELECT EXISTS(SELECT 1 FROM mysql.user WHERE user='$user' AND host='$host')")

                            if [[ "$USER_EXISTS" -eq 0 ]]; then
                                log "INFO" "Creating user: $user@$host"

                                # Extract and execute the CREATE USER statement for this user
                                grep -A 1 -m 1 "CREATE USER.*'$user'@'$host'" "$RESTORE_DIR/mariadb_users_${BACKUP_DATE_TIME}.sql" | mariadb >>"$LOG_FILE" 2>&1

                                if [[ $? -ne 0 ]]; then
                                    log "ERROR" "Failed to create user $user@$host."
                                else
                                    log "INFO" "User $user@$host created successfully."
                                fi
                            else
                                log "INFO" "User $user@$host already exists. Skipping creation."
                            fi

                            # Grant permissions only for the database being restored
                            log "INFO" "Granting permissions to $user@$host for database $DATABASE_NAME"
                            mariadb -e "GRANT ALL PRIVILEGES ON \`$DATABASE_NAME\`.* TO '$user'@'$host';" >>"$LOG_FILE" 2>&1

                            if [[ $? -ne 0 ]]; then
                                log "ERROR" "Failed to grant permissions to $user@$host for database $DATABASE_NAME."
                            else
                                log "INFO" "Permissions granted to $user@$host for database $DATABASE_NAME."
                            fi
                        fi
                    done

                    # Apply the grant changes
                    mariadb -e "FLUSH PRIVILEGES;" >>"$LOG_FILE" 2>&1

                    log "INFO" "Database user restoration completed."
                fi

                # Clean up SQL file
                rm -f "$RESTORE_DIR/mariadb_users_${BACKUP_DATE_TIME}.sql"
            fi
        fi
    else
        log "WARNING" "User mapping file not found after download. Skipping user restoration."
    fi
fi

# Cleanup
log "INFO" "Cleaning up temporary files."
rm -rf "$RESTORE_DIR"

log "INFO" "Restore process completed successfully."

The most sophisticated part of our restore script is the database user restoration process. Here’s how it works:

First, the script downloads the JSON mapping file that contains the relationship between users and databases. If this file can’t be found, the script gracefully skips user restoration while continuing with the rest of the process.

When the mapping file is available, the script also downloads the SQL statements for user creation that we generated during backup. These contain the exact authentication mechanisms and privileges for each user.

The script then uses jq (a command-line JSON processor) to intelligently filter out only the users who should have access to the database being restored. This targeted approach ensures proper security boundaries between different WordPress installations by only restoring relevant users.
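If you want to see what this filter actually selects, you can run it against a small test file. With a mapping shaped like the example shown earlier, only the users with access to the target database (or to all databases) are printed:

# Hypothetical test of the jq filter used in the restore script
cat > /tmp/users_db_map_test.json <<'EOF'
{
  "users": {
    "wp_example@localhost": {"plugin": "mysql_native_password", "dbs": ["example_db"]},
    "wp_other@localhost": {"plugin": "mysql_native_password", "dbs": ["other_db"]},
    "admin@localhost": {"plugin": "unix_socket", "dbs": ["ALL_DBS"]}
  }
}
EOF

jq -r --arg db "example_db" \
  '.users | to_entries[] | select(.value.dbs | map(. == $db or . == "ALL_DBS") | any) | .key' \
  /tmp/users_db_map_test.json
# Expected output:
# wp_example@localhost
# admin@localhost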

For each applicable user:

  1. The script checks if the user already exists in MariaDB
  2. If not, it creates the user with the exact same authentication method as before
  3. It grants appropriate permissions specifically for the restored database
  4. Finally, it applies the changes with FLUSH PRIVILEGES

The script is also resilient enough to handle missing tools – if jq isn’t installed, it attempts to install it automatically.

This careful restoration of users and permissions ensures that your WordPress applications can immediately connect to their databases with the correct credentials, avoiding the common post-restore headache of reconfiguring database access.

After all operations are complete, the script cleans up any temporary files and directories to keep your server tidy.

You can find the final version of the restore script on the GitHub repository.

What the Restore Script Doesn’t Do

It’s important to note that while our restore script handles website files and databases comprehensively, it deliberately doesn’t restore certain server-level components that we backed up:

  • OpenLiteSpeed configurations: The script doesn’t automatically restore OLS configuration files. Restoring web server configs often requires careful handling and may need server restarts, so this is best done manually when needed.
  • System packages: While we back up the list of installed packages, the script doesn’t automatically reinstall them. This prevents potential version conflicts or unintended system changes.
  • Cron jobs: The script backs up but doesn’t restore scheduled tasks automatically, as these might conflict with existing jobs or require special timing considerations.

If you need to restore these components, you can access them from your S3 bucket manually and apply them with appropriate caution.

For a full server rebuild scenario, you might want to examine the backed-up system packages list and OpenLiteSpeed configurations after restoring your websites and databases.

Running the Restore Script

The restore script is designed to be run as needed, rather than scheduled like the backup script. To use it, simply provide the required parameters.

Remember that you first have to create the configuration file with the necessary variables at /etc/backup-config.conf.

sudo ./restore.sh example.com example_db 2023-07-15_03-00-00

This will restore the website files for example.com and the database example_db from the backup taken at the specified time. The script works with just these three parameters, handling all the complexity of finding and restoring the correct files, database content, and associated users.

You might want to run this script when:

  • Moving a website to a new server
  • Recovering from accidental data deletion
  • Reverting to a previous version after a failed update
  • Restoring a site after server hardware issues

The restore script completes our backup solution, providing both the ability to create comprehensive backups and the means to use them when needed. With these two scripts, you’ve got a robust, server-level backup and restoration system that puts you in complete control of your WordPress hosting environment.

Next Steps & Future Enhancements

  • Implement checksum verification both during backup and before restoration to ensure data integrity
  • Create tiered backup schedules:
    • Hourly backups for databases to minimize potential data loss
    • Daily backups for website files
    • Weekly/monthly backups for long-term archiving
  • Add efficient file synchronization options using tools like rsync instead of bundling all files in every backup process
  • Implement monitoring and notification systems:
    • Email or Slack alerts when backups fail
    • Status reports for successful backups
  • Add a dry-run feature to test backup configurations without writing files
  • Implement automated backup retention policies to manage storage costs
  • Improve security by running backup processes with dedicated low-privilege UNIX and database users

Conclusion: Your Server, Your Backups, Your Peace of Mind

And there you have it! A complete, robust, and flexible backup solution for your OpenLiteSpeed WordPress servers that doesn’t rely on WordPress plugins or additional bloated applications. By using bash scripts and AWS S3 storage, we’ve created a solution that:

  • Runs independently of WordPress – no more worrying about your backup plugin breaking when your site is already down!
  • Captures everything that matters – from website files to databases, users, and even server configurations
  • Uses minimal resources – no PHP execution limits or web server timeouts to worry about
  • Works across multiple websites on the same server without configuration headaches
  • Stores backups securely off-site in AWS S3, with options for multi-region redundancy
  • Makes restoration straightforward with a companion script that handles all the complexity for you

The best part? It’s all yours to customize further based on your specific needs. Whether you deploy it manually or automate it with Ansible, you’re now in complete control of your backup strategy.

Remember that no backup solution is perfect without regular testing. I recommend periodically restoring a site to a test environment to verify your backups are working as expected. There’s nothing worse than discovering your backups are incomplete when you actually need them!

I’d love to hear how this solution works for you or any improvements you’ve made to it. Drop a comment below or reach out on social media to share your experiences!

If you add a new feature to the script, feel free to send a PR on GitHub. Your contributions can help make this backup solution even better for the entire community.

Stay safe, back up often, and may your servers never crash (but if they do, now you’re prepared)!
