bugfloyd

Taming tech, one bug at a time.

Tag: Ansible

  • The Ultimate AWS AMI for WordPress Servers: Automating OpenLiteSpeed & MariaDB Deployment with Packer and Ansible

    Deploying a web application on AWS Elastic Compute Cloud (EC2) usually means setting up a server with the right software, configurations, and optimizations. But let’s be honest—doing this manually over and over again gets old fast. Instead of setting up everything from scratch each time, Amazon Machine Images (AMIs) let us create a pre-configured system that we can reuse whenever we need to spin up a new EC2 instance.

    And instead of doing this manually every time we need to change something in the AMI, we’ll use Packer to automate the entire process. But before we get into that, let’s break down what an AMI actually is and why you might want to build your own.

    What is an AMI?

    An Amazon Machine Image (AMI) is essentially a blueprint for an EC2 instance. It includes everything needed to launch a server: the operating system, installed software, configurations, and optional application code. Instead of setting up a fresh instance manually each time, you can use an AMI to deploy identical instances quickly and reliably.

    Amazon Machine Image (AMI) logo

    Think of it like making a pizza at home. You could start from scratch every time—making the dough, preparing the sauce, chopping toppings—but why bother when you can just freeze a fully prepared pizza and bake it whenever you’re hungry? An AMI is that prepped pizza, ready to go. But unlike frozen food, an AMI doesn’t mean sacrificing quality or control. You still get a fresh, optimized setup—just without the hassle of doing it all over again.

    For those who want to skip ahead, the complete solution with all scripts and configurations is available on my GitHub repository: aws-ols-mariadb-ami.

    Why Build a Custom AMI?

    When launching an EC2 instance, you have three main options:

    1. Start from scratch – Use a bare Linux AMI, launch an instance, SSH in, and manually install and configure everything. This gives you full control but is tedious and time-consuming.
    2. Use a prebuilt AMI from AWS Marketplace – These come with software pre-installed, saving setup time, but many require a paid subscription and often include extra software you don’t need.
    3. Build your own custom AMI – The best of both worlds! You get a pre-configured, lightweight setup, tailored to your needs, with only the software you actually use—no unnecessary bloat or extra costs.

    In my previous posts, I explained how to use the OpenLiteSpeed AMI from the AWS Marketplace. It’s a convenient option, but it costs $5 per month. The funny thing? Everything inside that AMI is open-source and free. So instead of paying for it, we can build our own version. This saves money, allows full customization, and lets us configure it once and reuse it as many times as we need. Plus, we can skip unnecessary packages, keeping our AMI lightweight.

    In this post, I’ll walk through how to build a custom AMI based on Ubuntu 24.04, with these software and packages preinstalled and configured:

    • OpenLiteSpeed (with LiteSpeed Cache)
    • PHP (LSPHP)
    • MariaDB (Server & Client)
    • phpMyAdmin
    • WordPress

    And instead of doing this manually every time we need to change something in the AMI, we’ll use Packer to automate the entire process.

    What is Packer?

    HashiCorp Packer logo

    Packer, created by HashiCorp, is a tool that automates machine image creation. Instead of manually setting up an instance and then taking a snapshot, Packer does everything for you. You define a template (in JSON or HCL), and Packer spins up a temporary server, installs and configures everything, then saves the final golden image as an AMI.

    Why does this matter? Manually setting up AMIs is repetitive, time-consuming, and error-prone. With Packer, you define everything once, and it builds AMIs on autopilot. Need an update? Just tweak the template and rebuild—no clicking around AWS wondering what you forgot.

    In short, Packer:

    • Automates AMI creation instead of doing it manually.
    • Ensures consistency across deployments.
    • Saves time and avoids configuration headaches.

    Before we get into Ansible, let’s break down how Packer actually works. Packer doesn’t just magically create an AMI—it follows a process with two key components: builders and provisioners.

    • Builders are responsible for creating the machine image. In our case, the Amazon EC2 builder launches a temporary EC2 instance, installs everything needed, and then snapshots it into an AMI.
    • Provisioners handle installing software and configuring the system. Once the instance is up, provisioners take over to set up services, install dependencies, and customize the system before the image is finalized.
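To make this concrete, here is a minimal sketch of what such a template could look like in HCL. The source name, AMI name, region, instance type, and playbook path are illustrative placeholders, not the exact values from the repository:

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
    ansible = {
      source  = "github.com/hashicorp/ansible"
      version = ">= 1.0.0"
    }
  }
}

# Builder: launches a temporary EC2 instance from an Ubuntu 24.04 base AMI
source "amazon-ebs" "wordpress" {
  ami_name      = "ols-wordpress-{{timestamp}}"
  instance_type = "t3.small"
  region        = "eu-central-1"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }
  ssh_username = "ubuntu"
}

# Provisioner: Ansible configures the instance before it is snapshotted
build {
  sources = ["source.amazon-ebs.wordpress"]

  provisioner "ansible" {
    playbook_file = "./playbook.yml"
  }
}
```

With a template like this saved as, say, `aws-ami.pkr.hcl`, running `packer init .` followed by `packer build aws-ami.pkr.hcl` launches the temporary instance, runs the playbook, and registers the resulting AMI.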

    While Packer supports different provisioners, including raw shell scripts, a more structured approach makes things easier to maintain—which brings us to Ansible. If Packer is the robot that builds your AMI, then Ansible is the smart assistant making sure everything inside is set up exactly the way you want.

    What is Ansible?

    Ansible is an automation tool for configuring servers, installing software, and managing infrastructure—without manually SSH-ing into each machine. Instead of writing long, brittle shell scripts, you define what needs to be done in simple YAML playbooks, and Ansible handles the rest.

    What makes Ansible special?

    Ansible logo
    • Agentless – Unlike other automation tools, Ansible doesn’t require any extra software to be installed on the target machine. It just connects over SSH and runs commands.
    • Declarative – Instead of telling the system how to install and configure things step by step, you describe what the final state should be, and Ansible figures out the rest.
    • Idempotent – Running an Ansible playbook multiple times won’t cause issues. If something is already installed or configured, Ansible just skips it, preventing unnecessary work.

    Why Use Ansible with Packer Instead of a Shell Script?

    Meme: Why write automation if you could automate automation

    Packer has a Shell provisioner, so why not just use bash scripts? Well, while shell scripts work, they have drawbacks:

    • Harder to maintain – Bash scripts can quickly turn into a tangled mess of commands and conditionals. Ansible uses structured, declarative YAML playbooks that are easier to read and modify.
    • Idempotency – As mentioned above, Ansible won’t re-run steps that are already in the desired state, while shell scripts happily reinstall everything, every time.
    • Better error handling – If something fails in Ansible, it fails gracefully, showing exactly where and why. A shell script might just stop mid-way, leaving your setup half-broken.
    • More flexibility – Ansible modules allow for cleaner and more portable provisioning logic compared to writing a bunch of apt-get or yum commands.
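As a taste of what that module-based approach looks like, here is a small, hypothetical playbook fragment. The package and service names are illustrative, not the exact tasks used for this AMI:

```yaml
---
- name: Configure web server
  hosts: all
  become: true
  tasks:
    # Declarative and idempotent: describe the desired state, and the apt
    # module skips packages that are already installed, unlike a raw
    # `apt-get install` in a shell script.
    - name: Install MariaDB server and client
      ansible.builtin.apt:
        name:
          - mariadb-server
          - mariadb-client
        state: present
        update_cache: true

    - name: Ensure MariaDB is enabled and running
      ansible.builtin.service:
        name: mariadb
        state: started
        enabled: true
```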

    How It Works with Packer

    Once Packer spins up a temporary EC2 instance, Ansible takes over as the provisioner, installing software, configuring services, and making sure everything is properly set up before the AMI is saved.

    Now that we know why AMIs make life easier, how Packer automates the heavy lifting, and why Ansible keeps everything neat and organized, it’s time to roll up our sleeves and build our own custom AMI!

    (more…)
  • Build a Robust S3-Powered Backup Solution for WordPress Hosted on OpenLiteSpeed Using Bash Scripts

    I believe there’s no need to explain why we need proper automated backup solutions for our web servers! When it comes to WordPress, there are plenty of options. Many popular solutions involve installing plugins on WordPress that rely on WordPress cron jobs (WP-Cron) to run automatically. These plugins bundle the website files and dump the database tables using PHP capabilities.

    While these plugin-based solutions work well enough in most scenarios, I’ve noticed several important limitations:

    • Backing up an application through the application itself is inherently risky! If something goes wrong with WordPress, the plugin, or the web server running them, the entire backup process fails.
    • The process heavily relies on PHP and the web server’s limits, timeouts, and configurations—and a lot can go wrong.
    • It consumes significant resources, especially with larger websites containing millions of database records and thousands of files. This can keep your web server busy with backup jobs and prevent it from properly responding to actual user requests.
    • These solutions have built-in limitations—for example, you cannot back up the web server or underlying OS configurations.
    • To restore these backups, you need to first install and set up a basic WordPress instance, install and configure the backup plugin, and then run the restore process—hoping everything goes smoothly.
    • These solutions are limited to a single website; if you want to properly back up multiple websites on the same server, it gets more challenging.

    I know there are plenty of out-of-the-box solutions for server-level backups, but why install and configure another potentially bloated application with dozens of features you’ll never use? Instead, let’s create a simple but flexible backup solution tailored specifically for OpenLiteSpeed servers hosting WordPress sites, powered by bash scripts and easily deployable with Ansible (or manually)!

    Although we’re focusing on OpenLiteSpeed and MariaDB here, with some small tweaks, this solution can be adapted for other web servers like LiteSpeed Enterprise or nginx, and other database systems like MySQL.

    In this backup solution, I am going to use AWS S3 to store the backups, which offers secure, scalable, and relatively cheap remote storage.

    Backup on the server meme. A: Server crashed! B: Where is the backup? A: On the server!

    In this post I assume you are running a Debian-based Linux distribution on the server (like Debian, Ubuntu, etc.). If you are using another Linux distribution, you will have to adjust the commands, scripts (and the playbook) accordingly.

    You can find the complete solution including all the scripts and the optional Ansible playbook in this GitHub repository.

    Understanding the Backup Requirements

    Before diving into our backup solution, let’s understand what we need to back up on a WordPress installation running on an OpenLiteSpeed web server.

    What Needs to Be Backed Up?

    A complete WordPress backup solution should cover the following critical components:

    1. Website Files: WordPress core, themes and plugins, uploads (images, videos, documents), and in summary, everything in the WordPress installation.
    2. Database Content: All the tables and records in the database being used by WordPress, which includes WP core tables and any possible custom tables created by plugins and themes. Database users and their associated privileges should also be included in the backups with clear mappings showing which user belongs to which database.
    3. Web Server Configuration: OpenLiteSpeed configuration files, virtual host settings, and SSL certificates need to be backed up to ensure your server configuration can be restored exactly as it was.
    4. System Configuration: A list of installed packages, cron jobs, and other critical system configurations that make your server environment unique.

    A good backup strategy should capture all these components in an automated, scheduled manner, storing the output securely on remote storage, while providing a straightforward restoration path.

    The bash scripts we’re going to build are designed to achieve all these goals by:

    • Automatically detecting websites, databases, and their associated users
    • Backing up to a temporary local directory before uploading to a remote storage (AWS S3) and cleaning up the local directory after a successful backup
    • Implementing proper error handling and logging throughout the process
    • Including a dedicated restoration script that makes recovery simple and reliable
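As a rough sketch of the core flow (stage locally, then ship to S3), the function below is illustrative only: the real scripts in the repository handle site detection, logging, and error handling in much more detail, and the bucket name, database name, and paths here are placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Back up one website directory: archive it into a temporary staging
# directory, (optionally) dump its database, and upload the result to S3.
# Prints the path of the local archive it created.
backup_site() {
  local site_root="$1"
  local staging stamp archive
  staging="$(mktemp -d)"
  stamp="$(date +%Y%m%d-%H%M%S)"
  archive="$staging/$(basename "$site_root")-$stamp.tar.gz"

  # 1. Archive the website files (WordPress core, themes, plugins, uploads).
  tar -czf "$archive" -C "$(dirname "$site_root")" "$(basename "$site_root")"

  # 2. Dump the database (placeholder; needs real credentials and db name):
  # mysqldump --single-transaction example_db > "$staging/example_db-$stamp.sql"

  # 3. Upload to S3 and clean up locally (placeholder bucket name):
  # aws s3 cp "$archive" "s3://my-backup-bucket/$(basename "$site_root")/"

  echo "$archive"
}
```

A cron entry can then call this per site, and a matching restore script reverses the steps: download from S3, extract the files, and import the SQL dump.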
    (more…)