Deploying a web application on AWS Elastic Compute Cloud (EC2) usually means setting up a server with the right software, configurations, and optimizations. But let’s be honest—doing this manually over and over again gets old fast. Instead of setting up everything from scratch each time, Amazon Machine Images (AMIs) let us create a pre-configured system that we can reuse whenever we need to spin up a new EC2 instance.
And instead of doing this manually every time we need to change something in the AMI, we’ll use Packer to automate the entire process. But before we get into that, let’s break down what an AMI actually is and why you might want to build your own.
What is an AMI?
An Amazon Machine Image (AMI) is essentially a blueprint for an EC2 instance. It includes everything needed to launch a server: the operating system, installed software, configurations, and optional application code. Instead of setting up a fresh instance manually each time, you can use an AMI to deploy identical instances quickly and reliably.

Think of it like making a pizza at home. You could start from scratch every time—making the dough, preparing the sauce, chopping toppings—but why bother when you can just freeze a fully prepared pizza and bake it whenever you’re hungry? An AMI is that prepped pizza, ready to go. But unlike frozen food, an AMI doesn’t mean sacrificing quality or control. You still get a fresh, optimized setup—just without the hassle of doing it all over again.
For those who want to skip ahead, the complete solution with all scripts and configurations is available on my GitHub repository: aws-ols-mariadb-ami.
Why Build a Custom AMI?
When launching an EC2 instance, you have three main options:
- Start from scratch – Use a bare Linux AMI, launch an instance, SSH in, and manually install and configure everything. This gives you full control but is tedious and time-consuming.
- Use a prebuilt AMI from AWS Marketplace – These come with software pre-installed, saving setup time, but many require a paid subscription and often include extra software you don’t need.
- Build your own custom AMI – The best of both worlds! You get a pre-configured, lightweight setup, tailored to your needs, with only the software you actually use—no unnecessary bloat or extra costs.
In my previous posts, I explained how to use the OpenLiteSpeed AMI from the AWS Marketplace. It’s a convenient option, but it costs $5 per month. The funny thing? Everything inside that AMI is open-source and free. So instead of paying for it, we can build our own version. This saves money, allows full customization, and lets us configure it once and reuse it as many times as we need. Plus, we can skip unnecessary packages, keeping our AMI lightweight.
In this post, I’ll walk through how to build a custom AMI based on Ubuntu 24.04, with these software and packages preinstalled and configured:
- OpenLiteSpeed (with LiteSpeed Cache)
- PHP (LSPHP)
- MariaDB (Server & Client)
- phpMyAdmin
- WordPress
To avoid repeating that setup manually whenever something in the AMI needs to change, we’ll automate the entire process with Packer.
What is Packer?

Packer, created by HashiCorp, is a tool that automates machine image creation. Instead of manually setting up an instance and then taking a snapshot, Packer does everything for you. You define a template (in JSON or HCL), and Packer spins up a temporary server, installs and configures everything, then saves the final golden image as an AMI.
Why does this matter? Manually setting up AMIs is repetitive, time-consuming, and error-prone. With Packer, you define everything once, and it builds AMIs on autopilot. Need an update? Just tweak the template and rebuild—no clicking around AWS wondering what you forgot.
In short: Packer automates AMI creation instead of you doing it manually, ensures consistency across deployments, and saves time by avoiding configuration headaches.
Before we get into Ansible, let’s break down how Packer actually works. Packer doesn’t just magically create an AMI—it follows a process with two key components: builders and provisioners.
- Builders are responsible for creating the machine image. In our case, the Amazon EC2 builder launches a temporary EC2 instance, installs everything needed, and then snapshots it into an AMI.
- Provisioners handle installing software and configuring the system. Once the instance is up, provisioners take over to set up services, install dependencies, and customize the system before the image is finalized.
While Packer supports different provisioners, including raw shell scripts, a more structured approach makes things easier to maintain—which brings us to Ansible. If Packer is the robot that builds your AMI, then Ansible is the smart assistant making sure everything inside is set up exactly the way you want.
What is Ansible?
Ansible is an automation tool for configuring servers, installing software, and managing infrastructure—without manually SSH-ing into each machine. Instead of writing long, brittle shell scripts, you define what needs to be done in simple YAML playbooks, and Ansible handles the rest.
What makes Ansible special?

- Agentless – Unlike other automation tools, Ansible doesn’t require any extra software to be installed on the target machine. It just connects over SSH and runs commands.
- Declarative – Instead of telling the system how to install and configure things step by step, you describe what the final state should be, and Ansible figures out the rest.
- Idempotent – Running an Ansible playbook multiple times won’t cause issues. If something is already installed or configured, Ansible just skips it, preventing unnecessary work.
Why Use Ansible with Packer Instead of a Shell Script?

Packer has a Shell provisioner, so why not just use bash scripts? Well, while shell scripts work, they have drawbacks:
- Harder to maintain – Bash scripts can quickly turn into a tangled mess of commands and conditionals. Ansible uses structured, declarative YAML playbooks that are easier to read and modify.
- Idempotency – As mentioned above, Ansible won’t re-run work that’s already done, but shell scripts happily reinstall everything, every time.
- Better error handling – If something fails in Ansible, it fails gracefully, showing exactly where and why. A shell script might just stop mid-way, leaving your setup half-broken.
- More flexibility – Ansible modules allow for cleaner and more portable provisioning logic compared to writing a bunch of apt-get or yum commands.
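To make the contrast concrete, here is a small sketch (the package and banner line are illustrative, not part of our actual setup): the commented shell equivalent reinstalls and re-appends on every run, while the Ansible tasks act only when the system doesn’t already match the described state.

```yaml
# Shell equivalent: apt-get install -y nginx && echo "banner" >> /etc/motd
# would run (and append!) again on every execution.
- name: Idempotent provisioning sketch
  hosts: all
  become: yes
  tasks:
    - name: Ensure nginx is installed # skipped if already installed
      apt:
        name: nginx
        state: present

    - name: Ensure a banner line exists # added once, never duplicated
      lineinfile:
        path: /etc/motd
        line: "Provisioned by Ansible"
```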
How It Works with Packer
Once Packer spins up a temporary EC2 instance, Ansible takes over as the provisioner, installing software, configuring services, and making sure everything is properly set up before the AMI is saved.
Now that we know why AMIs make life easier, how Packer automates the heavy lifting, and why Ansible keeps everything neat and organized, it’s time to roll up our sleeves and build our own custom AMI!
Prerequisites
Before we dive into building our custom AMI, let’s get a few things set up. You’ll need to install and configure the following:
Install and Configure AWS CLI
Packer needs access to AWS to build and publish the AMI. There are multiple ways to grant it permission, but for simplicity, we’ll use shared credential files—and the easiest way to generate those is by installing and configuring the AWS CLI.
Install the AWS CLI by following the official AWS documentation, then configure it to have access to the AWS account where you are going to deploy the resources:
aws configure
For simplicity, I recommend using a credential with administrator access (at least for this guide).
If you have multiple AWS profiles configured, make sure you’re using the right one before running Packer commands. You can do this by exporting the AWS_PROFILE environment variable:
export AWS_PROFILE=<AWS_PROFILE>
Install Packer
Let’s get Packer installed. Download and install Packer by following the official HashiCorp Packer documentation.
Verify that Packer is installed correctly:
packer version
You should see output similar to:
Packer v1.12.0
Install Ansible
Since we’re using Ansible as a provisioner in Packer, we need to install it first. Follow the official Ansible documentation to install it on your machine.
Verify the installation:
ansible --version
You should see output similar to:
ansible [core 2.x.x]
Packer Setup
Now that we have the prerequisites out of the way, it’s time to get our hands dirty and start building our custom AMI.
First, let’s organize our project. Run the following command to create a new directory where we’ll keep all the necessary files and configurations:
mkdir aws-ols-mariadb-ami && cd aws-ols-mariadb-ami
Now, inside the main directory of the project, create a new file called main.pkr.hcl. This will be our main Packer configuration file, which defines everything Packer needs to build our AMI.
variable "aws_region_main" {
  type = string
}

variable "aws_region_backup" {
  type = string
}

variable "s3_backup_bucket" {
  type = string
}

variable "s3_backup_dir" {
  type    = string
  default = "ec2-backups/ols"
}

variable "mariadb_admin_user" {
  type    = string
  default = "dbadmin"
}

variable "ols_admin_user" {
  type    = string
  default = "admin"
}

packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1"
    }
    ansible = {
      source  = "github.com/hashicorp/ansible"
      version = "~> 1"
    }
  }
}

source "amazon-ebs" "ols_mariadb" {
  region          = var.aws_region_main
  instance_type   = "t3.small"
  ssh_username    = "ubuntu"
  ami_name        = "openlitespeed-mariadb-ami-{{timestamp}}"
  ami_description = "Ubuntu 24 based AMI including: OpenLiteSpeed, LSPHP, MariaDB"

  source_ami_filter {
    filters = {
      name                = "ubuntu-pro-server*24.04-amd64*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical's AWS Account ID for Ubuntu
    most_recent = true
  }

  tags = {
    Name = "OLS-Webserver"
  }
}

build {
  sources = ["source.amazon-ebs.ols_mariadb"]

  provisioner "ansible" {
    playbook_file = "playbook.yml"
    extra_arguments = [
      "-e", "s3_backup_bucket=${var.s3_backup_bucket}",
      "-e", "s3_backup_dir=${var.s3_backup_dir}",
      "-e", "aws_region_backup=${var.aws_region_backup}",
      "-e", "mariadb_admin_user=${var.mariadb_admin_user}",
      "-e", "ols_admin_user=${var.ols_admin_user}",
      "--scp-extra-args", "'-O'" # To resolve https://github.com/hashicorp/packer/issues/11783
    ]
  }
}
We define variables at the top of the file to keep our configuration flexible and reusable:
- aws_region_main & aws_region_backup – The AWS regions where the AMI and backups will be stored.
- s3_backup_bucket – The S3 bucket where backups will be stored. This is used by the ols-wp-backup scripts.
- s3_backup_dir – The directory inside the S3 bucket for backups (default: "ec2-backups/ols").
- mariadb_admin_user – The username for the user account with admin access on MariaDB.
- ols_admin_user – The username for OpenLiteSpeed’s admin panel.
Then we define the required Packer plugins for this project. The Amazon plugin allows Packer to create AMIs in AWS. The Ansible plugin lets Packer use Ansible for provisioning instead of raw shell scripts.
source "amazon-ebs" defines the Amazon EC2 instance that Packer will use to create the AMI:
- instance_type = "t3.small" – Specifies the EC2 instance size.
- source_ami_filter – Finds the latest Ubuntu 24.04 Pro AMI from Canonical’s AWS account.
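If you’re curious which image that filter resolves to, you can reproduce the lookup with the AWS CLI (this assumes your AWS credentials are already configured, as described in the prerequisites):

```
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu-pro-server*24.04-amd64*" \
            "Name=root-device-type,Values=ebs" \
            "Name=virtualization-type,Values=hvm" \
  --query 'sort_by(Images, &CreationDate)[-1].{Name:Name,ImageId:ImageId}'
```

This sorts all matching images by creation date and prints the newest one, which is exactly what most_recent = true does for Packer.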
Note that this configuration describes the temporary instance in which our AMI is going to be cooked, not the final instances you’ll launch from that AMI. However, some of these settings, like ami_name, ami_description, ssh_username, and tags, are going to be present in the final AMI as well.
Build Block (Provisioning with Ansible)
This section tells Packer to:
- Use source.amazon-ebs.ols_mariadb as the source to build from.
- Run an Ansible playbook (playbook.yml) to configure the instance.
- Pass variables to Ansible, such as the MariaDB and OpenLiteSpeed admin usernames and the S3 backup details.
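Once the template and the playbook it references are in place, the typical workflow is to install the plugins, validate the template, and kick off the build (the variable values below are placeholders; substitute your own regions and bucket):

```
packer init .
packer validate \
  -var "aws_region_main=us-east-1" \
  -var "aws_region_backup=us-west-2" \
  -var "s3_backup_bucket=my-backup-bucket" .
packer build \
  -var "aws_region_main=us-east-1" \
  -var "aws_region_backup=us-west-2" \
  -var "s3_backup_bucket=my-backup-bucket" .
```

packer init downloads the Amazon and Ansible plugins declared in required_plugins, and validate catches template errors before you spend time (and money) on a real build.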
Now that we’ve set up Packer, the next step is to create our Ansible playbook to configure the AMI!
Ansible Playbooks
Now that we have our Packer configuration ready, let’s talk about the real brains of the operation: our Ansible playbooks. These are the instructions that tell our server what to install, configure, and optimize.
Main Playbook Structure
Just like you wouldn’t try to cook an entire five-course meal from a single giant recipe, we’ve broken down our server setup into smaller, more manageable playbooks. Our playbook.yml file acts as the master chef, coordinating all the other playbooks. It is also the entry point for the Packer provisioner, as described above.
- name: Setup Webserver
  hosts: all
  become: yes
  vars:
    s3_backup_bucket: "{{ s3_backup_bucket }}"
    s3_backup_dir: "{{ s3_backup_dir }}"
    aws_region_backup: "{{ aws_region_backup }}"
    mariadb_admin_user: "{{ mariadb_admin_user }}"
    ols_admin_user: "{{ ols_admin_user }}"

- name: Pre Setup
  ansible.builtin.import_playbook: playbook_pre.yml

- name: Setup OpenLiteSpeed and PHP
  ansible.builtin.import_playbook: playbook_webserver.yml

- name: Setup MariaDB
  ansible.builtin.import_playbook: playbook_db.yml

- name: Setup phpMyAdmin
  ansible.builtin.import_playbook: playbook_phpmyadmin.yml

- name: Setup Firewall
  ansible.builtin.import_playbook: playbook_firewall.yml

- name: Setup Backups
  ansible.builtin.import_playbook: backup/playbook.yml
  vars:
    s3_backup_bucket: "{{ s3_backup_bucket }}"
    s3_backup_dir: "{{ s3_backup_dir }}"
    aws_region_backup: "{{ aws_region_backup }}"

- name: Post Setup
  ansible.builtin.import_playbook: playbook_post.yml

- name: Setup first-boot initialization
  ansible.builtin.import_playbook: playbook_init.yml
  vars:
    mariadb_admin_user: "{{ mariadb_admin_user }}"
    ols_admin_user: "{{ ols_admin_user }}"
The first section defines our variables, which get passed from Packer to Ansible. Think of these as the settings that control how everything else works. The remaining sections import all the specific playbooks we’ll need, in the exact order they should run.
Understanding the Playbook Organization
Rather than cramming everything into one massive playbook (which would be like writing a novel without chapters), we’ve split our configuration into logical modules. Here’s the breakdown:
playbook_pre.yml: The warm-up routine. Updates packages, removes unnecessary software, and does basic security hardening.
playbook_webserver.yml: Sets up OpenLiteSpeed and PHP—the engine that will power our web applications.
playbook_db.yml: Installs and secures MariaDB.
playbook_phpmyadmin.yml: Adds phpMyAdmin for easier database management (because nobody likes managing databases via command line only).
playbook_firewall.yml: Configures the firewall to protect our server while allowing necessary connections.
backup/playbook.yml: Our backup submodule (more on this later) that handles automated backups to S3.
playbook_post.yml: The clean-up crew. Tidies up after everything else is done.
playbook_init.yml: Sets up first-boot initialization tasks, like generating secure passwords, that run when an instance is launched from the AMI.
This modular approach makes our configuration easier to maintain and understand. Need to tweak the firewall? Just update playbook_firewall.yml. Want to change how MariaDB is configured? Head to playbook_db.yml. It’s like having a well-organized toolbox where you know exactly where to find each tool.
Importing Sub-Playbooks
You might have noticed all those import_playbook lines in our main playbook. This is how Ansible combines multiple playbooks into one workflow. It’s like including chapters in a book—each maintains its own identity but contributes to the whole story. One cool thing about this approach is that we can pass variables down to the imported playbooks.
Now let’s dive into each of the playbooks and explain what they do.
Pre-Setup Tasks
This is where we prepare our server with essential updates and security configurations before installing our main software stack. The playbook_pre.yml file handles all the initial tasks that need to happen before we install any specialized software.
- name: Pre-Setup
  hosts: all
  become: yes
  tasks:
    - name: Update apt packages
      apt:
        update_cache: yes
        upgrade: dist

    - name: Remove unnecessary packages
      apt:
        name:
          - snapd
          - lxd-agent-loader
          - command-not-found
          - python3-commandnotfound
          - apport
          - apport-core-dump-handler
          - apport-symptoms
          - python3-apport
          - open-iscsi
          - multipath-tools
          - motd-news-config
          - landscape-common
          - ubuntu-pro-auto-attach
          - ubuntu-pro-client
          - ubuntu-pro-client-l10n
        state: absent
        purge: yes

    - name: Install required packages
      apt:
        name:
          - curl
          - cloud-init
        state: present

    - name: Configure unattended-upgrades for security updates only
      copy:
        dest: /etc/apt/apt.conf.d/50unattended-upgrades
        content: |
          Unattended-Upgrade::Origins-Pattern {
              "o=Ubuntu,a=${distro_codename}-security";
          };
          Unattended-Upgrade::Package-Blacklist {
          };
          Unattended-Upgrade::Automatic-Reboot "false";
          Unattended-Upgrade::MinimalSteps "true";
          Unattended-Upgrade::Remove-Unused-Dependencies "true";

    - name: Enable unattended-upgrades
      copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";
          APT::Periodic::AutocleanInterval "7";

    - name: Disable root SSH login
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?PermitRootLogin" # replace any existing directive instead of appending a duplicate
        line: "PermitRootLogin no"
        state: present

    - name: Restart SSH service
      service:
        name: ssh
        state: restarted
First, we update all system packages to ensure our AMI has the latest security patches. Then we remove a bunch of unnecessary packages to slim down our AMI. Most of these packages are Ubuntu-specific tools for error reporting (apport), support services (landscape, ubuntu-pro), container/VM tools (lxd, open-iscsi), and command suggestions (command-not-found). For a server AMI, these packages just add bloat, consume resources, and potentially introduce security concerns without providing value for our web server use case.
After trimming the fat, we install only what we absolutely need: curl for downloads and API interactions, and cloud-init for first-boot initialization. This minimalist approach keeps our AMI lightweight and reduces potential attack vectors.
Instead of removing unattended-upgrades completely, we configure it to only apply security updates. This gives us the best of both worlds: critical security patches are automatically installed, while other updates that might affect stability are held back for manual review. We specifically configure it not to automatically reboot the server, as unexpected restarts could disrupt services.
We also implement a basic security measure by disabling direct root SSH login. This is standard practice—even if someone gets your SSH private key, they still can’t log in directly as root. Finally, we restart the SSH service to apply our configuration changes.
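If you want to confirm the hardening took effect, OpenSSH can print its fully resolved configuration with sshd -T, which is a quick sanity check to run on the provisioned machine (not part of the playbook itself):

```
sudo sshd -T | grep -i permitrootlogin
```

This should report permitrootlogin no once the playbook has run and the SSH service has been restarted.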
In the next section, we’ll look at setting up OpenLiteSpeed and PHP—the core of our web server.
Web Server Configuration
Next up in our playbook lineup is playbook_webserver.yml, which handles the installation and configuration of OpenLiteSpeed and PHP. This is the engine that will power our web applications, so let’s see how we set it up:
- name: Setup OpenLiteSpeed and PHP
  hosts: all
  become: yes
  vars:
    php_version: "83"
  tasks:
    - name: Add OpenLiteSpeed repository
      shell: |
        wget -O - https://repo.litespeed.sh | bash
      args:
        executable: /bin/bash

    - name: Install OpenLiteSpeed
      apt:
        name: openlitespeed
        state: present
        update_cache: yes

    - name: Install LSPHP 8.3
      apt:
        name:
          - lsphp{{ php_version }}
          - lsphp{{ php_version }}-common
          - lsphp{{ php_version }}-imap
          - lsphp{{ php_version }}-mysql
          - lsphp{{ php_version }}-opcache
        state: present

    - name: Create a symbolic link for PHP
      file:
        src: /usr/local/lsws/lsphp{{ php_version }}/bin/php
        dest: /usr/bin/php
        state: link

    - name: Enable and start OpenLiteSpeed
      systemd:
        name: lsws
        enabled: yes
        state: started
Adding the OpenLiteSpeed Repository
The first task adds the official OpenLiteSpeed repository to our system. We’re using a shell command here because that’s what the OpenLiteSpeed team provides. It’s a simple script that adds their repository to our package manager. Note that we specify /bin/bash as the executable to ensure compatibility.
Installing OpenLiteSpeed
Once the repository is added, we install OpenLiteSpeed using the Apt package manager. The update_cache: yes option ensures that Apt refreshes its package list before installing, which is important since we just added a new repository.
Setting Up PHP with LSPHP
OpenLiteSpeed works best with its own PHP implementation called LSPHP, which is optimized for performance with the LiteSpeed server. We’ve made our playbook more flexible by using a variable for the PHP version (php_version: "83"), which makes it easier to upgrade in the future.
The extensions we’re including are:
- lsphp83-common: Common PHP libraries and files
- lsphp83-imap: For email functionality (if needed by your applications)
- lsphp83-mysql: For connecting to MariaDB/MySQL databases
- lsphp83-opcache: For PHP code caching, which significantly improves performance
You can add more extensions to this list depending on your specific needs. PHP extensions like curl, gd (for image processing), or xml are commonly used in many web applications.
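Extending the list is just a matter of adding more package names to the task shown above. The sketch below adds the curl extension as an example (the exact set of lsphp packages available depends on the LiteSpeed repository, so verify a package exists before adding it):

```yaml
- name: Install LSPHP 8.3 with extra extensions
  apt:
    name:
      - lsphp{{ php_version }}
      - lsphp{{ php_version }}-common
      - lsphp{{ php_version }}-mysql
      - lsphp{{ php_version }}-opcache
      - lsphp{{ php_version }}-curl # example extra extension for HTTP requests from PHP
    state: present
```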
Creating a System-Wide PHP Symlink
I also included a task that creates a symbolic link from the LSPHP binary to /usr/bin/php. This is an important usability improvement that allows PHP to be called directly from the command line simply as php rather than using the full path to the LSPHP binary. This makes it easier to run scripts and commands, especially if you’re working with tools that expect to find PHP in the standard location.
Starting and Enabling the Service
Finally, we make sure OpenLiteSpeed starts automatically at boot and is running immediately. This ensures that the web server is ready to serve content as soon as the instance boots up.
Why OpenLiteSpeed?
You might be wondering why we chose OpenLiteSpeed over more common web servers like Apache or Nginx. OpenLiteSpeed offers several advantages:
- Performance: OpenLiteSpeed is known for its speed and efficiency, often outperforming Apache and even Nginx in benchmarks.
- Built-in Cache: It includes server-level caching without additional modules.
- WordPress Optimization: If you’ll be running WordPress (as many will with this setup), OpenLiteSpeed has specific optimizations and a dedicated LiteSpeed Cache plugin for WordPress.
- Resource Efficiency: It uses fewer resources than Apache while handling more concurrent connections.
- Web Admin Interface: Unlike Nginx, OpenLiteSpeed comes with a web-based admin panel for easier configuration.
In the next section, we’ll look at setting up MariaDB—the database server that will store our application data.
Database Setup
After setting up our web server, it’s time to configure the database. Our playbook_db.yml handles the installation and security hardening of MariaDB, which will store all our application data:
- name: Setup MariaDB
  hosts: all
  become: yes
  tasks:
    - name: Add MariaDB repository
      shell: |
        wget -O - https://r.mariadb.com/downloads/mariadb_repo_setup | bash
      args:
        executable: /bin/bash

    - name: Install db and python plugin
      apt:
        name:
          - python3-pymysql
          - mariadb-server
          - mariadb-client
          - mariadb-backup
        state: present
        update_cache: yes

    - name: Enable and start MariaDB
      systemd:
        name: mariadb
        enabled: yes
        state: started

    - name: Change MySQL root authentication to UNIX socket
      community.mysql.mysql_query:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        query: "ALTER USER 'root'@'localhost' IDENTIFIED WITH unix_socket;"

    - name: Flush privileges
      community.mysql.mysql_query:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        query: "FLUSH PRIVILEGES;"

    - name: Remove anonymous MySQL users
      community.mysql.mysql_user:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        name: ""
        host_all: yes
        state: absent
        column_case_sensitive: false

    - name: Disallow root login remotely
      community.mysql.mysql_user:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        name: root
        host: "{{ item }}"
        state: absent
        column_case_sensitive: false
      loop:
        - "%"
        - "0.0.0.0"
        - "::"

    - name: Remove test database
      community.mysql.mysql_db:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        name: test
        state: absent

    - name: Remove privileges for test database
      community.mysql.mysql_query:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        query: "DELETE FROM mysql.db WHERE Db='test' OR Db='test\\_%'"

    - name: Reload privilege tables
      community.mysql.mysql_query:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        query: "FLUSH PRIVILEGES;"
Adding the MariaDB Repository
Similar to OpenLiteSpeed, we start by adding the official MariaDB repository. This ensures we get the latest stable version rather than whatever might be in the Ubuntu repositories. The script also handles adding the necessary GPG keys for package verification.
Installing MariaDB and Dependencies
We install several MariaDB-related packages:
- mariadb-server: The database server itself
- mariadb-client: Command-line tools for interacting with MariaDB
- mariadb-backup: Tools for database backup and recovery
- python3-pymysql: A Python library that allows Ansible to interact with MariaDB
Securing the Database
The majority of tasks in this playbook focus on security. They implement the same steps typically handled by the mysql_secure_installation script that database administrators often run manually, but here we’re automating it:
- Change root authentication method: We set the root user to authenticate using the UNIX socket, which means only system users with sudo privileges can access the MariaDB root account. This is more secure than password-based authentication for root.
- Remove anonymous users: By default, MariaDB comes with anonymous user accounts that allow anyone to connect without credentials (though only from localhost). We remove these to strengthen security.
- Restrict root login: We ensure that the root database user can only connect from the local machine, not from remote hosts.
- Remove test database: MariaDB ships with a test database that we don’t need in production, so we remove it.
- Clean up test database privileges: Even after removing the test database, there might be leftover privileges in the system tables, so we clean those up too.
After each set of security changes, we reload the privilege tables to ensure all changes take effect immediately.
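On an instance built from this AMI, you can sanity-check the hardening by listing the remaining accounts over the same UNIX socket (sudo is required precisely because of the socket authentication we just configured):

```
sudo mariadb -e "SELECT User, Host, plugin FROM mysql.user;"
```

The output should show no anonymous (empty-name) users, root restricted to localhost with the unix_socket plugin, and no remote root entries.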
We haven’t created any regular database users in this playbook because those will be generated during the first boot of instances created from our AMI. This approach is more secure and flexible, as it means each instance gets fresh, unique credentials instead of all sharing the same hardcoded passwords.
In the next section, we’ll set up phpMyAdmin to provide a web-based interface for managing our MariaDB databases.
Adding phpMyAdmin
Now that we have our database server running, let’s add phpMyAdmin to provide a convenient web-based interface for database management. The playbook_phpmyadmin.yml file handles this installation:
- name: Install phpMyAdmin
  hosts: all
  become: yes
  tasks:
    - name: Ensure dependencies are installed
      ansible.builtin.apt:
        name:
          - unzip
          - wget
        state: present

    - name: Ensure /var/www/ directory exists
      ansible.builtin.file:
        path: /var/www/
        state: directory
        owner: www-data
        group: www-data
        mode: "0755"

    - name: Download phpMyAdmin latest version
      ansible.builtin.get_url:
        url: "https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.zip"
        dest: "/var/www/phpMyAdmin-latest-all-languages.zip"
        mode: "0644"

    - name: Unzip phpMyAdmin
      ansible.builtin.unarchive:
        src: "/var/www/phpMyAdmin-latest-all-languages.zip"
        dest: "/var/www/"
        remote_src: yes

    - name: Remove downloaded zip file
      ansible.builtin.file:
        path: "/var/www/phpMyAdmin-latest-all-languages.zip"
        state: absent

    - name: Find extracted phpMyAdmin directory
      ansible.builtin.find:
        paths: /var/www/
        patterns: "phpMyAdmin-*-all-languages"
        file_type: directory
      register: pma_directory

    - name: Rename phpMyAdmin directory
      ansible.builtin.command: mv "{{ pma_directory.files[0].path }}" /var/www/phpmyadmin
      args:
        creates: /var/www/phpmyadmin # Ensures idempotency

    - name: Copy config sample to config.inc.php
      ansible.builtin.copy:
        src: "/var/www/phpmyadmin/config.sample.inc.php"
        dest: "/var/www/phpmyadmin/config.inc.php"
        remote_src: yes
        mode: "0644"

    - name: Ensure phpMyAdmin directory is owned by www-data
      ansible.builtin.file:
        path: /var/www/phpmyadmin
        state: directory
        owner: www-data
        group: www-data
        mode: "0755"
        recurse: yes
Why Manual Installation Instead of Package?
You might be wondering why we’re installing phpMyAdmin manually instead of using Ubuntu’s package manager. There are a few good reasons:
- Latest Version: The package repositories often lag behind the latest releases. By downloading directly from the phpMyAdmin website, we ensure we get the most up-to-date version with all security patches.
- Flexibility: Manual installation gives us more control over where and how phpMyAdmin is installed, making it easier to integrate with OpenLiteSpeed.
- Consistency: This approach works the same way regardless of the underlying distribution or package system.
Installation Process
Our playbook follows a straightforward installation process:
First, we ensure the required dependencies (unzip and wget) are installed, then we create the /var/www/ directory if it doesn’t already exist. This is where we’ll store the phpMyAdmin files.
We then download the latest version of phpMyAdmin directly from the official website as a ZIP file. After extracting the contents, we clean up by removing the ZIP file to save space.
Since the extracted directory has a version number in its name, we use the find module to locate it dynamically, then rename it to the simpler /var/www/phpmyadmin path for easier reference.
We create a basic configuration file by copying the provided sample configuration, and finally, we ensure all files are owned by the web server user (www-data) with appropriate permissions.
What’s Missing?
You might notice that we don’t configure a few important aspects of phpMyAdmin:
- Authentication Secret: We don’t set the blowfish secret used for cookie authentication.
- Database Connection: We don’t configure database credentials.
This is intentional. These settings contain sensitive information like passwords, which should be unique for each instance. Instead of hard-coding them in our AMI, we’ll generate them during the first boot of each new instance using the initialization scripts we’ll set up later. This approach is more secure and flexible.
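As a sketch of what that first-boot step might look like, a random blowfish secret can be generated and injected into config.inc.php with standard tools. This assumes the copied sample config still contains its default empty $cfg['blowfish_secret'] = ''; line; 24 random bytes base64-encode to exactly the 32 characters phpMyAdmin expects.

```shell
# Generate a 32-character secret (24 random bytes -> 32 base64 characters)
SECRET=$(openssl rand -base64 24)

# Replace the empty blowfish_secret value in the config copied earlier
sudo sed -i "s|\(\$cfg\['blowfish_secret'\] = \)'[^']*'|\1'${SECRET}'|" \
  /var/www/phpmyadmin/config.inc.php
```

Running this at first boot (rather than at AMI build time) guarantees every instance ends up with its own unique secret.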
Security Considerations
Running phpMyAdmin can introduce security risks if not properly configured, as it provides a web interface to your entire database. In a production environment, you should consider additional security measures:
- Access Control: Restrict access to the phpMyAdmin URL using OpenLiteSpeed’s authentication or by placing it behind a VPN.
- Use HTTPS: Always access phpMyAdmin over encrypted connections.
- Regular Updates: Keep phpMyAdmin updated to patch security vulnerabilities.
Our setup provides a solid foundation, but you might want to add these extra security layers depending on your specific requirements.
In the next section, we’ll set up the firewall to protect our server from unauthorized access.
Firewall Configuration
Securing our server with a properly configured firewall is essential for protecting our application from unauthorized access. For this, we’ll use Ubuntu’s Uncomplicated Firewall (UFW), which provides a simple interface to iptables. Let’s look at our playbook_firewall.yml file:
- name: Setup Firewall
  hosts: all
  become: yes
  tasks:
    - name: Allow OpenLiteSpeed WebAdmin port
      ufw:
        rule: allow
        port: "7080"
        proto: tcp

    - name: Allow HTTP and HTTPS traffic
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      loop:
        - "80"
        - "8088"
        - "443"

    - name: Allow SSH
      ufw:
        rule: allow
        port: ssh
        proto: tcp

    - name: Enable UFW firewall
      ufw:
        state: enabled
Understanding Firewall Rules
Our firewall configuration follows the principle of “default deny with explicit allows.” This means we block all traffic by default and only open specific ports that our applications need:
- Port 7080: This is the administration interface for OpenLiteSpeed. We open this port so we can access the OpenLiteSpeed Web Admin panel.
- Ports 80 and 443: These are the standard HTTP and HTTPS ports needed for web traffic. Port 80 is for unencrypted web requests, while 443 is for encrypted SSL/TLS connections.
- Port 8088: This is the port being used in the OLS default example listener and vhost. Feel free to close it later after launching the instance if you don’t need it.
- SSH port: We keep SSH access open so we can still log into the server remotely for management. UFW understands the service name “ssh” and maps it to the default port 22.
The final task enables the firewall with our configured rules. UFW is smart enough to add its own rule to allow established connections to continue, so enabling the firewall won’t disconnect your current SSH session.
Security Considerations
While this configuration provides basic security, there are some additional considerations for a production environment:
- Restricting OpenLiteSpeed Admin Access: The WebAdmin port (7080) is currently open to any IP address. For better security, you should limit access to your specific IP address or VPN network IP block. This can be done either in UFW or, preferably, using AWS security groups. For publicly exposed instances or if the instance is behind NAT on a private subnet, AWS security groups provide an additional layer of protection at the network level.
- Securing SSH Access: Instead of leaving SSH open to the world, restrict it to specific trusted IP addresses or ranges using AWS security groups. This significantly reduces the attack surface by preventing unauthorized login attempts from unknown sources.
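As a sketch of the UFW-based approach, you could replace the blanket SSH rule after launching an instance (the CIDR below is a documentation placeholder; substitute your own trusted range, and the security-group ID is hypothetical):

```shell
# Remove the allow-all SSH rule and re-add it scoped to a trusted network.
# 203.0.113.0/24 is a placeholder range; use your office/VPN CIDR instead.
sudo ufw delete allow ssh
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp

# Alternatively (and preferably), enforce the same restriction in the
# AWS security group; the group ID here is hypothetical:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/24
```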
With the firewall in place, our server now has a basic but effective security perimeter. External access is limited to just the ports we need for our web application to function.
In the next section, we’ll cover the first-boot initialization process, which handles generating unique credentials and completing the setup when a new instance is launched from our AMI.
First-Boot Initialization

One of the most crucial aspects of creating a secure, reusable AMI is proper initialization when an instance first boots. Instead of hard-coding credentials, we’ll dynamically generate them at boot time. Let’s examine our playbook_init.yml file, which sets this up:
---
- name: Setup MariaDB first-boot credential generation
  hosts: all
  become: yes
  tasks:
    - name: Create db-init script
      copy:
        dest: /usr/local/bin/db-init.sh
        content: |
          #!/bin/bash
          set -e

          # Log all output
          exec > >(tee /var/log/db-init.log) 2>&1

          echo "Starting MariaDB credential generation on first boot..."

          # Generate a secure random password
          DB_PASSWORD=$(openssl rand -base64 32 | tr -d "=/+" | cut -c1-24)

          # Create database admin user for phpMyAdmin
          echo "Creating database admin user..."
          mariadb -e "CREATE USER '{{ mariadb_admin_user | default('dbadmin') }}'@'localhost' IDENTIFIED BY '$DB_PASSWORD';"
          mariadb -e "CREATE USER '{{ mariadb_admin_user | default('dbadmin') }}'@'%' IDENTIFIED BY '$DB_PASSWORD';"
          mariadb -e "GRANT ALL PRIVILEGES ON *.* TO '{{ mariadb_admin_user | default('dbadmin') }}'@'localhost' WITH GRANT OPTION;"
          mariadb -e "GRANT ALL PRIVILEGES ON *.* TO '{{ mariadb_admin_user | default('dbadmin') }}'@'%' WITH GRANT OPTION;"
          mariadb -e "FLUSH PRIVILEGES;"

          # Create credentials file
          echo "Creating credentials file..."
          CREDS_FILE="/home/ubuntu/mariadb_credentials.txt"
          cat > $CREDS_FILE << EOF
          # MariaDB Admin Credentials - Use for phpMyAdmin admin login
          Username: {{ mariadb_admin_user | default('dbadmin') }}
          Password: $DB_PASSWORD

          # To change password, run:
          # mariadb -e "SET PASSWORD FOR '{{ mariadb_admin_user | default('dbadmin') }}'@'localhost' = PASSWORD('new_password');"
          # mariadb -e "SET PASSWORD FOR '{{ mariadb_admin_user | default('dbadmin') }}'@'%' = PASSWORD('new_password');"
          EOF

          # Set proper permissions
          chown ubuntu:ubuntu $CREDS_FILE
          chmod 600 $CREDS_FILE

          echo "MariaDB credential generation completed successfully."
        mode: "0755"

    - name: Create OpenLiteSpeed password generation script
      copy:
        dest: /usr/local/bin/ols-init.sh
        content: |
          #!/bin/bash
          set -e

          # Log all output
          exec > >(tee /var/log/ols-init.log) 2>&1

          echo "Starting OpenLiteSpeed admin password generation on first boot..."

          # Generate a strong random password (24 alphanumeric chars)
          OLS_PASSWORD=$(openssl rand -base64 32 | tr -d "=/+" | cut -c1-24)

          # Generate the encrypted password for OpenLiteSpeed
          # We need to use the OpenLiteSpeed password encryption tool
          ENCRYPTED_PASS=$(/usr/local/lsws/admin/fcgi-bin/admin_php \
            -c /usr/local/lsws/admin/conf/php.ini \
            -q /usr/local/lsws/admin/misc/htpasswd.php "$OLS_PASSWORD")

          # Update the OpenLiteSpeed htpasswd file
          ADMIN_USER="{{ ols_admin_user | default('admin') }}"
          sed -i "/^$ADMIN_USER:/d" /usr/local/lsws/admin/conf/htpasswd
          echo "$ADMIN_USER:$ENCRYPTED_PASS" >> /usr/local/lsws/admin/conf/htpasswd

          # Create credentials file for user access
          CREDS_FILE="/home/ubuntu/openlitespeed_credentials.txt"
          cat > $CREDS_FILE << EOF
          # OpenLiteSpeed Admin Credentials
          Username: $ADMIN_USER
          Password: $OLS_PASSWORD

          # Admin URL: https://your-server-ip:7080
          EOF

          # Set proper permissions
          chown ubuntu:ubuntu $CREDS_FILE
          chmod 600 $CREDS_FILE

          # Restart OpenLiteSpeed to apply changes
          systemctl restart lsws

          echo "OpenLiteSpeed admin password generation completed successfully."
        mode: "0755"

    - name: Create cloud-init configuration
      copy:
        dest: /etc/cloud/cloud.cfg.d/99_ols_init.cfg
        content: |
          #cloud-config
          runcmd:
            - systemctl start mariadb
            - /usr/local/bin/db-init.sh
            - /usr/local/bin/ols-init.sh
            - rm -f /usr/local/bin/db-init.sh
            - rm -f /usr/local/bin/ols-init.sh
        mode: "0644"
Auto-Generating Secure Credentials
Our initialization approach involves creating two scripts that will run on the first boot of a new instance:
- db-init.sh: Generates a random, secure password for the MariaDB admin user, creates the user account with full privileges, and saves the credentials to a file.
- ols-init.sh: Creates a random password for the OpenLiteSpeed admin interface, encrypts it using OpenLiteSpeed’s password tool, updates the configuration, and saves the credentials to a file.
These scripts use OpenSSL to generate cryptographically secure random passwords, ensuring each instance has unique credentials that aren’t hardcoded in the AMI.
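You can try the exact pipeline from the scripts locally to see what it produces: 32 random bytes are base64-encoded (about 43 characters), the URL-unsafe characters are stripped, and the result is truncated to 24 characters.

```shell
# Same password pipeline used in db-init.sh and ols-init.sh
DB_PASSWORD=$(openssl rand -base64 32 | tr -d "=/+" | cut -c1-24)

# Prints a 24-character alphanumeric password
echo "$DB_PASSWORD"
```

Each run yields a fresh value, which is exactly why every instance launched from the AMI ends up with unique credentials.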
Using cloud-init for First Boot Tasks
To ensure these scripts run automatically when an instance launches, we use cloud-init, which is specifically designed for initialization tasks in cloud environments. Our cloud-init configuration:
- Starts the MariaDB service
- Runs our database initialization script
- Runs our OpenLiteSpeed initialization script
- Removes the initialization scripts (for security)
Cloud-init is ideal for this task because:
- It runs automatically on first boot
- It’s specifically designed for cloud instance initialization
- It integrates well with AWS
Storing Credentials Securely
After generating the credentials, both scripts save them to text files in the ubuntu user’s home directory with strict permissions (only readable by the ubuntu user). This provides an easy way for administrators to access the credentials when they first connect to the instance, while keeping them secure from other users on the system.
For the MariaDB credentials, we create a user that can log in from both localhost and remote hosts (the ‘%’ wildcard), giving flexibility in how you connect to the database. The credentials file also includes instructions for changing the password if needed.
For OpenLiteSpeed, we need to handle its specific password encryption format. The script uses OpenLiteSpeed’s own tools to generate a properly encrypted password hash, then updates the htpasswd file and restarts the server to apply the changes.
In a production environment, you might want to consider additional security measures, such as automatically uploading the credentials to AWS Secrets Manager or sending them to a secure notification channel rather than storing them in plain text files on the server. If you stick with the current approach, make sure to SSH into the server after first boot, retrieve and store the credentials securely in your password manager or other secure storage, and then immediately remove these text files from the server to prevent potential exposure.
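For example, a first-boot script could hand the password straight to AWS Secrets Manager instead of writing a file. This is a hedged sketch, not part of the AMI: the secret name is made up, the instance role would need secretsmanager:CreateSecret, and IMDSv2-only instances would need a session token for the metadata call.

```shell
# Hypothetical alternative to the credentials file in db-init.sh
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws secretsmanager create-secret \
  --name "ols-ami/$INSTANCE_ID/mariadb" \
  --secret-string "{\"username\":\"dbadmin\",\"password\":\"$DB_PASSWORD\"}"
```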
In the next section, we’ll cover the backup system integration bonus playbook, which helps protect your data once instances are running.
Automated Backups Integration
Let’s face it—nobody thinks about backups until they desperately need one! To save future-you from that panic-inducing moment when something goes wrong, we’ll integrate an automated backup system right into our AMI.
Adding the Backup Submodule
Remember those WordPress backup scripts I created in my previous tutorial? If you haven’t seen them, go check out that post! I’ve published the scripts and an Ansible playbook for their setup in the bugfloyd/ols-wp-backup GitHub repository. We’ll be using those same scripts to provide bulletproof backup functionality for our AMI. Instead of reinventing the wheel, we’ll pull them directly into our project using Git submodules.

Since we’re using git submodules to pull in these backup scripts, we first need to initialize our project as a git repository (if you haven’t already):
cd aws-ols-mariadb-ami
git init
Now we can add the backup repository as a submodule:
git submodule add git@github.com:bugfloyd/ols-wp-backup.git backup
This will create a .gitmodules file in your project that looks like this:
[submodule "backup"]
  path = backup
  url = git@github.com:bugfloyd/ols-wp-backup.git
If you’re cloning this project from a repository later, you’ll need to initialize and update the submodule:
git submodule init
git submodule update
This pulls in the entire backup script repository, complete with its own Ansible playbook that we can use directly in our AMI build process.
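If you are starting from my repository instead, a single clone can pull the submodule in as well (assuming SSH access to GitHub):

```shell
# Clone the project together with the backup submodule
git clone --recurse-submodules git@github.com:bugfloyd/aws-ols-mariadb-ami.git

# Equivalent for an already-cloned working copy
git submodule update --init --recursive
```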
Configuring S3 Backups
Now that we have the backup scripts as part of our project, we need to configure them properly. If you look back at our main playbook.yml, you’ll see this section toward the end:
- name: Setup Backups
  import_playbook: backup/playbook.yml
  vars:
    s3_backup_bucket: "{{ s3_backup_bucket }}"
    s3_backup_dir: "{{ s3_backup_dir }}"
    aws_region_backup: "{{ aws_region_backup }}"
This imports the playbook from our submodule and passes three critical variables:
- s3_backup_bucket: The S3 bucket where backups will be stored
- s3_backup_dir: The directory path within the bucket (defaults to “ec2-backups/ols”)
- aws_region_backup: A secondary AWS region for storing backups (for disaster recovery)
These variables come all the way from our Packer configuration, making our backup setup flexible and reusable across different environments.
The backup playbook handles several important tasks:
- Installing required dependencies (AWS CLI)
- Setting up the backup scripts in the correct location
- Creating the backup configuration file
- Configuring automated backup schedules via cron jobs
Important Note on AWS IAM Permissions
For the backup system to work properly, you must ensure two things:
- The S3 bucket specified in your configuration must exist before running any backups
- The EC2 instance must have IAM permissions to access this bucket
The Ansible playbook doesn’t handle creating the S3 bucket or configuring IAM roles – these need to be set up separately in your AWS environment. The best practice is to create a dedicated IAM role for your web server instances with the required permissions and attach it when launching instances.
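As a rough sketch, an IAM policy for that role could look like the following; the bucket name is a placeholder, and the exact set of actions should be matched to what the backup scripts actually call:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-backup-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::your-backup-bucket-name/*"
    }
  ]
}
```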
For a complete breakdown of how the backup scripts work, including detailed flow and explanation of each component, check out my previous post on building an S3-powered backup solution for WordPress and also its GitHub repository.
Post Setup
After installing and configuring all our components, we need a cleanup phase to ensure our AMI is ready for production use. This is handled by the playbook_post.yml file, which takes care of a final housekeeping task:
---
- name: Post-Setup
  hosts: all
  become: yes
  tasks:
    - name: Clean package cache
      shell: apt-get clean && rm -rf /var/lib/apt/lists/*
The single task in this playbook executes a shell command to clean up the APT package cache and remove downloaded package files.
In the next section, we’ll wrap everything up and show you how to build and use your new, fully configured AMI.
Building the AMI
Now that we’ve set up all our configuration files and playbooks, it’s time to put everything together and build our custom AMI. This is where all our hard work pays off!
Running Packer
Before running Packer, we need to create a variables file to customize our build. Create a file named variables.pkrvars.hcl with your specific values:
aws_region_main = "eu-west-1"
aws_region_backup = "eu-central-1"
s3_backup_bucket = "your-backup-bucket-name"
Feel free to adjust these values to match your AWS environment. The aws_region_main is where your AMI will be built and stored, while aws_region_backup specifies where your backups will go (ideally a different region for disaster recovery).
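These values correspond to variable declarations in ami.pkr.hcl; if you are adapting the template, they would be declared roughly like this (the exact blocks in the repository may differ):

```hcl
variable "aws_region_main" {
  type        = string
  description = "Region where the AMI is built and registered"
}

variable "aws_region_backup" {
  type        = string
  description = "Secondary region used by the backup scripts"
}

variable "s3_backup_bucket" {
  type        = string
  description = "S3 bucket that receives the automated backups"
}
```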
Now, let’s initialize Packer to download the required plugins:
packer init ami.pkr.hcl
Next, validate your configuration to make sure everything is set up correctly:
packer validate -var-file=variables.pkrvars.hcl ami.pkr.hcl
If that looks good, it’s time to build the AMI:
packer build -var-file=variables.pkrvars.hcl ami.pkr.hcl
Now sit back and watch the magic happen! Packer will:
- Launch a temporary EC2 instance
- Connect to it via SSH
- Run all our Ansible playbooks in sequence
- Create an AMI from the configured instance
- Terminate the temporary instance
- Register the AMI in your AWS account and in the requested region
This process typically takes 10-15 minutes. It’s a good time to grab a coffee… or maybe do a few jumping jacks to celebrate how much manual work you’re avoiding!
Verifying the AMI
Once Packer completes, you’ll see a message with your new AMI ID. It will look something like ami-0abc123def456789. Let’s verify the AMI before using it.
Head over to the AWS Management Console and navigate to EC2 > AMIs (within the region that you have used). You should see your newly created AMI named openlitespeed-mariadb-ami-[timestamp]. Check that it has the right name, description, and tags as specified in your Packer configuration.
You can also use the AWS CLI to verify your AMI:
aws ec2 describe-images --image-ids ami-0abc123def456789 --region <YOUR_REGION>
Replace ami-0abc123def456789 with your actual AMI ID.
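If you build often and don’t want to copy IDs around, you can also let the CLI find the newest AMI matching the Packer name pattern (the query uses standard JMESPath; adjust the name filter if you changed the naming scheme):

```shell
# Print the ID of the most recently created AMI owned by this account
# whose name matches the Packer naming scheme
aws ec2 describe-images \
  --owners self \
  --filters "Name=name,Values=openlitespeed-mariadb-ami-*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text \
  --region <YOUR_REGION>
```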
Launching an Instance Using the AMI
The moment of truth! Let’s launch an instance using our custom AMI to make sure everything works correctly.
From the AWS Console:
- Navigate to EC2 > Instances > Launch Instance
- Click “My AMIs” and select your custom AMI
- Choose an instance type (t3.small or larger recommended)
- Configure instance details:
  - Set up a VPC and subnet
  - Assign a public IP if needed (to provide direct OLS and SSH access without configuring NAT and routing tables)
  - Attach an IAM role with S3 access for backups (critical for the backup system to work!)
  - Configure storage (default is usually fine, but consider your needs if hosting multiple sites)
- Configure security groups:
  - Allow SSH (port 22) – consider restricting to your IP address only
  - Allow default example listener HTTP (8088) – you can remove this rule later
  - Allow HTTP (port 80) – allow from 0.0.0.0/0 if not using a CDN (e.g. AWS CloudFront)
  - Allow HTTPS (port 443) – allow from 0.0.0.0/0 if not using a CDN (e.g. AWS CloudFront)
  - Allow OpenLiteSpeed Admin (port 7080) – ideally restrict to your IP address
- Launch the instance with your SSH key pair
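The same launch can be scripted with the AWS CLI; every ID and name below is a placeholder for your own resources, and the instance profile is the IAM role mentioned above:

```shell
aws ec2 run-instances \
  --image-id ami-0abc123def456789 \
  --instance-type t3.small \
  --key-name my-key-pair \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --associate-public-ip-address \
  --iam-instance-profile Name=my-backup-instance-profile \
  --region <YOUR_REGION>
```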
Verify the Deployment
After the instance launches (usually takes 1-2 minutes), connect to it via SSH:
ssh ubuntu@your-instance-public-ip
Once connected, check that everything is set up correctly:
Verify credentials are generated:
ls -l ~/mariadb_credentials.txt ~/openlitespeed_credentials.txt
Verify services are running:
systemctl status lsws mariadb
Check that the firewall is enabled:
sudo ufw status
Verify the backup system:
ls -l /opt/ols-backup/
sudo grep "Daily backup" /var/spool/cron/crontabs/root
cat /etc/backup-config.conf
If everything checks out, congratulations! You’ve successfully created a custom AMI with OpenLiteSpeed, MariaDB, PHP, and an automated backup system!
You can now access:
- Example website with OLS documents: http://instance-public-ip:8088
- OpenLiteSpeed Admin: https://instance-public-ip:7080 (using credentials from ~/openlitespeed_credentials.txt)
Setting Up phpMyAdmin
To access phpMyAdmin, you’ll need to create a virtual host configuration in OpenLiteSpeed. For testing purposes, we can add it to the default example vhost:
- Open OpenLiteSpeed Admin -> Virtual Hosts
- Click on the view icon in the “Actions” column for the “Example” vhost
- Go to the “Context” tab and click on the plus (+) icon to create a new one
- Select “Static” for the Type and click “Next”
- Enter these details:
  - URI: /phpmyadmin/ (note that the trailing slash is important)
  - Location: /var/www/phpmyadmin
  - Accessible: Yes
  - Index Files: index.php
- Keep the other default/empty values and click on “Save”
- Do a “Graceful Restart” by clicking on the green turning arrow icon on the top right
- Navigate to http://instance-public-ip:8088/phpmyadmin/ and log in using the credentials from ~/mariadb_credentials.txt
Remember to store these credentials securely and delete the plain text files from the server once you’ve recorded them somewhere safe!
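Once the passwords are safely in your password manager, you can overwrite and remove both files in one go (shred is part of coreutils on Ubuntu; the loop simply skips files that are already gone):

```shell
# Securely delete the generated credential files after recording them
for f in "$HOME/mariadb_credentials.txt" "$HOME/openlitespeed_credentials.txt"; do
  if [ -f "$f" ]; then
    shred -u "$f"
  fi
done
```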
Installing & Configuring WordPress
Now that we have a running instance using our custom AMI, it’s time to set up a WordPress website on it. When testing the OLS official AMI from AWS Marketplace, I noticed it creates a new WordPress instance on the first SSH login by running an interactive script. To be honest, I don’t find this very sustainable or consistent since the created vhost and file locations don’t follow the recommended approach for adding websites. For our custom AMI, I decided not to include any default WordPress installation, so we can create sites after launching the instance using the recommended approach.
The good news is that our built AMI is compatible with OLS scripts, so we can use the official script approach to create vhosts and WordPress instances.
DNS Configuration
First, create an A record in your domain’s DNS manager pointing to your instance’s IP address (note that in production, you might need a load balancer or CDN). After setting the DNS record, verify that the domain is resolving correctly by testing it on a service like DNS Checker.
Setting DNS records is required if you want to generate SSL certificates on the instance. Otherwise, you can evaluate the setup and test it using just the instance IP for now.
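Besides web-based checkers, you can query the record from your own machine (the domain below is a placeholder for yours):

```shell
# Should print the instance's public IP once the record has propagated
dig +short A test.example.com

# Query a public resolver directly to sidestep local DNS caching
dig +short A test.example.com @1.1.1.1
```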
Creating OLS Listeners
Before running the OLS setup script, we need to create listeners in our OpenLiteSpeed instance:
- Open OpenLiteSpeed Admin -> Listeners
- Click on the plus (+) icon to create a new listener and provide the details below:
  - Listener Name: HTTP Listener
  - IP Address: Any IPv4
  - Port: 80
  - Secure: No
- Save the listener configuration.
- If you plan to use SSL certificates, add another listener for HTTPS connections:
  - Listener Name: HTTPS Listener
  - IP Address: Any IPv4
  - Port: 443
  - Secure: Yes
- Save the HTTPS listener. No need to provide certificates yet—we’ll do this later.
- Do a “Graceful Restart” by clicking on the green turning arrow icon on the top right.
Running the WordPress Setup Script
SSH into the server and download the OLS script:
wget https://raw.githubusercontent.com/litespeedtech/ls-cloud-image/master/Setup/vhsetup.sh
chmod +x vhsetup.sh
When building our AMI, we configured the DB root user to use Unix socket authentication. Unfortunately, the OLS script doesn’t support this and expects native password authentication. To work around this limitation, we’ll temporarily enable password authentication:
DB_ROOT_PASSWORD="my_secure_password"
sudo mariadb -u root -e "ALTER USER 'root'@'localhost' IDENTIFIED VIA unix_socket OR mysql_native_password USING PASSWORD('$DB_ROOT_PASSWORD');"
echo "root_mysql_pass=\"$DB_ROOT_PASSWORD\"" > ~/.db_password
chmod 600 ~/.db_password
The above commands enable mysql_native_password alongside unix_socket for the root user and store the password in the ~/.db_password file, as the OLS script expects. Be sure to replace my_secure_password with an actual secure password.
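Rather than inventing a password by hand, you can generate one; this variant reads from /dev/urandom and keeps only alphanumeric characters:

```shell
# Generate a random 24-character alphanumeric password for the root user
DB_ROOT_PASSWORD=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
echo "$DB_ROOT_PASSWORD"
```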
I’ve also noticed that vhsetup.sh uses WP-CLI to set up WordPress, which downloads and configures the WordPress files using PHP under the hood. With the default PHP configuration, WP-CLI might hit the PHP memory limit and throw a fatal error. To overcome this without modifying the original OLS script, we can create a temporary php.ini file and force the script to use it for a higher memory limit:
echo "memory_limit = 512M" > /tmp/php-memory.ini
Now we can run the script, providing the custom php.ini override in the PHPRC environment variable:
sudo PHPRC=/tmp/php-memory.ini bash vhsetup.sh
Provide the requested information and proceed. Here are my inputs and the script output for the test website that I set up on the test.bugfloyd.com subdomain. My commands and inputs are in bold:
ubuntu@ip-10-0-1-91:~$ sudo PHPRC=/tmp/php-memory.ini bash vhsetup.sh
Current platform is UBUNTU24 ubuntu noble.
Please enter your domain: e.g. www.domain.com or sub.domain.com
Your domain: test.bugfloyd.com
The domain you put is: test.bugfloyd.com
Please verify it is correct. [y/N]: y
Vhost created success!
Do you wish to issue a Let's encrypt certificate for this domain? [y/N]: y
test.bugfloyd.com check PASS
Please enter your E-mail: info@bugfloyd.com
The E-mail you entered is: info@bugfloyd.com
Please verify it is correct. [y/N]: y
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for test.bugfloyd.com
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/test.bugfloyd.com/fullchain.pem
Key is saved at: /etc/letsencrypt/live/test.bugfloyd.com/privkey.pem
This certificate expires on 2025-06-14.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.
certificate has been successfully installed...
Web Server Restart hook already set!
Please choose whether to install WordPress or ClassicPress
Install WordPress? [y/N]: y
The Application you input is WordPress. [y/N]: y
Setting WordPress
Install litespeed-cache.zip
ed exist
Finish WordPress
WP downloaded, please access your domain to complete the setup.
Do you wish to add a force https redirection rule? [y/N]: y
Force HTTPS rules added success!
Setup finished!
If you see deprecation warnings about using the mysql command, don’t worry—this is because MariaDB has deprecated the original mysql command, but the OLS script still uses it. Everything should still work correctly.
Configuring SSL for the Listener
Now let’s configure the SSL certificate on the HTTPS listener:
- Open OpenLiteSpeed Admin -> Listeners
- Open HTTPS Listener and go to the SSL tab. Edit the “SSL Private Key & Certificate” section and provide the certificate paths from the script output:
  - Private Key File: /etc/letsencrypt/live/your-domain.com/privkey.pem
  - Certificate File: /etc/letsencrypt/live/your-domain.com/fullchain.pem
  - Chained Certificate: Yes
- Save the HTTPS listener.
- Do a graceful restart.
Don’t forget to clean up by removing the temporary password authentication from the DB root user and deleting the password file:
sudo mariadb -e "ALTER USER 'root'@'localhost' IDENTIFIED VIA unix_socket;"
rm /tmp/php-memory.ini ~/.db_password
That’s it! Now if you open the website in your browser, you should see the WordPress installation wizard served over HTTPS with the database credentials and table name pre-configured! Yaaay!
Appendix: How to Use the Backup System
Once an instance is launched from our AMI, the backup system is already set up and ready to go. Here’s how to use it:
Automatic Backups
By default, the system will:
- Run a daily backup at 3 AM (configured via cron)
- Automatically detect virtual hosts from OpenLiteSpeed configuration
- Back up all databases (excluding system databases)
- Back up OpenLiteSpeed configuration files
- Back up system package list and crontab for easier recovery
- Upload everything to your configured S3 bucket
Manual Backups
If you need to run a backup immediately, simply execute:
sudo /opt/ols-backup/backup.sh
This will run through the same backup process as the automatic backup, but on demand.
Restoring from Backup
The restoration process is surgical and selective – you can restore specific sites and databases without affecting others:
sudo /opt/ols-backup/restore.sh example.com example_db 2025-03-16_03-00-00
Where:
- example.com is the website domain to restore
- example_db is the database name to restore
- 2025-03-16_03-00-00 is the backup timestamp
The restore script is smart enough to avoid overwriting existing content. It will only restore websites and databases that don’t already exist, making it safe to use even on active servers.
Taking Your Custom AMI to the Next Level
With your new custom AMI, you can:
- Use it as a base for launching multiple identical web servers
- Further customize it for specific applications and pre-install other packages and software that you need like Redis or Memcached
- Adapt it to different environments by modifying configuration variables
The beauty of this approach is that you can modify the Ansible playbooks, rebuild the AMI, and deploy updated versions while keeping the same basic architecture. The complete solution with all scripts and configurations used in this tutorial is available on my GitHub repository: aws-ols-mariadb-ami.
So there you have it—your very own custom web server AMI, built exactly the way you want it, with all the optimizations and security configurations baked right in. No more repetitive manual setup, no more forgetting critical steps, and no more paying for marketplace AMIs when you can build your own better version!
Have you built custom AMIs before? What other configurations would you add to this setup? Let me know in the comments!
Leave a Reply