This is an article I wrote for LinkedIn, published here for posterity.
Throughout the many jobs I’ve had developing software, we’ve talked a lot about technical debt. Usually this is in reference to legacy systems that are so very vital to the business that they can’t be replaced, or architectural choices that only made sense at the time. Recently, I’ve been struggling with another type of tech debt altogether: the ghosts of past hobby projects. I’m talking about things I built some 15 years ago that I still use regularly. Seeing that old code still doing what it’s doing is a source of comfort and concern in equal measure.
Sometimes, it’s easy. Just recently, I needed an app that I had written in 2016 and hadn’t touched since. Thankfully, it was .NET, and if there’s one thing you can count on our friends at Microsoft for, it’s backwards compatibility. When I opened up the Jurassic-era source code, Visual Studio cheerfully offered to upgrade it to the latest framework and did so without a hitch, the app whizzing back to life. An altogether different experience from the time I tried to modernize an app originally built with AngularDart. What was I thinking?
I used to build a lot of sites using WordPress, because the low cost and ubiquity of LAMP (Linux, Apache, MySQL, PHP) hosting made it an attractive choice. Even today, it’s still a very popular way to get content on the web. Unfortunately, WordPress has a poor – and mostly undeserved – reputation when it comes to security. Major flaws in WordPress itself are historically rare, and unless you’re running shady plugins, keeping up to date with the latest version has usually kept you out of harm’s way.
However, if the interactive element of running a blog-type site isn’t necessary – as it rarely is nowadays, with discussions having moved to social media – hosting a WordPress blog as a static site on AWS can be more cost-effective than even traditional LAMP hosting, and a very effective way to alleviate security concerns. So, this last weekend I decided to convert a couple of my old WordPress instances.
Finding a good step-by-step guide that covered the whole process was difficult, so I wrote one myself. Please have a look if you’re interested.
As for that venerable AngularDart app – I’m very slowly rewriting it in Flutter, and the backend for it is eventually going to get a microservice makeover. Until then, like all the other legacy code out there, it’s continuing to do what it was designed to do, unbothered.
Do you have a piece of old software that you wrote yourself, and still depend on? Or is that just a me thing?
Migrating a WordPress blog to Amazon CloudFront
This guide is intended to take you through the process of migrating an existing live WordPress blog to a static website hosted on Amazon CloudFront. Static hosting is typically more cost-effective than traditional hosting, and replacing dynamically executed PHP code with static HTML pages also considerably improves your security posture. You will no longer have to worry about keeping WordPress, or any other component of the stack, up to date with security patches. The disadvantage is that any interactive elements of your blog – such as comments, pingbacks, and many plugins – are lost.
Full disclosure: I work at Amazon Web Services, but this guide is based purely on my personal opinions and experiences. Where specific products such as plug-ins are mentioned, they are merely the ones I personally decided to use, and not official endorsements or recommendations by AWS.
Prerequisites
- You have an AWS account.
- You understand that following this guide will incur costs on your AWS account.
- You have a live WordPress site, with admin rights.
- You have a domain name for your site, where you control the DNS records.
Creating a static copy of your WordPress site
- On your live site, install and activate the WP Migrate Lite plug-in.
- Use this to create a backup of your site. You will typically only need to include the database, uploads, and themes. If you have made extensive customizations, you may need to select additional files.
- After making the backup, deactivate the plug-in on the live site.
- Install Local on your local machine and run it.
- Import the backup into Local, creating a new site. I had issues using nginx as the web server for my sites, even though it’s the preferred choice; I had better luck with Apache.
- Trust the SSL certificate generated by Local, otherwise you won’t be able to open the WordPress admin console.
- Click “Open site” and browse through the site, observing that everything is working as it did when the site was live. If not, troubleshoot.
- Open the admin console by clicking “WP Admin”. You will probably want to make some changes. For example, you should remove WordPress’s built-in search, disable comments, and so on. No worries; if you make a mistake, you can revert changes by deleting the site from Local and re-importing the backup.
- Install and activate the Simply Static plug-in.
- Configure Simply Static according to your needs. I mainly had to adjust the crawlers in order to get all the pages I needed generated.
- In the Deploy menu, select a local directory as your deployment destination. If you bought Simply Static Pro, you can directly select an S3 bucket here, but I will assume you’re manually uploading to S3.
- Generate the static export. This can take a fair bit of time, and the Activity Log does not update immediately. Simply Static generates a detailed debug log (in the wp-content/uploads/simply-static folder) that you can check to see what’s going on.
You now have a static export of your site. You can repeat the export with different settings later, if need be.
Setting up static website hosting in AWS
- In the AWS console, ensure you have selected your preferred region. Most services we’ll use are global, but not all. The preferred region will typically be the one closest to you.
- If you didn’t do so already, go to Billing and Cost Management and create a budget. While the costs of this solution should be modest, you always want to remain in control of your cloud spend.
- Go to S3 and create a general-purpose bucket. The bucket name should, by convention, be the same as your website domain name. You do not need to enable public access, since we’ll be using CloudFront.
- Go to Route 53 and create a public hosted zone for your domain name. You can use your existing registrar if you don’t want to transfer the domain to Route 53, as long as your registrar will let you change the DNS records.
- Go to Certificate Manager and change the region to us-east-1 (N. Virginia). You must use us-east-1 for the certificate used by CloudFront, which is what we’re creating now.
- Create a public certificate request for your domain name and any other names you want to use (like www.*).
- To validate the request, you will have to create CNAME records. If your domain is already hosted in Route 53, the AWS console can create them for you; otherwise, you’ll have to copy-paste. Note that if you change your domain’s NS records to point to Route 53 before the static site is up and running, the site might temporarily become unavailable. For that reason, I created the validation CNAME records at my former DNS provider before changing the NS records over to AWS.
- Wait for the certificate to be validated. This typically takes less than a minute.
- Go to CloudFront and create a distribution. Enter your domain name for both the distribution name and description (some list views only show the description). Also enter and check your domain name as the Route 53 managed domain for the distribution. Click next.
- Select the Amazon S3 origin type and select your S3 bucket by clicking Browse S3. Click next.
- I recommend enabling Web Application Firewall, even though your site will be static. More thoughts on this below. Click next.
- Select the certificate you created for your domain. Click next, review, and click Create distribution. CloudFront will create the bucket policy needed to allow it to access your S3 bucket.
- Now, open the distribution and click Edit. Add any alternate domain names (again, like www.*) that your site uses. Enter index.html for the default root object. If you do not need global distribution, change the price class. Save changes.
- Click the shortcut to route the domains to CloudFront; this will create the necessary records in Route 53.
- Make note of the CloudFront distribution ID and the distribution domain name (*.cloudfront.net).
- Go to Functions in CloudFront. Create a function to rewrite paths to index.html, as you need this behavior for WordPress – you can simply copy the official sample. Publish the function, and associate it with your distribution.
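For reference, the rewrite function is only a few lines. The following is a sketch modeled on AWS’s published sample (the official version may differ slightly in its handling of edge cases):

```javascript
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    // A URI ending in a slash points at a folder: append index.html.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // A URI without a file extension is a page permalink: append /index.html.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}
```

Associate it with your distribution as a viewer request function; requests for actual files (anything with a file extension) pass through unchanged.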
Your site is now set up, but the bucket is still empty, and if your domain isn’t already hosted by Route 53, the domain is not resolving to your CloudFront distribution. This is fine. Next, we’ll import the files.
Syncing your local site to the S3 bucket
- In the AWS console, go to IAM and create a user that you’ll be using to sync the files to S3. It’s a good practice to not use your root user, or an IAM user with broad administrative rights, for this.
- Create permissions policies for the user that allow list/get/put/delete actions on your bucket and its contents, and invalidate rights on your CloudFront distributions. I have provided policy samples below that you can edit for your use.
- Create an access key for the IAM user, for use with the CLI. Save the Access key and the Secret access key for later, or download the .csv. To follow best practices for security, you should rotate this key regularly.
- Install the AWS CLI on your local machine.
- Open a terminal in the folder where you deployed the static export of your local WordPress site.
- Create a profile based on your IAM user and the access key you created by running aws configure --profile <profile-name> (docs). The profile name is the name of your IAM user. The CLI will prompt you for values. Enter the region where you created the S3 bucket.
- Now sync the files to S3. Always start with a dry run to make sure you got everything right: aws s3 sync . s3://<bucket-name> --delete --profile <profile-name> --dryrun (docs)
- If you encounter errors, troubleshoot. If not, run the command without --dryrun and wait for the site to upload.
- Invalidate the CloudFront distribution: aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/*" --profile <profile-name> (docs)
- Now, open the distribution domain name in your favorite browser and verify that the site is displaying correctly.
Any time you update the site, you will need to re-run the sync and invalidation commands – so putting these in a script can be helpful. From now on, you’ll update the site by running it in Local, exporting it using Simply Static, and then syncing it to S3.
Invalidating the CloudFront distribution on every update isn’t strictly necessary; it simply makes your changes appear faster.
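As a sketch, such a script could look like the following – the bucket name, distribution ID, and profile name below are placeholders that you would replace with your own values:

```shell
#!/usr/bin/env bash
# Deploy script sketch: sync the static export to S3 and invalidate
# the CloudFront cache. The three values below are placeholders.
set -eu

BUCKET="example.com"
DISTRIBUTION_ID="E123EXAMPLE"
PROFILE="blog-deploy"

deploy() {
  # Upload the static export in the current directory, deleting any
  # objects in the bucket that are no longer part of the export.
  aws s3 sync . "s3://${BUCKET}" --delete --profile "${PROFILE}"

  # Invalidate everything so the changes appear immediately.
  aws cloudfront create-invalidation \
    --distribution-id "${DISTRIBUTION_ID}" \
    --paths "/*" \
    --profile "${PROFILE}"
}

# Require an explicit --run flag so the script does nothing by accident.
if [ "${1:-}" = "--run" ]; then
  deploy
else
  echo "Pass --run to sync s3://${BUCKET} and invalidate ${DISTRIBUTION_ID}."
fi
```

Run it with --run from the folder containing the Simply Static export.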
Updating your former DNS provider’s records
- In the AWS console, go to Route 53, open your domain in Hosted zones, and make note of the NS records.
- Update the NS records with your DNS provider/domain registrar to be the same as those in Route 53. Typically, your DNS provider or domain registrar will remove all other records at this point, so if you made any customizations, take note of them first so you can re-apply them in Route 53.
- If your domain used DNSSEC, you will need to take additional steps to enable signing in Route 53, otherwise the new records will be rejected.
- Once the DNS changes propagate, you should be able to access your new static site using your domain name. This can take between a few minutes and several hours, depending on the TTL of the former records.
Everything should be happily working now, so consider the following supplementary reading; there are nuances to how you set this up that can have an impact on the cost and performance of the solution.
Notes on S3 Static Hosting vs. CloudFront
When researching this, many of the guides I found suggested hosting the website directly out of the S3 bucket, not using CloudFront. While this is certainly a possibility, the main issues I had with this are that the bucket must have public access, HTTPS isn’t supported(!), and Web Application Firewall (WAF) isn’t available. However, because you’re using fewer services, it’s going to cost you even less. While HTTPS is arguably not technically necessary for static content, it is practically a necessity these days for a public-facing web site.
Another important note is that traffic between S3 and CloudFront is not charged, so by using CloudFront, you get global distribution from a single S3 bucket sitting in a single region. You can even change the object storage class of the items in your bucket to a lower cost option, if you wish.
Notes on Web Application Firewall
You might wonder why WAF is needed for a purely static site. The short answer is: it’s not! But, it’s a cost-effective way to protect against the so-called Denial Of Wallet Attack, where malicious crawlers are looking around your site for non-existent vulnerabilities, wasting bandwidth. There is a fixed charge for enabling WAF and if your site sees very limited traffic, it might look like a large portion of your overall cost. However, remember that CloudFront does not charge you for requests blocked by WAF, so instead of the unknown variable cost of malicious traffic, you have a known fixed cost for the Web ACLs and rules in your WAF, and a very small cost per request that is controlled by WAF.
Sample IAM policies for a CLI user
This policy will allow your user to run aws s3 sync:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowS3Sync",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
"s3:DeleteObject",
"s3:GetBucketLocation",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::<bucket-name>",
"arn:aws:s3:::<bucket-name>/*"
]
}
]
}
This policy will allow your user to run aws cloudfront create-invalidation:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCloudfrontInvalidate",
"Effect": "Allow",
"Action": "cloudfront:CreateInvalidation",
"Resource": "arn:aws:cloudfront::<account-id>:distribution/<distribution-id>"
}
]
}


