Many years ago I created my own webpage. It all started with pure HTML, evolved into a WordPress site and finally became a Pelican-based setup. It has been served by many different hosting providers, but for a few years now it has been running on S3 storage and served through CloudFront all over the world.

It's a very fast setup, and once the site has been deployed and every little service has been configured, the only thing I need to do is write content in Markdown, without having to think about how it will be deployed or how it will look.

In this post I'll try to describe how I configured every service, connected them to each other and automated everything through Travis CI.

pelican

It all starts by initializing your Pelican framework following the quickstart guide. Before you proceed you can configure and write some initial content for your webpage locally and see what it will look like without publishing it to the world.
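
For reference, a minimal quickstart session could look something like this; the package names and commands are the standard Pelican ones, adjust them to your own Python setup:

$ pip install pelican markdown
$ pelican-quickstart                # answer the interactive questions
$ pelican content                   # build the site from the content directory
$ pelican --listen                  # preview it on http://localhost:8000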

You can pick a theme of your choice, add plugins for various use cases or even import an existing webpage.

Once you have something you're happy with, we can proceed to publish it to the world.

github

Versioning is very important in my opinion: it lets you easily track changes and collaborate with a team on one web project, and other people can easily propose changes to your website through pull requests. Another reason for using a GitHub repository is that it lets us trigger automation which deploys our project to different hosting providers.

A nice side effect is that you have a backup in the "cloud".

For the different Pelican plugins and themes I use git submodules, so I can easily update them with upstream changes.
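
As an illustration, adding the upstream plugin and theme repositories as submodules and updating them later could look like this (the repository URLs and local paths are just examples):

$ git submodule add https://github.com/getpelican/pelican-plugins.git plugins
$ git submodule add https://github.com/getpelican/pelican-themes.git themes
$ git submodule update --init --recursive   # after a fresh clone
$ git submodule update --remote             # pull in upstream changes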

AWS

As I already mentioned, I opted for AWS to host my blog and some other websites I manage. It's easy to deploy to, it's fast and it's rather cheap compared to other providers: I pay about 30 EUR a year for everything, including domain registration, traffic all over the world and storage.

IAM

I learned that using a dedicated user for every single use case isn't a bad idea. So for this setup we need a dedicated user with programmatic access, which has full access to S3 and to CloudFront, but only for the distributions we configure. The generated access and secret keys will be used by Travis to upload new content to our S3 bucket and to invalidate the cache. They can be created by following the documentation.
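
If you prefer the CLI over the console, creating such a user could look roughly like the sketch below; the user name blog-deploy, the policy name and the bucket ARN are placeholders, and note that the classic CloudFront actions don't support resource-level restrictions, so that statement uses a wildcard resource:

$ aws iam create-user --user-name blog-deploy
$ aws iam put-user-policy --user-name blog-deploy --policy-name blog-s3-cloudfront \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {"Effect": "Allow", "Action": "s3:*",
         "Resource": ["arn:aws:s3:::BUCKET-NAME", "arn:aws:s3:::BUCKET-NAME/*"]},
        {"Effect": "Allow",
         "Action": ["cloudfront:CreateInvalidation", "cloudfront:GetDistribution", "cloudfront:UpdateDistribution"],
         "Resource": "*"}
      ]
    }'
$ aws iam create-access-key --user-name blog-deploy   # note down the access and secret key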

The user which will be used to update the blog also needs to be granted the Let's Encrypt (certbot-s3front) policy, so it can renew the certificates later on.

route53

Registering a new domain with, or migrating an existing one to, Amazon's DNS service Route 53 is a very easy way to manage your domain on Amazon as well. In the end it's also convenient to have one bill for everything.

The only thing I struggled with was how to update your nameservers after migrating the domain, when you made an error in them during the migration. In the Route 53 configuration pane this is found under the "Registered domains" tab and not in the hosted zones! It took me some time to figure out the difference between those two.
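
The same fix can be done from the CLI; this is a sketch with placeholder values (the route53domains API only lives in us-east-1):

$ aws route53domains update-domain-nameservers --region us-east-1 \
    --domain-name example.com \
    --nameservers Name=ns-123.awsdns-12.org Name=ns-456.awsdns-45.co.uk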

Also don't forget to enable privacy protection for the different contacts you configured for every registered domain, so your personal data stays hidden.

S3

Amazon's S3 object storage service can be used to serve static files, and therefore a static webpage; we will be using this feature to host our Pelican-based website.

I found a great how-to on Stack Overflow which explains perfectly how you have to create two buckets to redirect between the www and the naked domain, and how to enable https once that's in place.
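
In CLI form that two-bucket setup could look roughly like this, with example.com as a placeholder; the naked-domain bucket serves the content and the www bucket only redirects:

$ aws s3 mb s3://example.com
$ aws s3 website s3://example.com --index-document index.html --error-document error.html
$ aws s3 mb s3://www.example.com
$ aws s3api put-bucket-website --bucket www.example.com \
    --website-configuration '{"RedirectAllRequestsTo": {"HostName": "example.com", "Protocol": "https"}}'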

cloudfront

CloudFront is Amazon's own CDN, serving your website around the world from different edge locations. It's easy to put in front of your static S3-based setup too.

CloudFront caches your site on the different edge locations; by using cache invalidation we can make those locations refresh their cache with the new files when they are pushed through Travis later on.
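
A distribution can be created through the console, or as a quick sketch via the CLI shorthand below; the website endpoint and region are placeholders, and in practice you'll still want to configure your domain aliases and certificate on the distribution afterwards:

$ aws cloudfront create-distribution \
    --origin-domain-name example.com.s3-website-eu-west-1.amazonaws.com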

Letsencrypt

Let's Encrypt is a free, automated and open certificate authority which can be used in combination with S3 and CloudFront through the certbot-s3front tool, to get your site served over https.
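
Both tools live on PyPI, so installing them should be as simple as:

$ pip install certbot certbot-s3front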

I automated this process with a script placed in /usr/local/bin/:

#!/bin/bash
#
# renew certificates for X

# credentials of the dedicated IAM user (fill in your own)
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""

# authenticate through the S3 bucket and install the renewed
# certificate on the CloudFront distribution
certbot --agree-tos -a certbot-s3front:auth \
    --certbot-s3front:auth-s3-bucket BUCKET-NAME \
    --certbot-s3front:auth-s3-region REGION \
    -i certbot-s3front:installer \
    --certbot-s3front:installer-cf-distribution-id CLOUDFRONT-ID \
    -d DOMAIN --renew-by-default --text

# notify me through telegram whether the renewal succeeded or failed
if [[ $? -ne 0 ]]; then
    /usr/bin/ntfy -b telegram send "ERROR | Certificate renewal for DOMAIN has failed on $(date)!"
    exit 1
fi

/usr/bin/ntfy -b telegram send "SUCCESS | Certificates for DOMAIN have been renewed till $(date -d "3 months")"

It sources the IAM credentials you created for this specific use case, uses certbot to renew the certificate on the specified CloudFront distribution and uses ntfy to inform you about the result, in this case through Telegram.

When the renewal fails it will also send a notification; I didn't have this feature in the past, which led to the certificate expiring.

It's triggered by cron:

0 0 1 */2 * /usr/local/bin/letsencrypt-*

It runs at midnight on the first day of every second month. I chose this schedule so that, in case of issues, I have enough time to solve them before the certificate expires (Let's Encrypt certificates are valid for 90 days).

Now that we have everything in place and your website should already be available on AWS, we can automate the whole setup, meaning the only thing you'll have to do afterwards is write content and push it to git.

Travis

Travis is a tool which enables you to easily run automation tasks every time a new commit is pushed to your repository. Once you've created your account and linked it to GitHub, you'll have to enable Travis through their GUI on the repositories you want to monitor for automation.

Once you've done that for your repository, you'll have to configure some credentials and deploy keys. First you'll need the git deploy key; the process is nicely explained by Steve Klabnik. That way you'll have a GitHub token which we'll configure in a bit in our .travis.yml file.

Besides the GitHub token you'll also need to configure the access and secret keys of the previously created AWS user in Travis, so Travis will be able to update your S3 bucket and invalidate your caches on CloudFront. You'll need to configure those through the GUI of Travis on the particular repository, as explained by [renzo](https://renzo.lucioni.xyz/s3-deployment-with-travis/).
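
If you prefer the command line over the GUI, the official travis gem can set the same environment variables; a sketch with placeholder values, where --private keeps them out of the build log:

$ gem install travis
$ travis login --org
$ travis env set AWS_ACCESS_KEY_ID "..." --private
$ travis env set AWS_SECRET_ACCESS_KEY "..." --private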

Now that most of the administrative part is done, a .travis.yml file is needed in your repository, containing the list of steps to be performed by Travis every time a new commit is pushed.
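
Putting it together, a minimal skeleton of that file could look like the sketch below; the Python version is an assumption, and the deploy and cache-invalidation parts are covered in the next sections:

language: python
python:
  - "3.6"
install:
  - pip install -r requirements.txt
script:
  - make clean && make github-travis
  - make clean && make aws-create
# the deploy: and after_deploy: sections follow below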

dependencies

The travis file I'm referring to is divided into four parts, the first one being the installation of the different Python dependencies needed to build and deploy our website.

pip install -r requirements.txt

make

Secondly, we rely on the Makefile to first build and deploy the website to GitHub Pages, and then build it for AWS:

$ make clean
$ make github-travis
$ make clean
$ make aws-create

deployment to AWS

Once the website is built for AWS it can be deployed using the built-in S3 deploy provider of Travis.
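
In .travis.yml the deploy section could look roughly like this; the bucket name, region and output directory are assumptions for this sketch, and the keys reference the environment variables configured earlier:

deploy:
  provider: s3
  access_key_id: $AWS_ACCESS_KEY_ID
  secret_access_key: $AWS_SECRET_ACCESS_KEY
  bucket: example.com
  region: eu-west-1
  local_dir: output
  skip_cleanup: true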

cache invalidation

Last but not least is the invalidation of the cache on the different CloudFront edge locations, so the updated website is also refreshed on those servers.

$ aws configure set preview.cloudfront true
$ aws cloudfront create-invalidation --distribution-id $CLOUDFRONT_DISTRIBUTION_ID --paths "/*"

The result of your build can be followed on the Travis webpage, see for example the build of this page.