In the previous post, I showed how I set up the build of the site. The process is incomplete without deployment. As mentioned earlier, the deployment happens to AWS S3, and here is how:

Deploy the site to AWS S3

Back in July 2017, I wrote a Gist for this task using BitBucket, Wercker (CI) and AWS S3. What follows is a gist of it (no pun intended):

There are three parts to configuring the deployment:

  1. Configuring the S3 bucket
  2. Setting up the keys
  3. Setting up the s3_website gem

Configure the S3 bucket

First comes the creation of the S3 bucket that would host the site contents:

  1. I went to the S3 Management Console. The prerequisite to this is that you have an AWS account.
  2. I chose to create a new bucket.
  3. Entered the name of the bucket, which must exactly match the hostname of the site; I entered iam.ramiyer.me (because I want the site to be live at https://iam.ramiyer.me). S3 website endpoints serve plain HTTP only, so I configured SSL through CloudFront.
  4. Selected the region. Any region would do for me, since I use CloudFront anyway.
  5. I did not want logging or versioning, so I left them unconfigured.
  6. Reviewed the settings and created the bucket.
  7. Once the bucket was created, I opened the bucket and went to the Permissions tab. I created a new bucket policy.
  8. Entered the following JSON in the editor area:
    {
      "Version":"2012-10-17",
      "Statement":
      [
        {
          "Sid":"PublicReadForGetBucketObjects",
          "Effect":"Allow",
          "Principal": "*",
          "Action":
          [
            "s3:GetObject"
          ],
          "Resource":
          [
            "arn:aws:s3:::iam.ramiyer.me",
            "arn:aws:s3:::iam.ramiyer.me/*"
          ]
        }
      ]
    }
    
  9. Saved the configuration.

That completed the setup for one bucket. Configure the other buckets in a similar fashion, based on your requirements; I use two, one for testing and one for production.
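
If you prefer the command line, the console steps above map roughly to the following AWS CLI calls. This is only a sketch, assuming the AWS CLI is installed and configured with an account that can administer S3; bucket-policy.json is a hypothetical file holding the policy JSON from step 8.

# Create the bucket (us-east-1 shown; other regions also need
# --create-bucket-configuration LocationConstraint=<region>).
aws s3api create-bucket --bucket iam.ramiyer.me --region us-east-1

# Attach the public-read policy from step 8.
aws s3api put-bucket-policy --bucket iam.ramiyer.me \
  --policy file://bucket-policy.json

# Enable static website hosting with index.html as the index document.
aws s3 website s3://iam.ramiyer.me/ --index-document index.html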

Set up AWS Access and Secret keys

GitLab CI needs to authenticate itself to push content to the S3 bucket. So, I set up an IAM user for this.

  1. I went to the IAM console.
  2. Added a new user under Users.
  3. Selected the box against Programmatic access, and proceeded to add permissions.
  4. In the permissions screen, I selected Attach existing policies directly, and created a new policy.
  5. Named the policy AmazonS3-JekyllSite-ReadWrite, and entered a description to help me remember what I created the policy for.
  6. In the Policy Document area, entered:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "JekyllBucketWrite",
                "Effect": "Allow",
                "Action": [
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::iam.ramiyer.me/*"
                ]
            },
            {
                "Sid": "JekyllBucketList",
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::iam.ramiyer.me"
                ]
            }
        ]
    }
    
  7. Chose AmazonS3-JekyllSite-ReadWrite, reviewed the configuration and created the user.
  8. I copied the Access key ID and the Secret access key from the next screen.
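
Before wiring these credentials into the CI, it is worth a quick sanity check. A sketch, assuming the AWS CLI and a hypothetical profile named jekyll-deploy:

# Store the new key pair under a named profile; paste the Access key ID
# and Secret access key when prompted.
aws configure --profile jekyll-deploy

# The JekyllBucketList statement should allow this listing to succeed.
aws s3 ls s3://iam.ramiyer.me --profile jekyll-deploy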

Configure CloudFront to deliver the site

To find the endpoint where the site would be live, I opened my bucket, went to the Properties tab, and clicked on Static website hosting. The URL mentioned there is the endpoint, and the site would be live there. However, there are two reasons I chose to go with CloudFront as well.

  1. CloudFront makes content delivery fast throughout the world. Granted, this is hardly an issue for small static sites, which brings us to the next point.
  2. You can have an SSL certificate, issued by Amazon for free, attached to your site to encrypt connections to it.

Let’s first proceed with getting ourselves an SSL certificate. This is optional. I like to encrypt connections to my sites, so I chose to go with it.

  1. I went to the AWS Certificate Manager console and started a certificate request.
  2. I entered the domain (or subdomain) name, which should be both the URL of the site and the name of the bucket: iam.ramiyer.me. The prerequisite here is that you have sufficient authority over your domain, including the ability to receive its admin emails; an email is sent to the admin address of the domain for verification. You could also use the DNS validation method.
  3. I chose the email method and requested the certificate. I logged into the mailbox configured as the admin address of my domain, where I had received an email asking me to approve the certificate request. I approved it after verifying the details.
  4. If you refresh the ACM screen, you should see the certificate that was just issued.
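
You can confirm the issuance from the CLI as well. A sketch; note that CloudFront only accepts ACM certificates issued in the us-east-1 region:

# List the certificates; note the CertificateArn of the issued one
# for the CloudFront step that follows.
aws acm list-certificates --region us-east-1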

It’s now time to set up the CloudFront distribution. Here are the steps I followed:

  1. Go to the CloudFront console.
  2. Click on Create distribution.
  3. Click on Get started in the Web section.
  4. Clicking in the Origin Domain Name field should drop down a few values, the S3 bucket you created for the site being one of them. Select it.
  5. In the Default cache behaviour settings section, select the radio button against Redirect HTTP to HTTPS.
  6. Scroll down to Distribution settings, and enter the name of your bucket (and the subdomain URL of your site), iam.ramiyer.me, in the Alternate domain names field.
  7. Select Custom SSL certificate, and select the certificate you just created, from the dropdown.
  8. Enter index.html in the Default Root Object field.
  9. Go ahead and click on Create Distribution.
  10. Wait. The process of creating the distribution can take anywhere from ten minutes to a few hours.
  11. Once the Status says Deployed, and State says Enabled, the CloudFront distribution should be live. Note down the Domain name value for the distribution; it should be something like qdw3xburi4bfy.cloudfront.net.
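
Rather than refreshing the console, you can also poll the status from the CLI. A sketch; EXAMPLEID stands in for your actual distribution ID:

# Prints "InProgress" while the distribution is being created,
# and "Deployed" once it is live.
aws cloudfront get-distribution --id EXAMPLEID \
  --query 'Distribution.Status' --output text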

Configure DNS routing

Your site should now be accessible at the CloudFront domain name (qdw3xburi4bfy.cloudfront.net) as well. But we want it to be accessible at iam.ramiyer.me, which takes some DNS configuration.

  1. Log into the DNS console of your DNS provider (your domain provider in most cases).
  2. Create a new CNAME record, with iam.ramiyer.me in the Name field, and qdw3xburi4bfy.cloudfront.net in the Points to field.
  3. Save the zone file.
  4. Wait. DNS modifications may take between a few seconds and several hours to propagate, depending on how you have configured the TTL.
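
To check whether the record has propagated, query the DNS directly (a sketch; any lookup tool works):

# Should print the CloudFront domain name once the CNAME has propagated.
dig +short iam.ramiyer.me CNAME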

Set up GitLab CI for S3 deployment

The final part is to set up the CI/CD pipelines to use the necessary S3 configuration. This is done using two components:

  1. The s3_website configuration file
  2. The secret variables for deployment

The s3_website config is a simple YAML file. You specify the S3_ID, which is the access key ID, and the S3_SECRET, which is the secret access key. You also specify the bucket to deploy the files to. In my case, it differs between environments: testiam.ramiyer.me and iam.ramiyer.me. I have configured GitLab CI so that every commit deploys to the test environment, and only commits to the master branch deploy to the prod bucket: the staging job carries no only or except constraints, so it runs for commits on any branch, while the production job is restricted to master.
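
In effect, the two jobs run the same push against different buckets. A sketch of the two deploy commands; the branch restriction itself lives in the .gitlab-ci.yml (an only: master clause, or its absence, on each job):

# Staging job: no only/except constraints, so it runs on every branch
# and deploys to testiam.ramiyer.me (the ERB default shown later).
bundle exec s3_website push

# Production job: restricted to the master branch in the CI config,
# and deploys to iam.ramiyer.me.
ENV=production bundle exec s3_website push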

In order for the s3_website gem to interact with S3, it needs a set of credentials that grant it programmatic access. This is where you use the ID and the secret key you got in the final step of Set up AWS Access and Secret keys. Here is how we configure s3_website. First, we create a YAML file called s3_website.yml. Next, within the YAML, we place the ID and the key. Do not place the actual ID and key; use variables instead. In GitLab, within the settings for your repository, you can configure CI/CD variables. These are visible only to the maintainers of the repository, so contributors to your project and the public cannot see the credentials that s3_website uses.

Let me reiterate: if you expose these values in the code, anyone who comes across them gains programmatic access to your AWS account. If you misconfigure the permissions for these credentials and give the IAM user more access than it needs, you will be in trouble.

Remember not to turn on the Protected flag for these variables. The variable values are hidden by default; turning on the Protected state merely tells GitLab CI to expose the variable(s) only on protected branches, which is not our goal here.

Now that we have the variables out of the way, let us build the config.

s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>

This way, you are telling s3_website to use environment variables, which GitLab CI injects from the repository settings. Next, we tell s3_website which bucket to use. We have two, and we decide which one to use based on the environment we are deploying to.

<%
  if ENV['ENV'] == 'production'
    @s3_bucket = 'iam.ramiyer.me'
  else
    @s3_bucket = 'testiam.ramiyer.me'
  end
%>
s3_bucket: <%= @s3_bucket %>

We assign the bucket based on the environment we specify in:

ENV=production bundle exec s3_website push

Next, you tell s3_website where to pick up the files for upload. This location is where the CI configuration builds the site to.

site: public/

Anything after this is additional configuration. I have the following:

extensionless_mime_type: text/html

exclude_from_upload:
  - Gemfile
  - Gemfile.lock

gzip: true
s3_reduced_redundancy: true

cloudfront_distribution_id: 'Z15A7X9SB765KW'
cloudfront_wildcard_invalidation: true

max_age:
  "assets/main.css": 691200
  "search/search-script.min.js": 691200

I explicitly set the MIME type for extensionless files to text/html. s3_website may ignore this in favour of Apache Tika's own detection, but it serves as a fallback. The site option tells s3_website where to pick the files from: public/. The exclude_from_upload option keeps the Gemfile and Gemfile.lock out of the upload; they are needed only to build the site and have no business sitting in the S3 bucket. I have turned on GZIP compression for the contents, and reduced redundancy storage, because full redundancy is unnecessary for files I can regenerate at any time.

Since I serve the site through a CloudFront distribution, I have specified its ID so that s3_website can invalidate the CloudFront cache after every deployment; this propagates the changes I make to the cached files. I have also set the max age for a couple of CSS and JS files to eight days (691,200 seconds). These files do not change frequently once the design and features are settled. This way, you instruct the browser to cache the files for eight days, improving the performance of the site for returning visitors.
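
A quick way to verify the caching and compression settings once a deployment is through (a sketch using curl):

# The max_age setting should surface as a Cache-Control header.
curl -sI https://iam.ramiyer.me/assets/main.css | grep -i cache-control

# CloudFront should serve a gzipped response for the compressed contents.
curl -sI -H 'Accept-Encoding: gzip' https://iam.ramiyer.me/ | grep -i content-encoding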

Summary

That brings us to a summary of everything done to take the site live.

First, you set up your local machine with Jekyll, so that you can test everything locally. Once that is done, you add the configuration file for the GitLab CI Runner, so that it can build your site according to your configuration, which is based on your requirements. The GitLab CI Runner builds your code in Docker containers. You install the dependencies as part of before_script, run the build, perform whatever tasks you have to, and at the end, deploy the code to the S3 bucket. This can be done for staging and production separately, and you can have any number of environments based on your requirements.

The next part is deployment. Luckily, we have a Ruby gem called s3_website, which can handle the deployment of static sites to S3. We create a configuration for it and save it in the root directory of the site. We set up the S3 bucket, make its contents public, and configure CloudFront. We also create a new IAM user for the Runner, so that it can work with our buckets and the CloudFront distribution. We configure the DNS routing accordingly, and finally, push all of these additions and changes to the repository.

If everything went as planned, your build should kick off, and you will see your files in the S3 bucket. If not, you should see the relevant errors in the Runner log, which you will need to fix before the build and deployment can succeed.