Use AWS Lambda, SNS and Node.js to automatically deploy your static site from GitHub to S3

Rafal Wilinski
8 min read · Jul 3, 2016

Recently, I finished building my personal website rwilinski.me, which I recommend you check out. During the many iterations of that project, one thing kept bothering me:

Deployment

Each time I wanted to show off my progress to someone, I had to go to the S3 Console and manually upload all the files, change permissions, set MIME types and so on.

I started looking for automated solutions like AWS CodeDeploy, but I didn’t find anything satisfying. CodeDeploy works fine, but only for EC2 instances; it assumes your application can’t be served from S3 alone.

Let me know if there are better prebuilt solutions. Even if there are, I think this tutorial is a good way to start working with Lambda.

So I came up with my own solution

Using AWS Lambda, SNS, GitHub Webhooks and the AWS SDK, I’ve created a quite simple process to automate it. Here’s how I’ve done it, from a big-picture perspective:

Quick overview of our deployment workflow

Connecting GitHub to an AWS SNS Topic

  1. First, log in to your AWS Console ► SNS ► Create new SNS Topic and call it something like “github-deploy”. Copy the Topic ARN to your clipboard.
  2. Once you’ve done that, head to your GitHub Repository Settings ► Webhooks & Services ► Add Service (SNS)
  3. Fill in the necessary fields. Paste your Access Keys and the Topic ARN here.

Done.

Now, every time you commit something to master, a message will be sent to the SNS Topic. We also have to handle that message: download the repository files and upload them to S3. I’ve done that using Lambda.
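For reference, the message Lambda will eventually receive is wrapped in an SNS envelope roughly shaped like this (abbreviated; the GitHub webhook payload arrives as a JSON string inside Sns.Message, and the account id here is a placeholder):

{
  "Records": [
    {
      "EventSource": "aws:sns",
      "Sns": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:github-deploy",
        "Message": "{\"repository\": {\"contents_url\": \"...\"}, ...}"
      }
    }
  ]
}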

Setting up AWS Lambda Function

Lambda, like many other AWS services, can assume an IAM Role, and it needs one to work with other services, so we have to create one.

Creating IAM Role

Go to your AWS Console ► Identity and Access Management ► Roles ► Create new Role.

When creating the new role, mind two things:

  1. Select the proper AWS service role type: AWS Lambda
  2. Attach the following policies to that role:

  • AmazonS3FullAccess: we are going to save our files to S3, obviously
  • AmazonSNSReadOnlyAccess: we need to listen for incoming events

After creating it, copy the Role ARN; you’ll need it for the local project setup.
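For the curious: picking the AWS Lambda role type simply attaches a trust policy like this one, which allows the Lambda service to assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}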

Creating Lambda function itself

When it comes to creating Lambdas, you can try writing your function in the browser, but I find that a painful process: you don’t get linting, the editor is limited, and you can’t add extra node_modules.

Lambda Creation Panel in Browser

Instead, I recommend creating a new Node project somewhere on your computer and installing node-lambda. This awesome module can set up, dry-run, package and deploy your Lambdas to AWS without much effort.

Creating the Lambda locally

  1. Fire the following set of commands:
cd ~ && mkdir github-to-s3-deploy && cd github-to-s3-deploy
npm init -f
touch index.js
npm install node-lambda aws-sdk --save-dev
npm install request --save

For those who are not familiar with the command line: this creates a new Node.js project named “github-to-s3-deploy”, creates an empty index.js file and installs all the modules we’ll need.

  2. Open package.json and change the scripts section to this:

"scripts": {
  "start": "node index.js",
  "dry-run": "node-lambda run",
  "deploy": "node-lambda deploy --configFile deploy.env",
  "setup": "node-lambda setup",
  "package": "node-lambda package"
},

This adds a set of useful commands without requiring us to install node-lambda globally.

Let’s finish setting up our project by executing this line:

npm run setup

This will add some configuration files to our directory. Your project should now look something like this:
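Roughly, assuming the node-lambda version current at the time of writing:

github-to-s3-deploy/
├── .env
├── deploy.env
├── context.json
├── event.json
├── index.js
├── package.json
└── node_modules/

(.env, deploy.env, event.json and context.json are the files node-lambda’s setup command generates.)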

Open the .env file, paste your Access Keys (AWS_SESSION_TOKEN can stay empty) and the Role ARN you copied before, and fill in the rest of the parameters as you like. I recommend the following settings:

AWS_ENVIRONMENT=development
AWS_ACCESS_KEY_ID=<your access_key here>
AWS_SECRET_ACCESS_KEY=<your secret here>
AWS_SESSION_TOKEN=
AWS_ROLE_ARN=<your role arn here>
AWS_REGION=us-east-1
AWS_FUNCTION_NAME=github-to-s3-deploy-function
AWS_HANDLER=index.handler
AWS_MEMORY_SIZE=128
AWS_TIMEOUT=10
AWS_DESCRIPTION=Uploads new version of code each time a new commit appears in repo
AWS_RUNTIME=nodejs4.3
AWS_VPC_SUBNETS=
AWS_VPC_SECURITY_GROUPS=
EXCLUDE_GLOBS="event.json"
PACKAGE_DIRECTORY=build

Enough setup. Let’s code

Open index.js and paste the following code:

exports.handler = (event, context, callback) => {
  console.log(event);
};

It will be the base for our further experiments. Run

npm run deploy

and wait until the process completes. Once it’s done, head to AWS Console ► Lambda ► [your function name] ► Event Sources and link the function to the previously created SNS Topic, so Lambda subscribes to the SNS Topic that receives data from GitHub. It should look something like this:

Event source setup
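If you prefer the AWS CLI to clicking around, the equivalent wiring looks roughly like this (both ARNs are placeholders; substitute your own):

# Allow the SNS topic to invoke the function
aws lambda add-permission \
  --function-name github-to-s3-deploy-function \
  --statement-id sns-github-deploy \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn arn:aws:sns:us-east-1:123456789012:github-deploy

# Subscribe the function to the topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:github-deploy \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:github-to-s3-deploy-function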

Now, let’s test the whole setup. Go back to GitHub Webhooks, choose the already-integrated SNS service and hit “Test Service”.

After a few seconds, the Lambda should trigger, and you should be able to see the result in the CloudWatch Management Console.

CloudWatch Management Console

In CloudWatch, select /aws/lambda/[your function name] and then the latest log stream; you should see something like this:

Detailed logs

If you don’t see any logs or there are no log streams yet, try waiting a few seconds. If that doesn’t help, make sure you’ve completed all the previous steps.

Setting up local test environment

From this point you could focus purely on the code, but testing your function would be a very slow process. On each iteration you’d have to deploy the function, wait a moment for it to propagate, re-run “Test Service”, check the logs, and so on.

Fortunately, there’s a much better way to test your Lambdas without deploying them to AWS. According to the node-lambda documentation:

Setup — Initializes the event.json, context.json, .env files, and deploy.env files. event.json is where you mock your event.

So, we can paste an example payload into the event.json file and imitate the real integration. Cool.

Open it and paste a message from your CloudWatch logs, or use this one from mine: https://gist.github.com/RafalWilinski/69f14bdbfba97f8de5d4552879cd9d1a

Next, test it using the following command:

npm run dry-run

You should see something like this:

npm run dry-run

Making it actually work

Right now our function does nothing but log the incoming event. Pretty boring, so let’s extend it into something like this:

'use strict';
const request = require('request');
const AWS = require('aws-sdk');
const fs = require('fs');
const path = 'static/'; // repository directory that holds the site files

exports.handler = (event, context, callback) => {
  const downloadsUrl = JSON.parse(event.Records[0].Sns.Message)
    .repository.contents_url.replace('{+path}', path);

  request({
    uri: downloadsUrl,
    headers: {
      'User-Agent': 'AWS Lambda Function' // Without this, GitHub rejects all requests
    }
  }, (error, response, body) => {
    const files = JSON.parse(body);
    files.forEach((fileObject) => {
      console.log(fileObject);
    });
  });
};

Our function now extracts a URL from the GitHub payload, uses it to list all the files in the repository directory, and then iterates over the downloadable ones.
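For context, GitHub’s contents endpoint returns an array of file objects; the fields we care about look roughly like this (abbreviated, with placeholder user and repo):

[
  {
    "name": "index.html",
    "path": "static/index.html",
    "type": "file",
    "download_url": "https://raw.githubusercontent.com/<user>/<repo>/master/static/index.html"
  }
]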

Logging all the downloadable files is kind of pointless, so let’s replace:

console.log(fileObject); 

with:

putFileToS3(fileObject)
  .then(() => updateProgress(files.length))
  .catch((error) => callback(error, `Error while uploading ${fileObject.name} file to S3`));

As you can see, I’ve introduced two functions here.

  • putFileToS3, which is quite self-explanatory: it downloads a file from GitHub and pipes it to S3.
  • If putFileToS3 succeeds, it fires the updateProgress function. This piece of code counts the uploads that succeeded; once the number of uploaded files equals the number of files in the repository, it fires the callback that reports Lambda completion.

putFileToS3 should look something like this:

const s3 = new AWS.S3();
const bucketName = 'your-bucket-name'; // the S3 bucket that hosts your site

const putFileToS3 = (fileObject) => new Promise((resolve, reject) => {
  // Only /tmp is writable inside a Lambda container
  const localPath = `/tmp/${fileObject.name}`;
  request(fileObject.download_url)
    .pipe(fs.createWriteStream(localPath))
    .on('finish', () => {
      s3.upload({
        Bucket: bucketName,
        Key: fileObject.name,
        Body: fs.createReadStream(localPath),
        ACL: 'public-read',
        ContentType: computeContentType(fileObject.name),
      }, (error, data) => {
        if (error) reject(error);
        else resolve(data);
      });
    });
});

This function takes a single repository file as an argument, downloads it to the local /tmp directory and then re-uploads it to S3. It’s important to make the files public by setting the ACL to “public-read”. Since everything in the Lambda local filesystem is temporary, we don’t have to worry about removing the tmp files.

One thing also worth noticing is `ContentType`. Since we want HTML files to be rendered, JS files to be interpreted and image files simply displayed, we have to tell S3 (and the browser) what to do with each of these files.

To do that, I’ve created a simple switch which guesses the ContentType based on the file extension. It goes like this:

const computeContentType = (filename) => {
  const parts = filename.split('.');
  switch (parts[parts.length - 1]) {
    case 'png':
      return 'image/png';
    case 'html':
      return 'text/html';
    case 'js':
      return 'application/javascript';
    case 'css':
      return 'text/css';
  }
};

Feel free to extend this if you need more file types or use different extensions.
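If you’d rather not maintain that switch yourself, here’s a sketch of an alternative using the mime-types package (add it with npm install mime-types --save):

const mime = require('mime-types');

// lookup() maps a filename to a content type, or returns false for unknown extensions
const computeContentType = (filename) =>
  mime.lookup(filename) || 'application/octet-stream';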

Finally, let’s get to the helper function responsible for reporting task completion: updateProgress.

let processed = 0;

// `callback` and `confirmationTopicArn` come from the enclosing handler scope
const updateProgress = (totalCount) => {
  processed++;
  console.log(`Progress: ${processed} out of ${totalCount}`);
  if (processed === totalCount) {
    if (confirmationTopicArn) confirmUpload(callback);
    else callback(null, 'Done!');
  }
};

A very simple function which counts the successful uploads. When that number equals the number of files to upload, we either notify ourselves by mail (if confirmationTopicArn is supplied) or just end the function by invoking the callback.
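As a design note, you could skip the manual counter entirely and let Promise.all track completion; a minimal sketch, assuming files is the array parsed from the GitHub response:

Promise.all(files.map(putFileToS3))
  .then(() => callback(null, 'Done!'))
  .catch((error) => callback(error, 'Upload failed'));

The counter approach has one advantage, though: it logs progress as each file lands.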

Bonus: Sending mail to us about completion

If, like me, you’re a total control freak, you should probably consider implementing some very basic reporting.

First, let’s create another SNS Topic dedicated to notifications and subscribe your dev email to it:

Create Mail subscription, don’t forget to confirm it
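The CLI equivalent, with a placeholder Topic ARN and address:

aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:github-deploy-notifications \
  --protocol email \
  --notification-endpoint you@example.com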

Once again, copy the Topic ARN and save it into the .env file under some meaningful key, like AWS_CONFIRMATION_SNS_TOPIC_ARN. Save it in the deploy.env file as well; without that, the variable won’t be present in the live Lambda environment.

When it comes to code, this simple function should do the trick:

const sns = new AWS.SNS();

const confirmUpload = (callback) => {
  sns.publish({
    Message: 'Deploy successful!',
    Subject: 'Github to S3 Deployment',
    TopicArn: process.env.AWS_CONFIRMATION_SNS_TOPIC_ARN,
  }, (err, data) => {
    if (err) callback(err, 'Failed to send confirmation');
    else callback(null, 'Done!');
  });
};

And that’s it. If you’ve followed the instructions carefully, you should be able to clone the whole process and automate your own deployments.

If you liked this and would like to learn more, try making it a bit more complicated by including CI somewhere in the middle, or add code linting/formatting as a step before deployment.

In case you’ve missed something, complete source code is available here: https://github.com/RafalWilinski/github-to-s3-lambda-deployer

If you liked this article, chances are you’re also exposed to DynamoDB. If you’re not a big fan of the AWS Console, check out my other project, which aims to solve that problem: https://dynobase.dev


Rafal Wilinski

Founder of https://dynobase.dev, AWS Certified Architect, love minimalism and procedural content generation.