Serverless Framework 1.25 was released on 20 December 2017 and brought changes we had been looking forward to incorporating into our development workflow.
I have classified the changes into two parts: the first covers changes we hoped would give us some quick wins, and the second covers optimizations and features we plan to explore and integrate into our system as part of continuous improvement.
Part One - Changes which we incorporated
S3 Transfer acceleration
The most important change for us, and the one that speeds up our development workflow, is S3 Transfer Acceleration. Our node packages are around 40 MB, and with the typical upload speeds in our development center a deployment used to take around 15-20 minutes. We have 8 Node.js developers, who on average deploy code 5-6 times a day. From that perspective, the team was spending around 16 hours a day (8 developers * 20 minutes * 6 deployments) on deployment alone. The options we were exploring were to test more offline, reduce node dependencies, and somehow speed up the transfer, among others.
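Enabling acceleration is a one-line change at deploy time. A minimal sketch, assuming the `--aws-s3-accelerate` flag that shipped with this release (verify the flag name against your installed version):

```shell
# Opt the service's deployment bucket into S3 Transfer Acceleration
# and upload the package through the accelerated endpoint.
serverless deploy --aws-s3-accelerate
```

Once the bucket has been opted in, subsequent deploys with the flag reuse the accelerated endpoint.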
With the S3 Transfer Acceleration option our uploads now take around 10-12 minutes on average, a saving of around 40% per deploy, or roughly 6 hours per day across the team. It adds up to quite a lot of hours saved by this simple tweak.
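As a quick sanity check, the before/after totals work out as follows, using the figures from the paragraphs above (8 developers, up to 6 deploys a day, ~20 minutes before and ~12 minutes after):

```python
# Back-of-the-envelope deployment-time arithmetic from the figures above.
developers = 8
deploys_per_day = 6      # upper end of 5-6 deploys per day
before_minutes = 20      # typical deploy before acceleration
after_minutes = 12       # typical deploy with transfer acceleration

before_total_h = developers * deploys_per_day * before_minutes / 60
after_total_h = developers * deploys_per_day * after_minutes / 60
saved_h = before_total_h - after_total_h
pct_faster = (before_minutes - after_minutes) / before_minutes * 100

print(before_total_h)  # 16.0 hours/day on deploys before
print(saved_h)         # 6.4 hours/day saved
print(pct_faster)      # 40.0 percent faster per deploy
```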
Lambda version generation
While testing and optimizing our Lambda billing usage, we ran into issues where new function versions were not generated when only the basic settings (memory, timeout) were changed. This release fixed the issue for us.
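For context, these are the settings in question in serverless.yml; the function name and values here are illustrative, not from our codebase. Changing either value alone should now produce a new Lambda version on deploy:

```yaml
# serverless.yml (fragment) - function name is hypothetical
functions:
  processUpload:
    handler: handler.processUpload
    memorySize: 512   # changing only this now bumps the version
    timeout: 30       # likewise for timeout
```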
API Gateway Endpoint Type Configuration
One of the drawbacks of a microservices architecture is that latency between services increases, because multiple APIs are called for a single job to be accomplished. We invoke REST APIs across many services to get one piece of work done. We are always looking at ways of composing and decomposing our architecture so that the functionality is better optimized, and one option came up when AWS announced the availability of regional API Gateway endpoints. With Serverless now supporting the same feature, we see a general reduction in latency in the services to which we have deployed it. We are still instrumenting and measuring how much improvement has been achieved.
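With 1.25 the endpoint type can be set per service in serverless.yml. A minimal sketch, with the region value purely illustrative:

```yaml
# serverless.yml (fragment) - switch API Gateway from the default
# edge-optimized endpoint to a regional one
provider:
  name: aws
  region: us-east-1        # illustrative
  endpointType: regional   # default is edge-optimized
```

Regional endpoints skip the CloudFront edge layer, which is what helps when callers are in the same region as the API.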
Part Two - Changes which we have added to our product roadmap
Our online platform is deployed using a combination of Serverless and CloudFormation scripts. At present, most of the functions are quite stable, and although this is not the best or most optimized way of doing things, it works. So we have added the following to our roadmap and will use them to rework our deployment scripts. I think these are a step in the right direction.