A quick review of a pure functional serverless application deployed to production

Date: June 3, 2020 Last modified: August 9, 2020

/images/car-dashboard-small.jpg

Photo by Dawid Zawiła on Unsplash

This was adapted from a tweet thread on June 3rd, 2020.

Notes:

  • all references to $ (dollars) are to US Dollars (USD)

  • latencies are quoted in milliseconds (ms) unless otherwise noted

  • this serverless application was deployed using the AWS primitives API Gateway (REST) and AWS Lambda


Last week I deployed my second "serverless app" to production. Being fairly new to deploying serverless applications at scale, I was concerned about the costs.

Without any cost optimization, during our first week in production we spent ~$15/day on API Gateway + AWS Lambda, with invocations peaking at ~32k/minute (approximately 533 invocations per second).

Our P99 (99th percentile) latency was ~50ms and we used 50-75% of the memory allocation (128MB maximum). We shipped with the more expensive REST API Gateway due to familiarity, but for this use case we could use the newer (and cheaper) HTTP API Gateway since we don't need caching.
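To make the HTTP API Gateway comparison concrete, here is a rough back-of-envelope sketch in PureScript. The per-million prices and the daily request volume below are my assumptions (ballpark figures from publicly listed AWS pricing at the time), not numbers from our actual bill:

```purescript
module CostSketch where

import Prelude

-- Back-of-envelope only, not billing data. Prices are assumed from publicly
-- listed AWS pricing at the time; the daily volume is a hypothetical round
-- number in the ballpark implied by ~$15/day.
millionsOfRequestsPerDay :: Number
millionsOfRequestsPerDay = 4.0

restApiPerMillion :: Number
restApiPerMillion = 3.50  -- REST API Gateway, USD per million requests (assumed)

httpApiPerMillion :: Number
httpApiPerMillion = 1.00  -- HTTP API Gateway, USD per million requests (assumed)

lambdaPerMillion :: Number
lambdaPerMillion = 0.40   -- Lambda requests + 128MB compute, USD per million (assumed)

dailyCost :: Number -> Number
dailyCost gatewayPerMillion =
  millionsOfRequestsPerDay * (gatewayPerMillion + lambdaPerMillion)

-- dailyCost restApiPerMillion = 15.6 (in line with what we saw)
-- dailyCost httpApiPerMillion = 5.6  (why the HTTP API Gateway looks attractive)
```

If those assumptions hold, the gateway is the dominant line item, which is why switching to the HTTP API Gateway is first on the cost-optimization list.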

An amusing side note is that our CloudWatch Logs cost a whopping $0.40 per day (yes, that means it was 40 cents a day).

I expected an unpleasant surprise on costs, but after this week we haven't broken the bank, and there is room for cost optimization that won't take long now that a functional barebones version is running with fully automated delivery.

This is the second production backend written in PureScript that I have launched during this pandemic (COVID-19, in case we end up with a sequence of them and you can't remember). Aside from the SAM/AWS automation hell (docs conflicting with the implementation), it was mostly fun to build. The first one was a different beast: lower traffic volume, but higher complexity with an authorizer Lambda.

For those coming from a pre-serverless world, debugging a serverless application is definitely not what you are used to, depending on how deep you go when troubleshooting issues. It requires some adjustment from developers used to that model, but the change is not insurmountable.

For instance, I can't use strace, perf, ss, eBPF, etc. inside the Lambda container running in AWS. What I can do, however, is run a simulated API Gateway locally using SAM with a localhost endpoint.
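In practice that means `sam local start-api`, which puts a local HTTP endpoint in front of the function. And since the Lambda is just wrapping an Express app, another option is to serve the same app directly on localhost. A minimal sketch, assuming the purescript-express bindings (the route and port here are placeholders, not the real application routes):

```purescript
module LocalMain where

import Prelude

import Effect (Effect)
import Effect.Console (log)
import Node.Express.App (App, get, listenHttp)
import Node.Express.Handler (Handler)
import Node.Express.Response (send)

-- Hypothetical health-check route; the real routes live in the app itself.
healthHandler :: Handler
healthHandler = send "ok"

app :: App
app = get "/health" healthHandler

-- Serve the same Express app on localhost so it can be poked at with curl,
-- node --inspect, and the usual Node tooling, outside the Lambda container.
main :: Effect Unit
main = do
  _ <- listenHttp app 8080 (\_ -> log "listening on http://localhost:8080")
  pure unit
```

Running outside the Lambda container this way gives back the familiar Node debugging tools, even though it is not a perfect reproduction of the deployed environment.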

The metrics service we use (DataDog) needs a custom Lambda runtime for its full level of APM. However, that is not acceptable for us security-wise, even though it is open source, because we don't have the time to review it constantly, so we will leverage the embedded metric format instead.
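For a sense of what the embedded metric format looks like from the handler's side: it is just a structured JSON log line with an `_aws` metadata block that CloudWatch Logs turns into metrics, so no agent or custom runtime is involved. A minimal sketch, assuming the argonaut JSON libraries; the namespace, dimension, and metric names are made up:

```purescript
module Metrics where

import Prelude

import Data.Argonaut.Core (stringify)
import Data.Argonaut.Encode (encodeJson)
import Effect (Effect)
import Effect.Console (log)

-- Write one metric in the CloudWatch embedded metric format: a JSON log line
-- whose `_aws` block tells CloudWatch which keys to extract as metrics.
-- Namespace, dimension, and metric names below are hypothetical.
logLatencyMetric :: Number -> Number -> Effect Unit
logLatencyMetric timestampMs latencyMs =
  log $ stringify $ encodeJson
    { "_aws":
        { "Timestamp": timestampMs
        , "CloudWatchMetrics":
            [ { "Namespace": "MyService"
              , "Dimensions": [ [ "FunctionName" ] ]
              , "Metrics": [ { "Name": "HandlerLatency", "Unit": "Milliseconds" } ]
              }
            ]
        }
    , "FunctionName": "my-write-path-lambda"
    , "HandlerLatency": latencyMs
    }
```

Anything not declared under `CloudWatchMetrics` simply stays behind as a plain structured log field.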

Deployment automation in AWS (to reach our definition of done and satisfy our push-button, zero-downtime requirements) always comes with a handful of warts that prevent us from doing exactly what we want. I spent more time reading workarounds in the SAM CLI's GitHub issues than developing app code or deployment automation, perhaps combined.

Our current infrastructure satisfies push-button, zero-downtime deploys with EC2 instances in Auto Scaling groups (ASGs) attached to Application Load Balancers (ALBs), plus some custom code. We wrote it 3 years ago and have only tweaked it 2-3 times since. I anticipate a similar payoff here.

We have not yet set up reserved concurrency in AWS Lambda. After switching to the HTTP API Gateway, we will experiment with it for cost control. Although costs are currently low enough for our budget, they could balloon fast if we got a surge of traffic, so there are a few parameters we would like to tweak, including reserved concurrency.

The Lambda deployed as part of this application is a write-path HTTP Lambda.
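In other words, from the code's point of view it is just an Express POST route. A minimal sketch of the shape, again assuming purescript-express (the path is made up and the actual validation/persistence logic is omitted):

```purescript
module WritePath where

import Prelude

import Node.Express.App (App, post)
import Node.Express.Handler (Handler)
import Node.Express.Response (send, setStatus)

-- Hypothetical write-path route: accept a POST, persist the payload
-- (omitted here), and acknowledge with a 201.
writeHandler :: Handler
writeHandler = do
  -- validate and persist the request body here
  setStatus 201
  send "created"

app :: App
app = post "/events" writeHandler
```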


I put together a sample repository on GitHub with a barebones Express-based PureScript Lambda and a SAM template.yaml (for deployment automation) to help others get started with PureScript serverless applications more quickly, using familiar HTTP APIs (Express) from the JavaScript world.