Please help me work out the math here, as I think I am doing this wrong.
A Lambda with 128 MB costs $0.0000000021/ms, which works out to $0.00756/hour. A Lambda with 512 MB costs $0.0000000083/ms, which works out to $0.02988/hour.
Now if you look at EC2:
t4g.nano $0.0042/hour (0.5 GiB ram) t4g.micro $0.0084/hour (1GiB ram).
But... the Lambda will likely not run 100% of the time, and will stay warm for about 10 minutes (not sure here?). And the RAM would be much better utilized running just a function rather than an entire VM.
Given all that, if the function can run with 128mb or less, it seems like a no-brainer to use Lambda.
However, if the function is bigger, it would only make sense to put it on EC2 if it runs more than about 28% of the time ($0.0084/hour cost of a t4g.micro divided by the $0.02988/hour cost of a 512 MB Lambda).
So why is everyone against Lambdas citing costs as the primary reason...?
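For what it's worth, the break-even reasoning in the post above can be sketched as a quick calculation (using the per-ms Lambda prices and t4g.micro rate quoted there; real bills also include request charges, data transfer, and so on):

```python
# Rough break-even sketch using the per-ms Lambda prices quoted above.
# Ignores the per-request charge and the free tier.
MS_PER_HOUR = 3_600_000

lambda_128mb_hourly = 0.0000000021 * MS_PER_HOUR   # $/hour if busy 100% of the time
lambda_512mb_hourly = 0.0000000083 * MS_PER_HOUR
t4g_micro_hourly = 0.0084                          # always-on, 1 GiB RAM

# Utilization at which a 512 MB Lambda costs the same as an always-on t4g.micro
breakeven_utilization = t4g_micro_hourly / lambda_512mb_hourly

print(f"128 MB Lambda: ${lambda_128mb_hourly:.5f}/h")
print(f"512 MB Lambda: ${lambda_512mb_hourly:.5f}/h")
print(f"Break-even utilization vs t4g.micro: {breakeven_utilization:.1%}")
```

This comes out at roughly 28% busy time, which is where the "about 28%" figure comes from.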
Hello, I'm pretty new to building on AWS, so I pretty much just threw everything in Lambda for the heavy compute and have light polling on EC2. I am doing CPU-intensive and somewhat memory-intensive work that lasts around 1-7 minutes on async Lambda functions, which send a webhook back to the polling bot (free t2.micro) when complete. For my entire project, Lambda is accruing over 50% of the total costs, which seems somewhat high as I have around 10 daily users on my service.
Perhaps it is better to wait it out and see how my SaaS stabilises, as we are in a volatile period entering the market, so it's kinda hard to forecast our expected usage over the coming months with any precision.
Am I better off having an EC2 instance do all of the computation asynchronously or is it better to just keep it in lambda? Better can mean many things, but I mean long term economic scalability. I tried to read some economics on lambda/EC2 but it wasn't that clear and I still lack the intuition of when / when not to use lambda.
It will take some time to move everything onto an EC2 instance, and then to configure everything to run asynchronously and scale nicely, so I imagine the learning curve is steeper, but it would be cheaper as a result?
AWS Lambda pricing is promising: you don't pay for idle, and you are billed in 100-millisecond intervals. The pricing model is described thoroughly on this page, but honestly, do you really know what each function costs?
There are too many parameters that you need to take into account when calculating the cost of each function, which turns out to be tough work. Also, what about monthly cost estimation?
- Blog post: https://blog.epsagon.com/how-much-does-my-lambda-function-cost
- Open source tool: https://github.com/epsagon/lambda-cost-calculator
Hello everyone! For a client, I need to create an API endpoint that they will call as a SaaS.
The API is quite simple: it's just a sentiment endpoint on text messages, to categorise which people are interested in a product, and then a callback. I think I'm going to use Amazon Comprehend for that purpose, or apply some GPTs to extract more information like "negative but open to dialogue"...
We will receive around 23k calls per month (~750-800 per day). I'm wondering if AWS Lambda is the right choice in terms of pricing and scalability, to maximize output and minimize our costs. Would an API Gateway dispatching the calls be enough, or is it better to use SQS to increase scalability and performance? Will AWS Lambda automatically handle, for example, 50-100 concurrent calls?
What's your opinion about it? Is it the right choice?
Thank you guys!
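As a rough order-of-magnitude check for the volume described above (the 500 ms duration and 512 MB memory here are assumptions, not measurements; the rates are the standard x86 ones of $0.0000166667 per GB-second and $0.20 per million requests):

```python
# Back-of-the-envelope Lambda cost for ~23k calls/month.
calls_per_month = 23_000
avg_duration_s = 0.5          # assumed; measure your real p50/p95
memory_gb = 0.5               # 512 MB, also an assumption

gb_seconds = calls_per_month * avg_duration_s * memory_gb
compute_cost = gb_seconds * 0.0000166667          # x86 duration rate
request_cost = calls_per_month / 1_000_000 * 0.20 # per-request charge

print(f"{gb_seconds:,.0f} GB-seconds -> ${compute_cost + request_cost:.4f}/month")
```

Under these assumptions the workload sits well inside the always-free tier (1M requests and 400,000 GB-seconds per month), so Lambda cost is unlikely to be the deciding factor here. On concurrency: the default account-level concurrency limit is 1,000, so 50-100 concurrent invocations should be handled automatically without any extra setup.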
As title asks. Lambda functions are so cheap, I am curious if anyone actually runs them at a scale where costs are now a concern? If so, that would be impressive.
Hey everyone, So, I wanted to share some hard-won lessons about optimizing Lambda function costs when you're dealing with a lot of invocations. We're talking millions per day. Initially, we just deployed our functions and didn't really think about the cost implications too much. Bad idea, obviously. The bill started creeping up, and suddenly, Lambda was a significant chunk of our AWS spend.

First thing we tackled was memory allocation. It's tempting to just crank it up, but that's a surefire way to burn money. We used CloudWatch metrics (Duration, Invocations, Errors) to really dial in the minimum memory each function needed. This made a surprisingly big difference. y'know, we also found some functions were consistently timing out, and bumping up memory there actually reduced cost by letting them complete successfully.

Next, we looked at function duration. Some functions were doing a lot of unnecessary work. We optimized code, reduced dependencies, and made sure we were only pulling in what we absolutely needed. For Python Lambdas, using layers helped a bunch to keep our deployment packages small, tbh.

Also, cold starts were a pain, so we started experimenting with provisioned concurrency for our most critical functions. This added some cost, but the improved performance and reduced latency were worth it in our case.

Another big win was analyzing our invocation patterns. We found that some functions were being invoked far more often than necessary due to inefficient event triggers. We tweaked our event sources (Kinesis, SQS, etc.) to batch records more effectively and reduce the overall number of invocations.

Finally, we implemented better monitoring and alerting. CloudWatch alarms are your friend. We set up alerts for function duration, error rates, and overall cost. This helped us quickly identify and address any new performance or cost issues.

Anyone else have similar experiences or tips to share? I'm always looking for new ideas!
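A small helper in the spirit of the memory-tuning step described above: estimating a function's monthly cost from its average billed duration and memory, using CloudWatch-style numbers. The x86 on-demand rates are assumed, and the invocation counts and durations below are made-up illustrations:

```python
GB_SECOND_PRICE = 0.0000166667     # x86 on-demand duration rate (assumed)
REQUEST_PRICE = 0.20 / 1_000_000   # per-invocation request charge

def monthly_cost(invocations: int, avg_billed_ms: float, memory_mb: int) -> float:
    """Estimate monthly cost from CloudWatch-style Duration/Invocations stats."""
    gb_seconds = invocations * (avg_billed_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * GB_SECOND_PRICE + invocations * REQUEST_PRICE

# Illustration: does halving memory help if duration roughly doubles?
baseline = monthly_cost(30_000_000, 120, 1024)   # 30M invocations/month
tuned = monthly_cost(30_000_000, 230, 512)       # slower per call, half the memory
print(f"1024 MB @ 120 ms: ${baseline:,.2f}")
print(f" 512 MB @ 230 ms: ${tuned:,.2f}")
```

The point of the exercise: because cost is duration times memory, dropping memory only saves money if the duration increase (from the smaller vCPU share) doesn't eat the savings, which is exactly why dialing this in per function with real metrics pays off.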
Did you ever look at your Lambda bill thinking:
What the hell are GB-seconds? 🤨
It doesn't sound intuitive, but it's also not complex.
A breakdown including examples 👇
Preface
One of Lambda's major differences to services like EC2 or Fargate is the pay-per-use pricing: you're only paying when your code is actually executed.
In detail, you're paying for GB-seconds.
Let's have a look into that:
There are several factors that determine your #Lambda bill:
• number of executions
• execution duration
• memory assigned to your Lambda functions
Calculation of the duration starts when the code inside your handler function is executed & stops when it returns or is terminated.
What's worth noting: global code (outside your handler) is executed at cold starts & isn't billed for the first 10 seconds.
But back to the cost calculation with a look at AWS free tier.
For Lambda, it is 400,000 GB-seconds per month.
Breakdown: we're paying for gigabytes of memory assigned to your function per running second.
For our free tier, this means we get 400,000 seconds' worth of a 1 GB memory function.
That's more than 111 hours, or about 4.6 days!
If you change the memory assignment of your function, the free tier translates into different numbers:
• 128 MB => ~889 hours / 37 days
• 256 MB => ~444 hours / 18.5 days
• 512 MB => ~222 hours / 9.3 days
• 3072 MB => ~37 hours / 1.5 days
As seen, calculations are not complex at all.
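The free-tier table above is just 400,000 GB-seconds divided by the memory size, which a few lines of code make obvious:

```python
FREE_TIER_GB_SECONDS = 400_000

# Hours of free-tier runtime per month for each memory setting
hours = {mb: FREE_TIER_GB_SECONDS / (mb / 1024) / 3600
         for mb in (128, 256, 512, 1024, 3072)}

for mb, h in hours.items():
    print(f"{mb:>5} MB: {h:7.1f} hours ({h / 24:.1f} days)")
```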
Let's have a look at a detailed example:
Running a function for one second (1000 ms) with 128 MB, one million times.
We're paying for:
• 1 ms: $0.0000000021
• 1 s: $0.0000021
=> 1M executions: $2.10
Is this included?
Yes, the free tier covers this completely.
We receive 400,000 GB-seconds.
That means:
=> 400,000 GB-seconds = 3,200,000 128MB-seconds
In our example, we're only using 1,000,000 128MB-seconds! 🤩
Let's switch from a 128MB to a 10GB function.
Now we end up with $0.0000001667 per 10GB-ms.
Which means:
• 1 s: $0.0001667
=> 1M executions: $166.70 🤯
Which are equal to 10,000,000 GB-secs!
So free tier doesn't do a lot here with its 400,000 GB-secs.
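The two scenarios above, computed from the per-GB-second rate (note that AWS's published per-ms tier prices are rounded, which is why the post's $2.10 differs slightly from the exact $2.08 you get this way):

```python
GB_SECOND_PRICE = 0.0000166667  # x86 on-demand duration rate

def cost_1m_executions(memory_mb: int, duration_s: float = 1.0) -> float:
    """Duration cost of one million executions at the given memory size."""
    gb_seconds = 1_000_000 * duration_s * (memory_mb / 1024)
    return gb_seconds * GB_SECOND_PRICE

print(f"128 MB: ${cost_1m_executions(128):.2f}")
print(f"10 GB:  ${cost_1m_executions(10240):.2f}")
```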
This calculation is an example.
Your function will execute computations way faster with those memory (& therefore vCPU) differences.
Also, there's a Sweet Spot that we'll show you in this article: https://dashbird.io/blog/aws-lambda-cost-optimization-strategies/
x86 vs. ARM
You can choose to run your Lambdas on different architectures.
Our examples used x86 pricing, which is more expensive than ARM/Graviton2!
• x86 price: $0.0000166667 per GB-second
• ARM price: $0.0000133334 per GB-second
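From the two rates listed above, the ARM discount on the duration charge works out to about 20%:

```python
X86_GB_SECOND = 0.0000166667
ARM_GB_SECOND = 0.0000133334

# Relative saving on the duration charge when moving x86 -> ARM/Graviton2
savings = 1 - ARM_GB_SECOND / X86_GB_SECOND
print(f"ARM/Graviton2 duration pricing is {savings:.0%} cheaper than x86")
```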
See this article if you'd like to dive even deeper: https://dashbird.io/blog/aws-lambda-pricing-model-explained/
Hi everyone, I have a question about cost between Lambda and EC2. I am building a simple Node application using Puppeteer that will only act as an API. I obviously don't expect heavy usage for now, but overall I've read that Lambda will be cheaper to run up until heavy usage? Just wanted to get your thoughts on whether EC2 is better than Lambda in both performance and cost. Again, this will not be heavily used in either case. Also, which one's performance will be better?
I was trying to set up provisioned concurrency in AWS Lambda, and wanted some guidance on the cost overhead I'll bear.
Let's say I decide to set provisioned concurrency to 5.
My memory size is 3072 MB.
Estimated total cost per month: $0.00000005 (cost per ms for 3072 MB) × 1000×60×60×24×30 (total ms in a month) × 5 (provisioned concurrency) = $648.
A) is this calculation correct?
B) With every update do I have to deploy a new version and setup concurrency?
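On question A: the $0.00000005/ms figure above is the on-demand duration rate for 3,072 MB; keeping capacity provisioned is billed at a separate, lower rate, with a reduced duration rate charged on top only while requests actually run. A sketch assuming the us-east-1 x86 provisioned-concurrency rate of $0.0000041667 per GB-second (verify against current pricing):

```python
# Provisioned-concurrency charge alone (excludes the reduced duration
# rate you still pay while requests execute).
PC_GB_SECOND = 0.0000041667      # assumed us-east-1 x86 rate; verify
memory_gb = 3072 / 1024          # 3 GB
concurrency = 5
seconds_per_month = 60 * 60 * 24 * 30

pc_cost = memory_gb * concurrency * seconds_per_month * PC_GB_SECOND
print(f"Provisioned concurrency: ${pc_cost:,.2f}/month")  # ~$162, not $648
```

On question B: provisioned concurrency is attached to a specific published version (or an alias), so when you deploy, you either configure it on the new version or point the alias with provisioned concurrency at it.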
I'm a bit confused about the Free Tier offering. I've read that AWS Lambda offers a Free Tier with 1 million requests per month and 400,000 GB-seconds of compute time per month, but I'm not sure if this only applies during the initial 12-month Free Tier term or if it's available indefinitely.
Can someone clarify if the AWS Lambda Free Tier is available beyond the first 12 months? And if so, are the 1 million requests and compute time limits still applicable?
I'm working on an application which needs websockets.
Investigating this, it looks like it's possible to set up serverless websockets using API Gateway and Lambda - but I am wondering how expensive this gets.
Lambda is affordable when you are handling REST calls, but in that case you are only using milliseconds of CPU at a time. How does the pricing scale when using Lambda for something longer-running like a WebSocket server? That is, when my Lambda is just waiting for the next WS callback, am I paying for it to be running?
An article I wrote explaining how we reduced our Lambda costs from $2,200 per day to $200 per day: https://medium.com/foxintelligence-inside/how-we-reduced-lambda-functions-costs-by-thousands-of-dollars-8279b0a69931
On August 1st, AWS started charging for something that was previously free: the initialization phase of Lambdas.
Official blog post here: https://aws.amazon.com/blogs/compute/aws-lambda-standardizes-billing-for-init-phase/
Here's the weird part: a few days before that change (around July 29th), we saw init times suddenly increase across multiple AWS accounts for one of our clients.
They went from ~500 ms to 1-3+ seconds. No deployments, no code changes, no new versions. Just noticeably slower inits, out of nowhere.
Now, when comparing billing, Lambda costs have more than doubled from July to August with no obvious reason.
Has anyone else noticed the same behavior? Is this just bad timing, or something more deliberate?
If you're running workloads on Lambdas, I'd recommend checking your metrics and costs. Would love to hear what others are seeing.
What approaches would you take to avoid this cost impact?
Looking for the cheapest way to deploy a yet to be developed application.
It seems to me that Lambda functions behind the API Gateway is the cheapest and most scalable way to deploy a cloud native micro service. Am I wrong to assume this?
The other AWS options, like EC2 (replicating the original server environment), Beanstalk (traditional Java/Python web apps with slightly modified repackaging), and ECS/EKS (Docker container image based), seem more geared toward backward compatibility with legacy or non-cloud-native apps and services that cannot or should not be changed.