What does “unrealistically low” mean for you? $10? $3,789? This question is the mother of all “it depends”. (Answer from vater-gans on reddit.com.)
r/aws on Reddit: Aurora Serverless 2 - Cost estimation
August 9, 2022 -

Hi everyone.

I would like to know your opinion on how to evaluate costs for using Aurora Serverless 2 against a current RDS workload.

Basically, I have a client running a PG RDS. Given the workload they have with sudden spikes during business hours and very low usage off business hours and weekends, I would like to recommend serverless 2.

But is there a way I can predict their TCO (Total Cost of Ownership) using Serverless 2 based on their current RDS metrics from CloudWatch? That would be really helpful to get the client to buy in.

Thanks in advance!!
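One rough way to turn CloudWatch history into a Serverless v2 estimate is to map each hour's average CPU to an ACU level and price the sum. Everything in this sketch is an assumption to adjust: the ~4-ACUs-per-vCPU mapping (echoing the support guidance quoted further down this page), the min/max clamp, and the $0.12/ACU-hour rate.

```python
# Rough Serverless v2 cost sketch from hourly CPU-utilization averages
# (e.g. CloudWatch CPUUtilization at 1-hour period). Assumptions to
# verify: CPU share maps linearly to ACUs at ~4 ACUs per busy vCPU,
# ACUs clamp to the cluster's min/max, and the rate is $0.12/ACU-hour.

def estimate_monthly_cost(hourly_cpu_pct, vcpus,
                          acu_per_vcpu=4.0, min_acu=0.5,
                          max_acu=16.0, usd_per_acu_hour=0.12):
    """hourly_cpu_pct: one average CPUUtilization value (0-100) per hour."""
    acu_hours = 0.0
    for pct in hourly_cpu_pct:
        demand = (pct / 100.0) * vcpus * acu_per_vcpu
        acu_hours += min(max(demand, min_acu), max_acu)
    return acu_hours * usd_per_acu_hour

# Example: business-hours spikes (8 busy hours/day), near-idle otherwise,
# on a 4-vCPU instance, for a 30-day month.
day = [60.0] * 8 + [3.0] * 16
month = day * 30
print(round(estimate_monthly_cost(month, vcpus=4), 2))
```

The point of the exercise is the shape, not the constants: feed in real hourly averages and the idle floor (min ACU) usually dominates the answer.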

r/aws on Reddit: Is there a way to get a realistic estimate of how much Aurora would cost?
May 22, 2025 -

Our production database needs some maintenance because it was neglected for a while. Some DBA friends I know keep telling me to migrate to Postgres-compatible Aurora. Others tell me it is too expensive.

When I did some quick estimates in the aws calculator, the cost seems unrealistically low.

Is there some tool that would give me a better idea of how much it would realistically cost?

r/aws on Reddit: RDS Aurora Cost Optimization Help — Serverless V2 Spiked Costs, Now on db.r5.2xlarge but Need Advice
April 29, 2025 -

Hey folks,
I’m managing a critical live production workload on Amazon Aurora MySQL (8.0.mysql_aurora.3.05.2), and I need some urgent help with cost optimization.

Last month’s RDS bill hit $966, and management asked me to reduce it. I tried switching to Aurora Serverless V2 with ACUs 1–16, but it was unstable — connections dropped frequently. I raised it to 22 ACUs and realized it was eating cost unnecessarily, even during idle periods.

I switched back to a provisioned db.r5.2xlarge, which is stable but expensive. I tried evaluating t4g.2xlarge, but it couldn’t handle the load. Even db.r5.large chokes under pressure.

Constraints:

  • Can’t downsize the current instance without hurting performance.

  • This is a real-time, critical DB.

  • I'm already feeling the pressure as the “cloud expert” on the team 😓

My Questions:

  • Has anyone faced similar cost issues with Aurora and solved it elegantly?

  • Would adding a read replica meaningfully reduce cost or just add more?

  • Any gotchas with I/O-Optimized I should be aware of?

  • Anything else I should consider for real-time, production-grade optimization?

Thanks in advance — really appreciate any suggestions without ego. I’m here to learn and improve.
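For the provisioned-vs-serverless question above, a break-even sketch helps frame it: below what average ACU level does Serverless v2 undercut a fixed instance? Both prices here are placeholders, not actual AWS rates — the $1.00/hour is a hypothetical stand-in for a db.r5.2xlarge-class rate, and $0.12/ACU-hour is the us-east-1-style rate cited elsewhere on this page.

```python
# Break-even sketch: average ACUs above which a provisioned instance
# is cheaper than Serverless v2. Plug in your region's real rates.

PROVISIONED_USD_PER_HOUR = 1.00   # hypothetical provisioned-instance rate
USD_PER_ACU_HOUR = 0.12           # assumed Serverless v2 rate

break_even_acus = PROVISIONED_USD_PER_HOUR / USD_PER_ACU_HOUR
print(round(break_even_acus, 2))
```

If your workload's time-weighted average sits above that ACU level (as the 22-ACU experience above suggests), serverless will cost more than provisioned, full stop.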

r/aws on Reddit: Need some guidance on reducing the cost of my Aurora serverless database.
October 31, 2022 -

Hey there,

Sorry if I lack any technical jargon for this question, I'm still pretty novice to AWS. Right now I have a desktop application that has a leaderboard function. For this I decided it would be best to go down the RDS path. I know pretty little about connecting and running databases, so I opted to go the serverless route, and wanted to access the database using the Aurora API / Lambda.

I saw AWS deprecated mySQL for serverless 1.0, and since serverless 2.0 does not support the aurora API, I went with the postgres option, as I could still use the API.

I think this is the first mistake, as it seems the minimum ACUs for the Postgres option is double that of MySQL. But either way my database has a min and max of 2 ACUs, which is probably far more than my application needs. I would estimate my user pool will be about 50k max, and it's just storing simple leaderboard numbers.

After one month of running the database, my monthly bill came out to ~$230, which is just a lot, especially since I haven't even launched this product yet. My main cost was just running the database:

$0.08 per Aurora Capacity Unit hour running Amazon Aurora PostgreSQL Serverless × 2,840.991 ACU-Hr = $227.28

So does anyone have any advice on where to start in reducing the cost? Should I move off Postgres? Would running EC2 and manually managing the database be cheaper? Would no longer using the API be cheaper? Any help appreciated.

Edit: Wow, I just realized while posting this that my ACU numbers did not add up, and it turns out I was running a second database all month with nothing in it. So that's half the cost at least, lol. But my questions still apply.
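That edit lines up with the numbers almost exactly: two clusters pinned at a 2-ACU floor for roughly a 30-day month is about the ACU-hour total on the bill. A quick sketch, using the $0.08/ACU-hour rate from the bill above:

```python
# Two always-on clusters, each idling at their 2-ACU minimum for
# ~720 hours, at the $0.08/ACU-hour rate shown on the bill above.
clusters, min_acu, hours, rate = 2, 2.0, 720, 0.08

acu_hours = clusters * min_acu * hours   # 2880 ACU-Hr vs 2,840.991 billed
print(round(acu_hours * rate, 2))        # lands near the $227.28 charged
```

The small gap between 2,880 and 2,840.991 ACU-Hr is plausibly a month slightly shorter than 720 hours or brief scale-down periods; the structure of the charge is the point.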

r/aws on Reddit: Aurora Serverless v2 vs RDS cost comparison?
February 4, 2023 -

I have an app in production running on an RDS PostgreSQL db.r5.xlarge; the traffic is normal, peaking during the day and almost sleeping during the night without any clear spikes.
I have a read replica that is used for reporting queries. This one is problematic: it has spikes whenever users open the Google Data Studio reports, and even a db.r5.2xlarge doesn't do the job fairly well.
I started thinking about evaluating Aurora Serverless v2 as an option. Do you think using serverless will decrease costs? What sorts of problems might serverless cause, or have you run into yourself?

Thanks everyone

aws.amazon.com › amazon rds › amazon aurora › pricing
Amazon Aurora Pricing
4 days ago - Aurora PostgreSQL and Aurora MySQL offer On-Demand and Reserved Instance pricing. Aurora charges for database instances and storage, along with any optional features you choose to enable. Aurora DSQL has a serverless pricing model, and you can learn more on Aurora DSQL pricing page.
r/aws on Reddit: Experiences with Aurora Serverless v2?
September 2, 2024 -

Hi all,

I've been reading some older threads about using Serverless v2 and see a lot of mentions of DBs never idling at 0.5.

I'm looking to migrate a whole bunch of Wordpress MySQL DBs and was thinking about migrating to Aurora to save on costs, by combining multiple DBs in one instance, as most of them, especially the Test and Staging DBs, are almost never used.

However, seeing this has me worried, as any cost savings would be diminished immediately if the clusters didn't idle at 0.5 ACU.

What are your experiences with Serverless? Happy to hear them, especially in relation to Wordpress DBs!

Any other suggestions RE WP DBs are welcome too!

Top answer (1 of 11, score 15):
My experience is that it performs poorly in absolute terms compared to similarly sized provisioned instances, and that it's like 10x worse perf/$. It's fine for something that has near-zero load 2/3 of the time.
Answer 2 of 11 (score 10):
Pasting something I wrote elsewhere. I've been trying Aurora Serverless v2 PostgreSQL on a project and the results are pretty surprising. We were using db.t4g.medium (2 vCPUs and 4 GB RAM) instances before and switched to 2 ACUs (according to the docs this gives you 4 GB RAM and "corresponding CPU, and network"). The word "corresponding" is doing a lot of heavy lifting in that sentence.

We have a writer and a reader instance, as it's a production system. If you set the failover priority of the reader to tier 2 or higher, the reader is supposed to scale down independently of the writer rather than remaining at the same ACU. We want this, as the writer is never busy, and if it does fail over we're happy to wait for the reader to scale up. See https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#aurora-serverless-v2-choosing-promotion-tier

The result we saw was that, post-switch, average CPU went from ~15% to ~35%, maximum CPU went from ~15% to 100%, and the writer and reader were both scaled up to 2 ACUs. I opened a ticket with AWS to say "wtf", as I expected the reader to be sitting at 0.5 ACUs (it does nothing) and CPU to be broadly what it was previously.

AWS said that for the reader: "From the Performance insights, I don't see any SQL queries waiting on CPU but I do see SQL queries running on the instance that are consuming CPU. From enhanced monitoring, I see that most of the CPU usage is consumed by RDS processes running within the instance. This also includes the backend Aurora processes and the processes needed for data replication."

That seems pretty off, as the writer is not busy at all, so I can't fathom why the reader is doing much at all.
I also queried why 2 ACUs wasn't broadly equivalent to a db.t4g.medium, and support came back with: there is no direct relationship or correlation between ACU and vCPU, only that "Each ACU is a combination of approximately 2 gibibytes (GiB) of memory, corresponding CPU, and network." They added: "Please note, we do not have any exact figures regarding how many vCPU each ACU has; however, based on my testing of Aurora Serverless v2 and based on estimated vCPUs from Performance Insights, I can provide you with a rough estimate of vCPU & memory per ACU:

  • 2 ACU - 1 vCPU (4 GiB RAM)

  • 4 ACU - 1 vCPU (8 GiB RAM)

  • 8 ACU - 2 vCPU (16 GiB RAM)

  • 16 ACU - 4 vCPU (32 GiB RAM)

  • 32 ACU - 8 vCPU (64 GiB RAM)

  • 64 ACU - 16 vCPU (122 GiB RAM)

  • 128 ACU - 32 vCPU (256 GiB RAM)

Prior to changing to the serverless config, you were using the db.t4g.medium instance type, which has 2 vCPUs and 4 GB memory [1]. Based on your current configuration, with 2 ACUs as max capacity, you are getting only 1 vCPU and 4 GB RAM, and hence the CPU usage is high. To get 2 vCPUs, you would need to configure at least 8 ACUs. Please change the max capacity of the cluster accordingly and verify if it addresses the CPU usage issue."

An ACU costs $0.12 per ACU-hour, so if I follow AWS' guidance, in order to get 2 vCPUs I'm going to need to pay 0.12 × 8 × 720 × 2 = $1,382.40, versus $59.86 × 2 = $119.72 for a pair of db.t4g.medium instances. This all seems utterly bonkers.
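The arithmetic in that support exchange checks out and is worth seeing side by side; the rates, the 8-ACU level, and the two-instance count are all taken from the comment above, not from AWS price lists:

```python
# Recompute the commenter's comparison: Serverless v2 at the 8-ACU
# level support suggested, two instances, for a 720-hour month,
# versus two provisioned db.t4g.medium at the quoted monthly price.
serverless  = 0.12 * 8 * 720 * 2   # $/ACU-hr * ACUs * hours * instances
provisioned = 59.86 * 2            # quoted db.t4g.medium monthly price x2

print(round(serverless, 2), round(provisioned, 2),
      round(serverless / provisioned, 1))
```

Roughly an 11-12x gap at equal vCPU count, which is the "utterly bonkers" the commenter means.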
Top answer (1 of 2, score 2):
With Serverless v2, the hourly cost is somewhere like 12-20 cents per ACU per hour, depending on the AWS Region. You can check the price for each combination of AWS Region and Aurora database engine here: https://aws.amazon.com/rds/aurora/pricing/

Let's consider us-east-1, which (as of January 2024) is 12 cents per ACU per hour. The minimum for Serverless v2 is 0.5 ACUs, so 6 cents/hour. A typical month has 720 hours (30 days) or 744 (31 days). So if you set minimum capacity to 0.5 ACUs, leave the cluster idle, and nothing unexpected happens, the best case is roughly $43-45 per month for instance charges. Plus whatever usage-based charges for storage and I/O, and there are some other optional features that could result in charges. (That's why you would go through the exercise with the pricing calculator.)

What could interfere with the best case? Turning on memory-consuming or CPU-consuming features could prevent the idle cluster from scaling down to 0.5 ACUs: something like Performance Insights (minimum 2 ACUs) or global database (minimum 8 ACUs). Cleanup operations like PostgreSQL vacuum could run and cause scaling up when you think the database should be idle.

What actions could you take to make the best case even better? Do "stop cluster" overnight or over other long periods when you don't need the database. If you need to add reader instances to the cluster to test multi-AZ usage (read/write splitting etc.), delete the reader instances when they're not needed. Have cron jobs run stop-db-cluster, modify-db-cluster, etc. to put things into a cheaper state during overnight periods if you forget to do it at the end of the day.
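The idle-floor figure in that answer is easy to recompute; the $0.12/ACU-hour rate is the us-east-1 value the answer itself cites:

```python
# Best case from the answer above: cluster parked at the 0.5-ACU
# minimum for a whole month, instance charges only.
rate, min_acu = 0.12, 0.5           # us-east-1 rate cited in the answer

for hours in (720, 744):            # 30-day vs 31-day month
    print(hours, round(rate * min_acu * hours, 2))
```

That reproduces the "roughly $43-45 per month" range, before storage, I/O, and optional features.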
Answer 2 of 2 (score 0):
Your best bet is to use the AWS Calculator (https://calculator.aws/#/) to estimate the operating cost with the services that you plan to use. Secondly, using Graviton2 instances would save a lot compared with other instances. I have listed some common instance types that you may start with and then change later based on your project workload:

  • t4g: for dev/test workloads

  • m6g: for general-purpose workloads

  • r6g: for memory-optimized workloads

Go with small storage initially; you can scale it based on need to optimize the cost.
Top answer (1 of 2, score 1):
Hey Kyle,

Here's what is mentioned in the Serverless documentation [1]: "If your provisioned workload has memory requirements that are too high for small DB instance classes such as T3 or T4g, choose a minimum ACU setting that provides memory comparable to an R5 or R6g DB instance. In particular, we recommend the following minimum capacity for use with the specified features (these recommendations are subject to change): Performance Insights – 2 ACUs; Aurora global databases – 8 ACUs (applies only to the primary AWS Region)."

Now, I can tell you that Performance Insights definitely wants 2 ACUs / 4 GB because you need extra RAM to store the performance_schema tables that Performance Insights requires. The 8 ACUs is precisely in line with the memory requirements for provisioned instances: the r5.large is 2 vCPUs and 16 GB, and 8 ACUs is the same 16 GB of RAM, so they're asking for roughly the same specifications.

They are recommendations, not absolute rules, so you can go lower, though it does risk out-of-memory errors and slower-than-expected replication. It's important to ask how much money might be saved, and how much a crash or restart might affect your application. If it's a new application, still being tested, you can probably afford a lot of scaling up (then back down), and/or the occasional crash. If it's for production, you'd want to weigh the cost savings against the cost of a crash or OOM error.

In regards to the AWS pricing calculator, please know that charges for Aurora Serverless v2 capacity are measured in ACU-hours, calculated from the average number of ACUs used per hour. For example, if on average every hour you use 4 ACUs for 30 minutes and 2 ACUs for the other 30 minutes, each hour would be 3 ACU-hours. How you calculate this estimate is entirely based on your workload and how often it scales ACUs up/down. Please take a look at the following documentation for more examples of how this estimation process works [2].
Please note that in addition to ACUs used per hour, you will be charged a storage rate and an I/O rate. For Aurora Standard in us-east-1, you are charged $0.10 per GB-month and $0.20 per 1 million requests; for more information on these prices, see [3]. Also, if you leave the instance running, it will continue to use at least the minimum ACU value that you configured for the cluster. As per the docs [4], "Aurora Serverless v2 writers and readers don't scale all the way down to zero ACUs. Idle Aurora Serverless v2 writers and readers can scale down to the minimum ACU value that you specified for the cluster."

References:
[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html#aurora-serverless-v2.min_capacity_considerations
[2] Aurora pricing examples: https://aws.amazon.com/rds/aurora/pricing/#:~:text=Japanese%20Consumption%20Tax.-,Aurora%20pricing%20examples,-The%20following%20examples
[3] Amazon Aurora pricing: https://aws.amazon.com/rds/aurora/pricing/
[4] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-works.scaling
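The 30-minutes-at-4 / 30-minutes-at-2 example in that answer generalizes to any set of (ACU, minutes) segments within an hour:

```python
# Time-weighted ACU-hours for one clock hour, per the billing example
# above: the charge is based on the average ACUs used over the hour.
def acu_hours(segments):
    """segments: iterable of (acus, minutes) pairs summing to 60 minutes."""
    return sum(acus * minutes for acus, minutes in segments) / 60.0

print(acu_hours([(4, 30), (2, 30)]))   # the worked example: 3.0 ACU-hours
```

Multiply the result by your region's ACU-hour rate for the capacity charge; storage and I/O are billed separately as noted above.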
Answer 2 of 2 (score 0):
Hi Kyle, hope you are well. You may be running into the global-database Serverless v2 capacity requirement, which needs the minimum ACUs set to 8 (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html). Serverless v2 pricing is based on capacity (ACUs) per second; the minimum ACU setting is effectively the smallest billing level for the cluster, and additional charges accrue as the ACUs scale up. You can also tune the minimum ACUs based on your own experience with the workload.
r/aws on Reddit: Aurora Serverless pricing
May 11, 2016 -

https://aws.amazon.com/rds/aurora/serverless/

It has the following: "You pay a flat rate per second of ACU usage, with a minimum of 1 minute of usage each time the database is activated."

Does that mean if I use it for, let's say, 10 seconds, I'll be charged for a minute? If I then run it 10 minutes later and use it again for 10 seconds, am I charged for another minute?

Or is it totaled up at the end of the month i.e. I've used it for 20 seconds, so I am charged for 1 minute?

I'm just trying to figure out if it's worth using serverless Aurora for personal projects that require a DB but run very infrequently.
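The two readings of that sentence do give different numbers. Here's the per-activation reading as a sketch — treat it as an illustration of the question, not a statement of how AWS actually meters it, since the quoted page doesn't spell it out:

```python
# Per-activation reading of "minimum of 1 minute of usage each time the
# database is activated": every activation bills at least 60 seconds.
def billed_seconds(activation_secs, minimum=60):
    return sum(max(minimum, s) for s in activation_secs)

print(billed_seconds([10, 10]))   # two 10s activations -> 120 billed seconds
```

Under the month-end-totaling reading, the same two activations would instead be 20 seconds of usage rounded up to 60 once, so for very infrequent personal-project workloads the difference is exactly what the poster is asking about.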

cloudzero.com › home › blog › aws aurora pricing: how to save costs in 2025
AWS Aurora Pricing: How To Save Costs In 2025
March 25, 2025 - For Aurora Serverless, users are charged based on Aurora Capacity Units (ACUs), where one ACU corresponds to approximately 2 GB of memory. The cost for Aurora Serverless v2 is $0.12 per ACU-hour, which is double that of v1 at $0.06 per ACU-hour.
cloudexmachina.io › blog › aws-aurora-pricing
AWS Aurora Pricing Explained: What You Really Pay for and Why
September 2, 2025 - Note that actual prices vary, so ... Frankfurt). Serverless v2 rates fluctuate by region, though generally in the $0.06-$0.08 ACU/hr range....
r/aws on Reddit: New to Aurora Serverless & serverless in general
May 6, 2023 -

Hi

I'm new to Aurora Serverless as well as serverless services in general. I'm looking for a managed relational DB service that supports foreign keys (I looked up PlanetScale, but they don't support them) and doesn't cost too much.

The usage is for hobby project.

I'm thinking of using Aurora Serverless v2, but I'm confused by the pricing calculator, especially about the number of ACUs running per hour. Shouldn't serverless mean "pay as you go"? Why does the calculator show a monthly cost?

Does the pricing calculator assume the instance keeps running and never stops due to inactivity for 30 days straight? After how long of inactivity would the DB instance stop on its own?

r/serverless on Reddit: Amazon Aurora Serverless v2 is Generally Available!
April 21, 2022 - That's not yet 100% serverless pricing (I'm especially bothered by the starting price), but definitely closer to it than what we had before. Edit: confirmed that it doesn't scale down to 0 😢 ... I have the same impression. And it’s seriously a bummer. One of the usescases described is dev/test environments, but there is no way to even pause a v2 cluster, so you’ll end up with a lot of incurring cost if each dev has their own database.