With Serverless v2, the cost is roughly 12-20 cents per ACU per hour, depending on the AWS Region. You can check the price for each combination of AWS Region and Aurora database engine here: https://aws.amazon.com/rds/aurora/pricing/
Let's consider us-east-1, which (as of January 2024) is 12 cents per ACU per hour. The minimum for Serverless v2 is 0.5 ACUs, so 6 cents/hour. A typical month has 720 hours (30 days) or 744 (31 days). So if you set the minimum capacity to 0.5 ACUs, leave the cluster idle, and nothing unexpected happens, the best case is roughly $43-45 per month for instance charges, plus usage-based charges for storage and I/O, and some optional features can add charges of their own. (That's why you would go through the exercise with the pricing calculator.)
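As a sanity check, the idle-cluster arithmetic above can be scripted. A quick sketch; the rates are the January 2024 us-east-1 figures quoted here, so verify against current pricing:

```python
# Back-of-envelope idle cost for an Aurora Serverless v2 cluster.
# Rates below are the January 2024 us-east-1 figures quoted above;
# check the Aurora pricing page for current values.
PRICE_PER_ACU_HOUR = 0.12   # USD per ACU-hour
MIN_ACU = 0.5               # Serverless v2 scaling floor

def idle_monthly_cost(hours_in_month: int) -> float:
    """Instance charges only; storage, I/O, and backups are extra."""
    return MIN_ACU * PRICE_PER_ACU_HOUR * hours_in_month

print(f"30-day month: ${idle_monthly_cost(720):.2f}")  # $43.20
print(f"31-day month: ${idle_monthly_cost(744):.2f}")  # $44.64
```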
What could interfere with the best case? Turning on memory-consuming or CPU-consuming features could prevent the idle cluster from scaling down to 0.5 ACUs. Something like Performance Insights (minimum 2 ACUs) or global database (minimum 8 ACUs). Cleanup operations like PostgreSQL vacuum could run and cause scaling up when you think the database should be idle.
What actions could you take to make the best case even better? Do "stop cluster" overnight or for other long periods when you don't need the database. If you need to add reader instances to the cluster to test multi-AZ usage (read/write splitting, etc.), delete the reader instances when they're not needed. Have cron jobs run stop-db-cluster, modify-db-cluster, etc., to put things into a cheaper state overnight in case you forget to do it at the end of the day.
(Answer from rePost-User-6113899 on repost.aws)
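The overnight stop/start advice above could be wired up as cron entries. A sketch; the cluster identifier "dev-cluster" is a placeholder, and the AWS CLI must be installed and configured:

```shell
# Stop the cluster at 8 PM on weekdays (cluster name is a placeholder).
0 20 * * 1-5  aws rds stop-db-cluster --db-cluster-identifier dev-cluster
# Start it again at 7 AM. Note that RDS automatically restarts a stopped
# cluster after 7 days, so stopping is a savings measure, not permanent.
0 7 * * 1-5   aws rds start-db-cluster --db-cluster-identifier dev-cluster
```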
Amazon Aurora Pricing (aws.amazon.com)
We want to calculate the compute costs of running this workload on Aurora Serverless with the database cluster configured as Aurora Standard and Aurora I/O-Optimized in US East (N. Virginia). With Aurora Serverless, the minimum database capacity you can set is 0.5 ACUs.
Amazon Aurora Pricing (amazonaws.cn)
Examples using Aurora Serverless: Consider a workload that needs 5 ACUs and runs for 30 minutes. We want to calculate the compute costs of running this workload on Aurora Serverless v2 with the database cluster configured as Aurora Standard and Aurora I/O-Optimized in China (Ningxia).
Answer 2 of 2 (0 votes)
Your best bet is to use the AWS Pricing Calculator (https://calculator.aws/#/) to estimate the operating cost of the services you plan to use.
Second, using Graviton2 instances would save a lot compared with other instance families. I have listed some common instance types that you may start with and change later based on your project workload.
t4g: dev/test workloads
m6g: general-purpose workloads
r6g: memory-optimized workloads
Start with small storage; you can scale it based on need to optimize cost.
Top answer 1 of 2 (1 vote)
Hey Kyle,
Here's what is mentioned in the Serverless documentation[1]:
If your provisioned workload has memory requirements that are too high for small DB instance classes such as T3 or T4g, choose a minimum ACU setting that provides memory comparable to an R5 or R6g DB instance.
In particular, we recommend the following minimum capacity for use with the specified features (these recommendations are subject to change):
Performance Insights – 2 ACUs
Aurora global databases – 8 ACUs (applies only to the primary AWS Region)
Now, I can tell you that Performance Insights definitely wants 2 ACUs (4 GiB) because you need extra RAM to store the performance_schema tables that Performance Insights requires. The 8 ACUs is precisely in line with the memory requirements for provisioned instances: the r5.large is 2 vCPU and 16 GiB, and 8 ACUs is the same 16 GiB of RAM, so they're asking for roughly the same specifications.
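The rough rule behind this answer (1 ACU corresponds to about 2 GiB of memory) makes the recommended minimums easy to check; a tiny sketch:

```python
# 1 ACU corresponds to roughly 2 GiB of memory (plus matching CPU/network).
GIB_PER_ACU = 2.0

def acu_to_gib(acus: float) -> float:
    return acus * GIB_PER_ACU

print(acu_to_gib(2))  # 4.0  -> Performance Insights minimum (2 ACUs)
print(acu_to_gib(8))  # 16.0 -> same memory as a db.r5.large (16 GiB)
```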
They are recommendations, not absolute rules, so you can go lower, though doing so risks out-of-memory errors and slower-than-expected replication. It's important to weigh how much money might be saved against how much a crash or restart would affect your application. If it's a new application still being tested, you can probably tolerate a lot of scaling up (and back down) and the occasional crash. If it's for production, you'd want to measure the cost savings against the cost of a crash or OOM error.
Regarding the AWS Pricing Calculator, note that charges for Aurora Serverless v2 capacity are measured in ACU-hours, calculated from the average number of ACUs used per hour. For example, if in a given hour you use 4 ACUs for 30 minutes and 2 ACUs for the other 30 minutes, that hour is billed as 3 ACU-hours. How you calculate this estimate depends entirely on your workload and how often it scales ACUs up and down. Please take a look at the following documentation for more examples of how this estimation works [2].
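The averaging in that example is just a time-weighted mean; a sketch (the segment format here is mine for illustration, not an AWS API):

```python
def acu_hours_for_one_hour(segments):
    """segments: list of (acus, minutes) pairs covering one hour."""
    assert sum(m for _, m in segments) == 60, "segments must total 60 minutes"
    return sum(acus * minutes for acus, minutes in segments) / 60

# 4 ACUs for 30 min + 2 ACUs for 30 min is billed as 3 ACU-hours
print(acu_hours_for_one_hour([(4, 30), (2, 30)]))  # 3.0
```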
Please note that in addition to ACU-hours you will be charged for storage and I/O. For Aurora Standard in us-east-1 you are charged $0.10 per GB-month of storage and $0.20 per 1 million I/O requests. For more information on these prices, see [3].
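Those storage and I/O rates translate directly into a monthly figure. A sketch using the Aurora Standard us-east-1 prices quoted above; verify current rates before relying on them:

```python
# Aurora Standard, us-east-1 rates quoted in the answer above.
STORAGE_PER_GB_MONTH = 0.10     # USD per GB-month
IO_PER_MILLION_REQUESTS = 0.20  # USD per 1M I/O requests

def storage_and_io_cost(gb_stored: float, io_requests: int) -> float:
    return (gb_stored * STORAGE_PER_GB_MONTH
            + io_requests / 1_000_000 * IO_PER_MILLION_REQUESTS)

# e.g. 100 GB stored and 50 million I/O requests in a month
print(f"${storage_and_io_cost(100, 50_000_000):.2f}")  # $20.00
```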
Also if you leave the instance running it will continue to use the minimum ACU value that you configured for this cluster. As per Doc [4] “Aurora Serverless v2 writers and readers don't scale all the way down to zero ACUs. Idle Aurora Serverless v2 writers and readers can scale down to the minimum ACU value that you specified for the cluster.”
**References:**
[1] Minimum capacity considerations: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html#aurora-serverless-v2.min_capacity_considerations
[2] Aurora pricing examples: https://aws.amazon.com/rds/aurora/pricing/#:~:text=Japanese%20Consumption%20Tax.-,Aurora%20pricing%20examples,-The%20following%20examples
[3] Amazon Aurora pricing: https://aws.amazon.com/rds/aurora/pricing/
[4] Aurora Serverless v2 scaling: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-works.scaling
Answer 2 of 2 (0 votes)
Hi Kyle,
Hope you are well. You may have seen that a global database on Serverless v2 PostgreSQL has a recommended minimum capacity of 8 ACUs (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html).
Serverless v2 is priced per second based on capacity (ACUs). The configured minimum ACU is the smallest billing unit for the cluster in a given month, and new charges accrue as the ACUs scale up elastically.
You can also tune the minimum ACU setting based on your own experience.
r/aws on Reddit: Aurora Serverless 2 - Cost estimation (reddit.com, August 6, 2022)
Hi everyone.
I would like to know your opinion on how to evaluate costs for Aurora Serverless 2 against a current RDS workload.
Basically, I have a client running a PostgreSQL RDS. Given their workload, with sudden spikes during business hours and very low usage outside business hours and on weekends, I would like to recommend Serverless 2.
But is there a way I can predict their TCO (total cost of ownership) with Serverless 2 based on their current RDS metrics from CloudWatch? That would really help get the client to buy in.
Thanks in advance!
Top answer 1 of 2 (4 votes)
Hello, I asked AWS about this and it's complicated to evaluate. 1 ACU is about 2 GB of memory plus corresponding CPU. You can track CPU in CloudWatch, but real memory consumption is a calculated metric that depends on your engine. My client used Aurora MySQL, so I compared all the instances in their clusters to ACUs (based on instance memory). For the usage trend I used the CPU usage graph, even though it's not totally relevant. It came out at three times the price. Our strategy is to use it for non-production workloads in the near future, but not in production. If you don't have significant usage spikes in your workload, Aurora Serverless is not the answer, I think.
Answer 2 of 2 (2 votes)
I would personally calculate it as 4x the cost of PG RDS IF the workload is flat (which almost none are). If the workload averages at 25% usage of peak load needed, then it will be cheaper due to the honestly magical scaling.
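That 4x rule of thumb implies a simple breakeven: under this commenter's model, serverless wins when average load is below about 25% of the peak you'd otherwise provision for. A sketch of that reasoning; the multiplier is the comment's estimate, not an AWS figure:

```python
SERVERLESS_MULTIPLIER = 4.0  # commenter's estimate of the per-unit price ratio

def serverless_cheaper(avg_load_as_fraction_of_peak: float) -> bool:
    """Serverless pays 4x but only for average load; provisioned pays
    1x but must be sized for peak load."""
    return SERVERLESS_MULTIPLIER * avg_load_as_fraction_of_peak < 1.0

print(serverless_cheaper(0.20))  # True  -- spiky workload, 20% average
print(serverless_cheaper(0.30))  # False -- too flat; 25% is the breakeven
```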
Managed Relational Database - Amazon RDS Pricing (aws.amazon.com)
For provisioned instances on Amazon Aurora, RDS for MySQL, and RDS for PostgreSQL, RDS Extended Support is priced per vCPU per hour. For Aurora Serverless v2, RDS Extended Support is priced per Aurora Capacity Unit (ACU) per hour consumed by your database.
AWS Aurora Serverless Pricing & 3 Tips for Cost Optimization (economize.cloud)
GitHub - orlyandico/aurora_serverless_v2_savings_calculator: Estimate potential savings from migrating qualifying databases in your current RDS fleet to Aurora Serverless V2 (github.com)
Calculates the monthly cost for that average ACU using the pricing API (currently hardwired to use the Aurora PostgreSQL ACU price, which is the same as Aurora MySQL, so this is OK for now), and calculates the cost of Aurora IOPS based on the Read+Write ...
Top answer 1 of 2 (1 vote)
To estimate the cost of a cross-region headless cluster using AWS Aurora Global Database in the AWS Pricing Calculator, you'll need to manually configure the components, since the calculator doesn't provide a direct "headless cluster" option. In a headless Aurora cluster, where no writer or reader instances are provisioned and only storage is used (for scenarios like DR or cross-region replication), you can focus on the storage and I/O costs.

First, select Amazon Aurora in the calculator and configure the Global Database option. Then, for each region involved, specify the estimated amount of storage in GB under "Storage" (using Aurora's default storage type, usually General Purpose SSD). You can also estimate the backup storage and I/O requests based on your expected usage. Make sure to set the instance count to zero or deselect instances if possible to simulate a headless configuration.

While the calculator may still require at least one instance to be configured for some settings, you can use a minimal instance temporarily just to calculate base storage and I/O, and then exclude instance pricing manually in your internal cost estimates. Additionally, don't forget to account for cross-region data transfer costs, which are incurred when replication traffic flows between primary and secondary clusters.
Answer 2 of 2 (0 votes)
When calculating costs for an Aurora Serverless cluster, you need to consider several components:
For Aurora Serverless v2, pricing is based on Aurora Capacity Units (ACUs). Each ACU represents approximately 2 GiB of memory with corresponding CPU and networking resources. You're billed for every second your database runs, with costs calculated per ACU-hour.
For a cross-region Aurora Global Database using Serverless, you would need to account for:
1. ACU usage in each region - The primary region and each secondary region will incur ACU charges based on actual usage. If you're running instances in multiple Availability Zones, ACU usage is cumulative across instances (e.g., three instances at 1 ACU each, one per AZ, bill 3 ACUs per hour in total).
2. Storage costs - Aurora storage is billed per GB-month and is separate from compute costs.
3. Data transfer costs - For a Global Database, you'll incur charges for data replicated between regions.
To estimate costs when there's no direct option in the AWS Pricing Calculator for a headless cluster:
- Calculate the minimum ACU configuration you expect to use
- Estimate your average ACU usage based on your workload patterns
- Add storage costs (based on GB-month)
- Include cross-region data transfer costs
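The checklist above can be folded into one rough estimator. A sketch where every rate is a placeholder to be replaced with current prices for your regions; the $0.02/GB cross-region transfer rate in particular is an assumed figure:

```python
def estimate_monthly_cost(avg_acus: float, storage_gb: float, transfer_gb: float,
                          acu_hour_rate: float = 0.12,   # USD/ACU-hour, placeholder
                          storage_rate: float = 0.10,    # USD/GB-month, placeholder
                          transfer_rate: float = 0.02,   # USD/GB, assumed figure
                          hours: int = 730) -> float:
    # Compute is billed at least at the 0.5 ACU minimum-capacity floor.
    compute = max(avg_acus, 0.5) * acu_hour_rate * hours
    return compute + storage_gb * storage_rate + transfer_gb * transfer_rate

# e.g. averaging 1 ACU, 50 GB stored, 20 GB replicated cross-region
print(f"${estimate_monthly_cost(1.0, 50, 20):.2f}")  # $93.00
```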
For a more accurate estimate, you could:
1. Set up a test environment with Aurora Serverless
2. Monitor the actual ACU consumption over a few days of typical use
3. Use CloudWatch metrics to observe scaling patterns
4. Calculate costs based on the observed usage patterns
This approach will give you a more realistic cost projection than theoretical calculations alone.
**Sources**
Serverless relational database – Amazon Aurora DSQL pricing – AWS
Moving from Aurora to Aurora Serverless | AWS re:Post
RDS Aurora Serverless - Is cost multiplied by availability zone? | AWS re:Post
RDS vs Aurora vs Aurora Serverless: A Real-World Cost Comparison for AWS Databases (mydbops.com, June 10, 2025)
Observation: Serverless v2 with scale-to-zero offers the most significant savings for environments with substantial idle time. Stopping provisioned instances saves compute but incurs storage costs. ... (Disclaimer: These are highly simplified estimations. Actual costs depend on precise usage, region, current pricing, RI strategy, and features used. Use the AWS Pricing Calculator for accurate estimates.)
Amazon Aurora Pricing 2025: Compare Plans and Costs (trustradius.com)
Right now the serverless is pretty expensive. ... If operating under a budget, this may not be the right tool. RDS is slightly cheaper than Aurora.