When you use elastic resize to change the size of your cluster, Redshift moves the existing slices to the new compute nodes, so the number of slices per node will change (the total slice count stays the same until a classic resize).

https://docs.aws.amazon.com/redshift/latest/mgmt/rs-resize-tutorial.html#elastic-resize

Answer from Joe Harris on Stack Overflow
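
One way to observe this effect is to query the STV_SLICES system table before and after a resize. A minimal sketch, assuming a provisioned cluster and a user with access to system tables:

```sql
-- Count slices per node; after an elastic resize the per-node count
-- changes while the cluster-wide total stays the same.
SELECT node, COUNT(*) AS slices
FROM stv_slices
GROUP BY node
ORDER BY node;
```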
Amazon Redshift provisioned clusters - AWS Documentation
RAM is the amount of memory in gibibytes (GiB) for each node. Default slices per node is the number of slices into which a compute node is partitioned when a cluster is created or resized with classic resize.
Discussions

Number of node and slices in Redshift cluster - Stack Overflow
Data slices are redistributed to the new nodes (when increasing nodes) or the existing nodes (when decreasing nodes). This allows the resize to complete very quickly. The total slice count does not change until you do a "classic" resize. docs.aws.amazon.com/redshift/latest/mgmt/… ... More on stackoverflow.com
Issue distributing data among nodes
I am running daily analyze for tables with >5% stat_off or >5% unsorted, ran a vacuum FULL 100 percent on those tables that I switched to EVEN after changing the diststyle. No difference ... In Redshift data distribution happens across slices. A compute node is partitioned into slices. More on repost.aws
July 8, 2024
Redshift cluster uses only 6 of 8 slices after scaling from 2 to 4 nodes
When you scale a Redshift cluster, the data distribution doesn't automatically rebalance across all slices optimally. ... Data Distribution Skew: Even though you have 8 slices (2 per node) in your 4-node ra3.xlplus cluster, your data is not being evenly distributed across all slices. More on repost.aws
September 23, 2025
amazon web services - Redshift cluster, how to get information of number of slice - Stack Overflow
Overview of Amazon Redshift clusters says that ra3.4xlarge has a default of 4 slices per node. More on stackoverflow.com
Buyer's Guide to Redshift Architecture, Pricing, and Performance | Panoply
In Redshift, each Compute Node is partitioned into slices, and each slice receives part of the memory and disk space. The Leader Node distributes data to the slices, and allocates parts of a user query or other database operation to the slices.
Data warehouse system architecture - Amazon Redshift
The number of slices per node is determined by the node size of the cluster. For more information about the number of slices for each node size, go to About clusters and nodes in the Amazon Redshift Management Guide.
Data distribution for query optimization - Amazon Redshift
One node is the leader node, which ... disk storage for a compute node is divided into a number of slices. The number of slices per node depends on the node size of the cluster....
STV_SLICES - Amazon Redshift
Use the STV_SLICES table to view the current mapping of a slice to a node.
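
As a sketch of how STV_SLICES combines with other system tables, the per-slice row counts for a single table can be checked like this (`my_table` is a placeholder name):

```sql
-- Rows per slice for one table; large differences between slices
-- indicate distribution skew.
SELECT s.node, p.slice, SUM(p.rows) AS rows
FROM stv_tbl_perm p
JOIN stv_slices s ON p.slice = s.slice
WHERE TRIM(p.name) = 'my_table'
GROUP BY s.node, p.slice
ORDER BY s.node, p.slice;
```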
Demystifying Redshift Cluster Resizing - Halodoc Blog
August 17, 2023 - Hence keeping the write downtime ... The process took around ~15 to 20 mins. Initially, we had 10 slices, i.e. 2 slices per node (2 × 5 = 10 slices)...
Exploring Amazon Redshift Architecture: A Comprehensive Guide - ProsperOps
September 19, 2024 - Node slices are the divisions of Redshift compute nodes. They get a share of the node’s memory and disk space, which equips them to work on a part of the data query assigned to a compute node.
Top answer
This issue appears to be related to how Amazon Redshift distributes data across slices after scaling your cluster. When you scale a Redshift cluster, the data distribution doesn't automatically rebalance across all slices optimally. Here's what's happening and what you can try:

1. **Data Distribution Skew**: Even though you have 8 slices (2 per node) in your 4-node ra3.xlplus cluster, your data is not being evenly distributed across all slices. This is causing CPU imbalance, with older nodes working harder than newer ones.

2. **VACUUM and ANALYZE**: Run a full vacuum and analyze on your tables. This can help redistribute data and update statistics:

```
VACUUM FULL;
ANALYZE;
```

3. **Check Data Distribution**: Run a diagnostic query to identify tables with data skew:

```
select trim(pgn.nspname) as schema,
       trim(a.name) as "table",
       id as tableid,
       decode(pgc.reldiststyle, 0, 'even', 1, det.distkey, 8, 'all') as distkey,
       dist_ratio.ratio::decimal(10,4) as skew,
       det.head_sort as "sortkey",
       det.n_sortkeys as "#sks",
       b.mbytes,
       decode(b.mbytes, 0, 0, ((b.mbytes/part.total::decimal)*100)::decimal(5,2)) as pct_of_total,
       decode(det.max_enc, 0, 'n', 'y') as enc,
       a.rows,
       decode(det.n_sortkeys, 0, null, a.unsorted_rows) as unsorted_rows,
       decode(det.n_sortkeys, 0, null,
              decode(a.rows, 0, 0, (a.unsorted_rows::decimal(32)/a.rows)*100))::decimal(5,2) as pct_unsorted
from (select db_id, id, name, sum(rows) as rows,
             sum(rows) - sum(sorted_rows) as unsorted_rows
      from stv_tbl_perm a
      group by db_id, id, name) as a
join pg_class as pgc on pgc.oid = a.id
join pg_namespace as pgn on pgn.oid = pgc.relnamespace
left outer join (select tbl, count(*) as mbytes
                 from stv_blocklist
                 group by tbl) b on a.id = b.tbl
inner join (select attrelid,
                   min(case attisdistkey when 't' then attname else null end) as "distkey",
                   min(case attsortkeyord when 1 then attname else null end) as head_sort,
                   max(attsortkeyord) as n_sortkeys,
                   max(attencodingtype) as max_enc
            from pg_attribute
            group by 1) as det on det.attrelid = a.id
inner join (select tbl, max(mbytes)::decimal(32)/min(mbytes) as ratio
            from (select tbl, trim(name) as name, slice, count(*) as mbytes
                  from svv_diskusage
                  group by tbl, name, slice) as d
            group by tbl, name) as dist_ratio on a.id = dist_ratio.tbl
join (select sum(capacity) as total
      from stv_partitions
      where part_begin = 0) as part on 1 = 1
where mbytes is not null
order by mbytes desc;
```

4. **Deep Copy with DISTSTYLE AUTO**: For tables showing significant skew, try recreating them with DISTSTYLE AUTO instead of EVEN or a specific DISTKEY. This lets Redshift choose the optimal distribution style:

```
CREATE TABLE new_table DISTSTYLE AUTO AS SELECT * FROM original_table;
```

5. **Batch Processing for Large Tables**: For very large tables, consider using a batched approach for the deep copy operations to improve performance.

6. **Check for Missing Statistics**: Run this query to identify tables with missing statistics that might be affecting distribution:

```
select substring(trim(plannode), 1, 100) as plannode, count(*)
from stl_explain
where plannode like '%missing statistics%'
group by plannode
order by 2 desc;
```

If these steps don't resolve the issue, you may need to contact AWS Support, as there could be an underlying configuration issue with your cluster after scaling.

**Sources**
- Redshift diagnostics queries to identify the performance bottleneck | AWS re:Post
- Issue distributing data among nodes | AWS re:Post
- Data distribution for query optimization - Amazon Redshift
- Improve COPY ingestion performance for large data loads on Amazon Redshift | AWS re:Post
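
A lighter-weight alternative to the long diagnostic query above is the SVV_TABLE_INFO system view, which already exposes a per-table skew ratio. A minimal sketch:

```sql
-- skew_rows is the ratio of rows on the slice with the most rows to
-- the slice with the fewest; values near 1 mean even distribution.
SELECT "table", diststyle, skew_rows
FROM svv_table_info
ORDER BY skew_rows DESC
LIMIT 20;
```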
Underlying Structure of Redshift for Big Data
July 8, 2021 - Redshift currently offers 3 families of instances: Dense Compute (dc2), Dense Storage (ds2), and Managed Storage (ra3). The slices can range from 2 per node to 16 per node depending on the instance family and instance type.
What is the Role of Clusters in AWS Redshift Architecture? | Secoda
August 12, 2024 - Compute nodes are the workhorses of AWS Redshift, executing queries and processing data. Each compute node is partitioned into units called slices, which allows for efficient data processing.
Amazon Redshift Architecture Explained: Leader Node, Compute Nodes, and Performance Tuning | by Kuldeepsinh Vaghela | Medium
April 24, 2025 - Parallel Data Processing (Compute Nodes and Slices): Each compute node, and specifically each slice within those nodes that contains relevant data from the sales table within the specified date range, will independently perform the following operations on its portion of the data: Filtering: Selects the sales records where sale_date falls between '2024-01-01' and '2024-03-31'. Joining: Joins the filtered sales data with the corresponding products data. Since the products table is distributed as ALL, each compute node has a local copy, making the join operation efficient. If sales was distributed differently, Redshift might need to move data between slices or nodes during the join operation (a process called data shuffling).
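
The DISTSTYLE ALL behavior described above can be sketched with hypothetical DDL (table and column names are illustrative):

```sql
-- Small dimension table replicated in full to every compute node,
-- so joins against it require no data shuffling between slices.
CREATE TABLE products (
    product_id BIGINT,
    name       VARCHAR(100),
    category   VARCHAR(50)
)
DISTSTYLE ALL;
```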
Amazon Redshift Architecture and Its Components | My Passion Behind Blogging
September 27, 2018 - Each Slice has a portion of Compute Node’s memory and disk assigned to it where it performs Query Operations. The Leader Node is responsible for assigning a Query code and data to a slice for execution. Slices once assigned query load work in parallel to generate query results. Data is distributed among the Slices on the basis of Distribution Style and Distribution Key of a particular table. An even distribution of data enables Redshift to assign workload evenly to slices and maximizes the benefit of parallel processing.
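
As a sketch of how a distribution key steers rows to slices (hypothetical table and column names):

```sql
-- Rows are assigned to slices by hashing customer_id, so rows that
-- join on customer_id land on the same slice; sale_date orders rows
-- on disk for efficient range filters.
CREATE TABLE sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_date);
```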
Redshift Single vs Multi-Node Cluster Archives - Jayendra's Cloud Certification Blog
Number of slices per node is determined by the node size of the cluster. When a table is created, one column can optionally be specified as the distribution key. When the table is loaded with data, the rows are distributed to the node slices ...
Redshift cluster and Redshift connector limitations | Twilio
When scaling up your cluster by ... table increases. For example, if you have a table with 10 columns, Redshift will preallocate 20 MB of space (10 columns × 2 slices) per node...