Top answer, from AlexYes on Stack Overflow
1 of 3 (score: 6)

Unfortunately, UNION is the only way here:

WITH bar (baz) AS
    (select 'a' union select 'b' union select 'c')
SELECT * from bar;
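If the constant rows need several typed columns, the same UNION trick still works; casting in the first branch fixes the column types for the whole set. A sketch extending the answer above (the column names and UNION ALL variant are illustrative, not from the answer):

```sql
-- Sketch: multi-column constants via UNION ALL; the casts in the
-- first branch set the types for every branch (names illustrative).
WITH bar (baz, qty) AS (
    select 'a'::varchar, 1::int
    union all select 'b', 2
    union all select 'c', 3
)
SELECT * from bar;
```

UNION ALL skips the duplicate-elimination step that plain UNION performs, which is unnecessary work when the constants are known to be distinct.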
2 of 3 (score: 3)

TL;DR: The most efficient way to simulate a multi-row VALUES clause is to build an array-of-arrays holding the rows and columns of the data, then unpack it and, if necessary, cast each element to the desired data type:

-- One inner SUPER array per row; unnesting data.arr yields one row
-- per inner array, and each element is cast to its column type
select rowdata[0]::varchar, rowdata[1]::decimal
from
    (select array(
        array('a', 1),
        array('b', 2)
    ) as arr) as data,
    data.arr as rowdata

(The data.arr as rowdata part unnests the array, producing one row per inner array.)


UNION ALL has the unfortunate behavior that each of the SELECT statements will be distributed across the cluster:

explain select * from (select 'a' union all select 'b' union all select 'c')
XN Subquery Scan derived_table1  (cost=0.00..0.09 rows=3 width=32) 
  ->  XN Append  (cost=0.00..0.06 rows=3 width=0) 
        ->  XN Network  (cost=0.00..0.02 rows=1 width=0) 
              Distribute Round Robin 
              ->  XN Subquery Scan "*SELECT* 1"  (cost=0.00..0.02 rows=1 width=0) 
                    ->  XN Result  (cost=0.00..0.01 rows=1 width=0) 
        ->  XN Network  (cost=0.00..0.02 rows=1 width=0) 
              Distribute Round Robin 
              ->  XN Subquery Scan "*SELECT* 2"  (cost=0.00..0.02 rows=1 width=0) 
                    ->  XN Result  (cost=0.00..0.01 rows=1 width=0) 
        ->  XN Network  (cost=0.00..0.02 rows=1 width=0) 
              Distribute Round Robin 
              ->  XN Subquery Scan "*SELECT* 3"  (cost=0.00..0.02 rows=1 width=0) 
                    ->  XN Result  (cost=0.00..0.01 rows=1 width=0)

With more than a few rows, this incurs significant overhead and makes the query extremely inefficient. Fortunately, the SUPER data type offers a workaround: because we select a single value (the array-of-arrays), the query planner sees a single result that it needs to produce on only one node, which is much cheaper to execute.
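Putting the pieces together, the SUPER-based constant rows can serve as a small inline lookup table. A sketch, assuming hypothetical names (my_table, code, weight) that are not from the answer:

```sql
-- Sketch: use the SUPER array-of-arrays as a lookup table to join
-- against real data. my_table / code / weight are hypothetical names.
with constants as (
    select rowdata[0]::varchar as code, rowdata[1]::decimal as weight
    from (select array(array('a', 1), array('b', 2)) as arr) as data,
         data.arr as rowdata
)
select t.*, c.weight
from my_table t
join constants c on t.code = c.code;
```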

Discussions

sql - How to select multiple rows filled with constants in Amazon Redshift? - Stack Overflow (stackoverflow.com)
I have already tried the common PostgreSQL answer, but it seems it doesn't work with Redshift: SELECT * FROM VALUES (1) AS q (col1); ERROR: 42883: function values(integer) does not exist ...

Multi-Select fails with redshift queries prepared statements (community.retool.com, September 29, 2020)
The following behavior seems like a possible bug based on this documentation: https://docs.retool.com/docs/working-with-select-components. I have a Multiselect with the values generated by a query {{company_list.data.company}} and a query running off this multiselect's values: select * from ...

Function/Procedure to Return a Result in Query in Redshift (r/SQL, February 3, 2023)
No. As a hack, maybe have the procedure produce a temporary table with the data you want; then, after calling the procedure, the next query can retrieve the data from the temporary table.

Best way to count distinct values (r/dataengineering, November 26, 2025)
First question: why?
SELECT - Amazon Redshift (docs.aws.amazon.com)
March 19, 2026 - Amazon Redshift will no longer support the creation of new Python UDFs starting Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the blog post ... Returns rows from tables, views, and user-defined functions. The maximum size for a single SQL statement is 16 MB. [ WITH with_subquery [, ...] ] SELECT [ TOP number | [ ALL | DISTINCT ] * | expression [ AS output_name ] [, ...] ] [ EXCLUDE column_list ] [ FROM table_reference [, ...] ] [ WHERE condition ] [ [ START WITH expression ] CONNECT BY expression ] [ GROUP BY ALL | expression [, ...] ] [ HAVING condition ] [ QUALIFY condition ] [ { UNION | ALL | INTERSECT | EXCEPT | MINUS } query ] [ ORDER BY expression [ ASC | DESC ] ] [ LIMIT { number | ALL } ] [ OFFSET start ]

Workato connectors - Redshift Select actions | Workato docs (docs.workato.com)
May 21, 2025 - This action lets you select rows based on criteria defined by a WHERE condition. Rows from the selected table that match the WHERE condition are returned as the output of this action.

Expression lists - Amazon Redshift (docs.aws.amazon.com)
select * from venue where (venuecity, venuestate) in (('Miami', 'FL'), ('Tampa', 'FL')) order by venueid; venueid | venuename | venuecity | venuestate | venueseats ---------+-------------------------+-----------+------------+------------ 28 | American Airlines Arena | Miami | FL | 0 54 | St.

Select data from a table - Amazon Redshift (docs.aws.amazon.com)
After you create a table and populate it with data, use a SELECT statement to display the data contained in the table. The SELECT * statement returns all the column names and row values for all of the data in a table.

INSERT examples - Amazon Redshift (docs.aws.amazon.com)
insert into category_stage values (default, default, default, default), (20, default, 'Country', default), (21, 'Concerts', 'Rock', default); select * from category_stage where catid in(0,20,21) order by 1; catid | catgroup | catname | catdesc -------+----------+---------+--------- 0 | General | General | General 20 | General | Country | General 21 | Concerts | Rock | General (3 rows)
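Since Redshift's INSERT does accept a multi-row VALUES list (as the INSERT example above shows), another workaround is to stage the constants in a temp table and SELECT from that. A sketch; the table and column names are assumptions:

```sql
-- Sketch: INSERT supports multi-row VALUES even though SELECT does not,
-- so a temp table can stand in for a VALUES-based derived table.
-- consts / baz / qty are illustrative names.
create temp table consts (baz varchar, qty int);
insert into consts values ('a', 1), ('b', 2), ('c', 3);
select * from consts;
```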
Find elsewhere

Useful SQL Queries for Redshift | Twilio Segment (segment.com)
Each Track call is stored as a distinct row in a single Redshift table called tracks. To get a table of your completed orders, you can run the following query: select * from initech.tracks where event = 'completed_order' That SQL query returns a table that looks like this: ...

AmazonRedshiftQuery | Tibco (docs.tibco.com)
Use this activity to execute a ... Amazon Redshift database. The AmazonRedshiftQuery activity returns information in the form of rows. ... The fields that were selected in the Input Settings tab will be available in the schema. You can either hard-code their values or map them to a field from the output ...

Amazon Redshift SELECT INTO | Hevo (hevodata.com)
January 12, 2026 - select username, lastname, sum(pricepaid-commission) as profit into temp table profits from sales, users where sales.sellerid=users.userid group by 1, 2 order by 3 desc; ... This blog goes into great detail about the Redshift SELECT INTO statement.

Redshift First_Value and Last_Value Functions | Hevo (hevodata.com)
January 10, 2026 - select venuename, venuestate, venueseats, last_value(venuename) over(partition by venuestate order by venueseats desc rows between unbounded preceding and unbounded following) from (select * from venue where venueseats > 0) order by venuestate; For the state of California, the Shoreline Amphitheatre has the fewest seats, so it is returned for each row in the partition. That is how the Redshift first_value and last_value window functions work.

How to select multiple rows filled with constants in Amazon Redshift? | LinkedIn (linkedin.com)
November 5, 2023 - Select the student name and address based on the ID from the students table. UNION ALL joins both queries and displays the result as one. The query selects the name Hira where the ID is 236 and the address Rawalpindi where the ID is 236.

Amazon Redshift Cheat Sheet | Zuar (zuar.com)
June 30, 2023 - INSERT INTO hiking_trails VALUES ('Craggy Gardens', 'Barnardsville', 'NC', '1.9 miles', 'easy', 'hiking'); ...

r/SQL on Reddit: Function/Procedure to Return a Result in Query in Redshift (reddit.com, February 3, 2023)

This seems like some basic proc/UDF functionality that I just can't figure out in Redshift. I currently have external tables that I'm partitioning by date. I just wanted to query the latest date in the table:

select *
from some_external_table
where date = (
    select max(substring(values, 3, 10))::date
    from svv_external_partitions
    where tablename = 'some_external_table'
);

That query against svv_external_partitions is rather ugly, and I wanted to wrap it in a UDF or procedure. SQL UDFs are very restrictive (you can't even use a FROM clause), so I'm trying to figure out whether a stored procedure can do it.

Here's my proc:

CREATE OR REPLACE PROCEDURE get_last_ds(
    schema_param IN varchar(256),
    table_param IN varchar(256),
    last_ds OUT date
)
AS $$
BEGIN
    EXECUTE 'SELECT max(substring(values, 3, 10))::date
             FROM svv_external_partitions
             WHERE schemaname = ''' || schema_param || '''
             AND tablename = ''' || table_param || ''';' INTO last_ds;
END;
$$ LANGUAGE plpgsql;

This works just fine, but it can only be executed using CALL:

begin;
call get_last_ds('some_external_schema', 'some_external_table');
end;

Is there a way to achieve the following?

select *
from some_external_table
where date = get_last_ds('some_external_schema', 'some_external_table');
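The temp-table hack suggested in the r/SQL reply might look something like this. This is only a sketch: the rewritten procedure body and the last_ds_result table name are assumptions, not from the thread:

```sql
-- Sketch: the procedure writes its answer into a temp table instead of
-- an OUT parameter, so a later query can read it. Names are illustrative.
CREATE OR REPLACE PROCEDURE get_last_ds(schema_param IN varchar(256),
                                        table_param IN varchar(256))
AS $$
BEGIN
    DROP TABLE IF EXISTS last_ds_result;
    EXECUTE 'CREATE TEMP TABLE last_ds_result AS
             SELECT max(substring(values, 3, 10))::date AS last_ds
             FROM svv_external_partitions
             WHERE schemaname = ''' || schema_param || '''
             AND tablename = ''' || table_param || '''';
END;
$$ LANGUAGE plpgsql;

CALL get_last_ds('some_external_schema', 'some_external_table');

SELECT *
FROM some_external_table
WHERE date = (SELECT last_ds FROM last_ds_result);
```

The temp table lasts for the rest of the session, so the SELECT can run as an ordinary query instead of needing the procedure's OUT parameter.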

Redshift: Insert Into Command - PopSQL (popsql.com)
Note that the VALUES keyword is omitted: INSERT INTO beta_users (first_name, last_name) SELECT first_name, last_name FROM users where beta = 1; While Redshift does not support the JSON datatype, you can still store properly formatted JSON strings in a CHAR or VARCHAR column.

RedshiftSQLOperator — apache-airflow-providers-amazon Documentation (airflow.apache.org)

task_insert_data = RedshiftSQLOperator(
    task_id='task_insert_data',
    sql=[
        "INSERT INTO fruit VALUES ( 1, 'Banana', 'Yellow');",
        "INSERT INTO fruit VALUES ( 2, 'Apple', 'Red');",
        "INSERT INTO fruit VALUES ( 3, 'Lemon', 'Yellow');",
        "INSERT INTO fruit VALUES ( 4, 'Grape', 'Purple');",
        "INSERT INTO fruit VALUES ( 5, 'Pear', 'Green');",
        "INSERT INTO fruit VALUES ( 6, 'Strawberry', 'Red');",
    ],
)

Creating a new table, "more_fruit", from the "fruit" table (airflow/providers/amazon/aws/example_dags/example_redshift.py):

task_get_all_table_data = RedshiftSQLOperator(
    task_id='task_get_all_table_data',
    sql="CREATE TABLE more_fruit AS SELECT * FROM fruit;",
)

RedshiftSQLOperator supports the parameters attribute, which allows us to dynamically pass parameters into SQL statements.