The correct Postgres syntax would be:

SELECT * FROM (VALUES (1)) AS q (col1);

A set of parentheses was missing.

But Redshift does not support free-standing VALUES expressions (outside of INSERT commands). So, for a single row:

SELECT * FROM (SELECT 1) AS q (col1);
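
The same derived-table trick covers a single row with several columns; as a sketch, the column names go in the alias list just as in the one-column case:

```sql
SELECT * FROM (SELECT 1, 'a') AS q (col1, col2);
```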

For multiple rows (without using UNION ALL, as requested) you can use a temporary table:

CREATE TEMP TABLE q(col1 int);
INSERT INTO q(col1)
VALUES (1), (2), (3);

SELECT * FROM q;

The manual:

A temporary table is automatically dropped at the end of the session in which it was created.

If UNION ALL is an option:

SELECT 1 AS col1
UNION ALL SELECT 2
UNION ALL SELECT 3;

— Answer from Erwin Brandstetter on Stack Overflow
Answer 1 of 3 (score 6):

Unfortunately, UNION is the only way here:

WITH bar (baz) AS
(select 'a' union select 'b' union select 'c')
SELECT * from bar;
Answer 2 of 3 (score 3):

TLDR: The most efficient way to simulate a multi-row VALUES clause is to create an array-of-arrays for the rows and columns of the data, and then unpack it and (if necessary) cast to the desired data types:

select rowdata[0]::varchar, rowdata[1]::decimal
from
    (select array(
        array('a', 1),
        array('b', 2)
    ) as arr) as data,
    data.arr as rowdata

(The data.arr as rowdata bit is to unnest the array.)


UNION ALL has the unfortunate behavior that each of the SELECT statements will be distributed across the cluster:

explain select * from (select 'a' union all select 'b' union all select 'c')
XN Subquery Scan derived_table1  (cost=0.00..0.09 rows=3 width=32) 
  ->  XN Append  (cost=0.00..0.06 rows=3 width=0) 
        ->  XN Network  (cost=0.00..0.02 rows=1 width=0) 
              Distribute Round Robin 
              ->  XN Subquery Scan "*SELECT* 1"  (cost=0.00..0.02 rows=1 width=0) 
                    ->  XN Result  (cost=0.00..0.01 rows=1 width=0) 
        ->  XN Network  (cost=0.00..0.02 rows=1 width=0) 
              Distribute Round Robin 
              ->  XN Subquery Scan "*SELECT* 2"  (cost=0.00..0.02 rows=1 width=0) 
                    ->  XN Result  (cost=0.00..0.01 rows=1 width=0) 
        ->  XN Network  (cost=0.00..0.02 rows=1 width=0) 
              Distribute Round Robin 
              ->  XN Subquery Scan "*SELECT* 3"  (cost=0.00..0.02 rows=1 width=0) 
                    ->  XN Result  (cost=0.00..0.01 rows=1 width=0)

On more than a few rows, this incurs an absurd overhead and makes queries extremely inefficient. Fortunately, the SUPER data type offers a workaround: when we select a single value (the array-of-arrays), the query planner sees a single query that only needs to run on one node, which is much more efficient to execute.
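
As a variation on that workaround, the rows can also be packed into one SUPER literal via json_parse and then unnested and cast. This is a sketch assuming a Redshift release with SUPER/PartiQL support; the CTE name and column names here are illustrative:

```sql
-- Build all rows as a single SUPER array-of-arrays, then unnest it.
WITH vals (col1, col2) AS (
    SELECT rowdata[0]::varchar, rowdata[1]::int
    FROM (SELECT json_parse('[["a", 1], ["b", 2], ["c", 3]]') AS arr) AS d,
         d.arr AS rowdata   -- unnests the outer array, one row per element
)
SELECT * FROM vals;
```

Because the literal arrives as one value, the planner does not have to distribute a separate subquery per row, mirroring the array() approach above.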
