After applying pivot you need to perform an aggregation; in this case the aggregation is first, since the count metric has already been computed upstream.

from pyspark.sql import functions as F

df = spark.createDataFrame([(123, 1, 1),
                            (245, 1, 3),
                            (123, 2, 5)],
                           ("hashtag_id", "user_id", "count"))

df.groupBy("user_id")\
  .pivot("hashtag_id")\
  .agg(F.first("count"))\
  .show()

Output

+-------+---+----+
|user_id|123| 245|
+-------+---+----+
|      1|  1|   3|
|      2|  5|null|
+-------+---+----+
Answer from Nithish on Stack Overflow