SparkSession is not a replacement for a SparkContext but an equivalent of the SQLContext. Just use it the same way you used to use SQLContext:

spark.createDataFrame(...)

and if you ever have to access the SparkContext, use the sparkContext attribute:

spark.sparkContext

so if you need an SQLContext for backwards compatibility, you can build one from the session:

SQLContext(sparkContext=spark.sparkContext, sparkSession=spark)
Answer from zero323 on Stack Overflow
GitHub
[BUG][Spark] DeltaTable.forPath doesn't work with Spark Connect · Issue #1967 · delta-io/delta
August 9, 2023 -
from pyspark.sql import SparkSession
spark = SparkSession.builder.remote("sc://localhost").getOrCreate()
columns = ["id","name"]
data = [(1,"Sarah"),(2,"Maria")]
df = spark.createDataFrame(data).toDF(*columns)
df.write.format('delta').mode('overwrite').save('s3a://<bucket>/<name>')
...
from pyspark.sql import SparkSession
from delta.tables import *
spark = SparkSession.builder.remote("sc://localhost").getOrCreate()
dt = DeltaTable.forPath(spark, "s3a://<bucket>/<name>")
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Author   stvno
Discussions

PySpark Recipes persist DataFrame
Hi, I'm using PySpark Recipes. To reduce execution time and memory usage, I would like to use the function DataFrame.persist().
community.dataiku.com
April 15, 2020
python - createDataFrame not working in Spark 2.0.0 - Stack Overflow
I am trying to work through some of the examples in the new Spark 2.0 documentation. I am working in Jupyter Notebooks and command line. I can create a SparkSession with no problem. However when I ...
stackoverflow.com
python - 'PipelinedRDD' object has no attribute 'sparkSession' when creating dataframe in pyspark - Stack Overflow
stackoverflow.com
Databricks Community
AttributeError: 'SparkSession' object has no attri... - Databricks Community - 33970
December 4, 2022 - I'm getting the error... AttributeError: 'SparkSession' object has no attribute '_wrapped'
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
in ()
      1 from sparknlp.training import CoNL...
MungingData
Creating and reusing the SparkSession with PySpark - MungingData
You should only be using getOrCreate in functions that should actually be creating a SparkSession. getActiveSession is more appropriate for functions that should only reuse an existing SparkSession. The show_output_to_df function in quinn is a good example of a function that uses getActiveSession. This function converts the string that's outputted from DataFrame#show back into a DataFrame object.
Dataiku Community
PySpark Recipes persist DataFrame — Dataiku Community
April 15, 2020 - But I have this error message: 'Job failed: Pyspark code failed: At line 186: <type 'exceptions.AttributeError'>: 'SparkSession' object has no attribute '_getJavaStorageLevel'
Databricks
Topics with Label: Sparksession - Databricks Community
Databricks docs here: https://docs.databricks.com/notebooks/notebook-isolation.html state that "Every notebook attached to a cluster has a pre-defined variable named spark that represents a SparkSession." What if 2 users run the same notebook on the sa...
The spark session is isolated at the notebook level and is not isolated at the user level. So, two users accessing the same notebook will be using the same spark session ...
I want to use the same spark session which created in one notebook and need to be used in another notebook in across same environment, Example, if some of the (variable)object got initialized in the first notebook, i need to use the same object in t...
Cloudera Community
Pyspark issue AttributeError: 'DataFrame' object has no attribute 'saveAsTextFile'
January 2, 2024 - AttributeError: 'DataFrame' object has no attribute 'saveAsTextFile' Can someone take a look at the code and let me know where I'm going wrong:
#%%
import findspark
findspark.init('/home/packt/spark-2.1.0-bin-hadoop2.7')
from pyspark.sql import SparkSession
def main():
    spark = SparkSession.builder.appName('aggs').getOrCreate()
    df = spark.read.csv('/home/packt/Downloads/Spark_DataFrames/sales_info.csv',inferSchema=True,header=True)
    df.createOrReplaceTempView('sales_info')
    example8 = spark.sql("""SELECT * FROM sales_info ORDER BY Sales DESC""")
    example8.saveAsTextFile("juyfd")
main()
CSDN
pyspark error: AttributeError: 'SparkSession' object has no attribute 'parallelize'
August 23, 2022 -
import findspark
findspark.init(spark_home='/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive', python_path='python2.7')
import pyspark
from pyspark.sql import *
sc = pyspark.sql.SparkSession.builder.master("yarn-client").config("spark.executor.memory", "2g").config('spark.driver.memory', '1g').config('spark.driver.cores', '4').enableHiveSupport().getOrCreate()
sqlContext = SQLContext(sc)
...
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-1db231ce21c9> in <module>()
----> 1 spark_df = sqlContext.createDataFrame(df_in)
/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/context.pyc in createDataFrame(self, data, schema, samplingRatio)
    297
Py4JJavaError: ...
Databricks Community
Can use graphframes DBR 14.3 - Databricks Community - 82928
July 20, 2025 - Thanks for the response -werners-. Version 0.8.3 installed via https://pypi.org/project/graphframes-latest/ gives a different error: AttributeError: 'SparkSession' object has no attribute '_sc'. No version above 0.6 is available via %pip install graphframes --upgrade.
CopyProgramming
AttributeError: 'SparkSession' object has no attribute 'createDataFrame' - Apache spark
June 3, 2023 - AttributeError in Spark: 'createDataFrame' method cannot be accessed in 'SQLContext' object, AttributeError in Pyspark: 'SparkSession' object lacks 'serializer' attribute, Attribute 'sparkContext' not found within 'SparkSession' object, Pycharm fails to function with SparkSession on Ubuntu
Apache JIRA
[SPARK-26627] sql_ctx loses '_conf' attribute for a pyspark dataframe converted to jdf and back - ASF JIRA
January 16, 2019 -
-> 2079         if self.sql_ctx._conf.pandasRespectSessionTimeZone():
   2080             timezone = self.sql_ctx._conf.sessionLocalTimeZone()
   2081         else:
AttributeError: 'SparkSession' object has no attribute '_conf'
GitHub
'SparkSession' object has no attribute 'serializer' · Issue #124 · awslabs/aws-glue-libs
January 21, 2022 - An error was encountered: 'SparkSession' object has no attribute 'serializer'
Traceback (most recent call last):
  File "/home/glue_user/aws-glue-libs/PyGlue.zip/awsglue/transforms/transform.py", line 24, in apply
    return transform(*args, **kwargs)
  File "/home/glue_user/aws-glue-libs/PyGlue.zip/awsglue/transforms/dynamicframe_filter.py", line 18, in __call__
    return frame.filter(f, transformation_ctx, info, stageThreshold, totalThreshold)
  File "/home/glue_user/aws-glue-libs/PyGlue.zip/awsglue/dynamicframe.py", line 94, in filter
    return self.mapPartitions(func, True, transformation_ctx, info, stage
Author   goldengrisha
GitHub
Use SparkSession instead of SQLContext · Issue #24 · commoncrawl/cc-pyspark
"As of Spark 2.0, [SQLContext] is replaced by SparkSession." (see SQLContext). replacing SQLContext by SparkSession might simplify the code as the session also holds the SparkContext obje...
Cloudera Community
Solved: AttributeError in Spark - Cloudera Community - 185732
July 18, 2018 - Error : AttributeError: 'HiveContext' object has no attribute 'load' ... In spark 2 you should leverage spark session instead of spark context. To read jdbc datasource just use the following code:
from pyspark.sql import SparkSession
from pyspark.sql import Row
spark = SparkSession \
    .builder \
    .appName("data_import") \
    .config("spark.dynamicAllocation.enabled", "true") \
    .config("spark.shuffle.service.enabled", "true") \
    .enableHiveSupport() \
    .getOrCreate()
jdbcDF2 = spark.read \
    .jdbc("jdbc:sqlserver://10.24.40.29;database=CORE;username=user1;password=Passw0rd", "test")
Codecademy Forums
FAQ: RDDs with PySpark - Start Coding with PySpark - Uncategorized - Codecademy Forums
July 1, 2022 - This community-built FAQ covers the “Start Coding with PySpark” exercise from the lesson “RDDs with PySpark”. Paths and Courses This exercise can be found in the following Codecademy content: [Beta] Introduction to…