Most probably your DataFrame is a pandas DataFrame object, not a Spark DataFrame object.

try:

spark.createDataFrame(df).write.saveAsTable("dashboardco.AccountList")
Answer from Alex Ott on Stack Overflow
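A quick way to confirm which kind of DataFrame you are holding before calling Spark-only methods is an isinstance check (a minimal sketch using a plain pandas frame, since no Spark session is assumed here; the Spark counterpart to test against would be pyspark.sql.DataFrame):

```python
import pandas as pd

# Build a plain pandas DataFrame for illustration
df = pd.DataFrame({"id": [1, 2, 3]})

# isinstance tells you which API (pandas vs. Spark) the object supports
is_pandas = isinstance(df, pd.DataFrame)
print(is_pandas)  # True — so Spark methods like .write will not exist on it
```

If this prints True, wrap the object with spark.createDataFrame(df) before using Spark writer methods.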
Databricks Community
community.databricks.com › t5 › data-engineering › attributeerror-dataframe-object-has-no-attribute › td-p › 61132
AttributeError: 'DataFrame' object has no attribut... - Databricks Community - 61132
February 19, 2024 - Hello, I have some trouble deduplicating rows on the "id" column, with the method "dropDuplicatesWithinWatermark" in a pipeline. When I run this pipeline, I get the error message: "AttributeError: 'DataFrame' object has no attribute 'dropDuplicatesWithinWatermark'" Here is part of the code: @dl...
Databricks Community
community.databricks.com › t5 › data-engineering › attributeerror-dataframe-object-has-no-attribute-rename › td-p › 28109
Solved: AttributeError: 'DataFrame' object has no attribut... - Databricks Community - 28109
January 2, 2024 - Hello, I am doing the Data Science and Machine Learning course. The Boston housing has unintuitive column names. I want to rename them, e.g. so 'zn' becomes 'Zoning'. When I run this command: df_bostonLegible = df_boston.rename({'zn':'Zoning'}, axis='columns') Then I get the error "AttributeError: '...
Databricks
kb.databricks.com › python › function-object-no-attribute
AttributeError: ‘function’ object has no attribute - Databricks
May 19, 2022 - Using protected keywords from the DataFrame API as column names results in a function object has no attribute error message. Written by noopur.nigam Last published at: May 19th, 2022 · You are selecting columns from a DataFrame and you get an error message. ERROR: AttributeError: 'function' object has no attribute '_get_object_id' in job
Cloudera Community
community.cloudera.com › t5 › Support-Questions › Pyspark-issue-AttributeError-DataFrame-object-has-no › m-p › 78093
Pyspark issue AttributeError: 'DataFrame' object has no attribute 'saveAsTextFile'
January 2, 2024 - So, if someone could help resolve this issue that would be most appreciated ... As the error message states, the object, either a DataFrame or List does not have the saveAsTextFile() method. result.write.save() or result.toJavaRDD.saveAsTextFile() shoud do the work, or you can refer to DataFrame ...
Databricks
community.databricks.com › t5 › machine-learning › problem-creating-featurestore › td-p › 24035
Solved: Problem creating FeatureStore - Databricks Community - 24035
November 3, 2022 - Hi, When trying to create the first table in the Feature Store i get a message: ''DataFrame' object has no attribute 'isEmpty'... but it is not. So I cannot use the function: feature_store.create_table() With this code you should be able to reproduce the problem: #CHUNK 1 raw_data = spark.read.fo...
Microsoft Fabric Community
community.fabric.microsoft.com › t5 › Service › Fabric-Notebook-write-dataframe-to-lakehouse-fails-DataFrame › m-p › 4421535
Re: Fabric Notebook - write dataframe to lakehouse fails - 'DataFrame' object has no attri
February 21, 2025 - Hi. Thanks for the reply . It works using the convertion to spark dataframe as you suggested. The code belows works: import sempy import sempy.fabric as fabric workspaces = fabric.list_workspaces() # For simplicity, keep columns withouth " " in column names in order to avoid Error Invalid c...
Top answer (1 of 4)

As a workaround, downgrade to pandas v1.5

%pip install --upgrade pandas==1.5

The answers provided so far worked prior to 3 April 2023.

As of April 4, with pandas 2.0.0, you are no longer able to convert a pandas DataFrame to a Spark DataFrame using the command:

spark.createDataFrame(df)

Using the above command leads to the error mentioned in the question:

AttributeError: 'DataFrame' object has no attribute 'iteritems'

The iteritems function was removed in pandas 2.0.0. From the pandas 2.0.0 changelog:

Removed deprecated Series.iteritems(), DataFrame.iteritems(), use obj.items instead
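In code you control, the fix from the changelog is mechanical: replace iteritems() with items(), which yields the same (column name, Series) pairs and also works on pandas 1.x (a small sketch; the column names are made up):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

# pandas >= 2.0: DataFrame.iteritems() is gone; DataFrame.items() is the
# drop-in replacement, iterating (column_name, Series) pairs
cols = [name for name, series in df.items()]
print(cols)  # ['a', 'b']
```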

Meanwhile, the Spark code that converts a pandas DataFrame to a Spark DataFrame still calls iteritems:

/databricks/spark/python/pyspark/sql/pandas/conversion.py in createDataFrame(self, data, schema, samplingRatio, verifySchema)
    308                     warnings.warn(msg)
    309                     raise
--> 310         data = self._convert_from_pandas(data, schema, timezone)
    311         return self._create_dataframe(data, schema, samplingRatio, verifySchema)
    312 

/databricks/spark/python/pyspark/sql/pandas/conversion.py in _convert_from_pandas(self, pdf, schema, timezone)
    340                             pdf[field.name] = s
    341             else:
--> 342                 for column, series in pdf.iteritems():
    343                     s = _check_series_convert_timestamps_tz_local(series, timezone)
    344                     if s is not series:

It looks like we will have to wait for a fix before pandas 2.0.0 can be used with spark.createDataFrame.
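Besides downgrading pandas, a workaround some users apply (an unsupported sketch, not an official fix) is to restore the removed alias before calling spark.createDataFrame, so the Spark conversion code shown above finds iteritems again:

```python
import pandas as pd

# pandas >= 2.0 removed DataFrame.iteritems(); re-add it as an alias for
# items() so Spark versions that still call pdf.iteritems() keep working.
# On older pandas the attribute already exists and nothing changes.
if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items
```

With the alias in place, spark.createDataFrame(df) can proceed on affected Spark versions; newer Spark releases no longer call iteritems, so upgrading Spark is the cleaner long-term fix.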

Answer 2 of 4

You just need to call the display function with the pandas DataFrame as its argument, rather than calling it as a method of the pandas DataFrame class.

display(pdf)

Or you can simply put the variable holding the pandas DataFrame on the last line of a cell; it will then be printed using pandas' built-in representation:

import pyspark.sql.functions as F

pdf = spark.range(10).withColumn("rnd", F.rand()).toPandas()

Stack Overflow
stackoverflow.com › questions › 72461606 › data-bricks-spark-cluster-attributeerror-dataframe-object-has-no-attribute
data bricks: spark cluster AttributeError: 'DataFrame' object has no attribute 'copy' - Stack Overflow
AttributeError: 'DataFrame' object has no attribute 'copy' monthly_Imp_data_import_anaplan = monthly_Imp_data.copy() monthly_Imp_data_import_anaplan.fillna(0, inplace=True) anaplan_upload_file = monthly_Imp_data_import_anaplan.astype('string') i feel it is because of spark data frame ... You are using Pandas Dataframe syntax in Spark. If you want to continue using Pandas on Databricks then use
GitHub
github.com › dask › dask › issues › 8624
AttributeError: 'DataFrame' object has no attribute 'name'; Various stack overflow / github suggested fixes not working · Issue #8624 · dask/dask
January 26, 2022 - Then in the main script where I had done all the upstream transformations to the dataframe which we are working on, instantiate this class and call its get_df_of_ids method: { from myimportutils.my_import_utils import SomeUtils tools = SomeUtils(homes_to_import_unnormalized_dd_df) df_of_locale_ids = tools.get_df_of_ids() # This line fails: } ... { File "[ ... redacted]/pandas/core/generic.py", line 5487, in __getattr__ return object.__getattribute__(self, name) AttributeError: 'DataFrame' object has no attribute 'name'.
Author: david-thrower
Stack Overflow
stackoverflow.com › questions › 74670001 › keep-getting-dataframe-object-has-no-attribute-show-error-on-databricks
pyspark - Keep getting 'DataFrame' object has no attribute 'show' error on DataBricks - Stack Overflow
December 3, 2022 - What are you trying to do? You are aware of the fact that it is a pandas dataframe and not a Spark dataframe? ... On databricks you can use display: display(df) works both if df is a pandas dataframe or a spark dataframe.
Databricks
community.databricks.com › t5 › forums › filteredbylabelpage › board-id › data-engineering › label-name › object
Topics with Label: Object - Databricks Community
Im facing the same issue while trying to write to Salesforce, if you have found a resolution could you please share it ? ... PrivilegesSELECT: gives read access to an object.CREATE: gives ability to create an object (for example, a table in a schema).MODIFY: gives ability to add, delete, and modify data to or from an object.USAGE: does not give any abilities, but is an add... ... rdd4 = rdd3.reducByKey(lambda x,y: x+y)AttributeError: 'PipelinedRDD' object has no attribute 'reducByKey'Pls help me out with this
AWS re:Post
repost.aws › questions › QUvWrsRjenSrqHLJqLpy4DWg › attributeerror-dataframe-object-has-no-attribute-get-object-id
AttributeError: 'DataFrame' object has no attribute '_get_object_id' | AWS re:Post
October 11, 2018 - AttributeError: 'DataFrame' object has no attribute '_get_object_id' when I run the script. I'm pretty confident the error is occurring during this line: datasink = glueContext.write_dynamic_frame.from_catalog(frame = source_dynamic_frame, database = target_database, table_name = target_table_name, transformation_ctx = "datasink")
GitHub
github.com › databricks › spark-xml › issues › 207
DataFrameReader object has no attribute 'select' · Issue #207 · databricks/spark-xml
November 21, 2016 - I am trying to follow the example, but it gives me the following error I load xml from Hadoop >>> df.select("author", "@id").write().format("com. .option("rowTag", "book").save("newbooks.xml"); Fil...
Author: alexisaraya
Hail Discussion
discuss.hail.is › help [0.1]
AttributeError: 'DataFrame' object has no attribute 'to_spark' - Help [0.1] - Hail Discussion
July 22, 2018 - I am trying to covert a Hail table to a pandas dataframe: kk2 = hl.Table.to_pandas(table1) # convert to pandas I am not sure why I am getting this error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in 1 kk2 = hl.Table.to_pandas(table1) # convert to pandas /home/hail/hail.zip/hail/typecheck/check.py in wrapper(*args, **kwargs) 545 ...