"sklearn.datasets" is a scikit package, where it contains a method load_iris().

load_iris(), by default return an object which holds data, target and other members in it. In order to get actual values you have to read the data and target content itself.

Whereas 'iris.csv', holds feature and target together.

FYI: If you set return_X_y as True in load_iris(), then you will directly get features and target.

from sklearn import datasets
data,target = datasets.load_iris(return_X_y=True)
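A minimal sketch contrasting the two access patterns (assuming scikit-learn is installed):

```python
from sklearn import datasets

# Default: load_iris() returns a Bunch; features and labels
# live in separate attributes.
iris = datasets.load_iris()
X, y = iris.data, iris.target
print(X.shape, y.shape)  # (150, 4) (150,)

# With return_X_y=True the Bunch is skipped and you get the
# two arrays directly.
data, target = datasets.load_iris(return_X_y=True)
print(data.shape, target.shape)  # (150, 4) (150,)
```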
Answer from vipin bansal on Stack Exchange
Stack Overflow
stackoverflow.com › questions › 61570636 › how-can-i-fix-data-frame-has-no-attribute-plot
python - How can I fix data frame has no attribute plot - Stack Overflow
May 3, 2020 - dd=df.select(df.Color,df.ListPrice.cast("float")) colordf = dd[['Color','ListPrice']] colordfgroup = colordf.groupby('Color').mean('ListPrice') colordfgroup.show() my_plot = colordfgroup.plot(kind...
2 of 5
1

The Iris Dataset from Sklearn is in Sklearn's Bunch format:

print(type(iris))
print(iris.keys())

output:

<class 'sklearn.utils.Bunch'>
dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', 'filename'])

So, that's why you can access it as:

x=iris.data
y=iris.target
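Since the original problem was about calling .plot, note that you can also wrap the Bunch in a pandas DataFrame yourself — a sketch, assuming pandas and scikit-learn are available:

```python
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()

# Put features and target side by side, mirroring what iris.csv holds.
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['species'] = iris.target

print(df.shape)  # (150, 5)
```

From here the usual DataFrame API, including df.plot, works as expected.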

But when you read the CSV file into a DataFrame the way you mentioned:

iris = pd.read_csv('iris.csv',header=None).iloc[:,2:4]
iris.head()

output is:

    2   3
0   petal_length    petal_width
1   1.4 0.2
2   1.4 0.2
3   1.3 0.2
4   1.5 0.2

Here the column names are '2' and '3', and the real header row has become the first data row.

First of all you should read the CSV file as:

df = pd.read_csv('iris.csv')

You should not include header=None, because the first row of your CSV file contains the column names, i.e. the headers.
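You can see the difference with a tiny stand-in CSV (hypothetical data, not the real iris.csv):

```python
import io
import pandas as pd

# A small CSV with a header row, standing in for iris.csv.
csv_text = "petal_length,petal_width,species\n1.4,0.2,setosa\n1.3,0.2,setosa\n"

# With header=None the header row is read as ordinary data,
# so the column names are just the integers 0, 1, 2.
bad = pd.read_csv(io.StringIO(csv_text), header=None)
print(list(bad.columns))  # [0, 1, 2]
print(bad.iloc[0, 0])     # 'petal_length'  (the header, now data)

# Without header=None, pandas uses the first row as column names.
good = pd.read_csv(io.StringIO(csv_text))
print(list(good.columns))  # ['petal_length', 'petal_width', 'species']
```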

So, now what you can do is something like this:

X = df.iloc[:, [2, 3]]  # columns 2 and 3, i.e. 'petal_length' and 'petal_width'
y = df.iloc[:, 4]       # the label column, i.e. 'species'

or if you want to use the column names then:

X = df[['petal_length', 'petal_width']]
y = df['species']

Also, if you want to convert the labels from strings to numerical format, use sklearn's LabelEncoder:

from sklearn import preprocessing
le = preprocessing.LabelEncoder()
y = le.fit_transform(y)
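A quick sketch of what LabelEncoder does, using hypothetical species values: fit_transform sorts the unique labels alphabetically and assigns integers in that order.

```python
from sklearn import preprocessing

# Hypothetical labels, as they would appear in iris.csv.
y = ['setosa', 'versicolor', 'virginica', 'setosa']

le = preprocessing.LabelEncoder()
y_enc = le.fit_transform(y)

print(list(le.classes_))  # ['setosa', 'versicolor', 'virginica']
print(list(y_enc))        # [0, 1, 2, 0]
```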
Cloudera Community
community.cloudera.com › t5 › Support-Questions › Pyspark-issue-AttributeError-DataFrame-object-has-no › td-p › 78093
Pyspark issue AttributeError: 'DataFrame' object has no attribute 'saveAsTextFile'
January 2, 2024 - #%% import findspark findspark.init('/home/packt/spark-2.1.0-bin-hadoop2.7') from pyspark.sql import SparkSession spark = SparkSession.builder.appName('ops').getOrCreate() df = spark.read.csv('/home/packt/Downloads/Spark_DataFrames/Person_Person.csv',inferSchema=True,header=True) df.createOrReplaceTempView('Person_Person') myresults = spark.sql("""SELECT PersonType ,COUNT(PersonType) AS `Person Count` FROM Person_Person GROUP BY PersonType""") myresults.collect() result = myresults.collect() result result.saveAsTextFile("test") However, I'm now getting the following error message: AttributeError: 'list' object has no attribute 'saveAsTextFile'
Spark By {Examples}
sparkbyexamples.com › home › hbase › attributeerror: ‘dataframe’ object has no attribute ‘map’ in pyspark
AttributeError: 'DataFrame' object has no attribute 'map' in PySpark - Spark By {Examples}
March 27, 2024 - df2=df.map(lambda x: [x[0],x[1]]) File "C:\ProgramData\Anaconda3\lib\site-packages\pyspark\sql\dataframe.py", line 1401, in __getattr__ "'%s' object has no attribute '%s'" % (self.__class__.__name__, name)) AttributeError: 'DataFrame' object has no attribute 'map'
GitHub
github.com › Nixtla › statsforecast › discussions › 779
Input dataframe for statsforecast · Nixtla/statsforecast · Discussion #779
February 9, 2024 - I converted it to a pyspark.pandas.frame.DataFrame and it looks good. However, when i apply the function StatsForecast.plot(pd_df) (being pd_df my pandas dataframe), i get the following error: AttributeError: 'NoneType' object has no attribute 'Date'
Author   Nixtla
Apache
spark.apache.org › docs › latest › api › python › reference › pyspark.pandas › frame.html
DataFrame — PySpark 4.1.1 documentation - Apache Spark
DataFrame.plot is both a callable method and a namespace attribute for specific plotting methods of the form DataFrame.plot.<kind>.
Databricks Community
community.databricks.com › t5 › data-engineering › attributeerror-dataframe-object-has-no-attribute › td-p › 61132
AttributeError: 'DataFrame' object has no attribut... - Databricks Community - 61132
February 19, 2024 - Hello, I have some trouble deduplicating rows on the "id" column, with the method "dropDuplicatesWithinWatermark" in a pipeline. When I run this pipeline, I get the error message: "AttributeError: 'DataFrame' object has no attribute 'dropDuplicatesWithinWatermark'" Here is part of the code: @dl...
GitHub
github.com › dask › dask › issues › 8624
AttributeError: 'DataFrame' object has no attribute 'name'; Various stack overflow / github suggested fixes not working · Issue #8624 · dask/dask
January 26, 2022 - Then in the main script where I had done all the upstream transformations to the dataframe which we are working on, instantiate this class and call its get_df_of_ids method: { from myimportutils.my_import_utils import SomeUtils tools = SomeUtils(homes_to_import_unnormalized_dd_df) df_of_locale_ids = tools.get_df_of_ids() # This line fails: } ... { File "[ ... redacted]/pandas/core/generic.py", line 5487, in __getattr__ return object.__getattribute__(self, name) AttributeError: 'DataFrame' object has no attribute 'name'.
Author   david-thrower
Cumulative Sum
cumsum.wordpress.com › 2020 › 10 › 10 › pyspark-attributeerror-dataframe-object-has-no-attribute-_get_object_id
[pyspark] AttributeError: ‘DataFrame’ object has no attribute ‘_get_object_id’
October 10, 2020 - Consider the following two data ... df2: df = spark.createDataFrame([[1, 2, 3], [2, 3, 4], [4, 5, 6]], ['id', 'a', 'b']) df2 = spark.createDataFrame([[1], [2]], ['id']) df.show() +---+---+---+ | id| a| b| +---+---+---+ | 1| 2| 3| | 2| 3| 4| | 4| 5| 6| ...
Incorta Community
community.incorta.com › t5 › data-schemas-knowledgebase › issue-with-converting-a-pandas-dataframe-to-a-spark-dataframe › ta-p › 5279
Issue with converting a Pandas DataFrame to a Spar... - Incorta Community
November 15, 2023 - Symptoms You received the error when trying to convert a Pandas DataFrame to Spark DataFrame in a PySpark MV. Here is the error.- INC_03070101: Transformation error Error 'DataFrame' object has no attribute 'iteritems' AttributeError : 'DataFrame' object has no attribute 'iteritems' Diagnosis Since...
GitHub
github.com › YosefLab › Compass › issues › 92
AttributeError: 'DataFrame' object has no attribute 'iteritems' · Issue #92 · YosefLab/Compass
April 5, 2023 - "Evaluating Reaction Pentalties:" Traceback (most recent call last): File "/home/ubuntu/.local/bin/compass", line 8, in sys.exit(entry()) File "/home/ubuntu/.local/lib/python3.8/site-packages/compass/main.py", line 588, in entry penalties = eval_reaction_penalties(args['data'], args['model'], File "/home/ubuntu/.local/lib/python3.8/site-packages/compass/compass/penalties.py", line 96, in eval_reaction_penalties reaction_penalties = eval_reaction_penalties_shared( File "/home/ubuntu/.local/lib/python3.8/site-packages/compass/compass/penalties.py", line 158, in eval_reaction_penalties_shared for name, expression_data in expression.iteritems(): File "/home/ubuntu/.local/lib/python3.8/site-packages/pandas/core/generic.py", line 5989, in getattr return object.getattribute(self, name) AttributeError: 'DataFrame' object has no attribute 'iteritems'
Author   JoelHaas
Oceanhackweek
oceanhackweek.org › ohw22 › tutorials › 02-Wed › 01-data-visualization-in-python › tutorial › 04_Basic_Plotting.html
Read in the data — OceanHackWeek
Previous sections have focused on putting various simple types of data together in notebooks and deployed servers, but most people will want to include plots as well. In this section, we’ll focus on one of the simplest (but still powerful) ways to get a plot · If you have tried to visualize ...
Top answer
1 of 2
7

The syntax you are using is for a pandas DataFrame. To achieve this with a Spark DataFrame, you should use the withColumn() method. This works well for a wide range of well-defined DataFrame functions, but it is a little more complicated for user-defined mapping functions.

General Case

In order to define a udf, you need to specify the output data type. For instance, if you wanted to apply a function my_func that returned a string, you could create a udf as follows:

import pyspark.sql.functions as f
from pyspark.sql.types import StringType

my_udf = f.udf(my_func, StringType())

Then you can use my_udf to create a new column like:

df = df.withColumn('new_column', my_udf(f.col("some_column_name")))

Another option is to use select:

df = df.select("*", my_udf(f.col("some_column_name")).alias("new_column"))

Specific Problem

Using a udf

In your specific case, you want to use a dictionary to translate the values of your DataFrame.

Here is a way to define a udf for this purpose:

from pyspark.sql.types import IntegerType

some_map_udf = f.udf(lambda x: some_map.get(x, None), IntegerType())

Notice that I used dict.get() because you want your udf to be robust to bad inputs.

df = df.withColumn('new_column', some_map_udf(f.col("some_column_name")))
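To see why dict.get() matters, here is the lookup in plain Python, using a hypothetical some_map = {'a': 0, 'c': 1, 'b': 1}:

```python
some_map = {'a': 0, 'c': 1, 'b': 1}  # hypothetical mapping

# dict.get tolerates values missing from the map...
print(some_map.get('z', None))  # None -> becomes null in the column

# ...whereas plain indexing would raise inside the udf:
try:
    some_map['z']
except KeyError:
    print('KeyError')
```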

Using DataFrame functions

Sometimes using a udf is unavoidable, but whenever possible, using DataFrame functions is usually preferred.

Here is one option to do the same thing without using the udf.

The trick is to iterate over the items in some_map to create a list of pyspark.sql.functions.when() functions.

some_map_func = [f.when(f.col("some_column_name") == k, v) for k, v in some_map.items()]
print(some_map_func)
#[Column<CASE WHEN (some_column_name = a) THEN 0 END>,
# Column<CASE WHEN (some_column_name = c) THEN 1 END>,
# Column<CASE WHEN (some_column_name = b) THEN 1 END>]

Now you can use pyspark.sql.functions.coalesce() inside of a select:

df = df.select("*", f.coalesce(*some_map_func).alias("some_column_name"))

This works because when() returns null by default if the condition is not met, and coalesce() will pick the first non-null value it encounters. Since the keys of the map are unique, at most one column will be non-null.
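The first-non-null behaviour can be mimicked in plain Python to check the reasoning, again with a hypothetical some_map; each when() yields the mapped value or None, and coalesce keeps the first non-None value:

```python
some_map = {'a': 0, 'c': 1, 'b': 1}  # hypothetical mapping

def when(cell_value, key, value):
    # stand-in for f.when(col == key, value): value if matched, else null
    return value if cell_value == key else None

def coalesce(*columns):
    # stand-in for f.coalesce: first non-None value, else None
    return next((c for c in columns if c is not None), None)

# One row whose cell holds 'c': exactly one case matches.
cases = [when('c', k, v) for k, v in some_map.items()]
print(coalesce(*cases))  # 1
```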

2 of 2
1

You have a Spark DataFrame, not a pandas DataFrame. To add a new column to the Spark DataFrame:

import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType

df = df.withColumn('new_column', F.udf(some_map.get, IntegerType())('some_column_name'))
df.show()
Kanaries
docs.kanaries.net › topics › Matplotlib › matplotlib-has-no-attribute-plot
Troubleshooting: 'Module Matplotlib Has No Attribute Plot' in Python – Kanaries
May 2, 2023 - Your complete guide to solving the 'module matplotlib has no attribute plot' error in Python, covering both installation and syntax issues with detailed examples.