Since string data is variable-length, pandas stores it as the object dtype by default. If you want to store the values as a fixed-width (byte) string type instead, you can do something like this:

df['column'] = df['column'].astype('|S80')  # where the max length is capped at 80 bytes

or alternatively

df['column'] = df['column'].astype('|S')  # which will by default set the length to the max it encounters
Answer from Siraj S. on Stack Overflow
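(For comparison: on pandas 1.0 and later, the dedicated nullable string dtype is usually preferred over NumPy's fixed-width byte strings. A minimal sketch of that alternative; the series contents here are made up for illustration:)

```python
import pandas as pd

# An object-dtype column holding ordinary Python strings
s = pd.Series(["alpha", "beta", "gamma"])
assert s.dtype == object

# Convert to pandas' dedicated string extension dtype (pandas >= 1.0)
s2 = s.astype("string")
print(s2.dtype)  # string
```

Unlike '|S80', this needs no length guess and round-trips missing values as pd.NA.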
Pandas Convert Column To String Type - GeeksforGeeks
July 23, 2025 - Lambda function will be a quick way of telling the computer to apply the changes for each value ... import pandas as pd # sample data data = {'NumericColumn': [1, 2, 3, 4]} df = pd.DataFrame(data) df['NumericColumn'] = df['NumericColumn'].apply(lambda x: str(x)) df.info() ... <class 'pandas.core.frame.DataFrame'> RangeIndex: 4 entries, 0 to 3 Data columns (total 1 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 NumericColumn 4 non-null object dtypes: object(1) memory usage: 160.0+ bytes
Discussions

Unable to convert a pandas object to a string in my DataFrame
object is just the dtype pandas uses for columns that contain values of type str (and many other Python types). The actual values themselves are still strings:

import pandas as pd
df = pd.DataFrame([["hello"], ["world"]], columns=["words"])

df.dtypes
>>> words    object
    dtype: object

type(df.loc[0, "words"])
>>> <class 'str'>

Using .astype(str) does convert all values to string if they aren't already, but it doesn't change the column dtype by design. The only way around this is to use pandas's own string type via .astype("string"), but that's different from the str type, obviously. What exactly are the issues you are facing with the API?
r/learnpython — December 10, 2021
python - Convert columns to string in Pandas - Stack Overflow
I have the following DataFrame from a SQL query:

(Pdb) pp total_rows
     ColumnID  RespondentCount
0          -1                2
1  3030096843                1
2  3030096845                1

and I …
python - Change column type in pandas - Stack Overflow
Column 'b' contained string objects, so was changed to pandas' string dtype. By default, this method will infer the type from object values in each column.
python - Pandas: change data type of Series to String - Stack Overflow
Original DataFrame: Name Age City ... 60000 B Data types: Name object Age int64 City object Salary int64 Category object dtype: object Object columns found, converting to string: ['Name', 'City', 'Category'] Updated DataFrame: Name Age City Salary Category 0 John 25 New York 50000 ...
pandas.DataFrame.convert_dtypes — pandas 3.0.3 documentation
By using the options convert_string, convert_integer, convert_boolean and convert_floating, it is possible to turn off individual conversions to StringDtype, the integer extension types, BooleanDtype or floating extension types, respectively. For object-dtyped columns, if infer_objects is True, ...
r/learnpython on Reddit: Unable to convert a pandas object to a string in my DataFrame
December 10, 2021

Trying to use the YouTube API to pull through some videos for data analysis and am currently using just two videos in a dataframe to play around with the functionality as I'm new to all of this.

I'm using another API to get the transcripts for each video but I need to input the video_id into that API to get transcripts for each video.

The only problem is that everything is stored as an object, and whenever I try .astype(str) or something like that, it still says the data is an object. That means I can't do anything with the data when a string is a required argument for the other API.

This is what I get when calling .info() on my dataframe:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2 entries, 0 to 1
Data columns (total 10 columns):
 #   Column                Non-Null Count  Dtype 
---  ------                --------------  ----- 
 0   video_id              2 non-null      object
 1   publishedAt           2 non-null      object
 2   channelId             2 non-null      object
 3   title                 2 non-null      object
 4   description           2 non-null      object
 5   channelTitle          2 non-null      object
 6   tags                  2 non-null      object
 7   categoryId            2 non-null      object
 8   liveBroadcastContent  2 non-null      object
 9   defaultAudioLanguage  2 non-null      object
dtypes: object(10)
memory usage: 288.0+ bytes

Any help would be really appreciated or an explanation of how these issues are usually handled
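(For what it's worth, an object column that holds strings already yields plain Python str values when you pull them out, so they can be passed straight to another API. A quick check, using made-up video IDs in place of the questioner's data:)

```python
import pandas as pd

# Hypothetical stand-in for the questioner's DataFrame
df = pd.DataFrame({"video_id": ["dQw4w9WgXcQ", "9bZkp7q19f0"]})

# The column dtype is reported as object...
print(df["video_id"].dtype)

# ...but each individual value is already a Python str
first_id = df["video_id"].iloc[0]
print(type(first_id))  # <class 'str'>
```

So .info() showing object is not itself the problem; extracting a single cell with .iloc or .loc gives a str that a transcript API will accept.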

Steps to Pandas Convert Column to String | by whyamit404 | Medium
April 12, 2025 - How do I check the column’s data type after conversion? Use the dtypes attribute to check the datatype of each column in your DataFrame. It’s a straightforward way to confirm that your conversion was successful. Is there a way to convert without changing the original DataFrame? Absolutely! You can create a copy of the column first before converting: ... To wrap it up, converting a column to a string in Pandas is a straightforward process that can significantly enhance your data analysis.
pandas.DataFrame.astype — pandas 3.0.3 documentation
Use a str, numpy.dtype, pandas.ExtensionDtype or Python type to cast entire pandas object to the same type. Alternatively, use a mapping, e.g. {col: dtype, …}, where col is a column label and dtype is a numpy.dtype or Python type to cast one or more of the DataFrame’s columns to column-specific ...
Python Pandas: Converting Object to String Type in DataFrames | Saturn Cloud Blog
October 26, 2023 - We then create a DataFrame with two columns: A and B. Column A contains integers, and column B contains objects. To convert column B from object to string, we use the astype() function, which is a function that converts the data type of a Pandas ...
pandas.DataFrame.to_string — pandas 3.0.3 documentation
Max width to truncate each column in characters. By default, no limit. ... Set character encoding. ... If buf is None, returns the result as a string.
Pandas Convert Column to String Type - Spark By {Examples}
July 3, 2025 - In this article, I will explain how to convert single column or multiple columns to string type in pandas DataFrame, here, I will demonstrate using
Change a column type in a DataFrame in Python Pandas | Sentry
Which one to use will depend on the data types we’re converting from and to. If we want to convert a column from any data type to one specific data type (e.g. integer, float, string), we should use the astype method.
How to Convert Pandas DataFrame Columns to Strings
July 29, 2020 - df.dtypes player object points object assists object dtype: object · Lastly, we can convert every column in a DataFrame to strings by using the following syntax: #convert every column to strings df = df.astype(str) #check data type of each column df.dtypes player object points object assists object dtype: object
Convert Columns To String In Pandas - YouTube
Published February 5, 2022
Convert Object Data Type to String in pandas DataFrame Python Column
May 2, 2022 - In Table 2 you can see that we have created an updated version of our pandas DataFrame using the previous Python programming code. In this new DataFrame, you can see a b in front of the values in the column x2. The b stands for bytes, and you can learn more about this here. However, let’s check the dtypes of our updated DataFrame columns: print(data.dtypes) # Print data types of columns # x1 int64 # x2 |S1 # x3 int64 # dtype: object · The column x2 has been converted to the |S1 class (which stands for strings with a length of 1).
How to Convert Column Values to String in Pandas | Delft Stack
February 2, 2024 - It converts the datatype of all DataFrame columns to the string type denoted by object in the output. import pandas as pd employees_df = pd.DataFrame( { "Name": ["Ayush", "Bikram", "Ceela", "Kusal", "Shanty"], "Score": [31, 38, 33, 39, 35], "Age": [33, 34, 38, 45, 37], } ) print("DataFrame before Conversion:") print(employees_df, "\n") print("Datatype of columns before conversion:") print(employees_df.dtypes, "\n") employees_df["Score"] = employees_df["Score"].astype(str) print("DataFrame after conversion:") print(employees_df, "\n") print("Datatype of columns after conversion:") print(employees_df.dtypes)
How to Convert Columns to String in Pandas | Saturn Cloud Blog
December 2, 2023 - To convert columns to string in Pandas, we can use the astype() method. This method allows us to convert a column to a specified data type.
Pandas: Convert Column Values to Strings • datagy
December 15, 2022 - Pandas comes with a column (series) method, .astype(), which allows us to re-cast a column into a different data type. Many tutorials you’ll find only will tell you to pass in 'str' as the argument.
How to Convert Pandas Columns to String - GeeksforGeeks
July 23, 2025 - import pandas as pd import numpy as np # Create a DataFrame with random numerical and string columns np.random.seed(42) data = { 'Numeric_Column': np.random.randint(1, 100, 4), 'String_Column': np.random.choice(['A', 'B', 'C', 'D'], 4) } df = pd.DataFrame(data) # Convert 'Numeric_Column' to string using astype() df['Numeric_Column'] = df['Numeric_Column'].astype(str) # Display the result print("Pandas DataFrame:") display(df) ... This method successfully converts the Numeric_Column from an integer type to a string.
Top answer
1 of 16
2637

You have four main options for converting types in pandas:

  1. to_numeric() - provides functionality to safely convert non-numeric types (e.g. strings) to a suitable numeric type. (See also to_datetime() and to_timedelta().)

  2. astype() - convert (almost) any type to (almost) any other type (even if it's not necessarily sensible to do so). Also allows you to convert to categorical types (very useful).

  3. infer_objects() - a utility method to convert object columns holding Python objects to a pandas type if possible.

  4. convert_dtypes() - convert DataFrame columns to the "best possible" dtype that supports pd.NA (pandas' object to indicate a missing value).

Read on for more detailed explanations and usage of each of these methods.


1. to_numeric()

The best way to convert one or more columns of a DataFrame to numeric values is to use pandas.to_numeric().

This function will try to change non-numeric objects (such as strings) into integers or floating-point numbers as appropriate.

Basic usage

The input to to_numeric() is a Series or a single column of a DataFrame.

>>> s = pd.Series(["8", 6, "7.5", 3, "0.9"]) # mixed string and numeric values
>>> s
0      8
1      6
2    7.5
3      3
4    0.9
dtype: object

>>> pd.to_numeric(s) # convert everything to float values
0    8.0
1    6.0
2    7.5
3    3.0
4    0.9
dtype: float64

As you can see, a new Series is returned. Remember to assign this output to a variable or column name to continue using it:

# convert Series
my_series = pd.to_numeric(my_series)

# convert column "a" of a DataFrame
df["a"] = pd.to_numeric(df["a"])

You can also use it to convert multiple columns of a DataFrame via the apply() method:

# convert all columns of DataFrame
df = df.apply(pd.to_numeric) # convert all columns of DataFrame

# convert just columns "a" and "b"
df[["a", "b"]] = df[["a", "b"]].apply(pd.to_numeric)

As long as your values can all be converted, that's probably all you need.

Error handling

But what if some values can't be converted to a numeric type?

to_numeric() also takes an errors keyword argument that allows you to force non-numeric values to be NaN, or simply ignore columns containing these values.

Here's an example using a Series of strings s which has the object dtype:

>>> s = pd.Series(['1', '2', '4.7', 'pandas', '10'])
>>> s
0         1
1         2
2       4.7
3    pandas
4        10
dtype: object

The default behaviour is to raise if it can't convert a value. In this case, it can't cope with the string 'pandas':

>>> pd.to_numeric(s) # or pd.to_numeric(s, errors='raise')
ValueError: Unable to parse string

Rather than fail, we might want 'pandas' to be considered a missing/bad numeric value. We can coerce invalid values to NaN as follows using the errors keyword argument:

>>> pd.to_numeric(s, errors='coerce')
0     1.0
1     2.0
2     4.7
3     NaN
4    10.0
dtype: float64

The third option for errors is just to ignore the operation if an invalid value is encountered (note that errors='ignore' has been deprecated in newer versions of pandas):

>>> pd.to_numeric(s, errors='ignore')
# the original Series is returned untouched

This last option is particularly useful when you want to convert your entire DataFrame but don't know which of the columns can be converted reliably to a numeric type. In that case, just write:

df.apply(pd.to_numeric, errors='ignore')

The function will be applied to each column of the DataFrame. Columns that can be converted to a numeric type will be converted, while columns that cannot (e.g. they contain non-digit strings or dates) will be left alone.
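Since errors='ignore' is deprecated in newer pandas versions, a per-column loop with errors='coerce' is one way to get the same "convert only what parses cleanly" effect. A sketch, with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"a": ["1", "2", "3"], "b": ["x", "y", "z"]})

# Convert a column only when every value parses cleanly as a number,
# mimicking the old errors='ignore' behaviour without relying on it.
for col in df.columns:
    converted = pd.to_numeric(df[col], errors="coerce")
    if not converted.isna().any():
        df[col] = converted

print(df.dtypes)  # 'a' becomes numeric, 'b' is left as object
```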

Downcasting

By default, conversion with to_numeric() will give you either an int64 or float64 dtype (or whatever integer width is native to your platform).

That's usually what you want, but what if you wanted to save some memory and use a more compact dtype, like float32, or int8?

to_numeric() gives you the option to downcast to either 'integer', 'signed', 'unsigned', 'float'. Here's an example for a simple series s of integer type:

>>> s = pd.Series([1, 2, -7])
>>> s
0    1
1    2
2   -7
dtype: int64

Downcasting to 'integer' uses the smallest possible integer that can hold the values:

>>> pd.to_numeric(s, downcast='integer')
0    1
1    2
2   -7
dtype: int8

Downcasting to 'float' similarly picks a smaller than normal floating type:

>>> pd.to_numeric(s, downcast='float')
0    1.0
1    2.0
2   -7.0
dtype: float32
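The memory saving from downcasting is easy to verify directly via the nbytes attribute (the exact figures below assume the default 64-bit integer dtype):

```python
import pandas as pd

s = pd.Series([1, 2, -7])                      # int64 by default: 8 bytes per value
small = pd.to_numeric(s, downcast="integer")   # int8: 1 byte per value

# Three values: 24 bytes before, 3 bytes after
print(s.dtype, s.nbytes)
print(small.dtype, small.nbytes)
```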

2. astype()

The astype() method enables you to be explicit about the dtype you want your DataFrame or Series to have. It's very versatile in that you can try and go from one type to any other.

Basic usage

Just pick a type: you can use a NumPy dtype (e.g. np.int16), some Python types (e.g. bool), or pandas-specific types (like the categorical dtype).

Call the method on the object you want to convert and astype() will try and convert it for you:

# convert all DataFrame columns to the int64 dtype
df = df.astype(int)

# convert column "a" to int64 dtype and "b" to complex type
df = df.astype({"a": int, "b": complex})

# convert Series to float16 type
s = s.astype(np.float16)

# convert Series to Python strings
s = s.astype(str)

# convert Series to categorical type - see docs for more details
s = s.astype('category')

Notice I said "try" - if astype() does not know how to convert a value in the Series or DataFrame, it will raise an error. For example, if you have a NaN or inf value you'll get an error trying to convert it to an integer.

As of pandas 0.20.0, this error can be suppressed by passing errors='ignore'. Your original object will be returned untouched.
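As a concrete illustration of the NaN problem, and of the nullable Int64 workaround added in pandas 1.0 (a sketch with made-up data):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan])

# Plain int64 cannot represent NaN, so this raises
try:
    s.astype(np.int64)
except (ValueError, TypeError) as exc:
    print("astype(int64) failed:", exc)

# The nullable Int64 extension dtype can hold missing values
nullable = s.astype("Int64")
print(nullable.dtype)          # Int64
print(nullable.isna().sum())   # 1
```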

Be careful

astype() is powerful, but it will sometimes convert values "incorrectly". For example:

>>> s = pd.Series([1, 2, -7])
>>> s
0    1
1    2
2   -7
dtype: int64

These are small integers, so how about converting to an unsigned 8-bit type to save memory?

>>> s.astype(np.uint8)
0      1
1      2
2    249
dtype: uint8

The conversion worked, but the -7 was wrapped round to become 249 (i.e. 2**8 - 7)!

Using pd.to_numeric(s, downcast='unsigned') instead would help prevent this mistake, since the downcast is simply skipped when the values don't fit the target type.
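A quick check of that safer behaviour (a sketch; my understanding is that pandas skips the downcast rather than wrapping when a value cannot be represented):

```python
import pandas as pd

s = pd.Series([1, 2, -7])

# downcast='unsigned' cannot represent -7, so pandas leaves the
# values untouched instead of wrapping them around
out = pd.to_numeric(s, downcast="unsigned")
print(out.tolist())   # [1, 2, -7]
print(out.dtype)      # still a signed integer dtype, not uint8
```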


3. infer_objects()

Version 0.21.0 of pandas introduced the method infer_objects() for converting columns of a DataFrame that have an object datatype to a more specific type (soft conversions).

For example, here's a DataFrame with two columns of object type. One holds actual integers and the other holds strings representing integers:

>>> df = pd.DataFrame({'a': [7, 1, 5], 'b': ['3','2','1']}, dtype='object')
>>> df.dtypes
a    object
b    object
dtype: object

Using infer_objects(), you can change the type of column 'a' to int64:

>>> df = df.infer_objects()
>>> df.dtypes
a     int64
b    object
dtype: object

Column 'b' has been left alone since its values were strings, not integers. If you wanted to force both columns to an integer type, you could use df.astype(int) instead.


4. convert_dtypes()

Version 1.0 and above includes a method convert_dtypes() to convert Series and DataFrame columns to the best possible dtype that supports the pd.NA missing value.

Here "best possible" means the type most suited to hold the values. For example, if all of the values are integers (or missing values), the result is a pandas integer type: an object column of Python integer objects is converted to Int64, and a column of NumPy int32 values becomes the pandas dtype Int32.

With our object DataFrame df, we get the following result:

>>> df.convert_dtypes().dtypes                                             
a     Int64
b    string
dtype: object

Since column 'a' held integer values, it was converted to the Int64 type (which is capable of holding missing values, unlike int64).

Column 'b' contained string objects, so was changed to pandas' string dtype.

By default, this method will infer the type from object values in each column. We can change this by passing infer_objects=False:

>>> df.convert_dtypes(infer_objects=False).dtypes                          
a    object
b    string
dtype: object

Now column 'a' remains an object column: pandas knows it can be described as an 'integer' column (internally it ran infer_dtype), but didn't infer exactly which integer dtype it should have, so it was not converted. Column 'b' was again converted to the 'string' dtype as it was recognised as holding 'string' values.
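The individual convert_* switches mentioned in the pandas convert_dtypes documentation work the same way; for instance, turning off just the string conversion (a sketch, reusing the same two-column example):

```python
import pandas as pd

df = pd.DataFrame({"a": [7, 1, 5], "b": ["3", "2", "1"]}, dtype="object")

# convert_string=False keeps the object column of strings as object,
# while column 'a' is still converted to the nullable Int64 dtype
out = df.convert_dtypes(convert_string=False)
print(out.dtypes)
```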

2 of 16
552

Use this:

a = [['a', '1.2', '4.2'], ['b', '70', '0.03'], ['x', '5', '0']]
df = pd.DataFrame(a, columns=['one', 'two', 'three'])
df

Out[16]:
  one  two three
0   a  1.2   4.2
1   b   70  0.03
2   x    5     0

df.dtypes

Out[17]:
one      object
two      object
three    object

df[['two', 'three']] = df[['two', 'three']].astype(float)

df.dtypes

Out[19]:
one       object
two      float64
three    float64
Top answer
1 of 11
251

A new answer to reflect the most current practices: as of now (v1.2.4), neither astype('str') nor astype(str) produce the dedicated string dtype.

As per the documentation, a Series can be converted to the string datatype in the following ways:

df['id'] = df['id'].astype("string")

df['id'] = pandas.Series(df['id'], dtype="string")

df['id'] = pandas.Series(df['id'], dtype=pandas.StringDtype)

End to end example:

import pandas as pd

# Create a sample DataFrame
data = {
    'Name': ['John', 'Alice', 'Bob', 'John', 'Alice'],
    'Age': [25, 30, 35, 25, 30],
    'City': ['New York', 'London', 'Paris', 'New York', 'London'],
    'Salary': [50000, 60000, 70000, 50000, 60000],
    'Category': ['A', 'B', 'C', 'A', 'B']
}

df = pd.DataFrame(data)

# Print the DataFrame
print("Original DataFrame:")
print(df)
print("\nData types:")
print(df.dtypes)
cat_cols_ = None
# Apply the code to change data types
if not cat_cols_:
    # Get the columns with object data type
    object_columns = df.select_dtypes(include=['object']).columns.tolist()
    
    if len(object_columns) > 0:
        print(f"\nObject columns found, converting to string: {object_columns}")
        
        # Convert object columns to string type
        df[object_columns] = df[object_columns].astype('string')
    
    # Get the categorical columns (including string and category data types)
    cat_cols_ = df.select_dtypes(include=['category', 'string']).columns.tolist()

# Print the updated DataFrame and data types
print("\nUpdated DataFrame:")
print(df)
print("\nUpdated data types:")
print(df.dtypes)
print(f"\nCategorical columns (cat_cols_): {cat_cols_}")
Original DataFrame:
    Name  Age      City  Salary Category
0   John   25  New York   50000        A
1  Alice   30    London   60000        B
2    Bob   35     Paris   70000        C
3   John   25  New York   50000        A
4  Alice   30    London   60000        B

Data types:
Name        object
Age          int64
City        object
Salary       int64
Category    object
dtype: object

Object columns found, converting to string: ['Name', 'City', 'Category']

Updated DataFrame:
    Name  Age      City  Salary Category
0   John   25  New York   50000        A
1  Alice   30    London   60000        B
2    Bob   35     Paris   70000        C
3   John   25  New York   50000        A
4  Alice   30    London   60000        B

Updated data types:
Name        string[python]
Age                  int64
City        string[python]
Salary               int64
Category    string[python]
dtype: object

Categorical columns (cat_cols_): ['Name', 'City', 'Category']
2 of 11
127

You can convert all elements of id to str using apply (this returns a new Series, so assign it back if you want to keep the result):

df.id.apply(str)

0        123
1        512
2      zhub1
3    12354.3
4        129
5        753
6        295
7        610

Edit by OP:

I think the issue was related to the Python version (2.7); this worked:

df['id'].astype(basestring)
0        123
1        512
2      zhub1
3    12354.3
4        129
5        753
6        295
7        610
Name: id, dtype: object
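Note that apply(str) leaves the column's dtype as object even though every value is now a Python str, which matches the "object is just the container dtype" point made earlier. A small sketch with data resembling the example above:

```python
import pandas as pd

df = pd.DataFrame({"id": [123, 512, "zhub1", 12354.3]})

# apply(str) returns a new Series; assign it back to keep the change
df["id"] = df["id"].apply(str)

print(df["id"].dtype)                              # object
print(all(isinstance(v, str) for v in df["id"]))   # True
```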