Use astype with replace:

df = pd.DataFrame({'ID':[805096730.0,805096730.0]})

df['ID'] = df['ID'].astype(str).replace(r'\.0$', '', regex=True)
print(df)
          ID
0  805096730
1  805096730

Or pass the dtype parameter to read_excel:

df = pd.read_excel(file, dtype={'ID':str})
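read_excel needs a real workbook on disk, so here is a self-contained sketch of the same dtype idea using read_csv and an in-memory buffer (column name and values borrowed from the example above):

```python
import io

import pandas as pd

# Force the ID column to be read as strings, so no numeric
# parsing (and no trailing .0) ever happens
csv = io.StringIO("ID\n805096730\n805096731")
df = pd.read_csv(csv, dtype={'ID': str})
print(df['ID'].tolist())  # ['805096730', '805096731']
```

The same dtype={'ID': str} mapping works for pd.read_excel when the data really comes from a workbook.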
Answer from jezrael on Stack Overflow
Discussions

python - How to remove decimal point from string using pandas - Stack Overflow
I'm reading an xls file and converting to csv file in databricks using pyspark. My input data is of string format 101101114501700 in the xls file. But after converting it to CSV format using pandas...
๐ŸŒ stackoverflow.com
March 19, 2019
python - Deleting decimals from a pandas dataframe - Stack Overflow
I have numbers such as 24.00 2.00 3.00 I want to have 24 2 3 I have used .astype(int), round() but I keep getting the former. How do I get this to work?
🌐 stackoverflow.com
how to remove decimal points from values ?
https://www.google.com/search?q=python+remove+decimal
🌐 r/learnpython
December 5, 2023
pandas - Remove Decimal Point in a Dataframe with both Numbers and String Using Python - Stack Overflow
I have a data Frame with about 50,000 records; and I noticed that ".0" have been added behind all numbers in a column. I have been trying to remove the ".0", so that the table below; N | Movies ...
🌐 stackoverflow.com
April 10, 2014
🌐 Saturn Cloud
saturncloud.io › blog › how-to-remove-decimal-points-in-pandas-a-guide-for-data-scientists
How to Remove Decimal Points in Pandas A Guide for Data Scientists | Saturn Cloud Blog
December 23, 2023 - The simplest way to remove decimal points in pandas is by using the round() function. This function rounds a given number to a specified number of decimal places. To remove all decimal points, you can set the number of decimal places to 0.
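Worth noting: round() alone keeps the float dtype, so values still print as 24.0, not 24; an extra astype(int) cast is needed to actually drop the point. A quick check, with values borrowed from the question above:

```python
import pandas as pd

df = pd.DataFrame({'n': [24.0, 2.0, 3.0]})
rounded = df['n'].round(0)
print(rounded.tolist())   # [24.0, 2.0, 3.0] -- still floats
as_int = rounded.astype(int)
print(as_int.tolist())    # [24, 2, 3]
```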
🌐 CodeProject
codeproject.com › Questions › 5291723 › How-do-I-remove-decimals-from-a-pandas-data-frame
How do I remove decimals from a pandas data frame index
September 1, 2021
Top answer
1 of 2

I think the field is automatically parsed as float when reading the Excel file. I would correct it afterwards:

df['column_name'] = df['column_name'].astype(int)

If your column contains nulls you can't convert to integer, so you will need to fill the nulls first:

df['column_name'] = df['column_name'].fillna(0).astype(int)

Then you can concatenate and store the data the way you were doing it.
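A minimal sketch of that fillna-then-cast step, with sample values assumed for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'column_name': [805096730.0, np.nan]})
# Nulls block the int cast, so fill them first
df['column_name'] = df['column_name'].fillna(0).astype(int)
print(df['column_name'].tolist())  # [805096730, 0]
```

If a 0 placeholder is unacceptable, pandas' nullable integer dtype is an alternative: .astype('Int64') keeps missing values as <NA> without filling.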

2 of 2

Your question has nothing to do with Spark or PySpark. It's related to Pandas.

This is because Pandas interprets and infers column data types automatically. Since all the values of your column are numeric, Pandas will treat it as float.

To avoid this, the pandas.ExcelFile.parse method accepts an argument called converters; you can use it to tell Pandas the data type of specific columns:

# if you want one specific column as string
df = pd.concat([filepath_pd.parse(name, converters={'column_name': str}) for name in names])

OR

# if you want all columns as string
# and you have multi sheets and they do not have same columns
# this merge all sheets into one dataframe
def get_converters(excel_file, sheet_name, dt_cols):
    cols = excel_file.parse(sheet_name).columns
    converters = {col: str for col in cols if col not in dt_cols}
    for col in dt_cols:
        converters[col] = pd.to_datetime
    return converters

df = pd.concat([filepath_pd.parse(name, converters=get_converters(filepath_pd, name, ['date_column'])) for name in names]).reset_index(drop=True)

OR

# if you want all columns as string
# and all your sheets have same columns
cols = filepath_pd.parse().columns
dt_cols = ['date_column']
converters = {col: str for col in cols if col not in dt_cols}
for col in dt_cols:
    converters[col] = pd.to_datetime
df = pd.concat([filepath_pd.parse(name, converters=converters) for name in names]).reset_index(drop=True)
🌐 GeeksforGeeks
geeksforgeeks.org › how-to-remove-all-decimals-from-a-number-using-python
How to remove all decimals from a number using Python? - GeeksforGeeks
March 27, 2025 - In this article, let's see how to remove numbers from string in Pandas. Currently, we will be using only the .csv file for demonstration purposes, but the process is the same for other types of files. The function read_csv() is used to read CSV files.
🌐 YouTube
youtube.com › watch
Pandas : Python - Remove decimal and zero from string
Published February 12, 2022
🌐 GitHub
github.com › beepscore › pandas_decimal
GitHub - beepscore/pandas_decimal: Using Pandas with Python Decimal for accurate currency arithmetic · GitHub
Pandas' most common types are int, float64, and "object". For type "object", often the underlying type is a string but it may be another type like Decimal. In read_csv use a converter function.

from decimal import Decimal
import pandas as pd

def decimal_from_value(value):
    return Decimal(value)

# converter sets sales type to "object" (Decimal), not default float64
df = pd.read_csv(filename, converters={'sales': decimal_from_value})
print(df.dtypes)
# week     int64
# sales    object

Author: beepscore
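A runnable version of that converter pattern, with the file swapped for an in-memory CSV (the week/sales column names follow the snippet above):

```python
import io
from decimal import Decimal

import pandas as pd

def decimal_from_value(value):
    # read_csv hands the converter the raw cell text, so the
    # Decimal is built from a string and stays exact
    return Decimal(value)

csv = io.StringIO("week,sales\n1,19.99\n2,25.50")
df = pd.read_csv(csv, converters={'sales': decimal_from_value})
print(df['sales'].dtype)                # object
print(df['sales'][0] + df['sales'][1])  # 45.49, exact decimal arithmetic
```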
🌐 pythontutorials
pythontutorials.net › blog › how-to-remove-numbers-from-string-terms-in-a-pandas-dataframe
How to Remove Numbers from String Terms in a Pandas DataFrame Column: A Better Approach - pythontutorials.net
Our goal is to clean product_name by removing all numbers. Before introducing the "better approach," let's critique two common methods and their flaws. One naive approach is to loop through each row, iterate over each character in the string, and keep only non-numeric characters.

import pandas as pd

df = pd.DataFrame({
    "product_name": ["Laptop15", "Phone_2023", "Headphones45"]
})

# Manual loop to remove numbers
cleaned = []
for s in df["product_name"]:
    cleaned_str = ''.join([c for c in s if not c.isdigit()])
    cleaned.append(cleaned_str)
df["cleaned_product_name"] = cleaned
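For comparison, the loop above collapses to one vectorized call; a sketch using Series.str.replace with a regex:

```python
import pandas as pd

df = pd.DataFrame({"product_name": ["Laptop15", "Phone_2023", "Headphones45"]})
# \d+ matches any run of digits; replacing with '' deletes them
df["cleaned_product_name"] = df["product_name"].str.replace(r"\d+", "", regex=True)
print(df["cleaned_product_name"].tolist())  # ['Laptop', 'Phone_', 'Headphones']
```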
🌐 CopyProgramming
copyprogramming.com › howto › python-how-to-remove-decimal-number-using-python
How to Remove Decimal Numbers in Python: Complete Guide - CopyProgramming
December 30, 2025 -

from decimal import Decimal

# ❌ WRONG: float loses precision before Decimal conversion
result = Decimal(0.1)    # Already corrupted
# ✓ CORRECT: string preserves exact value
result = Decimal('0.1')

For arrays or DataFrames, use specialized libraries rather than loops:

import numpy as np
import pandas as pd

# NumPy
array = np.array([1.234, 5.678, 9.012])
rounded = np.round(array, decimals=1)

# Pandas
df = pd.DataFrame({'price': [19.99, 25.50, 100.01]})
df['rounded'] = df['price'].round(2)
🌐 Stack Overflow
stackoverflow.com › questions › 67722165 › remove-decimal-in-pandas
python - Remove Decimal in pandas - Stack Overflow
You probably have a NaN value in this column. You must remove them first, then convert to int: df["Time Period"].dropna().astype(int).
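That advice can be sketched as follows (sample values assumed):

```python
import numpy as np
import pandas as pd

s = pd.Series([3.0, np.nan, 7.0], name="Time Period")
# Drop the NaN rows first, then the int cast succeeds
print(s.dropna().astype(int).tolist())  # [3, 7]
```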
🌐 pythontutorials
pythontutorials.net › blog › how-to-remove-decimal-points-in-pandas
How to Remove Decimal Points in Pandas DataFrame: Convert Floats to Integers Successfully - pythontutorials.net
This blog will guide you through step-by-step methods to remove decimal points in a Pandas DataFrame, covering common scenarios like whole-number floats, floats with decimal parts, and columns with missing values (NaNs).