pandas.read_csv to the rescue:

import pandas as pd
df = pd.read_csv("data.csv")
print(df)

This outputs a pandas DataFrame:

        Date    price  factor_1  factor_2
0  2012-06-11  1600.20     1.255     1.548
1  2012-06-12  1610.02     1.258     1.554
2  2012-06-13  1618.07     1.249     1.552
3  2012-06-14  1624.40     1.253     1.556
4  2012-06-15  1626.15     1.258     1.552
5  2012-06-16  1626.15     1.263     1.558
6  2012-06-17  1626.15     1.264     1.572
Answer from root on Stack Overflow
Pandas (pandas.pydata.org)
pandas.read_csv — pandas 3.0.1 documentation
Read a comma-separated values (csv) file into DataFrame. Also supports optionally iterating or breaking of the file into chunks. Additional help can be found in the online docs for IO Tools. ... Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ...
W3Schools (w3schools.com)
Pandas Read CSV
CSV files contain plain text and are a well-known format that can be read by everyone, including Pandas. In our examples we will be using a CSV file called 'data.csv'. ... Tip: use to_string() to print the entire DataFrame.
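The to_string() tip can be sketched like this (the DataFrame here is a made-up stand-in for data.csv):

```python
import pandas as pd

# Made-up stand-in for data.csv
df = pd.DataFrame({"price": [1600.20, 1610.02, 1618.07],
                   "factor_1": [1.255, 1.258, 1.249]})

# print(df) truncates wide/long frames; to_string() renders every row and column
print(df.to_string())
```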
Discussions

How to best import a csv file into pandas which is really 3 dataframes in one csv?
I like to do this first parse in plain Python, reading the file as text or csv, using a generator that either yields the start row and end row for each dataset so you can use the pandas read_csv skiprows/nrows arguments, or generates 3 different new files with the respective contents.
r/learnpython · March 24, 2023
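A rough sketch of that generator idea, assuming a hypothetical layout where the sub-tables are separated by blank lines (real files from an outside source will need their own boundary detection):

```python
import io
import pandas as pd

# Hypothetical CSV holding two independent tables, separated by a blank line
raw = """a,b
1,2
3,4

x,y,z
5,6,7
"""

def iter_sections(text):
    """Yield each blank-line-delimited section as its own CSV text block."""
    for block in text.split("\n\n"):
        if block.strip():
            yield block

# Parse each section independently, so every sub-table gets its own columns and dtypes
frames = [pd.read_csv(io.StringIO(block)) for block in iter_sections(raw)]
```

An alternative, closer to the skiprows/nrows suggestion, is to scan the file once for section boundaries and then call read_csv once per section with skiprows= and nrows=.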
I wrote a detailed guide of how Pandas' read_csv() function actually works and the different engine options available, including new features in v2.0. Figured it might be of interest here!
Btw, nothing could better characterize the simplicity and ergonomics of that (and other) pandas functions than the fact that a single function needs a whole article to master.
r/Python · March 30, 2023
loading a csv into a dataframe with a datetime as an index

You can, by specifying the index_col keyword as the column position (an int) that should be used as the index, and setting the parse_dates keyword to True. Example:

# Pretend the dates are in the 2nd column
df = pd.read_csv('bunch_of_dated_data.csv', index_col=1, parse_dates=True)
r/learnpython · October 3, 2017
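A self-contained sketch of the same call (the file contents are invented; io.StringIO stands in for bunch_of_dated_data.csv):

```python
import io
import pandas as pd

# Invented data with the dates in the 2nd column (position 1, zero-indexed)
csv = """id,date,price
1,2012-06-11,1600.20
2,2012-06-12,1610.02
"""

df = pd.read_csv(io.StringIO(csv), index_col=1, parse_dates=True)

# The index is now a DatetimeIndex, so label-based date selection works
price = df.loc["2012-06-12", "price"]
```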
pandas.read_csv() is only working with certain filenames... (note: NOT only with certain files)
read this then: http://stackoverflow.com/questions/6928789/strange-path-separators-on-windows
r/learnpython · June 15, 2014
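The linked thread is the classic Windows backslash trap: in an ordinary string literal, sequences like \t and \n inside a path are interpreted as tab and newline, so read_csv fails only for filenames that happen to start with those letters. A minimal illustration (the paths are hypothetical):

```python
broken = "C:\temp\new_data.csv"        # "\t" and "\n" silently became control characters
print("\t" in broken, "\n" in broken)  # → True True

# Any of these spellings avoid the problem:
raw = r"C:\temp\new_data.csv"          # raw string keeps the backslashes literal
slashes = "C:/temp/new_data.csv"       # Windows APIs and pandas accept forward slashes
```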
Top answer — 1 of 4 (234 votes): the read_csv answer quoted at the top of the page.
2 of 4 (36 votes)

To read a CSV file as a pandas DataFrame, you'll need to use pd.read_csv, which has sep=',' as the default.

But this isn't where the story ends; data exists in many different formats and is stored in different ways so you will often need to pass additional parameters to read_csv to ensure your data is read in properly.

Here's a table listing common scenarios encountered with CSV files, along with the appropriate argument to use for each. You will usually need some combination of the arguments below to read in your data.

┌───────────────────────────────────────────────────────┬───────────────────────┬────────────────────────────────────────────────────┐
│ pandas Implementation                                 │ Argument              │ Description                                        │
├───────────────────────────────────────────────────────┼───────────────────────┼────────────────────────────────────────────────────┤
│ pd.read_csv(..., sep=';')                             │ sep/delimiter         │ Read CSV with different separator¹                 │
│ pd.read_csv(..., delim_whitespace=True)               │ delim_whitespace      │ Read CSV with tab/whitespace separator             │
│ pd.read_csv(..., encoding='latin-1')                  │ encoding              │ Fix UnicodeDecodeError while reading²              │
│ pd.read_csv(..., header=None, names=['x', 'y', 'z'])  │ header and names      │ Read CSV without headers³                          │
│ pd.read_csv(..., index_col=[0])                       │ index_col             │ Specify which column to set as the index⁴          │
│ pd.read_csv(..., usecols=['x', 'y'])                  │ usecols               │ Read subset of columns                             │
│ pd.read_csv(..., thousands='.', decimal=',')          │ thousands and decimal │ Numeric data is in European format (e.g. 1.234,56) │
└───────────────────────────────────────────────────────┴───────────────────────┴────────────────────────────────────────────────────┘

Footnotes

  1. By default, read_csv uses a C parser engine for performance. The C parser can only handle single character separators. If your CSV has a multi-character separator, you will need to modify your code to use the 'python' engine. You can also pass regular expressions:

     df = pd.read_csv(..., sep=r'\s*\|\s*', engine='python')
    
  2. UnicodeDecodeError occurs when the data was stored in one encoding but read with a different, incompatible one. The most common encoding schemes are 'utf-8' and 'latin-1'; your data is likely to fit one of these.

  3. header=None specifies that the first row in the CSV is a data row rather than a header row, and names=[...] lets you specify a list of column names to assign to the DataFrame when it is created.

  4. "Unnamed: 0" occurs when a DataFrame with an unnamed index is saved to CSV and then read back. Instead of fixing the issue while reading, you can also fix it when writing by using

     df.to_csv(..., index=False)
    

There are other arguments I've not mentioned here, but these are the ones you'll encounter most frequently.
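Several of the table's rows can be combined in one call. Here's a sketch with invented data — a headerless, semicolon-separated file using European number formatting:

```python
import io
import pandas as pd

# Invented file: no header row, ';' separator, European-style 1.600,20 numbers
data = "2012-06-11;1.600,20;A\n2012-06-12;1.610,02;B\n"

df = pd.read_csv(
    io.StringIO(data),
    sep=";",                        # non-default separator
    header=None,                    # the first row is data, not column names
    names=["date", "price", "flag"],
    thousands=".",                  # '.' groups thousands ...
    decimal=",",                    # ... and ',' is the decimal mark
)
```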

GeeksforGeeks (geeksforgeeks.org)
Pandas Read CSV in Python - GeeksforGeeks
To access data from a CSV file, we use the read_csv() function from Pandas, which retrieves the data in the form of a DataFrame. First, we must import the Pandas library, then use it to load this data into a DataFrame.
Published   February 18, 2026
DataCamp (datacamp.com)
pandas read_csv() Tutorial: Importing Data | DataCamp
December 23, 2025 - For data available in a tabular format and stored as a CSV file, you can use pandas to read it into memory using the read_csv() function, which returns a pandas dataframe.
Reddit (reddit.com)
r/learnpython on Reddit: How to best import a csv file into pandas which is really 3 dataframes in one csv?
March 24, 2023

I work with some csv files that I get from an outside entity (i.e. there's no way to actually change the csv files themselves) that contain data I'd like to try to process more efficiently. However, the csv files are structured so that each is really 3 datasets in one file. For example, the first dataset has 46 columns, the second has 27 and the third has 15. The column datatypes don't match across the datasets either.

Thus, I'm trying to figure out the most efficient and clean way to import these files and have them split into their respective datasets. So far, the best way I've figured out to do this is by importing the entire file into 1 big dataset, then I subset the big dataset into 3 by searching for some substrings in the first column which can be used to figure out which dataset they belong to. Then I rename the columns and make the columns their appropriate datatype. However, this feels like I'm essentially importing the file twice and doesn't seem very clean.

I was wondering if there was a way for pandas to import the file and only read in rows that have a certain number of elements or something, or if you know of another more efficient way to read in a file like the one I mentioned it would be much appreciated!

ITNEXT (itnext.io)
The fastest way to read a CSV file in Pandas 2.0 | by Finn Andersen | ITNEXT
April 15, 2023 - The fastest way to read a CSV file in Pandas 2.0 It turns out that the fastest way to create a Pandas DataFrame from a CSV file is to use an entirely different library. In my previous article I …
Medium (deallen7.medium.com)
How to read CSV data from a URL into a Pandas DataFrame | by David Allen | Medium
July 26, 2022 - If you’ve ever read a CSV from local storage into your Jupyter notebook, this is going to be a breeze for you. It’s the exact same process. Except instead of passing in a path to the file on your computer, you’ll pass in the URL to the raw CSV. For this example, we’re going to use a raw CSV of US State-county-zip data from github.com: https://github.com/scpike/us-state-county-zip/blob/master/geo-data.csv ... Documentation and tutorials on Python, Pandas, Jupyter Notebook, and Data Analysis.
Hackr (hackr.io)
Pandas read.csv() Function | Docs With Examples
February 10, 2025 - Before using pd.read_csv(), it's essential you have Pandas installed and imported, and it's standard practice to import it with an alias: ... Explanation: This reads the CSV file data.csv and stores it as a DataFrame.
Medium (medium.com)
Pandas read_csv() Example | The Code Compass
October 11, 2024 - For instance, you can specify how ... an essential skill for data analysis in Python. With the pd.read_csv() function, you can quickly load data into a DataFrame, allowing you to manipulate and analyze it using the powerful tools ...
TutorialsPoint (tutorialspoint.com)
Python Pandas - Read CSV
The pandas.read_csv() method in the Pandas library is used to read a CSV file and convert the data into a Pandas DataFrame object.
Towards Data Science (towardsdatascience.com)
How to "read_csv" with Pandas | Towards Data Science
January 16, 2025 - One of the most widely used functions of Pandas is read_csv which reads comma-separated values (csv) files and creates a DataFrame.
GeeksforGeeks (geeksforgeeks.org)
How to read a CSV file to a Dataframe with custom delimiter in Pandas? - GeeksforGeeks
July 15, 2025 - Example 2: Using the read_csv() method with '_' as a custom delimiter. ...

    # Importing pandas library
    import pandas as pd

    # Load the data of example2.csv with '_' as custom delimiter into a Dataframe df
    df = pd.read_csv('example2.csv', sep='_', engine='python')

    # Print the Dataframe
    df
Reddit (reddit.com)
r/Python on Reddit: I wrote a detailed guide of how Pandas' read_csv() function actually works and the different engine options available, including new features in v2.0. Figured it might be of interest here!
March 30, 2023 - I don't know why you would expect a function called read_csv to be simple and parsimonious, though. CSV is not a standardized file format; there are probably as many variations on it as there are CSV files. I'm not saying pandas has the best API, but of course complex things have complex solutions.
GitHub (github.com)
kagglehub/README.md at main · Kaggle/kagglehub
dataset_load also supports pandas_kwargs which will be passed as keyword arguments to the pandas.read_* method. Some examples include:

    import kagglehub
    from kagglehub import KaggleDatasetAdapter

    # Load a DataFrame with a specific version of a CSV
    df = kagglehub.dataset_load(
        KaggleDatasetAdapter.PANDAS,
        "unsdsn/world-happiness/versions/1",
        "2016.csv",
    )

    # Load a DataFrame with specific columns from a parquet file
    df = kagglehub.dataset_load(
        KaggleDatasetAdapter.PANDAS,
        "robikscube/textocr-text-extraction-from-images-dataset",
        "annot.parquet",
        pandas_kwargs={"columns": ["image_id", "bbox", "points", "area"]},
    )

    # Load a dictionary of DataFrames from an Excel file where the keys are sheet names
    # and the values are DataFrames for each sheet's data.
Author   Kaggle
MQL5 (mql5.com)
Statistical Arbitrage Through Cointegrated Stocks (Final): Data Analysis with Specialized Database - MQL5 Articles
4 weeks ago - This will generate a one-million-row dataframe like this: Table 1. Sample of the synthetic tabular data for benchmarking SQLite x DuckDB · After saving this tabular data as CSV, we ask an in-memory SQLite to calculate the average price for each ticker.

    def benchmark_sqlite(filename):
        conn = sqlite3.connect(":memory:")  # Using memory for a fair speed test
        cursor = conn.cursor()
        # Load data
        df = pd.read_csv(filename)
        df.to_sql("prices", conn, index=False)
        start_time = time.perf_counter()
        query = "SELECT ticker, AVG(price) FROM prices GROUP BY ticker"
        cursor.execute(query)
        results = cursor.fetchall()
        end_time = time.perf_counter()
        conn.close()
        return end_time - start_time
Vultr Docs (docs.vultr.com)
Python Pandas read_csv() - Load CSV File | Vultr Docs
December 25, 2024 - Use read_csv() to load a CSV file into a DataFrame. ... Immediately upon execution, this code reads the CSV file located at 'path/to/your/file.csv' and loads the data into Pandas DataFrame df.
The Examples Book (the-examples-book.com)
TDM 10200: Project 6 - Manipulating Data :: The Examples Book
2.1 Save the 20 most popular airport destinations as top_20. 2.2 Display the airports that are in top_20. 2.3 Display the .head() of the dataframe result. For this question, read in the ice cream products file from /anvil/projects/tdm/data/icecream/combined/products.csv as ice_cream.