With the pandas library, this is as easy as using two commands!

df = pd.read_json()

read_json converts a JSON string to a pandas object (either a series or dataframe). Then:

df.to_csv()

This can either return a string or write directly to a CSV file; see the docs for to_csv.

Based on the verbosity of previous answers, we should all thank pandas for the shortcut.

For unstructured JSON see this answer.
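For nested records specifically, pandas also ships pd.json_normalize, which flattens inner dicts into dotted column names before the to_csv step. A quick sketch on made-up data:

```python
import pandas as pd

# Hypothetical nested records
records = [
    {"id": 1, "user": {"name": "Ann", "city": "Oslo"}},
    {"id": 2, "user": {"name": "Bob", "city": "Lima"}},
]

# Nested dicts become dotted column names
df = pd.json_normalize(records)
print(df.columns.tolist())
# ['id', 'user.name', 'user.city']
```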

EDIT: Someone asked for a working minimal example:

import pandas as pd

with open('jsonfile.json', encoding='utf-8') as inputfile:
    df = pd.read_json(inputfile)

df.to_csv('csvfile.csv', encoding='utf-8', index=False)
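A tiny demonstration of the return-a-string behavior mentioned above, on made-up data:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# With no path argument, to_csv returns the CSV text instead of writing a file
csv_text = df.to_csv(index=False)
print(csv_text)
```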
Answer from vmg on Stack Overflow
Top answer
1 of 16
280

2 of 16
152

First, your JSON has nested objects, so it normally cannot be directly converted to CSV. You need to change that to something like this:

[
    {
        "pk": 22,
        "model": "auth.permission",
        "codename": "add_logentry",
        "content_type": 8,
        "name": "Can add log entry"
    },
    ...
]

Here is my code to generate CSV from that:

import csv
import json

x = """[
    {
        "pk": 22,
        "model": "auth.permission",
        "fields": {
            "codename": "add_logentry",
            "name": "Can add log entry",
            "content_type": 8
        }
    },
    {
        "pk": 23,
        "model": "auth.permission",
        "fields": {
            "codename": "change_logentry",
            "name": "Can change log entry",
            "content_type": 8
        }
    },
    {
        "pk": 24,
        "model": "auth.permission",
        "fields": {
            "codename": "delete_logentry",
            "name": "Can delete log entry",
            "content_type": 8
        }
    }
]"""

x = json.loads(x)

x = json.loads(x)

with open("test.csv", "w", newline="") as out:
    f = csv.writer(out)

    # Write the CSV header; remove this line if you don't need it
    f.writerow(["pk", "model", "codename", "name", "content_type"])

    for item in x:
        f.writerow([item["pk"],
                    item["model"],
                    item["fields"]["codename"],
                    item["fields"]["name"],
                    item["fields"]["content_type"]])

You will get output as:

pk,model,codename,name,content_type
22,auth.permission,add_logentry,Can add log entry,8
23,auth.permission,change_logentry,Can change log entry,8
24,auth.permission,delete_logentry,Can delete log entry,8
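If you'd rather not hard-code the header row, one option (a sketch, assuming every record has the same pk/model/fields layout) is to flatten each record first and let csv.DictWriter derive the columns:

```python
import csv
import io
import json

rows = json.loads("""[
    {"pk": 22, "model": "auth.permission",
     "fields": {"codename": "add_logentry", "name": "Can add log entry", "content_type": 8}},
    {"pk": 23, "model": "auth.permission",
     "fields": {"codename": "change_logentry", "name": "Can change log entry", "content_type": 8}}
]""")

# Merge the nested "fields" dict into each top-level record
flat = [{"pk": r["pk"], "model": r["model"], **r["fields"]} for r in rows]

# DictWriter takes the column names from the first flattened record
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(flat[0].keys()))
writer.writeheader()
writer.writerows(flat)
print(buf.getvalue())
```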
๐ŸŒ
Python.org
discuss.python.org โ€บ python help
How to transform a JSON file into a CSV one in Python? - Python Help - Discussions on Python.org
April 8, 2024 - Hi Everyone, I have a quick question. Thanks to a previous post : Python: Extract Data from an Interactive Map on the Web, with Several Years, into a CSV file I was been able to extract JSON data from the web. Thanks again to @FelixLeg & @kknechtel to their useful help and advices.
Discussions

How to get Json keys as columns in a csv with python?
I am getting JSON-Data from the AzureDevOps-API and I want to save the data in a csv-file with the JSON-Keys as column-names. I have the following code: import requests import pandas api_url = "***" Headers = { "Authorโ€ฆ More on discuss.python.org
๐ŸŒ discuss.python.org
0
0
September 17, 2023
Pandas json to csv with column names
Whoever told you you needed pandas to read a json file is a jackass! pandas is a data analysis library for like data scientists; json is a perfectly standard programming file format. Vanilla Python is fine. I'm not sure the shape of the JSON that you're reading, so I can't give you exact code to run, but it shouldn't be that much more complicated than something like 1 import csv 2 import json 3 4 with open('jsonfile.json') as infile: 5 data = json.load(infile) 6 7 8 with open('csv-outfile.csv', 'w') as outfile: 9 writer = csv.writer(outfile) 10 headers = data[0].keys() 11 writer.writerow(headers) 12 for record in data: 13 writer.writerow(record.values()) More on reddit.com
๐ŸŒ r/learnpython
7
1
April 1, 2021
python - Using Pandas to convert JSON to CSV with specific fields - Stack Overflow
I am currently trying to convert a JSON file to a CSV file using Pandas. The codes that I'm using now are able to convert the JSON to a CSV file. import pandas as pd json_data = pd.read_json("out... More on stackoverflow.com
๐ŸŒ stackoverflow.com
September 18, 2018
How to convert JSON to csv in Python?
You have two steps: flatten the json code write the CSV file. You can create a recursive function to flatten the json, or use the library: https://github.com/amirziai/flatten flatten_json After that is completed, writing the CSV is trivial. More on reddit.com
๐ŸŒ r/learnpython
3
1
June 17, 2024
๐ŸŒ
GeeksforGeeks
geeksforgeeks.org โ€บ python โ€บ convert-json-to-csv-in-python
Convert JSON to CSV in Python - GeeksforGeeks
July 12, 2025 - Explanation: This code reads a JSON file (data.json), extracts the emp_details list, and writes it into a CSV file (data_file.csv). It writes the headers (keys) from the first employee and then appends the employee details (values) as rows in the CSV.
๐ŸŒ
Python.org
discuss.python.org โ€บ python help
How to get Json keys as columns in a csv with python? - Python Help - Discussions on Python.org
September 17, 2023 - I am getting JSON-Data from the AzureDevOps-API and I want to save the data in a csv-file with the JSON-Keys as column-names. I have the following code: import requests import pandas api_url = "***" Headers = { "Authorization" : "***" } response = requests.get(api_url, headers=Headers) obj = pandas.read_json(response.text, orient='values') obj.to_csv('output1.csv') I am getting the following csv-output: So the JSON-Keys are not the column names and everything is in Column A nothing in Co...
๐ŸŒ
LearnPython.com
learnpython.com โ€บ blog โ€บ python-json-to-csv
How to Convert JSON to CSV in Python | LearnPython.com
JSON and CSV are two different file formats, but you can convert between them in Python. Weโ€™ll show you how in this article.
๐ŸŒ
Onlinetools1
onlinetools1.github.io โ€บ blogs โ€บ json2csv
convert json to csv in python - Step-by-Step Guide
The first row typically contains column headers (Name, Age, City) to label the data in each column. import csv import json # Sample JSON data json_data = '[{"Name": "John", "Age": 30, "City": "New York"}, {"Name": "Alice", "Age": 25, "City": "San Francisco"}]' # Load JSON data data = json.loads(json_data) # Specify CSV file name csv_file = 'output.csv' # Open CSV file in write mode with open(csv_file, 'w', newline='') as csvfile: # Create CSV writer csv_writer = csv.writer(csvfile) # Write header csv_writer.writerow(data[0].keys()) # Write data for row in data: csv_writer.writerow(row.values()) print(f'Conversion successful.
๐ŸŒ
Spark By {Examples}
sparkbyexamples.com โ€บ home โ€บ pandas โ€บ pandas โ€“ convert json to csv
Pandas - Convert JSON to CSV - Spark By {Examples}
November 1, 2024 - You can select specific columns to include in the CSV file when using the to_csv() method in Pandas. If you only want to export a subset of columns from your DataFrame, you can do so by providing a list of column names to the columns parameter.
Find elsewhere
๐ŸŒ
Enterprise DNA
blog.enterprisedna.co โ€บ python-convert-json-to-csv
Python: Convert JSON to CSV, Step-by-Step Guide โ€“ Master Data Skills + AI
In this example, pd.read_json reads the JSON data from the file, and df.to_csv writes the resulting DataFrame to a CSV file. The index=False parameter ensures that the index column is not included in the CSV output.
๐ŸŒ
Gigasheet
gigasheet.com โ€บ post โ€บ convert-json-to-csv-python
How to Convert JSON to CSV in Python
You can use a language like Python and code the conversion using libraries like Pandas. If youโ€™re not fond of coding, weโ€™ve got a much easier route. You can use Gigasheet. In this article, weโ€™ll walk through you through both ways to convert JSON to CSV, using Python code as well as the the #NoCode way of converting JSON to CSV.
๐ŸŒ
GeeksforGeeks
geeksforgeeks.org โ€บ python โ€บ convert-nested-json-to-csv-in-python
Convert nested JSON to CSV in Python - GeeksforGeeks
July 23, 2025 - Therefore, the column "education.graduation.major" was simply renamed to "graduation". After renaming the columns, the to_csv() method saves the pandas dataframe object as CSV to the provided file location.
๐ŸŒ
Verpex
verpex.com โ€บ blog โ€บ how-to-convert-json-to-csv-in-python
How to Convert JSON to CSV in Python
Use to_csv() parameters like sep to change the delimiter or columns to select specific columns. By following this approach, even the most complex JSON structures can be converted into a clean and usable CSV file.
Top answer
1 of 1
3

I don't think json_normalize is intended to work on this specific orientation. I could be wrong but from the documentation, it appears that normalization means "Deal with lists within each dictionary".

Assume data is

data = json.load(open('out1.json'))['events']

Look at the first entry

data[0]['timestamp']

1537190572023

json_normalize wants this to be a list

[{'timestamp': 1537190572023}]

Create augmented data2

I don't actually recommend this approach.
If we create data2 accordingly:

data2 = [{**d, **{'timestamp': [{'timestamp': d['timestamp']}]}} for d in data]

We can use json_normalize

json_normalize(
    data2, 'timestamp',
    [['event', 'json', 'level'], ['event', 'json', 'message']]
)

       timestamp event.json.level                                 event.json.message
0  1537190572023             INFO  Disabled camera with QR scan on  by 80801234 a...
1  1537190528619             INFO                Employee number saved successfully.

Comprehension

I think it's simpler to just do

pd.DataFrame([
    (d['timestamp'],
     d['event']['json']['level'],
     d['event']['json']['message'])
    for d in data
], columns=['timestamp', 'level', 'message'])

       timestamp level                                            message
0  1537190572023  INFO  Disabled camera with QR scan on  by 80801234 a...
1  1537190528619  INFO                Employee number saved successfully.

json_normalize

But without the fancy arguments

json_normalize(data).pipe(
    lambda d: d[['timestamp']].join(
        d.filter(like='event.json')
    )
)

       timestamp event.json.level                                 event.json.message
0  1537190572023             INFO  Disabled camera with QR scan on  by 80801234 a...
1  1537190528619             INFO                Employee number saved successfully.
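On current pandas, json_normalize is exposed as pd.json_normalize, and the dotted-column flattening above can be reproduced on a small stand-in sample (hypothetical data mirroring the events structure discussed here):

```python
import pandas as pd

# Hypothetical sample in the shape of data above
data = [
    {"timestamp": 1537190572023,
     "event": {"json": {"level": "INFO", "message": "Disabled camera with QR scan"}}},
    {"timestamp": 1537190528619,
     "event": {"json": {"level": "INFO", "message": "Employee number saved successfully."}}},
]

# Nested dicts flatten into dotted column names
df = pd.json_normalize(data)
print(df.columns.tolist())
# ['timestamp', 'event.json.level', 'event.json.message']
```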
๐ŸŒ
Geekflare
geekflare.com โ€บ development โ€บ how to convert json to csv in python: a step-by-step guide from experts
How to Convert JSON to CSV in Python: A Step-by-Step Guide from Experts
January 17, 2025 - Step 1: To convert JSON files to CSV, you first need to import Pandas in Python. ... Step 2: Load the JSON data into Pandas DataFrame. ... Step 3: Write the data to CSV file. ... The file named โ€˜csv_dataโ€™ will be created in the current working ...
๐ŸŒ
Reddit
reddit.com โ€บ r/learnpython โ€บ how to convert json to csv in python?
r/learnpython on Reddit: How to convert JSON to csv in Python?
June 17, 2024 -

Hey everybody,

I try to convert a huge file (~65 GB) into smaller subsets. The goal is to split the files into smaller subsets, e.g. 1 mil tweets per file, and convert this data to a csv format. I currently have a working splitting code that splits the ndjson file into smaller ndjson files, but I have trouble converting the data to csv. The important part is to create columns for each existing variable, so columns named __crawled_url or w1_balanced. There are quite a few nested variables in the data, like w1_balanced being contained in the variable theme_topic, that need to be flattened.

Splitting code:

import json
#function to split big ndjson file to multiple smaller files
def split_file(input_file, lines_per_file): #variables that the function calls
    file_count = 0
    line_count = 0
    output_lines = []
    with open(input_file, 'r', encoding="utf8") as infile:
        for line in infile:
            output_lines.append(line)
            line_count += 1
            if line_count == lines_per_file:
                with open(f'1mio_split_{file_count}.ndjson', 'w', encoding="utf8") as outfile:
                    outfile.writelines(output_lines)
                file_count += 1
                line_count = 0
                output_lines = []
        #handle any remaining lines
        if output_lines:
            with open(f'1mio_split_{file_count}.ndjson', 'w',encoding="utf8") as outfile:
                outfile.writelines(output_lines)
#file containing tweets
input_file = input("path to big file:" )
#example filepath: C:/Users/YourName/Documents/tweet.ndjson
#how many lines/tweets should the new file contain?
lines_per_file = int(input ("Split after how many lines?: "))
split_file(input_file, lines_per_file)
print("Splitting done!")

Here are 2 sample lines from the data I use:

[{"__crawled_url":"https://twitter.com/example1","theme_topic":{"w1_balanced":{"label":"__label__a","confidence":0.3981},"w5_balanced":{"label":"__label__c","confidence":1}},"author":"author1","author_userid":"116718988","author_username":"author1","canonical_url":"https://twitter.com/example1","collected_by":"User","collection_method":"tweety 1.0.9.4","collection_time":"2024-05-27T14:40:32","collection_time_epoch":1716813632,"isquoted":false,"isreply":true,"isretweet":false,"language":"de","mentioning/replying":"twitteruser","num_likes":"0","num_retweets":"0","plain_text":"@twitteruser here is an exmaple text ๐Ÿค”","published_time":"2024-04-18T20:14:51","published_time_epoch":1713471291,"published_time_original":"2024-04-18 20:14:51+00:00","replied_tweet":{"author":"Twitter User","author_userid":"1053198649700827136","author_username":"twitteruser"},"spacy_annotations":{"de_core_news_lg":{"noun_chunks":[{"text":"@twitteruser","start_char":0,"end_char":9},{"text":"more exapmle text","start_char":20,"end_char":34},{"text":"Gel","start_char":40,"end_char":43},{"text":"Haar","start_char":47,"end_char":51}],"named_entities":[{"text":"@twitteruser","start_char":0,"end_char":9,"label_":"MISC"}]},"xx_ent_wiki_sm":{"named_entities":{}},"da_core_news_lg":{"noun_chunks":{},"named_entities":{}},"en_core_web_lg":{"noun_chunks":{},"named_entities":{}},"fr_core_news_lg":{"noun_chunks":{},"named_entities":{}},"it_core_news_lg":{"noun_chunks":{},"named_entities":{}},"pl_core_news_lg":{"named_entities":{}},"es_core_news_lg":{"noun_chunks":{},"named_entities":{}},"fi_core_news_lg":{"noun_chunks":{},"named_entities":{}}},"tweet_id":"1781053802398814682","hashtags":{},"outlinks":{},"quoted_tweet":{"outlinks":{},"hashtags":{},"mentioning/replying":{},"replied_tweet":{}}}]

[{"__crawled_url":"https://twitter.com/example2","theme_topic":{"w1_balanced":{"label":"__label__a","confidence":0.3981},"w5_balanced":{"label":"__label__c","confidence":1}},"author":"author2","author_userid":"116712288","author_username":"author2","canonical_url":"https://twitter.com/example2","collected_by":"User","collection_method":"tweety 1.0.9.4","collection_time":"2024-05-27T14:40:32","collection_time_epoch":1716813632,"isquoted":false,"isreply":true,"isretweet":false,"language":"de","mentioning/replying":"twitteruser","num_likes":"0","num_retweets":"0","plain_text":"@twitteruser here is another exmaple text ๐Ÿค”","published_time":"2024-04-18T20:14:51","published_time_epoch":1713471291,"published_time_original":"2024-04-18 20:14:51+00:00","replied_tweet":{"author":"Twitter User","author_userid":"1053198649700827136","author_username":"twitteruser"},"spacy_annotations":{"de_core_news_lg":{"noun_chunks":[{"text":"@twitteruser","start_char":0,"end_char":9},{"text":"more exapmle text","start_char":20,"end_char":34},{"text":"Gel","start_char":40,"end_char":43},{"text":"Haar","start_char":47,"end_char":51}],"named_entities":[{"text":"@twitteruser","start_char":0,"end_char":9,"label_":"MISC"}]},"xx_ent_wiki_sm":{"named_entities":{}},"da_core_news_lg":{"noun_chunks":{},"named_entities":{}},"en_core_web_lg":{"noun_chunks":{},"named_entities":{}},"fr_core_news_lg":{"noun_chunks":{},"named_entities":{}},"it_core_news_lg":{"noun_chunks":{},"named_entities":{}},"pl_core_news_lg":{"named_entities":{}},"es_core_news_lg":{"noun_chunks":{},"named_entities":{}},"fi_core_news_lg":{"noun_chunks":{},"named_entities":{}}},"tweet_id":"1781053802398814682","hashtags":{},"outlinks":{},"quoted_tweet":{"outlinks":{},"hashtags":{},"mentioning/replying":{},"replied_tweet":{}}}]

As you can see, the lines contain stuff like emojis and are in different languages, so encoding="utf8" must be included while opening the file. Here are a few examples of what I tried and the error messages I get. I should also mention that, since every line is its own list, just calling the elements like with a normal json object didn't work.

Thanks a lot for every answer and even reading this post!

#try1
import json
import csv

data = "C:/Users/Sample-tweets.ndjson"
json_data = json.loads(data)
csv_file ="try3.csv"
csv_obj = open(csv_file, "w")
csv_writer = csv.writer(csv_obj)
header = json_data[0].keys()
csv_writer.writerow(header)
for item in json_data:
    csv_writer.writerow(item.values())
csv_obj.close()
#raise JSONDecodeError("Expecting value", s, err.value) from None
#json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)


#try2
import json
import csv

with open('Sample-tweets.ndjson', encoding="utf8") as ndfile:
    data = json.load(ndfile)

csv_data = data['emp_details']
data_file = open('try1.csv', 'w', encoding="utf8")
csv_writer = csv.writer(data_file)
count = 0
for data in csv_data:
    if count == 0:
        header = emp.keys()
csv_writer.writerow(header) #spacing error?! can't even run the script 
        count += 1
    csv_writer.writerow(emp.values())
data_file.close()

with open('Sample-tweets.ndjson', encoding="utf8") as ndfile:
    jsondata = json.load(ndfile)

data_file = open('try2.csv', 'w', newline='', encoding="utf8")
csv_writer = csv.writer(data_file)

count = 0
for data in ndfile:
    if count == 0:
        header = data.keys()
        csv_writer.writerow(header)
        count += 1
    csv_writer.writerow(data.values())
data_file.close()
#error message: raise JSONDecodeError("Extra data", s, end)
#json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 1908)



#try3 to see if the auto dictionary works
import json

output_lines=[]
with open('C:/Users/Sample1-tweets.ndjson', 'r', encoding="utf8") as f:
    json_in=f.read()
json_in=json.loads(json_in)
print(json_in[2])
#error message: raise JSONDecodeError("Extra data", s, end)
#json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 1908)
#->same error message as above
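The "Extra data" errors above are what json.loads raises when it is handed more than one JSON document at once; ndjson has to be parsed one line at a time. A minimal sketch (the file handling, helper names, and two-line sample are assumptions; the flattening mirrors the nested theme_topic structure in the sample lines):

```python
import csv
import json

def flatten(d, prefix=""):
    """Recursively flatten nested dicts into dotted keys."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, key + "."))
        else:
            out[key] = v
    return out

def ndjson_to_rows(lines):
    rows = []
    for line in lines:
        if not line.strip():
            continue
        obj = json.loads(line)      # parse ONE JSON document per line
        if isinstance(obj, list):   # each sample line wraps the tweet in a list
            obj = obj[0]
        rows.append(flatten(obj))
    return rows

# Hypothetical two-line ndjson input mirroring the sample data above
sample = [
    '[{"__crawled_url": "https://twitter.com/example1", "theme_topic": {"w1_balanced": {"label": "__label__a"}}}]',
    '[{"__crawled_url": "https://twitter.com/example2", "theme_topic": {"w1_balanced": {"label": "__label__c"}}}]',
]
rows = ndjson_to_rows(sample)

with open("tweets.csv", "w", newline="", encoding="utf8") as out:
    writer = csv.DictWriter(out, fieldnames=sorted({k for r in rows for k in r}))
    writer.writeheader()
    writer.writerows(rows)
```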
๐ŸŒ
Automate the Boring Stuff
automatetheboringstuff.com โ€บ 2e โ€บ chapter16
Chapter 16 โ€“ Working with CSV Files and JSON Data
They are easy for programs to parse while still being human readable, so they are often used for simple spreadsheets or web app data. The csv and json modules greatly simplify the process of reading and writing to CSV and JSON files. The last few chapters have taught you how to use Python to parse information from a wide variety of file formats.
๐ŸŒ
Dadroit
dadroit.com โ€บ blog โ€บ json-to-csv
How To Convert JSON to CSV File: A Comprehensive Guide
September 21, 2023 - The resulting CSV data will be displayed for you to copy or download. Care must be taken to provide valid JSON data and correctly set the Flatten option for nested structures. Handling of nested objects and arrays. Selecting specific fields for the CSV. CSV column names mapping to JSON field names.
๐ŸŒ
Medium
medium.com โ€บ @alexaae9 โ€บ how-to-convert-json-to-csv-in-python-a-complete-guide-d039f0d5a9a5
How to Convert JSON to CSV in Python: A Complete Guide | by Alexander Stock | Medium
September 15, 2025 - This approach allows you to handle deeply nested JSON objects, converting them into CSV columns like supplier_name and supplier_address_street.
Top answer
1 of 2
2

To keep only the desired columns, try this:

cols_to_keep = ['col1', 'col2', 'col3']
df = df[cols_to_keep]
df

You can also read in only the columns you need like this

df = pd.read_csv('test_old.csv', usecols = ['col1', 'col2', 'col3'],   
                  dtype={"col1" : str, "col2" : str})
2 of 2
2

You can do all the grouping in pandas.

The idea behind this solution:

Create a new column subset that has the subset dictionary you want.

Group the dataframe by col1; this connects the subset dictionaries to each value of col1. Extract the resulting series subset.

Loop through this series and collect the data for your json in a list.

Convert that list to json with Python native tools.

import pandas as pd
import json

df = pd.read_csv('test_old.csv', sep=',',
       dtype={
        "col1" : str,
        "col2" : str,
        "col3" : str
    })

# print(df) - compare with example

df['subset'] = df.apply(lambda x: 
                 {'col2': x.col2,
                  'col3': x.col3 }, axis=1)

s = df.groupby('col1').agg(lambda x: list(x))['subset']

results = []

for col1, subset in s.items():  # .iteritems() in older pandas versions
    results.append({'col1': col1, 'subset': subset})

with open('ExpectedJsonFile.json', 'w') as outfile:
    outfile.write(json.dumps(results, indent=4))

UPDATE: Since there's a problem with the example, insert a print(df) line after the pd.read_csv and compare.

The imported data frame should show as:

    col1        col2 state col3  val2  val3  val4  val5
0  95110  2015-05-01    CA   50  30.0   5.0   3.0     3
1  95110  2015-06-01    CA   67  31.0   5.0   3.0     4
2  95110  2015-07-01    CA   97  32.0   5.0   3.0     6

The final result shows like this

[
    {
        "col1": "95110",
        "subset": [
            {
                "col2": "2015-05-01",
                "col3": "50"
            },
            {
                "col2": "2015-06-01",
                "col3": "67"
            },
            {
                "col2": "2015-07-01",
                "col3": "97"
            }
        ]
    }
]

Tested with Python 3.5.6 32bit, Pandas 0.23.4, Windows7
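On current pandas the same grouping can be written more compactly with to_dict('records'); a sketch on stand-in data (the DataFrame below is assumed from the example, in place of test_old.csv):

```python
import json
import pandas as pd

# Stand-in for the imported test_old.csv shown above
df = pd.DataFrame({
    "col1": ["95110", "95110", "95110"],
    "col2": ["2015-05-01", "2015-06-01", "2015-07-01"],
    "col3": ["50", "67", "97"],
})

# One dict per group, with the subset rows converted via to_dict('records')
results = [
    {"col1": col1, "subset": grp[["col2", "col3"]].to_dict("records")}
    for col1, grp in df.groupby("col1")
]
print(json.dumps(results, indent=4))
```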