In the pandas example (below), what do the brackets mean? Is there a logic to be followed to go deeper with the []? [...]

result = json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])

Each string or list of strings in the ['state', 'shortname', ['info', 'governor']] value is a path to an element to include, in addition to the selected rows. The second argument to json_normalize() (record_path, set to 'counties' in the documentation example) tells the function how to select the elements from the input data structure that make up the rows of the output, and the meta paths add further metadata that will be included with each of those rows. Think of these as table joins in a database, if you will.

The input for the US States documentation example has two dictionaries in a list, and both of these dictionaries have a counties key that references another list of dicts:

>>> data = [{'state': 'Florida',
...          'shortname': 'FL',
...         'info': {'governor': 'Rick Scott'},
...         'counties': [{'name': 'Dade', 'population': 12345},
...                      {'name': 'Broward', 'population': 40000},
...                      {'name': 'Palm Beach', 'population': 60000}]},
...         {'state': 'Ohio',
...          'shortname': 'OH',
...          'info': {'governor': 'John Kasich'},
...          'counties': [{'name': 'Summit', 'population': 1234},
...                       {'name': 'Cuyahoga', 'population': 1337}]}]
>>> pprint(data[0]['counties'])
[{'name': 'Dade', 'population': 12345},
 {'name': 'Broward', 'population': 40000},
 {'name': 'Palm Beach', 'population': 60000}]
>>> pprint(data[1]['counties'])
[{'name': 'Summit', 'population': 1234},
 {'name': 'Cuyahoga', 'population': 1337}]

Between them there are 5 rows of data to use in the output:

>>> json_normalize(data, 'counties')
         name  population
0        Dade       12345
1     Broward       40000
2  Palm Beach       60000
3      Summit        1234
4    Cuyahoga        1337

The meta argument then names elements that live next to those counties lists, and those are merged in separately. The values for those meta paths in the data[0] dictionary are ('Florida', 'FL', 'Rick Scott'), and for data[1] they are ('Ohio', 'OH', 'John Kasich'), so you see those values attached to the counties rows that came from the same top-level dictionary, repeated 3 and 2 times respectively:

>>> data[0]['state'], data[0]['shortname'], data[0]['info']['governor']
('Florida', 'FL', 'Rick Scott')
>>> data[1]['state'], data[1]['shortname'], data[1]['info']['governor']
('Ohio', 'OH', 'John Kasich')
>>> json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
         name  population    state shortname info.governor
0        Dade       12345  Florida        FL    Rick Scott
1     Broward       40000  Florida        FL    Rick Scott
2  Palm Beach       60000  Florida        FL    Rick Scott
3      Summit        1234     Ohio        OH   John Kasich
4    Cuyahoga        1337     Ohio        OH   John Kasich

So, if you pass in a list for the meta argument, then each element in the list is a separate path, and each of those separate paths identifies data to add to the rows in the output.
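A side note on those nested meta paths: the output column name for a path like ['info', 'governor'] is produced by joining the path segments with the sep argument (default '.'), which is why the column above is called info.governor. A minimal sketch with a trimmed-down version of the documentation data:

```python
import pandas as pd

data = [{'state': 'Florida',
         'info': {'governor': 'Rick Scott'},
         'counties': [{'name': 'Dade', 'population': 12345}]},
        {'state': 'Ohio',
         'info': {'governor': 'John Kasich'},
         'counties': [{'name': 'Summit', 'population': 1234}]}]

# With sep='_', the nested meta path produces a column named
# 'info_governor' instead of the default 'info.governor'.
df = pd.json_normalize(data, 'counties', ['state', ['info', 'governor']],
                       sep='_')
print(df.columns.tolist())
# ['name', 'population', 'state', 'info_governor']
```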

In your example JSON, there is only one nested list suitable for the first argument, playing the role that 'counties' played in the example: the nested 'authors' key. You'd have to extract each ['_source', 'authors'] path, after which you can add other keys from the parent object to augment those rows.

The meta argument (the third argument) then pulls in the _id key from the outermost objects, followed by the nested ['_source', 'title'] and ['_source', 'journal'] paths.

The record_path argument takes the authors lists as the starting point; these look like:

>>> d['hits']['hits'][0]['_source']['authors']   # this value is None, and is skipped
>>> d['hits']['hits'][1]['_source']['authors']
[{'affiliations': ['Punjabi University'],
  'author_id': '780E3459',
  'author_name': 'munish puri'},
 {'affiliations': ['Punjabi University'],
  'author_id': '48D92C79',
  'author_name': 'rajesh dhaliwal'},
 {'affiliations': ['Punjabi University'],
  'author_id': '7D9BD37C',
  'author_name': 'r s singh'}]
>>> d['hits']['hits'][2]['_source']['authors']
[{'author_id': '7FF872BC',
  'author_name': 'barbara eileen ryan'}]
>>> # etc.

and so gives you the following rows:

>>> json_normalize(d['hits']['hits'], ['_source', 'authors'])
           affiliations author_id          author_name
0  [Punjabi University]  780E3459          munish puri
1  [Punjabi University]  48D92C79      rajesh dhaliwal
2  [Punjabi University]  7D9BD37C            r s singh
3                   NaN  7FF872BC  barbara eileen ryan
4                   NaN  0299B8E9     fraser j harbutt
5                   NaN  7DAB7B72   richard m freeland

and then we can use the third meta argument to add more columns like _id, _source.title and _source.journal, using ['_id', ['_source', 'journal'], ['_source', 'title']]:

>>> json_normalize(
...     d['hits']['hits'],
...     ['_source', 'authors'],
...     ['_id', ['_source', 'journal'], ['_source', 'title']]
... )
           affiliations author_id          author_name       _id   \
0  [Punjabi University]  780E3459          munish puri  7AF8EBC3  
1  [Punjabi University]  48D92C79      rajesh dhaliwal  7AF8EBC3
2  [Punjabi University]  7D9BD37C            r s singh  7AF8EBC3
3                   NaN  7FF872BC  barbara eileen ryan  7521A721
4                   NaN  0299B8E9     fraser j harbutt  7DAEB9A4
5                   NaN  7DAB7B72   richard m freeland  7B3236C5

                                     _source.journal
0  Journal of Industrial Microbiology & Biotechno...
1  Journal of Industrial Microbiology & Biotechno...
2  Journal of Industrial Microbiology & Biotechno...
3                     The American Historical Review
4                     The American Historical Review
5                     The American Historical Review

                                       _source.title  \
0  Development of a stable continuous flow immobi...
1  Development of a stable continuous flow immobi...
2  Development of a stable continuous flow immobi...
3  Feminism and the women's movement : dynamics o...
4  The iron curtain : Churchill, America, and the...
5  The Truman Doctrine and the origins of McCarth...
Answer from Martijn Pieters on Stack Overflow
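One practical wrinkle worth knowing here: by default, json_normalize() raises a KeyError when a meta path is missing from one of the top-level objects; passing errors='ignore' fills those gaps with NaN instead. A small sketch with hypothetical records modeled on the structure above:

```python
import pandas as pd

# Hypothetical records: the second one has no '_source.journal' key.
hits = [
    {'_id': 'A1',
     '_source': {'journal': 'Some Journal',
                 'authors': [{'author_name': 'jane doe'}]}},
    {'_id': 'B2',
     '_source': {'authors': [{'author_name': 'john roe'}]}},
]

# errors='ignore' turns missing meta values into NaN rather than raising.
df = pd.json_normalize(hits, ['_source', 'authors'],
                       ['_id', ['_source', 'journal']],
                       errors='ignore')
print(df)
```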

You can also have a look at the library flatten_json, which does not require you to write column hierarchies as in json_normalize:

from flatten_json import flatten
import pandas as pd

data = d['hits']['hits']  # d is the deserialized JSON from the question
dict_flattened = (flatten(record, '.') for record in data)
df = pd.DataFrame(dict_flattened)
print(df)

See https://github.com/amirziai/flatten.
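For comparison: plain pd.json_normalize() with no extra arguments also flattens nested dicts without any declared hierarchy; what flatten_json adds on top is unrolling lists into numbered keys (e.g. b.d.0, b.d.1). A quick sketch of the pandas side of that comparison:

```python
import pandas as pd

record = {'a': 1, 'b': {'c': 2, 'd': [3, 4]}}

# Nested dicts are flattened into dotted columns automatically,
# but the list under 'b.d' stays as a single object cell.
df = pd.json_normalize(record)
print(df.columns.tolist())
# ['a', 'b.c', 'b.d']
```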

For this kind of input, a list of dictionaries whose nested values are themselves dictionaries rather than lists, you can just pass data without any extra parameters.

df = pd.json_normalize(data)  # pd.io.json.json_normalize() in pandas < 1.0
df

   complete    mid.c    mid.h    mid.l    mid.o                  time  volume
0      True  119.743  119.891  119.249  119.341  1488319200.000000000   14651
1      True  119.893  119.954  119.552  119.738  1488348000.000000000   10738
2      True  119.946  120.221  119.840  119.888  1488376800.000000000   10041

If you want to change the column order, use df.reindex:

df = df.reindex(columns=['time', 'volume', 'complete', 'mid.h', 'mid.l', 'mid.c', 'mid.o'])
df

                   time  volume  complete    mid.h    mid.l    mid.c    mid.o
0  1488319200.000000000   14651      True  119.891  119.249  119.743  119.341
1  1488348000.000000000   10738      True  119.954  119.552  119.893  119.738
2  1488376800.000000000   10041      True  120.221  119.840  119.946  119.888

The data in the OP (after being deserialized from a JSON string, preferably with json.load()) is a list of nested dictionaries, which is an ideal structure for pd.json_normalize(): it takes a list of dictionaries and flattens each one into a single row. The length of the list therefore determines the number of rows, and the total number of key-value pairs across the dictionaries determines the number of columns.

However, if a value under some key is a list, that no longer holds, because presumably each item in those lists needs its own row. For example, if the my_data.json file looks like this:

# my_data.json
[
    {"price": {"mid": ["119.743", "119.891", "119.341"], "time": "123"}},
    {"price": {"mid": ["119.893", "119.954", "119.552"], "time": "456"}},
    {"price": {"mid": ["119.946", "120.221", "119.840"], "time": "789"}}
]

then you'll want each value in those lists to become its own row. In that case, you can pass the path to those lists as the record_path= argument. You can also attach the accompanying metadata to each record by passing its path as the meta= argument.

# deserialize json into a python data structure
import json
with open('my_data.json', 'r') as f:
    data = json.load(f)

# normalize the python data structure
df = pd.json_normalize(data, record_path=['price', 'mid'], meta=[['price', 'time']], record_prefix='mid.')

Ultimately, pd.json_normalize() cannot handle anything much more complex than this kind of structure. For example, it cannot add another piece of metadata to the example above if that metadata is nested inside another dictionary. Depending on the data, you'll most probably need a recursive function to parse it (pd.json_normalize() is itself recursive, but it targets the general case and won't work for many specific structures).
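Such a recursive function might look like the following sketch (flatten_record is a hypothetical helper, not a pandas API); it flattens nested dicts into dotted keys and deliberately leaves lists alone, so you can decide per column how to handle them:

```python
import pandas as pd

def flatten_record(obj, prefix=''):
    """Recursively flatten nested dicts into dotted keys.

    A minimal sketch: lists are left untouched so that you can
    choose per column whether to explode them into rows.
    """
    out = {}
    for key, value in obj.items():
        name = f'{prefix}{key}'
        if isinstance(value, dict):
            out.update(flatten_record(value, name + '.'))
        else:
            out[name] = value
    return out

records = [{'price': {'mid': ['119.743'], 'time': '123'}}]
df = pd.DataFrame([flatten_record(r) for r in records])
print(df.columns.tolist())
# ['price.mid', 'price.time']
```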

Often, you'll need a combination of explode(), pd.DataFrame(col.tolist()), etc. to parse the data completely.
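That combination can be sketched as follows, with illustrative column names: explode() gives each list item its own row, and pd.DataFrame(col.tolist()) then expands the dicts in that column into columns of their own:

```python
import pandas as pd

# A column whose cells hold lists of dicts, as left behind by a
# partial normalize; the names here are purely illustrative.
df = pd.DataFrame({'id': [1, 2],
                   'tags': [[{'k': 'a'}, {'k': 'b'}], [{'k': 'c'}]]})

# One row per list item...
exploded = df.explode('tags', ignore_index=True)
# ...then expand the dicts into their own columns.
expanded = pd.DataFrame(exploded['tags'].tolist())
out = exploded.drop(columns='tags').join(expanded)
print(out)
#    id  k
# 0   1  a
# 1   1  b
# 2   2  c
```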

Pandas also has a convenience function, pd.read_json(), but it's even more limited than pd.json_normalize() in that it can only correctly parse a JSON array of one nesting level. Unlike pd.json_normalize(), it deserializes the JSON string under the hood, so you can pass it the path to a JSON file directly (no need for json.load()). In other words, the following two produce the same output:

df1 = pd.read_json("my_data.json") 
df2 = pd.json_normalize(data, max_level=0)  # here, `data` is deserialized `my_data.json`
df1.equals(df2)  # True