You can put your JSON data in a variable:
data = {"sensors":
{"-KqYN_VeXCh8CZQFRusI":
{"bathroom_temp": 16,
"date": "02/08/2017",
"fridge_level": 8,
"kitchen_temp": 18,
"living_temp": 17,
"power_bathroom": 0,
"power_bathroom_value": 0,
"power_kit_0": 0
},
"-KqYPPffaTpft7B72Ow9":
{"bathroom_temp": 20,
"date": "02/08/2017",
"fridge_level": 19,
"kitchen_temp": 14,
"living_temp": 20,
"power_bathroom": 0,
"power_bathroom_value": 0
},
"-KqYPUld3AOve8hnpnOy":
{"bathroom_temp": 23,
"date": "02/08/2017",
"fridge_level": 40,
"kitchen_temp": 11,
"living_temp": 10,
"power_bathroom": 1,
"power_bathroom_value": 81,
}
}
}
and then use nested indexing to get the desired parameter:
kitchen_temp = data["sensors"]["-KqYN_VeXCh8CZQFRusI"]["kitchen_temp"]
print(kitchen_temp)
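If a key might be missing (for example, a sensor record without a reading), chained `dict.get` calls return `None` instead of raising a `KeyError`. A minimal sketch, using a trimmed-down copy of the `data` dict above; the loop also shows how to walk every record when the push IDs are not known in advance:

```python
# Trimmed-down sample in the same shape as the dict above.
data = {"sensors": {"-KqYN_VeXCh8CZQFRusI": {"kitchen_temp": 18}}}

# Chained .get() calls return None instead of raising on a missing key.
temp = data.get("sensors", {}).get("-KqYN_VeXCh8CZQFRusI", {}).get("kitchen_temp")
print(temp)  # 18

# Iterate over every sensor record when the push IDs are not known in advance.
for push_id, record in data["sensors"].items():
    print(push_id, record.get("kitchen_temp"))
```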
Answer from nima on Stack Overflow
Python is not my go-to, so sorry if this is a FAQ. I need to parse JSON down a few levels and pull out some relevant data to push into a database for analysis. I see examples using both the native json module and ijson, but none that parse more than the first level.
So here is the sample json, munged a bit:
{
"value1": "walterwhite",
"value2": "school teacher",
"services": [
{
"name": "first sub",
"slug": "ovp",
"vendors": [
{
"slug": "wistia",
"name": "Wistia"
}
],
"vendor_count": 1
},
{
"name": "second sub",
"slug": "hosting",
"vendors": [
{
"slug": "013-netvision",
"name": "013 Netvision"
},
{
"slug": "internap",
"name": "Internap Corp."
},
{
"slug": "eurofiber",
"name": "Eurofiber"
},
{
"slug": "register-com--2",
"name": "Register.com"
}
],
"vendor_count": 4
}
]
}
I want to return:
a) All "services.name" ie. "first sub"
b) All "vendors.name" (under services) ie. "wistia"
Many thanks!
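For reference, both lists can be pulled out with plain list comprehensions over the parsed document. A sketch assuming the sample JSON above has been loaded (e.g. via `json.loads`) into a variable, here called `doc`:

```python
import json

# Trimmed-down version of the sample document above.
doc = json.loads("""
{
  "services": [
    {"name": "first sub",
     "vendors": [{"slug": "wistia", "name": "Wistia"}]},
    {"name": "second sub",
     "vendors": [{"slug": "internap", "name": "Internap Corp."}]}
  ]
}
""")

# a) all "services.name"
service_names = [s["name"] for s in doc["services"]]

# b) all "vendors.name" under services (nested comprehension flattens the lists)
vendor_names = [v["name"] for s in doc["services"] for v in s["vendors"]]

print(service_names)  # ['first sub', 'second sub']
print(vendor_names)   # ['Wistia', 'Internap Corp.']
```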
hey so I know this is kind of weird to point out, but it looks like you're using Azure... so SQL Server
SQL Server has native JSON parsing functions. https://docs.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-2017
You might be able to save yourself a ton of headaches.
Hmm, I should account for this in my JSON library. Thanks for sharing your experience.
Use a nested dictionary comprehension to flatten the values and merge the subdictionaries, then pass the result to the DataFrame constructor:
json = {"attribute1": "test1",
"attribute2": "test2",
"data": {
"0":
{"metadata": {
"timestamp": "2022-08-14"},
"detections": {
"0": {"dim1": 40, "dim2": 30},
"1": {"dim1": 50, "dim2": 20}}},
"1":
{"metadata": {
"timestamp": "2022-08-15"},
"detections": {
"0": {"dim1": 30, "dim2": 10},
"1": {"dim1": 100, "dim2": 80}}}}}
import pandas as pd

L = [{**x['metadata'], **y} for x in json['data'].values()
     for y in x['detections'].values()]
df = pd.DataFrame(L)
print(df)
timestamp dim1 dim2
0 2022-08-14 40 30
1 2022-08-14 50 20
2 2022-08-15 30 10
3 2022-08-15 100 80
I think a good solution could be (reusing the `json` dict from the answer above):
data = [dict(d1, **{'detections': list(d1['detections'].values())})
        for d1 in json['data'].values()]
# data = list(map(lambda d1: dict(d1,
#                 **{'detections': list(d1['detections'].values())}),
#                 json['data'].values()))
print(data)
df = (pd.json_normalize(data, 'detections', [['metadata', 'timestamp']])
        .rename({'metadata.timestamp': 'timestamp'}, axis=1))
print(df)
#[{'metadata': {'timestamp': '2022-08-14'}, 'detections': [{'dim1': 40, 'dim2': 30}, {'dim1': 50, 'dim2': 20}]}, {'metadata': {'timestamp': '2022-08-15'}, 'detections': [{'dim1': 30, 'dim2': 10}, {'dim1': 100, 'dim2': 80}]}]
# dim1 dim2 timestamp
#0 40 30 2022-08-14
#1 50 20 2022-08-14
#2 30 10 2022-08-15
#3 100 80 2022-08-15