If I understand your question correctly, I think this will solve it:
import jsonlines

with jsonlines.open('yourTextFile', mode='a') as writer:
    writer.write(...)
As you mentioned, you are overwriting the file; I think this is because you use mode='w' (w = write) instead of mode='a' (a = append).
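The same write-vs-append distinction can be shown with the standard library alone. A minimal sketch using only json, where each call adds one line instead of truncating the file (the file name demo.jsonl is just a placeholder):

```python
import json
import os
import tempfile

def append_jsonl(path, obj):
    # mode 'a' appends one line; mode 'w' would truncate the file first
    with open(path, "a") as f:
        f.write(json.dumps(obj) + "\n")

path = os.path.join(tempfile.gettempdir(), "demo.jsonl")
open(path, "w").close()            # start from an empty file
append_jsonl(path, {"id": 1})
append_jsonl(path, {"id": 2})      # second call keeps the first line

with open(path) as f:
    lines = [json.loads(line) for line in f]
# lines now holds both records, in write order
```

If the second append had used mode='w', only {"id": 2} would survive.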
» pip install jsonlines
None of the solutions using the jsonlines module worked for me. However, I found a different module that does exactly what I need: the JsonLine class from the jsonline library. Here it is!
from jsonline import JsonLine

# dummy JSON data
data = [
    {
        "name": "xxxxxx",
        "value": "xxxxxxx",
        "domain": ".xxxx.com",
        "path": "/",
        "expires": 9999999,
        "httpOnly": False,
        "secure": True,
        "sameSite": "xx"
    },
    {
        "name": "xx",
        "value": "xxxx",
        "domain": ".xxxx.com",
        "path": "/",
        "expires": 999999,
        "httpOnly": True,
        "secure": True,
        "sameSite": "xx"
    }
]

jsondata = JsonLine('yourTextFile')
jsondata.append(data)
jsondata.append(data)

# To print the jsonline file
for item in jsondata:
    print(item)
Hope this helps!
import jsonlines

with jsonlines.open('example.jsonl', 'r') as jsonl_f:
    lst = [obj for obj in jsonl_f]
Here jsonl_f is the reader object and can be iterated directly; each item it yields is one parsed line of the JSONL file.
Simply:
import jsonlines

with jsonlines.open("json_file.json") as file:
    data = list(file.iter())
Hello, as the title says, I have thousands to millions of JSON lines spread over multiple JSONL-format files.
What would be an efficient approach to process those lines and store them in a temporary object that I want to index into some application?
Currently I'm thinking of processing N lines per batch, indexing them, and repeating until the last line of the last file.
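That N-lines-per-batch approach can be sketched with the standard library. The names iter_jsonl and batches, the file names, and the commented-out index_batch call are illustrative placeholders, not part of any particular indexing API:

```python
import json
from itertools import islice

def iter_jsonl(paths):
    # Stream parsed records one at a time across all JSONL files,
    # so no file is ever loaded into memory whole
    for path in paths:
        with open(path) as f:
            for line in f:
                if line.strip():
                    yield json.loads(line)

def batches(records, n):
    # Group the record stream into lists of up to n items
    it = iter(records)
    while True:
        batch = list(islice(it, n))
        if not batch:
            return
        yield batch

# for batch in batches(iter_jsonl(["file1.jsonl", "file2.jsonl"]), 1000):
#     index_batch(batch)   # hypothetical indexing call
```

Because both pieces are generators, memory use stays bounded by the batch size regardless of how many files or lines there are.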