jq . file.json
is what I was looking for. I didn't realize that the . is a filter rather than a placeholder for the piped-in content:
The absolute simplest (and least interesting) filter is "." . This is a filter that takes its input and produces it unchanged as output.
And the man page makes it clear that the filter is a required argument.
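To see the identity filter in isolation (a minimal sketch, assuming jq is installed):

```shell
# "." passes the input through unchanged; jq's default
# output formatting does the pretty-printing.
echo '{"a":1,"b":[2,3]}' | jq .
```

With no other options, this prints the same document, just re-indented across multiple lines.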
Answer from k0pernikus on Stack Exchange:
No, you cannot process a file with jq and have it output the result to the original file.
You could use a temporary file like so:
cp file.json file.json.tmp &&
jq . file.json.tmp >file.json &&
rm file.json.tmp
This order of operations also retains the original file's metadata. Since each step depends on the successful completion of the previous step (due to &&), you will not lose the document if, for example, jq fails to run.
You may use a tool such as GNU sponge (part of the moreutils package) to hide the manual labour of handling a temporary file:
jq . file.json | sponge file.json
Note that this is still using a temporary file behind the scenes.
Out of these two variants, only the first set of three commands protects you from data loss in case your partition suddenly becomes full or jq fails to execute properly (due to being unavailable or because of an error in the input document).
json-beautify-inplace () {
    local temp
    temp=$(mktemp)
    printf 'input = %s\n' "$1"
    printf 'temp = %s\n' "$temp"
    # Copy first; overwrite the original only if jq succeeds,
    # and keep the temporary copy around on failure.
    cp -- "$1" "$temp" &&
        jq . "$temp" > "$1" &&
        rm -- "$temp"
}

json-uglify-inplace () {
    local temp
    temp=$(mktemp)
    printf 'input = %s\n' "$1"
    printf 'temp = %s\n' "$temp"
    cp -- "$1" "$temp" &&
        jq -r tostring "$temp" > "$1" &&
        rm -- "$temp"
}
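With both functions loaded in the current shell, a round trip might look like this (demo.json is just an example name):

```shell
# Pretty-print a small file in place, then collapse it
# back to a single line.
printf '{"name":"George","id":12}' > demo.json
json-beautify-inplace demo.json
cat demo.json    # now indented over several lines
json-uglify-inplace demo.json
cat demo.json    # back on a single line
```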
Format json using jq
json - jq to replace text directly on file (like sed -i) - Stack Overflow
bash - How to format a JSON string as a table using jq? - Stack Overflow
Editing json file in line with jq - Stack Overflow
Hey all,
I have this mapping, which worked for a long time, and recently stopped working for me:
nnoremap <leader>=j :%!jq --tab .<cr>:%s/\r<cr>
This sends my file to jq, uses tabs for indent and formats the file, and then I remove `\r` throughout the file.
The problem I'm now having is I get the following error when it's run:
:%!jq --tab .
shell returned 1
:%s/\r
When I press enter, I get the shell error:
Start-Process: A positional parameter cannot be found that accepts argument '.'.
I run Neovim 0.7.2 on Windows, and I've set my Neovim shell to PowerShell.
This used to work, and I've just verified that it still works in Neovim 0.7.0.
Does anyone know of anything that changed that I need to take into account in my mapping? Or do you think that they somehow broke something in 0.7.2?
Cheers,
This post addresses the question about the absence of the equivalent of sed's "-i" option, and in particular the situation described:
I have a bunch of files and writing each one to a separate file wouldn't be easy.
There are several options, at least if you are working on macOS, Linux, or a similar environment. Their pros and cons are discussed at http://backreference.org/2011/01/29/in-place-editing-of-files/ so I'll focus on just three techniques:
One is simply to use "&&" along the lines of:
jq ... INPUT > INPUT.tmp && mv INPUT.tmp INPUT
Another is to use the sponge utility (part of moreutils):
jq ... INPUT | sponge INPUT
The third option might be useful when you want to leave a file untouched if there are no changes to make. Here is a script which illustrates such a function:
#!/bin/bash
# Replace "$f" with "$f.tmp" only if the two differ;
# otherwise discard the temporary and leave "$f" untouched.
function maybeupdate {
    local f="$1"
    if cmp -s "$f" "$f.tmp" ; then
        /bin/rm "$f.tmp"
    else
        /bin/mv "$f.tmp" "$f"
    fi
}

for f
do
    # Only consider updating if jq succeeded, so a failed run
    # cannot clobber the original with a partial file.
    jq . "$f" > "$f.tmp" && maybeupdate "$f"
done
Instead of sponge:
cat <<< $(jq 'QUERY' sample.json) > sample.json
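This works because the command substitution runs to completion, with jq reading the whole file, before the redirection truncates sample.json. A minimal sketch (jq -S here just sorts keys so the effect is visible):

```shell
printf '{"b":2,"a":1}\n' > sample.json
# jq consumes sample.json inside $(...) first; only then
# does the shell truncate sample.json for the > redirection.
cat <<< $(jq -S . sample.json) > sample.json
cat sample.json
```

Note that command substitution strips trailing newlines (the here-string adds one back), so this is best reserved for quick interactive use.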
Using the @tsv filter has much to recommend it, mainly because it handles numerous "edge cases" in a standard way:
.[] | [.id, .name] | @tsv
Adding the headers can be done like so:
jq -r '["ID","NAME"], ["--","------"], (.[] | [.id, .name]) | @tsv'
The result:
ID NAME
-- ------
12 George
18 Jack
19 Joe
As pointed out by @Tobia, you might want to format the table for viewing by using column to post-process the result produced by jq. If you are using a bash-like shell then column -ts $'\t' should be quite portable.
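For example, with the input inlined for illustration, the post-processing pipeline looks like this:

```shell
# column -t aligns fields into a table; -s $'\t' makes it
# split on tabs only, so values containing spaces survive.
echo '[{"id":12,"name":"George"},{"id":18,"name":"Jack"}]' |
  jq -r '.[] | [.id, .name] | @tsv' |
  column -ts $'\t'
```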
To automate the production of the line of dashes, you can use jq's string multiplication: length*"-" repeats "-" once for each character of the header.
jq -r '(["ID","NAME"] | (., map(length*"-"))), (.[] | [.id, .name]) | @tsv'
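Run against a sample array like the one above, the whole pipeline is:

```shell
# Emits the header row, a dash row sized to each header,
# then one row per element, all joined by @tsv.
echo '[{"id":12,"name":"George"},{"id":18,"name":"Jack"}]' |
  jq -r '(["ID","NAME"] | (., map(length*"-"))), (.[] | [.id, .name]) | @tsv'
```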
Why not something like:
echo '[{
"name": "George",
"id": 12,
"email": "[email protected]"
}, {
"name": "Jack",
"id": 18,
"email": "[email protected]"
}, {
"name": "Joe",
"id": 19,
"email": "[email protected]"
}]' | jq -r '.[] | "\(.id)\t\(.name)"'
Output
12 George
18 Jack
19 Joe
Edit 1: For fine-grained formatting, use tools like awk:
echo '[{
"name": "George",
"id": 12,
"email": "[email protected]"
}, {
"name": "Jack",
"id": 18,
"email": "[email protected]"
}, {
"name": "Joe",
"id": 19,
"email": "[email protected]"
}]' | jq -r '.[] | [.id, .name] | @csv' | awk -v FS="," 'BEGIN{print "ID\tName";print "============"}{printf "%s\t%s%s",$1,$2,ORS}'
ID Name
============
12 "George"
18 "Jack"
19 "Joe"
Edit 2: In reply to
There's no way I can get a variable containing an array straight from jq?
Why not?
A slightly more involved example (in fact modified from yours), where email is changed to an array, demonstrates this:
echo '[{
"name": "George",
"id": 20,
"email": [ "[email protected]" , "[email protected]" ]
}, {
"name": "Jack",
"id": 18,
"email": [ "[email protected]" , "[email protected]" ]
}, {
"name": "Joe",
"id": 19,
"email": [ "[email protected]" ]
}]' | jq -r '.[] | .email'
Output
[
"[email protected]",
"[email protected]"
]
[
"[email protected]",
"[email protected]"
]
[
"[email protected]"
]
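And if the goal is a shell variable, jq's output can be read straight into a bash array; a sketch (variable names are illustrative, addresses kept as the redacted placeholders above):

```shell
json='[{"email":["[email protected]","[email protected]"]},{"email":["[email protected]"]}]'
# mapfile turns each line of jq's raw output into one
# element of the bash array "emails".
mapfile -t emails < <(jq -r '.[] | .email[]' <<< "$json")
echo "${#emails[@]} addresses"   # 3 addresses
```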
I would replace the whole properties field:
jq '(.arrays[]| select(.name == "foo")).properties |= [{
"type" : "bar",
"file" : "filename"
}]' test.json
You can try it here.
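For reference, the filter implies that test.json looks roughly like the following (reconstructed from the filter, so the actual original may differ):

```shell
# Hypothetical input matching the structure the filter expects.
cat > test.json <<'EOF'
{
  "arrays": [
    { "name": "foo",
      "properties": [ { "type": "old", "file": "old.txt" } ] }
  ]
}
EOF
jq '(.arrays[]| select(.name == "foo")).properties |= [{
  "type" : "bar",
  "file" : "filename"
}]' test.json
```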
.arrays[].properties |= map({ type: "FOOBAR", file: "someFile" })
Will result in:
{
"arrays": [
{
"name": "foo",
"properties": [
{
"type": "FOOBAR",
"file": "someFile"
}
]
}
]
}