jq can deal with multiple input arrays. You can pipe the whole output of the loop to it:
for service in $services ; do    # left unquoted on purpose: $services is a whitespace-separated list
    curl "$service/path"
done | jq -r '.[]|[.id,.startDate,.calls]|@csv'
Note that the CSV conversion is handled by jq's @csv filter.
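For illustration, here is a self-contained sketch of the same pipeline, with printf standing in for the curl calls (the sample payloads are hypothetical):

```shell
# Two hypothetical service responses, each a JSON array of objects
printf '%s\n' \
  '[{"id":"123","startDate":"2016-12-09T00:00:00Z","calls":4}]' \
  '[{"id":"456","startDate":"2016-12-09T00:00:00Z","calls":22}]' |
jq -r '.[]|[.id,.startDate,.calls]|@csv'
# "123","2016-12-09T00:00:00Z",4
# "456","2016-12-09T00:00:00Z",22
```

jq is invoked once and simply processes each input array as it arrives on stdin.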
As @hek2mlg pointed out, it should be possible to invoke jq just once. If the input is sufficiently uniform (admittedly, maybe a big "if"), you could even avoid having to name the fields explicitly, e.g.:
$ for service in $services ; do
    curl "$service/path"
  done | jq -sr 'add[] | [.[]] | @csv'
Output:
"123","2016-12-09T00:00:00Z",4
"456","2016-12-09T00:00:00Z",22
"789","2016-12-09T00:00:00Z",8
"147","2016-12-09T00:00:00Z",10
Note that using -s allows you to perform arbitrary computations on all the inputs, e.g. counting them.
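For example, counting every object across all responses (a minimal sketch with hypothetical inline data in place of the curl output):

```shell
printf '%s\n' '[{"a":1}]' '[{"a":2},{"a":3}]' |
jq -s 'add | length'
# 3
```

With -s the two arrays are slurped into one array of arrays; add concatenates them and length counts the combined elements.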
Creating an array from objects?
This trick with the jq 1.5 inputs streaming filter seems to do it:
... | jq -n '.items |= [inputs]'
Ex.
$ find ~/ -maxdepth 1 -name "D*" |
  while read -r line; do
    jq -n --arg name "$(basename "$line")" \
          --arg path "$line" \
          '{name: $name, path: $path}'
  done | jq -n '.items |= [inputs]'
{
  "items": [
    {
      "name": "Downloads",
      "path": "/home/steeldriver/Downloads"
    },
    {
      "name": "Desktop",
      "path": "/home/steeldriver/Desktop"
    },
    {
      "name": "Documents",
      "path": "/home/steeldriver/Documents"
    }
  ]
}
Calling jq directly from find, and then collecting the resulting data with jq to construct the final output, without any shell loops:
find ~ -maxdepth 1 -name '[[:upper:]]*' \
-exec jq -n --arg path {} '{ name: ($path|sub(".*/"; "")), path: $path }' \; |
jq -n -s '{ items: inputs }'
The jq that is being executed via -exec creates a JSON object per found pathname. It strips off everything in the pathname up to the last slash for the name value, and uses the pathname as is for the path value.
The final jq reads the data from find into an array with -s and simply inserts it as the items array in a new JSON object. That final invocation could also be written jq -n '{ items: [inputs] }'.
Example result (note that I was using [[:upper:]]* in place of D* for the -name pattern with find):
{
  "items": [
    {
      "name": "Documents",
      "path": "/home/myself/Documents"
    },
    {
      "name": "Mail",
      "path": "/home/myself/Mail"
    },
    {
      "name": "Work",
      "path": "/home/myself/Work"
    }
  ]
}
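To see that the two final-stage invocations are indeed equivalent, here is a minimal sketch with printf standing in for the per-file jq output:

```shell
# jq -n -s: the slurped array is read whole by inputs
printf '%s\n' '{"name":"a"}' '{"name":"b"}' | jq -c -n -s '{ items: inputs }'
# {"items":[{"name":"a"},{"name":"b"}]}

# jq -n with [inputs]: each object is read individually, then collected
printf '%s\n' '{"name":"a"}' '{"name":"b"}' | jq -c -n '{ items: [inputs] }'
# {"items":[{"name":"a"},{"name":"b"}]}
```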
Reusing Glenn's test framework, but calling jq only once for the entire script:
list=( http://RESTURL1 http://RESTURL2 )
declare -A hypothetical_data=(
    [http://RESTURL1]='{"foo":"Tiger Nixon","bar":"Edinburgh"}'
    [http://RESTURL2]='{"foo":"Garrett Winters","bar":"Tokyo"}'
)
for url in "${list[@]}"; do
    echo "${hypothetical_data[$url]}"    # or curl "$url"
done | jq -n '{"data": [inputs | [.foo, .bar]]}'
#!/bin/bash
list=( http://RESTURL1 http://RESTURL2 )
declare -A hypothetical_data=(
    [http://RESTURL1]='{"foo":"Tiger Nixon","bar":"Edinburgh"}'
    [http://RESTURL2]='{"foo":"Garrett Winters","bar":"Tokyo"}'
)

# create the seed file
result="result.json"
echo '{"data":[]}' > "$result"

for url in "${list[@]}"; do
    # fetch the data
    json=${hypothetical_data[$url]}
    # would really do: json=$(curl "$url")

    # extract the name ("foo") and location ("bar") values
    name=$( jq -r '.foo' <<<"$json" )
    location=$( jq -r '.bar' <<<"$json" )

    jq --arg name "$name" \
       --arg loc "$location" \
       '.data += [[$name, $loc]]' "$result" | sponge "$result"
    # "sponge" is in the "moreutils" package that you may have to install.
    # You can also write that line as:
    #
    #     tmp=$(mktemp)
    #     jq --arg name "$name" \
    #        --arg loc "$location" \
    #        '.data += [[$name, $loc]]' "$result" > "$tmp" && mv "$tmp" "$result"
done
End result:
$ cat result.json
{
  "data": [
    [
      "Tiger Nixon",
      "Edinburgh"
    ],
    [
      "Garrett Winters",
      "Tokyo"
    ]
  ]
}
If you use jq, you can test whether the input is an empty list:
% echo '["a"]' | jq '. == []'
false
% echo '[]' | jq '. == []'
true
% echo '[]' | jq -e '. | length == 0'
true
% echo '["a"]' | jq -e '. | length == 0'
false
And you can use the -e option:
--exit-status / -e: Sets the exit status of jq to 0 if the last output value was neither false nor null, 1 if the last output value was either false or null, or 4 if no valid result was ever produced. Normally jq exits with 2 if there was any usage problem or system error, 3 if there was a jq program compile error, or 0 if the jq program ran.
So:
if curl --silent -H 'Authorization: token github_access_token' 'https://api.github.com/orgs/OrganizationName/repos?per_page=100' |
    jq -e '. == []'
then
    echo Empty output
else
    echo Got something
fi
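To see those exit statuses directly, a quick sketch:

```shell
echo '[]' | jq -e '. == []' ; echo "exit status: $?"
# true
# exit status: 0

echo '["a"]' | jq -e '. == []' ; echo "exit status: $?"
# false
# exit status: 1
```

The filter's last output (true or false) is what -e maps onto the exit status.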
You could test with the array length:
if [[ $(jq length <<<"$Response") -eq 0 ]]; then
    echo "Empty"
else
    echo "Not empty"
fi
jq can accept actual JSON content as an argument via its --argjson flag. What you need to do is store the contents of the first JSON file in a variable in jq's context and use it to update the second JSON file:
jq --argjson groupInfo "$(<input.json)" '.[].groups += [$groupInfo]' orig.json
The "$(<input.json)" part is a shell construct that expands to the contents of the given file; passed as the argument to --argjson, it is stored in the jq variable $groupInfo. The filter part then appends it to the groups array.
Put another way, the above solution is equivalent to doing this:
jq --argjson groupInfo '{"id": 9,"version": 0,"lastUpdTs": 1532371267968,"name": "Training" }' \
'.[].groups += [$groupInfo]' orig.json
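A self-contained reproduction of the idea (the file names and contents here are hypothetical stand-ins):

```shell
# Hypothetical stand-ins for orig.json and input.json
echo '[{"name":"g1","groups":[]}]' > orig.json
echo '{"id":9,"name":"Training"}'  > input.json

jq -c --argjson groupInfo "$(<input.json)" '.[].groups += [$groupInfo]' orig.json
# [{"name":"g1","groups":[{"id":9,"name":"Training"}]}]
```

Note that "$(<input.json)" is a bashism; in a plain POSIX shell use "$(cat input.json)" instead.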
This is the exact case that the input function is for:
input and inputs [...] read from the same sources (e.g., stdin, files named on the command-line) as jq itself. These two builtins, and jq’s own reading actions, can be interleaved with each other.
That is, jq reads an object/value in from the file and executes the pipeline on it, and anywhere input appears the next input is read in and is used as the result of the function.
That means you can do:
jq '.[].groups += [input]' orig.json input.json
with exactly the command you've written already, plus input as the value. The input expression will evaluate to the (first) object read from the next file in the argument list, in this case the entire contents of input.json.
If you have multiple items to insert you can use inputs instead with the same meaning. It will apply across a single or multiple files from the command line equally, and [inputs] represents all the file bodies as an array.
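For instance, with two companion files (names and contents here are hypothetical), a sketch:

```shell
# Hypothetical files: one original document plus two objects to insert
echo '[{"groups":[]}]' > orig.json
echo '{"id":1}' > a.json
echo '{"id":2}' > b.json

jq -c '.[].groups += [inputs]' orig.json a.json b.json
# [{"groups":[{"id":1},{"id":2}]}]
```

jq applies the filter to the first input (orig.json), and [inputs] gathers every remaining input into a single array.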
It's also possible to interleave things to process multiple orig files, each with one companion file inserted, but separating the outputs would be a hassle.