In jq 1.3 and up you can use the --arg VARIABLE VALUE command-line option:
jq -n --arg v "$VAR" '{"foo": $v}'
That is, --arg sets a variable to the given value so you can then use $varname in your jq program, which means you no longer have to interpolate shell variables into the jq program text.
EDIT: From jq 1.5 and up, you can use --argjson to pass in an array directly, e.g.
jq -n --argjson v '[1,2,3]' '{"foo": $v}'
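To illustrate the difference between the two options (a quick sketch with a placeholder value): --arg always passes the value as a string, while --argjson parses it as JSON first.

```shell
# --arg passes the value as a string; --argjson parses it as JSON.
VAL='[1,2,3]'
jq -cn --arg v "$VAL" '{"foo": $v}'      # {"foo":"[1,2,3]"}  (a string)
jq -cn --argjson v "$VAL" '{"foo": $v}'  # {"foo":[1,2,3]}    (an array)
```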
(The above is the answer from user2259432 on Stack Overflow.)
Once you have your variable loaded, you should use the split filter to split that string into an array.
$ jq -n --arg inarr "${ARR}" '{ arr: $inarr | split("\n") }'
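For example, with a hypothetical newline-separated value in ARR, split("\n") turns the string into a JSON array:

```shell
# ARR holds newline-separated items; split("\n") makes them array elements.
ARR=$'one\ntwo\nthree'
jq -cn --arg inarr "$ARR" '{ arr: $inarr | split("\n") }'
# {"arr":["one","two","three"]}
```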
This trick with the jq 1.5 inputs streaming filter seems to do it
... | jq -n '.items |= [inputs]'
Ex.
$ find ~/ -maxdepth 1 -name "D*" |
while read line; do
jq -n --arg name "$(basename "$line")" \
--arg path "$line" \
'{name: $name, path: $path}'
done | jq -n '.items |= [inputs]'
{
"items": [
{
"name": "Downloads",
"path": "/home/steeldriver/Downloads"
},
{
"name": "Desktop",
"path": "/home/steeldriver/Desktop"
},
{
"name": "Documents",
"path": "/home/steeldriver/Documents"
}
]
}
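The collection step can also be sketched without find, feeding made-up objects on stdin, which makes the role of [inputs] clearer:

```shell
# Each line on stdin is one JSON object; [inputs] collects them all
# into an array, which |= assigns to .items of a fresh (null) input.
printf '%s\n' '{"a":1}' '{"a":2}' | jq -cn '.items |= [inputs]'
# {"items":[{"a":1},{"a":2}]}
```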
Calling jq directly from find, and then collecting the resulting data with jq to construct the final output, without any shell loops:
find ~ -maxdepth 1 -name '[[:upper:]]*' \
-exec jq -n --arg path {} '{ name: ($path|sub(".*/"; "")), path: $path }' \; |
jq -n -s '{ items: inputs }'
The jq that is being executed via -exec creates a JSON object per found pathname. It strips off everything in the pathname up to the last slash for the name value, and uses the pathname as is for the path value.
The final jq reads the data from find into an array with -s, and simply inserts it as the items array in a new JSON object. It could also be written jq -n '{ items: [inputs] }'.
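That equivalence can be checked quickly with arbitrary sample values:

```shell
# With -n -s, inputs yields the one slurped array; [inputs] collects
# the raw inputs into an array itself. Both produce the same object.
a=$(printf '%s\n' 1 2 3 | jq -cn -s '{ items: inputs }')
b=$(printf '%s\n' 1 2 3 | jq -cn '{ items: [inputs] }')
printf '%s\n%s\n' "$a" "$b"
# {"items":[1,2,3]}
# {"items":[1,2,3]}
```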
Example result (note that I used [[:upper:]]* in place of D* for the -name pattern with find):
{
"items": [
{
"name": "Documents",
"path": "/home/myself/Documents"
},
{
"name": "Mail",
"path": "/home/myself/Mail"
},
{
"name": "Work",
"path": "/home/myself/Work"
}
]
}
I'd do it in two steps (note that artifacts is an array of objects):
inner=$(jq -n --arg name oer \
--arg version "$ot" \
'$ARGS.named'
)
final=$(jq -n --arg configId "$configid" \
--arg objectname "tempfile" \
--arg test "2021" \
--argjson artifacts "[$inner]" \
'$ARGS.named'
)
echo "$final"
{
"configId": "c8f",
"objectname": "tempfile",
"artifacts": [
{
"name": "oer",
"version": "1.01"
}
], "test": "2021"
}
Add a -c if you want the final output to be one line.
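What makes this pattern so compact is $ARGS.named, which collects every --arg/--argjson name/value pair into one object. A minimal illustration with placeholder arguments:

```shell
# $ARGS.named is an object of all named arguments, in the order given.
# Note that --arg values are always strings, while --argjson is parsed.
jq -cn --arg a 1 --argjson n 2 '$ARGS.named'
# {"a":"1","n":2}
```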
The safest way to create JSON on the command line is through using a tool that constructs it for you as jq does. However, since you have a nested structure, you may want to create that sub-structure in a separate call to jq as is shown by glenn jackman.
Another way to do what they show, but in one go:
jq -n \
--arg configId "$configid" \
--arg objectname tempfile \
--argjson artifacts "$(
jq -n \
--arg name oer \
--arg version "$ot" \
'$ARGS.named'
)" \
--arg test 2021 \
'$ARGS.named'
We use --argjson rather than --arg to include the JSON document from the inner jq, as it is a JSON document and not a string that needs encoding.
You could also use jo, which may reduce typing a bit,
jo configId="$configid" \
objectname=tempfile \
artifacts="$( jo name=oer version="$ot" )" \
test=2021
A description of the jo utility is found here: https://jpmens.net/2016/03/05/a-shell-command-to-create-json-jo/
Pass your strings using --arg, then you can create the JSON as expected:
#!/bin/bash
server1_name='server-1'
server1_url='http://server-1.net'
server2_name='server-2'
server2_url='http://server-2.net'
result=$(jq -n \
--arg name1 "$server1_name" \
--arg url1 "$server1_url" \
--arg name2 "$server2_name" \
--arg url2 "$server2_url" \
'[ { "name": $name1, "url": $url1 }, { "name": $name2, "url": $url2 } ]')
echo "$result"
Will produce:
[
{
"name": "server-1",
"url": "http://server-1.net"
},
{
"name": "server-2",
"url": "http://server-2.net"
}
]
You can construct two arrays of names and urls, then adapt this answer to "zip" the two arrays together into the desired array of objects.
jq -n \
--arg name1 "$server1_name" \
--arg url1 "$server1_url" \
--arg name2 "$server2_name" \
--arg url2 "$server2_url" \
'[$name1, $name2] as $names |
[$url1, $url2] as $urls |
[([$names, $urls] | transpose[]) as [$name, $url] |{$name, $url}]'
The benefit is that as the number of name/url pairs grows, you only need to modify the first two filters that define $names and $urls; the rest of the filter stays the same. You could even split this into separate jq invocations, to make defining a large list of servers easier.
names=$(jq -n --arg v1 "$server1_name" --arg v2 "$server2_name" '[$v1, $v2]')
urls=$(jq -n --arg v1 "$server1_url" --arg v2 "$server2_url" '[$v1, $v2]')
jq -n \
--argjson names "$names" \
--argjson urls "$urls" \
'[([$names, $urls] | transpose[]) as [$name, $url] | {$name, $url}]'
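A self-contained sketch of the zip step with placeholder values:

```shell
# transpose pairs up corresponding elements of the two arrays;
# {$name, $url} is shorthand for {"name": $name, "url": $url}.
jq -cn \
  --argjson names '["a","b"]' \
  --argjson urls '["u1","u2"]' \
  '[([$names, $urls] | transpose[]) as [$name, $url] | {$name, $url}]'
# [{"name":"a","url":"u1"},{"name":"b","url":"u2"}]
```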
jq can be fed actual JSON content through its --argjson flag. What you need to do is store the content of the first JSON file in a variable in jq's context and then append it to the array in the second JSON file:
jq --argjson groupInfo "$(<input.json)" '.[].groups += [$groupInfo]' orig.json
The part "$(<input.json)" is shell re-direction construct to output the contents of the file given and with the argument to --argjson it is stored in the variable groupInfo. Now you add it to the groups array in the actual filter part.
Put another way, the above solution is equivalent to doing this:
jq --argjson groupInfo '{"id": 9,"version": 0,"lastUpdTs": 1532371267968,"name": "Training" }' \
'.[].groups += [$groupInfo]' orig.json
This is the exact case that the input function is for:
input and inputs [...] read from the same sources (e.g., stdin, files named on the command-line) as jq itself. These two builtins, and jq’s own reading actions, can be interleaved with each other.
That is, jq reads an object/value in from the file and executes the pipeline on it, and anywhere input appears the next input is read in and is used as the result of the function.
That means you can do:
jq '.[].groups += [input]' orig.json input.json
with exactly the command you've written already, plus input as the value. The input expression will evaluate to the (first) object read from the next file in the argument list, in this case the entire contents of input.json.
If you have multiple items to insert you can use inputs instead with the same meaning. It will apply across a single or multiple files from the command line equally, and [inputs] represents all the file bodies as an array.
It's also possible to interleave things to process multiple orig files, each with one companion file inserted, but separating the outputs would be a hassle.
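A runnable sketch with minimal stand-in files (the real orig.json and input.json would hold your actual data):

```shell
# Stand-in files: orig.json is an array of objects, each with a "groups"
# array; input.json is the object to append to each of them.
dir=$(mktemp -d)
echo '[{"groups":[]}]' > "$dir/orig.json"
echo '{"id":9,"name":"Training"}' > "$dir/input.json"
# jq reads orig.json's array as its input; `input` pulls the next
# input, i.e. the object from input.json.
jq -c '.[].groups += [input]' "$dir/orig.json" "$dir/input.json"
# [{"groups":[{"id":9,"name":"Training"}]}]
rm -r "$dir"
```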