It seems to me that you want to output the two values (VolumeId and Tags[].Value) on the same line?
If that's the case, then a simple string concatenation should be enough:
$ jq -r '.Volumes[] | .VolumeId + " " + .Tags[].Value' volumes.json
vol-00112233 vol-rescue-system
vol-00112234 vol-rescue-swap
vol-00112235 vol-rescue-storage
The above can then be used in a pipeline with while-read:
jq -r '.Volumes[] | .VolumeId + " " + .Tags[].Value' volumes.json \
| while read -r volumeId tagValue; do
other_command "$volumeId" "$tagValue"
done
You should note that if there is more than one element in Tags, the output will contain one line per tag value. This can, however, be avoided by referring to the first element of Tags only: .Tags[0].Value
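A minimal sketch of that, using a hypothetical inline document in which one volume carries two tags; .Tags[0].Value keeps only the first:

```shell
# Hypothetical input: the volume has two tags, but only the first is emitted.
printf '%s\n' '{"Volumes":[{"VolumeId":"vol-00112233","Tags":[{"Value":"vol-rescue-system"},{"Value":"backup"}]}]}' \
| jq -r '.Volumes[] | .VolumeId + " " + .Tags[0].Value'
# → vol-00112233 vol-rescue-system
```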
As @andlrc observed, you may need to decide what you really want in the event that any Tags array has more or fewer than one element. Assuming you want Tags[0] in all cases, I would recommend considering the use of @tsv as follows:
jq -r '.Volumes[] | [.VolumeId, .Tags[0].Value] | @tsv' volumes.json
This would be especially appropriate if any of the .VolumeId or .Tags[0].Value values contained spaces, tabs, newlines, etc. The point is that @tsv will handle these in a standard way, so that handling the pair of values can be done in a standard way as well. E.g. using awk, you could read in the pair with awk -F\\t; using bash, IFS=$'\t', etc.
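A minimal sketch in bash, using hypothetical inline input in which a tag value contains a space; because only tabs separate the fields, the value survives the round trip intact:

```shell
# Hypothetical input; IFS is set to a tab so the space inside the tag
# value is not treated as a field separator (bash syntax).
printf '%s\n' '{"Volumes":[{"VolumeId":"vol-1","Tags":[{"Value":"rescue system"}]}]}' \
| jq -r '.Volumes[] | [.VolumeId, .Tags[0].Value] | @tsv' \
| while IFS=$'\t' read -r volumeId tagValue; do
    printf 'id=%s tag=%s\n' "$volumeId" "$tagValue"
  done
# → id=vol-1 tag=rescue system
```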
$ jq -r '[ .[].list1[] ] | join(" ")' file
val1 val2 val3 val4 val5 val6
Create a new array with all the elements of each list1 array from each top-level key. Then, join its elements with spaces. This would give you the values in the order they occur in the input file.
An alternative (and arguably neater) approach is with map(.list1) which returns an array of arrays that you may flatten and join up:
$ jq -r 'map(.list1) | flatten | join(" ")' file
val1 val2 val3 val4 val5 val6
Your attempt generates one joined string per top-level key, because .list is bound to each of the list1 arrays in turn. Your approach would work if you encapsulated everything up to the last pipe symbol in [ ... ] (and expanded .list with .list[]) to generate a single array that you then join. That is what my first approach above does; it just uses a slightly shorter expression to generate the elements of that array.
$ jq -r '[ to_entries[] | { list: .value.list1 } | .list[] ] | join(" ")' file
val1 val2 val3 val4 val5 val6
Using Raku (formerly known as Perl_6)
~$ raku -MJSON::Tiny -e 'my %hash = from-json($_) given lines;
my @a = %hash.values.map({ $_.values if $_{"list1"} });
.say for @a.sort.join(" ");' file
OR:
~$ raku -MJSON::Tiny -e 'my %hash = from-json($_) given lines;
for %hash.values.sort() { print .values.sort ~ " " if $_{"list1"} };
put "";' file
Raku is a programming language in the Perl-family that provides high-level support for Unicode. Like Perl, Raku has associative arrays (hashes and/or maps) built-in. The above code is admittedly rather verbose (first example), but you should be able to get the flavor of the language from both examples above:
- Raku's community-supported JSON::Tiny module is loaded at the command line,
- all lines are given as one data element to the from-json function, which decodes the input and stores it in %hash,
- first example: using a map, the values of the hash are searched for "list1" keys; if found, the associated values are stored in the @a array, which is then sorted, joined with spaces and printed,
- second example: %hash is iterated over with for; each value is checked for a "list1" key, and if found the associated values are printed followed by a space. A final put call adds a newline.
Sample Input (includes bogus "list2" elements)
{
"key1": {
"list1": [
"val1",
"val2",
"val3"
]
},
"key2": {
"list1": [
"val4",
"val5"
]
},
"key3": {
"list1": [
"val6"
]
},
"key4": {
"list2": [
"val7"
]
}
}
Sample Output:
val1 val2 val3 val4 val5 val6
Finally, in any programming solution it is often instructive to look at intermediate data-structures. So here's what the %hash looks like after decoding JSON input:
~$ raku -MJSON::Tiny -e 'my %hash = from-json($_) given lines; .say for %hash.sort;' file
key1 => {list1 => [val1 val2 val3]}
key2 => {list1 => [val4 val5]}
key3 => {list1 => [val6]}
key4 => {list2 => [val7]}
https://raku.land/cpan:MORITZ/JSON::Tiny
https://docs.raku.org/language/hashmap
https://raku.org
How do I select multiple keys for output?
I recommend using String Interpolation:
jq '.users[] | "\(.first) \(.last)"'
We pipe the result of .users[] into the interpolated string "\(.first) \(.last)"; the \(foo) syntax is used for string interpolation in jq. For the example above, this produces one line per user, e.g. "Stevie Wonder" and "Michael Jackson".
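A quick sketch on hypothetical inline input shaped like the .users array discussed above:

```shell
# Hypothetical input; -r strips the surrounding quotes from each result.
printf '%s\n' '{"users":[{"first":"Stevie","last":"Wonder"},{"first":"Michael","last":"Jackson"}]}' \
| jq -r '.users[] | "\(.first) \(.last)"'
# → Stevie Wonder
# → Michael Jackson
```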
jq reference: String interpolation
You can use addition to concatenate strings.
Strings are added by being joined into a larger string.
jq '.users[] | .first + " " + .last'
The above works when both first and last are strings. If you are extracting mixed datatypes (number and string), the non-string values need to be converted with tostring first, as in the solution to this question. For example:
jq '.users[] | .first + " " + (.number|tostring)'
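A sketch on hypothetical input where the second field is numeric; without tostring, jq would refuse to add a string and a number:

```shell
# Hypothetical input: .number is numeric, so tostring converts it
# before the string addition.
printf '%s\n' '{"users":[{"first":"Stevie","number":7}]}' \
| jq -r '.users[] | .first + " " + (.number|tostring)'
# → Stevie 7
```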
To change one entry, make sure that the left-hand side of the assignment operator is a path in the original document:
jq --arg name John --arg phone 4321 \
'( .contacts[] | select(.name == $name) ).phone = $phone' file
You can't use .contacts[] | select(.name == "John") | .phone |= ... since the select() extracts a set of elements from the contacts array. You would therefore only change the elements you extract, separately from the main part of the document.
Notice the difference in
( ... | select(...) ).phone = ...
^^^^^^^^^^^^^^^^^^^^^
path in original document
which works, and
... | select(...) | .phone = ...
^^^^^^^^^^^
extracted bits
which doesn't work.
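A self-contained demo with a minimal hypothetical contacts document; the path form updates John in place while the rest of the document is passed through unchanged:

```shell
# Hypothetical contacts document; -c prints the result compactly.
printf '%s\n' '{"contacts":[{"name":"John","phone":"1111"},{"name":"Jane","phone":"2222"}]}' \
| jq -c --arg name John --arg phone 4321 \
     '( .contacts[] | select(.name == $name) ).phone = $phone'
# → {"contacts":[{"name":"John","phone":"4321"},{"name":"Jane","phone":"2222"}]}
```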
Using a loop for more than one entry, assuming e.g. bash:
names=( John Jane )
phones=( 4321 4321 )
tmpfile=$(mktemp)
for i in "${!names[@]}"; do
name=${names[i]}
phone=${phones[i]}
jq --arg name "$name" --arg phone "$phone" \
'( .contacts[] | select(.name == $name) ).phone = $phone' file >"$tmpfile"
mv -- "$tmpfile" file
done
That is, I put the names in one array and the new numbers in another, then loop over the indexes and update file for each entry that needs changing, using a temporary file as intermediate storage.
Or, with an associative array:
declare -A lookup
lookup=( [John]=4321 [Jane]=4321 )
for name in "${!lookup[@]}"; do
phone=${lookup[$name]}
# jq as above
done
Assuming you have some JSON input document with the new phone numbers, such as
{
"John": 1234,
"Jane": 5678
}
which you can create using
jo John=1234 Jane=5678
Then you can update the numbers in a single jq invocation:
jo John=1234 Jane=5678 |
jq --slurpfile new /dev/stdin \
'.contacts |= map(.phone = ($new[][.name] // .phone))' file
This reads our input JSON with the new numbers in a structure, $new, that looks like
[
{
"John": 1234,
"Jane": 5678
}
]
This is used in the map() call to change the phone numbers of any contact that is listed. The // .phone makes sure that if the name isn't listed, the phone number stays the same.
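A self-contained sketch of the whole pipeline, using printf in place of jo in case the latter is not installed, and a hypothetical contacts.json that includes a contact (Pat) who is not in the update list:

```shell
# Hypothetical contacts document written to a file first.
cat > contacts.json <<'EOF'
{"contacts":[{"name":"John","phone":"1111"},{"name":"Jane","phone":"2222"},{"name":"Pat","phone":"3333"}]}
EOF
# The new numbers arrive on stdin and are slurped into $new;
# Pat is not listed, so // .phone keeps the old value.
printf '%s\n' '{"John":1234,"Jane":5678}' \
| jq -c --slurpfile new /dev/stdin \
     '.contacts |= map(.phone = ($new[][.name] // .phone))' contacts.json
# → {"contacts":[{"name":"John","phone":1234},{"name":"Jane","phone":5678},{"name":"Pat","phone":"3333"}]}
```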
Based on Kusalananda's answer, if you only want to search and replace two values you can do something like this in one jq invocation:
jq '( .contacts[] | select(.name == "John") ).phone |= "4321" |
( .contacts[] | select(.name == "Jane") ).phone |= "8765"' \
contacts.json
Or this way chaining 2 jq invocations:
jq '( .contacts[] | select(.name == "John") ).phone |= "4321"' contacts.json \
| jq '( .contacts[] | select(.name == "Jane") ).phone |= "8765"'
You want to run a .context,.score filter on each element of v I think:
$ jq -r '.[] | [.c, .e, .score, (.v[] | .context,.score)] | @csv' file.json
"A","B",0.99,"asdf",0.98,"bcdfd",0.97
This is equivalent to using the builtin map function without assembling the results back into an array.
The following creates a JSON-encoded CSV record for each top-level array element, and then extracts and decodes them. For each of the top-level elements, the values of the sub-array are incorporated by "flattening" the array.
jq -r 'map([ .c,.e,.score, (.v|map([.context, .score])) ] | flatten | @csv)[]' file
Given a test document equivalent of the following:
[
{
"c": "A",
"e": "B",
"score": 0.99,
"v": [
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "bcdfd", "score": 0.97, "url": "..." }
]
},
{
"c": "A",
"e": "B",
"score": 0.99,
"v": [
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "bcdfd", "score": 0.97, "url": "..." }
]
},
{
"c": "A",
"e": "B",
"score": 0.99,
"v": [
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "bcdfd", "score": 0.97, "url": "..." }
]
}
]
... we get
"A","B",0.99,"asdf",0.98,"bcdfd",0.97
"A","B",0.99,"asdf",0.98,"asdf",0.98,"bcdfd",0.97
"A","B",0.99,"asdf",0.98,"asdf",0.98,"asdf",0.98,"bcdfd",0.97
One could also reorder the operations so that a single use of the @csv operator gets a set of arrays (rather than repeatedly using @csv on single arrays):
jq -r 'map([ .c,.e,.score, (.v|map([.context, .score])) ] | flatten)[]|@csv' file
Given the following JSON, what is the best way to extract the phone numbers, whether inside an object or an array of objects?
{
"phones": {
"Alex Baker": { "location": "mobile", "number": "+14157459038" },
"Bob Clarke": [
{ "location": "mobile", "number": "+12135637813" },
{ "location": "office", "number": "+13104443200" }
],
"Carl Davies": [
{ "location": "office", "number": "+14083078372" },
{ "location": "lab", "number": "+15102340052" }
],
"Drew Easton": { "location": "office", "number": "+18057459038" }
}
}

I'm using the following query, but I wonder if there's a better way to do this:
$ cat phones.json | jq '.phones | to_entries | [ .[].value | objects | .number ] + [ .[].value | arrays | .[].number ]'
[ "+14157459038", "+18057459038", "+12135637813", "+13104443200", "+14083078372", "+15102340052" ]
Any suggestions will be appreciated, thanks!