It seems to me that you want to output the two values (VolumeId and Tags[].Value) on the same line?
If that's the case, then a simple string concatenation should be enough:
$ jq -r '.Volumes[] | .VolumeId + " " + .Tags[].Value' volumes.json
vol-00112233 vol-rescue-system
vol-00112234 vol-rescue-swap
vol-00112235 vol-rescue-storage
The above can then be used in a pipeline with while-read:
jq -r '.Volumes[] | .VolumeId + " " + .Tags[].Value' volumes.json \
| while read -r volumeId tagValue; do
other_command "$volumeId" "$tagValue"
done
You should note that if there is more than one element in Tags, the result will reflect that: you get one output line per tag. This can however be avoided by referring to the first element in Tags only: .Tags[0].Value
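A minimal sketch of the difference, using a hypothetical volumes.json where one volume carries two tags (the file contents and the second tag value are made up for illustration):

```shell
# Hypothetical input: one volume with two tags.
cat > volumes.json <<'EOF'
{"Volumes":[{"VolumeId":"vol-00112233","Tags":[{"Value":"vol-rescue-system"},{"Value":"extra-tag"}]}]}
EOF

# .Tags[].Value emits one output line per tag ...
jq -r '.Volumes[] | .VolumeId + " " + .Tags[].Value' volumes.json

# ... while .Tags[0].Value keeps exactly one line per volume:
jq -r '.Volumes[] | .VolumeId + " " + .Tags[0].Value' volumes.json
```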
As @andlrc observed, you may need to decide what you really want in the event that any Tags array has more or less than one element. Assuming you want Tags[0] in all cases, I would recommend considering the use of @tsv as follows:
jq -r '.Volumes[] | [.VolumeId, .Tags[0].Value] | @tsv' volumes.json
This would be especially appropriate if any of the .VolumeId or .Tags[0].Value values contained spaces, tabs, newlines, etc. The point is that @tsv will handle these in a standard way, so that handling the pair of values can be done in a standard way as well. E.g. using awk, you could read in the pair with awk -F\\t; using bash, IFS=$'\t', etc.
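To sketch the bash side of this, here is a hypothetical volumes.json whose tag value contains a space; reading with IFS=$'\t' keeps the two fields intact even though one of them has embedded whitespace:

```shell
# Hypothetical input whose tag value contains a space.
cat > volumes.json <<'EOF'
{"Volumes":[{"VolumeId":"vol-00112233","Tags":[{"Value":"has spaces"}]}]}
EOF

# Split on tabs only, so spaces inside a field survive.
jq -r '.Volumes[] | [.VolumeId, .Tags[0].Value] | @tsv' volumes.json |
while IFS=$'\t' read -r volumeId tagValue; do
    printf 'id=%s tag=%s\n' "$volumeId" "$tagValue"
done
```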
I think this is fairly easy to achieve for this inner list. The values seem to be all zeros, so...
jq '.[][1]' < yourjsonfile
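For instance, with a shortened, hypothetical version of the input (two pairs only), the filter picks out the second element of each inner array:

```shell
# Two [timestamp, value] pairs, trimmed down from the question's data.
printf '%s' '[[1645128660000,0],[1645128720000,0]]' > yourjsonfile

# .[] iterates the outer array; [1] takes the second element of each pair.
jq '.[][1]' < yourjsonfile
```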
Just to provide another approach: when working with lists, dicts and other types, Python is the right tool. To give you an idea of how you could retrieve the values from the list, you'd do something like:
#!/usr/bin/env python
mylist = [[1645128660000,0],[1645128720000,0],[1645128780000,0],[1645128840000,0],[1645128900000,0],[1645128960000,0],[1645129020000,0],[1645129080000,0],[1645129140000,0],[1645129200000,0]]
for k, v in mylist:
    print("Key:", k)
    print("Value:", v)
Or using list comprehension
[v for k,v in mylist]
should be sufficient. There is also this awesome page where you can play around with jq: https://jqplay.org/#
If we assume that the input is an array of elements and that each element looks like [key, value] with integer keys and values, then we may extract the value for a given key using the below command:
mykey=1645128900000
jq --argjson key "$mykey" '.[] | select(first == $key) | last' file
This selects all the array entries with the given key as its first element and then extracts the value, the last element, from each piece chosen.
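A quick runnable sketch, using a made-up two-pair file where the looked-up key maps to 7 rather than 0 so the selection is visible:

```shell
# Hypothetical input matching the [key, value] layout described above.
printf '%s' '[[1645128660000,0],[1645128900000,7]]' > file

mykey=1645128900000
# first is .[0] and last is .[-1] in jq, so this matches on the key
# and emits the corresponding value.
jq --argjson key "$mykey" '.[] | select(first == $key) | last' file
```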
You want to run a .context,.score filter on each element of v I think:
$ jq -r '.[] | [.c, .e, .score, (.v[] | .context,.score)] | @csv' file.json
"A","B",0.99,"asdf",0.98,"bcdfd",0.97
This is equivalent to using the builtin map function without assembling the results back into an array.
The following creates a JSON-encoded CSV record for each top-level array element, and then extracts and decodes them. For each of the top-level elements, the values of the sub-array are incorporated by "flattening" the array.
jq -r 'map([ .c,.e,.score, (.v|map([.context, .score])) ] | flatten | @csv)[]' file
Given a test document equivalent of the following:
[
{
"c": "A",
"e": "B",
"score": 0.99,
"v": [
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "bcdfd", "score": 0.97, "url": "..." }
]
},
{
"c": "A",
"e": "B",
"score": 0.99,
"v": [
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "bcdfd", "score": 0.97, "url": "..." }
]
},
{
"c": "A",
"e": "B",
"score": 0.99,
"v": [
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "asdf", "score": 0.98, "url": "..." },
{ "context": "bcdfd", "score": 0.97, "url": "..." }
]
}
]
... we get
"A","B",0.99,"asdf",0.98,"bcdfd",0.97
"A","B",0.99,"asdf",0.98,"asdf",0.98,"bcdfd",0.97
"A","B",0.99,"asdf",0.98,"asdf",0.98,"asdf",0.98,"bcdfd",0.97
One could also reorder the operations so that a single use of the @csv operator gets a set of arrays (rather than repeatedly using @csv on single arrays):
jq -r 'map([ .c,.e,.score, (.v|map([.context, .score])) ] | flatten)[]|@csv' file
$ jq -r '[ .[].list1[] ] | join(" ")' file
val1 val2 val3 val4 val5 val6
Create a new array with all the elements of each list1 array from each top-level key. Then, join its elements with spaces. This would give you the values in the order they occur in the input file.
An alternative (and arguably neater) approach is with map(.list1) which returns an array of arrays that you may flatten and join up:
$ jq -r 'map(.list1) | flatten | join(" ")' file
val1 val2 val3 val4 val5 val6
Your attempt generates one joined string per top-level key because .list is, in turn, each of the list1 arrays. Your approach would work if you encapsulated everything up to the last pipe symbol in a [ ... ] (and expanded .list to .list[]) to generate a single array that you then join. This is what I do in my first approach above, only with a slightly shorter expression to generate the elements of that array.
$ jq -r '[ to_entries[] | { list: .value.list1 } | .list[] ] | join(" ")' file
val1 val2 val3 val4 val5 val6
Using Raku (formerly known as Perl_6)
~$ raku -MJSON::Tiny -e 'my %hash = from-json($_) given lines;
my @a = %hash.values.map({ $_.values if $_{"list1"} });
.say for @a.sort.join(" ");' file
OR:
~$ raku -MJSON::Tiny -e 'my %hash = from-json($_) given lines;
for %hash.values.sort() { print .values.sort ~ " " if $_{"list1"} };
put "";' file
Raku is a programming language in the Perl-family that provides high-level support for Unicode. Like Perl, Raku has associative arrays (hashes and/or maps) built-in. The above code is admittedly rather verbose (first example), but you should be able to get the flavor of the language from both examples above:
- Raku's community-supported JSON::Tiny module is called at the command line,
- All lines are given as one data element to the from-json function, which decodes the input and stores it in %hash,
- First example: using a map, the values of the hash are searched through for "list1" keys. If found, these are stored in the @a array. Then the @a array is printed.
- Second example: the %hash is iterated through using for, searched through for "list1" keys, and if found the associated values are printed (with " " at end-of-line). A final put call adds a newline.
Sample Input (includes bogus "list2" elements)
{
"key1": {
"list1": [
"val1",
"val2",
"val3"
]
},
"key2": {
"list1": [
"val4",
"val5"
]
},
"key3": {
"list1": [
"val6"
]
},
"key4": {
"list2": [
"val7"
]
}
}
Sample Output:
val1 val2 val3 val4 val5 val6
Finally, in any programming solution it is often instructive to look at intermediate data-structures. So here's what the %hash looks like after decoding JSON input:
~$ raku -MJSON::Tiny -e 'my %hash = from-json($_) given lines; .say for %hash.sort;' file
key1 => {list1 => [val1 val2 val3]}
key2 => {list1 => [val4 val5]}
key3 => {list1 => [val6]}
key4 => {list2 => [val7]}
https://raku.land/cpan:MORITZ/JSON::Tiny
https://docs.raku.org/language/hashmap
https://raku.org
With .[].name + " " + .[].id you iterate twice over the array. Iterate once and extract your data in one go:
curl … | jq -r '.data[] | .name + " " + .id'
Netbank734113 8a70803f8045722601804f62d54c5d9d
Netbank734112 8a70801c804568ae01804f625a923f8d
You might also be interested in using string interpolation:
curl … | jq -r '.data[] | "\(.name) \(.id)"'
You could take the output of jq and use the Unix shell to de-duplicate it:
command_output | sort | uniq
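As a small aside, sort -u does the same job in a single command; a sketch with made-up duplicate lines standing in for the jq output:

```shell
# printf stands in for the real jq command's output here.
# sort -u sorts and de-duplicates in one step, equivalent to sort | uniq.
printf 'b\na\nb\n' | sort -u
```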
Given the following JSON, what is the best way to extract the phone numbers, whether inside an object or an array of objects?
{
"phones": {
"Alex Baker": { "location": "mobile", "number": "+14157459038" },
"Bob Clarke": [
{ "location": "mobile", "number": "+12135637813" },
{ "location": "office", "number": "+13104443200" }
],
"Carl Davies": [
{ "location": "office", "number": "+14083078372" },
{ "location": "lab", "number": "+15102340052" }
],
"Drew Easton": { "location": "office", "number": "+18057459038" }
}
}
I'm using the following query, but I wonder if there's a better way to do this:
$ cat phones.json | jq '.phones | to_entries | [ .[].value | objects | .number ] + [ .[].value | arrays | .[].number ]'
[
  "+14157459038",
  "+18057459038",
  "+12135637813",
  "+13104443200",
  "+14083078372",
  "+15102340052"
]
Any suggestions will be appreciated, thanks!
To change one entry, make sure that the left-hand side of the assignment operator is a path in the original document:
jq --arg name John --arg phone 4321 \
'( .contacts[] | select(.name == $name) ).phone = $phone' file
You can't use .contacts[] | select(.name == "John") | .phone |= ... since the select() extracts a set of elements from the contacts array. You would therefore only change the elements you extract, separately from the main part of the document.
Notice the difference in
( ... | select(...) ).phone = ...
^^^^^^^^^^^^^^^^^^^^^
path in original document
which works, and
... | select(...) | .phone = ...
^^^^^^^^^^^
extracted bits
which doesn't work.
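The contrast is easy to see on a minimal, made-up one-contact file (compact output via -c for brevity):

```shell
# Hypothetical single-contact document.
cat > file <<'EOF'
{"contacts":[{"name":"John","phone":"1234"}]}
EOF

# Path form: the whole document comes back with the change applied.
jq -c '( .contacts[] | select(.name == "John") ).phone = "4321"' file

# Extracted form: only the selected element is emitted, detached
# from the rest of the document.
jq -c '.contacts[] | select(.name == "John") | .phone = "4321"' file
```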
Using a loop for more than one entry, assuming e.g. bash:
names=( John Jane )
phones=( 4321 4321 )
tmpfile=$(mktemp)
for i in "${!names[@]}"; do
name=${names[i]}
phone=${phones[i]}
jq --arg name "$name" --arg phone "$phone" \
'( .contacts[] | select(.name == $name) ).phone = $phone' file >"$tmpfile"
mv -- "$tmpfile" file
done
That is, I put the names in one array and the new numbers in another, then loop over the indexes and update file for each entry that needs changing, using a temporary file as intermediate storage.
Or, with an associative array:
declare -A lookup
lookup=( [John]=4321 [Jane]=4321 )
for name in "${!lookup[@]}"; do
phone=${lookup[$name]}
# jq as above
done
Assuming you have some JSON input document with the new phone numbers, such as
{
"John": 1234,
"Jane": 5678
}
which you can create using
jo John=1234 Jane=5678
Then you can update the numbers in a single jq invocation:
jo John=1234 Jane=5678 |
jq --slurpfile new /dev/stdin \
'.contacts |= map(.phone = ($new[][.name] // .phone))' file
This reads our input JSON with the new numbers in a structure, $new, that looks like
[
{
"John": 1234,
"Jane": 5678
}
]
This is used in the map() call to change the phone numbers of any contact that is listed. The // .phone makes sure that if the name isn't listed, the phone number stays the same.
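A runnable sketch of the whole thing on a hypothetical two-contact file, with echo standing in for jo; note how John is updated while Mary, who isn't listed in the new numbers, falls back to her existing phone via // .phone:

```shell
# Hypothetical contacts file with two entries.
cat > file <<'EOF'
{"contacts":[{"name":"John","phone":"1234"},{"name":"Mary","phone":"5555"}]}
EOF

# echo stands in for jo here; only John gets a new number.
echo '{"John": 4321}' |
jq -c --slurpfile new /dev/stdin \
   '.contacts |= map(.phone = ($new[][.name] // .phone))' file
```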
Based on Kusalananda's answer, if you only want to search and replace 2 values you can do something like this in one jq invocation:
jq '( .contacts[] | select(.name == "John") ).phone |= "4321" |
( .contacts[] | select(.name == "Jane") ).phone |= "8765"' \
contacts.json
Or this way chaining 2 jq invocations:
cat contacts.json | \
jq '( .contacts[] | select(.name == "John") ).phone |= "4321"' | \
jq '( .contacts[] | select(.name == "Jane") ).phone |= "8765"'
I recommend using String Interpolation:
jq '.users[] | "\(.first) \(.last)"'
We are piping the result of .users[] down to generate the string "\(.first) \(.last)" using string interpolation; the \(foo) syntax is used for string interpolation in jq. So, for the above example, it becomes "Stevie Wonder" (.first and .last of the first element, joined with a space) and "Michael Jackson".
jq reference: String interpolation
You can use addition to concatenate strings.
Strings are added by being joined into a larger string.
jq '.users[] | .first + " " + .last'
The above works when both first and last are strings. If you are extracting different data types (a number and a string), then you need to convert them to equivalent types. For example:
jq '.users[] | .first + " " + (.number|tostring)'
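A quick sketch of the same filter on a minimal, hypothetical input mixing a string field with a number field:

```shell
# .number is a JSON number, so it must go through tostring before
# it can be concatenated with the + operator.
echo '{"users":[{"first":"Stevie","number":21}]}' |
jq -r '.users[] | .first + " " + (.number|tostring)'
```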
I have a json file which i'm using bash to extract.
sample.json
{"extract": { "data": [ {"name": "John Smith", "id": 8752, "address": "1 Anywhere Street", "tel": 1234567890, "email": "john.smith@gmail.com" }, { "name": "Jane Smith", "id": 4568, "address": "719 Anywhere Street", "tel": 0987654321, "email": "janesmith@hotmail.com" } ] } }
and store the values within an array:
id=($(cat sample.json | jq -r '.extract.data[] .name'))
so in this case ${id[0]} will output John Smith and ${id[1]} will output Jane Smith.
I am intending to store the values in a database (this will be my first attempt), which will be structured similarly to the JSON; each object needs to be relative to how it is in the JSON, so it might be better to go with:
data1=($(cat sample.json | jq -r '.extract.data[0] | .[]'))
Let's say I have 1000 names to save to my database along with their ids. I'm after some advice on whether there is a more sensible (more effective) approach to how to:
- Pull the data from the JSON? Will I need to write this 1000 times? E.g.
data1=($(cat sample.json | jq -r '.extract.data[0] | .[]'))
data2=($(cat sample.json | jq -r '.extract.data[1] | .[]'))
data3=($(cat sample.json | jq -r '.extract.data[2] | .[]'))
...
data1000=($(cat sample.json | jq -r '.extract.data[999] | .[]'))
- Put the data into the DB from the first array? Will the code need to reference the array as:
${data1[0]}
${data1[1]}
${data1[2]}
Would be grateful for a steer in the right direction - thanks.