I have the following string in bash
"3.8,3.9,3.10"
Is there a way using shell to convert it into a json array, i.e.
["3.8", "3.9", "3.10"]
With jo, which makes it easy to generate JSON on the command line:
$ jo -p key1="$value1" key2="$value2"
{
"key1": "foo",
"key2": "bar"
}
or, depending on what you want the end result to be,
$ jo -a -p "$(jo key1="$value1")" "$(jo key2="$value2")"
[
{
"key1": "foo"
},
{
"key2": "bar"
}
]
Note that jo will also properly encode the values in the strings $value1 and $value2.
With perl:
$ perl -MJSON -e 'print JSON->new->pretty(1)->encode({@ARGV})' -- "${arr[@]}"
{
"key2" : "bar",
"key1" : "foo"
}
You can do this:
X=("hello world" "goodnight moon")
printf '%s\n' "${X[@]}" | jq -R . | jq -s .
output
[
"hello world",
"goodnight moon"
]
Since jq 1.6 you can do this:
jq --compact-output --null-input '$ARGS.positional' --args "${X[@]}"
giving:
["hello world","goodnight moon"]
This has the benefit that no escaping is required at all. It handles strings containing newlines, tabs, double quotes, backslashes and other control characters. (Well, it doesn't handle NUL characters but you can't have them in a bash array in the first place.)
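Applied to the comma-separated string from the question at the top, jq alone can also do the split, no jo needed (a sketch; -R reads the line as a raw string and -c makes the output compact):

```shell
jq -Rc 'split(",")' <<< "3.8,3.9,3.10"
# ["3.8","3.9","3.10"]
```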
One suggestion is to use --args with jq to create the two arrays and then collect these in the correct location in the main document. Note that --args is required to be the last option on the command line and that all the remaining command line arguments will become elements of the $ARGS.positional array.
{
jq -n --arg key APP-Service1-Admin '{($key): $ARGS.positional}' --args a b
jq -n --arg key APP-Service1-View '{($key): $ARGS.positional}' --args c d
} |
jq -s --arg key 'AD Accounts' '{($key): add}' |
jq --arg Service service1-name --arg 'AWS account' service1-dev '$ARGS.named + .'
The first two jq invocations create a set of two JSON objects:
{
"APP-Service1-Admin": [
"a",
"b"
]
}
{
"APP-Service1-View": [
"c",
"d"
]
}
The third jq invocation uses -s to read that set into an array, which becomes a merged object when passed through add. The merged object is assigned to our top-level key:
{
"AD Accounts": {
"APP-Service1-Admin": [
"a",
"b"
],
"APP-Service1-View": [
"c",
"d"
]
}
}
The last jq adds the remaining top-level keys and their values to the object:
{
"Service": "service1-name",
"AWS account": "service1-dev",
"AD Accounts": {
"APP-Service1-Admin": [
"a",
"b"
],
"APP-Service1-View": [
"c",
"d"
]
}
}
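For comparison, the whole document above can also be built in a single jq invocation by slicing $ARGS.positional; this is only a sketch reusing the example's keys and values, and it relies on knowing how many elements belong to each array:

```shell
jq -nc '{
  Service: "service1-name",
  "AWS account": "service1-dev",
  "AD Accounts": {
    "APP-Service1-Admin": $ARGS.positional[0:2],
    "APP-Service1-View":  $ARGS.positional[2:4]
  }
}' --args a b c d
```

The four-invocation pipeline above is more flexible when the arrays vary in length.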
With jo:
jo -d . \
Service=service1-name \
'AWS account'=service1-dev \
'AD Accounts.APP-Service1-Admin'="$(jo -a a b)" \
'AD Accounts.APP-Service1-View'="$(jo -a c d)"
The "internal" object is created using .-notation (enabled with -d .), and a couple of command substitutions for creating the arrays.
Or you can drop the -d . and use a form of array notation:
jo Service=service1-name \
'AWS account'=service1-dev \
'AD Accounts[APP-Service1-Admin]'="$(jo -a a b)" \
'AD Accounts[APP-Service1-View]'="$(jo -a c d)"
I often use heredocs when creating complicated json objects in bash:
service=$(thing-what-gets-service)
account=$(thing-what-gets-account)
admin=$(jo -a $(thing-what-gets-admin))
view=$(jo -a $(thing-what-gets-view))
read -rd '' json <<EOF
[
{
"Service": "$service",
"AWS Account": "$account",
"AD Accounts": {
"APP-Service1-Admin": $admin,
"APP-Service1-View": $view
}
}
]
EOF
This uses jo to create the arrays as it's a pretty simple way to do it but it could be done differently if needed.
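One caveat with interpolated heredocs is that a variable containing quotes or stray characters silently produces invalid JSON, so it is worth validating the result with jq -e. A minimal sketch, with placeholder values standing in for the command substitutions:

```shell
#!/usr/bin/env bash
# Placeholder values; in the real script these come from jo.
admin='["a","b"]'
view='["c","d"]'

# read -d '' returns non-zero at EOF, hence the || true guard.
read -rd '' json <<EOF || true
{
  "AD Accounts": {
    "APP-Service1-Admin": $admin,
    "APP-Service1-View": $view
  }
}
EOF

# jq -e exits non-zero if the document is not valid (truthy) JSON.
echo "$json" | jq -e . >/dev/null && echo "valid JSON"
```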
Simply use printf to format the output into JSON
First, you have a very blatant typo in this part of your code right here:
echo "${array[3]:$var-3:4}
Note that there is no closing straight quote: ". I fixed it in the rewrite below.
But more to the point, do something like this (using printf), as suggested in this Stack Overflow answer. Tested and works on CentOS 7.
#!/bin/bash
readarray -t array <<< "$(df -h)";
var=$(echo "${array[1]}"| grep -aob '%' | grep -oE '[0-9]+');
df_output="${array[3]:$var-3:4}";
manufacturer=$(cat /sys/class/dmi/id/chassis_vendor);
product_name=$(cat /sys/class/dmi/id/product_name);
version=$(cat /sys/class/dmi/id/bios_version);
serial_number=$(cat /sys/class/dmi/id/product_serial);
hostname=$(hostname);
operating_system=$(hostnamectl | grep "Operating System" | cut -d ' ' -f5-);
architecture=$(arch);
processor_name=$(awk -F':' '/^model name/ {print $2}' /proc/cpuinfo | uniq | sed -e 's/^[ \t]*//');
memory=$(dmidecode -t 17 | grep "Size.*MB" | awk '{s+=$2} END {print s / 1024"GB"}');
hdd_model=$(cat /sys/block/sda/device/model);
system_main_ip=$(hostname -I);
printf '{"manufacturer":"%s","product_name":"%s","version":"%s","serial_number":"%s","hostname":"%s","operating_system":"%s","architecture":"%s","processor_name":"%s","memory":"%s","hdd_model":"%s","system_main_ip":"%s"}' "$manufacturer" "$product_name" "$version" "$serial_number" "$hostname" "$operating_system" "$architecture" "$processor_name" "$memory" "$hdd_model" "$system_main_ip"
The output I get is this:
{"manufacturer":"Oracle Corporation","product_name":"VirtualBox","version":"VirtualBox","serial_number":"","hostname":"sandbox-centos-7","operating_system":"CentOS Linux 7 (Core)","architecture":"x86_64","processor_name":"Intel(R) Core(TM) i5-1030NG7 CPU @ 1.10GHz","memory":"","hdd_model":"VBOX HARDDISK ","system_main_ip":"10.0.2.15 192.168.56.20 "}
And if you have jq installed, you can pipe the output of the shell script to jq to "pretty print" it into a human-readable format. Say your script is named my_script.sh; just pipe it to jq like this:
./my_script.sh | jq .
And the output would look like this:
{
"manufacturer": "Oracle Corporation",
"product_name": "VirtualBox",
"version": "VirtualBox",
"serial_number": "",
"hostname": "sandbox-centos-7",
"operating_system": "CentOS Linux 7 (Core)",
"architecture": "x86_64",
"processor_name": "Intel(R) Core(TM) i5-1030NG7 CPU @ 1.10GHz",
"memory": "",
"hdd_model": "VBOX HARDDISK ",
"system_main_ip": "10.0.2.15 192.168.56.20 "
}
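Note that plain printf does not escape embedded double quotes or backslashes in the collected values, so the script above can emit invalid JSON for unusual hardware strings. If jq is available anyway, a safer variant is to let jq --arg do the quoting; a sketch with two of the fields (the rest follow the same pattern):

```shell
hostname="sandbox-centos-7"
operating_system="CentOS Linux 7 (Core)"

# --arg passes each value in as a properly escaped JSON string.
jq -nc --arg hostname "$hostname" --arg operating_system "$operating_system" \
  '{hostname: $hostname, operating_system: $operating_system}'
```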
The following programs can output json:
lshw:
lshw -json
smartmontools v7+:
smartctl --json --all /dev/sda
lsblk:
lsblk --json
lsipc:
lsipc --json
sfdisk
sfdisk --json
One possible solution to this:
declare -A aliases
aliases[Index]=components/Index/Exports
aliases[Shared]=components/Shared/Exports
aliases[Icons]=components/Icons/Exports
jq -n --argjson n "${#aliases[@]}" '
{ compilerOptions: {
baseUrl: ".",
paths:
(
reduce range($n) as $i ({};
.[$ARGS.positional[$i]] = [$ARGS.positional[$i+$n]]
)
)
} }' --args "${!aliases[@]}" "${aliases[@]}"
This does not use jo; instead it passes the keys and values of the associative array aliases into jq as positional parameters with --args at the end of the command (--args must always be the last option, if it's used at all). The jq utility receives the keys and values as a single array, $ARGS.positional: the first half of the array contains the keys, and the second half contains the corresponding values.
The body of the jq expression creates the output object using a reduce operation over a range of $n integers from zero, where $n is the number of elements in the aliases array. The reduce operation builds the paths object by adding the positional arguments one by one, using the $i:th argument as the key and the $i+$n:th argument as the single element of the corresponding array value.
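The keys-then-values trick can be seen in isolation with a throwaway associative array (the names here are made up; -S just sorts the output keys so the result does not depend on bash's hash ordering):

```shell
declare -A m=([x]=1 [y]=2)

# First half of $ARGS.positional holds the keys, second half the values.
jq -ncS --argjson n "${#m[@]}" '
  reduce range($n) as $i ({};
    .[$ARGS.positional[$i]] = $ARGS.positional[$i + $n])
' --args "${!m[@]}" "${m[@]}"
# {"x":"1","y":"2"}
```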
A slightly different approach using jo to create leaf objects of each key-value pair of the associative array:
declare -A aliases
aliases[Index]=components/Index/Exports
aliases[Shared]=components/Shared/Exports
aliases[Icons]=components/Icons/Exports
for key in "${!aliases[@]}"; do
jo "$key[]=${aliases[$key]}"
done
This would output the three objects
{"Icons":["components/Icons/Exports"]}
{"Index":["components/Index/Exports"]}
{"Shared":["components/Shared/Exports"]}
Since we're using jo like this, we impose some obvious restrictions on the keys of the array (they may not contain =, [], etc.)
We could use jq in place of jo like so:
for key in "${!aliases[@]}"; do
jq -n --arg key "$key" --arg value "${aliases[$key]}" '.[$key] = [$value]'
done
We may then read these and add them in the correct place in the object we're creating in jq:
declare -A aliases
aliases[Index]=components/Index/Exports
aliases[Shared]=components/Shared/Exports
aliases[Icons]=components/Icons/Exports
for key in "${!aliases[@]}"; do
jo "$key[]=${aliases[$key]}"
done |
jq -n '{ compilerOptions: {
baseUrl: ".",
paths: (reduce inputs as $item ({}; . += $item)) } }'
The main difference here is that we don't pass stuff into jq as command line options, but rather as a stream of JSON objects.
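The reduce inputs merge step works on any stream of objects, which is easy to check on its own:

```shell
printf '%s\n' '{"a":1}' '{"b":2}' | jq -nc 'reduce inputs as $item ({}; . += $item)'
# {"a":1,"b":2}
```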
Personally, I'd use perl or another proper programming language instead of a shell (especially bash!). Or at least switch to zsh, which has better associative array support, and use perl to do the JSONy stuff:
#! /usr/bin/perl
use JSON;
%my_aliases = (qw(
Index components/Index/Exports
Shared components/Shared/Exports
Icons components/Icons/Exports
));
$j->{compilerOptions}->{baseUrl} = "";
$j->{compilerOptions}->{paths}->{$_} = [$my_aliases{$_}] for keys %my_aliases;
print to_json($j, {"pretty" => 1});
Or:
#! /bin/zsh -
typeset -A my_aliases=(
Index components/Index/Exports
Shared components/Shared/Exports
Icons components/Icons/Exports
)
print -rNC1 -- "${(kv@)my_aliases}" |
perl -MJSON -0e '
chomp (@records = <>);
%my_aliases = @records;
$j->{compilerOptions}->{baseUrl} = "";
$j->{compilerOptions}->{paths}->{$_} = [$my_aliases{$_}] for keys %my_aliases;
print to_json($j, {"pretty" => 1})'
There are two main issues in your data and code:
- You have an input file in DOS or Windows text file format.
- Your code creates multiple single-element arrays rather than a single array with multiple elements.
Your input file, lol, appears to be a text file in DOS/Windows format. This means that when a utility that expects a Unix text file as input reads the file, each line will have an additional carriage-return character (\r) at the end.
You should convert the file to Unix text file format. This can be done with e.g. dos2unix.
As for your code, you can avoid the shell loop and let jq read the whole file in one go. This allows you to create a single result array rather than a set of arrays, each with a single object, which your code does.
The following assumes that the only thing that varies between the elements of the top-level array in the result is the source value (there is nothing in the question that explains how the max and min values of the source and destination ports should be picked):
jq -n -R '
[inputs] |
map( {
source: .,
protocol: "17",
isStateless: true,
udpOptions: {
sourcePortRange: { min: 521, max: 65535 },
destinationPortRange: { min: 1, max: 65535 }
}
} )' cidr.txt
or in the same compact one-line form as in your question:
jq -n -R '[inputs]|map({source:.,protocol:"17",isStateless:true,udpOptions:{sourcePortRange:{min:521,max:65535},destinationPortRange:{min:1,max:65535}}})' cidr.txt
Using inputs, jq reads the remaining inputs. Together with -R, it will read each line of cidr.txt as a single string. Putting this in an array with [inputs] we create an array of strings. The map() call takes each string from this array and transforms it into the source value for a larger, otherwise static object.
Add -c to the invocation of jq to get "compact" output.
If you don't want to, or are unable to, convert the input data from DOS to Unix text form, you can remove the carriage-return characters from within the jq expression instead.
To do this, replace the . after source: with (.|rtrimstr("\r")), including the outer parentheses. This trims the carriage-return from the end of each string read from the file.
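The effect of rtrimstr can be checked on a two-line CRLF sample without touching any file:

```shell
# -R reads raw lines; the trailing \n is stripped but the \r survives,
# so rtrimstr("\r") is what removes it.
printf 'a\r\nb\r\n' | jq -Rnc '[inputs | rtrimstr("\r")]'
# ["a","b"]
```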
Answer
This should get you the exact syntax you require:
In the example, the file containing your CIDR values is named cidr.txt and appears to contain only IP addresses with subnets, i.e. the other parameters remain constant. (If you actually need to vary those additional parameters, i.e. if the port ranges you provided are not the same for every CIDR, then I will update my answer and provide a fully fleshed-out template.)
Additionally, you will require jq, the ubiquitous tool for dealing with JSON from bash. It may well already be installed these days, but if not, sudo apt install jq as usual.
while read -r cidr ; do
jq -n --arg CIDR "$cidr" '{"source":$CIDR,"protocol":"17","isStateless":true,"udpOptions": {"destinationPortRange":{"max": 65535,"min": 1},"sourcePortRange": {"min":521,"max": 65535} }}'
done < cidr.txt | jq --slurp
Using the four-line file sample you provided, the output of the above will give you the following in the terminal:
[
{
"source": "1.1.1.0/22",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "2.2.2.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "5.5.5.0/21",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "6.6.0.0/16",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
}
]
UPDATE
In order to fix the above output, you need to "repair" the line termination of your CIDR file. There are two ways of doing so:
Answer 1:
You can make the following changes to your script
#!/bin/bash
# There are four changes made to the script:
# 1. The addition of `tr` in order to eliminate '\r'.
# 2. Removal of '[' and ']' inside the `jq` command.
# 3. Addition of `jq --slurp` to enforce your specified JSON format.
# 4. Addition of double-quotes around `$lel` to prevent splitting.
lel=$(while read cidr ; do
cidr=$(echo "$cidr" | tr -d '\r' );
jq -n --arg CIDR "$cidr" '{"source":$CIDR,"protocol":"17","isStateless":true,"udpOptions": {"destinationPortRange":{"max": 65535,"min": 1},"sourcePortRange": {"min":521,"max": 65535} }}'
done < lol | jq --slurp )
echo "$lel"
Alternative answer
You can "repair" the file containing your list of CIDRs:
cp lol lol_old
cat lol_old | tr -d '\r' > lol
Then, you can use the earlier version of your script, albeit with the corrections explained in comments 2-4 of the script above.
Explanation
The reason for the \r found in your output is actually found in the formatting of your particular file containing your CIDRs, which happens to follow Windows - and not Unix - line termination standard.
The \r symbol you see in your output is actually present in your source file as well, where it is used along with \n to terminate each individual line. Both \r and \n are invisible characters.
The combination of \r\n is known as CRLF - carriage return + line feed - which is a remnant from the age of typewriters, yet for some reason is still used by Windows systems. On the other hand, Unix uses only LF to terminate lines, where it is represented by \n in its escaped form.
To confirm this peculiar behavior, you can try executing the following:
head -n 1 lol | xxd -ps
312e312e312e302f32320d0a
The above output - the first line of your file converted to its hex form - ends with 0d0a. This hex combination represents CR+LF. On the other hand, if you execute the following directly in your Bash terminal:
echo "abcd" | xxd -ps
616263640a
you will find that the output follows Unix standard, where the line termination uses simple 0a, i.e. the hex representation of LF.
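A quick way to see the extra byte without reading hex dumps is to count bytes before and after stripping the carriage return:

```shell
printf '1.1.1.0/22\r\n' | wc -c                # 12: ten characters + CR + LF
printf '1.1.1.0/22\r\n' | tr -d '\r' | wc -c   # 11: the CR is gone
```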
Note: This line-termination issue is incredibly common, widespread and something one always needs to be on the lookout for operating from inside Unix on files that may have been generated under Windows.
Info regarding jq
The above example (the while read loop) sends its output to the terminal, but you can of course use redirection if you need to store it in a file, using the standard syntax:
while read cidr; do [...] ; done < cidr.txt > outcidr.json
This file will contain the pretty-printed JSON output, but if you need or prefer the output constrained to a single line, you can do:
cat outcidr.json | tr -d '\n' | tr -s ' '
More importantly, if you ever in the future end up with a single-line, complex JSON output that looks impossible to decipher, jq can be used to reformat and pretty-print it:
echo '[{"source":"1.1.1.0/24","protocol":"17","isStateless":true,"udpOptions":{"destinationPortRange":{"max":55555,"min":10001},"sourcePortRange":{"min":521,"max":65535}}},{"source":"2.2.2.0/24","protocol":"17","isStateless":true,"udpOptions":{"destinationPortRange":{"max":55555,"min":10001},"sourcePortRange":{"min":521,"max":65535}}},{"source":"3.3.3.0/24","protocol":"17","isStateless":true,"udpOptions":{"destinationPortRange":{"max":55555,"min":10001},"sourcePortRange":{"min":521,"max":65535}}}]' > bad_output.json
cat bad_output.json | tr -d '\r' | jq ''
[
{
"source": "1.1.1.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 55555,
"min": 10001
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "2.2.2.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 55555,
"min": 10001
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "3.3.3.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 55555,
"min": 10001
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
}
]
# Getting first-order keys for each of the 3 objects
jq '.[] | keys' bad_output.json
[
"isStateless",
"protocol",
"source",
"udpOptions"
]
[
"isStateless",
"protocol",
"source",
"udpOptions"
]
[
"isStateless",
"protocol",
"source",
"udpOptions"
]
# Getting values corresponding to the selected key
jq '.[] | .source' outcidr.json
"1.1.1.0/22"
"2.2.2.0/24"
"5.5.5.0/21"
"6.6.0.0/16"
First, your data is not valid JSON; there is one comma too many:
{
"TestNames": [
{
"Name": "test1",
"CreateDate": "2016-08-30T10:52:52Z",
"Id": "testId1", <--- Remove that!
},
{
"Name": "test2",
"CreateDate": "2016-08-30T10:52:13Z",
"Id": "testId2"
}
]
}
Once you've fixed that you can use jq for parsing json on the command line:
echo "$x" | jq -r '.TestNames[]|"\(.Name) , \(.Id)"'
If you need to keep the output values:
declare -A map1
while read name id ; do
echo "$name"
echo "$id"
map1[$name]=$id
done < <(echo "$x" | jq -r '.TestNames[]|"\(.Name) \(.Id)"')
echo "count : ${#map1[@]}"
echo "in loop: ${map1[$name]}"
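One caveat: the whitespace-separated read splits a Name containing spaces into the wrong fields. A more robust sketch uses jq's @tsv with a tab IFS (the sample data here is made up):

```shell
x='{"TestNames":[{"Name":"test one","Id":"testId1"},{"Name":"test2","Id":"testId2"}]}'
declare -A map1
# @tsv joins the fields with literal tabs, which read then splits on.
while IFS=$'\t' read -r name id; do
  map1[$name]=$id
done < <(jq -r '.TestNames[] | [.Name, .Id] | @tsv' <<< "$x")
echo "${map1[test one]}"   # testId1
```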
I'd recommend using jq, a command-line JSON parser:
$ echo '''{
"Name": "test1",
"CreateDate": "2016-08-30T10:52:52Z",
"Id": "testId1"
}''' | jq '.Name + " , " + .Id'
"test1 , testId1"
$ echo '''{ "TestNames":
[{
"Name": "test1",
"CreateDate": "2016-08-30T10:52:52Z",
"Id": "testId1"
},
{
"Name": "test2",
"CreateDate": "2016-08-30T10:52:13Z",
"Id": "testId2"
}]
}''' | jq '.TestNames[] | .Name + " , " + .Id'
"test1 , testId1"
"test2 , testId2"
Further to Jeff's answer, please note that the transformation can be accomplished with a single invocation of jq. If your jq has the inputs filter:
jq -Rn '[inputs] | {cassandra:{nodes:map({ip_address:.,type:"seed"})}}'
Otherwise:
jq -Rs 'split("\n") | {cassandra:{nodes:map({ip_address:.,type:"seed"})}}' ips.txt
Using jq, you'll need an extra pass to convert the raw text into a workable array, but it's simple:
$ jq -R '.' myseedips | jq -s '{cassandra:{nodes:map({ip_address:.,type:"seed"})}}'
This yields the following:
{
"cassandra": {
"nodes": [
{
"ip_address": "10.204.99.15",
"type": "seed"
},
{
"ip_address": "10.204.99.12",
"type": "seed"
},
{
"ip_address": "10.204.99.41",
"type": "seed"
}
]
}
}
If you really cannot use a proper JSON parser such as jq[1], try an awk-based solution:
Bash 4.x:
readarray -t values < <(awk -F\" 'NF>=3 {print $4}' myfile.json)
Bash 3.x:
IFS=$'\n' read -d '' -ra values < <(awk -F\" 'NF>=3 {print $4}' myfile.json)
This stores all property values in Bash array ${values[@]}, which you can inspect with declare -p values.
These solutions have limitations:
- each property must be on its own line,
- all values must be double-quoted,
- embedded escaped double quotes are not supported.
All these limitations reinforce the recommendation to use a proper JSON parser.
Note: The following alternative solutions use the Bash 4.x+ readarray -t values command, but they also work with the Bash 3.x alternative, IFS=$'\n' read -d '' -ra values.
grep + cut combination: A single grep command won't do (unless you use GNU grep - see below), but adding cut helps:
readarray -t values < <(grep '"' myfile.json | cut -d '"' -f4)
GNU grep: Using -P to support PCREs, which support \K to drop everything matched so far (a more flexible alternative to a look-behind assertion) as well as look-ahead assertions ((?=...)):
readarray -t values < <(grep -Po ':\s*"\K.+(?="\s*,?\s*$)' myfile.json)
Finally, here's a pure Bash (3.x+) solution:
What makes this a viable alternative in terms of performance is that no external utilities are called in each loop iteration; however, for larger input files, a solution based on external utilities will be much faster.
#!/usr/bin/env bash
declare -a values # declare the array
# Read each line and use regex parsing (with Bash's `=~` operator)
# to extract the value.
while read -r line; do
# Extract the value from between the double quotes
# and add it to the array.
[[ $line =~ :[[:blank:]]+\"(.*)\" ]] && values+=( "${BASH_REMATCH[1]}" )
done < myfile.json
declare -p values # print the array
[1] Here's what a robust jq-based solution would look like (Bash 4.x):
readarray -t values < <(jq -r '.[]' myfile.json)
jq is good enough to solve this problem:
paste -s <(jq '.files[].name' YourJsonString) <(jq '.files[].age' YourJsonString) <(jq '.files[].websiteurl' YourJsonString)
This gives you a table, and you can grep for any row or print any column with awk.
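If your jq supports @tsv, the three invocations plus paste can also be collapsed into one call, which keeps each row's fields aligned by construction (the sample data is made up; websiteurl omitted for brevity):

```shell
json='{"files":[{"name":"a","age":1},{"name":"b","age":2}]}'
jq -r '.files[] | [.name, .age] | @tsv' <<< "$json"
```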