One suggestion is to use --args with jq to create the two arrays and then collect these in the correct location in the main document. Note that --args is required to be the last option on the command line and that all the remaining command line arguments will become elements of the $ARGS.positional array.
{
jq -n --arg key APP-Service1-Admin '{($key): $ARGS.positional}' --args a b
jq -n --arg key APP-Service1-View '{($key): $ARGS.positional}' --args c d
} |
jq -s --arg key 'AD Accounts' '{($key): add}' |
jq --arg Service service1-name --arg 'AWS account' service1-dev '$ARGS.named + .'
The first two jq invocations create a set of two JSON objects:
{
"APP-Service1-Admin": [
"a",
"b"
]
}
{
"APP-Service1-View": [
"c",
"d"
]
}
The third jq invocation uses -s to read that set into an array, which becomes a merged object when passed through add. The merged object is assigned to our top-level key:
{
"AD Accounts": {
"APP-Service1-Admin": [
"a",
"b"
],
"APP-Service1-View": [
"c",
"d"
]
}
}
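The slurp-and-add step can also be seen in isolation (this toy input is my own, not part of the original answer): -s collects the stream of objects into an array, and add merges that array into a single object.

```shell
# -s wraps the two objects in an array; add then merges them into one object.
printf '{"a":1}\n{"b":2}\n' | jq -cs 'add'
# -> {"a":1,"b":2}
```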
The last jq adds the remaining top-level keys and their values to the object:
{
"Service": "service1-name",
"AWS account": "service1-dev",
"AD Accounts": {
"APP-Service1-Admin": [
"a",
"b"
],
"APP-Service1-View": [
"c",
"d"
]
}
}
With jo:
jo -d . \
Service=service1-name \
'AWS account'=service1-dev \
'AD Accounts.APP-Service1-Admin'="$(jo -a a b)" \
'AD Accounts.APP-Service1-View'="$(jo -a c d)"
The "internal" object is created using .-notation (enabled with -d .), and a couple of command substitutions for creating the arrays.
Or you can drop the -d . and use a form of array notation:
jo Service=service1-name \
'AWS account'=service1-dev \
'AD Accounts[APP-Service1-Admin]'="$(jo -a a b)" \
'AD Accounts[APP-Service1-View]'="$(jo -a c d)"
I often use heredocs when creating complicated json objects in bash:
service=$(thing-what-gets-service)
account=$(thing-what-gets-account)
admin=$(jo -a $(thing-what-gets-admin))
view=$(jo -a $(thing-what-gets-view))
read -rd '' json <<EOF
[
{
"Service": "$service",
"AWS Account": "$account",
"AD Accounts": {
"APP-Service1-Admin": $admin,
"APP-Service1-View": $view
}
}
]
EOF
This uses jo to create the arrays, as it's a pretty simple way to do it, but it could be done differently if needed.
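One caveat with this pattern: read -rd '' reads until a NUL byte, which a here-document never contains, so read returns non-zero when it hits end of input even though the variable is populated. Under set -e (or any exit-status check) you may want to guard it, as in this minimal sketch:

```shell
# read -rd '' hits EOF before finding a NUL delimiter, so it exits non-zero;
# the variable is still populated. Guard it if the script runs under `set -e`.
read -rd '' json <<EOF || true
{"id": "demo"}
EOF
printf '%s\n' "$json"
# -> {"id": "demo"}
```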
Generally speaking, don't do this. Use a tool that already knows how to quote values correctly, like jq:
jq -n --arg appname "$appname" '{apps: [ {name: $appname, script: "./cms/bin/www", watch: false}]}' > process.json
That said, your immediate issue is that sudo applies only to the command, not to the redirection. One workaround is to use tee to write to the file instead.
echo '{...}' | sudo tee process.json > /dev/null
To output text, use echo rather than cat (which outputs data from files or streams).
Aside from that, you will also have to escape the double-quotes inside your text if you want them to appear in the result.
echo -e "Name of your app?\n"
read appname
echo "{apps:[{name:\"${appname}\",script:\"./cms/bin/www\",watch:false}]}" > process.json
If you need to process more than just a simple line, I second @chepner's suggestion to use a JSON tool such as jq.
Your -bash: process.json: Permission denied comes from the fact you cannot write to the process.json file. If the file does not exist, check that your user has write permissions on the directory. If it exists, check that your user has write permissions on the file.
Simply use printf to format the output into JSON
First, there is a blatant typo in this part of your code:
echo "${array[3]:$var-3:4}
Note that there is no closing straight quote: ". It is fixed in the rewrite below.
More to the point, you can do something like this (using printf), as suggested in this StackOverflow answer. Tested and working on CentOS 7.
#!/bin/bash
readarray -t array <<< "$(df -h)";
var=$(echo "${array[1]}"| grep -aob '%' | grep -oE '[0-9]+');
df_output="${array[3]:$var-3:4}";
manufacturer=$(cat /sys/class/dmi/id/chassis_vendor);
product_name=$(cat /sys/class/dmi/id/product_name);
version=$(cat /sys/class/dmi/id/bios_version);
serial_number=$(cat /sys/class/dmi/id/product_serial);
hostname=$(hostname);
operating_system=$(hostnamectl | grep "Operating System" | cut -d ' ' -f5-);
architecture=$(arch);
processor_name=$(awk -F':' '/^model name/ {print $2}' /proc/cpuinfo | uniq | sed -e 's/^[ \t]*//');
memory=$(dmidecode -t 17 | grep "Size.*MB" | awk '{s+=$2} END {print s / 1024"GB"}');
hdd_model=$(cat /sys/block/sda/device/model);
system_main_ip=$(hostname -I);
printf '{"manufacturer":"%s","product_name":"%s","version":"%s","serial_number":"%s","hostname":"%s","operating_system":"%s","architecture":"%s","processor_name":"%s","memory":"%s","hdd_model":"%s","system_main_ip":"%s"}' "$manufacturer" "$product_name" "$version" "$serial_number" "$hostname" "$operating_system" "$architecture" "$processor_name" "$memory" "$hdd_model" "$system_main_ip"
The output I get is this:
{"manufacturer":"Oracle Corporation","product_name":"VirtualBox","version":"VirtualBox","serial_number":"","hostname":"sandbox-centos-7","operating_system":"CentOS Linux 7 (Core)","architecture":"x86_64","processor_name":"Intel(R) Core(TM) i5-1030NG7 CPU @ 1.10GHz","memory":"","hdd_model":"VBOX HARDDISK ","system_main_ip":"10.0.2.15 192.168.56.20 "}
And if you have jq installed, you can pipe the output of the shell script to jq to “pretty print” the output into some human readable format. Like let’s say your script is named my_script.sh, just pipe it to jq like this:
./my_script.sh | jq
And the output would look like this:
{
"manufacturer": "Oracle Corporation",
"product_name": "VirtualBox",
"version": "VirtualBox",
"serial_number": "",
"hostname": "sandbox-centos-7",
"operating_system": "CentOS Linux 7 (Core)",
"architecture": "x86_64",
"processor_name": "Intel(R) Core(TM) i5-1030NG7 CPU @ 1.10GHz",
"memory": "",
"hdd_model": "VBOX HARDDISK ",
"system_main_ip": "10.0.2.15 192.168.56.20 "
}
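One caveat with the printf approach: the values are not JSON-escaped, so a double quote or backslash in any variable produces invalid JSON. A minimal illustration, using a made-up value:

```shell
# A double quote inside the value is copied through verbatim, breaking the JSON.
hostname='my"host'
printf '{"hostname":"%s"}\n' "$hostname"
# -> {"hostname":"my"host"} -- not valid JSON; this is where jq's --arg
#    quoting earns its keep.
```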
The following programs can output json:
lshw:
lshw -json
smartmontools v7+:
smartctl --json --all /dev/sda
lsblk:
lsblk --json
lsipc:
lsipc --json
sfdisk:
sfdisk --json
There are two main issues in your data and code:
- You have an input file in DOS or Windows text file format.
- Your code creates multiple single-element arrays rather than a single array with multiple elements.
Your input file, lol, appears to be a text file in DOS/Windows format. This means that when a utility that expects a Unix text file as input reads the file, each line will have an additional carriage-return character (\r) at the end.
You should convert the file to Unix text file format. This can be done with e.g. dos2unix.
As for your code, you can avoid the shell loop and let jq read the whole file in one go. This allows you to create a single result array rather than a set of arrays, each with a single object, which your code does.
The following assumes that the only thing that varies between the elements of the top-level array in the result is the source value (there is nothing in the question that explains how the max and min values of the source and destination ports should be picked):
jq -n -R '
[inputs] |
map( {
source: .,
protocol: "17",
isStateless: true,
udpOptions: {
sourcePortRange: { min: 521, max: 65535 },
destinationPortRange: { min: 1, max: 65535 }
}
} )' cidr.txt
or in the same compact one-line form as in your question:
jq -n -R '[inputs]|map({source:.,protocol:"17",isStateless:true,udpOptions:{sourcePortRange:{min:521,max:65535},destinationPortRange:{min:1,max:65535}}})' cidr.txt
Using inputs, jq reads the remaining inputs. Together with -R, it will read each line of cidr.txt as a single string. Putting this in an array with [inputs] we create an array of strings. The map() call takes each string from this array and transforms it into the source value for a larger, otherwise static object.
Add -c to the invocation of jq to get "compact" output.
If you don't want to, or are unable to, convert the input data from DOS to Unix text form, you can remove the carriage-return characters from within the jq expression instead.
To do this, replace the . after source: with (.|rtrimstr("\r")), including the outer parentheses. This trims the carriage-return from the end of each string read from the file.
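A small self-contained illustration of that replacement (the two CRLF-terminated input lines here are invented):

```shell
# -R reads each line as a raw string, keeping the stray \r from DOS line
# endings; rtrimstr("\r") strips it before the string becomes the source value.
printf '1.1.1.0/24\r\n2.2.2.0/24\r\n' |
  jq -cnR '[inputs] | map({source: (.|rtrimstr("\r"))})'
# -> [{"source":"1.1.1.0/24"},{"source":"2.2.2.0/24"}]
```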
This should get you the exact syntax you require:
In the example, the file containing your CIDR values is named cidr.txt and appears to contain only IP addresses with subnets, i.e. the other parameters remain constant. (If you actually need to change those additional parameters, i.e. the port ranges you provided are not the same for every CIDR, then I will update my answer and provide a fully fleshed-out template.)
Additionally, you will require jq, the ubiquitous tool for dealing with JSON from bash. It may well already be installed these days, but if not, sudo apt install jq as usual.
while read cidr ; do
jq -n --arg CIDR "$cidr" '{"source":$CIDR,"protocol":"17","isStateless":true,"udpOptions": {"destinationPortRange":{"max": 65535,"min": 1},"sourcePortRange": {"min":521,"max": 65535} }}'
done < cidr.txt | jq --slurp
Using the four-line file sample you provided, the output of the above will give you the following in the terminal:
[
{
"source": "1.1.1.0/22",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "2.2.2.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "5.5.5.0/21",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "6.6.0.0/16",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 65535,
"min": 1
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
}
]
UPDATE
In order to fix the above output, you need to "repair" the line termination of your CIDR file. There are two ways of doing so:
Answer 1:
You can make the following changes to your script
#!/bin/bash
# There are four changes made to the script:
# 1. The addition of `tr` in order to eliminate '\r'.
# 2. Removal of '[' and ']' inside the `jq` command.
# 3. Addition of `jq --slurp` to enforce your specified JSON format.
# 4. Addition of double-quotes around `$lel` to prevent splitting.
lel=$(while read cidr ; do
cidr=$(echo "$cidr" | tr -d '\r' );
jq -n --arg CIDR "$cidr" '{"source":$CIDR,"protocol":"17","isStateless":true,"udpOptions": {"destinationPortRange":{"max": 65535,"min": 1},"sourcePortRange": {"min":521,"max": 65535} }}'
done < lol | jq --slurp )
echo "$lel"
Alternative answer
You can "repair" the file containing your list of CIDRs:
cp lol lol_old
cat lol_old | tr -d '\r' > lol
Then you can use the earlier version of your script, albeit with the corrections explained in comments 2-4 of the script included above.
Explanation
The reason for the \r found in your output is actually found in the formatting of your particular file containing your CIDRs, which happens to follow Windows - and not Unix - line termination standard.
The \r symbol you see in your output is actually present in your source file as well, where it is used along with \n to terminate each individual line. Both \r and \n are invisible characters.
The combination of \r\n is known as CRLF - carriage return + line feed - which is a remnant from the age of typewriters, yet for some reason is still used by Windows systems. On the other hand, Unix uses only LF to terminate lines, where it is represented by \n in its escaped form.
To confirm this peculiar behavior, you can try executing the following:
head -n 1 lol | xxd -ps
312e312e312e302f32320d0a
In the above output, the first line of your file converted to its hex form ends with 0d0a. This hex combination represents CR+LF. On the other hand, if you execute the following directly in your Bash terminal:
echo "abcd" | xxd -ps
616263640a
you will find that the output follows Unix standard, where the line termination uses simple 0a, i.e. the hex representation of LF.
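If xxd happens not to be installed, od from coreutils shows the same bytes (the exact column spacing of the output may vary between implementations):

```shell
# 0d 0a at the end is the CR+LF pair; a Unix line would end with 0a alone.
printf 'abc\r\n' | od -An -tx1
```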
Note: This line-termination issue is incredibly common and widespread, and something one always needs to be on the lookout for when working in Unix on files that may have been generated under Windows.
Info regarding jq
The above example (the while read loop) sends its output to the terminal, but you can of course use redirection if you need to store it in a file, using the standard syntax:
while read cidr; do [...] ; done < cidr.txt > outcidr.json
This file will contain the pretty-printed JSON output, but if you need or prefer the output constrained to a single line, you can do:
cat outcidr.json | tr -d '\n' | tr -s ' '
More importantly, if you ever in the future end up with a single-line, complex JSON output that looks impossible to decipher, jq can be used to reformat and pretty-print it:
echo '[{"source":"1.1.1.0/24","protocol":"17","isStateless":true,"udpOptions":{"destinationPortRange":{"max":55555,"min":10001},"sourcePortRange":{"min":521,"max":65535}}},{"source":"2.2.2.0/24","protocol":"17","isStateless":true,"udpOptions":{"destinationPortRange":{"max":55555,"min":10001},"sourcePortRange":{"min":521,"max":65535}}},{"source":"3.3.3.0/24","protocol":"17","isStateless":true,"udpOptions":{"destinationPortRange":{"max":55555,"min":10001},"sourcePortRange":{"min":521,"max":65535}}}]' > bad_output.json
cat bad_output.json | tr -d '\r' | jq ''
[
{
"source": "1.1.1.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 55555,
"min": 10001
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "2.2.2.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 55555,
"min": 10001
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
},
{
"source": "3.3.3.0/24",
"protocol": "17",
"isStateless": true,
"udpOptions": {
"destinationPortRange": {
"max": 55555,
"min": 10001
},
"sourcePortRange": {
"min": 521,
"max": 65535
}
}
}
]
# Getting first-order keys for each of the 3 objects
jq '.[] | keys' bad_output.json
[
"isStateless",
"protocol",
"source",
"udpOptions"
]
[
"isStateless",
"protocol",
"source",
"udpOptions"
]
[
"isStateless",
"protocol",
"source",
"udpOptions"
]
# Getting the values corresponding to a selected key
jq '.[] | .source' outcidr.json
"1.1.1.0/22"
"2.2.2.0/24"
"5.5.5.0/21"
"6.6.0.0/16"
You are better off using a program like jq to generate the JSON, if you don't know ahead of time if the contents of the variables are properly escaped for inclusion in JSON. Otherwise, you will just end up with invalid JSON for your trouble.
BUCKET_NAME=testbucket
OBJECT_NAME=testworkflow-2.0.1.jar
TARGET_LOCATION=/opt/test/testworkflow-2.0.1.jar
JSON_STRING=$( jq -n \
--arg bn "$BUCKET_NAME" \
--arg on "$OBJECT_NAME" \
--arg tl "$TARGET_LOCATION" \
'{bucketname: $bn, objectname: $on, targetlocation: $tl}' )
You can use printf:
JSON_FMT='{"bucketname":"%s","objectname":"%s","targetlocation":"%s"}\n'
printf "$JSON_FMT" "$BUCKET_NAME" "$OBJECT_NAME" "$TARGET_LOCATION"
much clearer and simpler
If you only need to output a small JSON, use printf:
printf '{"hostname":"%s","distro":"%s","uptime":"%s"}\n' "$hostname" "$distro" "$uptime"
Or if you need to produce a larger JSON, use a heredoc as explained by leandro-mora. If you use the here-doc solution, please be sure to upvote his answer:
cat <<EOF > /your/path/myjson.json
{"id" : "$my_id"}
EOF
Some of the more recent distros have a file called /etc/lsb-release or similar (cat /etc/*release). Therefore, you could possibly do away with your dependency on Python:
distro=$(awk -F= 'END { print $2 }' /etc/lsb-release)
As an aside, you should probably do away with using backticks. They're a bit old-fashioned.
I find it much easier to create the JSON using cat:
cat <<EOF > /your/path/myjson.json
{"id" : "$my_id"}
EOF
Further to Jeff's answer, please note that the transformation can be accomplished with a single invocation of jq. If your jq has the inputs filter:
jq -Rn '[inputs] | {cassandra:{nodes:map({ip_address:.,type:"seed"})}}'
Otherwise:
jq -Rs 'split("\n") | {cassandra:{nodes:map({ip_address:.,type:"seed"})}}' ips.txt
Using jq, you'll need an extra pass to convert the raw text into a workable array, but it's simple:
$ jq -R '.' myseedips | jq -s '{cassandra:{nodes:map({ip_address:.,type:"seed"})}}'
This yields the following:
{
"cassandra": {
"nodes": [
{
"ip_address": "10.204.99.15",
"type": "seed"
},
{
"ip_address": "10.204.99.12",
"type": "seed"
},
{
"ip_address": "10.204.99.41",
"type": "seed"
}
]
}
}
That grep/echo block isn't going to do anything useful; $? is set only once, so it won't iterate through the fields.
Fortunately, there's a much easier way to do this: just split the fields apart into variables, and read can do that for you:
while IFS=':' read -r randomid id userid dns status; do
printf '{"randomId":{"s":"%s"},"id":{"s":"%s"},"userId":{"s":"%s"},"dns":{"s":"%s"},"status":{"s":"%s"}}\n' \
"$randomid" "$id" "$userid" "$dns" "$status"
done
Using printf instead of the more-familiar echo avoids all the \"-sequences echo would require. Do note the backslash at the end of the line to split it.
BTW: The format you're producing is called JSON, and there might be tools to help generate it (for example, jq). Also, it can require its own escaping if, e.g., your fields can contain double-quotes.
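For instance, feeding one made-up colon-separated record through the loop:

```shell
# Hypothetical record; the field layout randomid:id:userid:dns:status is
# assumed from the loop above.
printf 'r1:42:u7:example.org:up\n' |
while IFS=':' read -r randomid id userid dns status; do
  printf '{"randomId":{"s":"%s"},"id":{"s":"%s"},"userId":{"s":"%s"},"dns":{"s":"%s"},"status":{"s":"%s"}}\n' \
    "$randomid" "$id" "$userid" "$dns" "$status"
done
# -> {"randomId":{"s":"r1"},"id":{"s":"42"},"userId":{"s":"u7"},"dns":{"s":"example.org"},"status":{"s":"up"}}
```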
With perl:
perl -MJSON -F: -ple '@A = qw/randomId id userId dns status/; $_ = encode_json({map { shift @A => { "s" => $_ } } @F } )' input.csv
With jo, which makes it easy to generate JSON on the command line:
$ jo -p key1="$value1" key2="$value2"
{
"key1": "foo",
"key2": "bar"
}
or, depending on what you want the end result to be,
$ jo -a -p "$(jo key1="$value1")" "$(jo key2="$value2")"
[
{
"key1": "foo"
},
{
"key2": "bar"
}
]
Note that jo will also properly encode the values in the strings $value1 and $value2.
With perl:
$ perl -MJSON -e 'print JSON->new->pretty(1)->encode({@ARGV})' -- "${arr[@]}"
{
"key2" : "bar",
"key1" : "foo"
}
Hi all,
Trying to create the following JSON structure through bash. There will be a max of 4 environments that I want shown even if there is no content within them; example output can be found below the structure. Apologies for the huge post.
Example General Structure:
{
"ENV":{
"ENV1":{
"Middleware": [
{
"value1": "",
"value2": ""
}
],
"System": [
{
"value1": "",
"value2": "",
"value3": ""
}
],
"Application": [
{
"value1": "",
"value2": ""
}
],
"Utility":[
{
"value1": "",
"value2": "",
"value3": ""
}
]
},
"ENV2":{
"Middleware": [
{
"value1": "",
"value2": ""
}
],
"System": [
{
"value1": "",
"value2": "",
"value3": ""
}
],
"Application": [
{
"value1": "",
"value2": ""
}
],
"Utility":[
{
"value1": "",
"value2": "",
"value3": ""
}
]
},
"ENV3":{
"Middleware": [
{
"value1": "",
"value2": ""
}
],
"System": [
{
"value1": "",
"value2": "",
"value3": ""
}
],
"Application": [
{
"value1": "",
"value2": ""
}
],
"Utility":[
{
"value1": "",
"value2": "",
"value3": ""
}
]
},
"ENV4":{
"Middleware": [
{
"value1": "",
"value2": ""
}
],
"System": [
{
"value1": "",
"value2": "",
"value3": ""
}
],
"Application": [
{
"value1": "",
"value2": ""
}
],
"Utility":[
{
"value1": "",
"value2": "",
"value3": ""
}
]
}
}
}
Example json output (output.json):
{
"ENV": {
"ENV1": {
"Middleware": [
{
"value1": "Mqwerty",
"value2": "Mqwerty"
},
{
"value1": "Mqwerty",
"value2": "Mqwerty"
},
{
"value1": "Mqwerty",
"value2": "Mqwerty"
}
],
"System": [
{
"value1": "Sqwerty",
"value2": "Sqwerty",
"value3": "Sqwerty"
}
],
"Application": [
{
"value1": "Aqwerty",
"value2": "Aqwerty"
},
{
"value1": "Aqwerty",
"value2": "Aqwerty"
}
],
"Utility": [
{
"value1": "Uqwerty",
"value2": "Uqwerty",
"value3": "Uqwerty"
}
]
},
"ENV2": {
"Middleware": [],
"System": [],
"Application": [],
"Utility": []
},
"ENV3": {
"Middleware": [
{
"value1": "Mqwerty",
"value2": "Mqwerty"
},
{
"value1": "Mqwerty",
"value2": "Mqwerty"
}
],
"System": [],
My input file will look something like this (input.txt):
ENV1,Middleware,Mqwerty,Mqwerty
ENV1,Middleware,Mqwerty,Mqwerty
ENV1,Middleware,Mqwerty,Mqwerty
ENV1,System,Sqwerty,Sqwerty,Sqwerty
ENV1,Application,Aqwerty,Aqwerty
ENV1,Application,Aqwerty,Aqwerty
ENV1,Utility,Uqwerty,Uqwerty,Uqwerty
ENV3,Middleware,Mqwerty,Mqwerty
ENV3,Middleware,Mqwerty,Mqwerty
I would like to use jq to create the aforementioned structure and then populate the JSON file with the values from the input file. A secondary question: after the JSON file is produced, can you edit or partially update certain components of the file? E.g. changing ENV.ENV1.Middleware[0].value1 from Mqwerty to Cqwerty without recreating the whole file. I'm super confused with jq; I've tried jq -R -n '(inputs | split(",")) | {"ENV":{(.[0]):""}}'<<<"$fileinput" as a small step towards creating the file, but even that hasn't helped much. Any help would be appreciated.
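On the secondary question: jq does not edit files in place, but rewriting one nested value is a one-liner; jq prints the whole modified document, which you then move over the original. A sketch using a minimal stand-in for output.json:

```shell
# Minimal stand-in document (hypothetical), then one nested value rewritten.
printf '{"ENV":{"ENV1":{"Middleware":[{"value1":"Mqwerty","value2":"Mqwerty"}]}}}' > output.json
jq -c '.ENV.ENV1.Middleware[0].value1 = "Cqwerty"' output.json
# -> {"ENV":{"ENV1":{"Middleware":[{"value1":"Cqwerty","value2":"Mqwerty"}]}}}
```

In a real script you would redirect to a temporary file and mv it over output.json afterwards.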
A heads-up: you are well outside the domain of Bash; you should do this in Python, which has JSON libraries and many sophisticated ways to parse and transform various data types (and between different data types) -- methods that you would be obliged to laboriously recreate in Bash code, which would then be forgotten and thrown away.
Python eats JSON for breakfast. Bash strangles on it.
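To make that concrete, here is a minimal sketch of handing the work to Python from inside a shell script. The field layout ENV,Category,value1,value2[,value3] is assumed from the sample input above, and the two inline lines stand in for reading input.txt; pre-creating all four environments keeps the empty ones in the output.

```shell
# Python builds the nested structure; empty environments still appear.
python3 - <<'EOF'
import json

# Stand-ins for the lines of input.txt (assumed layout: ENV,Category,values...).
lines = ["ENV1,Middleware,Mqwerty,Mqwerty", "ENV3,System,Sqwerty,Sqwerty,Sqwerty"]
cats = ["Middleware", "System", "Application", "Utility"]
# Pre-create ENV1..ENV4 so environments without records still show up empty.
doc = {"ENV": {"ENV%d" % i: {c: [] for c in cats} for i in range(1, 5)}}
for line in lines:
    env, cat, *values = line.split(",")
    doc["ENV"][env][cat].append(
        {"value%d" % i: v for i, v in enumerate(values, start=1)})
print(json.dumps(doc, indent=2))
EOF
```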