Shell scripts, no matter how they are executed, execute one command after the other. So your code will execute results.sh after the last command of st_new.sh has finished.
Now there is a special command which messes this up: &
cmd &
means: "Start a new background process and execute cmd in it. After starting the background process, immediately continue with the next command in the script."
That means & doesn't wait for cmd to do its work. My guess is that st_new.sh contains such a command. If that is the case, then you need to modify the script:
cmd &
BACK_PID=$!
This puts the process ID (PID) of the new background process in the variable BACK_PID. You can then wait for it to end:
while kill -0 $BACK_PID 2>/dev/null ; do
echo "Process is still active..."
sleep 1
# You can add a timeout here if you want
done
or, if you don't want any special handling/output simply
wait $BACK_PID
Note that some programs automatically start a background process when you run them, even if you omit the &. Check the documentation, they often have an option to write their PID to a file or you can run them in the foreground with an option and then use the shell's & command instead to get the PID.
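If the program writes its PID to a file, you can combine that with the same kill -0 loop. A minimal sketch; the PID file path and the backgrounded sleep are stand-ins, not any particular program's behavior:

```shell
# Stand-in for a daemon's documented PID-file path
PID_FILE=$(mktemp)

# Stand-in "daemon": a background subshell that exits after 2 seconds
( sleep 2 ) &
echo $! > "$PID_FILE"

BACK_PID=$(cat "$PID_FILE")
# kill -0 only checks whether the process still exists;
# 2>/dev/null hides the error once it has gone away.
while kill -0 "$BACK_PID" 2>/dev/null; do
    echo "Process is still active..."
    sleep 1
done
echo "Process $BACK_PID has finished"
```

Note that this only works if the PID file is written by the time you read it; real daemons usually create it before detaching.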
Make sure that st_new.sh does something recognizable at the end (like touch /tmp/st_new.tmp; remove the file first, and always run only one instance of st_new.sh).
Then make a polling loop. First sleep the normal time you think you should wait,
and then wait a short time in every loop iteration.
This will result in something like
max_retry=20
retry=0
sleep 10 # Minimum time for st_new.sh to finish
while [ ${retry} -lt ${max_retry} ]; do
if [ -f /tmp/st_new.tmp ]; then
break # call results.sh outside loop
else
(( retry = retry + 1 ))
sleep 1
fi
done
if [ -f /tmp/st_new.tmp ]; then
source ../../results.sh
rm -f /tmp/st_new.tmp
else
echo "Something wrong with st_new.sh"
fi
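A self-contained demonstration of this flag-file pattern, using a backgrounded sleep as a stand-in for st_new.sh:

```shell
flag=/tmp/st_new.tmp
rm -f "$flag"

# Stand-in for st_new.sh: works for 2 seconds, then touches the flag file
( sleep 2; touch "$flag" ) &

max_retry=20
retry=0
while [ "$retry" -lt "$max_retry" ]; do
    if [ -f "$flag" ]; then
        break               # flag found: run results.sh outside the loop
    fi
    retry=$((retry + 1))
    sleep 1
done

if [ -f "$flag" ]; then
    echo "st_new.sh finished, running results"
    rm -f "$flag"
else
    echo "Something wrong with st_new.sh" >&2
fi
```

The loop gives up after max_retry seconds instead of hanging forever, which is the main advantage over a bare wait when the background job may never signal completion.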
You'll want to use the wait command to do this for you. You can either capture all of the children process IDs and wait for them specifically, or if they are the only background processes your script is creating, you can just call wait without an argument. For example:
#!/bin/bash
# run two processes in the background and wait for them to finish
nohup sleep 3 &
nohup sleep 10 &
echo "This will wait until both are done"
date
wait
date
echo "Done"
A few points:
- If your goal with nohup is to prevent a remote shell exit from killing your worker processes, you should use nohup on the script itself, not on the individual worker processes it creates.
- As explained here, nohup only prevents processes from receiving SIGHUP and from interacting with the terminal, but it does not break the relationship between the shell and its child processes.
- Because of the point above, with or without nohup, a simple wait between the two for loops will cause the second for to be executed only after all child processes started by the first for have exited.
- With a simple wait, all currently active child processes are waited for, and the return status is zero.
- If you need to run the second for only if there were no errors in the first, then you'll need to save each worker PID with $!, and pass them all to wait:

pids=
for ... ; do
    worker ... &
    pids+=" $!"
done
wait $pids || { echo "there were errors" >&2; exit 1; }
If I create a BASH script using
$ cat > blah
#!/bin/bash
read
ls
Make it executable using chmod
chmod +x blah
Then run it
$ bash blah
-- script has stopped as i type this, it will continue on enter
bionic focal-desktop-amd64.iso kde_neon zsync_disco.sh
blah focal-desktop-amd64.iso.zs-old qa_query.py zsync_eoan.sh
eoan-desktop-amd64.iso focal-desktop-amd64.iso.zsync qatracker.py zsync_focal.sh
eoan-desktop-amd64.iso.zsync focal-desktop-amd64.iso.zsync.old siduction-patience-lxqt-amd64-latest.iso
The script runs and pauses waiting for the read to complete. I type the text "-- script has stopped as i type this, it will continue on enter" and press Enter.
Then and only then (when read has completed) does the ls command execute.
I could add a "&" to the end of the read line so it ran in the background, and thus ls would continue without waiting; but what you want is actually the default.
You can run ps in a loop while your program runs. When the program finishes, the while loop exits.
#!/bin/bash
appName="appname"
appCount=$(ps ax | grep $appName | grep -v grep | wc -l)
while [ "$appCount" -gt "0" ]
do
sleep 1
appCount=$(ps ax | grep $appName | grep -v grep | wc -l)
done
zenity --info --title="End" --text="Now your game is dead."
Put the name of your application in place of appname. After done, put the lines you want to run to display a message. I used zenity for the notification dialog. You can use something else, like echo, if you want to display the message in a terminal window.
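A tighter variant of the same polling idea, assuming pgrep is available (it is on most Linux systems): pgrep -x matches the exact process name, which avoids the grep -v grep filtering:

```shell
appName="appname"   # substitute your application's process name

# Loop while any process with exactly this name exists
while pgrep -x "$appName" > /dev/null; do
    sleep 1
done
echo "$appName has exited"
```

Note that name-based polling matches any process with that name, not just the one you started; if you launched the process yourself, the PID-based approaches in the other answers are more precise.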
You're already doing it.
Waiting for a command to finish is the shell's normal behavior. (Try typing sleep 5 at a shell prompt.) The only time that doesn't happen is when you append & to the command, or when the command itself does something to effectively background itself (the latter is a bit of an oversimplification).
You can delete the wait %% command from your script; it probably just produces an error message like wait: %%: no such job. (Question: does it actually print such a message?)
Do you have any evidence that the tar command isn't completing before the /home/ftp.sh command starts?
Incidentally, it's a bit odd to have things other than users' home directories directly under /home.
(I know most of this was already covered in comments, but I thought there should be an actual answer.)
You can use:
wait $!
Delete the wait %% from your script.
While wait -n (per the comment by @icarus) works in this particular situation, it should be noted that $! contains the PID of the last process started in the background. So you could test on that as well:
#!/bin/bash
find $HOME/Downloads -name "dummy" &
p1=$!
find $HOME/Downloads -name "dummy" &
p2=$!
find $HOME/Downloads -name "dummy" &
p3=$!
while true
do
if ps $p1 > /dev/null ; then
echo -n "p1 runs "
else
echo -n "p1 ended"
fi
if ps $p2 > /dev/null ; then
echo -n "p2 runs "
else
echo -n "p2 ended"
fi
if ps $p3 > /dev/null ; then
echo -n "p3 runs "
else
echo -n "p3 ended"
fi
echo ''
sleep 1
done
But parallel is a better option.
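GNU parallel is one option; a similar effect is available with the xargs -P flag (widely supported, though -P is an extension to POSIX). A minimal sketch, with echo standing in for real workers:

```shell
# Run up to 3 jobs concurrently; xargs itself waits for all of them
# before returning, and exits non-zero if any job failed.
out=$(printf '%s\n' 1 2 3 | xargs -P 3 -I{} sh -c 'echo "job {} done"')
rc=$?
echo "$out"
echo "xargs exit status: $rc"
```

This replaces both the manual backgrounding and the wait/polling loop: the pipeline simply does not return until every job has finished.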
The problem with the script is that there is nothing in it which is going to call one of the wait system calls. Generally until something calls wait the kernel is going to keep an entry for the process as this is where the return code of the child process is stored. If a parent process ends before a child process the child process is reparented, usually to PID 1. Once the system is booted, PID 1 often is programmed to enter a loop just calling wait to collect these processes exit value.
Rewriting the test script to call the shell builtin function wait we get
pids=()
find $HOME/Downloads -name "dummy" &
pids+=( $! )
find $HOME/Downloads -name "dummy" &
pids+=( $! )
find $HOME/Downloads -name "dummy" &
pids+=( $! )
echo "Initial active processes: ${#pids[@]}"
for ((i=${#pids[@]}; i>0; i--)) ; do
wait -n # Wait for one process to exit
echo "A process exited with RC=$?"
# Note that -n is a bash extension, not in POSIX
# if we have bash 5.1 then we can use "wait -np EX" to find which
# job has finished, the value is put in $EX. Then we can remove the
# value from the pids array.
echo "Still outstanding $(jobs -p)"
sleep 1
done
wait also (optionally) takes the PID of the process to wait for, and with $! you get the PID of the last command launched in the background.
Modify the loop to store the PID of each spawned sub-process into an array, and then loop again waiting on each PID.
# run processes and store pids in an array
# (procs is assumed to be an array holding the worker commands)
pids=()
for i in "${!procs[@]}"; do
    "${procs[i]}" &
    pids[i]=$!
done
# wait for all pids
for pid in "${pids[@]}"; do
    wait "$pid"
done
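Since wait PID returns that particular process's exit status, the second loop can also collect each status instead of discarding it. A sketch with hypothetical workers that exit with a known status:

```shell
pids=""
for i in 1 2 3; do
    ( exit $i ) &           # stand-in worker that exits with status i
    pids="$pids $!"
done

# Gather each worker's exit status in order
statuses=""
for pid in $pids; do
    wait "$pid"
    statuses="$statuses $?"
done
echo "statuses:$statuses"
```

The shell retains each child's exit status until it is waited for, so this works even when the workers have already finished by the time the second loop runs.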
http://jeremy.zawodny.com/blog/archives/010717.html :
#!/bin/bash
FAIL=0
echo "starting"
./sleeper 2 0 &
./sleeper 2 1 &
./sleeper 3 0 &
./sleeper 2 0 &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi