You'll want to use the wait command to do this for you. You can either capture all of the children process IDs and wait for them specifically, or if they are the only background processes your script is creating, you can just call wait without an argument. For example:
#!/bin/bash
# run two processes in the background and wait for them to finish
nohup sleep 3 &
nohup sleep 10 &
echo "This will wait until both are done"
date
wait
date
echo "Done"
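The per-PID variant mentioned above looks like this (a minimal sketch; short sleep commands stand in for your real workers):

```shell
#!/bin/bash
# Capture each worker's PID with $! and pass them to wait explicitly.
sleep 1 &
pid1=$!
sleep 2 &
pid2=$!

# wait with explicit PIDs blocks until those specific jobs exit;
# its return status is the exit status of the last PID listed.
wait "$pid1" "$pid2"
status=$?
echo "both workers finished with status $status"
```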
(The answer above is from ParanoidGeek on Stack Exchange.)
A few points:
If your goal with nohup is to prevent a remote shell exit from killing your worker processes, you should use nohup on the script itself, not on the individual worker processes it creates. As explained here, nohup only prevents processes from receiving SIGHUP and from interacting with the terminal, but it does not break the relationship between the shell and its child processes.
Because of the point above, with or without nohup, a simple wait between the two for loops will cause the second for to be executed only after all child processes started by the first for have exited.
With a simple wait, all currently active child processes are waited for, and the return status is zero.
If you need to run the second for only if there were no errors in the first, then you'll need to save each worker PID with $!, and pass them all to wait:
pids=
for ...
do
    worker ... &
    pids+=" $!"
done
wait $pids || { echo "there were errors" >&2; exit 1; }
Shell scripts, no matter how they are executed, execute one command after the other. So your code will execute results.sh after the last command of st_new.sh has finished.
Now there is a special command which messes this up: &
cmd &
means: "Start a new background process and execute cmd in it. After starting the background process, immediately continue with the next command in the script."
That means & doesn't wait for cmd to do its work. My guess is that st_new.sh contains such a command. If that is the case, then you need to modify the script:
cmd &
BACK_PID=$!
This puts the process ID (PID) of the new background process in the variable BACK_PID. You can then wait for it to end:
while kill -0 "$BACK_PID" 2>/dev/null ; do
echo "Process is still active..."
sleep 1
# You can add a timeout here if you want
done
or, if you don't want any special handling/output simply
wait $BACK_PID
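If you also need to know whether the background job succeeded, wait's return status is that job's exit status. A minimal sketch, with false standing in for a failing background command:

```shell
#!/bin/bash
# 'false' is a placeholder for a real background command that fails.
false &
BACK_PID=$!

# wait returns the exit status of the process it waited for,
# so you can branch on success or failure afterwards.
wait "$BACK_PID"
rc=$?
echo "background job exited with status $rc"
```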
Note that some programs automatically start a background process when you run them, even if you omit the &. Check the documentation, they often have an option to write their PID to a file or you can run them in the foreground with an option and then use the shell's & command instead to get the PID.
Make sure that st_new.sh does something recognizable at the end (for example touch /tmp/st_new.tmp, provided you remove the file first and only ever start one instance of st_new.sh).
Then build a polling loop: first sleep for the minimum time you expect st_new.sh to need,
then wait a short time in each loop iteration.
This will result in something like
max_retry=20
retry=0
sleep 10 # Minimum time for st_new.sh to finish
while [ ${retry} -lt ${max_retry} ]; do
if [ -f /tmp/st_new.tmp ]; then
break # call results.sh outside loop
else
(( retry = retry + 1 ))
sleep 1
fi
done
if [ -f /tmp/st_new.tmp ]; then
source ../../results.sh
rm -f /tmp/st_new.tmp
else
echo "Something wrong with st_new.sh"
fi
If I create a BASH script using
$ cat > blah
#!/bin/bash
read
ls
Make it executable using chmod
chmod +x blah
Then run it
$ bash blah
-- script has stopped as i type this, it will continue on enter
bionic focal-desktop-amd64.iso kde_neon zsync_disco.sh
blah focal-desktop-amd64.iso.zs-old qa_query.py zsync_eoan.sh
eoan-desktop-amd64.iso focal-desktop-amd64.iso.zsync qatracker.py zsync_focal.sh
eoan-desktop-amd64.iso.zsync focal-desktop-amd64.iso.zsync.old siduction-patience-lxqt-amd64-latest.iso
The script runs and pauses waiting for the read to complete. I type the text "-- script has stopped as i type this, it will continue on enter" and press Enter.
Then and only then (when read has completed) does the ls command execute.
I could add a "&" to the end of the read line so it ran in the background, and thus ls would run without waiting, but what you want is actually the default.
You can run ps in a loop until your program is no longer running. When it finishes, the while loop exits.
#!/bin/bash
appName="appname"
appCount=$(ps ax | grep "$appName" | grep -v grep | wc -l)
while [ "$appCount" -gt "0" ]
do
sleep 1
appCount=$(ps ax | grep "$appName" | grep -v grep | wc -l)
done
zenity --info --title="End" --text="Now your game is dead."
Put name of your application instead of appname. After done put lines you want to have to display message. I used zenity for notification dialog. You can use something else like echo if you want to display message in terminal window.
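As an alternative sketch, on systems that have pgrep the ps | grep | grep -v grep pipeline can be replaced; "appname" is still a placeholder for your application's process name:

```shell
#!/bin/bash
appName="appname"   # placeholder, as above
# pgrep -x matches the process name exactly, so unlike 'ps | grep'
# it never matches its own pipeline and needs no 'grep -v grep'.
while pgrep -x "$appName" > /dev/null; do
    sleep 1
done
msg="Now your game is dead."
echo "$msg"
```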
You're already doing it.
Waiting for a command to finish is the shell's normal behavior. (Try typing sleep 5 at a shell prompt.) The only time that doesn't happen is when you append & to the command, or when the command itself does something to effectively background itself (the latter is a bit of an oversimplification).
You can delete the wait %% command from your script; it probably just produces an error message like wait: %%: no such job. (Question: does it actually print such a message?)
Do you have any evidence that the tar command isn't completing before the /home/ftp.sh command starts?
Incidentally, it's a bit odd to have things other than users' home directories directly under /home.
(I know most of this was already covered in comments, but I thought there should be an actual answer.)
You can use:
wait $!
Delete the wait %% from your script.
Simply use the && connector (i.e. a compound-command):
./script1.sh && ./script2.sh
But please consider that in this case the second script will be executed only if the first returns a 0 exit code (i.e. no error). If you want to execute the sequence whatever the result of the first script, simply go:
./script1.sh ; ./script2.sh
You can test the behaviour of the two connectors:
$ true && echo aa
aa
$ false && echo aa
$ false ; echo aa
aa
Let me give you an example :
ScriptA = print msg "I am A" and after user input it sleeps for 5 sec
ScriptB = print msg "I am B" and wait for user input
scriptC = different calls to ScriptA and ScriptB
ScriptA :
#!/bin/bash
read -p "I am A"
sleep 5
exit
ScriptB :
#!/bin/bash
read -p "I am B"
exit
ScriptC :
#!/bin/bash
./ScriptA
./ScriptB
exit
And now lets run ScriptC.sh :
[prompt] $ ./ScriptC.sh
I am A (enter)
//wait for 5
I am B (enter)
So ScriptC waits for each command to finish before executing the next one. If you don't want to wait for the first script, you can add an ampersand after the command ( ./ScriptA & ). In that case ScriptC will execute ScriptB immediately after you hit "Enter", and it will not wait 5 seconds.
I hope this helps.
Hi guys,
I’ve just started learning to how to write bash scripts and one of my scripts basically depends on downloading a file using wget, then executing the file. The problem is as it stands the script just runs through each command and nothing gets downloaded with wget.
What i’d like it to do is download the file, show its progress and then resume the script once wget has completed. Is this at all possible?
Pastebin of my script: https://pastebin.com/11M1TWfr
EDIT: Before anyone else points it out - the serial number there is not a purchased one. I don't care if you use it, I found it off a website myself.
EDIT 2: Got it working now, not entirely sure how but I basically added this script into a different script I've written and it actually seems to give output now...I must have done something wrong in the original script! Thank you so much to everyone who took the time to answer this, I really appreciate the help. Let it never be said that the Linux community is unhelpful :)!
Yes, if you do nothing else then commands in a bash script are serialized. You can tell bash to run a bunch of commands in parallel, and then wait for them all to finish, but doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tells bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way, you'll be unable to get the exit status of the child commands (and set -e won't work), so you won't be able to tell whether they succeeded or failed in the usual way.
The bash manual has more information (search for wait, about two-thirds of the way down).
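To recover each child's status anyway, you can save the PIDs and wait for them one at a time. A sketch, with true and false standing in for command1 through command3:

```shell
#!/bin/bash
# Placeholder workers: two succeed, one fails.
pids=()
true &
pids+=($!)
false &
pids+=($!)
true &
pids+=($!)

failed=0
for pid in "${pids[@]}"; do
    # 'wait PID' returns that specific child's exit status.
    wait "$pid" || failed=$((failed + 1))
done
echo "$failed of ${#pids[@]} background commands failed"
```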
Add '&' at the end of a command to run it in parallel. However, this is strange in your case because the second command depends on the final result of the first one. Either use sequential commands, or copy to b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
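If anything later in the script depends on both copies being finished, follow the parallel copies with a bare wait. A sketch using a temporary file in place of /tmp/a:

```shell
#!/bin/bash
# Create a sample source file (stand-in for /tmp/a).
src=$(mktemp)
echo "data" > "$src"

cp "$src" "$src.b" &
cp "$src" "$src.c" &
wait   # block here until both background copies have completed

echo "copies complete"
```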