[bash] How to wait in bash for several subprocesses to finish and return exit code !=0 when any subprocess ends with code !=0?

How to wait in a bash script for several subprocesses spawned from that script to finish, and return exit code !=0 when any of the subprocesses ends with code !=0?

Simple script:

#!/bin/bash
for i in `seq 0 9`; do
  doCalculations $i &
done
wait

The above script will wait for all 10 spawned subprocesses, but it will always give exit status 0 (see help wait). How can I modify this script so it will discover the exit statuses of the spawned subprocesses and return exit code 1 when any of the subprocesses ends with code !=0?

Is there any better solution than collecting the PIDs of the subprocesses, waiting for them in order, and summing their exit statuses?

Tags: bash, process, wait

Answers:


I don't believe it's possible with Bash's builtin functionality.

You can get notification when a child exits:

#!/bin/sh
set -o monitor        # enable script job control
trap 'echo "child died"' CHLD

However there's no apparent way to get the child's exit status in the signal handler.

Getting that child status is usually the job of the wait family of functions in the lower level POSIX APIs. Unfortunately Bash's support for that is limited - you can wait for one specific child process (and get its exit status) or you can wait for all of them, and always get a 0 result.

What it appears impossible to do is the equivalent of waitpid(-1), which blocks until any child process returns.
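
For illustration, here is a minimal sketch of the two behaviors described above (doCalculations is just a hypothetical stand-in that fails with status 3):

doCalculations() { sleep 1; return 3; }

doCalculations &
pid=$!
wait "$pid"        # wait for one specific child: $? is its exit status
echo "status: $?"  # prints: status: 3

doCalculations &
wait               # wait for all children: $? is always 0
echo "status: $?"  # prints: status: 0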


Here's what I've come up with so far. I would like to see how to interrupt the sleep command if a child terminates, so that one would not have to tune WAITALL_DELAY to one's usage.

waitall() { # PID...
  ## Wait for children to exit and indicate whether all exited with 0 status.
  local errors=0
  while :; do
    debug "Processes remaining: $*"
    for pid in "$@"; do
      shift
      if kill -0 "$pid" 2>/dev/null; then
        debug "$pid is still alive."
        set -- "$@" "$pid"
      elif wait "$pid"; then
        debug "$pid exited with zero exit status."
      else
        debug "$pid exited with non-zero exit status."
        ((++errors))
      fi
    done
    (("$#" > 0)) || break
    # TODO: how to interrupt this sleep when a child terminates?
    sleep "${WAITALL_DELAY:-1}"
  done
  ((errors == 0))
}

debug() { echo "DEBUG: $*" >&2; }

pids=""
for t in 3 5 4; do 
  sleep "$t" &
  pids="$pids $!"
done
waitall $pids
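
As an aside: on bash 4.3 or newer, wait -n blocks until any one child exits, which removes the need for the polling sleep and WAITALL_DELAY entirely. A minimal sketch, assuming bash 4.3+:

errors=0
for t in 3 5 4; do
  sleep "$t" &
done
for _ in 1 2 3; do
  wait -n || ((++errors))   # returns as soon as any one child exits
done
((errors == 0))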

If you have GNU Parallel installed you can do:

# If doCalculations is a function
export -f doCalculations
seq 0 9 | parallel doCalculations {}

GNU Parallel will give you exit code:

  • 0 - All jobs ran without error.

  • 1-253 - Some of the jobs failed. The exit status gives the number of failed jobs.

  • 254 - More than 253 jobs failed.

  • 255 - Other error.

Watch the intro videos to learn more: http://pi.dk/1
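
If you additionally want to stop scheduling new jobs as soon as one fails, newer versions of GNU Parallel support the --halt option (a sketch; check your version's man page):

# Stop launching new jobs after the first failure,
# and exit with the failing job's status
seq 0 9 | parallel --halt now,fail=1 doCalculations {}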


set -e
fail () {
    touch .failure
}
expect () {
    wait
    if [ -f .failure ]; then
        rm -f .failure
        exit 1
    fi
}

sleep 2 || fail &
sleep 2 && false || fail &
sleep 2 || fail
expect

The set -e at the top makes your script stop on failure.

expect will exit the script with status 1 if any subjob failed.
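
Applied to the loop from the question (with fail and expect defined as above, and doCalculations as in the question), the pattern looks roughly like this sketch:

set -e
for i in `seq 0 9`; do
    doCalculations $i || fail &
done
expect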


There can be a case where the process is complete before waiting for the process. If we trigger wait for a process that is already finished, it will trigger an error like pid is not a child of this shell. To avoid such cases, the following function can be used to find whether the process is complete or not:

isProcessComplete() {
    PID=$1
    while [ -e "/proc/$PID" ]
    do
        echo "Process: $PID is still running"
        sleep 5
    done
    echo "Process $PID has finished"
}

I've just been modifying a script to background and parallelise a process.

I did some experimenting (on Solaris, with both bash and ksh) and discovered that wait outputs the exit status if it's not zero, or a list of jobs that returned a non-zero exit status when no PID argument is provided. E.g.

Bash:

$ sleep 20 && exit 1 &
$ sleep 10 && exit 2 &
$ wait
[1]-  Exit 1                  sleep 20 && exit 1
[2]+  Exit 2                  sleep 10 && exit 2

Ksh:

$ sleep 20 && exit 1 &
$ sleep 10 && exit 2 &
$ wait
[1]+  Done(1)                  sleep 20 && exit 1
[2]+  Done(2)                  sleep 10 && exit 2

This output is written to stderr, so a simple solution to the OP's example could be:

#!/bin/bash

trap "rm -f /tmp/x.$$" EXIT

for i in `seq 0 9`; do
  doCalculations $i &
done

wait 2> /tmp/x.$$
if [ `wc -l < /tmp/x.$$` -gt 0 ] ; then
  exit 1
fi

While this:

wait 2> >(wc -l)

will also return a count, but without the tmp file. It might also be used this way, for example:

wait 2> >(if [ `wc -l` -gt 0 ] ; then echo "ERROR"; fi)

But this isn't much more useful than the tmp file, IMO. I couldn't find a useful way to avoid the tmp file while also avoiding running wait in a subshell, which won't work at all.


A solution to wait for several subprocesses and to exit when any one of them exits with a non-zero status code is to use wait -n:

#!/bin/bash
wait_for_pids()
{
    for (( i = 1; i <= $#; i++ )); do
        wait -n "$@"
        status=$?
        echo "received status: $status"
        if [ $status -ne 0 ] && [ $status -ne 127 ]; then
            exit 1
        fi
    done
}

sleep_for_10()
{
    sleep 10
    exit 10
}

sleep_for_20()
{
    sleep 20
}

sleep_for_10 &
pid1=$!

sleep_for_20 &
pid2=$!

wait_for_pids $pid2 $pid1

Status code 127 means there is no such process, which here means the child has probably already exited.


Here's my version that works for multiple pids, logs warnings if execution takes too long, and stops the subprocesses if execution takes longer than a given value.

function WaitForTaskCompletion {
    local pids="${1}" # pids to wait for, separated by semi-colon
    local soft_max_time="${2}" # If execution takes longer than $soft_max_time seconds, will log a warning, unless $soft_max_time equals 0.
    local hard_max_time="${3}" # If execution takes longer than $hard_max_time seconds, will stop execution, unless $hard_max_time equals 0.
    local caller_name="${4}" # Who called this function
    local exit_on_error="${5:-false}" # Should the function exit program on subprocess errors       

    Logger "${FUNCNAME[0]} called by [$caller_name]."

    local soft_alert=0 # Does a soft alert need to be triggered, if yes, send an alert once 
    local log_ttime=0 # local time instance for comparison

    local seconds_begin=$SECONDS # Seconds since the beginning of the script
    local exec_time=0 # Seconds since the beginning of this function

    local retval=0 # return value of monitored pid process
    local errorcount=0 # Number of pids that finished with errors

    local pidCount # number of given pids

    IFS=';' read -a pidsArray <<< "$pids"
    pidCount=${#pidsArray[@]}

    while [ ${#pidsArray[@]} -gt 0 ]; do
        newPidsArray=()
        for pid in "${pidsArray[@]}"; do
            if kill -0 $pid > /dev/null 2>&1; then
                newPidsArray+=($pid)
            else
                wait $pid
                result=$?
                if [ $result -ne 0 ]; then
                    errorcount=$((errorcount+1))
                    Logger "${FUNCNAME[0]} called by [$caller_name] finished monitoring [$pid] with exitcode [$result]."
                fi
            fi
        done

        ## Log a standby message every hour
        exec_time=$(($SECONDS - $seconds_begin))
        if [ $((($exec_time + 1) % 3600)) -eq 0 ]; then
            if [ $log_ttime -ne $exec_time ]; then
                log_ttime=$exec_time
                Logger "Current tasks still running with pids [${pidsArray[@]}]."
            fi
        fi

        if [ $exec_time -gt $soft_max_time ]; then
            if [ $soft_alert -eq 0 ] && [ $soft_max_time -ne 0 ]; then
                Logger "Max soft execution time exceeded for task [$caller_name] with pids [${pidsArray[@]}]."
                soft_alert=1
                SendAlert

            fi
            if [ $exec_time -gt $hard_max_time ] && [ $hard_max_time -ne 0 ]; then
                Logger "Max hard execution time exceeded for task [$caller_name] with pids [${pidsArray[@]}]. Stopping task execution."
                kill -SIGTERM "${pidsArray[@]}"
                if [ $? == 0 ]; then
                    Logger "Task stopped successfully"
                else
                    errorcount=$((errorcount+1))
                fi
                fi
            fi
        fi

        pidsArray=("${newPidsArray[@]}")
        sleep 1
    done

    Logger "${FUNCNAME[0]} ended for [$caller_name] using [$pidCount] subprocesses with [$errorcount] errors."
    if [ $exit_on_error == true ] && [ $errorcount -gt 0 ]; then
        Logger "Stopping execution."
        exit 1337
    else
        return $errorcount
    fi
}

# Just a plain stupid logging function to be replaced by yours
function Logger {
    local value="${1}"

    echo $value
}

Example: wait for all three processes to finish, log a warning if execution takes longer than 5 seconds, and stop all processes if execution takes longer than 120 seconds. Don't exit the program on failures.

function something {

    sleep 10 &
    pids="$!"
    sleep 12 &
    pids="$pids;$!"
    sleep 9 &
    pids="$pids;$!"

    WaitForTaskCompletion $pids 5 120 ${FUNCNAME[0]} false
}
# Launch the function
something

If you have bash 4.2 or later available, the following might be useful to you. It uses associative arrays to store task names and their "code" as well as task names and their pids. I have also built in a simple rate-limiting method, which might come in handy if your tasks consume a lot of CPU or I/O time and you want to limit the number of concurrent tasks.

The script launches all tasks in the first loop and consumes the results in the second one.

This is a bit overkill for simple cases but it allows for pretty neat stuff. For example one can store error messages for each task in another associative array and print them after everything has settled down.

#! /bin/bash

main () {
    local -A pids=()
    local -A tasks=([task1]="echo 1"
                    [task2]="echo 2"
                    [task3]="echo 3"
                    [task4]="false"
                    [task5]="echo 5"
                    [task6]="false")
    local max_concurrent_tasks=2

    for key in "${!tasks[@]}"; do
        while [ $(jobs 2>&1 | grep -c Running) -ge "$max_concurrent_tasks" ]; do
            sleep 1 # gnu sleep allows floating point here...
        done
        ${tasks[$key]} &
        pids+=(["$key"]="$!")
    done

    errors=0
    for key in "${!tasks[@]}"; do
        pid=${pids[$key]}
        local cur_ret=0
        if [ -z "$pid" ]; then
            echo "No Job ID known for the $key process" # should never happen
            cur_ret=1
        else
            wait $pid
            cur_ret=$?
        fi
        if [ "$cur_ret" -ne 0 ]; then
            errors=$(($errors + 1))
            echo "$key (${tasks[$key]}) failed."
        fi
    done

    return $errors
}

main
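
The "store error messages for each task" idea mentioned above could look roughly like the following sketch; the temp directory and the per-task .err files are my additions, not part of the original answer:

declare -A tasks=([ok]="echo fine" [bad]="ls /nonexistent")
declare -A pids=() errmsg=()
tmpdir=$(mktemp -d)

for key in "${!tasks[@]}"; do
    ${tasks[$key]} 2>"$tmpdir/$key.err" &   # capture each task's stderr
    pids[$key]=$!
done

for key in "${!tasks[@]}"; do
    wait "${pids[$key]}" || errmsg[$key]=$(<"$tmpdir/$key.err")
done

for key in "${!errmsg[@]}"; do
    echo "$key failed: ${errmsg[$key]}"
done
rm -r "$tmpdir"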

There are already a lot of answers here, but I am surprised no one seems to have suggested using arrays... So here's what I did - this might be useful to some in the future.

n=10 # run 10 jobs
c=0
PIDS=()

while true

    my_function_or_command &
    PID=$!
    echo "Launched job as PID=$PID"
    PIDS+=($PID)

    (( c+=1 ))

    # required to prevent any exit due to error
    # caused by additional commands run which you
    # may add when modifying this example
    true

do

    if (( c < n ))
    then
        continue
    else
        break
    fi
done 


# collect launched jobs

for pid in "${PIDS[@]}"
do
    wait $pid || echo "failed job PID=$pid"
done
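
If you also need the script's overall exit code to reflect failures (as the question asks), the collection loop can track them; a small sketch:

failed=0
for pid in "${PIDS[@]}"
do
    wait $pid || { echo "failed job PID=$pid"; failed=1; }
done
exit $failed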

I'm thinking maybe run doCalculations; echo "$?" >>/tmp/acc in a subshell that is sent to the background, then the wait, then /tmp/acc would contain the exit statuses, one per line. I don't know about any consequences of the multiple processes appending to the accumulator file, though. (In practice it should be safe here: >> opens the file with O_APPEND, and echo writes each short line in a single write, so lines from different processes shouldn't interleave.)

Here's a trial of this suggestion:

File: doCalculations

#!/bin/sh

random -e 20
sleep $?
random -e 10

File: try

#!/bin/sh

rm /tmp/acc

for i in $( seq 0 20 ) 
do
        ( ./doCalculations "$i"; echo "$?" >>/tmp/acc ) &
done

wait

cat /tmp/acc | fmt
rm /tmp/acc

Output of running ./try

5 1 9 6 8 1 2 0 9 6 5 9 6 0 0 4 9 5 5 9 8
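
To turn the accumulated statuses into the overall exit code the question asks for, one extra check of /tmp/acc (before removing it) would do; a sketch:

grep -qv '^0$' /tmp/acc && exit 1   # any non-zero status => exit 1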

trap is your friend. You can trap on ERR in a lot of systems. You can trap on EXIT, or on DEBUG to execute a piece of code after every command.

This is in addition to all the standard signals.
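
A minimal sketch of the ERR and EXIT traps mentioned here (ERR is a bash feature, not plain sh):

#!/bin/bash
trap 'echo "command failed with status $?"' ERR
trap 'echo "script exiting with status $?"' EXIT

false       # triggers the ERR trap
echo done   # the EXIT trap fires when the script ends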


I needed this, but the target process wasn't a child of current shell, in which case wait $PID doesn't work. I did find the following alternative instead:

while [ -e /proc/$PID ]; do sleep 0.1 ; done

That relies on the presence of procfs, which may not be available (Mac doesn't provide it for example). So for portability, you could use this instead:

while ps -p $PID >/dev/null ; do sleep 0.1 ; done

(Note that since the process isn't a child of this shell, neither loop can recover its exit status; they only detect termination.)

This works, and should be just as good, if not better, than @HoverHell's answer!

#!/usr/bin/env bash

set -m # allow for job control
EXIT_CODE=0;  # exit code of overall script

function foo() {
     echo "CHLD exit code is $1"
     echo "CHLD pid is $2"
     echo $(jobs -l)

     for job in `jobs -p`; do
         echo "PID => ${job}"
         wait ${job} || { echo "At least one test failed with exit code => $?"; EXIT_CODE=1; }
     done
}

trap 'foo $? $$' CHLD

DIRN=$(dirname "$0");

commands=(
    "{ echo "foo" && exit 4; }"
    "{ echo "bar" && exit 3; }"
    "{ echo "baz" && exit 5; }"
)

clen=`expr "${#commands[@]}" - 1` # get length of commands - 1

for i in `seq 0 "$clen"`; do
    (echo "${commands[$i]}" | bash) &   # run the command via bash in subshell
    echo "$i ith command has been issued as a background job"
done

# wait for all to finish
wait;

echo "EXIT_CODE => $EXIT_CODE"
exit "$EXIT_CODE"

# end

and of course, I have immortalized this script, in an NPM project which allows you to run bash commands in parallel, useful for testing:

https://github.com/ORESoftware/generic-subshell


#!/bin/bash
set -m
for i in `seq 0 9`; do
  doCalculations $i &
done
while fg; do true; done
  • set -m allows you to use fg and bg in a script
  • fg, in addition to putting the last process in the foreground, has the same exit status as the process it foregrounds
  • while fg will stop looping when any fg exits with a non-zero exit status

Unfortunately this won't immediately handle the case where a background process exits with a non-zero exit status. (The loop won't terminate right away; it will wait for the previous processes to complete first.)


I almost fell into the trap of using jobs -p to collect PIDs, which does not work if the child has already exited, as shown in the script below. The solution I picked was simply calling wait -n N times, where N is the number of children I have, which I happen to know deterministically.

#!/usr/bin/env bash

sleeper() {
    echo "Sleeper $1"
    sleep $2
    echo "Exiting $1"
    return $3
}

start_sleepers() {
    sleeper 1 1 0 &
    sleeper 2 2 $1 &
    sleeper 3 5 0 &
    sleeper 4 6 0 &
    sleep 4
}

echo "Using jobs"
start_sleepers 1

pids=( $(jobs -p) )

echo "PIDS: ${pids[*]}"

for pid in "${pids[@]}"; do
    wait "$pid"
    echo "Exit code $?"
done

echo "Clearing other children"
wait -n; echo "Exit code $?"
wait -n; echo "Exit code $?"

echo "Waiting for N processes"
start_sleepers 2

for ignored in $(seq 1 4); do
    wait -n
    echo "Exit code $?"
done

Output:

Using jobs
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
PIDS: 56496 56497
Exiting 3
Exit code 0
Exiting 4
Exit code 0
Clearing other children
Exit code 0
Exit code 1
Waiting for N processes
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
Exit code 0
Exit code 2
Exiting 3
Exit code 0
Exiting 4
Exit code 0

This is something that I use:

#wait for jobs
for job in `jobs -p`; do wait ${job}; done
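
Note this discards the children's exit statuses; a variant that propagates failures could look like this sketch:

#wait for jobs and remember failures
rc=0
for job in `jobs -p`; do wait ${job} || rc=1; done
exit $rc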

Exactly for this purpose I wrote a bash function called :for.

Note: :for not only preserves and returns the exit code of the failing function, it also terminates all parallel running instances, which might not be needed in this case.

#!/usr/bin/env bash

# Wait for pids to terminate. If one pid exits with
# a non zero exit code, send the TERM signal to all
# processes and retain that exit code
#
# usage:
# :wait 123 32
function :wait(){
    local pids=("$@")
    [ ${#pids[@]} -eq 0 ] && return $?

    trap 'kill -INT "${pids[@]}" &>/dev/null || true; trap - INT' INT
    trap 'kill -TERM "${pids[@]}" &>/dev/null || true; trap - RETURN TERM' RETURN TERM

    for pid in "${pids[@]}"; do
        wait "${pid}" || return $?
    done

    trap - INT RETURN TERM
}

# Run a function in parallel for each argument.
# Stop all instances if one exits with a non zero
# exit code
#
# usage:
# :for func 1 2 3
#
# env:
# FOR_PARALLEL: Max functions running in parallel
function :for(){
    local f="${1}" && shift

    local i=0
    local pids=()
    for arg in "$@"; do
        ( ${f} "${arg}" ) &
        pids+=("$!")
        if [ ! -z ${FOR_PARALLEL+x} ]; then
            (( i=(i+1)%${FOR_PARALLEL} ))
            if (( i==0 )) ;then
                :wait "${pids[@]}" || return $?
                pids=()
            fi
        fi
    done && [ ${#pids[@]} -eq 0 ] || :wait "${pids[@]}" || return $?
}

usage

for.sh:

#!/usr/bin/env bash
set -e

# import :for from gist: https://gist.github.com/Enteee/c8c11d46a95568be4d331ba58a702b62#file-for
# if you don't like curl imports, source the actual file here.
source <(curl -Ls https://gist.githubusercontent.com/Enteee/c8c11d46a95568be4d331ba58a702b62/raw/)

msg="You should see this three times"

:(){
  i="${1}" && shift

  echo "${msg}"

  sleep 1
  if   [ "$i" == "1" ]; then sleep 1
  elif [ "$i" == "2" ]; then false
  elif [ "$i" == "3" ]; then
    sleep 3
    echo "You should never see this"
  fi
} && :for : 1 2 3 || exit $?

echo "You should never see this"
$ ./for.sh; echo $?
You should see this three times
You should see this three times
You should see this three times
1


Here is a simple example using wait.

Run some processes:

$ sleep 10 &
$ sleep 10 &
$ sleep 20 &
$ sleep 20 &

Then wait for them with wait command:

$ wait $(jobs -p)

Or just wait (without arguments) for all.

This will wait until all background jobs are completed.

If the -n option is supplied, wait waits for the next job to terminate and returns its exit status.

See: help wait and help jobs for syntax.

However, the downside is that this returns only the status of the last PID, so you need to check the status of each subprocess and store it in a variable.

Or make your calculation function create some file on failure (empty, or with a failure log), then check whether that file exists, e.g.:

$ sleep 20 && true || tee fail &
$ sleep 20 && false || tee fail &
$ wait $(jobs -p)
$ test -f fail && echo Calculation failed.

I think that the most straightforward way to run jobs in parallel and check for status is using temporary files. There are already a couple of similar answers (e.g. Nietzche-jou and mug896).

#!/bin/bash
rm -f fail
for i in `seq 0 9`; do
  doCalculations $i || touch fail &
done
wait 
! [ -f fail ]

The above code is not thread safe. If you are concerned that the code above will be running at the same time as itself, it's better to use a more unique file name, like fail.$$. The last line is to fulfill the requirement: "return exit code 1 when any of subprocesses ends with code !=0?" I threw an extra requirement in there to clean up. It may have been clearer to write it like this:

#!/bin/bash
trap 'rm -f fail.$$' EXIT
for i in `seq 0 9`; do
  doCalculations $i || touch fail.$$ &
done
wait 
! [ -f fail.$$ ] 

Here is a similar snippet for gathering results from multiple jobs: I create a temporary directory, store the output of each subtask in a separate file, and then dump them all for review. This doesn't really match the question - I'm throwing it in as a bonus:

#!/bin/bash
trap 'rm -fr $WORK' EXIT

WORK=/tmp/$$.work
mkdir -p $WORK
cd $WORK

for i in `seq 0 9`; do
  doCalculations $i >$i.result &
done
wait 
grep $ *  # display the results with filenames and contents

From http://jeremy.zawodny.com/blog/archives/010717.html:

#!/bin/bash

FAIL=0

echo "starting"

./sleeper 2 0 &
./sleeper 2 1 &
./sleeper 3 0 &
./sleeper 2 0 &

for job in `jobs -p`
do
    echo $job
    wait $job || let "FAIL+=1"
done

echo $FAIL

if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi

Trapping the CHLD signal may not work because you can lose some signals if they arrive simultaneously.

#!/bin/bash

trap 'rm -f $tmpfile' EXIT

tmpfile=$(mktemp)

doCalculations() {
    echo start job $i...
    sleep $((RANDOM % 5)) 
    echo ...end job $i
    exit $((RANDOM % 10))
}

number_of_jobs=10

for i in $( seq 1 $number_of_jobs )
do
    ( trap "echo job$i : exit value : \$? >> $tmpfile" EXIT; doCalculations ) &
done

wait 

i=0
while read res; do
    echo "$res"
    let i++
done < "$tmpfile"

echo $i jobs done !!!
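
If the script's exit status should also reflect failures, the collected lines can be checked at the end; a sketch that relies on the "exit value : N" format written by the trap above:

grep -qv 'exit value : 0$' "$tmpfile" && exit 1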

To parallelize this...

for i in $(whatever_list) ; do
   do_something $i
done

Translate it to this...

for i in $(whatever_list) ; do echo $i ; done | ## execute in parallel...
   (
   export -f do_something ## export functions (if needed)
   export PATH ## export any variables that are required
   xargs -I{} --max-procs 0 bash -c ' ## process in batches...
      {
      echo "processing {}" ## optional
      do_something {}
      }' 
   )

  • If an error occurs in one process, it won't interrupt the other processes, but it will result in a non-zero exit code from the sequence as a whole.
  • Exporting functions and variables may or may not be necessary, in any particular case.
  • You can set --max-procs based on how much parallelism you want (0 means "all at once").
  • GNU Parallel offers some additional features when used in place of xargs -- but it isn't always installed by default.
  • The for loop isn't strictly necessary in this example since echo $i is basically just regenerating the output of $(whatever_list). I just think the use of the for keyword makes it a little easier to see what is going on.
  • Bash string handling can be confusing -- I have found that using single quotes works best for wrapping non-trivial scripts.
  • You can easily interrupt the entire operation (using ^C or similar), unlike the more direct approach to Bash parallelism.

Here's a simplified working example...

for i in {0..5} ; do echo $i ; done |xargs -I{} --max-procs 2 bash -c '
   {
   echo sleep {}
   sleep 2s
   }'
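
The "non-zero exit code from the sequence as a whole" behavior can be seen directly; xargs conventionally exits with status 123 if any invocation fails:

printf '%s\n' 0 1 0 |xargs -I{} --max-procs 2 bash -c 'exit {}'
echo "xargs exit status: $?"   # 123, because one command exited 1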

I've had a go at this and combined all the best parts from the other examples here. This script will execute the checkpids function when any background process exits, and output the exit status without resorting to polling.

#!/bin/bash

set -o monitor

sleep 2 &
sleep 4 && exit 1 &
sleep 6 &

pids=`jobs -p`

checkpids() {
    for pid in $pids; do
        if kill -0 $pid 2>/dev/null; then
            echo $pid is still alive.
        elif wait $pid; then
            echo $pid exited with zero exit status.
        else
            echo $pid exited with non-zero exit status.
        fi
    done
    echo
}

trap checkpids CHLD

wait

I used this recently (thanks to Alnitak):

#!/bin/bash
# activate child monitoring
set -o monitor

# locking subprocess
(while true; do sleep 0.001; done) &
pid=$!

# count, and kill when all done
c=0
function kill_on_count() {
    # you could kill on whatever criterion you wish for
    # I just counted to simulate bash's wait with no args
    [ $c -eq 9 ] && kill $pid
    c=$((c+1))
    echo -n '.' # async feedback (but you don't know which one)
}
trap "kill_on_count" CHLD

function save_status() {
    local i=$1;
    local rc=$2;
    # do whatever, and here you know which one stopped
    # but remember, you're called from a subshell
    # so vars have their values at fork time
}

# care must be taken not to spawn more than one child per loop
# e.g don't use `seq 0 9` here!
for i in {0..9}; do
    (doCalculations $i; save_status $i $?) &
done

# wait for locking subprocess to be killed
wait $pid
echo

From there one can easily extrapolate, and have a trigger (touch a file, send a signal) and change the counting criteria (count files touched, or whatever) to respond to that trigger. Or if you just want 'any' non zero rc, just kill the lock from save_status.


I see lots of good examples listed here, and wanted to throw mine in as well.

#! /bin/bash

items="1 2 3 4 5 6"
pids=""

for item in $items; do
    sleep $item &
    pids+="$! "
done

for pid in $pids; do
    wait $pid
    status=$?
    if [ $status -eq 0 ]; then
        echo "SUCCESS - Job $pid exited with a status of $status"
    else
        echo "FAILED - Job $pid exited with a status of $status"
    fi
done

I use something very similar to start/stop servers/services in parallel and check each exit status. Works great for me. Hope this helps someone out!


Wait for all jobs and return the exit code of the last failing job. Unlike solutions above, this does not require pid saving. Just bg away, and wait.

function wait_ex {
    # this waits for all jobs and returns the exit code of the last failing job
    ecode=0
    while true; do
        wait -n
        err="$?"
        [ "$err" == "127" ] && break
        [ "$err" != "0" ] && ecode="$err"
    done
    return $ecode
}
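
A hypothetical usage sketch (doCalculations is a stand-in that exits with the status given as its argument):

doCalculations() { sleep 1; return $1; }

doCalculations 0 &
doCalculations 3 &
wait_ex; echo "last failing status: $?"   # prints 3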

How about simply:

#!/bin/bash

pids=""

for i in `seq 0 9`; do
   doCalculations $i &
   pids="$pids $!"
done

wait $pids

...code continued here ...

Update:

As pointed out by multiple commenters, the above waits for all processes to complete before continuing, but does not exit and fail if one of them fails. It can be made to do so with the following modification suggested by @Bryan, @SamBrightman, and others:

#!/bin/bash

pids=""
RESULT=0


for i in `seq 0 9`; do
   doCalculations $i &
   pids="$pids $!"
done

for pid in $pids; do
    wait $pid || let "RESULT=1"
done

if [ "$RESULT" == "1" ];
    then
       exit 1
fi

...code continued here ...

The following code will wait for completion of all calculations and return exit status 1 if any of doCalculations fails.

#!/bin/bash
for i in $(seq 0 9); do
   # Each subshell backgrounds one doCalculations (its own output goes
   # to stderr), waits for it, and prints its exit status on stdout.
   (doCalculations $i >&2 & wait %1; echo $?) &
done | grep -qv 0 && exit 1   # any status line other than 0 => exit 1

Just store the results out of the shell, e.g. in a file.

#!/bin/bash
tmp=/tmp/results

: > $tmp  #clean the file

for i in `seq 0 9`; do
  (doCalculations $i; echo $i:$? >> $tmp) &
done      #iterate

wait      #wait until all ready

sort $tmp | grep -v ':0'  #... handle as required
