How do I wait in a bash script for several subprocesses spawned from that script to finish, and then return exit code != 0 if any of the subprocesses ends with code != 0?

Simple script:

#!/bin/bash
for i in `seq 0 9`; do
  doCalculations $i &
done
wait

The script above will wait for all 10 spawned subprocesses, but it will always give exit status 0 (see help wait). How can I modify this script so that it discovers the exit statuses of the spawned subprocesses and returns exit code 1 when any subprocess ends with code != 0?

Is there any better solution than collecting the PIDs of the subprocesses, waiting for them in order, and summing up the exit statuses?
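For reference, the straightforward approach being asked about (collect the PIDs, then wait on each one) looks roughly like this; it is only a sketch, with doCalculations standing in for the real work as in the question:

#!/bin/bash
pids=()
for i in `seq 0 9`; do
  doCalculations $i &
  pids+=($!)              # $! is the PID of the last background job
done

fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || fail=1   # wait PID returns that child's exit status
done
exit $fail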


Current answer

To parallelize this...

for i in $(whatever_list) ; do
   do_something $i
done

...translate it into this...

for i in $(whatever_list) ; do echo $i ; done | ## execute in parallel...
   (
   export -f do_something ## export functions (if needed)
   export PATH ## export any variables that are required
   xargs -I{} --max-procs 0 bash -c ' ## process in batches...
      {
      echo "processing {}" ## optional
      do_something {}
      }' 
   )

If an error occurs in one process, it won't interrupt the other processes, but it will result in a non-zero exit code from the sequence as a whole.

Exporting functions and variables may or may not be necessary, in any particular case. You can set --max-procs based on how much parallelism you want (0 means "all at once").

GNU Parallel offers some additional features when used in place of xargs, but it isn't always installed by default.

The for loop isn't strictly necessary in this example, since echo $i is basically just regenerating the output of $(whatever_list); I just think the use of the for keyword makes it a little easier to see what is going on.

Bash string handling can be confusing; I have found that using single quotes works best for wrapping non-trivial scripts.

You can easily interrupt the entire operation (using ^C or similar), unlike the more direct approaches to Bash parallelism.

Here is a simplified working example...

for i in {0..5} ; do echo $i ; done |xargs -I{} --max-procs 2 bash -c '
   {
   echo sleep {}
   sleep 2s
   }'
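For comparison, a rough GNU Parallel equivalent of the xargs pipeline above (a sketch, assuming parallel is installed and do_something is a bash function as before; functions must be exported for parallel to see them, just as with xargs bash -c):

export -f do_something
for i in {0..5} ; do echo $i ; done | parallel do_something

parallel exits with a non-zero status when any of its jobs failed, so a failure on one item is still reported by the pipeline as a whole.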

Other answers

I almost fell into the trap of using jobs -p to collect PIDs, which does not work if a child has already exited, as the script below shows. The solution I picked was simply to call wait -n N times, where N is the number of children I have, which I happen to know for certain.

#!/usr/bin/env bash

sleeper() {
    echo "Sleeper $1"
    sleep $2
    echo "Exiting $1"
    return $3
}

start_sleepers() {
    sleeper 1 1 0 &
    sleeper 2 2 $1 &
    sleeper 3 5 0 &
    sleeper 4 6 0 &
    sleep 4
}

echo "Using jobs"
start_sleepers 1

pids=( $(jobs -p) )

echo "PIDS: ${pids[*]}"

for pid in "${pids[@]}"; do
    wait "$pid"
    echo "Exit code $?"
done

echo "Clearing other children"
wait -n; echo "Exit code $?"
wait -n; echo "Exit code $?"

echo "Waiting for N processes"
start_sleepers 2

for ignored in $(seq 1 4); do
    wait -n
    echo "Exit code $?"
done

Output:

Using jobs
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
PIDS: 56496 56497
Exiting 3
Exit code 0
Exiting 4
Exit code 0
Clearing other children
Exit code 0
Exit code 1
Waiting for N processes
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
Exit code 0
Exit code 2
Exiting 3
Exit code 0
Exiting 4
Exit code 0
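Applied to the original question, the wait -n technique reduces to this sketch (wait -n requires bash 4.3 or later; doCalculations is the placeholder from the question):

#!/bin/bash
for i in `seq 0 9`; do
  doCalculations $i &
done

fail=0
for i in `seq 0 9`; do
  wait -n || fail=1   # wait -n reaps the next child and returns its status
done
exit $fail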

Just store the results outside the shell, e.g. in a file.

#!/bin/bash
tmp=/tmp/results

: > $tmp  #clean the file

for i in `seq 0 9`; do
  (doCalculations $i; echo "$i:$?" >> $tmp) &
done      #iterate

wait      #wait until all ready

sort $tmp | grep -v ':0'  #... handle as required
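One possible way to "handle as required", turning the collected results into the script's own exit code (a sketch over the same $tmp file; each line has the form index:status):

if grep -qv ':0$' "$tmp"; then
    exit 1   # at least one child wrote a non-zero status
fi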

http://jeremy.zawodny.com/blog/archives/010717.html:

#!/bin/bash

FAIL=0

echo "starting"

./sleeper 2 0 &
./sleeper 2 1 &
./sleeper 3 0 &
./sleeper 2 0 &

for job in `jobs -p`
do
    echo $job
    wait $job || let "FAIL+=1"
done

echo $FAIL

if [ "$FAIL" == "0" ]; then
    echo "YAY!"
else
    echo "FAIL! ($FAIL)"
fi

I think running "doCalculations; echo $? >> /tmp/acc" in a subshell that is sent to the background, and then waiting, should work; /tmp/acc will then contain the exit statuses, one per line. I don't know about any consequences of multiple processes appending to the accumulator file, though.

Here is an example of this suggestion:

File: doCalculations

#!/bin/sh

random -e 20
sleep $?
random -e 10

File: try

#!/bin/sh

rm -f /tmp/acc

for i in $( seq 0 20 ) 
do
        ( ./doCalculations "$i"; echo "$?" >>/tmp/acc ) &
done

wait

cat /tmp/acc | fmt
rm /tmp/acc

Output of running ./try:

5 1 9 6 8 1 2 0 9 6 5 9 6 0 0 4 9 5 5 9 8
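If interleaved appends to the single accumulator file are a concern, a variant of the same idea gives each child its own status file (a sketch; the /tmp/acc.* names are illustrative):

#!/bin/sh
rm -f /tmp/acc.*

for i in $( seq 0 20 )
do
        ( ./doCalculations "$i"; echo "$?" > "/tmp/acc.$i" ) &
done

wait

cat /tmp/acc.* | fmt   # one status per file, so writes cannot interleave
rm -f /tmp/acc.*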
