Check exit code in Linux

Checking Bash exit status of several commands efficiently

Is there something similar to pipefail for multiple commands, like a ‘try’ statement but within bash? I would like to do something like this:

try {
    command1
    command2
    command3
}

And at any point, if any command fails, drop out and echo out the error of that command. I don’t want to have to do something like:

command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi

command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi

And I especially don’t want to do something like:

pipefail -o command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3

Because the arguments of each command I believe (correct me if I’m wrong) will interfere with each other. These two methods seem horribly long-winded and nasty to me so I’m here appealing for a more efficient method.
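For reference, pipefail is a shell option rather than a standalone command, so the line above wouldn’t run as written; when the option is enabled, a pipeline’s exit status becomes that of the rightmost command that failed, instead of always that of the last command. A minimal sketch:

set -o pipefail
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3
echo "pipeline exited with $?"   # non-zero if any stage failed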

@PabloBianchi, set -e is a horrid idea. See the exercises in BashFAQ #105 discussing just a few of the unexpected edge cases it introduces, and/or the comparison showing incompatibilities between different shells’ (and shell versions’) implementations at in-ulm.de/~mascheck/various/set-e.

16 Answers

You can write a function that launches and tests the command for you. Assume command1 and command2 are environment variables that have been set to a command.

function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}

mytest "$command1"
mytest "$command2"

Don’t use $*, it’ll fail if any arguments have spaces in them; use "$@" instead. Similarly, put $1 inside the quotes in the echo command.
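To illustrate the difference, a contrived sketch (the demo function names are made up):

demo_star() { for word in $*;   do echo "<$word>"; done; }
demo_at()   { for word in "$@"; do echo "<$word>"; done; }

demo_star "one arg with spaces"   # four lines: <one> <arg> <with> <spaces>
demo_at   "one arg with spaces"   # one line:   <one arg with spaces>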

This is the method I went with. To be honest, I don’t think I was clear enough in my original post, but this method allows me to write my own ‘test’ function, so I can then perform any error actions I like in there that are relevant to the actions performed in the script. Thanks 🙂

Wouldn’t the exit code returned by mytest() always be 0 in case of an error, since the last command executed was ‘echo’? You might need to save the value of $? first.
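The pitfall being described is easy to demonstrate: $? is rewritten by every command, including echo.

false
echo "first check: $?"    # prints 1, the status of false
echo "second check: $?"   # prints 0, the status of the previous echo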

This is not a good idea, and it encourages bad practice. Consider the simple case of ls. If you invoke ls foo and get an error message of the form ls: foo: No such file or directory\n you understand the problem. If instead you get ls: foo: No such file or directory\nerror with ls\n you become distracted by superfluous information. In this case it is easy enough to argue that the superfluity is trivial, but it quickly grows. Concise error messages are important. But more importantly, this type of wrapper encourages writers to omit good error messages entirely.

What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do

set -e # DON'T do this. See commentary below. 

at the start of the script (but note the warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:

#!/bin/sh
set -e  # Use caution; e.g., don't do this
command1
command2
command3

and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)

As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.

However, I no longer consider this to be a good practice. set -e has changed its semantics with different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: set -e; foo() { false; echo should not print; }; foo && echo ok. The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:

#!/bin/sh
command1 || exit
command2 || exit
command3 || exit

or:

#!/bin/sh
command1 && command2 && command3

Be advised that while this solution is the simplest, it does not let you perform any cleanup on failure.
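If you do need cleanup on failure, one common pattern is a small cleanup function invoked before the exit; this is just a sketch, and cleanup and the temp file path are placeholders:

#!/bin/sh
cleanup() { rm -f /tmp/work.$$; }

command1 || { cleanup; exit 1; }
command2 || { cleanup; exit 1; }
cleanup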

Also note that the semantics of errexit (set -e) have changed in different versions of bash, and it will often behave unexpectedly inside functions and in other settings. I no longer recommend its use. IMO, it is better to write || exit explicitly after each command.
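One of those surprises is easy to reproduce; a runnable sketch (exact behavior varies by bash version, which is part of the problem):

#!/bin/bash
set -e
foo() { false; echo "should not print"; }
foo && echo ok
# In modern bash this prints both lines: set -e is suppressed for any
# command that is part of a && / || list, including commands inside
# functions that the list invokes.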

I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.

You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.
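For example (the log path and script name here are purely illustrative):

LOG_STEPS=/var/log/install_errors.log ./install.sh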

Usage

step "Installing XFS filesystem tools:" try rpm -i xfsprogs-*.rpm next step "Configuring udev:" try cp *.rules /etc/udev/rules.d try udevtrigger next step "Adding rc.postsysinit hook:" try cp rc.postsysinit /etc/rc.d/ try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit next 

Output

Installing XFS filesystem tools:        [  OK  ]
Configuring udev:                       [FAILED]
Adding rc.postsysinit hook:             [  OK  ]

Code

#!/bin/bash

. /etc/init.d/functions

# Use step(), try(), and next() to perform a series of commands and print
# [  OK  ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
#     step "Remounting / and /boot as read-write:"
#     try mount -o remount,rw /
#     try mount -o remount,rw /boot
#     next
step() {
    echo -n "$@"

    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=

    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && {       shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?

    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$

        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}

            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi

    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]]  && echo_success || echo_failure
    echo

    return $STEP_OK
}

This is pure gold. While I understand how to use the script, I don’t fully grasp each step; it’s definitely outside my bash scripting knowledge, but I think it’s a work of art nonetheless.

Does this tool have a formal name? I’d love to read a man page on this style of step/try/next logging

These shell functions seem to be unavailable on Ubuntu? I was hoping to use this, something portable-ish though

@ThorSummoner, this is likely because Ubuntu uses Upstart instead of SysV init, and will soon be using systemd. RedHat tends to maintain backwards compatibility for long, which is why the init.d stuff is still there.

I’ve posted an expansion on John’s solution and allows it to be used on non-RedHat systems such as Ubuntu. See stackoverflow.com/a/54190627/308145
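One way to make the script degrade gracefully on such systems is to define stand-ins when /etc/init.d/functions is missing; the colors and spacing below are rough approximations, not the Red Hat originals:

if [[ -f /etc/init.d/functions ]]; then
    . /etc/init.d/functions
else
    # Crude substitutes: print the status tag without cursor repositioning.
    echo_success() { printf '\t\t\t[\033[32m  OK  \033[0m]'; }
    echo_failure() { printf '\t\t\t[\033[31mFAILED\033[0m]'; }
fi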

For what it’s worth, a shorter way to write code to check each command for success is:

command1 || echo "command1 borked it"
command2 || echo "command2 borked it"

It’s still tedious but at least it’s readable.

Didn’t think of this, not the method I went with but it is quick and easy to read, thanks for the info 🙂

To execute the commands silently and achieve the same thing: command1 &> /dev/null || echo "command1 borked it"

I’m a fan of this method, is there a way to execute multiple commands after the OR? Something like command1 || (echo command1 borked it ; exit)

@AndreasKralj, yes, you can run a one-liner to execute multiple commands after failure: command1 || { echo "command1 borked it"; exit; } The last semicolon is a must!

@VladimirPerepechenko Thank you very much! I’ve used this method for years now and it’s served me well!

An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:

command1 && command2 && command3 

This isn’t the syntax you asked for in the question, but it’s a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don’t have to do so manually (maybe with a -q flag to silence errors when you don’t want them). If you have the ability to modify these commands, I’d edit them to yell on failure, rather than wrap them in something else that does so.

Notice also that you don’t need to do:

command1
if [ $? -ne 0 ]; then

You can simply say:

if ! command1; then

And when you do need to check return codes, use an arithmetic context instead of [ ... -ne:

ret=$?
# do something
if (( ret != 0 )); then

Instead of creating runner functions or using set -e, use a trap:

trap 'echo "error"; do_cleanup failed; exit' ERR trap 'echo "received signal to stop"; do_cleanup interrupted; exit' SIGQUIT SIGTERM SIGINT do_cleanup () < rm tempfile; echo "$1 $(date)" >> script_log; > command1 command2 command3 

The trap even has access to the line number and the command line of the command that triggered it. The variables are $BASH_LINENO and $BASH_COMMAND.

If you want to mimic a try block even more closely, use trap - ERR to turn the trap off at the end of the "block".
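Putting those pieces together, a sketch (do_cleanup is the function from the example above):

trap 'echo "\"$BASH_COMMAND\" failed on line $LINENO" >&2; do_cleanup failed; exit' ERR
command1
command2
trap - ERR   # end of the "try" block; later failures are no longer trapped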

Personally I much prefer to use a lightweight approach, as seen here:

try apt-fast upgrade -y
try asuser vagrant "echo 'uname -a' >> ~/.profile"
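The try and asuser helpers come from the linked approach and aren’t reproduced in the answer; a minimal guess at what such a try wrapper might look like:

try() {
    local status
    "$@" || { status=$?; echo "FAILED (exit $status): $*" >&2; exit $status; }
}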

I’ve developed an almost flawless try & catch implementation in bash, that allows you to write code like:

try
    echo 'Hello'
    false
    echo 'This will not be displayed'
catch
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"

You can even nest the try-catch blocks inside themselves!

try {
    echo 'Hello'

    try {
        echo 'Nested Hello'
        false
        echo 'This will not execute'
    } catch {
        echo "Nested Caught (@ $__EXCEPTION_LINE__)"
    }

    false
    echo 'This will not execute too'
} catch {
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
}

The code is a part of my bash boilerplate/framework. It further extends the idea of try & catch with things like error handling with backtrace and exceptions (plus some other nice features).

Here’s the code that’s responsible just for try & catch:

set -o pipefail
shopt -s expand_aliases
declare -ig __oo__insideTryCatch=0

# if try-catch is nested, then set +e before so the parent handler doesn't catch us
alias try="[[ \$__oo__insideTryCatch -gt 0 ]] && set +e;
           __oo__insideTryCatch+=1; ( set -e;
           trap \"Exception.Capture \${LINENO}; \" ERR;"
alias catch=" ); Exception.Extract \$? || "

Exception.Capture() {
    local script="${BASH_SOURCE[1]#./}"

    if [[ ! -f /tmp/stored_exception_source ]]; then
        echo "$script" > /tmp/stored_exception_source
    fi
    if [[ ! -f /tmp/stored_exception_line ]]; then
        echo "$1" > /tmp/stored_exception_line
    fi
    return 0
}

Exception.Extract() {
    if [[ $__oo__insideTryCatch -gt 1 ]]
    then
        set -e
    fi

    __oo__insideTryCatch+=-1

    __EXCEPTION_CATCH__=( $(Exception.GetLastException) )

    local retVal=$1
    if [[ $retVal -gt 0 ]]
    then
        # BACKWARDS COMPATIBLE WAY:
        # export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-1)]}"
        # export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-2)]}"
        export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[-1]}"
        export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[-2]}"
        export __EXCEPTION__="${__EXCEPTION_CATCH__[@]:0:(${#__EXCEPTION_CATCH__[@]} - 2)}"
        return 1 # so that we may continue with a "catch"
    fi
}

Exception.GetLastException() {
    if [[ -f /tmp/stored_exception ]] && [[ -f /tmp/stored_exception_line ]] && [[ -f /tmp/stored_exception_source ]]
    then
        cat /tmp/stored_exception
        cat /tmp/stored_exception_line
        cat /tmp/stored_exception_source
    else
        echo -e " \n${BASH_LINENO[1]}\n${BASH_SOURCE[2]#./}"
    fi

    rm -f /tmp/stored_exception /tmp/stored_exception_line /tmp/stored_exception_source
    return 0
}

Feel free to use, fork and contribute — it’s on GitHub.
