Linux find exit status

How to get the exit code of commands started by find?

I am using find in a Travis CI build to check files of a particular type with a program. (To be exact, it is a shellcheck check.) However, when using find, the exit codes of the command(s)/subshells it executes are naturally discarded, as they are not passed to the main script. As an example, this is a find command:

find . -type f -iname "*.sh" -exec sh ./testScripts.sh "{}" \;

./testScripts.sh may exit with 0 or >= 1, depending on the test result. testScripts.sh exits properly with the correct exit code, but because of find, the exit code of the command is always 0. All I want is that if one file/execution errors, this error is propagated up to Travis CI. How can I accomplish this?

Can you change testScripts.sh so it accepts multiple scripts to run? That way you could use the -exec {} + variant, which exits with status != 0 if the command fails.

2 Answers

Using Stephen Kitt’s suggestion in comments:

find . -type f -iname "*.sh" -exec sh -c 'for n; do ./testScripts.sh "$n" || exit 1; done' sh {} +

This will cause the sh -c script to exit with a non-zero exit status as soon as testScripts.sh does. This means that find will also exit with a non-zero exit status:

If terminated by a plus sign, the pathnames for which the primary is evaluated are aggregated into sets, and utility will be invoked once per set, similar to xargs(1) . If any invocation exits with a non-zero exit status, then find will eventually do so as well, but this does not cause find to exit early.
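As a quick sanity check of this behavior (the directory and file names below are made up for illustration), a failing utility makes find itself exit non-zero when the + form is used:

```shell
# Scratch demo: "false" always exits 1, so with the "+" form
# find's own exit status becomes non-zero as well.
mkdir -p demo && touch demo/a.sh demo/b.sh

find demo -name '*.sh' -exec false {} +
echo "find exit status: $?"    # non-zero

find demo -name '*.sh' -exec true {} +
echo "find exit status: $?"    # 0
```

With \; instead of +, POSIX find discards each invocation's exit status, which is exactly the problem described in the question.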

Regarding the questions in comment:

  1. for n; do ...; done looks weird but makes sense when you realize that without anything to iterate over, the for loop will iterate over "$@" implicitly.
  2. The trailing sh at the end will be placed in $0 of the sh -c shell. The {} will be substituted by a number of pathnames. Without sh there, the first pathname would end up in $0 and would not be picked up by the loop, since it's not in "$@". $0 usually contains the name of the current interpreter (it will be used in error messages produced by the sh -c shell).
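Both points can be seen directly with a toy sh -c invocation (the words one, two, three are placeholder arguments):

```shell
# Toy demonstration: the first word after the command string becomes $0,
# the remaining words become the positional parameters "$@", which a
# bare "for n" loop iterates over.
sh -c 'echo "\$0 is $0"; for n; do echo "positional: $n"; done' sh one two three
# prints:
#   $0 is sh
#   positional: one
#   positional: two
#   positional: three
```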

Bash — find exec return value

I need a way to tell if grep finds something, and ideally pass that return value to an if statement. Let's say I have a tmp folder (the current folder) containing several files and sub-folders. I want to search all files named abc for a pattern xyz. The search is considered successful if it finds any occurrence of xyz (it does not matter how many times xyz is found), and fails if no occurrence is found. In bash, it can be done like this:

find . -name "abc" -exec grep "xyz" {} \;

That would show if xyz is found at all. But I'm not sure how to pass the result (successful or not) back to an if statement. Any help would be appreciated.

Note that your example shows usage of the find command, and has nothing in particular to do with bash. Where is your if statement, and how do you want to use the output of your search?

Are you looking to know about the result for each file separately, or collectively (the difference between "does every file abc contain the text xyz" and "does at least one file abc contain the text xyz")?

5 Answers

x=`find . -name abc | xargs grep xyz`
echo $x

That is, x contains your return value. It is blank when there is no match.

Just to be clear, the echo $x part is just for you to test. Then you need to figure out how to use an if statement in bash, which should be easy for you to find out. If not then you can ask about this topic.
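For instance, a minimal sketch (assuming the same abc/xyz layout as the question) of wrapping that capture in an if statement:

```shell
# Sketch: treat the search as successful when the captured output is
# non-empty; [ -n ... ] tests for a non-empty string.
x=$(find . -name abc | xargs grep xyz)
if [ -n "$x" ]; then
    echo "pattern found"
else
    echo "pattern not found"
fi
```

Note that filenames containing spaces or newlines can confuse this xargs pipeline; the -exec form used elsewhere on this page handles such names more robustly.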

Right, then checking if the results is empty or not seems to do the trick for this case: stackoverflow.com/questions/3061036/…

This is somewhat unsophisticated, and needlessly collects the list of matching files. If you really want to know which files matched, piping the output from find -exec grep to a loop over those file names is usually more natural.

If you want to know that find finds some files abc and that at least one of them contains the string xyz , then you can probably use:

if find . -name 'abc' -type f -exec grep -q xyz {} +
then
    : All invocations of grep found at least one xyz and nothing else failed
else
    : One or more invocations of grep failed to find xyz or something else failed
fi

This relies on find returning an exit status for its own operations, and a non-zero exit status if any of the command(s) it executes fails. The + at the end groups as many file names as find thinks reasonable into a single command line. You need quite a lot of file names (a large number of fairly long names) to make find run grep multiple times. On a Mac running Mac OS X 10.10.4, I got to about 3,000 files, each with about 32 characters in the name, for an argument list of just over 100 KiB, without grep being run multiple times. OTOH, when I had just under 8,000 files, I got two runs of grep, with around 130 KiB of argument list for each.

Someone briefly left a comment asking whether the exit status of find is guaranteed. The answer is ‘yes’ — though I had to modify my description of the exit status a bit. Under the description of -exec , POSIX specifies:

If any invocation [of the ‘utility’, such as grep in this question] returns a non-zero value as exit status, the find utility shall return a non-zero exit status.

And under the general ‘Exit status’ it says:

The following exit values shall be returned:

0 — All path operands were traversed successfully.
>0 — An error occurred.

Thus, find will report success as long as none of its own operations fails and as long as none of the grep commands it invokes fails. Zero files found but no failures will be reported as success. (Failures might be lack of permission to search a directory, for example.)
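If you want the other interpretation raised in the comments, whether at least one abc file contains xyz, one common approach (not from the answer above) is to test whether the search produces any output at all:

```shell
# Succeeds when at least one abc file contains xyz: grep -l prints the
# names of matching files, and the final "grep -q ." succeeds only if
# that list is non-empty.
if find . -name abc -type f -exec grep -l xyz {} + | grep -q .; then
    echo "at least one file matches"
else
    echo "no file matches"
fi
```

Because the if statement tests the pipeline, only the exit status of the final grep -q matters here; find's own exit status is ignored, unlike in the "every file" version above.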

