How to check if find command didn’t find anything?
I want to check in a shell script whether the find command found something or not. If not, I want to say echo 'You don't have files older than $DAYS days' or something like this 😉 How can I do that in a shell script?
6 Answers
Count the number of lines output and store it in a variable, then test it:
lines=$(find . | wc -l)
if [ $lines -eq 0 ]; then
    ...
fi
To use the find command inside an if condition, you can try this one-liner:
[[ ! -z `find 'YOUR_DIR/' -name 'something'` ]] && echo "found" || echo "not found"
[prompt] $ mkdir -p Dir/dir1 Dir/dir2/ Dir/dir3
[prompt] $ ls Dir/
dir1  dir2  dir3
[prompt] $ [[ ! -z `find 'Dir/' -name 'something'` ]] && echo "found" || echo "not found"
not found
[prompt] $ touch Dir/dir3/something
[prompt] $ [[ ! -z `find 'Dir/' -name 'something'` ]] && echo "found" || echo "not found"
found
Alternatively, -n can be used instead of ! -z , for example:
[[ -n `find $dir -name $filename` ]] && echo found
-n STRING the length of STRING is nonzero
Exit status 0 is easy with find; an exit status greater than 0 is harder, because that usually only happens with an error. However, we can make it happen:
if find -type f -exec false {} +
then
    echo 'nothing found'
else
    echo 'something found'
fi
This is not correct. The question asks how to return false if it DOES NOT find anything. -exec will only run if it does find something so this would never work properly.
@deltaray: This answer returns false when files are found and true when they’re not. Simply negate the result with a ! or use the then/else as shown which is set up in a negated form.
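For instance, a minimal sketch of the negated form (assuming GNU find, which defaults to the current directory and propagates a failing -exec ... + status):
# negating find's exit status: nonzero means false ran, i.e. files were found
if ! find -type f -exec false {} +
then
    echo 'something found'
else
    echo 'nothing found'
fi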
I wanted to do this in a single line if possible, but couldn’t see a way to get find to change its exit code without causing an error.
However, with your specific requirement, the following should work:
find /directory/whatever -name '*.tar.gz' -mtime +$DAYS | grep 'tar.gz' || echo "You don't have files older than $DAYS days"
This works by piping the output of find into a grep for the same pattern; grep returns a failure exit code if it doesn't find anything, or succeeds and echoes the matching lines if it does.
Everything after || will only execute if the preceding command fails.
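If you don't want the matching paths printed, a variant of the same idea (not part of the original answer) uses grep -q, which suppresses output and only sets the exit status:
find /directory/whatever -name '*.tar.gz' -mtime +$DAYS | grep -q 'tar.gz' || echo "You don't have files older than $DAYS days"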
How can I use bash’s if test and find commands together?
I have a directory with crash logs, and I’d like to use a conditional statement in a bash script based on a find command. The log files are stored in this format:
/var/log/crashes/app-2012-08-28.log
/var/log/crashes/otherapp-2012-08-28.log
I want the if statement to only return true if there is a crash log for a specific app which has been modified in the last 5 minutes. The find command that I would use is:
find /var/log/crashes -name app-\*\.log -mmin -5
Something like this:
if [ test `find /var/log/crashes -name app-\*\.log -mmin -5` ]
then
    service myapp restart
fi
- I've looked at the if flags but I'm not sure which one, if any, I should use.
- Do I need the test directive, or should I just process the results of the find command directly, or maybe use find . | wc -l to get a line count instead?
- Not 100% necessary to answer this question, but test is for testing against the return codes that commands return? And those codes are sort of invisible, outside of stdout / stderr? I read the man page but I'm still pretty unclear about when to use test and how to debug it.
The real answer to the general case is to use find . -exec . Also see the example commands under Why is looping over find’s output bad practice?
@Wildcard — unfortunately that doesn’t solve the general case: it doesn’t work if there is more than one match and the action needs to only run once, and it doesn’t work if you need an action to run when there are no matches. The former can be solved by using . -exec command ‘;’ -quit , but I don’t believe there is any solution for the latter other than parsing the result. Also, in either case, the primary problem with parsing the result of find (i.e. inability to distinguish delimiters from characters in filenames) doesn’t apply, as you don’t need to find delimiters in these cases.
-exec is good for a quick response, but for broader conditions, if find . | grep . is better: unix.stackexchange.com/a/684153/43233
7 Answers
[ and test are synonyms (except [ requires ] ), so you don’t want to use [ test :
[ -x /bin/cat ] && echo 'cat is executable'
test -x /bin/cat && echo 'cat is executable'
test returns a zero exit status if the condition is true, otherwise nonzero. This can actually be replaced by any program to check its exit status, where 0 indicates success and non-zero indicates failure:
# echoes "command succeeded" because echo rarely fails
if /bin/echo hi; then echo 'command succeeded'; else echo 'command failed'; fi

# echoes "command failed" because rmdir requires an argument
if /bin/rmdir; then echo 'command succeeded'; else echo 'command failed'; fi
However, all of the above examples only test against the program’s exit status, and ignore the program’s output.
For find , you will need to test if any output was generated. -n tests for a non-empty string:
if [[ -n $(find /var/log/crashes -name "app-*.log" -mmin -5) ]]
then
    service myapp restart
fi
A full list of test arguments is available by invoking help test at the bash commandline.
If you are using bash (and not sh ), you can use [[ condition ]] , which behaves more predictably when there are spaces or other special cases in your condition. Otherwise it is generally the same as using [ condition ] . I’ve used [[ condition ]] in this example, as I do whenever possible.
I also changed `command` to $(command) , which also generally behaves similarly, but is nicer with nested commands.
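For example (a made-up path, just to illustrate the nesting difference):
# with $( ), nesting needs no extra escaping:
parent=$(basename $(dirname /var/log/crashes/app-2012-08-28.log))
# with backticks, the inner pair has to be escaped:
parent=`basename \`dirname /var/log/crashes/app-2012-08-28.log\``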
This answer beats all around the root of the problem but gracefully avoids mentioning exactly what that is.
find will exit successfully if there weren’t any errors, so you can’t count on its exit status to know whether it found any file. But, as you said, you can count how many files it found and test that number.
It would be something like this:
if [ $(find /var/log/crashes -name 'app-*.log' -mmin -5 | wc -l) -gt 0 ]; then
    ...
fi
test (aka [ ) doesn't check the error codes of other commands; it has a special syntax to do tests, and then exits with an error code of 0 if the test was successful, or 1 otherwise. It is if that checks the error code of the command you pass to it, and executes its body based on it.
See man test (or help test , if you use bash ), and help if (ditto).
In this case, wc -l will output a number. We use test ‘s option -gt to test if that number is greater than 0 . If it is, test (or [ ) will return with exit code 0 . if will interpret that exit code as success, and it will run the code inside its body.
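To see those exit codes directly, which helps when debugging test, you can print $? right after a command. A quick illustration at an interactive prompt (the strings are just examples):
$ test -n "hello"; echo $?
0
$ test -n ""; echo $?
1
$ [ 2 -gt 0 ]; echo $?
0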
if [ -n "$(find /var/log/crashes -name app-\*\.log -mmin -5)" ]; then
if test -n "$(find /var/log/crashes -name app-\*\.log -mmin -5)"; then
The commands test and [ … ] are exactly synonymous. The only difference is their name, and the fact that [ requires a closing ] as its last argument. As always, use double quotes around the command substitution, otherwise the output of the find command will be broken into words, and here you'll get a syntax error if there is more than one matching file (and when there are no arguments, [ -n ] is true, whereas you want [ -n "" ] which is false).
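A quick way to see that edge case at the prompt (illustration only):
$ [ -n ] && echo true || echo false
true
$ [ -n "" ] && echo true || echo false
false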
In ksh, bash and zsh but not in ash, you can also use [[ … ]] which has different parsing rules: [ is an ordinary command, whereas [[ … ]] is a different parsing construct. You don’t need double quotes inside [[ … ]] (though they don’t hurt). You still need the ; after the command.
if [[ -n $(find /var/log/crashes -name app-\*\.log -mmin -5) ]]; then
This can potentially be inefficient: if there are many files in /var/log/crashes , find will explore them all. You should make find stop as soon as it finds a match, or soon after. With GNU find (non-embedded Linux, Cygwin), use the -quit primary.
if [ -n "$(find /var/log/crashes -name app-\*\.log -mmin -5 -print -quit)" ]; then
With other systems, pipe find into head to at least quit soon after the first match (find will die of a broken pipe).
if [ -n "$(find /var/log/crashes -name app-\*\.log -mmin -5 -print | head -n 1)" ]; then
(You can use head -c 1 if your head command supports it.)
In zsh, glob qualifiers can express the same test directly ( mm-5 selects files modified within the last 5 minutes, [1] keeps at most the first match):
crash_files=(/var/log/crashes/**/app-*.log(mm-5[1]))
if (($#crash_files)); then
How to test if a directory is empty with find
I am trying to write an if statement in a Unix shell script that returns true if a directory is empty, and false if it's not. This type of thing:
if directory foo is empty
then
    echo empty
else
    echo not empty
fi
9 Answers
Simple — use the -empty flag. Quoting the find man page:
-empty True if the current file or directory is empty.
For example, find . -type d -empty will list all the empty directories under the current directory.
Three solutions follow:
- The first one is based on find, as the OP requested;
- The second is based on ls;
- The third one is 100% bash, but it invokes (spawns) a sub-shell.
1. [ $(find your/dir -prune -empty) = your/dir ]
dn=your/dir
if [ x$(find "$dn" -prune -empty) = x"$dn" ]; then
    echo empty
else
    echo not empty
fi
> mkdir -v empty1 empty2 not_empty
mkdir: created directory 'empty1'
mkdir: created directory 'empty2'
mkdir: created directory 'not_empty'
> touch not_empty/file
> find empty1 empty2 not_empty -prune -empty
empty1
empty2
find has printed the two empty directories only ( empty1 and empty2 ).
This answer looks like the -maxdepth 0 -empty answer from Ariel, but this one is a bit shorter 😉
2. [ $(ls -A your/directory) ]
if [ "$(ls -A your/dir)" ]; then echo not empty else echo empty fi
[ "$(ls -A your/dir)" ] && echo not empty || echo empty
Similar to the answers from Michael Berkowski and gpojd, but here we do not need to pipe to wc. See also Bash Shell Check Whether a Directory is Empty or Not by nixCraft (2007).
3. (( ${#files} ))
files=$(shopt -s nullglob dotglob; echo your/dir/*)
if (( ${#files} )); then
    echo not empty
else
    echo empty or does not exist
fi
Caution: as written in the example above, there is no difference between an empty directory and a non-existing one.
This last answer was inspired by Bruno De Fraine's answer and the excellent comments from teambob.
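If the difference mentioned in the caution matters, one way to tell the cases apart is to check for the directory first; a rough sketch (bash, the directory name is just a placeholder):
dir=your/dir
if [ ! -d "$dir" ]; then
    echo "does not exist or is not a directory"
else
    # nullglob/dotglob in a sub-shell so hidden files count and no-match expands to nothing
    files=$(shopt -s nullglob dotglob; echo "$dir"/*)
    if (( ${#files} )); then
        echo not empty
    else
        echo empty
    fi
fi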
There must be an easier way, but you can test for an empty/nonempty directory with ls -1A piped to wc -l
DIRCOUNT=$(ls -1A /path/to/dir | wc -l)
if [ $DIRCOUNT -eq 0 ]; then
    # it's empty
fi
find directoryname -maxdepth 0 -empty
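That command prints the directory name only when the directory exists and is empty, so a sketch of wrapping it in a test could be:
if [ -n "$(find directoryname -maxdepth 0 -empty)" ]; then
    echo "directoryname is empty"
else
    echo "directoryname is not empty (or does not exist)"
fi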
Why do you have to use find? In bash, ls -a will list two entries ( . and .. ) for an empty directory and should list more than that for non-empty ones.
if [ $(ls -a | wc -l) -eq 2 ]; then echo "empty"; else echo "not empty"; fi
The comparison is against two to account for the . and .. entries. The -1 flag is not necessary for ls when its output is piped to wc, since ls already prints one entry per line when not writing to a terminal.
if [ `find foo | wc -l` -eq 1 ]
then
    echo Empty
else
    echo Not empty
fi
foo is the directory name here.
dircnt.sh:
-----------
#!/bin/sh
if [ `ls $1 2> /dev/null | wc -l` -gt 0 ]; then
    echo true
else
    echo false
fi
andreas@earl ~ $ mkdir asal
andreas@earl ~ $ sh dircnt.sh asal
false
andreas@earl ~ $ touch asal/1
andreas@earl ~ $ sh dircnt.sh asal
true
I don’t like using ls because I have some very large directories and I hate wasting the resources to fill a pipe with all that stuff.
I don’t like filling a $files variable with all that stuff either.
So while all of @libre’s answers are interesting, I find them all unreadable, and prefer to make a function of my favorite, the ‘find’ solution:
function isEmptyDir {
    [ -d $1 -a -n "$( find $1 -prune -empty 2>/dev/null )" ]
}
So that I can write code that I can read a year from now without asking "what was I thinking?"
if isEmptyDir some/directory
then
    echo "some/directory is empty"
else
    echo "some/directory does not exist, is not a directory, or is not empty"
fi
Or I can use additional code to tease apart the negative results, but that code should be pretty obvious. In any case, I'll know right away what I was thinking.
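For example, a sketch of that extra teasing apart (reusing the isEmptyDir function above, with a placeholder directory name):
if [ ! -d some/directory ]; then
    echo "some/directory does not exist or is not a directory"
elif isEmptyDir some/directory; then
    echo "some/directory is empty"
else
    echo "some/directory is not empty"
fi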