How to perform a for-each loop over all the files under a specified path?
The following command attempts to enumerate all *.txt files in the current directory and process them one by one:
for line in "find . -iname '*.txt'"; do echo $line ls -l $line; done
    ls: invalid option -- 'e'
    Try `ls --help' for more information.
Here is a better way to loop over files as it handles spaces and newlines in file names:
    #!/bin/bash
    find . -type f -iname "*.txt" -print0 | while IFS= read -r -d $'\0' line; do
        echo "$line"
        ls -l "$line"
    done
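In bash, passing an empty delimiter to read makes it read up to a NUL byte, so -d '' behaves the same as -d $'\0' and can be easier on the eyes; a minimal variant of the same loop (my rewording, not part of the original answer):

    find . -type f -iname "*.txt" -print0 |
    while IFS= read -r -d '' line; do
        printf '%s\n' "$line"   # print the path exactly as find delivered it
        ls -l "$line"
    done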
I’m having trouble getting this to work when the script containing this snippet is called from within a cronjob @reboot. It complains about the -d flag of the read command and then fails to execute. Otherwise it works great.
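A plausible cause (an assumption, since the crontab isn't shown) is that cron ends up running the script with plain sh rather than bash: dash's read builtin does not accept -d, which matches the complaint you see. Making the cron entry invoke bash explicitly should sidestep that, e.g.:

    # hypothetical crontab entry; the script path is only an example
    @reboot /bin/bash /path/to/list-txt-files.sh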
The for loop will iterate over each (space-separated) entry in the provided string.
You do not actually execute the find command, but provide it as a string (which gets iterated by the for loop). Instead of the double quotes, use either backticks or $():
    for line in $(find . -iname '*.txt'); do
        echo "$line"
        ls -l "$line"
    done
Furthermore, if your file paths/names contain spaces, this method fails (since the for loop iterates over space-separated entries). Instead it is better to use the method described in dogbane's answer.
As said, for line in "find . -iname '*.txt'"; iterates over all space-separated entries, which are:

    find
    .
    -iname
    '*.txt'

The first two do not result in an error (besides the undesired behavior), but the third is problematic as it executes:

    ls -l -iname
A lot of (bash) commands can combine single-character options, so -iname is the same as -i -n -a -m -e. And voilà: your invalid option -- 'e' error!
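You can reproduce that in isolation by running the third entry's command by hand:

    $ ls -l -iname
    ls: invalid option -- 'e'
    Try `ls --help' for more information.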
Use for loop in find exec
To use multiple statements, such as a for loop, as the argument to -exec, one needs to invoke a shell, such as bash, explicitly:
    find .. -name bin -exec bash -c 'for file in "$1"/*; do echo "$file"; done' none {} \;
This is safe even for filenames that contain spaces or other hostile characters.
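If there may be many bin directories, the same idea can batch results into fewer bash invocations by ending the -exec with + instead of \; (a sketch of mine, not from the answer above):

    find .. -name bin -exec bash -c '
        for dir in "$@"; do             # "$@" holds every path find packed into this call
            for file in "$dir"/*; do
                echo "$file"
            done
        done' bash {} +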
How bash -c works
One can invoke bash with a command of the form:
    bash -c some_complex_commands arg0 arg1 arg2 ...
In this case, bash will execute whatever is in the string some_complex_commands. Those commands can make use of the usual shell positional parameters. The first argument after the command string, arg0 above, is assigned to $0, the second to $1, the third to $2, etc.
When one executes a normal shell script, $0 is the name of the script and $1 is the first argument that appears on the command line. In keeping with that tradition, the bash -c command was written to assign the file name, {} in find's notation, to $1. Since this script does not have a sensible name, none is assigned as a placeholder to $0.
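A quick way to see that assignment in action outside of find (the arguments here are purely illustrative):

    $ bash -c 'echo "zeroth: $0  first: $1  second: $2"' none /tmp/example.txt extra
    zeroth: none  first: /tmp/example.txt  second: extra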
It looks like you’ve got things reversed from what you want. Try this:
    for f in `find .. -name bin`
    do
        echo $f
    done
@JamesAndino There is no need to loop inside find; it already outputs the found elements one by one, like a loop by itself.
Your answer doesn’t even do anything except for printing a bunch of blank lines — $file is never set to anything. Perhaps you don’t understand what James was trying to do.
Correct on both counts, though I’ve just fixed the answer to address the first issue you noted. I still don’t understand why he’s looping inside the find’s exec instead of looping over the entire output, but that may be because he never explained that.
I'm finding a set of folders, then looping through the collection of commands in those bins. The problem with your answer is that if there are newline characters (maybe spaces, tabs, etc., I always forget) the script will die a fiery death. Normally not a problem looping over your own bin, but I'll be looking for sub-folders with a user-chosen name, and kaboom goes the trash puter.
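If odd characters in the directory names are the worry, the -print0/read -d '' technique shown earlier on this page can be reused here as well (a sketch, not James Andino's command):

    find .. -name bin -print0 |
    while IFS= read -r -d '' d; do
        printf '%s\n' "$d"   # $d arrives intact, whatever characters it contains
    done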
You can approximate the output of your pseudo-code there with find primitives as is:
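The command itself did not survive in this copy; judging from the description below and the -path expression reused later in this answer, it was presumably something along the lines of:

    find .. ! -name '.*' -path '*/bin/*'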
...which should print only files/dirs with names that do not begin with a . and which are rooted at some level both in the parent directory and, at some greater depth, in a dir named /bin.
In general I can think of all kinds of practical purposes for looping in a find -exec child process, but not one in which I could consider it practical to do a shell glob on an argument passed by find. To do the same thing your pseudo-code does with printf (because its use can guarantee literal translation of arguments to output) you might do:
    find .. -type d -name bin -exec sh -c 'printf %s\\n "$0/"*' {} \;
...which does the printing and the globbing without the for loop. One difference between this and your example command, though, is that without the -type d specification any type of result matching the name bin will be echoed, and so you're highly likely to see a lot of bin/* being written to stdout. Of course, even with -type d, there's no guarantee that the * will resolve: an empty directory or one containing only . files will render no matches, and so you might see it anyway.
Note also that the example pseudo-code, because it uses the {} \; primitives, might be a lot slower than some other ways. We'll try the printf thing again like:
    find .. -type d -name bin -exec sh -c 'for d do printf %s\\n "$d/"*; done' -- {} +
...which still risks the empty glob case, but instead of -execing a shell per match, it rather gathers as many arguments as it might reasonably pass off to another process at exec time and passes the lot to sh in "$@", its positional parameter array, which we can loop over with for d do printf ... done.
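If the unexpanded-glob case matters, one way to handle it (my addition, still plain sh) is to test each expansion result before printing it:

    find .. -type d -name bin -exec sh -c '
        for d do
            for f in "$d"/*; do
                [ -e "$f" ] || [ -L "$f" ] || continue   # skip the literal "*" left when the glob matches nothing
                printf %s\\n "$f"
            done
        done' -- {} +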
Now if you wanted to do something other than just print results (which is what I would typically consider useful about looping in an -exec statement) you can fall back to the earlier -path example and combine it with -exec like:
    find .. -path \*/bin/\* -exec sh -c '
        for arg do
            : something with "$arg"
        done' -- {} +
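As a concrete (and entirely hypothetical) stand-in for the ": something" placeholder, this would simply show details for every match:

    find .. -path \*/bin/\* -exec sh -c '
        for arg do
            ls -ld "$arg"   # hypothetical action: show details for each match
        done' -- {} +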
For loop for result of "find" command
Is RESULTPATH intended to be "each of the files identified by the find command"? Also, a .jar file is a zip. Do you mean grep -z? Also, what's with the quotes on your zip line?
Sounds like you want to grep through not the jar file itself, but the list of filenames it contains. That requires a rather different command.
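That different command might look something like this (a sketch only, assuming the goal is to find jars whose table of contents mentions BuildConfig; unzip -l prints an archive's listing):

    find . -name '*.jar' -exec sh -c '
        unzip -l "$1" | grep -q BuildConfig && printf %s\\n "$1"' sh {} \;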
A clean way to run a command for each result of a grep command with xargs is to use the -Z or --null flag with grep, to make the results null-terminated, and the -0 flag with xargs so that it expects null-terminated values, like this:
    find . -name '*.jar' -exec grep -Z BuildConfig {} \; | xargs -0 zip -d RESULTPATH "*/BuildConfig.class"
I removed the Hls flags because they all seem pointless (even harmful) in your use case.
But I’m afraid this will not actually work for your case, because a .jar file is usually a binary file (zipped archive of Java classes), so I don’t think the grep will ever match anything. You can give zgrep a try to search inside the jars.
The grep command has two channels for information out of it. The first and most obvious one is of course stdout, where it sends matches it finds. But if it finds no matches, it also uses an exit value > 0 to inform you. Combined with the -q (quiet) option, you can use this as a more intelligent option for find:
    $ find . -name '*.jar' -exec zgrep -sq BuildConfig {} \; -exec zip -d {} "*/BuildConfig.class" \;
This assumes that you want to search through the compressed file using grep -Z , of course. 🙂
    find . -name '*.jar' \
        -exec zgrep -sq BuildConfig {} \; \
        -exec zip -d {} "*/BuildConfig.class" \;
find operates by evaluating each test in order. You can think of the expression as a series of filters: the -name option is the first filter, and a file only gets passed to the second -exec if the preceding -exec exited without errors.
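If you want to see which archives actually matched before zip modifies them, a -print can be slotted between the two tests (my addition); it only fires when the zgrep succeeded:

    find . -name '*.jar' \
        -exec zgrep -sq BuildConfig {} \; -print \
        -exec zip -d {} "*/BuildConfig.class" \;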