Find and execute on Linux

‘find -exec’ a shell function in Linux

Since only the shell knows how to run shell functions, you have to run a shell to run a function. You also need to mark your function for export with export -f, otherwise the subshell won't inherit it:

export -f dosomething
find . -exec bash -c 'dosomething "$0"' {} \;

@alxndr: that’ll fail on filenames with double-quotes, backquotes, dollar-signs, some escape combos, etc.

Note also that any functions your function might be calling will not be available unless you export -f those as well.
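For instance, a minimal sketch (the helper function and its name are made up for illustration):

helper() { printf 'helper saw: %s\n' "$1"; }
dosomething() { helper "$1"; }
export -f helper dosomething   # export both, or the subshell won't see helper
find . -exec bash -c 'dosomething "$0"' {} \;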

export -f will work only in some versions of bash. It's not POSIX and not cross-platform; /bin/sh will raise an error on it.

I think this could break if the filename has special meaning to the shell. Also it's inconsistent with arguments starting at $1. If the mini-script becomes a little more complicated, this could be very confusing. I propose to use export -f dosomething; find . -exec bash -c 'dosomething "$1"' _ {} \; instead.

find . | while read file; do dosomething "$file"; done 

Nice solution. It doesn't require exporting the function or messing around with escaping arguments, and it is presumably more efficient since it's not spawning a subshell for each invocation.

This is more "shell'ish", as your global variables and functions will be available without creating an entirely new shell/environment each time. Learned this the hard way after trying Adam's method and running into all sorts of environment problems. This method also does not corrupt your current user's shell with all the exports and requires less discipline.

Also, I fixed my issue by changing the while read into a for loop: for item in $(find .); do some_function "$item"; done

user5359531, that won't work with evil filenames, since the output of find is expanded onto the command line and is thus subject to word splitting. It's basically only reliable to expand "$@" (or array elements or subscripts) after the keyword in, and the double quotes are essential.

Jac’s answer is great, but it has a couple of pitfalls that are easily overcome:

find . -print0 | while IFS= read -r -d '' file; do dosomething "$file"; done 

This uses null as a delimiter instead of a linefeed, so filenames containing line feeds will work. It also uses the -r flag, which disables backslash escaping; without it, backslashes in filenames won't work. It also clears IFS so that leading or trailing whitespace in names is not discarded.

Add quotes around {} as shown below:

export -f dosomething
find . -exec bash -c 'dosomething "{}"' \;

This corrects any error due to special characters returned by find, for example files with parentheses in their name.

This is not the correct way to use {}. This will break for a filename containing double quotes. touch '"; rm -rf .; echo "I deleted all your files, haha'. Oops.

@kdubs: Use $0 (unquoted) within the command-string and pass the filename as the first argument: -exec bash -c 'echo $0' '{}' \; Note that when using bash -c, $0 is the first argument, not the script name.


@sdenham You should double-quote $0 to avoid word splitting. But in Bash it does not seem to be necessary to quote {}. I guess it is necessary for some shells, since they tell you to quote it in the manual page of find.

Processing results in bulk

For increased efficiency, many people use xargs to process results in bulk, but it is very dangerous. Because of that, an alternative method was introduced into find that executes results in bulk.

Note, though, that this method comes with some caveats, for example the requirement in POSIX find to have {} at the end of the command.

export -f dosomething
find . -exec bash -c 'for f; do dosomething "$f"; done' _ {} +

find will pass many results as arguments to a single call of bash, and the for loop iterates through those arguments, executing the function dosomething on each one.

The above solution starts arguments at $1 , which is why there is a _ (which represents $0 ).

Processing results one by one

In the same way, I think that the accepted top answer should be corrected to be

export -f dosomething
find . -exec bash -c 'dosomething "$1"' _ {} \;

This is not only saner, because arguments should always start at $1, but also because using $0 could lead to unexpected behavior if the filename returned by find has special meaning to the shell.

Have the script call itself, passing each item found as an argument:

#!/bin/bash
# When called with an argument, process it and exit.
if [ -n "$1" ]; then
    echo "doing something with $1"
    exit 0
fi
# When called without arguments, find files and re-invoke this script on each.
find . -exec "$0" {} \;
exit 0

When you run the script by itself, it finds what you are looking for and calls itself passing each find result as the argument. When the script is run with an argument, it executes the commands on the argument and then exits.

Cool idea, but bad style: it uses the same script for two purposes. If you want to reduce the number of files in your bin/, then you could merge all your scripts into a single one that has a big case clause at the start. Very clean solution, isn't it?

not to mention this will fail with find: ‘myscript.sh’: No such file or directory if started as bash myscript.sh .

Just a warning regarding the accepted answer's use of a shell: despite answering the question well, it might not be the most efficient way to execute some code on find results:

Here is a benchmark, under bash, of several kinds of solutions, including a simple for loop case (1465 directories, on a standard hard drive, armv7l GNU/Linux synology_armada38x_ds218j):

dosomething() { echo $1; }
export -f dosomething

time find . -type d -exec bash -c 'dosomething "$0"' {} \;
real    0m16.102s

time while read -d '' filename; do dosomething "$filename"; done < <(find . -type d -print0)

"find | while" and "for loop" seems best and similar in speed.

For those of you looking for a Bash function that will execute a given command on all files in current directory, I have compiled one from the above answers:
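A minimal sketch of what such a helper could look like (the name toall and its exact body are assumptions, not the original snippet):

# Hypothetical helper: run the given command on every file under the current
# directory. The unquoted $(find ...) expansion word-splits, which is why this
# version breaks on file names containing spaces.
toall() {
    for fname in $(find . -type f); do
        "$@" "$fname"
    done
}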

Note that it breaks with file names containing spaces (see below).


As an example, take this function:
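Something along these lines (the function name and body are assumptions):

# Hypothetical example function: replace every "hello" with "world" in the
# file passed as the first argument (relies on GNU sed's -i in-place editing).
replace_hello() {
    sed -i 's/hello/world/g' "$1"
}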

Say I wanted to change all instances of "hello" to "world" in all files in the current directory. I would do:
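Assuming the sketches above, that would be:

toall replace_hello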

To be safe with any symbols in filenames, use:
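A null-delimited variant of the same helper (again an assumed reconstruction), which survives spaces, newlines and other special characters in file names:

toall() {
    find . -type f -print0 | while IFS= read -r -d '' fname; do
        "$@" "$fname"
    done
}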

(but you need a find that handles -print0, e.g., GNU find).

It is not possible to execute a function that way.

To overcome this, you can place your function in a shell script and call that from find:

# dosomething.sh
dosomething () {
    echo "doing something with $1"
}
dosomething $1

I considered downvoting, but the solution in itself is not bad. Please just use correct quoting: dosomething $1 => dosomething "$1", and call your script correctly: find . -exec bash dosomething.sh {} \;

This is the correct approach. There's really no concern about additional files in ~/bin; presumably you already have a definition of dosomething in a startup file somewhere, and proper maintenance of your startup files will have you splitting them into distinct files anyway, so you might as well put that definition in an executable script.

To provide additions and clarifications to some of the other answers: if you are using the bulk option for exec or execdir (-exec command {} +) and want to retrieve all the positional arguments, you need to consider the handling of $0 with bash -c.

More concretely, consider the command below, which uses bash -c as suggested above, and simply echoes out file paths ending with '.wav' from each directory it finds:

find "$1" -name '*.wav' -execdir bash -c 'echo "$@"' _ <> + 

If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, they are assigned to positional parameters, starting with $0 .

Here, 'echo "$@"' is the command string, and _ {} are the arguments after the command string. Note that $@ is a special positional parameter in Bash that expands to all the positional parameters starting from 1. Also note that, with the -c option, the first argument is assigned to positional parameter $0.

This means that if you try to access all of the positional parameters with $@, you will only get parameters from $1 up. That is the reason why Dominik's answer has the _, a dummy argument that fills parameter $0, so that all of the arguments we want are available later through the $@ parameter expansion, for instance, or through the for loop as in that answer.

Of course, similar to the accepted answer, bash -c 'shell_function "$0" "$@"' would also work by explicitly passing $0 , but again, you would have to keep in mind that $@ won't work as expected.
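A quick illustration of how the first argument is consumed as $0 (the file names here are hypothetical):

bash -c 'echo "$@"' a.wav b.wav     # prints: b.wav  (a.wav was assigned to $0)
bash -c 'echo "$@"' _ a.wav b.wav   # prints: a.wav b.wav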


Linux command: find files and run command on them

You can use the -exec flag to execute a command on each matching file:

$ find ./ -type f -name "*.txt" -exec gedit "{}" \; 

Syntax

The syntax is a bit strange (see -exec command ; in the manpages for more):

The string `{}' is replaced by the current file name being processed 

You may also want to consider -execdir , which will do the same, but executes the command from the subdirectory containing the matched file (this is generally preferable).


The {} stands in for the current file name, and the semicolon just terminates the command. The backslash and the surrounding quotes are there to prevent shell expansion.

find . -type f -name "*.txt" -print0 | xargs -0 gedit

@xyz, you can read about the flags of any UNIX command using the man pages. Try man find or man xargs , then / to search for a given flag. The documentation is quite good, it should answer your questions.

-print0 prints a NULL character after each entry; -0 expects entries to be separated by a NULL character. It is the safest way to handle tricky names.

And isn't it a little bit strange that the syntax is not gedit xargs -0 instead of xargs -0 gedit? I see that the former doesn't work, but I fail to see why.

xargs is preferable to -exec for performance reasons, since xargs can "batch up" a number of arguments before passing them to gedit. If one needed to run gedit against each file individually, replace '. -print0 | xargs -0 gedit' with '. -print0 | xargs -0 -i gedit {}'. xargs is one of those commands, like find and screen, that you never know how you got along without a year after you learn about it.

Источник

How to run find -exec?

You missed a ; (escaped here as \; to prevent the shell from interpreting it) or a +, and a {}:
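For example, to search the found files for a string (the pattern chrome is taken from the examples below; the exact command from the question is assumed):

find . -exec grep chrome {} \;
find . -exec grep chrome {} +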

find will execute grep and will substitute {} with the filename(s) found. The difference between ; and + is that with ; a single grep command is executed for each file, whereas with + as many files as possible are passed to grep at once.

If you use the \; ending construct, grep is passed one file at a time, so it doesn't display the file name by default, only the matched lines. To get a file list instead, use grep -ls inside the find construct.

You don't need to use find for this at all; grep is able to handle opening the files either from a glob list of everything in the current directory:
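Presumably something like:

grep chrome *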

… or even recursively for a folder and everything under it:
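Again, an assumed reconstruction:

grep -R chrome .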

grep will choke if the expansion goes over ARG_MAX. -R will visit everything, while with find one can more easily add primitives to exclude certain files (-name, etc.) or avoid visiting subtrees at all (-prune).

Good points @Mel. My point was that in all likelihood the asking party was making things more complex than they needed to be by introducing find when grep could do the job, but in some cases it would be more effective to use find to fine-tune the file list before going out to grep.

@DaCheetah This is a misunderstanding -- the above comment has nothing to do with find -exec. exec here refers to the libc functions which directly call the execve syscall (e.g. exec), not find -exec, which does its own manipulation first. The point is that grep never even sees the arguments to choke on; it's the kernel that rejects them.

find . | xargs grep 'chrome'
find . | xargs grep 'chrome' -ls

The first shows you the lines in the files, the second just lists the files.

Caleb's option is neater, fewer keystrokes.

