find: missing argument to `-exec' in SSH command
I'm using SSH inside a CI/CD pipeline (so it's non-interactive), and trying to execute a couple of find commands (among others) to change the permissions of files and directories after running an LFTP mirror, but I keep getting this error (which makes the whole pipeline fail):
find: missing argument to `-exec'
ssh -i ~/.ssh/id_rsa $USERNAME@$HOST "[other commands…]; find $SOME_PATH/ -type d -exec 'chmod 755 {} \;' && find $SOME_PATH/ -type f -exec 'chmod 644 {} \;' && echo Done"
I've already tried using escaped double quotes like so: -exec \"chmod 755 {} \;\", but it keeps throwing the same error. What would be the main issue here? EDIT: Solved. I removed the quotes around the -exec arguments, removed the &&s, and appended an extra semicolon ; to each find, and it works as expected.
ssh -i ~/.ssh/id_rsa $USERNAME@$HOST "[other commands…]; find $SOME_PATH/ -type d -exec chmod 755 {} \;; find $SOME_PATH/ -type f -exec chmod 644 {} \;; echo Done"
So use -exec whatever-command {} \;; [other command: echo, find, ls, whatever…]. Please check this answer for more information: https://unix.stackexchange.com/a/139800/291364
[…] When find sees that spurious exit after the -exec … ; directive, it doesn't know what to do with it; it hazards a (wrong) guess that you meant it to be a path to traverse. You need a command separator: put another ; after \; (with or without a space before). […]
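That fix can be checked in isolation; here is a minimal sketch using a throwaway directory (the path is invented for the demo):

```shell
# Throwaway tree standing in for $SOME_PATH
mkdir -p /tmp/exec-demo/sub
touch /tmp/exec-demo/sub/file.txt

# \; terminates each -exec action; the extra ; separates the shell commands
find /tmp/exec-demo -type d -exec chmod 755 {} \;; \
find /tmp/exec-demo -type f -exec chmod 644 {} \;; \
echo Done
```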
find: missing argument to -exec
What I am basically trying to do is go through a directory recursively (including any subdirectories), run ffmpeg on the .rm files to convert them to .mp3, and, once that is done, remove the .rm file that has just been converted.
I found that when I copy a similar command from the browser into a terminal, the \ disappears, so I had to manually type it in front of the ;
A -exec command must be terminated with a ; (so you usually need to type \; or ';' to avoid interpretation by the shell) or a + . The difference is that with ; the command is called once per file, while with + it is called as few times as possible (usually once, but there is a maximum length for a command line, so it might be split up), with all the filenames. See this example:
$ cat /tmp/echoargs
#!/bin/sh
echo $1 - $2 - $3
$ find /tmp/foo -exec /tmp/echoargs {} \;
/tmp/foo - -
/tmp/foo/one - -
/tmp/foo/two - -
$ find /tmp/foo -exec /tmp/echoargs {} +
/tmp/foo - /tmp/foo/one - /tmp/foo/two
Your command has two errors:
First, you use {}; , but the ; must be an argument of its own.
Second, the command ends at the && . You specified "run find, and if that was successful, remove the file named {}; ". If you want to use shell constructs in the -exec command, you need to explicitly run them in a shell, such as -exec sh -c 'ffmpeg … && rm' .
However, you should not embed the {} inside the shell command; it will cause problems when filenames contain special characters. Instead, you can pass additional parameters to the shell after -c command_string (see man sh):
$ ls
$(echo damn.)
$ find * -exec sh -c 'echo "{}"' \;
damn.
$ find * -exec sh -c 'echo "$1"' - {} \;
$(echo damn.)
You see, the $(…) is evaluated by the shell in the first example. Imagine there was a file called $(rm -rf /) :-)
(Side note: The - is not needed, but the first argument after the command string is assigned to the variable $0 , which is a special variable normally containing the name of the program being run, and setting that to a parameter is a little unclean, though it probably won't cause any harm here; so we set it to just - and start with $1 .)
So your command could be something like
find -exec bash -c 'ffmpeg -i "$1" -sameq "$1".mp3 && rm "$1"' - {} \;
But there is a better way: find supports -and and -or , so you can do things like find -name foo -or -name bar . That also works with -exec , which evaluates to true if the command exits successfully, and to false otherwise. See this example:
$ ls
false true
$ find * -exec {} \; -and -print
true
It only runs the -print if the command was successful, which it was for true but not for false .
So you can chain two -exec statements with -and , and find will only execute the second if the first exited successfully.
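A sketch of that chaining, with true standing in for a command that might fail (the filenames are invented for the example):

```shell
cd "$(mktemp -d)"
touch song.rm notes.txt

# The -print after -and runs only for files where the -exec exited with status 0
find . -name '*.rm' -exec true {} \; -and -print
```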
Bash on Ubuntu Server: find: missing argument to `-exec'
Is that dot at the end on purpose? Did you already try to debug the problem yourself, for example by reducing the complexity of the find command? Any insights from that?
Dot at the end of the script.
I think your problem is that dot. For example, the following command gives the same error:
find ./ -type f -exec ls \;.
What is the purpose of the dot there?
-exec command ;
Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. […]
I assume that the dot at the end (instead of ; ) breaks parsing of the last argument, and you receive the error message.
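The difference is easy to reproduce; a small sketch in a scratch directory (invented for the demo):

```shell
cd "$(mktemp -d)"
touch a.txt

# Trailing dot: find never sees a lone ";" terminator, so it errors out
find . -type f -exec ls {} \;. 2>/dev/null || echo "failed as expected"

# Without the dot, \; is parsed as the terminator and the command runs
find . -type f -exec ls {} \;
```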
Thanks, this is the correct answer. Removing the dot at the end, i.e. having ";" instead of ";." , was the solution.
Wild guess: your FreeNAS box uses a much simpler shell, which does not feature history expansion.
Therefore you do not have to escape the exclamation point in that shell, but you do need to in bash; otherwise bash would interpret it as history expansion.
QUOTING
Quoting is used to remove the special meaning of certain characters or words to the shell. Quoting can be used to disable special treatment for special characters, to prevent reserved words from being recognized as such, and to prevent parameter expansion. Each of the metacharacters listed above under DEFINITIONS has special meaning to the shell and must be quoted if it is to represent itself. When the command history expansion facilities are being used (see HISTORY EXPANSION below), the history expansion character, usually !, must be quoted to prevent history expansion.
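History expansion only bites in interactive shells, but the escaping habit is easy to keep; a small sketch (filenames invented):

```shell
cd "$(mktemp -d)"
touch keep.txt skip.mp3

# Escaped ! is safe even where history expansion is enabled…
find . -type f \! -name '*.mp3'

# …and -not avoids the metacharacter entirely
find . -type f -not -name '*.mp3'
```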
Shell script - find: missing argument to `-exec'
But I get this error: find: missing argument to `-exec'. I've tried looking at other posts on this but can't get it working. I'm using Cygwin to run this script on Windows.
Did you find a solution for your latest question that you have deleted? stackoverflow.com/q/53381852/6309
If you have created an alias named logs , then to use it you should be writing $logs , not logs .
Also, maybe the find you are using is the Windows one and not Cygwin's. You can check by typing 'which find'.
Oops, thank you for that; I didn't notice it as I was trying to figure out the error. I updated my code, but I'm still getting the same error.
An alias isn't needed. What's probably happening is that $logs (as the unquoted expansion of an undefined variable) is disappearing, and find is searching the current working directory.
This error is one emitted by a UNIX-style find , so I don't believe the claim that a PATH problem is causing the wrong version to be used. Much more likely the script has DOS-style newlines, so the last argument comes out as ;$'\r' rather than just ; , which the -exec action expects/requires as its terminator.
@chepner, why would find searching the current directory result in the specific error the OP reports?
#!/bin/sh
logs=/cygdrive/l
find "$logs" -name '*system.log*' -mtime +14 -exec rm -- {} +
…but those won't address your real problem (the one causing -exec to report an error), which is almost certainly the presence of DOS newlines in your script.
find 's -exec reports the error in question when it doesn't see an argument containing only the exact string ; . Outside quotes, \; should be that argument, but it can be different if your file has hidden characters. And a DOS text file will appear to have hidden characters when opened by a program expecting a UNIX text file, because the two formats have different line separators.
To fix this, open the file in a native-UNIX editor such as vim, run :set fileformat=unix , and save it; or use dos2unix to convert it in-place.
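If dos2unix isn't available, sed can do the same conversion in place; a sketch that first fakes a DOS-saved script (filename invented):

```shell
cd "$(mktemp -d)"

# Simulate a script saved with CRLF line endings
printf 'echo hi\r\n' > script.sh

# Count lines containing a carriage return (1 here, since the file is DOS-style)
grep -c "$(printf '\r')" script.sh

# Strip the trailing CR from every line (what dos2unix does)
sed -i 's/\r$//' script.sh
```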