Command line: search and replace in all filenames matched by grep
I’m trying to search and replace a string in all files matched by grep: grep -n 'foo' * will give me output in the form:
This appears to be what you want, based on the example you gave:
It is not recursive (it will not descend into subdirectories). For a nice solution replacing in selected files throughout a tree I would use find:
find . -name '*.html' -print -exec sed -i.bak 's/foo/bar/g' {} \;
The '*.html' is the expression that files must match, the .bak after the -i makes a copy of the original file with a .bak extension (it can be any extension you like), and the g at the end of the sed expression tells sed to replace multiple occurrences on one line (rather than only the first one). The -print to find is a convenience to show which files are being matched. All this depends on the exact versions of these tools on your system.
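A quick self-contained check of the command above (file names and contents are invented; GNU sed syntax assumed, since BSD/macOS sed wants a space after -i):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/sub"
printf 'foo foo\n' > "$dir/a.html"
printf 'foo\n'     > "$dir/sub/b.html"
printf 'foo\n'     > "$dir/c.txt"    # non-matching extension, left alone

# -name filters, -print shows each match, sed -i.bak edits in place
# and keeps a .bak copy of the original
find "$dir" -name '*.html' -print -exec sed -i.bak 's/foo/bar/g' {} \;
```

After running, a.html and sub/b.html contain "bar" and a.html.bak holds the original; c.txt is untouched.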
A word of warning for cygwin users: the find and sed combo seems to change the user rights for the files that are streamed through. This can be simply fixed by running chmod -R 644 * from the same directory level where find/sed was run.
A word of warning for people who don’t want to use the -i argument: if you don’t use it, it doesn’t work (don’t ask me why).
@knocte -i tells sed to modify the file, otherwise it just prints the modified version to stdout. If you don’t want the .bak file created, just omit the ‘.bak’ part, -i works standalone too.
On OSX you need to give the find command a directory to start from, for example find . -name '*.html' or find directoryname/ -name '*'.
Do you mean search and replace a string in all files matched by grep?
perl -p -i -e 's/oldstring/newstring/g' `grep -ril searchpattern *`
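A minimal demonstration of that one-liner (file names and contents invented). The backticks expand to the list of files grep found:

```shell
dir=$(mktemp -d)
printf 'oldstring here\n' > "$dir/hit.txt"
printf 'no match\n'       > "$dir/miss.txt"

# grep flags: -r recursive, -i case-insensitive, -l print file names only;
# perl then edits only the files grep listed
( cd "$dir" && perl -p -i -e 's/oldstring/newstring/g' `grep -ril oldstring *` )
```

Note that this breaks on file names containing whitespace, which is exactly what the xargs -0 variants further down address.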
Since this seems to be a fairly popular question thought I’d update.
Nowadays I mostly use ack-grep as it’s more user-friendly. So the above command would be:
perl -p -i -e 's/old/new/g' `ack -l searchpattern`
To handle whitespace in file names you can run:
ack --print0 -l searchpattern | xargs -0 perl -p -i -e 's/old/new/g'
You can do more with ack-grep. Say you want to restrict the search to HTML files only:
ack --print0 --html -l searchpattern | xargs -0 perl -p -i -e 's/old/new/g'
And if white space is not an issue it’s even shorter:
perl -p -i -e 's/old/new/g' `ack -l --html searchpattern`
perl -p -i -e 's/old/new/g' `ack -f --html`   # will match all html files
I think this might have a problem with files/folders that contain spaces. Can't open Untitled: No such file or directory, {} line 5 when trying "Untitled Folder/file.txt".
If your sed(1) has a -i option, then use it like this:
for i in *; do sed -i 's/foo/bar/' "$i"; done
If not, there are several variations on the following, depending on which language you want to play with:
ruby -i.bak -pe 'sub(%r{foo}, "bar")' *
perl -pi.bak -e 's/foo/bar/' *
I like and used the above solution or a system wide search and replace among thousands of files:
find -name '*.htm?' -print -exec sed -i.bak 's/foo/bar/g' {} \;
I assume that with '*.htm?' instead of '*.html' it finds .htm and .html files alike.
I replace the .bak with a tilde (~), the more widely used convention, to make cleanup of the backup files easier.
This works using grep without needing to use perl or find.
grep -rli 'old-word' * | xargs -i@ sed -i 's/old-word/new-word/g' @
xargs does not have a -i on OSX or BSD (openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man1/…) — did you mean to use an upper case "I"?
My xargs manpage (on Ubuntu) says that -i is deprecated, and to use -I instead. So, we should say: grep -rli 'old-word' * | xargs -I filepath sed -i 's/old-word/new-word/g' filepath
find . -type f -print0 | xargs -0 will process multiple space-containing file names at once, loading one interpreter per batch. Much faster.
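A self-contained sketch of that pipeline (invented file names; GNU sed assumed). xargs passes many file names to a single sed invocation per batch instead of spawning one process per file:

```shell
dir=$(mktemp -d)
printf 'old\n' > "$dir/a file.txt"   # name contains a space
printf 'old\n' > "$dir/b.txt"

# -print0/-0 pair file names with NUL separators, so the space is harmless
find "$dir" -type f -print0 | xargs -0 sed -i 's/old/new/g'
```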
@knocte, "cmd" is a token for the entire search-and-replace command of whichever tool one chooses. This answer addresses how to deal with file names containing whitespace.
The answer already given of using find and sed
find -name '*.html' -print -exec sed -i.bak 's/foo/bar/g' {} \;
is probably the standard answer. Or you could use perl -pi -e 's/foo/bar/g' instead of the sed command.
For most quick uses, you may find the command rpl is easier to remember. Here is replacement (foo -> bar), recursively on all files in the current directory:
It’s not available by default on most Linux distros but is quick to install (apt-get install rpl or similar).
However, for tougher jobs that involve regular expressions and back substitution, or file renames as well as search-and-replace, the most general and powerful tool I’m aware of is repren, a small Python script I wrote a while back for some thornier renaming and refactoring tasks. The reasons you might prefer it are:
- Support renaming of files as well as search-and-replace on file contents (including moving files between directories and creating new parent directories).
- See changes before you commit to performing the search and replace.
- Support regular expressions with back substitution, whole words, case insensitive, and case preserving (replace foo -> bar, Foo -> Bar, FOO -> BAR) modes.
- Works with multiple replacements, including swaps (foo -> bar and bar -> foo) or sets of non-unique replacements (foo -> bar, f -> x).
Find and replace text within multiple files
It does not have a graphical interface, but I would urge you to examine sed (man sed). It is the stream editor that has existed since the early days of UNIX.
Here I use sed to replace every occurrence of the word "cybernetnews" with "cybernet" in every file with the extension .c in the directory /home/user/directory:
find /home/user/directory -name \*.c -exec sed -i "s/cybernetnews/cybernet/g" {} \;
A more generic variation where you search recursively from the directory of execution and operate on only regular, readable, writeable files:
find ./ -type f -readable -writable -exec sed -i "s/cybernetnews/cybernet/g" {} \;
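A throwaway demonstration of the first form (file names and contents invented; GNU sed assumed):

```shell
dir=$(mktemp -d)
printf 'cybernetnews article\n' > "$dir/a.c"
printf 'cybernetnews\n'         > "$dir/b.txt"   # not matched by -name '*.c'

# only .c files are handed to sed; b.txt is never touched
find "$dir" -name '*.c' -exec sed -i 's/cybernetnews/cybernet/g' {} \;
```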
Voted up because sed is the best choice IMO, but the answer would be more useful if you explained its components (personally, I’m not familiar with, and curious about, the {} \;).
@sequoiamcdowell Woo! I missed this! Sorry. The braces are a placeholder that find replaces with each matched file name, and the semicolon terminates the -exec command; it is escaped (\;) so the shell passes it through to find instead of interpreting it itself.
@Christopher how does it work if the word contains spaces, like cyber net news? How can I convert this to cyber net?
The stream editor, sed, is a powerful utility for this kind of work and is my first choice. However, if you want to do this from an ordinary text editor using a native Ubuntu application, I would suggest you take a look at jEdit. It is available in the repositories and can be installed by typing in your console:
sudo apt-get install jedit
Start jEdit, click the Search menu, then click the Search in Directory item, and you will be presented with a search dialog.
This is similar to Notepad++ and does the same thing; I believe this is what you want.
This was perfect for when I was searching for an entire line of code, and was hesitant to escape every single regex-special character if I had done the same thing with sed .
This doesn’t work on a large number of files; after a couple of hours it hadn’t made one replacement, whereas geany took a few minutes to find all the occurrences. It may not be an efficient search and replace, but it warmed the room up nicely.
Another GUI option is regexxer:
Even if you don’t know regex, you can use this tool as a simple string search-and-replace tool and it will do the job for you; no need for regex or anything.
perl -pi -e 's/oldtext/newtext/g' *
replaces any occurrence of oldtext with newtext in all files in the current folder. However, you will have to escape all perl special characters within oldtext and newtext using a backslash.
Check out Geany; it is a perfect Notepad++ replacement for Linux. You can do exactly that, plus you can use regex.
As far as I can tell, Geany 1.23.1 (packaged with Ubuntu 13.10) allows searching multiple files using regex, but not replacing in them.
You can search and replace in multiple files in Geany if you open them all and then use the "in session" button in the Search/Replace dialog.
Ok, regexxer was pretty much what I had been asking for; I had just skipped it earlier because its name contains "regex" and I tend to avoid anything to do with regex. But I was wrong, and that tool has exactly what is required for search and replace.
A very simple solution: replace string_1 with string_2 in all *.txt files in the current folder:
sed -i 's/string_1/string_2/g' *.txt
Great answer! Just one addition: if you want to replace a string in any file of the current directory recursively, then try this: sed -i 's/string_1/string_2/g' $(grep -rEl 'string_1' ./)
I wrote a little script for just this thing. If you only need the basics and are not familiar with sed etc, take a look here: http://www.csrdu.org/nauman/2010/12/30/bash-script-to-find-and-replace-in-a-set-of-files/
The script is the following:
for f in submit_*; do sed "s/old_db_name/new_db_name/" < "$f" > "a_$f"; mv "a_$f" "$f"; done
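Run against a couple of sample files, the loop behaves like this (file names and contents invented; it writes each result to a temporary a_* file and then moves it back over the original):

```shell
dir=$(mktemp -d)
printf 'db=old_db_name\n' > "$dir/submit_1"
printf 'old_db_name\n'    > "$dir/submit_2"

# Same loop as above, with "$f" quoted so odd file names also work
( cd "$dir" && for f in submit_*; do
    sed "s/old_db_name/new_db_name/" < "$f" > "a_$f"
    mv "a_$f" "$f"
  done )
```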
You can use this script: copy the code below into a file named find_and_replace_in_files.sh.
I have modified it a little; please tell me your opinion.
#!/bin/bash
# *****************************************************************************
# find_and_replace_in_files.sh
# This script does a recursive, case-sensitive directory search and replace of files.
# To make a case-insensitive search and replace, use the -i switch in the grep call.
# Uses a startdirectory parameter so that you can run it outside of the specified
# directory - else this script will modify itself!
# *****************************************************************************

# **************** Change Variables Here ************
startdirectory="/your/start/directory"
searchterm="test"
replaceterm="test=ok!"
# ***************************************************

echo "***************************************************"
echo "* Search and Replace in Files Version 01-Aug-2012 *"
echo "***************************************************"
i=0
for file in $(grep -l -R $searchterm $startdirectory)
do
  cp $file $file.bak
  sed -e "s/$searchterm/$replaceterm/ig" $file > tempfile.tmp
  mv tempfile.tmp $file
  let i++
  echo "Modified: " $file
done
echo " *** All Done! *** Modified files:" $i
How can I do a recursive find/replace of a string with awk or sed?
Oh my god, this is exactly what I just did. But it worked and doesn’t seem to have done any harm. What’s the worst that could happen?
Quick tip for all the people using sed: It will add trailing newlines to your files. If you don’t want them, first do a find-replace that won’t match anything, and commit that to git. Then do the real one. Then rebase interactively and delete the first one.
You can exclude a directory, such as git, from the results by using -path ./.git -prune -o in find . -path ./.git -prune -o -type f -name '*matchThisText*' -print0 before piping to xargs
find /home/www \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
-print0 tells find to print each of the results separated by a null character, rather than a new line. In the unlikely event that your directory has files with newlines in the names, this still lets xargs work on the correct filenames.
\( -type d -name .git -prune \) is an expression which completely skips over all directories named .git . You could easily expand it, if you use SVN or have other folders you want to preserve — just match against more names. It’s roughly equivalent to -not -path .git , but more efficient, because rather than checking every file in the directory, it skips it entirely. The -o after it is required because of how -prune actually works.
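A scaled-down, self-contained version of the command above (directory layout and contents invented; GNU sed assumed), showing that the pruned .git directory is never rewritten:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/.git" "$dir/src"
printf 'subdomainA.example.com\n' > "$dir/src/conf"
printf 'subdomainA.example.com\n' > "$dir/.git/state"   # must stay untouched

# -prune stops find from descending into .git at all; the -o branch
# emits every other regular file, NUL-separated for xargs -0
find "$dir" \( -type d -name .git -prune \) -o -type f -print0 |
  xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
```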
For more information, see man find .