How to append contents of multiple files into one file
and it did not work. I want my script to add a newline at the end of each text file. E.g., for files 1.txt, 2.txt, 3.txt: put the contents of 1, 2, and 3 into 0.txt. How do I do it?
12 Answers
You need the cat (short for concatenate) command, with shell redirection (>) into your output file.
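A minimal sketch, using the file names from the question:

```shell
# Create the sample input files named in the question
printf 'one\n'   > 1.txt
printf 'two\n'   > 2.txt
printf 'three\n' > 3.txt

# cat reads each file in order; > redirects the combined stream into 0.txt
cat 1.txt 2.txt 3.txt > 0.txt
```

Use >> instead of > if 0.txt should be appended to rather than replaced.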
@blasto it depends. You would use >> to append one file onto another, while > overwrites the output file with whatever's directed into it. As for the newline: is there a newline as the first character in 1.txt? You can find out by using od -c and seeing if the first character is a \n.
@blasto You're definitely heading in the right direction. Bash certainly accepts glob patterns for filename matching, so perhaps the quotes messed things up a bit in your script? I always try working out commands like this using ls in a shell; when I get the command right, I just cut-and-paste it into a script as-is. You might also find the -x option useful in your scripts: it will echo each expanded command in the script before execution.
To maybe stop somebody from making the same mistake: cat 1.txt 2.txt > 1.txt will just overwrite 1.txt with the content of 2.txt (the shell truncates 1.txt before cat ever reads it). It does not merge the two files into the first one.
Another option, for those of you who still stumble upon this post like I did, is to use find -exec :
find . -type f -name '*.txt' -exec cat {} + >> output.file
In my case, I needed a more robust option that would look through multiple subdirectories so I chose to use find . Breaking it down:
Look within the current working directory.
Only interested in files, not directories, etc.
Whittle down the result set by name
Execute the cat command for each result. + means only 1 instance of cat is spawned (thx @gniourf_gniourf)
As explained in other answers, append the cat-ed contents to the end of an output file.
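Putting the breakdown together, here is a sketch that also sidesteps the output-file pitfall raised in the comments by searching one directory and writing outside it (the demo/ directory and file contents are made up for illustration):

```shell
# Set up a small tree with files in a subdirectory
mkdir -p demo/sub
printf 'alpha\n' > demo/one.txt
printf 'beta\n'  > demo/sub/two.txt

# A single cat invocation receives all the matches thanks to '+';
# output.file lives outside demo/, so find never feeds it back to cat
find demo -type f -name '*.txt' -exec cat {} + >> output.file
```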
There are lots of flaws in this answer. First, the wildcard *.txt must be quoted (otherwise, the whole find command, as written, is useless). Another flaw comes from a gross misconception: the command that is executed is not cat {} >> 0.txt, but cat {}. Your command is in fact equivalent to { find . -type f -name *.txt -exec cat '{}' \; ; } >> 0.txt (I added grouping so that you realize what's really happening). Another flaw is that find is going to find the file 0.txt, and cat will complain by saying that input file is output file.
Thanks for the corrections. My case was a little bit different and I hadn’t thought of some of those gotchas as applied to this case.
You should put >> output.file at the end of your command, so that you don't mislead anybody (including yourself) into thinking that find will execute cat {} >> output.file for every found file.
Starting to look really good! One final suggestion: use -exec cat {} + instead of -exec cat {} \;, so that only one instance of cat is spawned with several arguments (+ is specified by POSIX).
Good answer, and a word of warning: I modified mine to find . -type f -exec cat {} + >> outputfile.txt and couldn't figure out why my output file wouldn't stop growing into the gigs even though the directory was only 50 megs. It was because I kept appending outputfile.txt to itself! So just make sure to name that file correctly, or place it in another directory entirely, to avoid this.
If your files all share an extension, then do something like this:
cat /path/to/files/*.txt >> finalout.txt
Keep in mind that you lose the ability to control the merge order, though. This may affect you if you have files named, e.g., file_1, file_2, … file_11, because of the lexical order in which files are sorted (file_11 sorts before file_2).
If all your files are named similarly you could simply do:
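For instance, with numbered names, a brace expansion (a bash/zsh feature; the range below is illustrative) keeps the order explicit:

```shell
# Three numbered input files
printf 'a\n' > 1.txt
printf 'b\n' > 2.txt
printf 'c\n' > 3.txt

# {1..3} expands to 1 2 3, so the files are concatenated in numeric order
cat {1..3}.txt > 0.txt
```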
If all your files are in a single directory, you can simply do:
Files 1.txt, 2.txt, … will go into 0.txt.
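The elided command here is presumably a simple glob, along these lines:

```shell
# Two numbered input files
printf 'first\n'  > 1.txt
printf 'second\n' > 2.txt

# The glob is expanded before 0.txt is created, so on the first run
# the output file is not picked up as one of the inputs
cat *.txt > 0.txt
```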
Already answered by Eswar. Keep in mind that you lose the ability to control the merge order, though. This may affect you if you have files named, e.g., file_1, file_2, … file_11, because of the lexical order in which files are sorted.
for i in {1..952}; do cat "$i.txt" >> 0.txt; done
I found this page because I needed to join 952 files into one, and found this to work much better when there are many files. The loop runs over however many numbers you need and cats each file, using >> to append it onto the end of 0.txt.
As brought up in the comments:
sed r 1.txt 2.txt 3.txt > merge.txt
sed h 1.txt 2.txt 3.txt > merge.txt
sed -n p 1.txt 2.txt 3.txt > merge.txt # -n is mandatory here
sed wmerge.txt 1.txt 2.txt 3.txt
Note that the last line also writes merge.txt (not wmerge.txt!). You can use w"merge.txt" to avoid confusion with the file name, and -n for silent output.
Of course, you can also shorten the file list with wildcards. For instance, in case of numbered files as in the above examples, you can specify the range with braces in this way:
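For example (assuming numbered files as above; brace expansion is a bash/zsh feature):

```shell
# Three numbered input files
printf 'a\n' > 1.txt
printf 'b\n' > 2.txt
printf 'c\n' > 3.txt

# Same as: sed -n p 1.txt 2.txt 3.txt > merge.txt
sed -n p {1..3}.txt > merge.txt
```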
If your files contain headers and you want to remove them in the output file, you can use:
for f in *.txt; do sed '2,$!d' "$f" >> 0.out; done
All of the (text) files into one
find . | xargs cat > outfile
xargs makes the output lines of find . the arguments of cat.
find has many options, like -name '*.txt' or -type; you should check them out if you want to use it in your pipeline.
You should explain what your command does. Btw, you should use find with -print0 and xargs with -0 in order to avoid some caveats with special filenames.
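A sketch of the safer variant the comment suggests, using NUL-delimited names so whitespace in a filename can't split it into two arguments (the directory and file names are made up):

```shell
mkdir -p d
printf 'x\n' > 'd/a file.txt'   # filename with a space, to show why -print0 matters
printf 'y\n' > d/b.txt

# -print0 emits NUL-terminated names; xargs -0 reads them back safely
find d -type f -name '*.txt' -print0 | xargs -0 cat > outfile
```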
If the original file contains non-printable characters, they will be lost when using the cat command. Using cat -v, the non-printables will be converted to visible character strings, but the output file would still not contain the actual non-printable characters of the original file. With a small number of files, an alternative might be to open the first file in an editor (e.g. vim) that handles non-printing characters. Then maneuver to the bottom of the file and enter ":r second_file_name". That will pull in the second file, including non-printing characters. The same can be done for additional files. When all files have been read in, enter ":w". The end result is that the first file will now contain what it did originally, plus the content of the files that were read in.
Send multiple files to a single file (textall.txt):
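Presumably something along these lines (the glob is illustrative; only the textall.txt name comes from the answer):

```shell
# Two sample input files
printf 'a\n' > 1.txt
printf 'b\n' > 2.txt

# All .txt files, in lexical order, into textall.txt
cat *.txt > textall.txt
```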
Concatenating Text Files into a Single File in Linux
1. Overview
Linux provides us with commands to perform various operations on files. One such operation is the concatenation – or merging – of files.
In this quick tutorial, we’ll see how to concatenate files into a single file.
2. Introducing cat Command
To concatenate files, we’ll use the cat (short for concatenate) command.
Let’s say we have two text files, A.txt and B.txt.
Now, let’s merge these files into file C.txt:
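Reconstructing the elided command from the surrounding text and the output shown below it:

```shell
# Sample contents matching the tutorial's expected output
printf 'Content from file A.\n' > A.txt
printf 'Content from file B.\n' > B.txt

# Merge A.txt and B.txt into C.txt
cat A.txt B.txt > C.txt
```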
The cat command concatenates files and prints the result to the standard output. Hence, to write the concatenated output to a file, we’ve used the output redirection symbol ‘>’. This sends the concatenated output to the file specified.
The above script will create the file C.txt with the concatenated contents:
Content from file A. Content from file B.
Note that if the file C.txt already exists, it’ll simply be overwritten.
Sometimes, we might want to append the content to the output file rather than overwriting it. We can do this by using the double output redirection symbol >>:
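A sketch of the append variant:

```shell
printf 'Content from file A.\n' > A.txt
printf 'Content from file B.\n' > B.txt

cat A.txt > C.txt    # first write creates (or overwrites) C.txt
cat B.txt >> C.txt   # >> appends to C.txt instead of overwriting it
```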
The examples above concatenate two files. But, if we want to concatenate more than two, we specify all these files one after another:
cat A.txt B.txt C.txt D.txt E.txt > F.txt
This’ll concatenate all the files in the order specified.
3. Concatenating Multiple Files Using a Wildcard
If the number of files to be concatenated is large, it is cumbersome to type in the name of each file. So, instead of specifying each file to be concatenated, we can use wildcards to specify the files.
For example, to concatenate all files in the current directory, we can use the asterisk(*) wildcard:
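For example (choosing an output name outside the *.txt glob, so the inputs can't feed back into the output):

```shell
printf 'a\n' > A.txt
printf 'b\n' > B.txt

# The .out extension keeps the output file from matching the *.txt glob
cat *.txt > all.out
```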
We have to be careful while using wildcards if the output file already exists — if the wildcard specified includes the output file, we’ll get an error:
cat: C.txt: input file is output file
It’s worth noting that when using wildcards, the order of the files isn’t predictable. Consequently, we’ll have to employ the method we saw in the previous section if the order in which the files are to be concatenated is important.
Going a step further, we can also use pipes to feed the names of the input files to the cat command. For example, we can echo the names of all the .txt files in the current directory and feed the output to cat via xargs:
echo *.txt | xargs cat > D.txt
4. Conclusion
In this tutorial, we saw how easy it is to concatenate multiple files using the Linux cat command.
How to merge all (text) files in a directory into one?
This is technically what cat («concatenate») is supposed to do, even though most people just use it for outputting files to stdout. If you give it multiple filenames it will output them all sequentially, and then you can redirect that into a new file; in the case of all files just use ./* (or /path/to/directory/* if you’re not in the directory already) and your shell will expand it to all the filenames (excluding hidden ones by default).
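A sketch of that, writing the merged file one level up so the ./* glob can't match it (the directory name is made up):

```shell
mkdir d
printf 'a\n' > d/one.txt
printf 'b\n' > d/two.txt

# The subshell keeps the cd local; ../merged-file sits outside the glob
( cd d && cat ./* > ../merged-file )
```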
Make sure you don't use the csh or tcsh shells for that (they expand the glob after opening merged-file for output), and that merged-file doesn't exist beforehand, or you'll likely end up with an infinite loop that fills up the filesystem.
The list of files is sorted lexically. If using zsh, you can change the order (to numeric, or by age, size, etc.) with glob qualifiers.
To include files in sub-directories, use:
find . ! -path ./merged-file -type f -exec cat {} + > merged-file
Though beware the list of files is not sorted and hidden files are included. -type f here restricts to regular files only as it’s unlikely you’ll want to include other types of files. With GNU find , you can change it to -xtype f to also include symlinks to regular files.
A zsh recursive glob (presumably something like cat ./**/*(-.) > merged-file) would do the same ((-.) achieving the equivalent of -xtype f), but give you a sorted list and exclude hidden files (add the D qualifier to bring them back). zargs can be used there to work around argument list too long errors.