How to redirect output to a file and stdout
In bash, calling foo would display any output from that command on stdout. Calling foo > output would redirect any output from that command to the file specified (in this case ‘output’). Is there a way to redirect output to a file and have it display on stdout?
If someone just ended up here looking to capture error output to a file, take a look at unix.stackexchange.com/questions/132511/…
A note on terminology: when you execute foo > output the data is written to stdout and stdout is the file named output . That is, writing to the file is writing to stdout. You are asking if it is possible to write both to stdout and to the terminal.
@WilliamPursell I’m not sure your clarification improves things 🙂 How about this: OP is asking if it’s possible to direct the called program’s stdout to both a file and the calling program’s stdout (the latter being the stdout that the called program would inherit if nothing special were done; i.e. the terminal, if the calling program is an interactive bash session). And maybe they also want to direct the called program’s stderr similarly («any output from that command» might be reasonably interpreted to mean including stderr).
If we have multiple commands whose output we want to pipe, group them with ( ). For example: (echo hello; echo world) | tee output.txt
11 Answers
The command you want is named tee :
For example, if you only care about stdout:
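A minimal example of that (reconstructed here; the comments below refer to ls -lR / and output.file):
ls -lR / | tee output.file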
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout. The combined stream is then also written to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
If OP wants «all output» to be redirected, you’ll also need to grab stderr: «ls -lR / 2>&1 | tee output.file»
@evanmcdonnal The answer is not wrong; it just may not be specific enough, or complete, depending on your requirements. There certainly are conditions where you might not want stderr as part of the output being saved to a file. When I answered this 5 years ago I assumed that the OP only wanted stdout, since he mentioned stdout in the subject of the post.
Ah sorry, I might have been a little confused. When I tried it I just got no output, perhaps it was all going to stderr.
Use -a argument on tee to append content to output.file , instead of overwriting it: ls -lR / | tee -a output.file
If you’re using $? afterwards it will return the status code of tee, which is probably not what you want. Instead, in bash you can use ${PIPESTATUS[0]}, which holds the exit status of the first command in the pipeline.
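A small illustration of the difference (bash-specific; ls -lR / is just the example command from the comments above):
ls -lR / | tee output.file
echo "${PIPESTATUS[0]}"   # exit status of ls, not of tee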
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr and stdout streams. tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that this relies on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they’ll end up out of chronological order in the output file and on the screen.
It’s also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn’t even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer , part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee . E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
How to redirect stderr to a file [duplicate]
@terdon This is a more specific question, and it rightly shows up in google search for the more specific question, which is a good thing.
@nroose yes, and it will keep showing up, that won’t change. But any new answers should go to the more general question.
2 Answers
There are two main output streams in Linux (and other OSs), standard output (stdout) and standard error (stderr). Error messages, like the ones you show, are printed to standard error. The classic redirection operator ( command > file ) only redirects standard output, so standard error is still shown on the terminal. To redirect stderr as well, you have a few choices:
- Redirect stdout to one file and stderr to another file:
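For example (the filenames here are placeholders):
command > output.log 2> error.log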
For more information on the various control and redirection operators, see here.
So hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ >> result_hashdeep.txt 2> error_hashdeep.txt & or hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ >> result_hashdeep.txt 2>&1 or hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ &> result_mixed.txt
@AndréM.Faria yes. But the last two commands are equivalent, they will send both error and output to the same file.
As in the link you provided, I could use |& instead of 2>&1; they are equivalent. Thanks for your time.
The first thing to note is that there are a couple of ways to do this, depending on your purpose and shell, so this requires some understanding of multiple aspects. Additionally, certain commands such as time and strace write their output to stderr by default, and may or may not provide a method of redirection specific to that command.
The basic theory behind redirection is that a process spawned by the shell (assuming it is an external command and not a shell built-in) is created via the fork() and execve() syscalls, and before execve() happens another syscall, dup2(), performs the necessary redirections. In that sense, redirections are inherited from the parent shell. Forms such as m&>n and m>n.txt inform the shell how to perform the open() and dup2() syscalls (see also How input redirection works, What is the difference between redirection and pipe, and What does & exactly mean in output redirection).
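One way to watch this happen is a quick trace (a sketch, assuming strace is installed; the exact syscall names, e.g. open vs openat, can vary between systems):
strace -f -e trace=open,openat,dup2,dup3 sh -c 'stat /etc/hostname 2> /dev/null'
# the trace shows the shell open()ing /dev/null and then dup2()-ing that descriptor onto fd 2 before stat runs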
Shell redirections
Most typical is via 2> in Bourne-like shells, such as dash (which is symlinked to /bin/sh) and bash; the first is the default, POSIX-compliant shell and the other is what most users use for interactive sessions. They differ in syntax and features, but luckily for us error stream redirection works the same in both (except for the non-standard &>). In the case of csh and its derivatives, stderr redirection doesn’t quite work there.
Let’s come back to the 2> part. Two key things to notice: > is the redirection operator, with which we open a file, and the integer 2 stands for the stderr file descriptor; in fact, this is exactly how the POSIX standard for the shell language defines redirection in section 2.7:
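The general format there (paraphrasing the standard rather than quoting it verbatim) is:
[n]redir-op word
where the optional n is a file descriptor number and redir-op is one of the redirection operators such as >, >> or <.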
For simple > redirection, the integer 1 is implied for stdout, i.e. echo Hello World > /dev/null is just the same as echo Hello World 1>/dev/null. Note that the integer and the redirection operator cannot be quoted, otherwise the shell doesn’t recognize them as such and instead treats them as a literal string of text. As for spacing, it’s important that the integer is right next to the redirection operator, but the file can either be next to the redirection operator or not, i.e. command 2>/dev/null and command 2> /dev/null will both work just fine.
The somewhat simplified syntax for a typical command in the shell would be
command [arg1] [arg2] 2> /dev/null
The trick here is that redirection can appear anywhere. That is, both 2> /dev/null command [arg1] and command [arg1] 2> /dev/null are valid. Note that for the bash shell there exists an &> way to redirect both the stdout and stderr streams at the same time, but again, it’s bash-specific, and if you’re striving for portability of scripts it may not work. See also Ubuntu Wiki and What is the difference between &> and 2>&1.
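For illustration, the bash-specific form mentioned above would look like this (the filename is a placeholder):
command &> all_output.log
which in bash is shorthand for command > all_output.log 2>&1.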
Note: the > redirection operator truncates a file and overwrites it, if the file exists. 2>> may be used for appending stderr to a file.
As you may notice, > applies to one single command. For scripts, we can redirect the stderr stream of the whole script from the outside, as in myscript.sh 2> /dev/null, or we can make use of the exec built-in. The exec built-in has the power to rewire the stream for the whole shell session, so to speak, whether interactively or via a script. Something like
#!/bin/sh
exec 2> ./my_log_file.txt
stat /etc/non_existing_file
In this example, the log file should show stat: cannot stat ‘/etc/non_existing_file’: No such file or directory .
Yet another way is via functions. As kopciuszek noted in his answer, we can write a function declaration with a redirection already attached, that is
some_function() {
    command1
    command2
} 2> my_log_file.txt
Commands writing to stderr exclusively
Commands such as time and strace write their output to stderr by default. In the case of the time command, the only viable alternative is to redirect the output of the whole command, that is
time echo foo 2>&1 > file.txt
Alternatively, a synchronous list or subshell could be redirected if you want to separate the output (as shown in a related post):
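For example (a sketch of the kind of command the linked post describes):
{ time sleep 1; } 2> time.txt
Here the timing report, which the time keyword writes to stderr, ends up in time.txt.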
Other commands, such as strace or dialog, provide means to redirect stderr. strace has the -o option, which allows specifying a filename where output should be written. There is also an option for writing a text file for each subprocess that strace sees. The dialog command writes the text user interface to stdout but its output to stderr, so in order to save its output to a variable (because var=$(...) and pipelines only receive stdout) we need to swap the file descriptors
result=$(dialog --inputbox test 0 0 2>&1 1>/dev/tty);
but additionally, there is the --output-fd flag, which we can also utilize. There’s also the method of named pipes. I recommend reading the linked post about the dialog command for a thorough description of what’s happening.
How to redirect stderr and stdout to different files in the same line in script?
Is there any way I can output the stderr to the error file and output stdout to the output file in the same line of bash?
5 Answers
Just add them in one line: command 2>> error 1>> output
However, note that >> is for appending if the file already has data, whereas > will overwrite any existing data in the file.
So, command 2> error 1> output if you do not want to append.
Just for completeness’ sake, you can write 1> as just >, since the default file descriptor is the output. So 1> and > are the same thing.
So, command 2> error 1> output becomes, command 2> error > output
How is this different from something like command &2>err.log? I think I am totally confusing syntaxes. (A link to an appropriate answer covering all the bash pipe-isms might be in order.)
@ThorSummoner tldp.org/LDP/abs/html/io-redirection.html is what I think you're looking for. Fwiw, it looks like command &2>err.log isn't quite legit; the ampersand in that syntax is used for a file descriptor as the target, e.g. command 1>&2 would reroute stdout to stderr.
@DreadPirateShawn, please don’t link the ABS as a reference — it occasionally contains outright inaccuracies, and very frequently contains bad-practice examples. wiki.bash-hackers.org/howto/redirection_tutorial is a far better reference source on redirection.
your_command 2>stderr.log 1>stdout.log
More information
The numerals 0 through 9 are file descriptors in bash. 0 stands for standard input, 1 stands for standard output, 2 stands for standard error. 3 through 9 are spare for any other temporary usage.
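As a sketch of how a spare descriptor can be used, here is a common idiom for swapping stdout and stderr (your_command is a placeholder):
your_command 3>&1 1>&2 2>&3 3>&-
# fd 3 temporarily saves stdout, stdout is pointed at stderr, stderr at the saved copy, then fd 3 is closed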
Any file descriptor can be redirected to a file or to another file descriptor using the operator >. You can use the operator >> instead to append to a file rather than truncating it.
file_descriptor > filename
file_descriptor >&file_descriptor
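For instance, the second form is what sends a message to standard error:
echo "something went wrong" 1>&2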