Linux error messages to file

How do I write standard error to a file while using tee with a pipe?

I know how to use tee to write the output (standard output) of aaa.sh to bbb.out, while still displaying it in the terminal:
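The command itself did not survive extraction; it is presumably the plain pipe into tee. A minimal sketch, using a shell function as a stand-in for the question's hypothetical ./aaa.sh:

```shell
#!/bin/sh
# Stand-in for ./aaa.sh: a command that writes one line to stdout.
aaa() { echo "hello from aaa.sh"; }

# stdout flows through the pipe into tee, which writes it to bbb.out
# AND echoes it to the terminal. stderr bypasses the pipe entirely.
aaa | tee bbb.out
```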

I did, and I will edit my post to clarify that. I do believe lhunath’s solution will suffice. Thanks for the help, all!

12 Answers

I’m assuming you want to still see standard error and standard output on the terminal. You could go for Josh Kelley’s answer, but I find keeping a tail around in the background which outputs your log file very hackish and kludgy. Notice how you need to keep an extra file descriptor and do cleanup afterward by killing it, and technically you should be doing that in a trap '...' EXIT.

There is a better way to do this, and you’ve already discovered it: tee .

Only, instead of just using it for your standard output, have a tee for standard output and one for standard error. How will you accomplish this? Process substitution and file redirection:

command > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2) 

Let’s split it up and explain:

>(...) (process substitution) creates a FIFO and lets tee listen on it. Then, it uses > (file redirection) to redirect the standard output of command to the FIFO that your first tee is listening on.

The same thing for the second:

We use process substitution again to make a tee process that reads from standard input and dumps it into stderr.log. tee outputs its input back on standard output, but since its input is our standard error, we want to redirect tee's standard output to our standard error again. Then we use file redirection to redirect command's standard error to the FIFO's input (tee's standard input).

Process substitution is one of those really lovely things you get as a bonus of choosing Bash as your shell as opposed to sh (POSIX or Bourne).

In sh , you’d have to do things manually:

out="${TMPDIR:-/tmp}/out.$$" err="${TMPDIR:-/tmp}/err.$$"
mkfifo "$out" "$err"
trap 'rm "$out" "$err"' EXIT
tee -a stdout.log < "$out" &
tee -a stderr.log < "$err" >&2 &
command >"$out" 2>"$err"

I tried this: $ echo "HANG" > >(tee stdout.log) 2> >(tee stderr.log >&2) which works, but waits for input. Is there a simple reason why this happens?

@SillyFreak I don’t understand what you want to do or what the problem is you’re having. echo test; exit doesn’t produce any output on stderr, so err will remain empty.

thanks for that comment; I figured out what my logical error was afterwards: when invoked as an interactive shell, bash prints a command prompt and echoes exit to stderr. However, if stderr is redirected, bash starts as noninteractive by default; compare /bin/bash 2> err and /bin/bash -i 2> err


And for those for whom "seeing is believing", a quick test: (echo "Test Out"; >&2 echo "Test Err") > >(tee stdout.log) 2> >(tee stderr.log >&2)

Can this be used to write both streams to the same file, in a synchronized way, so that the order of stderr and stdout messages is preserved, while each stream of the subprocess is still directed to the corresponding stream of the calling process? I don’t know, but I can imagine that if tee does some buffering, the subprocess may finish writing something to stdout and then write to stderr, yet the second tee, receiving the stderr message, may finish writing it to the file earlier than the first.

This simply redirects standard error to standard output, so tee echoes both to log and to the screen. Maybe I’m missing something, because some of the other solutions seem really complicated.
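The command this answer describes did not survive extraction; it is presumably the classic merge-then-pipe form. A sketch, with a function standing in for the question's aaa.sh:

```shell
#!/bin/sh
# Stand-in for ./aaa.sh: writes one line to each stream.
aaa() { echo "out line"; echo "err line" >&2; }

# 2>&1 duplicates stderr onto stdout *before* the pipe, so tee sees
# the merged stream and writes both lines to bbb.out and the screen.
aaa 2>&1 | tee bbb.out
```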

Note: Since Bash version 4 you may use |& as an abbreviation for 2>&1 | :
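A sketch of that shorthand (same stand-in function; this requires bash 4+, not plain sh):

```shell
#!/bin/bash
# Stand-in command writing to both streams.
aaa() { echo "out line"; echo "err line" >&2; }

# |& is Bash 4+ shorthand for 2>&1 | : merge stderr into stdout,
# then pipe the combined stream into tee.
aaa |& tee bbb.out
```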

That works fine if you want both stdout (channel 1) and stderr (channel 2) logged to the same file (a single file containing the mixture of both stdout and stderr). The other, more complicated solution allows you to separate stdout and stderr into 2 different files (stdout.log and stderr.log, respectively). Sometimes that is important, sometimes it’s not.

The other solutions are far more complicated than necessary in many cases. This one works perfectly for me.

The problem with this method is that you lose the exit/status code from the aaa.sh process, which can be important (e.g. when using in a makefile). You don’t have this problem with the accepted answer.

@Stefaan I believe you can retain exit status if you prepend the command chain with set -o pipefail followed by ; or && if I’m not mistaken.
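A sketch of that suggestion; fail is a hypothetical stand-in that exits with status 3, which tee would otherwise mask:

```shell
#!/bin/bash
# With pipefail, a pipeline's exit status is the rightmost non-zero
# status, so tee's success no longer hides the failure feeding it.
set -o pipefail
fail() { echo "boom" >&2; return 3; }

fail 2>&1 | tee bbb.out
echo "pipeline exit status: $?"
```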

This may be useful for people finding this via Google. Simply uncomment the example you want to try out. Of course, feel free to rename the output files.

#!/bin/bash
STATUSFILE=x.out
LOGFILE=x.log
### All output to screen
### Do nothing, this is the default
### All output to one file, nothing to the screen
#exec > ${LOGFILE} 2>&1
### All output to one file and all output to the screen
#exec > >(tee ${LOGFILE}) 2>&1
### All output to one file, STDOUT to the screen
#exec > >(tee -a ${LOGFILE}) 2> >(tee -a ${LOGFILE} >/dev/null)
### All output to one file, STDERR to the screen
### Note you need both of these lines for this to work
#exec 3>&1
#exec > >(tee -a ${LOGFILE} >/dev/null) 2> >(tee -a ${LOGFILE} >&3)
### STDOUT to STATUSFILE, stderr to LOGFILE, nothing to the screen
#exec > ${STATUSFILE} 2>${LOGFILE}
### STDOUT to STATUSFILE, stderr to LOGFILE and all output to the screen
#exec > >(tee ${STATUSFILE}) 2> >(tee ${LOGFILE} >&2)
### STDOUT to STATUSFILE and screen, STDERR to LOGFILE
#exec > >(tee ${STATUSFILE}) 2>${LOGFILE}
### STDOUT to STATUSFILE, STDERR to LOGFILE and screen
#exec > ${STATUSFILE} 2> >(tee ${LOGFILE} >&2)
echo "This is a test"
ls -l sdgshgswogswghthb_this_file_will_not_exist_so_we_get_output_to_stderr_aronkjegralhfaff
ls -l ${LOGFILE}



How to redirect stderr to a file [duplicate]

@terdon This is a more specific question, and it rightly shows up in google search for the more specific question, which is a good thing.

@nroose yes, and it will keep showing up, that won’t change. But any new answers should go to the more general question.

2 Answers

There are two main output streams in Linux (and other OSs), standard output (stdout) and standard error (stderr). Error messages, like the ones you show, are printed to standard error. The classic redirection operator ( command > file ) only redirects standard output, so standard error is still shown on the terminal. To redirect stderr as well, you have a few choices:

    Redirect stdout to one file and stderr to another file:
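The concrete commands appear to have been lost from this answer; a sketch of the usual choices, with cmd as a hypothetical command writing to both streams:

```shell
#!/bin/sh
# Hypothetical stand-in that writes one line to each stream.
cmd() { echo "to stdout"; echo "to stderr" >&2; }

cmd > out.log 2> err.log   # stdout and stderr to separate files
cmd > both.log 2>&1        # both streams into one file (order matters!)
cmd 2>> err.log            # append stderr; stdout stays on the terminal
```

Note the order in the second form: 2>&1 must come after > both.log, otherwise stderr is duplicated onto wherever stdout pointed before the file was opened.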

For more information on the various control and redirection operators, see here.

So:
hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ >> result_hashdeep.txt 2> error_hashdeep.txt &
or
hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ >> result_hashdeep.txt 2>&1
or
hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ &> result_mixed.txt

@AndréM.Faria yes. But the last two commands are equivalent, they will send both error and output to the same file.

As in the link you provided, I could use |& instead of 2>&1; they are equivalent. Thanks for your time.

The first thing to note is that there are a couple of ways, depending on your purpose and shell; therefore this requires a slight understanding of multiple aspects. Additionally, certain commands such as time and strace write output to stderr by default, and may or may not provide a method of redirection specific to that command.

The basic theory behind redirection is that a process spawned by the shell (assuming it is an external command and not a shell built-in) is created via the fork() and execve() syscalls, and before that happens another syscall, dup2(), performs the necessary redirects before execve() runs. In that sense, redirections are inherited from the parent shell. Forms like m>&n and m>n.txt inform the shell how to perform the open() and dup2() syscalls (see also How input redirection works, What is the difference between redirection and pipe, and What does & exactly mean in output redirection).

Shell redirections

Most typical is via 2> in Bourne-like shells, such as dash (which is symlinked to /bin/sh) and bash; the first is the default, POSIX-compliant shell and the other is what most users use for interactive sessions. They differ in syntax and features, but luckily for us error stream redirection works the same (except for the non-standard &>). In the case of csh and its derivatives, stderr redirection doesn’t quite work.

Let’s come back to the 2> part. Two key things to notice: > is the redirection operator, with which we open a file, and the integer 2 stands for the stderr file descriptor; in fact, this is exactly how the POSIX standard for the shell language defines redirection in section 2.7:

For simple > redirection, the integer 1 is implied for stdout, i.e. echo Hello World > /dev/null is just the same as echo Hello World 1>/dev/null. Note that the integer or redirection operator cannot be quoted, otherwise the shell doesn’t recognize them as such, and instead treats them as a literal string of text. As for spacing, it’s important that the integer is right next to the redirection operator, but the file can either be next to the redirection operator or not, i.e. command 2>/dev/null and command 2> /dev/null both work just fine.
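A small demonstration of the quoting and spacing rules just described (the path is an arbitrary name assumed not to exist):

```shell
#!/bin/sh
# Unquoted: 2> is a redirection, so the error message is discarded.
ls /definitely_nonexistent_12345 2> /dev/null

# Both spacings are equivalent:
ls /definitely_nonexistent_12345 2>/dev/null

# Quoted: '2>' is passed to ls as a literal argument, no redirection
# happens, and the error message still reaches the terminal:
#   ls /definitely_nonexistent_12345 '2>' /dev/null
```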


The somewhat simplified syntax for a typical command in the shell would be:

 command [arg1] [arg2] 2> /dev/null 

The trick here is that the redirection can appear anywhere. That is, both 2> /dev/null command [arg1] and command 2> /dev/null [arg1] are valid. Note that for the bash shell, there exists the &> way to redirect both stdout and stderr streams at the same time, but again, it’s bash-specific; if you’re striving for portability of scripts, it may not work. See also Ubuntu Wiki and What is the difference between &> and 2>&1.

Note: The > redirection operator truncates a file and overwrites it, if the file exists. 2>> may be used for appending stderr to a file.

As you may notice, > is meant for one single command. For scripts, we can redirect the stderr stream of the whole script from outside, as in myscript.sh 2> /dev/null, or we can make use of the exec built-in. The exec built-in has the power to rewire the stream for the whole shell session, so to speak, whether interactively or via script. Something like:

#!/bin/sh
exec 2> ./my_log_file.txt
stat /etc/non_existing_file

In this example, the log file should show stat: cannot stat ‘/etc/non_existing_file’: No such file or directory .

Yet another way is via functions. As kopciuszek noted in his answer, we can write function declaration with already attached redirection, that is

some_function() {
    command1
    command2
} 2> my_log_file.txt

Commands writing to stderr exclusively

Commands such as time and strace write their output to stderr by default. In the case of the time command, the only viable alternative is to redirect the output of the whole command, that is:

time echo foo 2>&1 > file.txt 

alternatively, a synchronous list or subshell could be redirected if you want to separate the output (as shown in a related post):
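The example that followed appears to be missing; a sketch of redirecting a brace group, so that the timing report printed on stderr by the shell keyword time is captured separately (bash; time.log is an assumed filename):

```shell
#!/bin/bash
# The redirection is attached to the whole group, so it catches the
# timing report that the 'time' keyword prints on stderr.
{ time echo foo; } 2> time.log

# 'foo' still appears on stdout; real/user/sys lines are in time.log.
cat time.log
```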

Other commands, such as strace or dialog, provide means to redirect stderr. strace has the -o option which allows specifying a filename where output should be written. There is also an option for writing a text file for each subprocess that strace sees. The dialog command writes the text user interface to stdout but its output to stderr, so in order to save its output to a variable (because var=$(...) and pipelines only capture stdout) we need to swap the file descriptors:

result=$(dialog --inputbox test 0 0 2>&1 1>/dev/tty); 

but additionally, there is the --output-fd flag, which we can also utilize. There’s also the method of named pipes. I recommend reading the linked post about the dialog command for a thorough description of what’s happening.

