Linux redirect all to stdout

Redirect stderr and stdout in Bash [duplicate]

I want to redirect both standard output and standard error of a process to a single file. How do I do that in Bash?

I’d like to say this is a surprisingly useful question. Many people do not know how to do this, as they don’t have to do so frequently, and it is not the best documented behavior of Bash.

Sometimes it’s useful to see the output (as usual) AND to redirect it to a file. See the answer by Marko below. (I say this here because it’s easy to just look at the first accepted answer if that’s sufficient to solve a problem, but other answers often provide useful information.)

15 Answers

Take a look here. It should be:

yourcommand &> filename

It redirects both standard output and standard error to file filename.

According to wiki.bash-hackers.org/scripting/obsolete, it seems to be obsolete in the sense that it is not part of POSIX, but the bash man page makes no mention of it being removed from bash in the near future. The man page does specify a preference for ‘&>’ over ‘>&’, which is otherwise equivalent.

I guess we should not use &> as it is not in POSIX, and common shells such as "dash" do not support it.

An extra hint: if you use this in a script, make sure it starts with #!/bin/bash rather than #!/bin/sh, since it requires Bash.
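For comparison, a minimal sketch of the two spellings (some_command and out.log are placeholders):

# Bash-only shorthand: send both stdout and stderr to out.log
some_command &> out.log

# POSIX-portable equivalent (works under #!/bin/sh, dash, etc.):
some_command > out.log 2>&1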

do_something 2>&1 | tee -a some_file 

This redirects standard error to standard output and pipes the combined stream to tee, which appends it to some_file and also prints it to standard output.

I have a ruby script (which I don’t want to modify in any way) that prints error messages in bold red. This ruby script is then invoked from my bash script (which I can modify). When I use the above, it prints the error messages in plain text, minus the formatting. Is there any way to retain on-screen formatting and get the output (both stdout and stderr) in a file as well?

Note that (by default) this has the side-effect that $? no longer refers to the exit status of do_something , but the exit status of tee .
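If the exit status of do_something is what you care about, bash's PIPESTATUS array or the pipefail option can recover it; a small sketch, reusing the command above:

do_something 2>&1 | tee -a some_file
status=${PIPESTATUS[0]}          # exit status of do_something, not of tee
echo "do_something exited with $status"

# Alternatively, make the pipeline's own status reflect any failing stage:
set -o pipefail
do_something 2>&1 | tee -a some_file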

You can redirect stderr to stdout and the stdout into a file:

some_command > some_file 2>&1

This format is preferred over the more popular &> format, which only works in Bash. In the Bourne shell it could be interpreted as running the command in the background. The format is also more readable: 2 (standard error) is redirected to 1 (standard output).

If you want to append to a file then you must do it this way: echo "foo" >> bar.txt 2>&1 . AFAIK there's no way to append using &>

I think the interpretation that 2>&1 redirects stderr to stdout is wrong; I believe it is more accurate to say it sends stderr to the same place that stdout is going at this moment in time. Thus placing 2>&1 after the first redirect is essential.

@SlappyTheFish, actually, there is a way: &>>. From the bash man page: "The format for appending standard output and standard error is: &>>word. This is semantically equivalent to >>word 2>&1"
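To illustrate the two append spellings side by side (&>> needs bash 4 or newer; the names are placeholders):

# bash 4+ shorthand: append both streams to bar.txt
some_command &>> bar.txt

# portable equivalent
some_command >> bar.txt 2>&1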

# Close standard output file descriptor
exec 1<&-

# Open standard output as $LOG_FILE file for read and write
exec 1<>$LOG_FILE

# Redirect standard error to standard output
exec 2>&1

echo "This line will appear in $LOG_FILE, not 'on screen'"

Now, a simple echo will write to $LOG_FILE, and it is useful for daemonizing.
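If you also want to switch back to the terminal later in the same script, a common pattern is to save the original descriptors first; a minimal sketch, not part of the answer above:

# Save the original stdout and stderr on spare descriptors 3 and 4
exec 3>&1 4>&2

# From here on, everything goes to the log file
exec 1>>"$LOG_FILE" 2>&1
echo "this goes to the log"

# Restore the original streams and close the saved copies
exec 1>&3 2>&4 3>&- 4>&-
echo "back on the terminal"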

To the author of the original post,

It depends what you need to achieve. If you just need to redirect in/out of a command you call from your script, the answers are already given. Mine is about redirecting within the current script, which affects all commands/built-ins (including forks) after the mentioned code snippet.

Another cool solution is to redirect to both standard error and standard output and to log to a log file at once, which involves splitting "a stream" into two. This functionality is provided by the 'tee' command, which can write/append to several file descriptors (files, sockets, pipes, etc.) at once: tee FILE1 FILE2 ... >(cmd1) >(cmd2) ...
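As a minimal, standalone illustration of that tee feature, separate from the full script below (file and command names are placeholders):

# Copy stdout to a file, to a line counter, and to the terminal at once
printf '%s\n' one two three | tee copy.txt >(wc -l > count.txt)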

exec 3>&1 4>&2 1> >(tee >(logger -i -t 'my_script_tag') >&3) 2> >(tee >(logger -i -t 'my_script_tag') >&4)

trap 'cleanup' INT QUIT TERM EXIT

get_pids_of_ppid() {
    local ppid="$1"

    RETVAL=''
    local pids=`ps x -o pid,ppid | awk "\\$2 == \\"$ppid\\" { print \\$1 }"`
    RETVAL="$pids"
}

# Needed to kill processes running in background
cleanup() {
    local current_pid element
    local pids=( "$$" )
    running_pids=( "${pids[@]}" )

    while :; do
        current_pid="${running_pids[0]}"
        [ -z "$current_pid" ] && break

        running_pids=( "${running_pids[@]:1}" )
        get_pids_of_ppid $current_pid
        local new_pids="$RETVAL"
        [ -z "$new_pids" ] && continue

        for element in $new_pids; do
            running_pids+=( "$element" )
            pids=( "$element" "${pids[@]}" )
        done
    done

    kill ${pids[@]} 2>/dev/null
}

So, from the beginning. Let’s assume we have a terminal connected to /dev/stdout (file descriptor #1) and /dev/stderr (file descriptor #2). In practice, it could be a pipe, socket or whatever.

  • Create file descriptors (FDs) #3 and #4 and point them to the same "location" as #1 and #2 respectively. Changing file descriptor #1 doesn't affect file descriptor #3 from now on. Now file descriptors #3 and #4 point to standard output and standard error respectively. These will be used as the real terminal standard output and standard error.
  • 1> >(...) redirects standard output to the command in parentheses
  • The parentheses (sub-shell) execute 'tee', reading from exec's standard output (a pipe), and redirect to the 'logger' command via another pipe to the sub-shell in parentheses. At the same time, the same input is copied to file descriptor #3 (the terminal)
  • The second part, very similar, does the same trick for standard error and file descriptors #2 and #4.

The result of running a script having the above line and additionally this one:

echo "Will end up in standard output (terminal) and /var/log/messages" 
$ ./my_script
Will end up in standard output (terminal) and /var/log/messages
$ tail -n1 /var/log/messages
Sep 23 15:54:03 wks056 my_script_tag[11644]: Will end up in standard output (terminal) and /var/log/messages

If you want to see a clearer picture, add these two lines to the script:

Only one exception: in the first example you wrote exec 1<>$LOG_FILE, which causes the original logfile to always be overwritten. For real logging, a better way is exec 1>>$LOG_FILE, which causes the log to always be appended.

That’s true, although it depends on intentions. My approach is to always create a unique and timestamped log file. The other is to append. Both ways are ‘logrotateable’. I prefer separate files, which require less parsing, but as I said, whatever floats your boat 🙂

Your second solution is informative, but what’s with all the cleanup code? It doesn’t seem relevant, and if so, it only muddles an otherwise good example. I’d also like to see it reworked slightly so that FDs 1 and 2 aren’t redirected to the logger but rather 3 and 4 are, so that anything calling this script might manipulate 1 and 2 further under the common assumption that stdout==1 and stderr==2, but my brief experimentation suggests that’s more complex.

I like it better with the cleanup code. It might be a bit of a distraction from the core example, but stripping it would make the example incomplete. The net is already full of examples without error handling, or at least a friendly note that it still needs about a hundred lines of code to make it safe to use.

I wanted to elaborate on the clean-up code. It’s part of a script which daemonizes, ergo becomes immune to the HANG-UP signal. ‘tee’ and ‘logger’ are processes spawned with the same PPID, and they inherit the HUP trap from the main bash script. So, once the main process dies, they become inherited by init [1]. They will not become zombies (defunct). The clean-up code makes sure that all background tasks are killed if the main script dies. It also applies to any other process which might have been created and running in the background.

bash your_script.sh 1>file.log 2>&1 

1>file.log instructs the shell to send standard output to the file file.log , and 2>&1 tells it to redirect standard error (file descriptor 2) to standard output (file descriptor 1).

Note: the order matters, as liw.fi pointed out; 2>&1 1>file.log doesn’t work.
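A quick way to see the difference, using a command that writes one line to each stream:

# Both lines end up in file.log:
(echo out; echo err >&2) 1>file.log 2>&1

# Wrong order: stderr goes to the terminal (where stdout pointed at that
# moment) and only stdout reaches file.log:
(echo out; echo err >&2) 2>&1 1>file.log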

To me, the second way makes more sense. First send all of stderr to stdout, then send stdout to the file. Why would we want to send stderr to stdout after stdout goes to the file?

But this gives a syntax error:

yourcommand &>> filename
syntax error near unexpected token `>'
This works instead:

yourcommand 1>> filename 2>&1

Short answer: Command >filename 2>&1 or Command &>filename

Consider the following code which prints the word "stdout" to stdout and the word "stderror" to stderror.

$ (echo "stdout"; echo "stderror" >&2) stdout stderror 

Note that the '&' operator tells bash that 2 is a file descriptor (which points to stderr) and not a file name. If we left out the '&', this command would print stdout to stdout, and create a file named "2" and write stderror there.
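You can check this yourself; the following leaves a file literally named 2 in the current directory:

$ (echo "stdout"; echo "stderror" >2)
stdout
$ cat 2
stderror
$ rm 2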

By experimenting with the code above, you can see for yourself exactly how redirection operators work. For instance, by changing which of the two descriptors, 1 or 2, is redirected to /dev/null, the following two lines of code delete everything from stdout and everything from stderror respectively (printing what remains).

$ (echo "stdout"; echo "stderror" >&2) 1>/dev/null stderror $ (echo "stdout"; echo "stderror" >&2) 2>/dev/null stdout 

Now we can explain why the following code produces no output:

(echo "stdout"; echo "stderror" >&2) >/dev/null 2>&1 

To truly understand this, I highly recommend you read this webpage on file descriptor tables. Assuming you have done that reading, we can proceed. Note that Bash processes redirections left to right; thus Bash sees >/dev/null first (which is the same as 1>/dev/null), and sets file descriptor 1 to point to /dev/null instead of stdout. Having done this, Bash then moves rightwards and sees 2>&1. This sets file descriptor 2 to point to the same file as file descriptor 1 (and not to file descriptor 1 itself; see this resource on pointers for more info). Since file descriptor 1 points to /dev/null, and file descriptor 2 points to the same file as file descriptor 1, file descriptor 2 now also points to /dev/null. Thus both file descriptors point to /dev/null, and this is why no output is rendered.

To test if you really understand the concept, try to guess the output when we switch the redirection order:

(echo "stdout"; echo "stderror" >&2) 2>&1 >/dev/null 

The reasoning here is that, evaluating from left to right, Bash sees 2>&1 and thus sets file descriptor 2 to point to the same place as file descriptor 1, i.e. stdout. It then sets file descriptor 1 (remember that >/dev/null = 1>/dev/null) to point to /dev/null, thus deleting everything which would usually be sent to standard out. So all we are left with is what was not sent to stdout in the subshell (the code in the parentheses), i.e. "stderror". The interesting thing to note here is that even though 1 is just a pointer to stdout, redirecting pointer 2 to 1 via 2>&1 does NOT form a chain of pointers 2 -> 1 -> stdout. If it did, as a result of redirecting 1 to /dev/null, the code 2>&1 >/dev/null would give the pointer chain 2 -> 1 -> /dev/null, and thus the code would generate nothing, in contrast to what we saw above.

Finally, I’d note that there is a simpler way to do this:

From section 3.6.4 here, we see that we can use the operator &> to redirect both stdout and stderr. Thus, to redirect both the stderr and stdout output of any command to /dev/null (which deletes the output), we simply type $ command &> /dev/null or, in the case of my example:

$ (echo "stdout"; echo "stderror" >&2) &>/dev/null 
  • File descriptors behave like pointers (although file descriptors are not the same as file pointers)
  • Redirecting a file descriptor "a" to a file descriptor "b" which points to file "f" causes file descriptor "a" to point to the same place as file descriptor "b", i.e. file "f". It DOES NOT form a chain of pointers a -> b -> f
  • Because of the above, order matters: 2>&1 >/dev/null is not the same as >/dev/null 2>&1. One generates output and the other does not!

Finally have a look at these great resources:
