Redirect all output to file in Bash [duplicate]
I know that in Linux, to redirect output from the screen to a file, I can use either > or tee. However, I'm not sure why part of the output still goes to the screen and is not written to the file. Is there a way to redirect all output to a file?
9 Answers
That part is written to stderr; use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or, if you want both streams in the same file:
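foo > allout.txt 2>&1
(The file name here is arbitrary; this is the >word 2>&1 form quoted from the bash manual below.)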
Note: this works in (ba)sh; check your shell for the proper syntax.
Well, I found the reference and have deleted my post for having incorrect information. From the bash manual: "ls 2>&1 > dirlist directs only the standard output to dirlist, because the standard error was duplicated from the standard output before the standard output was redirected to dirlist." 🙂
Also from the bash manual: "There are two formats for redirecting standard output and standard error: &>word and >&word. Of the two forms, the first is preferred. This is semantically equivalent to >word 2>&1."
Two important addenda: (1) If you want to pipe both stdout and stderr, the 2>&1 has to come before the pipe, cmd1 2>&1 | cmd2; putting the 2>&1 after the | would redirect stderr for cmd2 instead. (2) Even if both stdout and stderr are redirected, a program can still access the terminal (if any) by opening /dev/tty; this is normally done only for password prompts (e.g. by ssh). If you need to redirect that too, the shell cannot help you, but expect can.
All POSIX operating systems have 3 standard streams: stdin, stdout, and stderr. stdin is the input; it can be fed from another command's stdout or stderr. stdout is the primary output, which is redirected with >, >>, or |. stderr is the error output; it is handled separately so that errors do not get piped into a command or written into a file where they could cause trouble. Normally it is sent to a log of some kind, or dumped directly to the terminal, even when stdout is redirected. To redirect both to the same place, use:
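command &> /path/to/output
(&> is a bash shorthand; the path is a placeholder.)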
EDIT: thanks to Zack for pointing out that the above solution is not portable—use instead:
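command > /path/to/output 2>&1
(This form is POSIX-compatible and works in any Bourne-style shell.)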
If you want to silence the error, do:
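command 2> /dev/null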
How to redirect output to a file and stdout
In bash, calling foo would display any output from that command on stdout. Calling foo > output would redirect any output from that command to the specified file (in this case ‘output’). Is there a way to redirect the output to a file and have it display on stdout?
If someone just ended up here looking for capturing error output to file, take a look at unix.stackexchange.com/questions/132511/…
A note on terminology: when you execute foo > output the data is written to stdout and stdout is the file named output . That is, writing to the file is writing to stdout. You are asking if it is possible to write both to stdout and to the terminal.
@WilliamPursell I’m not sure your clarification improves things 🙂 How about this: OP is asking if it’s possible to direct the called program’s stdout to both a file and the calling program’s stdout (the latter being the stdout that the called program would inherit if nothing special were done; i.e. the terminal, if the calling program is an interactive bash session). And maybe they also want to direct the called program’s stderr similarly («any output from that command» might be reasonably interpreted to mean including stderr).
If you have multiple commands whose combined output you want to pipe, group them with ( ). For example: (echo hello; echo world) | tee output.txt
11 Answers
The command you want is named tee:
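foo | tee output.file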
For example, if you only care about stdout:
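ls -lR / | tee output.file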
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout, and both are then also written to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a, as in:
program [arguments...] 2>&1 | tee -a outfile
If OP wants «all output» to be redirected, you’ll also need to grab stderr: «ls -lR / 2>&1 | tee output.file»
@evanmcdonnal The answer is not wrong, it just may not be specific enough, or complete depending on your requirements. There certainly are conditions where you might not want stderr as part of the output being saved to a file. When I answered this 5 years ago I assumed that the OP only wanted stdout, since he mentioned stdout in the subject of the post.
Ah sorry, I might have been a little confused. When I tried it I just got no output, perhaps it was all going to stderr.
Use the -a argument on tee to append content to output.file instead of overwriting it: ls -lR / | tee -a output.file
If you're using $? afterwards, it will return the status code of tee, which is probably not what you want. In bash you can use the PIPESTATUS array instead, which holds the exit status of each command in the pipeline.
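For example, in bash:
ls -lR / | tee output.file
echo "ls exited with status ${PIPESTATUS[0]}"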
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into stdout. tee outfile takes the combined stream and writes it both to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they’ll end up out of chronological order in the output file and on the screen.
It’s also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn’t even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. It causes stdout and stderr to be written to the screen and to the file immediately, and keeps them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
Systemd should write to all consoles during startup #9899
Is your feature request related to a problem? Please describe.
Systemd only outputs to /dev/console — which (per the linux documentation) only writes to the final console listed on the kernel command line.
This was previously brought up in #3403, which was closed when some users reported that systemd was outputting to all consoles (which later turned out to be plymouth working around the issue). To make sure this issue is still happening I built a systemd image using mkosi --kernel-command-line 'console=ttyS0 console=tty1' and ran it under VirtualBox. The output is below: ttyS0 on the left, tty1 on the right. As you can see only the initial messages with dmesg-style timestamps are getting output, and not the [ OK ] Listening on udev Kernel Socket. stuff.
Where I work we have a control panel which our customers use to create and administer their virtual servers. They may specify a script to run on first boot-up of their server. Under systemd-based distros the output from their scripts is only output to the serial console — but our panel only supports the graphical console. We’d like our customers to be able to see it on both. Under upstart-based distros (i.e. old-but-still-supported Ubuntus) we see the output on both consoles — and it seems there is a patch in Debian to support this on sysvinit (see debian bug #181756) — though we do not have any images with sysvinit out of the box at the moment.
Describe the solution you’d like
The set of consoles that dependency startup state and output is sent to should be configurable either by the kernel command line or in the systemd-system.conf (or similar). I haven’t been able to find any mention of this configuration and a tiny dive into the systemd source made it seem hardwired to /dev/console .
Describe alternatives you’ve considered
There are some workarounds — per #3403 it seems that plymouth mirrors output on /dev/console to all other consoles, and I shall be writing a workaround in our script which calls customer firstboot scripts to tee the output to all consoles.
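A minimal sketch of that kind of wrapper, assuming bash, that the two consoles are tty1 and ttyS0, and a made-up path for the customer script (none of these names come from the issue itself):
#!/bin/bash
# Mirror the firstboot script's combined output to both consoles.
# /path/to/customer-firstboot, tty1 and ttyS0 are placeholders for illustration.
/path/to/customer-firstboot 2>&1 | tee /dev/tty1 > /dev/ttyS0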
How can I send stdout to multiple commands?
There are some commands which filter or act on input and then pass it along as output, usually to stdout, but some commands just take stdin, do whatever they do with it, and output nothing. I'm most familiar with OS X, so the two that come to mind immediately are pbcopy and pbpaste, which are means of accessing the system clipboard. Anyhow, I know that if I want to take stdout and have the output go to both stdout and a file, I can use the tee command. And I know a little about xargs, but I don't think that's what I'm looking for. I want to know how I can split stdout to go to two (or more) commands. For example:
cat file.txt | stdout-split -c1 pbcopy -c2 grep -i errors
There is probably a better example than that one, but I really am interested in knowing how I can send stdout to a command that does not relay it, while keeping stdout from being "muted". I'm not asking about how to cat a file, grep part of it, and copy it to the clipboard; the specific commands are not that important. Also, I'm not asking how to send this to a file and stdout; this may be a "duplicate" question (sorry), but I did some looking and could only find similar ones asking how to split between stdout and a file, and the answers to those questions seemed to be tee, which I don't think will work for me. Finally, you may ask "why not just make pbcopy the last thing in the pipe chain?" and my response is 1) what if I want to use it and still see the output in the console? 2) what if I want to use two commands which do not output to stdout after they process the input? Oh, and one more thing: I realize I could use tee and a named pipe (mkfifo), but I was hoping for a way this could be done inline, concisely, without a prior setup 🙂
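For what it's worth, one standard approach in bash and zsh (not taken from this excerpt) is to combine tee with process substitution, which lets tee feed other commands rather than plain files:
cat file.txt | tee >(pbcopy) | grep -i errors
Here pbcopy receives a copy of the file, while grep still gets the full stream and its matches still reach the terminal.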
linux — write commands from one terminal to another
What are you really trying to do? It may be that your logic for trying this in the first place is flawed; there might be an easier solution to get the same result. Also, terminals run in separate processes, so you'd need some form of interprocess communication to get them to talk to one another.
7 Answers
#!/usr/bin/python
import sys, os, fcntl, termios

if len(sys.argv) != 3:
    sys.stderr.write("usage: ttyexec.py tty command\n")
    sys.exit(1)

# Open the target terminal device, e.g. /dev/pts/3 or /dev/tty2.
fd = os.open("/dev/" + sys.argv[1], os.O_RDWR)
cmd = sys.argv[2]
# TIOCSTI injects each character into the terminal's input queue,
# as if it had been typed there; the trailing newline runs the command.
for i in range(len(cmd)):
    fcntl.ioctl(fd, termios.TIOCSTI, cmd[i])
fcntl.ioctl(fd, termios.TIOCSTI, '\n')
os.close(fd)
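A hypothetical invocation (the pts name is just an example) would type ls -l, followed by Enter, into the terminal attached to /dev/pts/3:
python ttyexec.py pts/3 'ls -l'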
It is possible to show the output of a command on multiple terminals simultaneously with the following script, and it works with all console programs, including editors. For example, running:
execmon.bash 'nano hello.txt' 5
opens the editor, and both the output and the text that we type will be mirrored to virtual terminal number 5. You can see your terminals:
Each virtual terminal has an associated number.
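One way to list them (a common choice, not necessarily what the original answer used) is who, which shows the pts number of each open terminal session:
who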
It works with the normal terminal, Konsole, and xterm. Just create the file execmon.bash and put this in it:
#!/bin/bash
# execmon.bash
# Script to run a command in a terminal and display the output
# in the terminal used and an additional one.
param=$#
if [ $param -eq 2 ]; then
    echo $1 | tee a.out && cat a.out > /dev/pts/$2 && exec `cat a.out` | tee /dev/pts/$2 && rm a.out
else
    echo "Usage:"
    echo "execmon 'command' num"
    echo "-command is the command to run (you have to enclose it in ' quotes)"
    echo "-num is the number of the virtual terminal to also output to"
fi
execmon.bash 'ls -l' 5
execmon.bash 'nano Hello.txt' 5