Read console output without redirect or pipe
Is there a way to read a command's console output without redirecting or piping its STDOUT/STDERR?

The problem with redirects and pipes is that some commands behave differently when their STDOUT and/or STDERR is redirected, e.g. colour or other formatting is removed, or there are more significant differences. Older tput versions require STDOUT or STDERR to be attached to a regular console to read its dimensions. With pipes there is an additional problem: the originating command loses the ability to control the originating shell, e.g. exiting a script from within a function whose output is piped is not possible.

What I want to achieve is to execute a command so that it prints its output to the console directly and is able to kill/exit the shell, while I can still parse/handle and, if needed, log its output. tee would be an obvious solution, but it suffers from the issues mentioned above.

I made a few attempts with read, running a loop that tries to read from the command's file descriptors, from /dev/stdout or from /dev/tty, but in all cases not a single line of the displayed command output is actually read. E.g.
```
#!/bin/bash
apt update 2>&1 &
pid=$!
while [[ -f /proc/$pid/fd/1 ]] && read -r line
do
    echo "$line" >> ./testfile
done < /proc/$pid/fd/1
```
```
#!/bin/bash
while read -r line
do
    echo "$line" >> ./testfile
done < /dev/tty &
pid=$!
apt update
kill $pid
```
but in both cases ./testfile remains empty. /dev/stdout is each process's own STDOUT file descriptor, so that of course cannot work. Does anyone have an idea how to achieve this, or know a similar alternative?
Why not run something like tmux on your console? Then you have access to all the output with tmux's pipe-pane command.
@meuh many thanks for your suggestion. That sounds like a solution in general; in our case, however, tmux or GNU screen cannot be used, as it needs to be a solution that works regardless of how the (bash) script is called. But it might be helpful for others who have similar needs.
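For reference, meuh's pipe-pane suggestion looks roughly like this (a sketch; the log pathname is arbitrary, and the command must be run from inside a tmux session):

```
# start appending everything this pane outputs to a file;
# -o toggles: running the same command again stops the logging
tmux pipe-pane -o 'cat >> ./pane-output.log'
```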
1 Answer
Preliminary note
In this answer I use the term "tty" for any terminal device; in practice it is often a pts (pseudo-terminal).
Simple approach
There are tools you can use to capture output from a command while still providing a tty for it: unbuffer, script, expect, screen, tmux, ssh, and possibly others.
For a single external command this is relatively easy. Take ls --color=auto. A plain pipe like ls --color=auto | tee ./log "suffers from the mentioned issues", as you noticed. But this:
```
unbuffer ls --color=auto | tee ./log    # or tee -a
```
nicely prints colorized and columnized output and stores a full copy in ./log. So does this:
```
script -qc 'ls --color=auto' ./log      # or script -a
```
although in this case the file will contain a header and a footer, which you may or may not want.
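For illustration, the extra lines look roughly like this; the exact wording depends on your script/util-linux version, and the timestamps here are invented:

```
Script started on 2024-05-01 12:00:00+00:00 [TERM="xterm-256color" ...]
...the actual command output...

Script done on 2024-05-01 12:00:01+00:00 [COMMAND_EXIT_CODE="0"]
```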
I won't elaborate on expect, screen or tmux. As a last resort (when no other tool is available) one can use ssh after setting up passwordless SSH access from localhost to itself. Something like this:
```
ssh -tt localhost "cd ${PWD@Q} && ls --color=auto" | tee ./log
```
(${var@Q} expands to the value of var quoted in a format that can be reused as input, so ${PWD@Q} is perfect here. The shell that runs ssh must be Bash; the syntax is not portable.)
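A quick illustration of the @Q operator (it requires Bash 4.4 or later):

```
var='a dir with spaces and a $dollar'
echo "${var@Q}"   # prints: 'a dir with spaces and a $dollar'
```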
unbuffer seems the simplest solution. The name suggests its main purpose is to disable buffering; nevertheless it does create a pseudo-terminal.
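You can verify this with a quick test (a sketch; [ -t 1 ] checks whether file descriptor 1 is a terminal, and unbuffer must be installed):

```
# piped directly, the command sees a pipe on its stdout
sh -c 'if [ -t 1 ]; then echo "fd 1 is a tty"; else echo "fd 1 is not a tty"; fi' | cat

# under unbuffer, the command sees a pseudo-terminal on its stdout,
# even though the overall output still flows through the pipe
unbuffer sh -c 'if [ -t 1 ]; then echo "fd 1 is a tty"; else echo "fd 1 is not a tty"; fi' | cat
```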
Complications
You want to be able to capture output also from a shell function, without losing its connection with the main shell interpreting the script. For this the function must run in the main shell, so the simple approach above, where a tool runs some external command, cannot be used, unless the external command is the whole script:
```
unbuffer ./the-script | tee ./log
```
Obviously, this solution is not intrinsic to the script. I guess you want to simply run ./the-script and capture the output as it goes to the terminal. So the script needs to somehow create a "capturable" tty for itself. This is the tricky part.
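To see why the function must run in the main shell, here is a minimal sketch of the pipe problem from the question (die is a hypothetical function):

```
#!/bin/bash
die() { echo "fatal error"; exit 1; }

die | tee ./log                      # the pipe runs die in a subshell,
                                     # so exit 1 only exits that subshell
echo "the script is still running"   # this line is still reached
```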
Possible solution
A possible solution is to run
```
unbuffer something | tee ./log &    # or tee -a
```
and to redirect file descriptors 1 and (optionally) 2 of the main shell to the tty created for something. something should silently sit there and do (almost) nothing.
- You can save the original file descriptors under different numbers; then you can stop logging at any time by redirecting stdout and stderr back to what they were.
- You can run multiple unbuffer … | tee … & pipelines and juggle file descriptors to log output from different parts of the script to different files.
- You can selectively redirect stdout and/or stderr of any single command (see the sketch after this list).
- The script should kill unbuffer or something when logging is no longer needed. It should do this when it exits normally or because of a signal. If it gets forcefully killed then it won't be able to do this. Maybe something should periodically check if the main process is still there and exit eventually. There's a nifty solution with flock (see below).
- something needs to report its tty to the main shell somehow. Just printing the output of tty is a possibility; the main shell would then open ./log independently and retrieve the information. After this, though, it's just garbage in the file (and on the original terminal screen). The script can truncate the file, but this will only work with tee -a (because tee -a vs tee is like >> vs > in this answer of mine). It's better if something passes the information via a separate channel: a temporary file or a named fifo created just for this.
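Here is the descriptor juggling from the list above as a sketch, assuming a logging tty has already been created as file descriptor 21 (an arbitrary number; the proof of concept below shows how to create it):

```
exec 7>&1 8>&2              # save the original stdout and stderr
exec 1>&21 2>&21            # start logging: both streams go to the logged tty
echo "this is logged"
exec 1>&7 2>&8 7>&- 8>&-    # stop logging: restore and close the backups
echo "this is not logged"

# or redirect a single command only, without touching the shell's own streams
ls --color=auto >&21 2>&21
```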
Proof of concept
The following code needs unbuffer, which comes with expect (in Debian: the expect package), and flock (in Debian: the util-linux package).
```
#!/bin/bash

save-stdout-stderr() {
   exec 7>&1 8>&2
}

restore-stdout-stderr() {
   exec 1>&7 2>&8 7>&- 8>&-
}

create-logging-tty() {
# usage: create-logging-tty descriptor log
   local tmpdir tmpfifo tmpdesc tty descriptor log
   descriptor="$1"
   log="$2"
   tmpdir="$(mktemp -d)"
   tmpfifo="$tmpdir/fifo"
   mkfifo "$tmpfifo"
   # reserve the requested descriptor early, so {tmpdesc} below cannot take it
   eval 'exec '"$descriptor"'>/dev/null'
   exec {tmpdesc}<>"$tmpfifo"
   flock "$tmpdesc"
   unbuffer sh -c '
      exec 3<>"$1"
      tty >&3
      flock 3
      flock 2
   ' sh "$tmpfifo" | tee "$log" &
   # learn the path to the newly created tty via the fifo
   if ! IFS= read -ru "$tmpdesc" -t 5 tty; then
      rm -r "$tmpdir"
      exec {descriptor}>&-
      flock -u "$tmpdesc"
      return 1
   fi
   rm -r "$tmpdir"
   eval 'exec '"$descriptor"'> "$tty"'
   flock "$descriptor"
   flock -u "$tmpdesc"
}

destroy-logging-tty() {
# usage: destroy-logging-tty descriptor
   local descriptor
   descriptor="$1"
   flock -u "$descriptor"
   exec {descriptor}>&-
}

# here the actual script begins

save-stdout-stderr
echo "This won't be logged."
create-logging-tty 21 ./log
exec 1>&21 2>&21
echo "This will be logged."

# proof of concept
ls --color=auto /dev

restore-stdout-stderr
destroy-logging-tty 21
echo "This won't be logged."
```
- save-stdout-stderr and restore-stdout-stderr use hardcoded values 7 and 8. You shouldn't use these descriptors for anything else. Rebuild this functionality if needed.
- create-logging-tty 21 ./log is a request to create a file descriptor 21 (arbitrary number) that would be a tty logged to ./log (arbitrary pathname). The function must be called from the main shell (not from a subshell) because it should create a file descriptor for the main shell.
- create-logging-tty uses eval to create a file descriptor with the requested number. eval can be evil but here it's safe, unless you pass some unfortunate (or rogue) shell code instead of a number. The function does not verify if its argument is a number. It's your job to make sure it is (or to add a proper test).
- In general, there is no error handling in the example, so maybe you want to add some. There's a return 1 when the function cannot get the path to the newly created tty via the fifo, but this exit status is not handled in the main code. Fix this and more on your own. In particular, you may want to test whether the desired descriptor really leads to a tty ([ -t 21 ]) before you redirect anything to it.
- create-logging-tty uses the {tmpdesc}<>… syntax to create a temporary file descriptor: the shell picks an unused number (10 or greater) for it and assigns the number to the variable tmpdesc. To make sure this doesn't take the requested number purely by chance, the function creates a file descriptor with the requested number first, before it knows the tty the descriptor should eventually point to. In effect you may request any sane number and the internals of the function won't collide with anything.
- If your whole script uses the {varname}<>… or similar syntax then you may not like the idea of a hardcoded number like 21. This can easily be solved:
```
exec {foo}>/dev/null
create-logging-tty "$foo" ./log
exec 1>&"$foo" 2>&"$foo"
…
destroy-logging-tty "$foo"
```
- create-logging-tty locks the fifo it created; it uses an open descriptor to the fifo for this. Notes:
- The function runs in the main shell, so in fact the descriptor is opened in the main shell and thus the process ( bash ) interpreting the whole script holds the lock.
- The lock does not prevent other processes from writing to the fifo. It only blocks them when they want to lock the fifo for themselves. This is what the shell code running under unbuffer is going to do.
- The shell code running under unbuffer reports its tty via the fifo and then it tries to lock the fifo using its own file descriptor 3. The point is flock blocks until it obtains the lock.
- The function reads the information about the tty, creates the requested descriptor and locks the tty using this descriptor. Only then does it unlock the fifo.
- The first flock under unbuffer is no longer blocked. Execution proceeds to the second flock, which tries to lock the tty and blocks.
- The main script continues. When the tty is no longer needed, the main shell unlocks it via destroy-logging-tty.
- Only then does the second flock under unbuffer unblock. The shell there exits (releasing its locks automatically), unbuffer destroys the tty and exits, and tee exits. No maintenance is needed.
If we didn't lock the fifo but let the shell under unbuffer lock the tty right away, it might happen that it obtains the lock before the main shell does, and then it would terminate immediately. The main shell cannot lock the tty before it learns what it is. By using another lock and the right sequence of locking and unlocking, we can be sure unbuffer exits only after the main shell is done with the tty.
The big advantage is: if the main shell exits for whatever reason (including SIGKILL) before it runs destroy-logging-tty, the kernel will release all locks held by the process anyway. This means unbuffer will eventually terminate and there will be no stale process.
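For clarity, here is the flock-on-a-descriptor idiom used above in isolation (a minimal sketch; the lock file pathname is arbitrary):

```
exec {lockfd}<>/tmp/demo-lockfile   # let bash pick a free descriptor number
flock "$lockfd"                     # take the lock; blocks while another
                                    # open file description holds it
# ...critical section...
flock -u "$lockfd"                  # release explicitly, or simply
exec {lockfd}>&-                    # close the fd; the kernel releases the
                                    # lock even if the process is killed
```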
Broader issue
The problem with mc and resizing stems from a broader issue. You wrote:
What I want to achieve is to execute a command so that it prints its output to the console directly […]
The above solution, or any similar one where there is another tty whose output is logged and printed to the original console, is certainly not "directly". mc would correctly update its size if it printed directly to the original terminal.
Normally you cannot print directly to a terminal and log what the terminal receives, unless the terminal itself supports logging. Pseudo-terminals created by screen or tmux can do this, and you can set them up programmatically from within a script. Some GUI terminal emulators may allow you to dump what they receive; you need to configure them via the GUI. The point is you need a terminal with this feature. Run a script in a "wrong" terminal and you cannot log this way (you can use reptyr to "move" it to another terminal, though). The script can reroute its output the way our script does, but this is not "directly". Or…
There are ways to snoop on a tty (examples). Maybe you will find something that fits your needs. Usually such snooping requires elevated access, even if you want to snoop on a tty you can already read from and write to.
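As one example of such snooping, a sketch using strace (assuming it is installed and you have ptrace privileges, usually root; PID is the process attached to the tty):

```
# attach to the process and dump everything it writes to
# file descriptors 1 and 2 (stdout and stderr)
strace -f -p "$PID" -e trace=write -e write=1,2 -o ./snoop.log
```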