Print to console and file linux

How to use echo command to print out content of a text file?

According to the textbook, `<` should redirect a program's standard input. I am redirecting a.txt to echo, but instead of printing the content of the file it prints one empty line! I'd appreciate it if anyone could explain this behaviour.

@AndrewS but that's just part of it; the other part is that I don't really think it is possible. How would you make echo foo output "foo", but echo (with no arguments) wait for user input on stdin? That would probably break half of the existing scripts.

6 Answers

echo doesn't read stdin, so in this case the redirect is simply meaningless.

To print out a file, just use the command below.
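The elided command is presumably just cat; a minimal sketch (the file name and contents are assumed for illustration):

```shell
# create a sample file (path assumed for illustration)
printf 'Homer\nMarge\n' > /tmp/names.txt

# print its contents to standard output
cat /tmp/names.txt
```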

Here for future reference: in my case, echo "$(cat file.txt)" did the job.

In Unix, I believe all you have to do, assuming the file isn't hefty, is: cat file.txt

Use the command below to print the file content using echo.

Here you also get the benefit of all of echo's features; the one I like most is removing the trailing newline character (to get the exact same hash as the buffer rather than the file):

echo -n `cat file.txt` | sha256sum 

The cat command will display the file with its newlines:

$ cat names.txt
Homer
Marge
Bart
Lisa
Maggie

You could use the echo command with cat in a command substitution. However, it will replace the newlines (Unix: \n) with spaces:

$ echo $(cat names.txt)
Homer Marge Bart Lisa Maggie

That could be an interesting feature if you want to pipe the result into further data processing, though, e.g. replacing the spaces with the sed command.
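As a sketch of that idea (the comma delimiter and file name are my own assumptions), the space-joined output can be piped straight into sed:

```shell
printf 'Homer\nMarge\nBart\n' > /tmp/names.txt

# command substitution collapses the newlines to spaces,
# then sed rewrites the spaces as commas
echo $(cat /tmp/names.txt) | sed 's/ /,/g'
```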

The echo command does not accept data from standard input (stdin); it only works on the arguments passed to it.

So if we pass data to echo from standard input, e.g. with < or |, it will be ignored, because echo only works with arguments.

This can be changed by using echo together with the xargs command, which is designed to call a command with arguments that are data from standard input.
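A minimal sketch of that combination (file name assumed):

```shell
printf 'hello world\n' > /tmp/file.txt

# xargs reads stdin and hands the words to echo as arguments
xargs echo < /tmp/file.txt
```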


How to output text to both screen and file inside a shell script?

When executing this script there is no output to the screen, and since I'm connecting to the server via PuTTY, I have to open another connection and run "tail -f log_file_path.log", because I can't terminate the running script and I want to see the output in real time. Obviously, what I want is for the text messages to be printed both on screen and into the file, but I'd like to do it in one line per message, not two lines of which one has no redirection to the file. How can I achieve this?

5 Answers

tee saves input to a file (use -a to append rather than overwrite), and copies the input to standard output as well.
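Inside a script, that keeps each message a one-liner. A sketch (the helper name and log path are made up):

```shell
log_file=/tmp/script.log    # hypothetical log path

# print a message to the terminal and append it to the log in one call
log() { printf '%s\n' "$*" | tee -a "$log_file"; }

log "starting job"
log "job finished"
```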

Because the command can detect that it's being run non-interactively, this may change its behaviour. The most common side effect is that it disables colour output. If this happens (and you want ANSI colour-coded output), check the command's documentation for a way to force the interactive behaviour back on, such as grep --color=always. Beware that the log file will then also include these escape codes, and you'll need less --RAW-CONTROL-CHARS "$log_file" to read it without distracting escape-code literals. Also beware that there is no way to make the log file contents differ from what is printed to screen with the above command, so you can't have colour-coded output on screen and non-coloured output in the log file.


You can use a here-doc and . source it for an efficient, POSIX-friendly general collector model.

When you open the heredoc you signal the shell with an IOHERE input token that it should redirect its input to the file descriptor you specify until it encounters the other end, your limiter token. I've looked around but I haven't seen many examples of using a redirection fd number as I've shown above in combination with the heredoc operator, though its usage is clearly specified in the POSIX basic shell-command guidelines. Most people just point it at stdin and shoot, but I find sourcing scriptlets this way can keep stdin free and the constituent apps from complaining about blocked I/O paths.

The heredoc's contents are streamed to the file descriptor you specify, which is in turn interpreted as shell code and executed by the . builtin, but not without specifying a specific path for . . If the /proc/self path gives you trouble, try /dev/fd/n or /proc/$$/fd/n. This same method works on pipes, by the way:

That is probably at least as unwise as it looks. You can do the same with sh, of course, but .'s purpose is to execute in the current shell environment, which is probably what you want, and, depending on your shell, is a lot more likely to work with a heredoc than with a standard anonymous pipe.

Anyway, as you’ve probably noticed, I still haven’t answered your question. But if you think about it, in the same way the heredoc streams all of your code to .’s in, it also provides you a single, simple, outpoint:

. /proc/self/fd/5 5<<EOIN | tee logfile | cat -u >$(tty)
script
… more script
EOIN

So all of the terminal stdout from any of the code executed in your heredoc is piped out from . as a matter of course, and it can easily be tee'd off of a single pipe. I included the unbuffered cat call because I'm unclear about the current stdout direction, but it's probably redundant (almost certainly it is as written, anyway) and the pipeline can probably end right at tee.

You might also question the missing backslash quote in the second example. This part is important to understand before you jump in, and it might give you a few ideas about how it can be used. A quoted heredoc limiter (so far we've used IOHERE and EOIN, and the first I quoted with a backslash, though 'single' or "double" quotes would serve the same purpose) will bar the shell from performing any parameter expansion on the contents, while an unquoted limiter leaves its contents open to expansion. The consequences of this when your heredoc is . sourced are dramatic:

$ . /proc/self/fd/3 3<<HD
> $(for i in `seq 1 100`; do printf 'var%d=%d\n' "$i" "$((4*i))"; done)
> vars=\$var1,\$var2,\$var3
> HD
$ echo $vars
4,8,12
$ echo $var1 $var51
4 204

Because I didn't quote the heredoc limiter, the shell expanded the contents as it read them in, before serving the resulting file descriptor to . to execute. This essentially resulted in the commands being parsed twice (the expandable ones, anyway). Because I backslash-quoted the \$var parameter expansions, the shell ignored them on the first pass and only stripped the backslashes, so the whole printf-expanded contents could be evaluated by the shell when . sourced the script on the second pass.

This functionality is basically exactly what the dangerous eval shell builtin can provide; even if the quoting is much easier to handle in a heredoc than with eval, it can be equally dangerous. Unless you plan it carefully, it is probably best to quote the "EOF" limiter as a matter of habit. Just saying.

EDIT: Eh, I'm looking back at this and thinking it's a little too much of a stretch. If ALL you need to do is concatenate several outputs into one pipe, then the simplest method is just to use a { command group; }:
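A sketch of that simplest form (commands and file name assumed):

```shell
# group several commands in the current shell so their combined
# stdout flows down a single pipe into tee
{
    echo "step one done"
    echo "step two done"
} | tee /tmp/collected.log
```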

The curlies will run their contents in the current shell, whereas the parens will spawn a subshell automatically. Still, anyone can tell you that, and, at least in my opinion, the . heredoc solution is much more valuable information, especially if you'd like to understand how the shell actually works.


Command output redirect to file and terminal [duplicate]

I am trying to send command output to a file and to the console as well, because I want to keep a record of the output in the file. I am doing the following; it appends to the file but does not print the ls output on the terminal.

3 Answers

Yes, if you redirect the output, it won’t appear on the console. Use tee .
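Applied to the asker's ls case, a sketch (directory and record-file names assumed):

```shell
mkdir -p /tmp/demo_dir && touch /tmp/demo_dir/a.txt

# the listing is appended to the record file AND printed on the terminal
ls /tmp/demo_dir | tee -a /tmp/ls_record.txt
```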

In this case the error stream is merged into the output (2>&1), so the next process consuming the pipe will see both of them as regular input (in short: yes).

How can I cap the size of the ls.txt file in the above command so that it does not exceed a given size? And once it exceeds the maximum size, how can I create a new file in the same directory (e.g. ls1.txt, ls2.txt, …)?
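tee itself cannot cap the file size, but GNU split can approximate the rotation; a rough sketch under those assumptions (size, paths, and the -d numeric-suffix flag are GNU-specific choices):

```shell
# split cuts the stream into numbered files of at most 1 KiB each
# (/tmp/ls_part_00, /tmp/ls_part_01, …), starting a new file
# automatically, which approximates the rotation asked about
ls -l /usr/bin 2>&1 | split -b 1024 -d - /tmp/ls_part_
# in an interactive shell, insert "tee /dev/tty |" before split
# to keep a live copy on the terminal as well
```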

Be aware that you will lose the exit status of ls. If you want to retain it, or more precisely want to figure out whether something in your pipe failed despite tee being the last (and very likely successful) command in the pipe, you need to use set -o pipefail.
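A sketch of the difference (log path assumed; pipefail requires bash, ksh, or a POSIX.1-2024 shell):

```shell
set -o pipefail    # pipe status = last command to exit non-zero

# ls fails, but tee succeeds; without pipefail $? would be 0 here
ls /no/such/dir 2>&1 | tee /tmp/err.log
echo "exit status: $?"    # non-zero: ls's failure is preserved
```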

It is worth mentioning that 2>&1 means that standard error will be redirected too, together with standard output. So

someCommand | tee someFile 

gives you just the standard output in the file, but not the standard error: standard error will appear in console only. To get standard error in the file too, you can use

someCommand 2>&1 | tee someFile 

(source: In the shell, what is "2>&1"?). Finally, both the above commands will truncate the file and start from scratch. If you use a sequence of commands, you may want to capture the output and errors of all of them, one after another. In this case you can pass the -a flag to the tee command:

someCommand 2>&1 | tee -a someFile 


How to redirect output to a file and stdout

In bash, calling foo would display any output from that command on the stdout. Calling foo > output would redirect any output from that command to the file specified (in this case ‘output’). Is there a way to redirect output to a file and have it display on stdout?

If someone just ended up here looking for capturing error output to file, take a look at — unix.stackexchange.com/questions/132511/…

A note on terminology: when you execute foo > output the data is written to stdout and stdout is the file named output . That is, writing to the file is writing to stdout. You are asking if it is possible to write both to stdout and to the terminal.

@WilliamPursell I’m not sure your clarification improves things 🙂 How about this: OP is asking if it’s possible to direct the called program’s stdout to both a file and the calling program’s stdout (the latter being the stdout that the called program would inherit if nothing special were done; i.e. the terminal, if the calling program is an interactive bash session). And maybe they also want to direct the called program’s stderr similarly («any output from that command» might be reasonably interpreted to mean including stderr).

If we have multiple commands whose outputs we want to pipe, use ( ). For example: (echo hello; echo world) | tee output.txt

11 Answers

The command you want is named tee:

For example, if you only care about stdout:

program [arguments...] | tee outfile 

If you want to include stderr, do:

program [arguments...] 2>&1 | tee outfile 

2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), such that both are written as stdout. From there, the tee command also directs the stream to the given output file.

Furthermore, if you want to append to the log file, use tee -a as:

program [arguments...] 2>&1 | tee -a outfile 

If OP wants «all output» to be redirected, you’ll also need to grab stderr: «ls -lR / 2>&1 | tee output.file»

@evanmcdonnal The answer is not wrong, it just may not be specific enough, or complete depending on your requirements. There certainly are conditions where you might not want stderr as part of the output being saved to a file. When I answered this 5 years ago I assumed that the OP only wanted stdout, since he mentioned stdout in the subject of the post.

Ah sorry, I might have been a little confused. When I tried it I just got no output, perhaps it was all going to stderr.

Use -a argument on tee to append content to output.file , instead of overwriting it: ls -lR / | tee -a output.file

If you're using $? afterwards it will return the status code of tee, which is probably not what you want. Instead, you can use ${PIPESTATUS[0]} in bash.

$ program [arguments...] 2>&1 | tee outfile 

2>&1 merges the stderr stream into stdout. tee outfile takes the stream it gets and writes it both to the screen and to the file "outfile".

This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.

The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they’ll end up out of chronological order in the output file and on the screen.

It’s also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn’t even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.

Update: The program unbuffer , part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee . E.g.:

$ unbuffer program [arguments...] 2>&1 | tee outfile 

