Output specific lines of a file in Linux

Quick unix command to display specific lines in the middle of a file?

Trying to debug an issue with a server, and my only log file is a 20 GB log file (with no timestamps, even! Why do people use System.out.println() as logging? In production?!). Using grep, I’ve found an area of the file that I’d like to take a look at, around line 347340107. Other than doing something like

head -n 347340107 logfile | tail -n 100

which would require head to read through the first 347 million lines of the log file, is there a quick and easy command that would dump lines 347340100 to 347340200 (for example) to the console?

Update: I totally forgot that grep can print the context around a match; that works well. Thanks!

19 Answers

I found two other solutions if you know the line number but nothing else (no grep possible):

Assuming you need lines 20 to 40:

sed -n '20,40p;41q' filename

or

awk 'NR >= 20 && NR <= 40; NR == 40 { exit }' filename

When using sed it is more efficient to quit processing after having printed the last line than continue processing until the end of the file. This is especially important in the case of large files and printing lines at the beginning. In order to do so, the sed command above introduces the instruction 41q in order to stop processing after line 41 because in the example we are interested in lines 20-40 only. You will need to change the 41 to whatever the last line you are interested in is, plus one.

# print line number 52
sed -n '52p'    # method 1
sed '52!d'      # method 2
sed '52q;d'     # method 3, efficient on large files
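A quick way to convince yourself that the three methods agree, using a generated sample file:

```shell
# Generate a 100-line sample file to try the methods on.
seq 1 100 > sample.txt

sed -n '52p' sample.txt   # method 1
sed '52!d' sample.txt     # method 2
sed '52q;d' sample.txt    # method 3: quits right after printing line 52
```

All three print the single line `52`; the difference is only in how much of the file gets read afterwards.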


I’m trying to figure out how to adapt method 3 to use a range instead of a single line, but I’m afraid my sed-foo isn’t up to the task.
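For what it's worth, one way to adapt method 3 to a range (a sketch, not necessarily the only way): delete every line outside the range, and quit at the range's last line so sed still stops early:

```shell
seq 1 100 > sample.txt

# Lines outside 20-40 are deleted; 40q prints line 40 and quits,
# so sed never reads lines 41-100.
sed '20,40!d;40q' sample.txt
```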

The reason the first two methods are less efficient is that they continue processing all lines after line 52, to the end of the file, whereas method 3 stops after printing line 52.

With GNU grep you could just say

grep --context=10 'match' filename

This is actually not what you want because it will process the whole file even if the match is in the top bit. At this point a head/tail or tail/head combo is much more effective.

This doesn’t satisfy the question at all, as it doesn’t offer a way to output a specific line by number, which is what was asked.

No, there isn’t: files are not line-addressable.

There is no constant-time way to find the start of line n in a text file. You must stream through the file and count newlines.

Use the simplest/fastest tool you have to do the job. To me, using head makes much more sense than grep, since the latter is way more complicated. I’m not saying "grep is slow", it really isn’t, but I would be surprised if it’s faster than head for this case. That’d be a bug in head, basically.

Unless lines are fixed width in bytes, you don’t know where to move the file pointer without counting new line characters from the start of the file.

tail -n +347340107 filename | head -n 100 

I didn’t test it, but I think that would work.
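As noted in a comment above, which combination wins depends on where the target lines sit in the file; a quick sketch with a generated file (line numbers are illustrative):

```shell
seq 1 1000 > sample.txt

# Target near the start: head reads only the first 110 lines.
head -n 110 sample.txt | tail -n 10    # lines 101-110

# Target near the end: tail -n counts back from the end instead.
tail -n 20 sample.txt | head -n 10     # lines 981-990
```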

I prefer just going into less and

  • typing 50% to go to the halfway point of the file,
  • 43210G to go to line 43210,
  • :43210 to do the same.

Even better: hit v to start editing (in vim, of course!) at that location. Note that vim has the same key bindings!

You can use the ex command, a standard Unix editor (part of Vim now), e.g.

    display a single line (e.g. the 2nd one):

    ex +2p -scq file.txt

The above commands can be tested with a test file such as:

seq 1 10 > file.txt

  • + or -c followed by the command — execute the (vi/vim) command after file has been read,
  • -s — silent mode, also uses current terminal as a default output,
  • q followed by -c is the command to quit editor (add ! to do force quit, e.g. -scq! ).

As indicated above, don’t forget to make sed quit after the last line of interest has been printed, instead of processing the entire file.

I’d first split the file into few smaller ones like this

$ split --lines=50000 /path/to/large/file /path/to/output/file/prefix 

and then grep on the resulting files.

Agreed. Break that log up and create a cron job to do it properly; use logrotate or something similar to keep the files from getting so huge.

If the line number you want to read is 100:

head -100 filename | tail -1 
$ sudo apt-get install ack-grep 
$ ack --lines=$START-$END filename 
--lines=NUM
    Only print line NUM of each file. Multiple lines can be given with
    multiple --lines options or as a comma-separated list (--lines=3,5,7).
    --lines=4-7 also works. The lines are always output in ascending
    order, no matter the order given on the command line.

sed will need to read the data too, in order to count the lines. The only way a shortcut would be possible is if there were context/order in the file to operate on. For example, if the log lines were prepended with a fixed-width time/date, you could use the look Unix utility to binary-search through the file for particular dates/times.

First get the line number where the match occurred; grep -n will print it in front of each matching line.

Now you can use a tail/head combination to print 100 lines starting from that line number, or you can use sed as well.

If you have more than one match, use awk 'NR==1' to keep only the first match, and so on.
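The actual commands did not survive the copy; a plausible reconstruction of the workflow described above (the pattern and file names are placeholders):

```shell
# Build a sample log so the pipeline can be tried end to end.
seq 1 1000 | sed 's/^/line /' > app.log

# Step 1: get the line number of the first match (grep -n prefixes
# each match with its line number; awk 'NR==1' keeps the first one).
n=$(grep -n 'line 500$' app.log | awk 'NR==1' | cut -d: -f1)

# Step 2: print 100 lines starting at that line number...
tail -n +"$n" app.log | head -n 100

# ...or do the same with sed, quitting as soon as the range is printed.
sed -n "$n,$((n+99))p;$((n+100))q" app.log
```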

With sed -e '1,N d; M q' you’ll print lines N+1 through M. This is probably a bit better than grep -C, as it doesn’t try to match lines against a pattern.
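A concrete instance of the N/M placeholders, printing lines 20 through 40 of a generated file:

```shell
seq 1 100 > sample.txt

# Delete lines 1-19, then quit at line 40: prints lines 20 through 40.
sed -e '1,19d;40q' sample.txt
```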

Building on Sklivvz’ answer, here’s a nice function one can put in a .bash_aliases file. It is efficient on huge files when printing stuff from the front of the file.

function middle() {
    startidx=$1
    len=$2
    endidx=$((startidx + len))
    filename=$3
    awk "FNR >= $startidx && FNR <= $endidx { print NR \" \" \$0 }; FNR > $endidx { print \"END HERE\"; exit }" "$filename"
}

To display a line from a file by its line number, just do this (perl’s $. variable holds the current line number):

perl -ne 'print if $. == 42' filename

If you want a more powerful way to show a range of lines with regular expressions (I won’t say why grep is a bad idea for doing this; it should be fairly obvious), this simple expression will show you your range in a single pass, which is what you want when dealing with ~20GB text files:

(tip: if your regex has / in it, use something like m!! instead)

This would print out everything starting with the line that matches the first pattern, up until (and including) the line that matches the second.

It doesn’t take a wizard to see how a few tweaks can make it even more powerful.

One last thing: perl, being a mature language, has many hidden optimizations that favor speed and performance. That makes it an obvious choice for such an operation, since it was originally developed for handling large log files, text, databases, etc.


How to Display Specific Lines of a File in Linux Command Line

Here are several ways to display specific lines of a file in Linux command line.

How do I find the nth line in a file in Linux command line? How do I display line number x to line number y?

In Linux, there are several ways to achieve the same result. Printing specific lines from a file is no exception.

To display the 13th line, you can use a combination of head and tail:

head -13 file_name | tail +13

Or, you can use the sed command:

sed -n '13p' file_name

To display line numbers from 20 to 25, you can combine head and tail commands like this:

head -25 file_name | tail +20

Or, you can use the sed command like this:

sed -n '20,25p' file_name

A detailed explanation of each command follows next. I’ll also show the use of the awk command for this purpose.

Display specific lines using head and tail commands

This is my favorite way of displaying lines of choice. I find it easier to remember and use.

Both head and tails commands are used to display the contents of a file in the terminal.

Use a combination of the head and tail commands in the following fashion to display line number x:

head -x file_name | tail +x

You can replace x with the line number you want to display. So, let’s say you want to display the 13th line of the file.

$ head -13 lines.txt | tail +13
This is line number 13

Explanation: You probably already know that the head command gets the lines of a file from the start while the tail command gets the lines from the end.

The “head -x” part of the command will get the first x lines of the files. It will then redirect this output to the tail command. The tail command will display all the lines starting from line number x.

Quite obviously, if you take the first 13 lines from the top and then display from the 13th of those lines onward, you are left with only the 13th line. That’s the logic behind this command.

Now let’s take our combination of head and tail commands to display more than one line.

Say you want to display all the lines from x to y. This includes the xth and yth lines also:

head -y file_name | tail +x

Let’s take a practical example. Suppose you want to print all the lines from line number 20 to 25:

$ head -25 lines.txt | tail +20
This is line number 20
This is line number 21
This is line number 22
This is line number 23
This is line number 24
This is line number 25

Use SED to display specific lines

The powerful sed command provides several ways of printing specific lines.

For example, to display the 10th line, you can use sed in the following manner:

sed -n '10p' lines.txt

The -n suppresses the output while the p command prints specific lines. Read this detailed SED guide to learn and understand it in detail.

To display all the lines from line number x to line number y, use this:

sed -n 'x,yp' file_name

$ sed -n '3,7p' lines.txt
This is line number 3
This is line number 4
This is line number 5
This is line number 6
This is line number 7

Use AWK to print specific lines from a file

The awk command could seem complicated and there is surely a learning curve involved. But like sed, awk is also quite powerful when it comes to editing and manipulating file contents.

$ awk 'NR==5' lines.txt
This is line number 5

NR denotes the ‘current record number’. Please read our detailed AWK command guide for more information.

To display all the lines from x to y, you can use the awk command in the following manner:

awk 'NR>=x && NR<=y' file_name

It follows a syntax that is similar to most programming languages.

I hope this quick article helped you in displaying specific lines of a file in Linux command line. If you know some other trick for this purpose, do share it with the rest of us in the comment section.


How to display certain lines from a text file in Linux?

I guess everyone knows the useful Linux command line utilities head and tail. head allows you to print the first X lines of a file, tail does the same but prints the end of the file. What is a good command to print the middle of a file? Something like middle --start 10000000 --count 20 (print the 10,000,000th till the 10,000,010th lines). I’m looking for something that will deal with large files efficiently. I tried tail -n 10000000 | head -n 10 and it’s horrifically slow.

10 Answers

sed -n '10000000,10000020p' filename 

You might be able to speed that up a little like this:

sed -n '10000000,10000020p; 10000021q' filename 

In those commands, the option -n causes sed to "suppress automatic printing of pattern space". The p command "print[s] the current pattern space" and the q command "Immediately quit[s] the sed script without processing any more input." The quotes are from the sed man page.

tail -n 10000000 filename | head -n 10 

starts at the ten millionth line from the end of the file, while your «middle» command would seem to start at the ten millionth from the beginning which would be equivalent to:

head -n 10000010 filename | tail -n 10 

The problem is that for unsorted files with variable length lines any process is going to have to go through the file counting newlines. There’s no way to shortcut that.

If, however, the file is sorted (a log file with timestamps, for example) or has fixed length lines, then you can seek into the file based on a byte position. In the log file example, you could do a binary search for a range of times as my Python script here* does. In the case of the fixed record length file, it’s really easy. You just seek linelength * linecount characters into the file.
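For the fixed-record-length case, the seek can be sketched with dd (the record size and offsets here are illustrative):

```shell
# Build a file of fixed-width 10-byte records: "line 0001" ... "line 1000".
seq -f 'line %04g' 1 1000 > fixed.txt

# Jump straight to record 500 and read 10 records: dd seeks
# linelength * linecount bytes instead of reading the lines before it.
dd if=fixed.txt bs=10 skip=499 count=10 2>/dev/null
```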

* I keep meaning to post yet another update to that script. Maybe I’ll get around to it one of these days.

