Linux ls with file sizes

Using ls to list directories and their total sizes [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.


Is it possible to use ls in Unix to list the total size of a sub-directory and all its contents as opposed to the usual 4K that (I assume) is just the directory file itself?

total 12K
drwxrwxr-x  6 *** *** 4.0K 2009-06-19 10:10 branches
drwxrwxr-x 13 *** *** 4.0K 2009-06-19 10:52 tags
drwxrwxr-x 16 *** *** 4.0K 2009-06-19 10:02 trunk

Note that the -h option in alias ducks='du -cksh * | sort -hr | head -n 15' and the -c option to du are mostly non-portable GNU extensions to the POSIX-standard du and sort utilities. Without the -h option, sort has to be invoked with the -n option to do numeric sorting, simplified: du -sk * | sort -n

Leaving out -r from sort and not piping the results through head (or tail, as the case may be) emits the entire result to the terminal, largest last, which is very useful in an interactive session. Beware also that * will skip files and directories whose names start with ., such as .m2.
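To include hidden entries as well without switching shells, a sketch (the directory names in the comment are illustrative):

```shell
# Sum sizes of visible and hidden entries alike.
# The extra glob .[!.]* catches dot-entries such as .m2;
# stderr is silenced in case one of the globs matches nothing.
du -sk -- * .[!.]* 2>/dev/null | sort -n
```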

29 Answers

du --summarize --human-readable * 

Explanation:

du : Disk Usage

-s : Display a summary for each specified file. (Equivalent to -d 0 )

-h : "Human-readable" output. Uses unit suffixes: byte, kibibyte (KiB), mebibyte (MiB), gibibyte (GiB), tebibyte (TiB) and pebibyte (PiB). (Base 2.)

du --max-depth=1 only shows file/folder sizes 1 level deep in the tree: no more clutter, and it is easy to find large folders within a folder.
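For instance, combining it with sort gives a quick per-directory overview one level down (a sketch; -h on both du and sort is a GNU extension):

```shell
# Sizes of each immediate child of the current directory, largest first
du -h --max-depth=1 . | sort -hr
```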

@Zak in zsh you can use *(D) to match hidden (dot) files alongside normal files. When using bash, you could use * .[!.]* to match both.

To get a clear picture of where space goes, du -sch * .[!.]* | sort -rh is great (it shows sorted output). On macOS, do brew install coreutils and then du -sch * .[!.]* | gsort -rh

du -sk * | sort -n will sort the folders by size. Helpful when looking to clear space..

or du -sh * | sort -h for human-readable output

sort -rn sorts things in reverse numerical order. sort -rn | head -n 10 will show only the top few, if that’s of any interest.

Why is the -k necessary? The documentation says -k is like --block-size=1K; does this influence the precision?

This will be displayed in human readable format.

More about sort -h here: gnu.org/software/coreutils/manual/… It's especially there for sorting values like 103K, 102M, 1.1G etc. This should be available on a lot of systems nowadays, but not all.

this command always takes ages for me to run. An alternative? This is why I was looking for a question using ls, like the title says.

To list the largest directories from the current directory in human readable format:

A better way to restrict number of rows can be

du -sh * | sort -hr | head -n10

where you can change the argument to the -n flag to adjust the number of rows listed

[~]$ du -sh * | sort -hr
 48M  app
 11M  lib
6.7M  Vendor
1.1M  composer.phar
488K  phpcs.phar
488K  phpcbf.phar
 72K  doc
 16K  nbproject
8.0K  composer.lock
4.0K  README.md

It makes it more convenient to read 🙂



To display it in ls -lh format, use:

(du -sh ./*; ls -lh --color=no) | awk '{
    if ($1 == "total") {
        # Set X when the start of the ls output is detected
        X = 1
    } else if (!X) {
        # Until X is set, collect the sizes from `du`
        SIZES[$2] = $1
    } else {
        # Replace the size on the current line (with alignment)
        sub($5 "[ ]*", sprintf("%-7s ", SIZES["./" $9]), $0)
        print $0
    }
}'
drwxr-xr-x 2 root     root 4.0K Feb 12 16:43 cgi-bin
drwxrws--- 6 root     www   20M Feb 18 11:07 document_root
drwxr-xr-x 3 root     root 1.3M Feb 18 00:18 icons
drwxrwsr-x 2 localusr www  8.0K Dec 27 01:23 passwd

@anon58192932 You can pipe the output to sort --key=5,5h to sort the 'human readable units' in the fifth column

returns sort: stray character in field spec: invalid field specification '5,5h'. I really hate macs sometimes =\

ncdu (ncurses du )

This awesome CLI utility allows you to easily find the large files and directories (recursive total size) interactively.

For example, from inside the root of a well known open source project we do:

sudo apt install ncdu
ncdu

(screenshot: ncdu listing of the project's top-level directories)

Then I press down and right on my keyboard to go into the drivers/ folder, and I see:

(screenshot: ncdu listing inside the drivers/ directory)

ncdu only calculates file sizes recursively once, at startup, for the entire tree, so it is efficient. This way you don't have to recalculate sizes as you move between subdirectories while trying to determine what the disk hog is.

"Total disk usage" vs "apparent size" is analogous to du, and I have explained it at: why is the output of `du` often so different from `du -b`
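The gap between the two is easiest to see with a sparse file (a sketch; the file name is arbitrary and GNU du is assumed):

```shell
cd "$(mktemp -d)"

# A file with a large apparent size but no allocated blocks
truncate -s 100M sparse.bin

du -h sparse.bin                   # disk usage: typically ~0 here
du -h --apparent-size sparse.bin   # apparent size: 100M
```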

Ubuntu list root

  • -x stops crossing of filesystem boundaries
  • --exclude-kernfs skips special filesystems like /sys

MacOS 10.15.5 list root

To properly list root / on that system, I also needed --exclude-firmlinks, e.g.:

brew install ncdu
cd /
ncdu --exclude-firmlinks

The things we learn for love.

ncdu non-interactive usage

Another cool feature of ncdu is that you can first dump the sizes in a JSON format, and later reuse them.

For example, to generate the export file, run something like the following (-o writes the scan results to the given file; the file name here is arbitrary):

ncdu -o ncdu.json

and then examine it interactively with:

ncdu -f ncdu.json

This is very useful if you are dealing with a very large and slow filesystem like NFS.

This way, you can first export only once, which can take hours, and then explore the files, quit, explore again, etc.

The output format is just JSON, so it is easy to reuse it with other programs as well, e.g.:

ncdu -o - | python -m json.tool | less 

reveals a simple directory tree data structure:

I agree, ncdu is the way to go. but do you know if it is possible to search the JSON file? That is, get the full path of a specific file/folder.

@FGV I don’t think ncdu can output that, one possibility would be to hack up a simple python script that parses the JSON.

The command you want is du -sk. du = "disk usage"


The -k flag gives you output in kilobytes, rather than the du default of disk sectors (512-byte blocks).

The -s flag will only list things in the top-level directory (i.e., the current directory, by default, or the directory specified on the command line). It's odd that du has the opposite behavior of ls in this regard. By default du will recursively give you the disk usage of each sub-directory. In contrast, ls will only list files in the specified directory. (ls -R gives you recursive behavior.)
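A quick illustration of that difference (the directory names are made up):

```shell
cd "$(mktemp -d)"
mkdir -p top/nested
echo hi > top/nested/file

du -k top    # recursive: one line per sub-directory, then the total
du -sk top   # -s: a single summary line for 'top'
```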

Tried this on the root directory, it still tries to list subdirectories, resulting in a lot of messages.

Put this shell function declaration in your shell initialization scripts:
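The declaration itself did not survive the copy; judging from the description further down (paste of du -hs -- "$@" | cut -f1 beside an unsorted long listing of "$@"), it was presumably along these lines:

```shell
# Reconstructed sketch of the duls function (bash/ksh93; uses process
# substitution, so it will not work in plain /bin/sh).
# The answer's text mentions ls -f to keep ls from sorting; GNU ls's -U
# plays that role here, keeping operands in argument order so the lines
# stay aligned with du's output.
duls() {
    paste <( du -hs -- "$@" | cut -f1 ) <( ls -ldU -- "$@" )
}
```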

I called it duls because it shows the output from both du and ls (in that order):

$ duls
210M drwxr-xr-x 21 kk staff 714 Jun 15 09:32 .
$ duls *
 36K -rw-r--r--  1 kk staff   35147 Jun  9 16:03 COPYING
8.0K -rw-r--r--  1 kk staff    6962 Jun  9 16:03 INSTALL
 28K -rw-r--r--  1 kk staff   24816 Jun 10 13:26 Makefile
4.0K -rw-r--r--  1 kk staff      75 Jun  9 16:03 Makefile.am
 24K -rw-r--r--  1 kk staff   24473 Jun 10 13:26 Makefile.in
4.0K -rw-r--r--  1 kk staff    1689 Jun  9 16:03 README
120K -rw-r--r--  1 kk staff  121585 Jun 10 13:26 aclocal.m4
684K drwxr-xr-x  7 kk staff     238 Jun 10 13:26 autom4te.cache
128K drwxr-xr-x  8 kk staff     272 Jun  9 16:03 build
 60K -rw-r--r--  1 kk staff   60083 Jun 10 13:26 config.log
 36K -rwxr-xr-x  1 kk staff   34716 Jun 10 13:26 config.status
264K -rwxr-xr-x  1 kk staff  266637 Jun 10 13:26 configure
8.0K -rw-r--r--  1 kk staff    4280 Jun 10 13:25 configure.ac
7.0M drwxr-xr-x  8 kk staff     272 Jun 10 13:26 doc
2.3M drwxr-xr-x 28 kk staff     952 Jun 10 13:26 examples
6.2M -rw-r--r--  1 kk staff 6505797 Jun 15 09:32 mrbayes-3.2.7-dev.tar.gz
 11M drwxr-xr-x 42 kk staff    1428 Jun 10 13:26 src
$ duls doc
7.0M drwxr-xr-x  8 kk staff     272 Jun 10 13:26 doc
$ duls [bM]*
 28K -rw-r--r--  1 kk staff   24816 Jun 10 13:26 Makefile
4.0K -rw-r--r--  1 kk staff      75 Jun  9 16:03 Makefile.am
 24K -rw-r--r--  1 kk staff   24473 Jun 10 13:26 Makefile.in
128K drwxr-xr-x  8 kk staff     272 Jun  9 16:03 build

The paste utility creates columns from its input according to the specification that you give it. Given two input files, it puts them side by side, with a tab as separator.

We give it the output of du -hs -- "$@" | cut -f1 as the first file (input stream, really) and the output of ls -ldf -- "$@" as the second file.

In the function, «$@» will evaluate to the list of all command line arguments, each in double quotes. It will therefore understand globbing characters and path names with spaces etc.

The double minuses ( -- ) signal the end of command line options to du and ls. Without these, saying duls -l would confuse du, and any option for du that ls doesn't have would confuse ls (and the options that exist in both utilities might not mean the same thing; it would be a pretty mess).

The cut after du simply cuts out the first column of the du -hs output (the sizes).

I decided to put the du output on the left, otherwise I would have had to manage a wobbly right column (due to varying lengths of file names).


The command will not accept command line flags.

This has been tested in both bash and in ksh93 . It will not work with /bin/sh .


Linux ls to show only filename date and size

How can I use ls in Linux to get a listing of filenames, dates and sizes only? I don't need to see the other info such as owner or permissions. Is this possible?

ls is great because it sorts by datetime very quickly, but the formatting is hard to deal with. I suggest using a token in --time-style, like --time-style='+&%Y%m%d+%H%M%S.%N' where the token is '&'; using that as a reference you can further parse the output with sed, and you can also backtrack, since the size sits just before the token! If someone wants to post that as a complete answer, feel free to, I am too asleep right now 🙂

9 Answers

To output in columnar format:
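The answer's command did not survive the copy; judging from the comments that follow, it was a stat invocation along these lines (GNU stat assumed; BSD/macOS stat uses -f instead of -c):

```shell
# Modification time, size in bytes, and name for each entry,
# one per line (GNU coreutils stat)
stat -c '%y %s %n' -- *
```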

This is nice, but it does potentially have the "environment too large" / "argument list too long" problem.

🙂 Just a proof of concept. In Real Life[tm] this will be a find . -type f -print0 | xargs -0 stat -c "%y %s %n"

To format the output of stat, you can add width information to the format string like the C printf function, e.g. "%y %8s %n". It's not documented, but it seems to work (coreutils 8.17, Fedora 18).

You could always use another utility, like awk, to format the output of ls 1:
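The awk one-liner itself was lost in the copy; a hypothetical reconstruction (field numbers assume the default ls -l layout):

```shell
# Print size, date/time and name from a long listing;
# NR > 1 skips the "total" header line.
ls -lh | awk 'NR > 1 { print $5, $6, $7, $8, $9 }'
```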

1. Yes, generally, you shouldn't parse the output of ls, but in this case the question specifically calls for it.

That doesn’t print the file size though. And it only prints the first part of filenames with whitespace in them. And it can fail if ls is aliased (say alias ls=’ls -i’ ). You really should take a lot of care if you want to go about parsing ls output.

I had the file size in there and then edited it out (vague moment) — I’ll restore it. And I agree about all the caveats re parsing ls , but that is what the OP wanted.

I disagree, the OP wants the filenames, not the first part of filenames if the filename happens to have whitespace. (Using /bin/ls would avoid the alias problem.)

That is understood implicitly: what is stated explicitly is that OP wants a solution with ls which we both agree is not going to satisfy the whitespace requirement. The /bin/ls suggestion is a good one.

love this solution; in regards to previous comments about not printing size, you can get size with ls, and can use -h to make it human-readable. Specifically I used: ls -lah | awk '{print $5, $6, $7, $8, $9}' which yields: 4.0K Jan 24 18:17 . Granted, the original solution doesn't say anything about awk, but in linux we should assume the output of a process will always be the input to another process, right? 😉

You can get a lot of control about how you list files with the find utility. ls doesn’t really let you specify the columns you want.

$ find . -maxdepth 1 -printf '%CY%Cm%Cd.%CH%CM\t%s\t%f\n'
20111007.0601	4096	.
20111007.0601	2	b
20111001.1322	4096	a

The argument to the printf action is detailed in the manpage. You can choose different time information, what size you want (file size or disk blocks used), etc. You can also make this safe for unusual file names if further processing is needed.

