List folders by size in Linux

Using ls to list directories and their total sizes [closed]


Is it possible to use ls in Unix to list the total size of a sub-directory and all its contents as opposed to the usual 4K that (I assume) is just the directory file itself?

total 12K
drwxrwxr-x  6 *** *** 4.0K 2009-06-19 10:10 branches
drwxrwxr-x 13 *** *** 4.0K 2009-06-19 10:52 tags
drwxrwxr-x 16 *** *** 4.0K 2009-06-19 10:02 trunk

Note that the -h option in alias ducks='du -cksh * | sort -hr | head -n 15' and the -c option to du are mostly non-portable GNU extensions to the POSIX-standard du and sort utilities. Without the -h option, sort has to be invoked with the -n option to do numeric sorting; simplified: du -sk * | sort -n .

Leaving out -r from sort and not passing the results through head (or tail , as the case may be) emits the entire result to the terminal, largest last, which is very useful in an interactive session. Beware also that * will skip files and directories whose names start with . , such as .m2 .
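Side by side, the two variants described above (the second assumes only POSIX du and sort):

# GNU version, as in the ducks alias
du -cksh * | sort -hr | head -n 15

# POSIX-portable version: sizes in 1 KiB blocks, numeric sort, largest last
du -sk * | sort -n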

29 Answers

du --summarize --human-readable * 

Explanation:

du : Disk Usage

-s : Display a summary for each specified file. (Equivalent to -d 0 )

-h : «Human-readable» output. Use unit suffixes: Byte, Kibibyte (KiB), Mebibyte (MiB), Gibibyte (GiB), Tebibyte (TiB) and Pebibyte (PiB). (BASE2)

du --max-depth=1 only shows file/folder sizes 1 level deep in the tree; no more clutter, and it is easy to find large folders within a folder.
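For example, combining it with the sorting discussed elsewhere in this thread (a sketch):

du -h --max-depth=1 . | sort -hr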

@Zak In zsh you can use *(D) to match hidden (dot) files alongside normal files. When using bash, you can use * .[!.]* to match both.

To get a clear picture of where the space goes, du -sch * .[!.]* | sort -rh is great (it shows sorted output). On a Mac, do brew install coreutils and then use du -sch * .[!.]* | gsort -rh .

du -sk * | sort -n will sort the folders by size. Helpful when looking to clear space.

or du -sh * | sort -h when using human-readable mode.

sort -rn sorts things in reverse numerical order. sort -rn | head -n 10 will show only the top few, if that’s of any interest.
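Putting that together, a sketch of the full pipeline:

du -sk * | sort -rn | head -n 10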

Why is the -k necessary? The documentation says -k is like --block-size=1K ; does this influence the precision?

This will be displayed in human-readable format.

More about sort -h here: gnu.org/software/coreutils/manual/… It is there especially for sorting values like 103K, 102M, 1.1G, etc. This should be available on a lot of systems nowadays, but not all.

This command always takes ages for me to run. Is there an alternative? This is why I was looking for a question using ls, like the title says.


To list the largest directories from the current directory in human readable format:

A better way to restrict the number of rows is:

du -sh * | sort -hr | head -n10

Where you can adjust the argument of the -n flag to change the number of rows listed.

[~]$ du -sh * | sort -hr
48M  app
11M  lib
6.7M Vendor
1.1M composer.phar
488K phpcs.phar
488K phpcbf.phar
72K  doc
16K  nbproject
8.0K composer.lock
4.0K README.md

It makes it more convenient to read 🙂


To display it in ls -lh format, use:

(du -sh ./*; ls -lh --color=no) | awk '{ if ($1 == "total") X = 1; else if (!X) SIZES[$2] = $1; else { sub($5 "[ ]*", sprintf("%-7s ", SIZES["./" $9]), $0); print $0 } }'

The awk program, annotated:

if ($1 == "total") {
    # Set X when the start of the ls output is detected
    X = 1
} else if (!X) {
    # Until X is set, collect the sizes from `du`
    SIZES[$2] = $1
} else {
    # Replace the size on the current line (with alignment)
    sub($5 "[ ]*", sprintf("%-7s ", SIZES["./" $9]), $0)
    print $0
}
drwxr-xr-x 2 root     root 4.0K Feb 12 16:43 cgi-bin
drwxrws--- 6 root     www  20M  Feb 18 11:07 document_root
drwxr-xr-x 3 root     root 1.3M Feb 18 00:18 icons
drwxrwsr-x 2 localusr www  8.0K Dec 27 01:23 passwd

@anon58192932 You can pipe the output to sort --key=5,5h to sort by 'human readable units' from the fifth column.

That returns sort: stray character in field spec: invalid field specification `5,5h'. I really hate Macs sometimes =\

ncdu (ncurses du)

This awesome CLI utility allows you to easily find the large files and directories (recursive total size) interactively.

For example, from inside the root of a well known open source project we do:

sudo apt install ncdu
ncdu

(screenshot: ncdu size listing for the project root)

Then I press down and right on my keyboard to go into the /drivers folder, and I see:

(screenshot: ncdu size listing inside /drivers)

ncdu only calculates file sizes recursively once, at startup, for the entire tree, so it is efficient. This way you don't have to recalculate sizes as you move between subdirectories while trying to determine where the disk hog is.

"Total disk usage" vs "Apparent size" is analogous to du , and I have explained it at: why is the output of `du` often so different from `du -b`

Ubuntu list root
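A reasonable invocation on Ubuntu (a sketch; --exclude-kernfs needs ncdu 1.14.2 or later):

sudo ncdu -x --exclude-kernfs /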

  • -x stops crossing of filesystem boundaries
  • --exclude-kernfs skips special filesystems like /sys

MacOS 10.15.5 list root

To properly list root / on that system, I also needed --exclude-firmlinks , e.g.:

brew install ncdu
cd /
ncdu --exclude-firmlinks

The things we learn for love.

ncdu non-interactive usage

Another cool feature of ncdu is that you can first dump the sizes in a JSON format, and later reuse them.

For example, to generate the file run:
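ncdu -o ncdu-export.json   # -o writes the scan to a file; the file name here is just a placeholder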

and then examine it interactively with:
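ncdu -f ncdu-export.json   # -f loads a previously exported scan (same placeholder file name)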

This is very useful if you are dealing with a very large and slow filesystem like NFS.

This way, you can first export only once, which can take hours, and then explore the files, quit, explore again, etc.

The output format is just JSON, so it is easy to reuse it with other programs as well, e.g.:

ncdu -o - | python -m json.tool | less 

reveals a simple directory tree data structure:

I agree, ncdu is the way to go. But do you know if it is possible to search the JSON file? That is, to get the full path of a specific file/folder.


@FGV I don't think ncdu can output that; one possibility would be to hack up a simple Python script that parses the JSON.

The command you want is du -sk . du = "disk usage".

The -k flag gives you output in kilobytes, rather than the du default of disk sectors (512-byte blocks).

The -s flag will only list things in the top level directory (i.e., the current directory, by default, or the directory specified on the command line). It's odd that du has the opposite behavior of ls in this regard. By default du will recursively give you the disk usage of each sub-directory. In contrast, ls will only list files in the specified directory. (ls -R gives you recursive behavior.)
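For example, a quick way to see the difference:

du -k .     # one line for every sub-directory under ., recursively
du -sk .    # a single summary line for . itself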

Tried this on the root directory, it still tries to list subdirectories, resulting in a lot of messages.

Put this shell function declaration in your shell initialization scripts:
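A sketch of such a function, reconstructed from the description below (process substitution requires bash or ksh93):

duls () {
    paste <( du -hs -- "$@" | cut -f1 ) <( ls -ldf -- "$@" )
}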

I called it duls because it shows the output from both du and ls (in that order):

$ duls
210M drwxr-xr-x 21 kk staff 714 Jun 15 09:32 .

$ duls *
36K  -rw-r--r-- 1 kk staff 35147 Jun 9 16:03 COPYING
8.0K -rw-r--r-- 1 kk staff 6962 Jun 9 16:03 INSTALL
28K  -rw-r--r-- 1 kk staff 24816 Jun 10 13:26 Makefile
4.0K -rw-r--r-- 1 kk staff 75 Jun 9 16:03 Makefile.am
24K  -rw-r--r-- 1 kk staff 24473 Jun 10 13:26 Makefile.in
4.0K -rw-r--r-- 1 kk staff 1689 Jun 9 16:03 README
120K -rw-r--r-- 1 kk staff 121585 Jun 10 13:26 aclocal.m4
684K drwxr-xr-x 7 kk staff 238 Jun 10 13:26 autom4te.cache
128K drwxr-xr-x 8 kk staff 272 Jun 9 16:03 build
60K  -rw-r--r-- 1 kk staff 60083 Jun 10 13:26 config.log
36K  -rwxr-xr-x 1 kk staff 34716 Jun 10 13:26 config.status
264K -rwxr-xr-x 1 kk staff 266637 Jun 10 13:26 configure
8.0K -rw-r--r-- 1 kk staff 4280 Jun 10 13:25 configure.ac
7.0M drwxr-xr-x 8 kk staff 272 Jun 10 13:26 doc
2.3M drwxr-xr-x 28 kk staff 952 Jun 10 13:26 examples
6.2M -rw-r--r-- 1 kk staff 6505797 Jun 15 09:32 mrbayes-3.2.7-dev.tar.gz
11M  drwxr-xr-x 42 kk staff 1428 Jun 10 13:26 src

$ duls doc
7.0M drwxr-xr-x 8 kk staff 272 Jun 10 13:26 doc

$ duls [bM]*
28K  -rw-r--r-- 1 kk staff 24816 Jun 10 13:26 Makefile
4.0K -rw-r--r-- 1 kk staff 75 Jun 9 16:03 Makefile.am
24K  -rw-r--r-- 1 kk staff 24473 Jun 10 13:26 Makefile.in
128K drwxr-xr-x 8 kk staff 272 Jun 9 16:03 build

The paste utility creates columns from its input according to the specification that you give it. Given two input files, it puts them side by side, with a tab as separator.

We give it the output of du -hs -- "$@" | cut -f1 as the first file (input stream, really) and the output of ls -ldf -- "$@" as the second file.

In the function, "$@" will evaluate to the list of all command line arguments, each in double quotes. It will therefore handle globbing characters and path names with spaces, etc.

The double minuses ( -- ) signal the end of command line options to du and ls . Without these, saying duls -l would confuse du , and any option for du that ls doesn't have would confuse ls (and options that exist in both utilities might not mean the same thing; it would be a pretty mess).


The cut after du simply cuts out the first column of the du -hs output (the sizes).

I decided to put the du output on the left, otherwise I would have had to manage a wobbly right column (due to varying lengths of file names).

The command will not accept command line flags.

This has been tested in both bash and in ksh93 . It will not work with /bin/sh .


How to list the 5 largest folders/files on Linux?

but that only listed the folders in order of size and did not show the sizes themselves. Any idea how to see the file sizes too?

2 Answers

If you’re looking at the whole drive, try du -h / | sort -rh | head -5

  • du -h lists every directory under the given path with its size, in human-readable format
  • sort -rh sorts in reverse order, comparing the human-readable sizes
  • head -5 returns the top 5 results (you could also leave out -r , keep the sort, and use tail -n 5 instead)
  • Instead of / you can use any given directory to list only what is below that point in your file system.

Below is a much more usable command that lists the largest folders in /home . You can replace /home with / as well.

sudo du -a --max-depth=2 --human-readable --time --exclude=.* /home 2>/dev/null | sort --human-numeric-sort --reverse | head -n 15 

The great thing about the above command is that it only recurses up to 2 levels, which eliminates a lot of noise.


List all directories sorted by size in descending order

I have a requirement to sort all directories of the current directory in descending order by size. I tried the following: du -sh * | sort -rg . It lists all the folders with their sizes, but it sorts only by the numeric value, so it is not sorting correctly: a 100 MB directory should be listed before a 200 KB one. Any help would be appreciated.

It didn't work for me on a Mac. I get the following issue: du -sh * | sort -rh gives sort: invalid option -- h. Try `sort --help' for more information.

3 Answers

-g is for floats. For human-readable output, use the human-readable sort:
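du -sh * | sort -rh    # -h in sort compares human-readable sizes (K, M, G)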

If you have the numfmt utility from coreutils, you can use a numeric sort followed by numfmt formatting:

du -B 1 -s * | sort -rn | numfmt --to=iec -d$'\t' --field=1 


Yes, it works partially, but it's not listing sizes as MB and KB. It's not displaying the human-readable suffix for the size; it's displaying the plain total in bytes.

I prefer to just go straight to comparing bytes.
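A sketch of that approach (apparent sizes in bytes via GNU du, sorted numerically):

du -sb * | sort -rn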

sort -n sorts numerically. Obviously, -r reverses.

104857600 wbxtra_RESIDENT_07202018_075931.wbt
815372    wbxtra_RESIDENT_07192018_075744.wbt
215310    Slack Crashes
148028    wbxtra_RESIDENT_07182018_162525.wbt
144496    wbxtra_RESIDENT_07182018_163507.wbt
141688    wbxtra_RESIDENT_07182018_161957.wbt
56617     Notification Cache
20480     ~DFFA6E4895E749B423.TMP
16384     ~DF543949D7B4DF074A.TMP
13254     AdobeARM.log
3614      PhishMeOutlookReporterLoader.log
3448      msohtmlclip1/01
3448      msohtmlclip1
512       ~DF92FFF2C02995D884.TMP
28        ExchangePerflog_8484fa311d504d0fdcd6c672.dat
0         WPDNSE
0         VPMECTMP
0         VBE
