Linux sort folders by size

List all directories sorted by size in descending order

I have a requirement to sort all directories of the current directory in descending order by size. I tried the following: du -sh * | sort -rg. It lists all the folders with their sizes, but it is not sorting correctly: a 100 MB directory should be listed before a 200 KB one. Any help would be appreciated.


3 Answers

-g is for floats (general numeric sort); it does not understand size suffixes like K, M, G. For human-readable output use the human-readable sort:

du -sh * | sort -rh 

If you have the numfmt utility from GNU coreutils, you can use a numeric sort and format the result with numfmt:

du -B 1 -s * | sort -rn | numfmt --to=iec -d$'\t' --field=1 

It didn't work for me on macOS. I get the following error:
du -sh * | sort -rh
sort: invalid option -- h
Try `sort --help' for more information.

Yes, it works partially, but it's not listing sizes as MB and KB. It's not displaying the human-readable suffix for the size; it's displaying the plain total in bytes.
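
Where sort -h is unavailable (older macOS/BSD userlands, as in the comment above), one workaround is to sort raw kibibyte counts and format them afterwards. This is only a sketch, not part of the original answers, and assumes a POSIX awk:

du -sk -- * | sort -rn | awk -F'\t' '
  function human(k,  u, i) {            # k is in KiB, as produced by du -k
    split("K M G T P", u, " ")
    i = 1
    while (k >= 1024 && i < 5) { k /= 1024; i++ }
    return sprintf("%.1f%s", k, u[i])
  }
  # note: names containing tabs will be truncated at the first tab
  { printf "%8s\t%s\n", human($1), $2 }'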

I prefer to just go straight to comparing bytes.

sort -n sorts numerically. Obviously, -r reverses.
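
The exact command behind the listing below isn't shown; something along these lines would produce it (a guess, assuming GNU du, where -b reports apparent sizes in bytes):

du -sb -- * | sort -rn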

104857600 wbxtra_RESIDENT_07202018_075931.wbt
815372 wbxtra_RESIDENT_07192018_075744.wbt
215310 Slack Crashes
148028 wbxtra_RESIDENT_07182018_162525.wbt
144496 wbxtra_RESIDENT_07182018_163507.wbt
141688 wbxtra_RESIDENT_07182018_161957.wbt
56617 Notification Cache
20480 ~DFFA6E4895E749B423.TMP
16384 ~DF543949D7B4DF074A.TMP
13254 AdobeARM.log
3614 PhishMeOutlookReporterLoader.log
3448 msohtmlclip1/01
3448 msohtmlclip1
512 ~DF92FFF2C02995D884.TMP
28 ExchangePerflog_8484fa311d504d0fdcd6c672.dat
0 WPDNSE
0 VPMECTMP
0 VBE


Sort all directories based on their size

I’d like to sort all the directories/files in a specific directory based on their size (using du -sh "name"). I need to apply this command to all directories in my location, then sort them based on this result. How can I do that?

6 Answers

With GNU sort and GNU du (which it appears you have, since you state you are using du's -h option):

du -sh -- *  | sort -rh  # Files and directories, or
du -sh -- */ | sort -rh  # Directories only

The output looks something like this:

22G   foo/
21G   bar/
5.4G  baz/
2.1G  qux/
1021M wibble/
4.0K  wobble/

Note that if a file is hard linked into two or more directories, then the sizes reported by du for each directory depend on the order of the directories on the command line. Also, -h is available in du implementations other than GNU.

Sort by sizes (as unformatted numbers of kibibytes) and then turn those into human readable:

du -sk -- * | sort -nr | cut -f2 | xargs du -sh 

This is an improved version based on jabalv’s answer. It works with a GNU as well as a BSD userland.

IFS='\n' du -sk -- * | sort -n | cut -f2 | while read line ; do
    xargs du -sh "$line"
done

4.0K games
2.7M local
6.7M lib32
19M  sbin
152M src
177M include
321M bin
2.2G share
2.9G lib

To reverse the sort order and list the largest files and directories first, use sort -nr .

(1) How is this “an improved version”? It still fails if the directory names contain whitespace. (2) Does IFS='\n' do anything useful? (3) Does setting IFS at the point in your answer where you have set it have any effect on the execution?


This improves on jabalv’s answer, fixing the space in names issue:

du -sk -- * | sort -nr | awk -F '\t' '{ print "\"" $2 "\"" }' | xargs du -sh 

OK, this fixes the problem for spaces in directory names. It still fails for other whitespace. I’ll admit that it is very hard to write complex commands that handle filenames containing newlines correctly, but this also fails for names that contain tabs, or quote characters ( " ). Also, before I fixed it, it could have failed for filenames beginning with - (hyphen/dash). P.S. Please don’t refer to “the above answer”, as different people see the answers in different orders.
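
For completeness, a variant that side-steps the quoting problems described above by keeping every record NUL-terminated; this is a sketch added here, not from the thread, and assumes GNU du and GNU sort:

# NUL-terminated records survive spaces, tabs and even newlines in names
du -0 -sh -- * | sort -z -rh | tr '\0' '\n'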

As inspired by jabalv’s answer, sort by sizes (as unformatted numbers of kibibytes) and then turn those into human readable numbers — without running du a second time:

du -s -- * | sort -n | numfmt --from-unit=1024 --to=iec 
du -s --block-size=1 -- * | sort -n | numfmt --to=iec 
  • You can add the -k option to the first command line (i.e., du -sk ) to specify that output should be in kibibytes, but this seems to be the default.
  • Use */ instead of * to list directories only, as in Chris Down’s answer.
  • Add the -r option to the sort command (i.e., sort -nr ) if you want to sort from high to low.
  • Use --from-unit=1024 because du uses binary prefixes (i.e., K=2^10=1024, M=2^20, etc.) by default.
  • Likewise, use --to=iec to output binary-prefix numbers.
  • This replaces the tabs in du’s output with spaces. Add --padding=-7 to the numfmt command to get the output to look like it has tabs. (I guess it is counting zero-based.)
    (--padding=7 (without the minus sign) would right-justify the numbers.) A combined example follows this list.
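
Putting those options together, a combined sketch (directories only, largest first, left-padded size column; same assumptions as above, i.e. GNU du and numfmt):

du -s --block-size=1 -- */ | sort -rn | numfmt --to=iec --padding=-7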


How to list the size of each file and directory and sort by descending size in Bash?

I found that there is no easy way to get the size of a directory in Bash. I want that, when I type something like ls, it lists the recursive total size of each directory along with the file sizes, sorted by size. Is that possible?

What exactly do you mean by the "size" of a directory? The number of files under it (recursively or not)? The sum of the sizes of the files under it (recursively or not)? The disk size of the directory itself? (A directory is implemented as a special file containing file names and other information.)

@KeithThompson @KitHo The du command estimates file space usage, so you cannot use it if you want to get the exact size.

@ztank1013: Depending on what you mean by "the exact size", du (at least the GNU coreutils version) probably has an option to provide the information.

12 Answers

Simply navigate to the directory and run the following command:

du -a --max-depth=1 | sort -n 

OR add -h for human-readable sizes and -r to print bigger directories/files first:

du -a -h --max-depth=1 | sort -hr 

du -h requires sort -h too, to ensure that, say 981M sorts before 1.3G ; with sort -n only the numbers would be taken into account and they’d be the wrong way round.
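
A two-line toy input (hypothetical sizes and names) makes the difference visible:

printf '981M\tdir1\n1.3G\tdir2\n' | sort -h    # 981M first, then 1.3G (correct ascending order)
printf '981M\tdir1\n1.3G\tdir2\n' | sort -n    # 1.3G first, because only 1.3 and 981 are compared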


This doesn’t list the size of the individual files within the current directory, only the size of its subdirectories and the total size of the current directory. How would you include individual files in the output as well (to answer OP’s question)?

@ErikTrautman to list the files also you need to add -a and use --all instead of --max-depth=1, like so: du -a -h --all | sort -h

Apparently the --max-depth option is not in Mac OS X’s version of the du command. You can use the following instead.
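
A sketch of the BSD/macOS equivalent (-d 1 replaces --max-depth=1, as also used in a comment below):

du -h -d 1 | sort -h
# if your sort lacks -h (see the macOS comment near the top), sort raw kilobyte counts instead:
du -k -d 1 | sort -rn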

Unfortunately this does not show the files, but only the folder sizes. -a does not work with -d either.

To show files and folders, I combined 2 commands: l -hp | grep -v / && du -h -d 1 , which shows the normal file size from ls for files, but uses du for directories.

(This will not show hidden files (dotfiles).)

Use du -sm for MB units etc. I always use

du -smc * | sort -n 

because the total line ( -c ) will end up at the bottom for obvious reasons 🙂

PS:

  • See comments for handling dotfiles
  • I frequently use e.g. 'du -smc /home// | sort -n | tail' to get a feel for where exactly the large bits are sitting

du --max-depth=1 | sort -n, or find . -mindepth 1 -maxdepth 1 | xargs du -s | sort -n for including dotfiles too.

@arnaud576875 find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 du -s | sort -n if some of the found paths could contain spaces.

This is a great variant to get a human readable view of the biggest: sudo du -smch * | sort -h | tail

Command

du -h --max-depth=0 * | sort -hr 

Output

3,5M asdf.6000.gz
3,4M asdf.4000.gz
3,2M asdf.2000.gz
2,5M xyz.PT.gz
136K xyz.6000.gz
116K xyz.6000p.gz
88K  test.4000.gz
76K  test.4000p.gz
44K  test.2000.gz
8,0K desc.common.tcl
8,0K wer.2000p.gz
8,0K wer.2000.gz
4,0K ttree.3

Explanation

  • du displays "disk usage"
  • h is for "human readable" (both in sort and in du)
  • max-depth=0 means du will not show the sizes of subfolders (remove that if you want to show the sizes of every file in every sub-, sub-sub-, … folder; a sketch of that variant follows this list)
  • r is for "reverse" (biggest file first)
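
For example, a sketch of that every-file-at-every-depth variant, trimmed to the twenty largest entries:

du -ah . | sort -rh | head -n 20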

ncdu

When I came to this question, I wanted to clean up my file system. The command line tool ncdu is way better suited to this task.

Just type ncdu [path] in the command line. After a few seconds for analyzing the path, you will see something like this:

ncdu 1.11 ~ Use the arrow keys to navigate, press ? for help
--- / ---------------------------------------------------------
.  96,1 GiB [##########] /home
.  17,7 GiB [#         ] /usr
.   4,5 GiB [          ] /var
    1,1 GiB [          ] /lib
  732,1 MiB [          ] /opt
. 275,6 MiB [          ] /boot
  198,0 MiB [          ] /storage
. 153,5 MiB [          ] /run
.  16,6 MiB [          ] /etc
   13,5 MiB [          ] /bin
   11,3 MiB [          ] /sbin
.   8,8 MiB [          ] /tmp
.   2,2 MiB [          ] /dev
!  16,0 KiB [          ] /lost+found
    8,0 KiB [          ] /media
    8,0 KiB [          ] /snap
    4,0 KiB [          ] /lib64
e   4,0 KiB [          ] /srv
!   4,0 KiB [          ] /root
e   4,0 KiB [          ] /mnt
e   4,0 KiB [          ] /cdrom
.   0,0   B [          ] /proc
.   0,0   B [          ] /sys
@   0,0   B [          ] initrd.img.old
@   0,0   B [          ] initrd.img
@   0,0   B [          ] vmlinuz.old
@   0,0   B [          ] vmlinuz

Delete the currently highlighted element with d , exit with CTRL + c
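
ncdu is usually not installed by default; typical installation commands (assuming the standard package repositories for each platform):

sudo apt-get install ncdu    # Debian/Ubuntu
sudo dnf install ncdu        # Fedora
brew install ncdu            # macOS (Homebrew)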

ls -S sorts by size. Then, to show the size too, ls -lS gives a long ( -l ), sorted by size ( -S ) display. I usually add -h too, to make things easier to read, so, ls -lhS .

Ah, sorry, that was not clear from your post. You want du; it seems someone has posted it already. @sehe: Depends on your definition of real: it is showing the amount of space the directory is using to store itself. (It’s just not also adding in the size of the subentries.) It’s not a random number, and it’s not always 4 KiB.

find . -mindepth 1 -maxdepth 1 -type d | parallel du -s | sort -n 
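
If GNU parallel is not installed, xargs -P gives a similar parallel effect; a sketch (the -P 4 job count is arbitrary):

find . -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -n 1 -P 4 du -s | sort -n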

I think I might have figured out what you want to do. This will give a sorted list of all the files and all the directories, sorted by file size and size of the content in the directories.

(find . -depth 1 -type f -exec ls -s {} \; ; find . -depth 1 -type d -exec du -s {} \;) | sort -n 

[enhanced version]
This is going to be much faster and more precise than the initial version below, and will output the sum of all the file sizes in the current directory:

echo `find . -type f -exec stat -c %s {} \; | tr '\n' '+' | sed 's/+$//g'` | bc 

The stat -c %s command on a file returns its size in bytes. The tr command here is used to overcome an xargs limitation (apparently piping to xargs splits the results over several lines, breaking the logic of my command), so tr takes care of replacing the line feeds with + (plus) signs. sed has the sole goal of removing the last + sign from the resulting string, to avoid complaints from the final bc (basic calculator) command that, as usual, does the math.
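
A shorter route to the same sum (a sketch, assuming GNU stat; BSD/macOS stat uses -f %z instead of -c %s) is to let find batch the arguments and awk do the adding:

find . -type f -exec stat -c %s {} + | awk '{ total += $1 } END { print total+0 }'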


Performance: I tested it on several directories and over ~150,000 files (the current number of files on my Fedora 15 box), with what I believe is an amazing result:

# time echo `find / -type f -exec stat -c %s {} \; | tr '\n' '+' | sed 's/+$//g'` | bc
12671767700

real    2m19.164s
user    0m2.039s
sys     0m14.850s

Just in case you want to make a comparison with the du -sb / command, it will output an estimated disk usage in bytes ( -b option)

As I was expecting, it is a little larger than my command’s calculation because the du utility returns the allocated space of each file and not the actual consumed space.

[initial version]
You cannot use the du command if you need to know the exact total size of your folder because (as per the man page) du estimates file space usage. Hence it will lead you to a wrong result, an approximation (maybe close to the sum you are after, but most likely greater than the actual size you are looking for).

I think there might be different ways to answer your question but this is mine:

ls -l $(find . -type f | xargs) | cut -d" " -f5 | xargs | sed 's/\ /+/g'| bc 

It finds all files under the . directory (change . to whatever directory you like); hidden files are also included, and (using xargs) their names are output on a single line, then a detailed list is produced using ls -l. This (sometimes huge) output is piped to the cut command, and only the fifth field ( -f5 ), which is the file size in bytes, is taken and piped again into xargs, which again produces a single line of sizes separated by blanks. Then a bit of sed magic takes place, replacing each blank space with a plus ( + ) sign, and finally bc (basic calculator) does the math.

It might need additional tuning, and you may have the ls command complaining that the argument list is too long.
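
One way around the argument-list limit (a sketch, not from the original answer) is to let find batch the arguments itself with -exec ... + and sum the fifth ls column directly:

find . -type f -exec ls -l {} + | awk '{ sum += $5 } END { print sum }'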

