Linux: show the size of all folders

How do I get the size of a directory on the command line?

I tried to obtain the size of a directory (containing directories and subdirectories) by using the ls command with the -l option. It seems to work for files (ls -l filename), but if I try to get the size of a directory (for instance, ls -l /home), I get only 4096 bytes, although altogether its contents are much bigger.

1) Strictly speaking, you can’t. Linux has directories, not folders. 2) There’s a difference between the size of a directory (which is a special file holding entries that point to inodes) and the size of the contents of that directory. As others have pointed out, the du command provides the latter, which is what it appears you want.

As you seem to be new, I’ll just point out the helpful -h option you can add to the -l option (i.e. ls -lh) to get the sizes of files printed in human-friendly notation like 1.1M instead of 1130301. The "h" in the du -hs command that @sam gave as the answer for your question about directories also means "human-readable", and it also appears in df -h, which shows the human-readable amounts of used and free space on disk.
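Side by side, the three commands mentioned here look like this (/home is just an example path):

ls -lh /home    # per-file sizes in human-friendly units (1.1M instead of 1130301)
du -hs /home    # one human-readable total for the whole directory tree
df -h           # human-readable used and free space per mounted filesystem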


Explanation

  • du (disk usage) command estimates file_path space usage
  • The options -sh are (from man du ):
 -s, --summarize       display only a total for each argument
 -h, --human-readable  print sizes in human readable format (e.g., 1K 234M 2G)
 -c, --total           produce a grand total
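Putting those options together, a typical invocation looks like this (the paths are placeholders):

du -sh /path/to/directory               # one human-readable total for the directory
du -shc /path/to/dir1 /path/to/dir2     # several arguments plus a grand total line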

It works very nicely with find, e.g. to count the amount of space used by specific subdirectories in the current path: $ find . -type d -name "node_modules" -prune -exec du -sh {} \;

I’m looking right now at a folder I just copied from an external drive. It contains four files (no hardlinks). du -ba $folder reports that each of these files is identical in size across the copied folders, but the totals at the folder level do not match. du -bs, du -h, etc. give the same answer. (One folder size is six bytes more than the sum of the files; the other is ~10% larger.) I’ve seen this issue before when comparing a folder on an external drive. Is there any Unix command that will reliably report two folders containing identical files as being the same "size"?

Running du -sh * will give you the cumulative disk usage of all non-hidden directories, files etc. in the current directory in human-readable format.

You can use the df command to know the free space in the filesystem containing the directory:
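For example (the path is a placeholder):

df -h /path/to/directory    # free space on the filesystem that holds this directory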

du -sh * starts throwing "unknown option" errors if any of the files in that dir begin with a dash. Safer to do du -sh -- *

du -sh -- * .* to include dotfiles. This is useful to include a possibly large .git directory, for example. Alternatively in zsh you can setopt globdots to glob dotfiles by default.

What does the -- do? I know it applies to shell built-ins to end option arguments, but du is not a built-in, and I don’t see this usage documented for du: linux.die.net/man/1/du

(--) is used in most bash built-in commands and many other commands to signify the end of command options, after which only positional parameters are accepted. (source)
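A small hypothetical demonstration of why the -- matters (the filename is made up):

touch ./-dashed-name    # create a file whose name starts with a dash
du -sh *                # fails: -dashed-name is parsed as an option
du -sh -- *             # -- ends option parsing, so it is treated as a filename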

du is your friend. If you just want to know the total size of a directory then jump into it and run:
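(The command itself is not quoted in this excerpt; presumably it is du -sh run from inside the directory.)

cd /path/to/directory    # placeholder path
du -sh                   # prints one human-readable total for the current directory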

If you would also like to know how much disk space each sub-folder takes up, you could extend this command to:
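(The exact command is not quoted in this excerpt; based on the description that follows, it is presumably something along these lines.)

du -h --max-depth=1 | sort -hr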

which will give you the size of all sub-folders (level 1). The output will be sorted (largest folder on top).

It seems on some (perhaps older?) versions of Linux, sort does not have an -h switch, and therefore the next best command I could find is: du -c --max-depth=1 | sort -rn

To avoid the line for the current directory in the result, just add a star (idea from Pacifist, above): du -h --max-depth=1 * | sort -h

du can be complicated to use since you have to seemingly pass 100 arguments to get decent output. And figuring out the size of hidden folders is even tougher.

Make your life easy and use ncdu .
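If it is not already installed, ncdu is packaged for most distributions; on a Debian/Ubuntu-style system, for example:

sudo apt install ncdu    # assumes an apt-based distribution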

ncdu

You get per folder summaries that are easily browsable.

Checked out ncdu and would like to point out to others: when you’re hunting for the files that are bloating some directory, this utility is extremely useful as it displays size bars/indicators which make the culprit(s) stand out. Overall it offers the right amount of interactivity, which may be particularly useful in command-line-only environments.

Others have mentioned du , but I would also like to mention Ncdu — which is an ncurses version of du and provides interactivity: You can explore the directory hierarchy directly and see the sizes of subdirectories.

The du command shows the disk usage of the file.

The -h option shows results in human-readable form (e.g., 4k, 5M, 3G).

All of the above examples will tell you the size of the data on disk (i.e. the amount of disk space a particular file is using, which is usually larger than the actual file size). There are some situations where these will not give you an accurate report, if the data is not actually stored on this particular disk and only inode references exist.

In your example, you have used ls -l on a single file, which will have returned the file’s actual size, NOT its size on disk.

If you want to know the actual file sizes, add the -b option to du.
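For comparison, a minimal sketch of the two views of the same directory (the path is a placeholder):

du -sh /path/to/directory    # space actually allocated on disk, human-readable
du -sb /path/to/directory    # apparent size in bytes (GNU du; -b implies --apparent-size)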

Yes. I’m using sdfs, which compresses and dedups the files, so I couldn’t figure out why it was reporting such low numbers. The actual size of the files (as listed by ls) can be found by using: du -b

This answer shows how much disk space you have left on the current drive and then tells you how much every file/directory takes up. e.g.,

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2       206G  167G   29G  86% /

115M    node_modules
2.1M    examples
 68K    src
4.0K    webpack.config.js
4.0K    README.md
4.0K    package.json
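The command that produced output of this shape is not quoted above; a likely combination (a sketch, exact flags assumed) is:

df -h .                   # free space on the filesystem of the current directory
du -sh -- * | sort -rh    # size of each direct child, largest first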

Personally I think this is the best approach if you don’t want to use ncdu.

Thank you! A command to see the size of just the direct children, avoiding the huge wall of text that appears when you use the regular "recursive" version.

In order to get the total size of files under a directory, you can select the file type with find:

find -type f -print0 | xargs -0 stat -c %s | awk '{s += $1} END {print s}'
find -not -type d -print0 | xargs -0 stat -c %s | awk '{s += $1} END {print s}'

Why not use du ?

The du command is easier but it will count all types of files, and you don’t have an option to change it. For example, assuming the current directory has a file, an empty dir and a symlink:

$ ls -AlF
total 8,192
-rw-r--r-- 1 nordic nordic    29 Mar 28 19:05 abc
drwxr-xr-x 2 nordic nordic 4,096 Mar 28 19:06 gogo/
lrwxrwxrwx 1 nordic nordic     3 Mar 28 19:06 s_gogo -> abc
$ find -type f -print0 | xargs -0 stat -c %s | awk '{s += $1} END {print s}'
29
$ du -sb
8224    .

I would use -not -type d to sum not only sizes of ordinary files ( -type f ) but also sizes of symbolic links and so on.

This is great, because you don’t get the overhead required to store the files, but only the size of the files themselves.

Here is a function for your .bash_aliases

# du with mount exclude and sort
function dusort () {
    DIR=$(echo $1 | sed 's#\/$##')
    du -scxh $(mount | awk '{print $3}' | sort | uniq \
        | sed 's#/#--exclude=/#') $DIR/* | sort -h
}
$ dusort /
0       /mnt
0       /sbin
0       /srv
4,0K    /tmp
728K    /home
23M     /etc
169M    /boot
528M    /root
1,4G    /usr
3,3G    /var
4,3G    /opt
9,6G    total
sudo ls -1d */ | sudo xargs -I{} du {} -sh && sudo du -sh

Note that du prints the space that a directory occupies on the media, which is usually bigger than just the total size of all files in the directory, because du takes into account the size of all auxiliary information that is stored on the media to organize the directory in compliance with the file system format.

If the file system is compressible, then du may output an even smaller number than the total size of all files, because files may be internally compressed by the file system and so take less space on the media than the uncompressed information they contain. The same applies if there are sparse files.

If there are hard links in the directory, then du may print a smaller value as well, because several different files in the directory refer to the same data on the media.

To get the straightforward total size of all files in the directory, the following one-line shell expression can be used (assuming a GNU system):

find . ! -type d -print0 | xargs -r0 stat -c %s | paste -sd+ - | bc 
find . ! -type d -printf '%s\n' | paste -sd+ - | bc 

It just sums sizes of all non-directory files in the directory (and its subdirectories recursively) one by one. Note that for symlinks, it reports the size of the symlink (not of the file the symlink points to).


How do I determine the total size of a directory (folder) from the command line?

The -h flag on sort makes it compare "human-readable" size values.

If you want to avoid recursively listing all files and directories, you can supply the --max-depth parameter to limit how deep the listing goes. Most commonly, --max-depth=1

du -h --max-depth=1 /path/to/directory 

I use du -sh or DOOSH as a way to remember it (NOTE: the command is the same, just the organization of commandline flags for memory purposes)

There is a useful option to du called --apparent-size. It can be used to find the actual size of a file or directory (as opposed to its footprint on the disk); e.g., a text file with just 4 characters will occupy about 6 bytes, but will still show up as taking ~4K in regular du -sh output. However, if you pass the --apparent-size option, the output will be 6. man du says: --apparent-size print apparent sizes, rather than disk usage; although the apparent size is usually smaller, it may be larger due to holes in (‘sparse’) files, internal fragmentation, indirect blocks
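A minimal illustration of the difference (the path is a placeholder):

du -sh /path/to/directory                    # footprint on disk, human-readable
du -sh --apparent-size /path/to/directory    # sum of the files' apparent sizes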

This works for OS X too! Thanks, I was really looking for a way to clear up files, both on my local machine, and my server, but automated methods seemed not to work. So, I ran du -hs * and went into the largest directory and found out which files were so large. This is such a good method, and the best part is you don’t have to install anything! Definitely deserved my upvote

@BandaMuhammadAlHelal I think there are two reasons: rounding (du has somewhat peculiar rounding, showing no decimals if the value has more than one digit in the chosen unit), and the classical 1024 vs. 1000 prefix issue. du has an option -B (or --block-size) to change the units in which it displays values, or you could use -b instead of -h to get the "raw" value in bytes.
