How can I get a count of files in a directory using the command line?
I have a directory with a large number of files. I don’t see an ls switch that provides the count. Is there some command-line magic to get a count of files?
tree . | tail, or tree -a . | tail to include hidden files/dirs; note that tree is recursive, if that’s what you want.
@CodyChan: It should be tail -n 1, and even then the count would also include the entries in subdirectories.
20 Answers
Using a broad definition of "file":
ls | wc -l
(note that this doesn’t count hidden files and assumes that file names don’t contain newline characters).
To include hidden files (except . and .. ) and avoid problems with newline characters, the canonical ways are:
find . ! -name . -prune -print | grep -c /
find .//. ! -name . -print | grep -c //
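A quick illustration of why counting slashes instead of lines stays correct when a name contains a newline (run in an otherwise empty directory; the file name is made up for the demonstration):
touch 'file
with newline'
find . ! -name . -prune -print | wc -l      # prints 2: the newline splits the name across two lines
find . ! -name . -prune -print | grep -c /  # prints 1: only one of those lines contains a slash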
wc is a "word count" program. The -l switch causes it to count lines. In this case, it’s counting the lines in the output from ls. This is also how I was always taught to get a file count for a given directory.
That doesn’t get everything in a directory: you’ve missed dot files, and you collect a couple of extra lines, too. An empty directory will still return 1 line, and if you call ls -la, you will get three lines for an empty directory. You want ls -lA | wc -l to skip the . and .. entries. You’ll still be off by one, however.
A corrected approach, which would not double-count files with newlines in the name, is this: ls -q | wc -l. Note, though, that hidden files will still not be counted by this approach, and that directories will be counted.
For narrow definition of file:
find . -maxdepth 1 -type f | wc -l
And you can of course omit the -maxdepth 1 for counting files recursively (or adjust it for desired max search depth).
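For example, sketches of the recursive and depth-limited variants (the depth of 2 is just an illustration; the newline caveat discussed above still applies to wc -l):
find . -type f | wc -l              # regular files, recursing into all subdirectories
find . -maxdepth 2 -type f | wc -l  # regular files, at most two directory levels deep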
A corrected approach, that would not double count files with newlines in the name, would be this: find -maxdepth 1 -type f -printf "\n" | wc -l
I have found du --inodes useful, but I’m not sure which version of du it requires. It should be substantially faster than alternative approaches using find and wc.
On Ubuntu 17.10, the following works:
du --inodes       # all files and subdirectories
du --inodes -s    # summary
du --inodes -d 2  # depth 2 at most
Combine with | sort -nr to sort descending by number of containing inodes.
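For instance, a sketch of that combination, listing the directories that contain the most inodes first (requires a du with --inodes support, as noted above):
du --inodes -d 2 | sort -nr | head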
Thanks for sharing! I searched for "count" in the du man page, as in "I want to count the files", but it’s not documented with that word. Note that any answer using wc -l will be wrong when any name contains a newline character.
$ ls --help | grep -- ' -1'
  -1                         list one file per line
$ wc --help | grep -- ' -l'
  -l, --lines                print the newline counts
@Dennis, that’s interesting; I didn’t know that an application could tell whether its output was going to a pipe.
I +1’ed this version since it is more explicit. And yes, ls does use -1 automatically if its output is piped (try it: ls | cat), but I find spelling out -1 clearer.
In my tests it was significantly faster to also provide the -f option to avoid ls sorting the filenames. Unfortunately you still get the wrong answer if your filenames contain newlines.
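A sketch of that unsorted variant (with GNU ls, -f implies -a, so . and .. are included and are subtracted here; -q is added to keep one line per entry even with odd names):
echo $(( $(ls -fq | wc -l) - 2 ))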
Probably the most complete answer using the ls / wc pair is
ls -Aq | wc -l
if you want to count dot files as well, where:
- -A is to count dot files, but omit . and .. .
- -q makes ls replace nongraphic characters, specifically the newline character, with ? , making the output one line per file
To get one-entry-per-line output from ls in a terminal (i.e. without piping it into wc ), the -1 option has to be added.
(behaviour of ls tested with coreutils 8.23)
As you said, -1 is not needed. As to "it handles newlines in filenames sensibly with console output", this is because of the -q switch (which you should use instead of -b because it’s portable), which "Forces each instance of non-printable filename characters and tab characters to be written as the question-mark ( '?' ) character. Implementations may provide this option by default if the output is to a terminal device." So e.g. ls -Aq | wc -l to count all files/dirs or ls -qp | grep -c / to count only non-hidden dirs etc.
Currently includes directories in its file count. To be most complete we need an easy way to omit those when needed.
@JoshHabdas It says «probably». 😉 I think the way to omit directories would be to use don_crissti’s suggestion with a slight twist: ls -qp | grep -vc / . Actually, you can use ls -q | grep -vc / to count all (non-hidden) files, and adding -p makes it match only regular files.
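Putting the ls variants from this thread side by side (a summary sketch; -p appends / to directories so grep can tell the two kinds apart):
ls -Aq | wc -l        # all entries, hidden ones included, . and .. excluded
ls -qp | grep -c /    # non-hidden directories only
ls -qp | grep -vc /   # non-hidden entries that are not directories
ls -Aqp | grep -vc /  # the same, but counting hidden entries as well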
If you know the current directory contains at least one non-hidden file:
set -- *
echo "$#"
This is obviously generalizable to any glob.
In a script, this has the sometimes unfortunate side effect of overwriting the positional parameters. You can work around that by using a subshell or with a function (Bourne/POSIX version) like:
count_words () {
  eval 'shift; '"$1"'=$#'
}
count_words number_of_files *
echo "There are $number_of_files non-dot files in the current directory"
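The subshell workaround mentioned above can be as small as a command substitution, which runs in its own subshell and therefore leaves the caller’s positional parameters untouched (the variable name count is only for this example):
count=$(set -- *; echo "$#")
echo "There are $count non-dot files in the current directory"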
An alternative solution is $(ls -d -- * | wc -l) . If the glob is * , the command can be shortened to $(ls | wc -l) . Parsing the output of ls always makes me uneasy, but here it should work as long as your file names don’t contain newlines, or your ls escapes them. And $(ls -d -- * 2>/dev/null | wc -l) has the advantage of handling the case of a non-matching glob gracefully (i.e., it returns 0 in that case, whereas the set * method requires fiddly testing if the glob might be empty).
If file names may contain newline characters, an alternative is to use $(ls -d ./* | grep -c /) .
Any of those solutions that rely on passing the expansion of a glob to ls may fail with an "argument list too long" error if there are a lot of matching files.
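In bash specifically, a glob expanded into an array sidesteps both the positional-parameter issue and the "argument list too long" failure, since no external command is executed and ARG_MAX never comes into play; a bash-only sketch:
shopt -s nullglob   # make a non-matching glob expand to nothing instead of itself
files=(*)
echo "${#files[@]}"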
How to count the total number of files/folders on a system?
How can I count the number of all files/folders that exist on a system, using the command-line? I can find it out using a GUI, simply by opening the properties window for the entire / folder, but it would be nice to know how to do it using the command-line. Would I need a whole series of commands, or will just one be possible?
6 Answers
Since file / folder names can contain newlines:
sudo find / -type f -printf '.' | wc -c
sudo find / -type d -printf '.' | wc -c
This will count any file / folder under the / directory. But as muru points out, you might want to exclude virtual / other filesystems from the count (the following will exclude any other mounted filesystem):
find / -xdev -type f -printf '.' | wc -c
find / -xdev -type d -printf '.' | wc -c
- sudo find / -type f -printf '.' : prints a dot for each file in / ;
- sudo find / -type d -printf '.' : prints a dot for each folder in / ;
- wc -c : counts the number of characters.
Here’s an example of how not taking care of newlines in file / folder names may break other methods such as find / -type f | wc -l, and of how using find / -type f -printf '.' | wc -c actually gets it right:
% ls
% touch "file
dquote> with newline"
% find . -type f | wc -l
2
% find . -type f -printf '.' | wc -c
1
If STDOUT is not a terminal, find will print each file / folder name literally; this means that a file / folder name containing a newline will be printed across two different lines, and that wc -l will count two lines for a single file / folder, ultimately printing a result off by one.
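If you want both counts in one pass, GNU find can print just a type letter per entry, which also sidesteps the newline problem since no names are printed at all; a hedged sketch:
sudo find / -xdev -printf '%y\n' | sort | uniq -c   # counts per type: "d" directories, "f" regular files, "l" symlinks, ...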
sudo find / -type f | wc -l
sudo find / -type d | wc -l
(sudo to prevent access errors)
f for files, d for directories.
The /proc/ filesystem will error out but I do not consider those files 😉
If you really want the total number of objects in your filesystems, use df -i to count inodes. You won’t get the breakdown between directories and plain files, but on the plus side it runs near-instantly. The total number of used inodes is something filesystems already track.
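For example (plain df -i is portable; the --output field selection is GNU-specific):
df -i                        # inode totals and usage for every mounted filesystem
df --output=target,iused     # GNU df: just the mount point and its used-inode count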
If you want to use one of the find-based suggestions, don’t just run it on /. Use find -xdev on a list of mount points generated by something like findmnt --list -v -U -t xfs,ext3,ext4,btrfs,vfat,ntfs -o TARGET. That doesn’t exclude bind mounts, though, so files under bind mounts will get counted twice. findmnt is pretty cool.
Also, surely there’s a straightforward way to list all your "disk" mounts without having to list explicit filesystem types, but I’m not sure exactly what it is.
As suggested by another answer, use find -printf '.' | wc -c to avoid any possible problems counting funny characters in filenames. Use -not -type d to count non-directory files. (You don’t want to exclude your symlinks, do you?)
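Putting those two suggestions together, a rough sketch (the filesystem-type list is only an example, and files under bind mounts would still be counted twice):
findmnt --list -n -t xfs,ext3,ext4,btrfs,vfat,ntfs -o TARGET |
while read -r mnt; do
    printf '%s: ' "$mnt"
    sudo find "$mnt" -xdev -not -type d -printf '.' | wc -c
done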
Count number of files within a directory in Linux? [closed]
I am connecting via ssh to another host to access some data. Unfortunately a bunch of basic commands don’t seem to work on this host. If I use wc it returns "unrecognized command". So I am looking for other options.
Use the tree command. It will give you the tree and at the bottom tell you how many files and directories there are. If you want hidden files also use tree -a .
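For example, the last line of tree’s output is the summary being referred to; note, as pointed out in the comments on the first question, that tree recurses, so the totals cover everything below the current directory:
tree -a . | tail -n 1    # prints a summary line of the form "N directories, M files"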
@vanza "What exactly is the problem with wc?": what if a file has a \n in the file name? Yes, extremely unlikely! But still technically valid and possible.
1 Answer
The suggestion is to list the directory one entry per line and count the lines. Which means:
- ls : list files in dir
- -1 : (that’s a ONE) only one entry per line. Change it to -1a if you want hidden files too
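Since wc was reported unavailable on that host, the line counting can be done by grep instead; a sketch along the lines of the comment below (grep -c '' counts every line, so it behaves like wc -l, including the caveat about newlines in file names):
ls -1 dir | grep -c ''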
No wait, I made a booboo. You are absolutely right Sajad Lfc. ls -1 dir | egrep -c » returns the number of files in dir. Thanks.
@SajadKaruthedath ls -l . | egrep -c '^-' does not count hidden files. I suggest adding the -a flag to ls.
@runios that’s because ls -l returns an additional line at the top adding up the file sizes (the "total" line). You should use ls -1 and not ls -l. Also, if one wants hidden files but without the directories . and .., you should use ls -1A | wc -l
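To make the difference described above concrete (a small sketch):
ls -l  | wc -l    # off by one: the "total ..." summary line is counted too
ls -1  | wc -l    # one non-hidden entry per line
ls -1A | wc -l    # hidden entries included, . and .. excluded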
An effective native way without using a pipe: du --inodes

[root@cs-1-server-01 million]# du --inodes
1000001 ./vdb.1_1.dir
1000003 .
[root@cs-1-server-01 million]#