How can I get a count of files in a directory using the command line?
I have a directory with a large number of files. I don't see an ls switch that provides the count. Is there some command line magic to get a count of files?
tree . | tail , or tree -a . | tail to include hidden files/dirs. Note that tree is recursive, if that's what you want.
@CodyChan : It should be tail -n 1 , and even then the count would also include the entries in subdirectories.
20 Answers
Using a broad definition of "file":

ls | wc -l

(note that it doesn't count hidden files and assumes that file names don't contain newline characters).
To include hidden files (except . and .. ) and avoid problems with newline characters, the canonical way is one of:
find . ! -name . -prune -print | grep -c /
find .//. ! -name . -print | grep -c //
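As a quick sanity check (the file names below are invented for illustration), one can exercise the robust count in a scratch directory containing a dot file and a name with an embedded newline:

```shell
# Scratch directory with a dot file and a file whose name contains a newline.
dir=$(mktemp -d)
cd "$dir"
touch plain .hidden
touch 'new
line'

# Robust count: every entry yields exactly one line containing "/",
# because the newline inside a name cannot produce a second "./" prefix.
find . ! -name . -prune -print | grep -c /    # prints 3
```

A plain ls | wc -l here would miss .hidden and split the newline-named file across two lines.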
wc is a "word count" program. The -l switch causes it to count lines. In this case, it's counting the lines in the output from ls . This is always the way I was taught to get a file count for a given directory, too.
That doesn't get everything in a directory: you've missed dot files and collected a couple of extra lines, too. With ls -l , an empty directory will still return 1 line (the total line), and if you call ls -la you will get three lines. You want ls -lA | wc -l to skip the . and .. entries. You'll still be off by one, however, because of the total line.
A corrected approach, that would not double count files with newlines in the name, would be this: ls -q | wc -l — though note that hidden files will still not be counted by this approach, and that directories will be counted.
For a narrow definition of file:
find . -maxdepth 1 -type f | wc -l
And you can of course omit the -maxdepth 1 for counting files recursively (or adjust it for desired max search depth).
A corrected approach, that would not double count files with newlines in the name, would be this: find -maxdepth 1 -type f -printf "\n" | wc -l
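A small illustration (scratch files invented here; note that -printf is a GNU find extension): subdirectories and newline-containing names are both handled correctly:

```shell
dir=$(mktemp -d)
cd "$dir"
mkdir subdir
touch a b
touch 'c
d'                  # a regular file with a newline in its name

# Exactly one "\n" is printed per regular file, so wc -l counts files,
# regardless of what characters the names contain.
find . -maxdepth 1 -type f -printf "\n" | wc -l    # prints 3 (subdir excluded)
```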
I have found du --inodes useful, but I'm not sure which version of du it requires. It should be substantially faster than alternative approaches using find and wc .
On Ubuntu 17.10, the following works:
du --inodes        # all files and subdirectories
du --inodes -s     # summary
du --inodes -d 2   # depth 2 at most
Combine with | sort -nr to sort descending by number of containing inodes.
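Assuming a GNU du new enough to have --inodes, a small made-up tree shows that the count includes the directories themselves:

```shell
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/a" "$dir/b" "$dir/sub/c"

# Per-directory inode counts, sorted descending by count.
du --inodes "$dir" | sort -nr

# The summary counts a, b, c, sub, and the top directory itself: 5 inodes.
du --inodes -s "$dir"
```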
Thanks for sharing! I searched for "count" in the du man page, as in "I want to count the files", but it's not documented with that word. Any answer using wc -l will be wrong when any name contains a newline character.
$ ls --help | grep -- ' -1'
  -1                         list one file per line

$ wc --help | grep -- ' -l'
  -l, --lines            print the newline counts
@Dennis that’s interesting I didn’t know that an application could tell its output was going to a pipe.
I +1'ed this version since it is more explicit. Yes, ls does use -1 automatically if its output is piped (try it: ls | cat ), but spelling out -1 makes the intent clearer.
In my tests it was significantly faster to also provide the -f option to avoid ls sorting the filenames. Unfortunately you still get the wrong answer if your filenames contain newlines.
Probably the most complete answer using the ls / wc pair is ls -Aq | wc -l if you want to count dot files, and ls -q | wc -l if not, where:
- -A is to count dot files, but omit . and .. .
- -q makes ls replace nongraphic characters, specifically the newline character, with ? , making the output 1 line for each file
To get one-line output from ls in a terminal (i.e. without piping it into wc ), the -1 option has to be added.
(behaviour of ls tested with coreutils 8.23)
As you said, -1 is not needed. As to "it handles newlines in filenames sensibly with console output", this is because of the -q switch (which you should use instead of -b because it's portable), which "forces each instance of non-printable filename characters and <tab> characters to be written as the question-mark ( '?' ) character. Implementations may provide this option by default if the output is to a terminal device." So e.g. ls -Aq | wc -l to count all files/dirs, or ls -qp | grep -c / to count only non-hidden dirs, etc.
Currently includes directories in its file count. To be most complete we need an easy way to omit those when needed.
@JoshHabdas It says «probably». 😉 I think the way to omit directories would be to use don_crissti’s suggestion with a slight twist: ls -qp | grep -vc / . Actually, you can use ls -q | grep -vc / to count all (non-hidden) files, and adding -p makes it match only regular files.
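Putting those variants side by side in a throwaway directory (names invented for the demo):

```shell
dir=$(mktemp -d)
cd "$dir"
mkdir subdir
touch file .hidden

ls -Aq | wc -l         # 3: file, .hidden and subdir
ls -qp | grep -c /     # 1: non-hidden directories only (-p appends a slash)
ls -qp | grep -vc /    # 1: non-hidden, non-directory entries only
```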
If you know the current directory contains at least one non-hidden file:

set -- *
echo "$#"
This is obviously generalizable to any glob.
In a script, this has the sometimes unfortunate side effect of overwriting the positional parameters. You can work around that by using a subshell or with a function (Bourne/POSIX version) like:
count_words () {
  eval 'shift; '"$1"'=$#'
}
count_words number_of_files *
echo "There are $number_of_files non-dot files in the current directory"
An alternative solution is $(ls -d -- * | wc -l) . If the glob is * , the command can be shortened to $(ls | wc -l) . Parsing the output of ls always makes me uneasy, but here it should work as long as your file names don't contain newlines, or your ls escapes them. And $(ls -d -- * 2>/dev/null | wc -l) has the advantage of handling the case of a non-matching glob gracefully (i.e., it returns 0 in that case, whereas the set * method requires fiddly testing if the glob might be empty).
If file names may contain newline characters, an alternative is to use $(ls -d ./* | grep -c /) .
Any of those solutions that rely on passing the expansion of a glob to ls may fail with an argument list too long error if there are a lot of matching files.
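The fiddly empty-glob test mentioned above can be wrapped in a small POSIX sh helper (count_glob is a name invented here, not taken from the answers):

```shell
# Counts the entries matched by a glob, treating "no match" as zero.
# When a glob matches nothing, the shell passes the pattern through
# literally, so a single non-existent argument means there was no match.
count_glob() {
    if [ "$#" -eq 1 ] && [ ! -e "$1" ] && [ ! -L "$1" ]; then
        echo 0
    else
        echo "$#"
    fi
}

count_glob *      # non-hidden entries in the current directory
```

Because the counting happens in the function's own positional parameters, the caller's "$@" is left untouched.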
Count number of files within a directory in Linux? [closed]
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
I am connecting via ssh to another host to access some data. Unfortunately a bunch of basic commands don't seem to work on this host. If I use wc it returns "unrecognized command". So I am looking for other options.
Use the tree command. It will give you the tree and at the bottom tell you how many files and directories there are. If you want hidden files also use tree -a .
@vanza "What exactly is the problem with wc": what if a file has a \n in the file name? Yes, extremely unlikely! But still technically valid and possible.
1 Answer
Which means: ls : list files in dir
-1 : (that’s a ONE) only one entry per line. Change it to -1a if you want hidden files too
No wait, I made a booboo. You are absolutely right Sajad Lfc. ls -1 dir | egrep -c '^' returns the number of files in dir. Thanks.
@SajadKaruthedath ls -l . | egrep -c '^-' does not count hidden files. I suggest adding the -a flag to ls .
@runios that's because ls -l returns an additional line at the top adding up the file sizes for a total amount. You should use ls -1 and not ls -l . Also, if one wants hidden files but without the directories . and .. , you should use ls -1A | wc -l
An effective native way without using a pipe: du --inodes

[root@cs-1-server-01 million]# du --inodes
1000001	./vdb.1_1.dir
1000003	.
[root@cs-1-server-01 million]#
Find the number of files in a directory
Is there any method in Linux to calculate the number of files in a directory (that is, immediate children) in O(1) (independently of the number of files) without having to list the directory first? If not O(1), is there a reasonably efficient way? I’m searching for an alternative to ls | wc -l .
ls | wc -l will cause ls to do an opendir(), readdir() and probably a stat() on all the files. This will generally be at least O(n).
Yeah, correct, my fault. I was treating O(1) and O(n) as the same, although I should know better.
8 Answers
readdir is not as expensive as you may think. The knack is to avoid stat'ing each file, and (optionally) to skip sorting the output of ls.

/bin/ls -1U | wc -l

avoids aliases in your shell, doesn't sort the output, and lists 1 file per line (not strictly necessary when piping the output into wc).
The original question can be rephrased as «does the data structure of a directory store a count of the number of entries?», to which the answer is no. There isn’t a more efficient way of counting files than readdir(2)/getdents(2).
One can get the number of subdirectories of a given directory without traversing the whole list by stat’ing (stat(1) or stat(2)) the given directory and observing the number of links to that directory. A given directory with N child directories will have a link count of N+2, one link for the «..» entry of each subdirectory, plus two for the «.» and «..» entries of the given directory.
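A quick demonstration of the link-count trick (GNU stat syntax; directory names invented for the demo). Note that some filesystems, e.g. btrfs and overlayfs, report a link count of 1 for directories, in which case the trick does not apply:

```shell
dir=$(mktemp -d)
mkdir "$dir/a" "$dir/b" "$dir/c"

# On traditional filesystems (ext4, xfs, tmpfs) this prints N+2 = 5:
# one link per subdirectory's "..", plus the directory's "." entry
# and the link from its own name in the parent.
stat -c %h "$dir"
```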
However one cannot get the number of all files (whether regular files or subdirectories) without traversing the whole list — that is correct.
The «/bin/ls -1U» command will not get all entries however. It will get only those directory entries that do not start with the dot (.) character. For example, it would not count the «.profile» file found in many login $HOME directories.
One can use either the «/bin/ls -f» command or the «/bin/ls -Ua» command to avoid the sort and get all entries.
Perhaps unfortunately for your purposes, either the «/bin/ls -f» command or the «/bin/ls -Ua» command will also count the «.» and «..» entries that are in each directory. You will have to subtract 2 from the count to avoid counting these two entries, such as in the following:
expr `/bin/ls -f | wc -l` - 2 # Those are back ticks, not single quotes.
The --format=single-column (-1) option is not necessary on the "/bin/ls -Ua" command when piping the "ls" output, as in to "wc" in this case. The "ls" command will automatically write its output in a single column if the output is not a terminal.