Get a list of all files in a folder and its sub-folders into a file
How do I get a list of all files in a folder, including all the files within all the subfolders, and put the output in a file?
7 Answers
You can do this on the command line, using the -R switch (recursive) and then redirecting the output to a file, like so:
ls -R > filename1
This will make a file called filename1 in the current directory, containing a full directory listing of the current directory and all of the sub-directories under it.
You can list directories other than the current one by specifying the full path, e.g.:
ls -R /var > filename2
This will list everything in and under /var and put the results in a file called filename2 in the current directory. It works on directories owned by another user, including root, as long as you have read access to the directories.
You can also list directories you don't have access to, such as /root, with the use of the sudo command, e.g.:
sudo ls -R /root > filename3
This would list everything in /root, putting the results in a file called filename3 in the current directory. Since most Ubuntu systems have nothing in this directory, filename3 will not contain anything, but it would work if the directory were not empty.
Maybe telling the person to cd into the directory first could be added to the answer. Also, this works fine if I own the directory, but when I tried it in a directory owned by, say, root, it didn't: I got the usual permission denied, and sudo followed by your command also gave permission denied. Is there a workaround without logging in as root?
Well, I did say "current" directory. The correct use of cd might be the subject of another question, and I'm sure it has been. You can list directories owned by root with ls -R as long as you have read access to them. It's hard to imagine why you'd want to list directories owned by root to which you don't have read access, but sudo does indeed work if you give the full path. I'm adding examples for both of these, but excluding the use of cd.
Just use the find command with the directory name. For example, to see the files and all files within folders in your home directory, use:
find ~
Also check find's GNU info page by running info find in a terminal.
This is the most powerful approach. find has many parameters to customize output format and file selection.
That's the best approach in my opinion. Simple and practical. You could also do $ find . > output if there are many directories.
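As a hedged illustration of those find parameters (the folder and output names here are made up, and -printf is specific to GNU find):
find ~/myfolder -type f > filelist.txt                       # regular files only, saved to a file
find ~/myfolder -type f -printf '%s\t%p\n' > filelist.txt    # GNU find: size in bytes, a tab, then the path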
tree
An alternative to recursive ls is the command-line tool tree, which comes with quite a lot of options to customize the format of the output displayed. See the manpage for tree for all options.
For example,
tree --charset=ascii
will give you the same as tree, using other (plain ASCII) characters for the lines, and
tree -a
will display hidden files too.
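Since the goal of the question is to get the listing into a file, a minimal sketch (the folder and file names are only examples):
tree -a ~/myfolder > tree.txt    # full tree, including hidden files, written to tree.txt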
- Go to the folder you want to get a content list from.
- Select the files you want in your list (Ctrl+A if you want the entire folder).
- Copy the content with Ctrl+C.
- Open gedit and paste the content using Ctrl+V. It will be pasted as a list and you can then save the file.
This method will not include subfolder content, though.
You could also use the GUI counterpart to Takkat’s tree suggestion which is Baobab. It is used to view folders and subfolders, often for the purpose of analysing disk usage. You may have it installed already if you are using a GNOME desktop (it is often called disk usage analyser).
sudo apt-get install baobab
You can select a folder and also view all its subfolders, while also getting the sizes of the folders and their contents. You just click the small down arrow to view a subfolder within a folder. It is very useful for gaining a quick insight into what you've got in your folders and can produce viewable lists, but at the present moment it cannot export them to file. It has been requested as a feature, however, at Launchpad. You can even use it to view the root filesystem if you use gksudo baobab.
(You can also get a list of files with their sizes by using ls -shR ~/myfolder and then redirecting the output to a file.)
Read file names from a directory in Bash
I need to write a script that reads all the file names from a directory and then, depending on the file name, concatenates all the files that contain, for example, R1 in the name (and likewise for R2). Can anyone give me some tips on how to do this? The only thing I was able to do is:
#!/bin/bash
FILES="path to the files"
for f in $FILES
do
  cat $f
done
3 Answers
To make the smallest change that fixes the problem:
dir="path to the files" for f in "$dir"/*; do cat "$f" done
To accomplish what you describe as your desired end goal:
shopt -s nullglob
dir="path to the files"
substrings=( R1 R2 )
for substring in "${substrings[@]}"; do
  cat /dev/null "$dir"/*"$substring"* >"$substring.out"
done
Note that cat can take multiple files in one invocation — in fact, if you aren’t doing that, you usually don’t need to use cat at all.
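As a hedged illustration of that point (the names are made up):
cat "$dir"/*R1* > R1.out    # legitimate use: one cat invocation concatenates many files
grep pattern < "$f"         # single file: a plain redirect works, no cat needed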
@LelandReardon It will do that if your directory is empty (or the glob expression otherwise returns no results). To turn that off and return an empty list in that case, run shopt -s nullglob.
If your filenames contain spaces, print $9 will only pick out the first part. And see the other caveats in Why you shouldn't parse the output of ls.
You expected for f in $FILES to loop over all the file names stored in the directory named by the variable FILES, and were disappointed to observe that the value of FILES itself was the only item processed in the for loop.
To create a list of files from a value that points to a directory, you have to provide a file-name pattern which, when the shell evaluates $FILES, is applied to the file system and expands to the list of matching directory and file names.
This can be done by appending /* to the directory string stored in FILES. The $ character directs the shell to substitute the value stored in FILES, and the trailing * makes the shell expand that value as a glob, replacing $FILES with the list of entries it matches. A bare * matches every entry in the directory, so the list will contain not only files but also sub-directories, if there are any.
In other words, if you change the assignment to:
FILES="path to the files/*"
the script will then behave as you expected.
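Putting it together, the asker's script with only that one change might look like this (a sketch; /var/files is an illustrative path, and note that $FILES must stay unquoted for the glob to expand, so this still breaks if the stored path itself contains spaces):
#!/bin/bash
FILES="/var/files/*"   # the glob pattern is stored in the variable
for f in $FILES        # deliberately unquoted so the shell expands the glob
do
  cat "$f"
done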
Read all files in a folder and subfolders - progress and size
This command finds all files in the current folder and subfolders, prints the name of each file, and copies each of them to /dev/null. At the end it shows how much time it took to copy all the files. What I need is to count (show) all copied bytes at the end (so I can compute the read speed; caching doesn't matter) and/or show each file's size beside its name. If it were possible to show progress for each file (pv), that would be great! For this purpose I'm using Cygwin and its bash shell, but the script should also work on real Linux systems. EDIT: The idea is to read the files, not to copy them (rsync).
I was afraid of getting rsync answers even before the first answer arrived; I've edited the question to address that. I've already read other similar questions, but didn't find even a direction that would solve the problem.
2 Answers
Not sure I understand your question fully, but what about:
find . -type f -exec pv -N {} {} \; > /dev/null
./file1: 575kB 0:00:00 [1.71GB/s] [=======================>] 100%
./file2: 15.2GB 0:00:07 [2.22GB/s] [==>                    ]  15% ETA 0:00:38
Very nice! Still only part of the way there, though: what about the sum of all copied bytes? Can we sum each file's size into a variable along the way?
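The thread leaves this open, but one hedged possibility (assuming GNU find for -printf) is to compute the total size up front and pipe everything through a single pv, which then reports the cumulative byte count and overall progress:
total=$(find . -type f -printf '%s\n' | awk '{s += $1} END {print s}')   # sum of all file sizes in bytes
find . -type f -exec cat {} + | pv -s "$total" > /dev/null               # one byte counter and progress bar for the whole read
Here pv -s tells pv the expected total so it can display a percentage alongside the running byte count.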
Rather than using cp and find for this task, you might want to think about using rsync instead.
Example
$ time rsync -avvz -O --stats --checksum --human-readable \
    --acls --itemize-changes --progress \
    --out-format='[%t] [%i] (Last Modified: %M) (bytes: %-10l) %-100n' \
    "<source>" "<destination>" | tee /path/to/log.txt
This will generate a report that looks like this.
$ time rsync -avvz -O --stats --checksum --human-readable --acls \
    --itemize-changes --progress \
    --out-format='[%t] [%i] (Last Modified: %M) (bytes: %-10l) %-100n' \
    "How_to_Write_Shared_Libraries" "/home/saml/newdir/." | tee ~/rsync.txt
details of each file transferred
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
[2014/05/31 15:12:34] [cd+++++++++] (Last Modified: 2014/02/21-15:42:44) (bytes: 4096      ) How_to_Write_Shared_Libraries/
[2014/05/31 15:12:34] [>f+++++++++] (Last Modified: 2013/12/06-19:59:22) (bytes: 766590    ) How_to_Write_Shared_Libraries/dsohowto.pdf
     766.59K 100%   20.00MB/s    0:00:00 (xfer#1, to-check=1/3)
[2014/05/31 15:12:34] [>f+++++++++] (Last Modified: 2014/02/21-15:42:44) (bytes: 44        ) How_to_Write_Shared_Libraries/url.txt
          44 100%    1.23kB/s    0:00:00 (xfer#2, to-check=0/3)
total: matches=0  hash_hits=0  false_alarms=0  data=766634
stats about the transfer as a whole
rsync[5923] (sender) heap statistics:
  arena:        1073152   (bytes from sbrk)
  ordblks:            5   (chunks not in use)
  smblks:             1
  hblks:              2   (chunks from mmap)
  hblkhd:        401408   (bytes from mmap)
  allmem:       1474560   (bytes from sbrk + mmap)
  usmblks:            0
  fsmblks:           96
  uordblks:      410512   (bytes used)
  fordblks:      662640   (bytes free)
  keepcost:      396928   (bytes in releasable chunk)
rsync[5926] (server receiver) heap statistics:
  arena:         286720   (bytes from sbrk)
  ordblks:            2   (chunks not in use)
  smblks:             5
  hblks:              3   (chunks from mmap)
  hblkhd:        667648   (bytes from mmap)
  allmem:        954368   (bytes from sbrk + mmap)
  usmblks:            0
  fsmblks:          384
  uordblks:      180208   (bytes used)
  fordblks:      106512   (bytes free)
  keepcost:      102336   (bytes in releasable chunk)
rsync[5925] (server generator) heap statistics:
  arena:         135168   (bytes from sbrk)
  ordblks:            2   (chunks not in use)
  smblks:             6
  hblks:              2   (chunks from mmap)
  hblkhd:        401408   (bytes from mmap)
  allmem:        536576   (bytes from sbrk + mmap)
  usmblks:            0
  fsmblks:          464
  uordblks:       88688   (bytes used)
  fordblks:       46480   (bytes free)
  keepcost:       32800   (bytes in releasable chunk)
summary stats of the transfer
Number of files: 3
Number of files transferred: 2
Total file size: 766.63K bytes
Total transferred file size: 766.63K bytes
Literal data: 766.63K bytes
Matched data: 0 bytes
File list size: 143
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 667.27K
Total bytes received: 54
sent 667.27K bytes  received 54 bytes  1.33M bytes/sec
total size is 766.63K  speedup is 1.15
real    0m0.092s
user    0m0.053s
sys     0m0.008s
Bash script to read all the files in a directory
How do I loop through a directory? I know there is for f in /var/files; do echo $f; done. The problem with that is it spits out all the files inside the directory all at once. I want to go one by one and be able to do something with the $f variable. I think a while loop would be best suited for that, but I cannot figure out how to actually write it. Any help would be appreciated.
The for loop is exactly right, but you are looping over a single item, the literal directory name /var/files. Your problem description is incorrect; the program you posted will simply echo /var/files. I suspect you may want for f in /var/files/*. Take care to use double quotes around "$f" everywhere.
3 Answers
A simple loop should be working:
for file in /var/*
do
  # whatever you need with "$file"
done
@Mu_Qiao: I have two commands in the shell script after the do. The first echoes $file and the second echoes "hi". I have 10 files in the directory, and I'm getting the ten filenames and then the hi, rather than 1, hi, 2, hi, 3, hi, etc.
To write it with a while loop you can do:
ls -f /var | while read -r file; do cmd "$file"; done
The primary disadvantage of this is that cmd is run in a subshell, which causes some difficulty if you are trying to set variables. The main advantages are that the shell does not need to load all of the file names into memory, and there is no globbing. When you have a lot of files in the directory, those advantages are important. (That's why I use -f with ls: in a large directory, ls itself can take tens of seconds to run, and -f, which disables sorting, speeds that up appreciably. In such cases, for file in /var/* will likely fail with a glob error.)
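If you want a while loop that also copes with unusual file names, here is a hedged sketch using find instead of parsing ls (cmd stands for whatever you want to run; note that ls -f also lists . and .., and the subshell caveat above still applies):
find /var -maxdepth 1 -type f -print0 |    # NUL-delimited names are safe for spaces and newlines
while IFS= read -r -d '' file; do
  cmd "$file"                              # handle each file, one at a time
done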