How to find which files and folders were deleted recently in Linux?
I just want to know which files and folders were deleted. Recovering those deleted files and folders is not important for me.
You should tell us which filesystem you are using. With ext2, ext3, or ext4, for example, you could probably use the ext3grep utility to find information about deleted files. With some scripting it should be possible to put together a simple application that lists deleted files under a specific directory. These utilities, however, need raw access to the disk and as such are extremely dangerous if not used properly (non-blocking, read-only operations should be completely safe, as long as you remember that concurrent writes to the disk can cause the current operation to return broken or incorrect data).
If you deleted the files from the command line, then the history command is your friend: it shows you recently executed commands, so you can see which rm invocations were run.
3 Answers
Use find to search by modification time. For example, to find files touched in the last 3 days:
find /home/sam/officedocuments -mtime -3
For "older than 3 days", use +3.
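To see the two cutoffs side by side, here is a quick sketch using a throwaway demo directory (the paths are examples, not part of the original answer):

```shell
# Demo of -mtime cutoffs; /tmp/mtime-demo is a throwaway example directory.
mkdir -p /tmp/mtime-demo
touch /tmp/mtime-demo/new.txt                    # modified just now
touch -d '10 days ago' /tmp/mtime-demo/old.txt   # back-dated 10 days

find /tmp/mtime-demo -type f -mtime -3   # prints new.txt (newer than 3 days)
find /tmp/mtime-demo -type f -mtime +3   # prints old.txt (older than 3 days)
```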
Pretty much impossible. When a file is deleted, it’s simply gone. On most systems, this is not logged anywhere.
"Pretty much impossible" is just plain wrong, and because of this I have to downvote. Deletion times are stored in some filesystems; ext3 is one example, and ext3grep might help when hunting them down. See superuser.com/a/433785/132604, which has information and links to utilities that can be used to find (and possibly recover) deleted files and information about them. When you delete a file, most filesystems do not actually remove it but mark its space as available to be overwritten on demand.
You might be able to restore files from a backup and compare a list of those files with the ones on the filesystem. That would yield a list of missing and newly created files. Grawity's answer already shows you can filter on time, so you can limit that to only the deleted files.
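A minimal sketch of that comparison, assuming GNU find and using throwaway example directories (in practice the first directory would be the restored backup and the second the live tree you are checking):

```shell
# Compare a restored backup's file list with the live tree; names present
# only in the backup were deleted (or renamed) since the backup was taken.
mkdir -p /tmp/backup-copy /tmp/live
touch /tmp/backup-copy/keep.txt /tmp/backup-copy/gone.txt /tmp/live/keep.txt

find /tmp/backup-copy -type f -printf '%P\n' | sort > /tmp/backup.list
find /tmp/live        -type f -printf '%P\n' | sort > /tmp/live.list
comm -23 /tmp/backup.list /tmp/live.list   # prints gone.txt
```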
You should probably install inotify-tools. Then you can use the inotifywait command to listen for events in a specified directory.
Specifically, if you want to watch for deleted files and folders, use this:
inotifywait -m -r -e delete dir_name
and log its output to a file.
Hope this solves your problem.
Sounds like the best approach for this. There's a promising CLI app/daemon named iwatch that you might want to include in your answer. +1 for using the right tools to solve the problem.
ravi, @SampoSarrala: is this applicable if I want to watch files under the / root, taking mounting/unmounting of drives into account? I would guess that, in that case, the only viable way to keep a deletion log would be a kernel module hooking into unlink (see stackoverflow.com/questions/8588386/…); also, man inotifywait states: "--recursive: Warning: … this option while watching … a large tree, it may take quite a while. Also, … the maximum amount of inotify watches per user will be reached. The default maximum is 8192."
I wonder if there is also a way to find out which process deleted the file (say a cron job) where applicable. Have a case of files mysteriously disappearing.
Linux does not generally ask for confirmation before removing files, assuming you’re using rm from the command line.
To find files modified in the last 30 minutes, use touch --date="HH:MM" /tmp/reference (where HH:MM corresponds to 30 minutes ago) to create a reference file with a timestamp from 30 minutes ago. Then use find /home/sam/officedocuments -newer /tmp/reference to list files newer than the reference.
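A runnable sketch of the same idea, assuming GNU touch and date, and substituting a throwaway demo directory for the real one:

```shell
# Build a reference file stamped 30 minutes in the past, then list files
# modified more recently than it. /tmp/newer-demo stands in for the real
# target directory.
mkdir -p /tmp/newer-demo
touch -d '2 hours ago' /tmp/newer-demo/older.txt
touch /tmp/newer-demo/recent.txt

touch --date="$(date -d '30 minutes ago')" /tmp/reference
find /tmp/newer-demo -type f -newer /tmp/reference   # prints recent.txt
```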
If you deleted files using a GUI tool, they may still be in some kind of "trash can". It depends on what you're using for a desktop environment. If you used rm from the command line, then try one of the utilities mentioned in this answer. (Hat tip to @Sampo for that link.)
How to list recently deleted files from a directory?
I'm not even sure this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible. I'm looking for a solution that does not require creating a temporary file containing a snapshot of the original directory structure to compare against, because write access might not always be available.

Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem. Something like: find /some/directory -type f -mmin -10 -deletedFilesOnly

Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
Virtually impossible. Deleted files are deleted in Linux; there is no built-in safety net. If you wanted to try something really fancy, you could try this: unix.stackexchange.com/questions/80270/… but for what you seem to need, that's probably overkill.
The only thing I can think of: if the file was edited and the editor kept a backup copy (such as foo.bar~), search the directory for any backup file that has no corresponding foo.bar, and assume that file was deleted.
The answer depends on the file system used, not on the operating system itself; the only relation to Linux is that what you are asking for is not possible on most popular Linux file systems.
With most file systems commonly seen on Linux, doing this from the CLI is mostly impossible. So could you tell us which Linux OS you are using?
5 Answers
You can use the debugfs utility.
debugfs is an interactive file system debugger for ext2, ext3, and ext4 file systems (not to be confused with the kernel's debugfs pseudo-filesystem).
First, run debugfs /dev/hda13 in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the name of your disk by running df / in the terminal).
Once in debug mode, you can use the lsdel command to list the inodes of deleted files.
When files are removed in Linux, they are only unlinked; their inodes (the on-disk structures recording where the file's data actually lives) are not immediately removed.
To get paths of these deleted files you can use debugfs -R "ncheck 320236", replacing the number with your particular inode.
Inode    Pathname
320236   /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover them from here if necessary.)
lsdel shows you inodes that belonged to files that were deleted; is there any way to see the names of the deleted files?
I'm not sure if there's a way to output all the filenames, but you can retrieve the contents of a file with the cat command, like this: cat <32611>, replacing the number with the inode you want to check.
That just gives you the contents of an unknown set of files. The question asked for the identities of the deleted files, not their contents.
1. You may have zero success if your partition is ext2; it works best with ext4.
2. df / (to find the device backing your root file system)
3. Open the device from step 2 in debugfs; in my case: sudo debugfs /dev/mapper/q4os--desktop--vg-root
4. lsdel
5. q (to exit out of debugfs)
6. sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null (replace the number with an inode from step 4)
Since you are posting an answer to an older question, it would be most helpful to support it with some code and the output that results from using it. Try copying and pasting some of the results, or even a screen print, to demonstrate your code.
Thanks for your comments & answers, guys. debugfs seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple & light solution I was looking for; if I'm understanding correctly, it needs raw access to the block device, and the target directory must live on an ext2/3/4 file system. Unfortunately, that won't really work for my use case; I must be able to provide a solution for existing, "basic" kernels and directories.
As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
- A simple find command piped into wc to count the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
- After the scan interval has elapsed, run the same count again.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
- Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT)); DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi
Unfortunately, this solution won't report anything if the same number of files has been created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of the count, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely, as it would meet the initial requirements!
I’d also like to know if anyone has comments on either of the two approaches.
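For what it's worth, here is one way the full file list (rather than the count) could be held in a shell variable. This is a sketch, assuming bash (for process substitution) and file names without embedded newlines; the directory is a throwaway example:

```shell
# Snapshot the full file list into a variable, and later diff the two
# snapshots: lines present only in the old snapshot are deleted files.
# /tmp/del-scan-demo is a throwaway example directory.
mkdir -p /tmp/del-scan-demo
touch /tmp/del-scan-demo/a.txt /tmp/del-scan-demo/b.txt

DEL_SCAN_ORIG_LIST=$(find /tmp/del-scan-demo -type f | sort)
rm /tmp/del-scan-demo/a.txt                        # simulate a deletion
DEL_SCAN_NEW_LIST=$(find /tmp/del-scan-demo -type f | sort)

comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") \
         <(printf '%s\n' "$DEL_SCAN_NEW_LIST")     # prints the deleted path
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST              # roll the snapshot forward
```

Like the counting approach, this needs no write access to the scanned file system; the snapshots live entirely in the shell's memory.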