Linux file by inode

What is the fastest way to find all the files with the same inode?

But it’s really slow. I would like to find a tool like locate. The real problem comes when you have a lot of files; I suppose the operation is O(n).

3 Answers

There is no mapping from inode to name. The only way is to walk the entire filesystem, which as you pointed out is O(number of files). (Actually, I think it’s Θ(number of files).)

I know this is an old question, but many versions of find have an -inum option to match a known inode number easily. You can do this with a command along these lines, where <inode-number> stands for the inode you are looking for:

find . -inum <inode-number>

This will still run through all files if allowed to do so, but once you get a match you can always stop it manually; I’m not sure if find has an option to stop after a single match (perhaps with an -exec statement?).
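As a side note beyond the original answer: GNU find does have a -quit action that stops the search after the first match, for example (the inode number is a placeholder):

find / -inum <inode-number> -print -quit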

This is much easier than dumping output to a file, sorting etc. and other methods, so should be used when available.

That does the same thing as -samefile, only you have to find the inode yourself. It makes things even slower.
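For comparison, -samefile takes an existing path and matches everything that shares its inode, so you never have to look the number up yourself (the path here is a placeholder):

find / -samefile /path/to/one/known/link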

  • Use find -printf "%i:\t%p\n" or similar to create a listing of all files prefixed by inode, and output it to a temporary file
  • Extract the first field — the inode with ‘:’ appended — and sort to bring duplicates together, then restrict to duplicates, using cut -f 1 | sort | uniq -d, and output that to a second temporary file
  • Use fgrep -f to load the second file as a list of strings to search for, and search the first temporary file with it

(When I wrote this, I interpreted the question as finding all files which had duplicate inodes. Of course, one could use the output of the first half of this as a kind of index, from inode to path, much like how locate works.)
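A minimal sketch of those three steps, assuming GNU find; the temporary file names are placeholders:

find / -printf '%i:\t%p\n' > /tmp/inode-list                    # step 1: inode-prefixed listing
cut -f 1 /tmp/inode-list | sort | uniq -d > /tmp/dup-inodes     # step 2: inodes that occur more than once
fgrep -f /tmp/dup-inodes /tmp/inode-list                        # step 3: all paths sharing those inodes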

On my own machine, I use these kinds of files a lot, and keep them sorted. I also have a text indexer application which can then apply binary search to quickly find all lines that have a common prefix. Such a tool ends up being quite useful for jobs like this.
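If you don’t have such an indexer to hand, something similar can be approximated with the standard look(1) utility, which binary-searches a sorted file for lines starting with a given string; the file name and inode number below are only examples, and both commands assume the same C collation:

LC_ALL=C sort -o /tmp/inode-list /tmp/inode-list     # keep the listing sorted
LC_ALL=C look '393094:' /tmp/inode-list              # binary search for lines with that inode prefix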


Quickly find which file(s) belong to a specific inode number

but it is a very slow search; I feel like there has to be a faster way to do this. Does anybody know a faster method?


@Coren: it’s commonly used if you have a file with, say, a "-" in front. You can do ls -li to find its inode, then: find . -inum <inode> -exec rm -i {} \; This is a belt-and-braces approach to ensuring you can remove the file. Of course, you could also rm -- -filename, or rm ./-filename, or rm "-filename".

@Coren With SELinux, log messages include the inode, but not the full path. So you have to search for the inode to find the file being referred to. (That’s my use case, anyway.)

@Coren For example, when a file has multiple hard links, you’ve spotted that the contents are obsolete and want to delete the file, but you’ve only found one of the file’s names and want to delete the others.

Just use find / -inum <inode-number>. It is much more portable than debugfs and also works much more reliably (it can find paths that do not belong to files on the hard drive, such as devices).
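For instance, reusing the inode number from the debugfs example below and discarding permission errors:

find / -inum 393094 -print 2>/dev/null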

5 Answers

For an ext4 filesystem, you can use debugfs as in the following example:

$ sudo debugfs -R 'ncheck 393094' /dev/sda2 2>/dev/null
Inode    Pathname
393094   /home/enzotib/examples.desktop

The answer is not immediate, but seems to be faster than find .

The output of debugfs can be easily parsed to obtain the file names:

$ sudo debugfs -R 'ncheck 393094' /dev/sda2 | cut -f2 | tail -n2 > filenames 

I should probably have specified the filesystem type. It didn’t occur to me that the method of doing these things would be different for different filesystems. I’m using XFS, so while I’m sure your answer is correct, it won’t help me specifically.

Doesn’t work on an encrypted drive: debugfs: Bad magic number in super-block while trying to open /dev/sda2; /dev/sda2 contains a crypto_LUKS file system; ncheck: Filesystem not open

@LaneRettig You have to run it on the file system device, not the container device. This will be something like /dev/mapper/sda2_crypt .

btrfs

inode-resolve [-v] <ino> <path>
    (needs root privileges) resolve paths to all files with given inode number <ino> in a given subvolume at <path>, ie. all hardlinks

    Options
    -v    verbose mode, print count of returned paths and ioctl() return value
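The full subcommand is btrfs inspect-internal inode-resolve; a usage sketch, reusing the example inode number from above and a placeholder mount point:

sudo btrfs inspect-internal inode-resolve 393094 /mnt/somewhere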

I believe it’s sudo apt install btrfs-tools but I haven’t verified it myself. fsck had already destroyed half my files before I realized I could check which inodes it was going to "fix", so I never tried it out.


Is this instantaneous like searching an indexed record, or does it take time that is proportional to the number of files in the path like searching for a file name?

For XFS this seems to be done using xfs_db(8) and the blockget and ncheck commands:

blockget [-npvs] [-b bno] ... [-i ino] ...
Get block usage and check filesystem consistency. The information is saved for use by a subsequent blockuse, ncheck, or blocktrash command. See xfs_check(8) for more information.

ncheck [-s] [-i ino] ...
Print name-inode pairs. A blockget -n command must be run first to gather the information.

# xfs_db -c 'blockget -n -i 123456' /dev/sde1
inode 123456 add link, now 1
inode 123456 mode 0100644 fmt extents afmt extents nex 1 anex 0 nblk 1 sz 135
inode 123456 nlink 6 not dir
inode 123456 extent [0,822682790,1,0]
setting inode to 6594903486 for block 3/17376422
inode 123456 add link, now 2
inode 123456 add link, now 3
inode 123456 add link, now 4
inode 123456 add link, now 5
inode 123456 add link, now 6
inode 123456 name dir/subdir/foo.bar

I tried -c 'blockget -n -i 6594903486' -c 'ncheck -i 6594903486', but adding ncheck does not contribute any relevant information; the -n flag of blockget already returns the filename. PS: this works only if the filesystem is unmounted, is slow as well (not sure if as slow as find), and returns only one filename even when multiple hard links exist (find does return all the hard links).

You could look at the fsdb command, found on most Unices, and available somewhere for Linux I am sure. This is a powerful command allowing you to access the in-core inode structure of files, so be careful. The syntax is also very terse.

While fsdb won’t actually let you discover the filename of the inode, it does allow you to directly access the inode when you specify it, in essence "porting" you to the file itself (or at least its data block pointers), so it’s quicker in that respect than find ;-).
Your question doesn’t specify what you want to do with the file. Are you perchance decoding NFS filehandles?

Well, I didn’t think what I was going to do with the information was relevant to my question, so I left it out. In my case, it was merely a curiosity question; my xfs_fsr defragmentation spits out which inodes it defragments, and one was extremely fragmented (over 5000 extents) and I was just curious which file it was. find works, it’s just so slow.


I’m trying to fix a problem where my RHEL VM does a full (20-minute!) fsck on every boot, and all I’ve got to go on is the inode number reported in /var/opt/messages as faulty. (Having said that, find -inum didn’t actually find it.)

The basic problem is that most filesystems have no index that works in this direction. If you need to do this kind of thing frequently, your best bet is to set up a scheduled task that scans the filesystem for the information you need, creates a database (using sqlite3, for example) containing that information, and creates an index on the inode number so that file(s) can be located quickly.

#!/bin/bash
#
# Generate an index file
#
SCAN_DIRECTORY=/
DB_DIRECTORY=~/my-sqlite-databases

if [ ! -d "${DB_DIRECTORY}" ] ; then
    mkdir "${DB_DIRECTORY}"
fi

# Remove any old database - or use one created with a filename based on the date
rm -f "${DB_DIRECTORY}/files-index.db"

(
    # Output a command to create a table file_info in the database to hold the information we are interested in
    echo 'create table file_info (inode INTEGER, filepath, filename, numlinks INTEGER, size INTEGER);'

    # Use find to scan the directory and locate all the objects - saving the inode, file path,
    # file name, number of links and file size. This could be reduced to just the inode, file
    # path and file name; if you are looking for files with multiple links, numlinks is useful
    # (select * from file_info where numlinks > 1).
    #
    # find output formats:
    #   %i = inode
    #   %h = path to file (directory path)
    #   %f = filename (no directory path)
    #   %n = number of hard links
    #   %s = size
    #
    # Use find to generate the SQL commands to add the data to the database table.
    find "${SCAN_DIRECTORY}" -printf "insert into file_info (inode, filepath, filename, numlinks, size) values (%i, '%h', '%f', %n, %s);\n"

    # Finally create an index on the inode number so we can locate values quickly
    echo 'create index inode_index on file_info(inode);'

# Pipe all the above commands into sqlite3 and have sqlite3 create and populate a database
) | sqlite3 "${DB_DIRECTORY}/files-index.db"

# Once you have this in place, you can search the index for an inode number as follows
echo 'select * from file_info where inode = 1384238234;' | sqlite3 "${DB_DIRECTORY}/files-index.db"

