Linux test if file is open

Check if a file is open with lsof

I’m using Linux Mint 13 Xfce and I have a file named wv.gold that I want to check, from bash, to see whether it is open in any program (for instance, I opened it in sublime-text and gedit). In many forums people say that if I run lsof | grep filename I should get an exit status of 0 if it’s open or 256 (i.e. 1) if it’s closed, but in fact I get nothing (an empty string) if I grep for "wv.gold", and a short list if I grep for gold. The list is something like:

bash   2045  user cwd DIR 8,1 4096 658031 /home/user/path/to/dir
bash   2082  user cwd DIR 8,1 4096 658031 /home/user/path/to/dir
watch  4463  user cwd DIR 8,1 4096 658031 /home/user/path/to/dir
gedit  16679 user cwd DIR 8,1 4096 658031 /home/user/path/to/dir
lsof   20823 user cwd DIR 8,1 4096 658031 /home/user/path/to/dir
grep   20824 user cwd DIR 8,1 4096 658031 /home/user/path/to/dir
lsof   20825 user cwd DIR 8,1 4096 658031 /home/user/path/to/dir

So I get the path to the directory the file is in, but NOT the path to the file itself (there are other files in that directory), and even then only for the gedit process, not for the sublime-text process. Is there some easy way to see whether a text file is open in any other program? EDIT: It turns out (cf. comments from @mata and @ctn) that some editors load the file and close it immediately, reopening it only when saving. So you can only catch the file open while the editor is still loading a big one (which takes long enough to observe), and the entry disappears right after that.
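A grep-free way to run the same check (my own sketch, not from the thread; the path is just an example) is to hand the file name to lsof directly and look at its exit status: lsof exits 0 when at least one process has the file open and 1 when none does. The 256 people quote is that same exit status 1 as encoded in system()/os.system-style wait statuses. As the EDIT above explains, this only catches files that are held open at the instant of the check.

import subprocess

def is_open(path):
    # lsof exits 0 if some process holds `path` open, non-zero otherwise;
    # we only care about the status, so discard the listing itself.
    result = subprocess.run(
        ["lsof", path],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

print(is_open("/home/user/path/to/dir/wv.gold"))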


How to determine whether a file is open?

My code needs to go through the files in a directory, picking only those which are currently open (for writing) by any other process on the system. The ideal solution would apply to all Unixes, but I’ll settle for a Linux-only one. The program is written in Python, but I can add a custom C function if I have to — I just need to know what API is available for this. One suggestion I found was to go through all file descriptors under Linux’s /proc, resolving their links to see whether they point at the file of interest. But that seems rather heavy. I know, for example, that opening a file increases its reference count — the filesystem will not deallocate the blocks of an opened file, even if it is deleted, until it is closed — the feature relied upon by tmpfile(3). Perhaps a user process can get access to these records in the kernel?

Yeah, lsof — and fuser — scan /proc. But that yields more information than I need — I don’t care which processes have the file open, I just want to know whether any such process exists. Perhaps this information can be obtained more cheaply than by a /proc rescan?

The advantage of scanning /proc is that it is backed by direct kernel calls, not a physical file system. That gives /proc a huge performance advantage over opening and reading a directory, even just to find the names.


The advantage of scanning /proc is that it is the only way to get the information without modifying the kernel.
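For reference, here is a rough sketch of the /proc walk the question describes (my own illustration, not code from the answers; the path is an example). It resolves every /proc/<pid>/fd/* link and stops at the first one pointing at the target. Note that reading other users' fd directories generally requires root, and processes that vanish mid-scan are simply skipped.

import os

def in_use(target):
    target = os.path.realpath(target)
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = "/proc/%s/fd" % pid
        try:
            fds = os.listdir(fd_dir)
        except OSError:  # process exited, or we lack permission
            continue
        for fd in fds:
            try:
                if os.readlink(os.path.join(fd_dir, fd)) == target:
                    return True
            except OSError:  # descriptor closed between listdir and readlink
                continue
    return False

print(in_use("/home/user/path/to/dir/wv.gold"))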


Check if file opened bash [closed]

Hey, I’m trying to write a bash script and have a question: what is the best way to check that a specific file is not open and is done being written to? I need to do this inside an if statement. Pseudocode:

If abc.txt not opened
    Do this
Else
    Do this

Even if bash offered a way to do this, the information would be obsolete the moment the check is completed, as another process can open or close a file at any time.

In general, if the check returns false, meaning the file is not open, another process could, at that very moment, open the file and append more data. Thus, the result of the check, indicating that the file is not open, would no longer be correct. The information would be obsolete.

If your process is the only one touching the file, then you already know, from inside that process, whether the file is open. Do you mean inside your bash script, which runs some command that writes to the file?

The standard way to fix this is to have the writer write to a temporary file first and, when it has finished, rename the temporary file to the "real" file in one atomic step. The real file then only appears after all the data that is going to be written to it has been written.

But it would be better to work with the client to implement the method @chepner described. It’s fail-safe because if the writing process crashes, it never performs the rename at the end, and you’ll then simply ignore the file it was writing.

2 Answers

With the lsof command you can get the list of files that are currently open; pipe its output through grep for the file name and count the matching lines with wc -l.

If that count is not 0, the file is open. If you know the PID of the process that should be using that file, you can filter the results with the -p option.

The check shown falsely reports that abc.txt is open when it is not but, for example, qabc3.txt is. Also, there is no need for wc; grep -q suppresses the output and reports the result in its exit status.

lsof abc.txt should be sufficient to list the processes that have opened abc.txt, instead of grepping the list of all open files.

The best way to check whether a file is done being written is to have the writer signal that it is done. This signal can take many forms, but a common one is to only create the expected file name once all writing is complete. This is accomplished by writing to a temporary file and only renaming it after the write has completed successfully:

cp foo bar.tmp && mv bar.tmp bar
some_long_process > bar.tmp && mv bar.tmp bar

Now you, as the consumer, can be assured that if bar exists at all, it is complete and ready to be used.

# Polling only used as an example; operating-system-specific
# solutions that block until notification of the file's creation
# are vastly preferred
while [ ! -e bar ]; do
    echo "Bar doesn't exist yet."
    sleep 1
done
echo "Bar exists!"
do_something_with bar



Quick way to know if a file is open on Linux?

Is there a quick way (i.e. one that minimizes time-to-answer) to find out whether a file is open on Linux? Let’s say I have a process that writes a ton of files into a directory and another process that reads those files once they are finished being written. Can the latter process know whether a file is still being written to by the former process? A Python-based solution would be ideal, if possible. Note: I understand I could use a FIFO / queue based solution, but I am looking for something else.

Since you want to observe changes in the file system, inotify could be the answer. I have not tested it myself, but maybe this is related to your problem: ubuntuforums.org/showthread.php?t=663950

Nice question, lots of good different answers. With a little more information it may be easier to pick a front runner solution.

I’ve got tens of thousands of files generated at each stage by process #1, and I’d like a quick way to pick out the files that have been completed and hand those off to process #2.

Yes, I have full control of the environment. I have used the "tmp file & rename" option in the past, but I was looking for alternatives.

I would suggest that "tmpfile & rename" is the most resilient method. If process #1 crashes or runs out of space, you do not end up with any half-complete "result files". You could combine this method with inotify to avoid the need to poll the directory.

11 Answers

You can of course use the inotify feature of Linux, but it is safer to avoid the situation altogether: let the writing process create files under a name the reading process will definitely ignore (say data.tmp). When the writer finishes, it should just rename the file for the reader (into, say, .dat). The rename operation guarantees that there can be no misunderstanding.
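A minimal sketch of the writer side of that pattern in Python (file names are placeholders): write under a temporary name in the same directory, then publish the result with an atomic rename, so the reader either sees no file at all or sees a complete one.

import os

def publish(data, final_path):
    tmp_path = final_path + ".tmp"  # same directory, hence same filesystem
    with open(tmp_path, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the data is on disk before the rename
    os.replace(tmp_path, final_path)  # atomic: no reader ever sees a partial file

publish("some result\n", "/var/data/whatever/file1.dat")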

If you know the PID of the writing process, on Linux you can simply look at /proc/<PID>/fd/ and see whether one of the links found there points to one of your files.

What you would do is scan that directory, recording the fact that fd 5 (say) points to /var/data/whatever/file1.log, and store each file pointed to in an array.

At that point if a filename is in the array, the process has it in use.

import os

# Here I use PID = 31824
path = "/proc/%d/fd" % 31824
openfiles = [os.readlink("%s/%s" % (path, fname)) for fname in os.listdir(path)]

if whatever in openfiles:
    # whatever is used by pid 31824.
    pass

lsof | grep filename immediately comes to mind.

You have a variety of options available:

  • Inotify is a feature that allows you to watch for file operations
  • Writing process renames files when finished writing
  • The program fuser will let you query whether a file is in use
  • Knowing the PID of the writer may let you query /proc/PID/fd for open file descriptors.

If you can change the ‘first’ process logic, the easy solution would be to write data to a temp file and rename the file once all the data is written.


This is a solution using inotify. You will get a notification for every file in the directory that is closed after a write operation.

import os
import pyinotify

def Monitor(path):
    class PClose(pyinotify.ProcessEvent):
        def process_IN_CLOSE(self, event):
            f = event.name and os.path.join(event.path, event.name) or event.path
            print('close event: ' + f)

    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, PClose())
    wm.add_watch(path, pyinotify.IN_CLOSE_WRITE)

    try:
        while 1:
            notifier.process_events()
            if notifier.check_events():
                notifier.read_events()
    except KeyboardInterrupt:
        notifier.stop()
        return

if __name__ == '__main__':
    path = "."
    Monitor(path)

However, since you are the one in control of the process writing the files, I’d vote for a different solution involving some kind of communication between the processes.


Is there a faster way to check if a file is in use?

I’m looking for a command-line tool or C function that will let me know whether a file is open / in use by something. lsof and fuser do tell this, but they provide a lot of other info, which results in the check taking up to 300 ms in some situations (for example when I use this on Mac OS X; I’m developing for both Linux and OS X). I have a Windows solution that takes 5 ms, so I’m trying to find something on Unix that is also very quick and just returns true or false depending on whether the file is in use.

2 Answers

If you are using this as a lock, it will not work, as neither lsof nor fuser prevents race conditions.

What lsof basically does is trawl through /proc/*/fd for every process, looking for open file descriptors. This is going to take time no matter what you do.

You can do this yourself, but it is not likely to be any faster as you have to check for every open process on the system.

If what you are doing is time critical, figure out another way to do it.

  • If you control the file through a program that you wrote, use a lock file (a sketch follows this list).
  • If you are running some command that operates on the file, look at what documentation that command/program offers and see whether it can make a lock file. Failing that, see whether it can write a file with its PID inside it. Then you can look at /proc/<PID>/fd to see whether your file is currently open. Looking at only one process's open file descriptors will be much faster than mapping across all of them.
  • Otherwise in order to help you I am going to need more information about what you are doing.
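A minimal sketch of the lock-file idea from the first bullet (names are placeholders, and flock locks are advisory, so they only help if every cooperating process uses them):

import fcntl

def try_lock(lock_path):
    # Returns the open lock file on success, or None if another
    # cooperating process already holds the lock.
    f = open(lock_path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f  # keep this object alive; closing it releases the lock
    except BlockingIOError:
        f.close()
        return None

lock = try_lock("/tmp/myfile.lock")
if lock is None:
    print("file is in use by another process")
else:
    print("got the lock; safe to work on the file")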

You gave more information in a comment that you want to determine if Firefox is running on a given system. The best way to do this is to look for Firefox’s lock files. These are stored in default locations specified on the Mozilla wiki.

For example, on Linux, have your program do the following:

  • Open up the ~/.mozilla/firefox/ directory.
  • List all directories, filtering for directories ending in .default . (I think all profiles end with .default , if not just crawl into every directory.)
  • In each directory above, look for the existence of a file named lock or .parentlock . If you see one or both files, Firefox is open.

This algorithm ought to execute faster than what you do on Windows currently.
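A rough sketch of that check (profile locations as in the steps above; the exact names are an assumption and may differ between Firefox versions):

import glob
import os

def firefox_running(home=os.path.expanduser("~")):
    # A lock file inside any profile directory means Firefox has it open.
    for profile in glob.glob(os.path.join(home, ".mozilla", "firefox", "*.default")):
        for name in ("lock", ".parentlock"):
            if os.path.lexists(os.path.join(profile, name)):  # lock is often a symlink
                return True
    return False

print("Firefox is open" if firefox_running() else "Firefox is not open")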

