Device or resource busy in Linux

How to get over "device or resource busy"?

I tried to rm -rf a folder and got "device or resource busy". In Windows, I would have used LockHunter to resolve this. What's the Linux equivalent? (Please give as an answer a simple "unlock this" method, not complete articles like this one. Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)

Thanks, this was handy. I was coming from Linux to Windows and was looking for the equivalent of lsof, which is LockHunter.

What the hell? Unix does not prevent you from deleting open files like Windows does. This is why you can delete your whole system by running rm -rf /: it will happily delete every single file, including /bin/rm.

@psusi, that is incorrect. You either have a bad source of information or are just making stuff up. Linux, like Windows, has file and device locking. It’s kind of broken, though. 0pointer.de/blog/projects/locking.html

@foobarbecue, normally those are only advisory locks and the man page at least seems to indicate they are only for read/write, not unlink.

The solutions on this page don't work for me; I'm still not able to delete the file. But in my case I'm bothered by the size of the file, so I use this little trick: vim unwanted_file, then simply delete the content inside the file in edit mode. This way I release the disk space, but the file is still there.
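For reference, the same space-reclaiming trick can be done straight from the shell, without opening an editor (unwanted_file is just the placeholder name from the comment above):

: > unwanted_file            # redirection truncates the file to zero bytes but keeps it in place
truncate -s 0 unwanted_file  # equivalent, using the coreutils truncate command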

9 Answers

The tool you want is lsof , which stands for list open files.

It has a lot of options, so check the man page, but if you want to see all open files under a directory:
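For example, a minimal invocation (where /path is a placeholder for the directory you cannot remove) would be:

lsof +D /path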

That will recurse through the filesystem under /path , so beware doing it on large directory trees.

Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.

@marines: Check if another filesystem is mounted beneath /path . That is one cause of hidden "open files".

Running the lsof command directly against the path did not work for me. Basically I needed to cd into the path, then run lsof busy_file and kill all of the processes.

lsof seems to do nothing for me: lsof storage/logs/laravel.log returned nothing, and so did lsof +D storage/logs/ . umount responded with not mounted .

Just to elaborate on @camh's answer: use mount | grep <directory>. That shows whether some /dev/<device> is mounted on the directory you cannot delete. Then use sudo umount -lf /dev/<device> and try to remove the directory again. Works for me. Thanks @camh

Sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:
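A minimal sketch, assuming the stuck directory is itself a mount point; the path is a placeholder:

sudo umount /path/to/stuck/dir
sudo umount -l /path/to/stuck/dir   # lazy unmount: detach now, clean up once it is no longer busy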

Late to the party, but maybe useful for future checks: mount the dir rather than mounting the file, because mounting the file was what was causing me the issue.

I had this same issue and built a one-liner starting with @camh's recommendation:

lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs -r kill -9 
  • awk grabs the PIDs.
  • tail gets rid of the pesky first entry: the literal header "PID".
  • xargs executes kill -9 on the PIDs. The -r / --no-run-if-empty flag prevents the kill command from failing in case lsof did not return any PIDs.

@ChoyltonB.Higginbottom: as you asked for a safer way to prevent kill from failing (if lsof returns nothing), use xargs with -r / --no-run-if-empty. For non-GNU xargs, see this alternative: stackoverflow.com/a/19038748


kill -9 is a favorite but has serious implications. This signal is "non-catchable, non-ignorable" by the process, so the process may terminate without saving critical state data. Perhaps try a simple kill first, and only if that doesn't work, the -9? Finally, bear in mind that if the process is blocked on I/O, kill -9 isn't going to work. That's not an oversight in this suggestion, just something to keep in mind.
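A small sketch of that gentler escalation, assuming $pid holds a PID taken from lsof or fuser output:

kill "$pid"                            # polite SIGTERM first
sleep 5
if kill -0 "$pid" 2>/dev/null; then    # kill -0 only checks whether the process still exists
    kill -9 "$pid"                     # escalate to SIGKILL as a last resort
fi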

  1. Go into the directory and type ls -a
  2. You will find a hidden .xyz file
  3. vi .xyz and look at what the file contains
  4. ps -ef | grep username
  5. You will see the .xyz content in the 8th column (last row)
  6. kill -9 job_id, where job_id is the value in the 2nd column of the row whose 8th column matches the content that caused the error
  7. Now try to delete the folder or file (a condensed sketch of these steps follows below).
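A condensed, hedged sketch of those steps; the directory, the .nfs file name and the PID are placeholders for whatever you actually find:

cd /path/to/stubborn/dir
ls -a                                # look for hidden .nfs* (or other dot) files
cat .nfs000000000471494300000944     # inspect what the busy file contains
ps -ef | grep "$USER"                # find the process whose command line matches that content
kill 12345                           # replace 12345 with the PID from the 2nd column; add -9 only if needed
cd .. && rm -rf /path/to/stubborn/dir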

I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
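For reference, a few hedged fuser invocations (the paths are placeholders; -v, -m and -k are standard psmisc options):

fuser -v /path/to/busy/file    # show PID, user and access type for that file
fuser -vm /mnt/point           # -m: every process using any file on that mount
fuser -k /path/to/busy/file    # send SIGKILL to those processes; use with care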

fuser helps only in the specific case when you want to unmount a filesystem. Here the problem is to find what’s using a specific file.

Sorry, wrong objection: fuser doesn’t help here because the problem is to find all the open files in a directory tree. You can tell lsof to show all files and filter, or make it recurse; fuser has no such mode and needs to be invoked on every file.

@Giles: fuser works with lists. Try fuser /var/log/*; if any logs are open it will tell you which ones and who has them open. If a simple wildcard won't work, find, with or without xargs, will do the job.

lsof was not in my path while fuser was, allowing me to find the offending process ID to kill, so +1 and thanks.

I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz .

My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

This typically happens in directories where I am installing or compiling software libraries.

I also have the same problem with .nfsxxx files dropped seemingly in random places. However, I am not sure how this suggestion can make sense: obviously renaming the parent directory does not work, because its contents are locked; I wouldn't get the error in the first place otherwise. I tried it and simply nothing happens; the rename refuses to happen. Could you elaborate, or do you have any other suggestion?

Renaming the parent directory always worked for me. No clue why. This assumes your files are down a couple of directory levels and not at the volume root, of course. Sorry I don't have a better answer than "it just works for me".

Riffing off of Prabhat's question above: I had this issue on macOS High Sierra when I stranded an encfs process. Rebooting solved it, but this

ps -ef | grep name-of-busy-dir 

showed me the process and the PID (column two).

I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser, were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it; I kept getting "Device or resource busy"!

By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path


Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn’t get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder again and I could delete it.
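A hedged sketch of that fix, looping until the stacked mounts are gone; /mnt/ramdisk is a placeholder and mountpoint(1) comes from util-linux:

while mountpoint -q /mnt/ramdisk; do
    sudo umount /mnt/ramdisk || break   # stop if an unmount genuinely fails
done
rmdir /mnt/ramdisk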


NFS mount: Device or resource busy

I referred to the following link and the solution works: How to get over "device or resource busy"? The above solution works when you are manually deleting the file, but I have a Python script that deletes the files (an automated process). Sometimes I get a "Device or resource busy" error when the script tries to delete the files, and consequently my script fails. I don't know how to resolve this issue from my Python script. EDIT: The script downloads the log files from a log server. These files are then processed by my script. After the processing is done, the script deletes these log files. I don't think that there is anything wrong with the design. Exact error:

OSError: [Errno 16] Device or resource busy: '/home/johndoe/qwerty/.nfs000000000471494300000944' 

What do you want to do with the processes that have the file(s) open? In the manual solution you’ve linked, you can look at what they are and decide if you want to kill them. For a script, I’d recommend against indiscriminately killing processes, for hopefully obvious reasons. In other words, why do you need to resolve this in your script?

The script creates a bunch of files and later on deletes them. As the entire process is automated, I want to delete the files automatically. Moreover, this script is a scheduled job run by cron.

I'll take a bet that this problem could/should be solved at a higher level of abstraction. 🙂 Why make the files if you're just going to delete them? Could you perhaps use memory for that (i.e. data structures)? In a shell script (not Python), the answer would be "pipelines." Also relevant: unix.stackexchange.com/q/254296/135943

1 Answer

These files are NFS placeholders:

/home/johndoe/qwerty/.nfs000000000471494300000944 

Some background

In a typical UNIX filesystem, a file that is currently in use and open can be deleted but its contents will not actually disappear until the last filehandle to it is closed. You can see this in action with code like this:

$ ps -ef >/tmp/temporaryfile
$ ls -l /tmp/temporaryfile
-rw-r--r-- 1 roaima roaima 6758 Mar 2 14:02 /tmp/temporaryfile
$ ( sleep 60 ; cat ) </tmp/temporaryfile &

(Note that this is opposite to Microsoft Windows, where files cannot be deleted while they are still open.)
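A fuller sketch of the same demonstration; the file name and the sleeping cat are illustrative, not the answer's exact session:

ps -ef > /tmp/temporaryfile
( sleep 60 ; cat ) < /tmp/temporaryfile &   # background reader keeps a filehandle open
rm /tmp/temporaryfile                       # the unlink itself succeeds immediately
ls -l /tmp/temporaryfile                    # now reports "No such file or directory"
wait                                        # a minute later the reader still prints the old contents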

Explanation

A file on an NFS server may have one or more clients accessing it. NFS itself is (mostly) stateless and so needs to emulate the functionality that allows an open file to be accessed even after it's been deleted.


The emulation is handled by removing the file from its place in the filesystem but leaving it in place as a file whose name starts with .nfs . When the last reader/writer closes their filehandle to this file it will be properly removed from the filesystem.

Here's an example of this in action:

$ ps -ef > /var/autofs/net/nfsserver/tmp/temporaryfile
$ ls -l /var/autofs/net/nfsserver/tmp/temporaryfile
-rw-r--r-- 1 roaima roaima 6766 Mar 2 14:14 /var/autofs/net/nfsserver/tmp/temporaryfile
$ ( sleep 60 ; cat ) < /var/autofs/net/nfsserver/tmp/temporaryfile &

You should ignore files on an NFS mount whose names begin with .nfs . Furthermore, your code needs to cope with the possibility that a remote directory cannot be deleted until all these files have actually disappeared.
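A hedged sketch of how a cleanup script might cope with this, by retrying the removal a few times while the .nfs placeholders drain; the path and retry counts are illustrative, not part of the original answer:

dir=/home/johndoe/qwerty
for attempt in 1 2 3 4 5; do
    rm -rf "$dir" 2>/dev/null && break   # fails while .nfsXXXX files are still held open elsewhere
    sleep 10                             # give the remote clients time to close their filehandles
done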

NFS isn't quite as transparent to applications as one might hope.

It may be that the reason the log files are still open is that they are still being used by the logger process on your remote system. Generally the approach to this would be to cycle the log files and only download and delete the previous log files, leaving the current ones in the filesystem for use by the logger process.

Utilities such as logrotate handle this with specific configuration elements such as delaycompress that (attempt to) ensure a log file is not compressed while it's still in use. (See /etc/logrotate.d/apache2 on at least Debian systems for an example.)


cannot remove 'folder': Device or resource busy

I created a CentOS container with Docker. After starting the container, I tried to remove a directory that contains two other directories, and I get:

cannot remove 'folder': Device or resource busy 

5 Answers

Another pretty simple answer is the following:

1. Close all your terminal windows (bash, shell, etc.)

2. Start a new terminal

3. Execute your command again, e.g.:

Hopefully it helps others!

If you use Windows, it's com.docker.backend.exe; terminate it and everything will be OK.

If you know why the backend causes this problem, please tell me.

Thanks, this helped. The directory I was trying to remove was set up in the "volumes" section of one of my docker-compose files. The directory and the Docker installation were on WSL, and Docker for Windows was connecting to it for easier administration.

This happened to me until I closed the code editor VS Code. Somehow VS Code had the folder open, so it could not be removed until the editor was closed.

Maybe you have that folder open somewhere. Try lsof to find what has the folder open and then sudo kill it. Afterwards, I believe you can remove the folder.

From there you can get all processes whose names contain the word "docker".
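A hedged sketch of that lookup; "folder" stands for whatever directory rm is complaining about, and 12345 for the PID you actually find:

sudo lsof +D ./folder      # who still has files open under it?
ps aux | grep -i docker    # list docker-related processes
sudo kill 12345            # replace 12345 with the offending PID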
