Input/output error on a file in Linux

Cannot chroot due to «Input/output error» — file and disk seem OK

I am trying to repair a non-booting Arch installation after a computer crash during an update (worst of times). However, chroot fails. I tried this:

# mount /dev/sdb2 /mnt/arch
# cd /mnt/arch
# mount --bind /dev dev
# mount --bind /proc proc
# mount --bind /sys sys
# chroot . bin/bash
chroot: failed to run command ‘bin/bash’: Input/output error
Exit 126
# sha256sum bin/bash
3695f983ad6a1387826f769b96488f24e3635a1501fe89c96d3eadfa4e04caf7  bin/bash
# umount sys
# umount proc
# umount dev
# cd ..
# umount arch
# fsck -f /dev/sdb2

(No errors reported.) The disk is an SSD and not very old. dmesg does not show any relevant kernel error messages. PS / EDIT:

Both give the same error, as they address the same file. It exists (see the sha256sum command above). EDIT 2: @roaima pointed out that I should check the libraries:

# ldd bin/bash
    linux-vdso.so.1 (0x00007fff16563000)
    libreadline.so.8 => /lib64/libreadline.so.8 (0x00007f6483600000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007f64835f8000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f6483430000)
    libtinfo.so.6 => /lib64/libtinfo.so.6 (0x00007f64833f8000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f6483778000)
# ls -ld ./lib64/ld-linux-x86-64.so.2 ./lib64/ld-2.29.so ./usr/lib/libdl.so.2 ./usr/lib/libdl-2.29.so
-rwxr-xr-x 1 root root  0 Jun 23 10:33 ./lib64/ld-2.29.so
lrwxrwxrwx 1 root root 10 Jun 23 10:33 ./lib64/ld-linux-x86-64.so.2 -> ld-2.29.so
lrwxrwxrwx 1 root root 13 Jun 23 10:33 ./usr/lib/libdl.so.2 -> libdl-2.29.so
-rwxr-xr-x 1 root root  0 Jun 23 10:33 ./usr/lib/libdl-2.29.so

So the most basic system libraries are zero-sized, because the crash happened at the most inappropriate moment thinkable: in the middle of a glibc update.
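To confirm the damage and repair it from a live environment, something along these lines should work. This is a minimal sketch: the mount point is the one used above, while the exact pacman invocation is an assumption (depending on the pacman version, --sysroot /mnt/arch may be the better switch, and more packages than glibc may need reinstalling).

# List zero-sized libraries left behind by the interrupted update
find /mnt/arch/usr/lib -maxdepth 1 -type f -size 0

# Reinstall the damaged package into the broken root from the live ISO
pacman --root /mnt/arch --dbpath /mnt/arch/var/lib/pacman --cachedir /mnt/arch/var/cache/pacman/pkg -Sy glibc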


How to interpret and fix an Input/output error in Linux?

I am running a daily backup with rsync. Starting a few days ago, one of the files has been throwing this error during the backup:

rsync: read errors mapping "/home/folder/file.ext": Input/output error (5)
WARNING: /home/folder/file.ext failed verification -- update discarded (will try again).

What’s the best course of action? Is it just a broken file? Or is there something wrong with the hard drive in the location of the file? Should I just delete it and copy one of the backed up versions into the file’s location? Or is there something else/more that I should do?

4 Answers

 read errors mapping . Input/output error (5) 

indicates that rsync was unable to read or write a file. The most likely causes of this error are disk defects, either in the SRC or in the TGT directory. Other possibilities include insufficient permissions, files locked by anti-virus programs, and perhaps other causes.

The first step toward a diagnosis is to try to copy the files manually. This may work if, for instance, the source of the error was a disk defect in the TGT directory; by repeating the operation at a later time, you will write into a different section of the disk, and the problem may have evaporated.
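For example, a plain read test tells you quickly whether the SRC copy itself is readable. A minimal sketch, using the path from the question:

# Force a full read of the source file; an I/O error here points at the SRC disk
dd if=/home/folder/file.ext of=/dev/null bs=1M status=progress

# Or simply try to copy it somewhere else
cp /home/folder/file.ext /tmp/file.ext.test && echo "read OK"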

Alternatively, you may discover that you cannot access the file in the SRC directory. In this case I suggest that you employ any of the disk checking utilities available to your distro.
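For instance, SMART data and a filesystem check are common starting points. The device names below are placeholders, and fsck should only be run on an unmounted filesystem:

# Check the drive's own error counters (from the smartmontools package)
smartctl -a /dev/sda

# Check the filesystem itself, after unmounting it
fsck -f /dev/sda1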


Insufficient privileges and anti-virus locks are easier to diagnose.

Lastly, if you have a bad sector in your SRC directory, you may exclude it from future runs of rsync by means of

rsync -av --exclude='/home/my_name/directory_with_corrupt_files/*' 
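A complete invocation would also name a source and destination; the paths below are placeholders. Note that an exclude pattern starting with / is anchored at the root of the transfer (the source directory), not at the filesystem root:

rsync -av --exclude='/directory_with_corrupt_files/*' /home/my_name/ /backup/home/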

Thanks! At the risk of this being another question, how do I find out if it’s the SRC or the TGT directory if I can rule out privileges or anti-virus?

An anti-virus locks files only for some time, so if that is the problem, re-trying the same command later should not produce the same error. The matter of privileges is easy: use the root account on both the SRC and TGT machines. If you cannot do that, check that the files on which rsync fails are accessible to you, i.e. that they belong to the account trying to access them, and if they don't, that you have read access to them. If this solves your matter, please remember to accept my answer; it is useful to other readers.
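A quick ownership and readability check could look like this; the path is the one from the question and the account name is hypothetical:

# Who owns the file and what are its permissions?
stat -c '%U %G %A' /home/folder/file.ext

# Can the backup account actually read it?
sudo -u backup_user test -r /home/folder/file.ext && echo readable || echo not readable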

It seems that your answer gave birth to a whole article bobcares.com/blog/rsync-input-output-error_5 (without references, of course).

I had a similar issue: a fuse-mounted device connected via USB, which would frequently disconnect, causing I/O errors. My backup could never finish because the I/O errors would start mid-way into the rsync, and despite running rsync repeatedly, at some point the sync would not progress beyond updating existing files.

So I used the --ignore-existing option. This way I could run the sync in a loop until seeing a 0 exit status.

Of course, in this case I didn’t care about updates to existing files.
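A retry loop along those lines might look like the following sketch; the paths are placeholders and --ignore-existing is the assumed option:

# Keep retrying until rsync exits with status 0
until rsync -a --ignore-existing /mnt/usb-src/ /mnt/backup/; do
    echo "rsync failed, retrying in 30 seconds..." >&2
    sleep 30
done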

I have 2 external drives I keep in sync using rsync. I perform this task regularly on either of two machines, and frequently switch from one to the other for the sake of convenience. I have 4 machines running Debian 9, and use these drives on each of them.

This morning I used the following:

rsync -ahv --delete drive-x drive-y 

and was surprised to have a few hundred reported failures.

mostly: rsync: readlink_stat. failed: Input/output error (5)
also: rsync: rsync: recv_generator: mkdir . failed: Read-only file system (30)

In the process of finding out what happened, I remounted the drives twice, rebooted, ran rsync without --delete, and basically went through my normal attempts to fix something that has reliably worked for a long time. I even thought about reinstalling rsync. Before doing that, I decided to rsync the 2 drives on the other machine, which I run offline. rsync worked just the way it should.
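One plausible line of investigation for the "Read-only file system (30)" part is that the kernel remounted the target read-only after a filesystem error. A short diagnostic sketch, with placeholder device and mount point names:

# See whether the kernel flipped the filesystem to read-only after an error
dmesg | grep -iE 'remount.*read-only|i/o error'

# Check how the drive is currently mounted
findmnt /mnt/drive-y

# After checking the unmounted device, mount it read-write again
umount /mnt/drive-y && fsck -f /dev/sdc1 && mount /dev/sdc1 /mnt/drive-y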

Having read the material posted here, I installed clamav, updated the signatures, and scanned my home directory. I use this regularly on a different machine. I found 1 and only 1 PUA, and I deleted it. I always delete PUAs. I then remounted the two drives with this machine, and added different test files and folders to each drive.

I ran rsync -ahv --delete drive_x drive_y and everything worked fine.


How to redirect stderr to a file [duplicate]

@terdon This is a more specific question, and it rightly shows up in google search for the more specific question, which is a good thing.


@nroose yes, and it will keep showing up, that won’t change. But any new answers should go to the more general question.

2 Answers

There are two main output streams in Linux (and other OSs), standard output (stdout) and standard error (stderr). Error messages, like the ones you show, are printed to standard error. The classic redirection operator ( command > file ) only redirects standard output, so standard error is still shown on the terminal. To redirect stderr as well, you have a few choices:

    Redirect stdout to one file and stderr to another file:
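For instance (the command and file names are placeholders):

# stdout goes to out.log, stderr goes to err.log
command > out.log 2> err.log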

For more information on the various control and redirection operators, see here.

So:
hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ >> result_hashdeep.txt 2> error_hashdeep.txt &
or
hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ >> result_hashdeep.txt 2>&1
or
hashdeep -rXvvl -j 30 -k checksums.txt /mnt/app/ &> result_mixed.txt

@AndréM.Faria yes. But the last two commands are equivalent; they will send both error and output to the same file.

As in the link you provided, I could use |& instead of 2>&1; they are equivalent. Thanks for your time.

The first thing to note is that there are a couple of ways, depending on your purpose and shell, so this requires a slight understanding of multiple aspects. Additionally, certain commands such as time and strace write their output to stderr by default, and may or may not provide a method of redirection specific to that command.

The basic theory behind redirection is that a process spawned by the shell (assuming it is an external command and not a shell built-in) is created via the fork() and execve() syscalls, and before execve() happens another syscall, dup2(), performs the necessary redirections. In that sense, redirections are inherited from the parent shell. Forms such as m>&n and m> file.txt inform the shell how to perform the open() and dup2() syscalls (see also How input redirection works, What is the difference between redirection and pipe, and What does & exactly mean in output redirection).
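You can watch this happening with strace; the traced command below is an arbitrary example, and the exact syscall names can vary slightly between systems (for instance dup3 instead of dup2):

# Trace how the shell sets up 2>/dev/null before exec'ing ls
strace -f -e trace=openat,dup2,dup3,execve sh -c 'ls /etc/hostname 2>/dev/null' 2>&1 | grep -E 'openat.*null|dup2|dup3|execve'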

Shell redirections

Most typical is 2> in Bourne-like shells, such as dash (which is symlinked to /bin/sh) and bash; the first is the default, POSIX-compliant shell and the other is what most users use for interactive sessions. They differ in syntax and features, but luckily for us error stream redirection works the same in both (except for the non-standard &>). In the case of csh and its derivatives, stderr redirection doesn't quite work the same way.

Let's come back to the 2> part. Two key things to notice: > is the redirection operator, which opens a file, and the integer 2 stands for the stderr file descriptor; in fact this is exactly how the POSIX standard for the shell language defines redirection in section 2.7, as [n]redir-op word.

For simple > redirection, the integer 1 is implied for stdout, i.e. echo Hello World > /dev/null is just the same as echo Hello World 1>/dev/null. Note that the integer and the redirection operator cannot be quoted, otherwise the shell doesn't recognize them as such and instead treats them as a literal string of text. As for spacing, it is important that the integer is right next to the redirection operator, but the file can either be next to the redirection operator or not, i.e. command 2>/dev/null and command 2> /dev/null will work just fine.
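A couple of throwaway examples to illustrate the quoting and spacing rules (file names are placeholders):

# Quoted operator: '2>' is treated as an ordinary argument, nothing is redirected
echo hello '2>' /dev/null        # prints: hello 2> /dev/null

# Space between the integer and the operator: "2" becomes an argument and stdout is redirected
ls -l 2 > out.txt                # tries to list a file named "2"; errors still reach the terminal

# Correct: stderr goes to the file
ls -l nonexistent 2> err.txt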


The somewhat simplified syntax for a typical command in the shell would be

 command [arg1] [arg2] 2> /dev/null 

The trick here is that the redirection can appear anywhere: both 2>/dev/null command [arg1] and command [arg1] 2>/dev/null are valid. Note that for the bash shell there exists an &> way to redirect both the stdout and stderr streams at the same time, but again, it is bash-specific, and if you are striving for portability of scripts it may not work. See also Ubuntu Wiki and What is the difference between &> and 2>&1.
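For instance, these three invocations behave identically in a POSIX shell (the file name is a placeholder; du on /root is just a convenient way to generate permission errors):

2> err.txt du /root /tmp
du 2> err.txt /root /tmp
du /root /tmp 2> err.txt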

Note: the > redirection operator truncates a file and overwrites it if the file exists. 2>> may be used for appending stderr to the file.

As you may have noticed, > is meant for one single command. For scripts, we can redirect the stderr stream of the whole script from the outside, as in myscript.sh 2> /dev/null, or we can make use of the exec built-in. The exec built-in has the power to rewire the stream for the whole shell session, so to speak, whether interactively or via a script. Something like

#!/bin/sh
exec 2> ./my_log_file.txt
stat /etc/non_existing_file

In this example, the log file should show stat: cannot stat ‘/etc/non_existing_file’: No such file or directory.

Yet another way is via functions. As kopciuszek noted in his answer, we can write a function declaration with the redirection already attached, that is

some_function() {
    command1
    command2
} 2> my_log_file.txt
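Every time the function runs, stderr from its whole body lands in the log file. A quick check, with hypothetical command and file names:

print_sizes() {
    du -sh /root /tmp
} 2> du_errors.txt

print_sizes          # permission errors from /root end up in du_errors.txt
cat du_errors.txt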

Commands writing to stderr exclusively

Commands such as time and strace write their output to stderr by default. In the case of the time command, the only viable alternative is to redirect the output of the whole command, that is

time echo foo 2>&1 > file.txt 

Alternatively, a synchronous list or a subshell could be redirected if you want to separate the output (as shown in a related post):
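For instance, something along these lines keeps the timing separate from the command's own output (the file name is a placeholder):

# The group's stderr (where the shell writes the timing) goes to the file,
# while echo's stdout still reaches the terminal
{ time echo foo; } 2> timing.txt

# The same idea with a subshell
( time echo foo ) 2> timing.txt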

Other commands, such as strace or dialog, provide means to redirect stderr. strace has the -o option, which allows specifying the filename the output should be written to. There is also an option for writing a text file for each subprocess that strace sees. The dialog command writes the text user interface to stdout but its output to stderr, so in order to save its output to a variable (because var=$(...) and pipelines only receive stdout) we need to swap the file descriptors

result=$(dialog --inputbox test 0 0 2>&1 1>/dev/tty); 

but additionally, there is the --output-fd flag, which we can also utilize. There is also the method of named pipes. I recommend reading the linked post about the dialog command for a thorough description of what's happening.
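As a sketch of the --output-fd route, sending the result to a spare descriptor that points at a temporary file (the path and widget text are arbitrary):

# Tell dialog to write the user's answer to fd 3, which points at a temp file
dialog --output-fd 3 --inputbox "test" 0 0 3> /tmp/dialog_result.txt
result=$(cat /tmp/dialog_result.txt)
echo "You typed: $result"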

