Linux segmentation fault dump

Segmentation fault (core dumped): dumped to where? What is it, and why?

Usually you only need gdb path-to-your-binary path-to-corefile, then info stack, followed by Ctrl-d. The only worrying thing is that core dumps seem to be a routine occurrence for you.

Not so much routine as occasional; most of the time it's a typo, or something I changed without anticipating the consequences.

4 Answers

If other people clean up…

…you usually don't find anything. But luckily Linux has a handler for this which you can specify at runtime. In /usr/src/linux/Documentation/sysctl/kernel.txt you will find:

core_pattern is used to specify a core dumpfile pattern name.

  • If the first character of the pattern is a ‘|’, the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file.

According to the source, this is handled by the abrt program (that's the Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd. You may want to write your own handler or use the current directory instead.
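A quick way to see where dumps currently go, and to switch back to plain files in the current directory, is sketched below; the commands are standard, but whether coredumpctl is available depends on your distribution:

  cat /proc/sys/kernel/core_pattern          # a leading '|' means dumps are piped to a program
  coredumpctl list                           # on systemd systems: list recent crashes
  coredumpctl gdb                            # open the most recent dump directly in gdb
  sudo sysctl -w kernel.core_pattern=core    # or (as root) write plain 'core' files in the process's cwd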

But what’s in there?

Now, what it contains is system-specific, but according to the all-knowing encyclopedia:

[A core dump] consists of the recorded state of the working memory of a computer program at a specific time […]. In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and operating system flags and information.

…so it basically contains everything that gdb needs (in addition to the executable that caused the fault) to analyze the fault.
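You can get a rough look at a dump even without starting gdb; the file name here is hypothetical:

  file core.12345        # identifies it as an ELF core file and names the program that crashed
  readelf -n core.12345  # lists the ELF notes: register state, signal info, mapped files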

Yeah, but I’d like me to be happy instead of gdb

You can both be happy, since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump. You should then be able to analyze the specific failure instead of trying and failing to reproduce bugs.
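A minimal session might look like this, assuming ./myprog is the exact binary that crashed and core.12345 is its dump (both names are placeholders):

  gdb ./myprog core.12345
  (gdb) bt              # backtrace of the crashing thread (info stack does the same)
  (gdb) frame 2         # select a frame from the backtrace
  (gdb) info locals     # inspect local variables in that frame
  (gdb) quit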

Thanks for this. I’m long used to ulimit -c for controlling the production of core files, but some system builder seems to think that /proc/sys/kernel/core_pattern = |/bin/false was a good idea. Pffftt!

Also, if ulimit -c returns 0 , then no core dump file will be written.

You can also trigger a core dump manually with Ctrl+\, which sends SIGQUIT to the foreground process and makes it quit with a core dump.

If ulimit -c returns 0, you can enable core dumps for that terminal by calling ulimit -c unlimited, as your source says. This sets the maximum allowed core file size to unlimited.
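For example, in the shell that will launch the program (myprog is a placeholder):

  ulimit -c               # prints 0 if core dumps are currently disabled
  ulimit -c unlimited     # lift the limit for this shell and its children
  ./myprog                # a crash from here on should leave a core file behind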

The core file is normally called core and is located in the current working directory of the process. However, there is a long list of reasons why a core file might not be generated, and it may end up somewhere else entirely, under a different name. See the core(5) man page for details:

DESCRIPTION

The default action of certain signals is to cause a process to terminate and produce a core dump file, a disk file containing an image of the process’s memory at the time of termination. This image can be used in a debugger (e.g., gdb(1)) to inspect the state of the program at the time that it terminated. A list of the signals which cause a process to dump core can be found in signal(7).

[…]

There are various circumstances in which a core dump file is not produced:

 * The process does not have permission to write the core file. (By default, the core file is called core or core.pid, where pid is the ID of the process that dumped core, and is created in the current working directory. See below for details on naming.) Writing the core file will fail if the directory in which it is to be created is nonwritable, or if a file with the same name exists and is not writable or is not a regular file (e.g., it is a directory or a symbolic link).

 * A (writable, regular) file with the same name as would be used for the core dump already exists, but there is more than one hard link to that file.

 * The filesystem where the core dump file would be created is full; or has run out of inodes; or is mounted read-only; or the user has reached their quota for the filesystem.

 * The directory in which the core dump file is to be created does not exist.

 * The RLIMIT_CORE (core file size) or RLIMIT_FSIZE (file size) resource limits for the process are set to zero; see getrlimit(2) and the documentation of the shell's ulimit command (limit in csh(1)).

 * The binary being executed by the process does not have read permission enabled.

 * The process is executing a set-user-ID (set-group-ID) program that is owned by a user (group) other than the real user (group) ID of the process, or the process is executing a program that has file capabilities (see capabilities(7)). (However, see the description of the prctl(2) PR_SET_DUMPABLE operation, and the description of the /proc/sys/fs/suid_dumpable file in proc(5).)

 * (Since Linux 3.7) The kernel was configured without the CONFIG_COREDUMP option.

In addition, a core dump may exclude part of the address space of the process if the madvise(2) MADV_DONTDUMP flag was employed.
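A rough checklist for the most common of these conditions, run from the directory where the crash happens:

  ulimit -c                           # must be non-zero (RLIMIT_CORE)
  cat /proc/sys/kernel/core_pattern   # where, or to which program, the dump goes
  ls -ld .                            # is the target directory writable by the crashing process?
  df -h . ; df -i .                   # free space and free inodes on that filesystem
  cat /proc/sys/fs/suid_dumpable      # 0 disables dumps for set-uid and otherwise protected programs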

Naming of core dump files

By default, a core dump file is named core, but the /proc/sys/kernel/core_pattern file (since Linux 2.6 and 2.4.21) can be set to define a template that is used to name core dump files. The template can contain % specifiers which are substituted by the following values when a core file is created:

 %%  a single % character
 %c  core file size soft resource limit of crashing process (since Linux 2.6.24)
 %d  dump mode—same as value returned by prctl(2) PR_GET_DUMPABLE (since Linux 3.7)
 %e  executable filename (without path prefix)
 %E  pathname of executable, with slashes ('/') replaced by exclamation marks ('!') (since Linux 3.0)
 %g  (numeric) real GID of dumped process
 %h  hostname (same as nodename returned by uname(2))
 %i  TID of thread that triggered core dump, as seen in the PID namespace in which the thread resides (since Linux 3.18)
 %I  TID of thread that triggered core dump, as seen in the initial PID namespace (since Linux 3.18)
 %p  PID of dumped process, as seen in the PID namespace in which the process resides
 %P  PID of dumped process, as seen in the initial PID namespace (since Linux 3.12)
 %s  number of signal causing dump
 %t  time of dump, expressed as seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC)
 %u  (numeric) real UID of dumped process
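For example, to name dumps after the executable, PID and time of the crash, something like the following works (run as root; the directory is only an illustration):

  mkdir -p /var/crash
  sysctl -w kernel.core_pattern='/var/crash/core.%e.%p.%t'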

Empty core dump file after Segmentation fault

I am running a program and it is interrupted by a segmentation fault. The problem is that the core dump file is created, but has size zero. Have you heard of such a case, and how can it be resolved? I have enough space on the disk. I have already run ulimit -c unlimited to remove the limit on core file size, both in the interactive shell and at the top of the submitted batch file, but I still get 0-byte core dump files. The permissions of the folder containing these files are uog+rw and the permissions on the core files created are u+rw only. The program is written in C++ and submitted to a Linux cluster with the qsub command of Grid Engine; I don't know whether this information is relevant to this question.

Next questions: What are the permissions on the containing directory? Is the process running under an effective user ID that's different from the directory owner's?

You said you're using Grid Engine. Is it correct that there are multiple nodes in the cluster? It's easy for multiple nodes to share a single file system, but if they don't also share a user account system, a job running on another node may not run under your own user ID, and thus appears to the file system as an "other" user.

4 Answers

Setting ulimit -c unlimited turned on generation of dumps. By default, core dumps were generated in the current directory, which was on NFS. Setting /proc/sys/kernel/core_pattern to /tmp/core solved the problem of empty dumps for me.
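A sketch of that workaround, including making it survive a reboot (the file name under /etc/sysctl.d is arbitrary):

  sudo sysctl -w kernel.core_pattern=/tmp/core                                          # dump to local /tmp instead of the NFS cwd
  echo 'kernel.core_pattern=/tmp/core' | sudo tee /etc/sysctl.d/60-core-pattern.conf    # persist across reboots

Appending %p to the pattern (e.g. /tmp/core.%p) keeps dumps from different processes from overwriting each other.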

Читайте также:  Linux mint universal usb

The comment from Ranjith Ruban helped me to develop this workaround.

What is the filesystem that you are using for dumping the core?

I just had this problem on a Linux VirtualBox image with a vboxsf filesystem that mapped to an NTFS drive (the drive of the host machine).

Modifying core_pattern as the root user works miracles! The NFS drive path made core files zero bytes: stackoverflow.com/a/12760552/999943. Besides setting the path where the core file gets created, there is some nifty syntax for changing how it gets named, too: linuxhowtos.org/Tips%20and%20Tricks/coredump.htm

It sounds like you’re using a batch scheduler to launch your executable. Maybe the shell that Torque/PBS is using to spawn your job inherits a different ulimit value? Maybe the scheduler’s default config is not to preserve core dumps?

Can you run your program directly from the command line instead?

Or, if you add ulimit -c unlimited and/or ulimit -s unlimited to the top of your PBS batch script before invoking your executable, you might be able to override PBS's default ulimit behavior. Even just adding ulimit -c would report what the limit actually is.
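For instance, at the top of the batch script (the directive and program name are placeholders):

  #!/bin/bash
  #PBS -N myjob
  ulimit -c               # report the limit the job actually inherited
  ulimit -c unlimited     # then lift it before running the program
  ulimit -s unlimited
  ./myprog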

I put both ulimit -c unlimited and ulimit -s unlimited in the PBS batch script, but the core dumps are still empty!

If your core file would land on a mounted (network) drive: the core file can't be written to a mounted drive, it must be written to a local drive.

You can copy the file over from the local drive afterwards.

You can set resource limits, such as the physical memory required, using a qsub option such as -l h_vmem=6G to reserve 6 GB of physical memory.

For file blocks you can set h_fsize to an appropriate value as well.

See the RESOURCE LIMITS section of the qconf manpage:

 s_cpu    The per-process CPU time limit in seconds.
 s_core   The per-process maximum core file size in bytes.
 s_data   The per-process maximum memory limit in bytes.
 s_vmem   The same as s_data (if both are set the minimum is used).
 h_cpu    The per-job CPU time limit in seconds.
 h_data   The per-job maximum memory limit in bytes.
 h_vmem   The same as h_data (if both are set the minimum is used).
 h_fsize  The total number of disk blocks that this job can create.
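For example, requesting both memory and file-size headroom when submitting (the values and script name are illustrative):

  qsub -l h_vmem=6G -l h_fsize=10G myjob.sh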

Also, if the cluster uses a TMPDIR local to each node, and that is filling up, you can set TMPDIR to an alternate location with more capacity, e.g. an NFS share:
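(The path is only an example; point it at any shared location with enough capacity.)

  export TMPDIR=/nfs/scratch/$USER/tmp
  mkdir -p "$TMPDIR"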

Then launch qsub with the -V option to export the current environment to the job.

One or a combination of the above may help you solve your problem.
