[linux] Core dump file is not generated

Every time my application crashes, a core dump file is not generated. I remember that a few days ago, on another server, it was generated. I'm running the app using screen in bash like this:

#!/bin/bash
ulimit -c unlimited
while true; do ./server; done

As you can see, I'm using ulimit -c unlimited, which is important if I want to generate a core dump, but it still doesn't generate one when I get a segmentation fault. How can I make it work?

This question is related to linux gdb coredump

The answer is


Note: If you have written a crash handler yourself, then the core might not get generated. So search the code for something along the lines of:

signal(SIGSEGV, <handler> );

so SIGSEGV will be handled by your handler and you will not get the core dump.
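A quick way to check for this is to search the source tree for any mention of the signal (the src/ directory below is just a placeholder for your project path):

# Look for code that installs a handler for SIGSEGV and may swallow the crash.
grep -rn "SIGSEGV" src/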


Here is a good checklist of reasons why core dumps are not generated (a quick triage sketch follows the list):

  • The core would have been larger than the current limit.
  • You don't have the necessary permissions to dump core (directory and file). Notice that core dumps are placed in the dumping process's current working directory, which could be different from that of the parent process.
  • Verify that the file system is writable and has sufficient free space.
  • If a subdirectory named core exists in the working directory, no core will be dumped.
  • If a file named core already exists but has multiple hard links, the kernel will not dump core.
  • Verify the permissions on the executable: if the executable has the setuid or setgid bit enabled, core dumps will be disabled by default. The same is the case if you have execute permission but no read permission on the file.
  • Verify that the process has not changed its working directory, core size limit, or dumpable flag.
  • Some kernel versions cannot dump processes with a shared address space (i.e. threads). Newer kernel versions can dump such processes, but will append the PID to the file name.
  • The executable could be in a non-standard format not supporting core dumps. Each executable format must implement a core dump routine.
  • The segmentation fault could actually be a kernel Oops, check the system logs for any Oops messages.
  • The application called exit() instead of using the core dump handler.
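Several of the items above can be checked quickly from the shell. This is only a triage sketch; ./server stands for the binary from the question, and the checks assume the process dumps into its current directory:

ulimit -c                             # core size limit in this shell; 0 means no core will be written
cat /proc/sys/kernel/core_pattern     # where and under what name cores go (a leading | means a pipe handler)
ls -ld .                              # is the crashing process's working directory writable?
ls -l ./server                        # setuid/setgid bits on the binary disable dumps by default
df -h .                               # is there enough free space for the dump?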

On CentOS, if you are not running as the root account and the core file is not generated, set the core limit for your account in /etc/security/limits.conf (replace account below with the actual username), or log in as root:

vim /etc/security/limits.conf

account soft core unlimited
account hard core unlimited

Then, if you are in a login shell (via SecureCRT or another terminal client):

log out and then log in again
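After logging back in, you can verify that the new limits are actually in effect for the account (a minimal check; it assumes the username you log in with matches the one used in limits.conf):

ulimit -c     # soft core limit, should now print "unlimited"
ulimit -Hc    # hard core limit, should also print "unlimited"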


For the record, on Debian 9 Stretch (systemd), I had to install the package systemd-coredump. Afterwards, core dumps were generated in the folder /var/lib/systemd/coredump.

Furthermore, these coredumps are compressed in the lz4 format. To decompress, you can use the package liblz4-tool like this: lz4 -d FILE.

To be able to debug the decompressed core dump using gdb, I also had to rename the extremely long filename to something shorter...
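For illustration, the steps looked roughly like this; the core file name below is made up, yours will differ:

cd /var/lib/systemd/coredump
lz4 -d core.server.1000.xxxxxxxx.lz4 /tmp/core.server   # decompress to a short name (file name is hypothetical)
gdb ./server /tmp/core.server                           # debug with the matching executable

On systems where the coredumpctl tool is installed alongside systemd-coredump, coredumpctl gdb may be a more convenient way to achieve the same thing.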


If one is on a Linux distro (e.g. CentOS, Debian) then perhaps the most accessible way to find out about core files and related conditions is in the man page. Just run the following command from a terminal:

man 5 core

The answers given here cover most scenarios in which a core dump is not created. In my case, however, none of them applied, so I'm posting this as an addition to the other answers.

If your core file is not being created for whatever reason, I recommend looking at /var/log/messages. There might be a hint in there as to why the core file was not created. In my case there was a line stating the root cause:

Executable '/path/to/executable' doesn't belong to any package

To work around this issue, edit /etc/abrt/abrt-action-save-package-data.conf and change ProcessUnpackaged from 'no' to 'yes':

ProcessUnpackaged = yes

This setting specifies whether to create core dumps for binaries that were not installed with a package manager.
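A non-interactive way to make the same change, sketched under the assumption that the abrt daemon on your system runs as the abrtd service:

# Flip ProcessUnpackaged from no to yes, then restart the abrt daemon so it picks up the change.
sudo sed -i 's/^ProcessUnpackaged = no$/ProcessUnpackaged = yes/' /etc/abrt/abrt-action-save-package-data.conf
sudo systemctl restart abrtd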


Also, check that you have enough disk space on /var/core or wherever your core dumps get written. If the partition is almost full or at 100% disk usage, that would be the problem. My core dumps average a few gigabytes, so you should be sure to have at least 5-10 GB available on the partition.
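A quick way to check, assuming your dumps go to /var/core (substitute whatever directory your kernel.core_pattern points at):

df -h /var/core     # free space on the partition that receives the dumps
du -sh /var/core    # space already taken up by old dumps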


Remember that if you are starting the server from a service, it will run in a different session, so a ulimit set in your shell won't be effective there. Try to put this in the script itself:

ulimit -c unlimited
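A minimal sketch of such a start script; for a systemd service, the rough equivalent is the LimitCORE= directive in the unit file (the unit contents and paths here are assumptions):

#!/bin/bash
# Raise the core limit inside the script that the service actually runs,
# so it applies to the server process itself.
ulimit -c unlimited
exec ./server

# For a systemd unit, the equivalent would be:
#   [Service]
#   LimitCORE=infinity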

Just in case someone else stumbles on this: I was running someone else's code. Make sure it is not handling the signal in order to exit gracefully. I commented out the handling and got the core dump.


Check:

$ sysctl kernel.core_pattern

to see how your dumps are created (%e will be the process name, and %t will be the system time).
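If the current pattern is not what you expect (for example, it pipes to a handler you don't want), you can point it somewhere simple. The /tmp path below is only an example, and the change is not persistent across reboots unless you also add it to /etc/sysctl.conf:

# Write cores as /tmp/core.<executable>.<pid>.<timestamp>
sudo sysctl -w kernel.core_pattern=/tmp/core.%e.%p.%t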

If you're on Ubuntu, your dumps are created by apport in /var/crash, but in a different format (open the file to see it).

You can test it by:

sleep 10 &
killall -SIGSEGV sleep

If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.
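To confirm where the dump actually landed (the locations depend on your core_pattern; these are just the two common cases):

ls -l core*             # plain core_pattern: the core appears in the current directory
ls -l /var/crash/       # Ubuntu/apport: a crash report appears here instead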

Read more:

How to generate core dump file in Ubuntu


Ubuntu

Please read more at:

https://wiki.ubuntu.com/Apport


If you call daemon() to daemonize a process, by default the current working directory will change to /. So if your program is a daemon, you should be looking for the core in the / directory, not in the directory of the binary.


Although this isn't going to be a problem for the person who asked the question, because they ran the program that was to produce the core file in a script together with the ulimit command, I'd like to document that the ulimit command is specific to the shell in which you run it (like environment variables). I spent way too much time running ulimit and sysctl and the like in one shell, running the command that I wanted to dump core in another shell, and wondering why the core file was not produced.

I will be adding it to my bashrc. The sysctl setting works for all processes once it is issued, but ulimit only applies to the shell in which it is issued (and its descendants), not to other shells that happen to be running.
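A minimal way to see the distinction, assuming ./server is the crashing program from the question:

# Terminal A:
ulimit -c unlimited
ulimit -c       # prints "unlimited" -- but only for this shell and its children
./server        # run the crashing program from this same shell

# Terminal B (an already-running, separate shell):
ulimit -c       # may still print 0; the setting from terminal A does not apply here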

