Why does my compiler not accept fork(), despite my inclusion of <unistd.h>?
@NiklasB.: Yes, that’s true. But it’s unrelated to the point that -pedantic isn’t making any difference here. (And this warning is turned on even when -pedantic is not, anyway: all functions should be prototyped, even in C89, and prototypes are required in C99 and C++.)
@user1166935: What operating system are you using? Is it possible you have a bad unistd.h file for some reason?
4 Answers
unistd.h and fork are part of the POSIX standard. They aren’t available on Windows (the text.exe in your gcc command hints that you’re not on *nix).
It looks like you’re using gcc as part of MinGW, which does provide the unistd.h header but does not implement functions like fork. Cygwin does provide implementations of functions like fork.
However, since this is homework you should already have instructions on how to obtain a working environment.
MinGW does not implement fork(); you could perhaps try Cygwin (or a real POSIX system) if you really need fork(). Or you could try a similar Windows function.
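If you do stay on Windows, a rough sketch of the native route is CreateProcess, which starts a separate program rather than cloning the calling process as fork() does (notepad.exe here is just a placeholder command):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { 0 };
    PROCESS_INFORMATION pi = { 0 };
    char cmdline[] = "notepad.exe";   /* CreateProcess may modify this buffer */

    si.cb = sizeof(si);

    /* Unlike fork(), this launches a fresh program instead of duplicating the caller. */
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}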
"should already have instructions on how to obtain a working environment" - if only universities were that good.
You have got #include <unistd.h>, which is where fork() is declared.
So, you probably need to tell the system to show the POSIX definitions before you include the system headers:
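For example, a minimal sketch (pick the macro value that fits your system, as described below):

/* Ask for the X/Open (and hence POSIX) declarations before any system header. */
#define _XOPEN_SOURCE 600

#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    return (pid < 0) ? 1 : 0;
}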
You can use 700 if you think your system is mostly POSIX 2008 compliant, or even 500 for an older system. Because fork() has been around forever, it will show up with any of those.
If you are compiling with -std=c99 -pedantic, then all the declarations for POSIX will be hidden unless you explicitly request them as shown.
You can also play with _POSIX_C_SOURCE , but using _XOPEN_SOURCE implies the correct corresponding _POSIX_C_SOURCE (and _POSIX_SOURCE , and so on).
With gcc and glibc, the feature test macro isn’t actually required for fork; including unistd.h is enough. The standard does state that applications should define _POSIX_C_SOURCE before including headers, though.
As you’ve already noted, fork() should be defined in unistd.h — at least according to the man pages that come with Ubuntu 11.10. The minimal:
#include <unistd.h>

int main( int argc, char* argv[])
{
    pid_t pid = fork();
    return (pid < 0) ? 1 : 0;
}

builds with no warnings on 11.10.
Speaking of which, what UNIX/Linux distribution are you using? For instance, I’ve found several unremarkable functions that should be declared in Ubuntu 11.10’s headers but aren’t. Such as:
// string.h
char* strtok_r( char* str, const char* delim, char** saveptr);
char* strdup( const char* const qString);
// stdio.h
int fileno( FILE* stream);
// time.h
int nanosleep( const struct timespec* req, struct timespec* rem);
// unistd.h
int getopt( int argc, char* const argv[], const char* optstring);
extern int opterr;
int usleep( unsigned int usec);
As long as they’re defined in your C library it won’t be a huge problem. Just define your own prototypes in a compatibility header and report the standard header problems to whoever maintains your OS distribution.
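A minimal sketch of such a compatibility header (the guard name is arbitrary; only copy the declarations your headers are actually missing and your libc actually provides):

/* compat.h - workaround prototypes for functions the system headers forget to expose */
#ifndef COMPAT_H
#define COMPAT_H

#include <stdio.h>

/* Missing from string.h on the affected system: */
char* strtok_r(char* str, const char* delim, char** saveptr);

/* Missing from stdio.h: */
int fileno(FILE* stream);

/* Missing from unistd.h: */
int usleep(unsigned int usec);

#endif /* COMPAT_H */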
bash fork error (Resource temporarily unavailable) does not stop, and keeps showing up every time I try to kill/reboot
I mistakenly used a limited server (the limit is 1024 processes) as an iperf server for 5000 parallel connections. Now every time I log in, I see this:
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
-bash-4.1$ ps
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
Same happens when I do a killall or similar things. I have even tried to reboot the system but again this is what I get after reboot:
-bash-4.1$ sudo reboot
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
-bash-4.1$
So basically I cannot do anything; all commands get this error :/ I can, however, do "exit". This is an off-site server that I do not have physical access to, so I cannot turn it off/on physically. Any ideas how I can fix this problem? I highly appreciate any help.
fork() failing with Out of memory error
The parent process fails with errno=12 (Out of memory) when it tries to fork a child. The parent process runs on a Linux 3.0 kernel (SLES 11). At the point of forking the child, the parent process has already used up around 70% of the RAM (180 GB of 256 GB). Is there any workaround for this problem? The application is written in C++ and compiled with g++ 4.6.3.
The question you should really have asked is how to print a backtrace, not how to make fork work; you didn’t say (except in recent comments) that you are forking to run gdb and then its bt command!
3 Answers
Maybe virtual memory overcommit is disabled on your system.
If it is disabled, committed virtual memory cannot exceed the size of physical RAM plus swap. If it is allowed, virtual memory can be bigger than RAM + swap.
When your process forks, your processes (parent and child) would have 2*180GB of virtual memory committed (that is too much if you don’t have swap).
So, allow overcommit like this:
echo 1 > /proc/sys/vm/overcommit_memory
It should help, provided the child process execves immediately or frees its allocated memory before the parent writes too much to its own memory. So be careful: the out-of-memory killer may act if both processes keep using all the memory.
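A minimal sketch of that fork-then-exec-immediately pattern (the echo command is just a stand-in for whatever the child is meant to run):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");          /* EAGAIN/ENOMEM would show up here */
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: exec immediately so the copied address space is replaced
           before any of it is written to (and thus actually allocated). */
        execlp("echo", "echo", "hello from child", (char *)NULL);
        _exit(127);              /* only reached if exec failed */
    }
    /* Parent: wait for the child to finish. */
    int status;
    waitpid(pid, &status, 0);
    return EXIT_SUCCESS;
}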
/proc/sys/vm/overcommit_memory
This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit
In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the default check is very weak, leading to the risk of getting a process «OOM-killed». Under Linux 2.4 any nonzero value implies mode 1. In mode 2 (available since Linux 2.6), the total virtual address space on the system is limited to (SS + RAM*(r/100)), where SS is the size of the swap space, and RAM is the size of the physical memory, and r is the contents of the file /proc/sys/vm/overcommit_ratio.
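For a concrete reading of that formula: on a machine with 256 GB of RAM, 64 GB of swap (a hypothetical figure), and the default overcommit_ratio of 50, mode 2 caps total committed address space at 64 GB + 256 GB * (50/100) = 192 GB. A 180 GB parent calling fork() would commit roughly another 180 GB, well over that cap, so the fork is refused even though copy-on-write means most of those pages would never actually be duplicated.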
What are some conditions that may cause fork() or system() calls to fail on Linux?
To clarify: when one knows that an error such as EAGAIN has occurred during fork() (errno == EAGAIN), how do you find out what specifically caused it? (Was it RLIMIT_NPROC? Was it an error copying page tables, or the task structure, and if so why? And how do you avoid it?)
I also asked a different, but related question about page tables in Linux: stackoverflow.com/questions/853736/…
2 Answers
And how can one find out whether any of them are occurring?
Check the errno value if the result (return value) is -1
From the man page on Linux:
RETURN VALUE
On success, the PID of the child process is returned in the parent, and 0 is returned in the child. On failure, -1 is returned in the parent, no child process is created, and errno is set appropriately.
ERRORS
EAGAIN
fork() cannot allocate sufficient memory to copy the parent’s page tables and allocate a task structure for the child.
EAGAIN
It was not possible to create a new process because the caller’s RLIMIT_NPROC resource limit was encountered. To exceed this limit, the process must have either the CAP_SYS_ADMIN or the CAP_SYS_RESOURCE capability.
ENOMEM
fork() failed to allocate the necessary kernel structures because memory is tight.
CONFORMING TO
SVr4, 4.3BSD, POSIX.1-2001.
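A small sketch of acting on that errno information after a failed fork() (the messages are illustrative; the two EAGAIN causes still have to be told apart by checking RLIMIT_NPROC / ulimit -u and memory pressure yourself):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == -1) {
        /* errno tells you which documented failure occurred. */
        switch (errno) {
        case EAGAIN:
            fprintf(stderr, "fork: EAGAIN (RLIMIT_NPROC hit, or kernel could not "
                            "allocate page tables / task structure)\n");
            break;
        case ENOMEM:
            fprintf(stderr, "fork: ENOMEM (kernel memory is tight)\n");
            break;
        default:
            fprintf(stderr, "fork: %s\n", strerror(errno));
        }
        return 1;
    }
    if (pid == 0)
        _exit(0);   /* child does nothing interesting here */
    return 0;       /* parent */
}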
unshare --pid /bin/bash: fork cannot allocate memory
I’m experimenting with Linux namespaces, specifically the PID namespace. I thought I’d test something out with bash but ran into this problem:
unshare -p /bin/bash
bash: fork: Cannot allocate memory
2 Answers
The error is caused by the PID 1 process exiting in the new namespace.
After bash starts to run, it will fork several new sub-processes to do some things. If you run unshare without -f, bash will have the same PID as the current "unshare" process. The current "unshare" process calls the unshare system call and creates a new PID namespace, but the current "unshare" process itself is not in the new PID namespace. This is the desired behavior of the Linux kernel: when process A creates a new namespace, process A itself is not put into the new namespace; only the sub-processes of process A are. So when you run:
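unshare -p /bin/bash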
the unshare process will exec /bin/bash, and /bin/bash forks several sub-processes. The first sub-process of bash becomes PID 1 of the new namespace, and that sub-process exits after it completes its job. So PID 1 of the new namespace exits.
The PID 1 process has a special function: it should become the parent of all orphan processes. If the PID 1 process of the root namespace exits, the kernel panics. If the PID 1 process of a sub-namespace exits, the Linux kernel calls the disable_pid_allocation function, which clears the PIDNS_HASH_ADDING flag in that namespace. When the Linux kernel creates a new process, it calls the alloc_pid function to allocate a PID in a namespace, and if the PIDNS_HASH_ADDING flag is not set, alloc_pid returns an -ENOMEM error. That’s why you get the "Cannot allocate memory" error.
You can resolve this issue by using the '-f' option, for example:
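unshare -fp /bin/bash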
If you run unshare with the '-f' option, unshare forks a new process after it creates the new PID namespace and runs /bin/bash in that new process. The new process becomes PID 1 of the new PID namespace. Then bash will also fork several sub-processes to do some jobs. As bash itself is PID 1 of the new PID namespace, its sub-processes can exit without any problem.