Linux start process and exit

How can I run a command which will survive terminal close?

Sometimes I want to start a process and forget about it. If I start it from the command line, like this:

I can’t close the terminal, or it will kill the process. Can I run a command in such a way that I can close the terminal without killing the process?

To anyone facing the same problem: remember that even if you type yourExecutable & and the output keeps appearing on the screen and Ctrl+C does not seem to stop anything, just blindly type disown; and press Enter, even if the screen is scrolling with output and you can't see what you're typing. The process will be disowned and you'll be able to close the terminal without the process dying.

11 Answers

One of the following 2 should work:
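(The actual command listings were lost here; judging from the comments below, the two options were along these lines, with redshift standing in for your command:)

nohup redshift

or

redshift & disown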

See the following for a bit more information on how this works:

The second one (redshift & disown) worked for me on Ubuntu 10.10. It seems to work fine putting it all on one line. Is there any reason that I shouldn't do this?

@Matthew The first should work fine too, it just doesn't background like the second (you possibly want nohup redshift & so it does background). And putting the second on one line is fine; you don't need an extra separator because the & already separates the commands, so redshift & disown works as written (adding a ; right after the & would actually be a syntax error in bash).

Good answer. One might add that it would be a good idea to redirect stdout and stderr so that the terminal won't be spammed with debug output.
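For instance, something along these lines (yourExecutable and app.log are just placeholders):

nohup yourExecutable >/dev/null 2>&1 &    # or >app.log 2>&1 to keep the output in a file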

If your program is already running you can pause it with Ctrl-Z , pull it into the background with bg and then disown it, like this:

$ sleep 1000
^Z
[1]+  Stopped                 sleep 1000
$ bg
$ disown
$ exit

@necktwi disown does not disconnect stdout or stderr. However nohup does, as does >/dev/null (disconnect standard output) and 2>/dev/null (disconnect standard error). What disown disconnects is the shell's job control.

To anyone else finding this: this really messed things up for me, so be careful. Ctrl+Z stopped it just fine, but bg made it start running again and now it won't respond to Ctrl+Z or Ctrl+C. Now there seems to be no way to exit the running command safely at all. No idea why; I did exactly what it said, but for whatever reason bg brought it back running. I can't do the next step of typing disown because there's too much output to type anything.

@JohnMellor You were probably running into a situation similar to ctrl-alt-delor's comment, where the program is indeed running in the background but still has its stdout piped to your shell. Ctrl + C and Ctrl + Z won't do anything from here because no program is actually running in the foreground. You can still type commands from here, like ls or disown, without a problem; it is just really hard to read what you're typing while the output is being mixed in with your characters. I just had a similar situation with ffmpeg, and it works fine.


A good answer is already posted by @Steven D, yet I think this might clarify it a bit more.

The reason the process is killed when the terminal is closed is that the process you start is a child process of the terminal. Once you close the terminal, its child processes are killed as well. You can see the process tree with pstree, for example when running kate & in Konsole:

init-+
     ├─konsole─┬─bash─┬─kate───2*[{kate}]
     │         │      └─pstree
     │         └─2*[{konsole}]

To make the kate process detached from konsole when you terminate konsole, use nohup with the command, like this:
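(The command was dropped here; given the surrounding text it would be along the lines of:)

nohup kate &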

After closing konsole, pstree will look like this:
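(The pstree output was dropped here; roughly, kate no longer hangs off konsole but has been re-parented to init, something like:)

init-+
     ├─kate───2*[{kate}]
     ...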

An alternative is using screen/tmux/byobu, which will keep the shell running independently of the terminal.

I had problems with the disown methods. This is currently working for me, so upvoting. I also like this because I can tail -f nohup.out to see what's happening, without worrying about my session failing.

You can run the process like this in the terminal:
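(The command itself was dropped here; judging from the follow-up comments about setsid and a new session, it was along the lines of:)

setsid yourExecutable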

This will run the program in a new session, as explained in my article here.

1) It doesn't print an annoying message about nohup.out. 2) It doesn't remain in your shell's job list, so it doesn't clutter the output of jobs.

This is the only solution that worked for me with a script that runs in lxterminal, launches pcmanfm and exits (with setsid the terminal can close while pcmanfm keeps running). Thank you so much!

Similar to what desgua said, but on my Fedora GNOME install. I was running a script to open other applications through my fish shell and a desktop application wrapper. nohup, open and xdg-open were not helping, even with the ampersand (&); I verified this in System Monitor (dependencies/hierarchy/family/tree mode), checking whether my new process still hung off the originally spawning shell under gnome-terminal-server, which was consistent between the non-working and the actually working approaches.

For example:

linux@linux-desktop:~$ (chromium-browser &)

Make sure to use parentheses when typing the command!

You probably ran the command from my answer without the parentheses; if so, closing the terminal will kill the process, otherwise it will NOT happen. I've already edited the solution above to make this clear.

I am wondering why this didn't get more upvotes. nohup suppresses tty output; I actually just wanted to redirect the output to a file and nohup got in the way of that, so this is the better solution for me.
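As an aside, nohup only falls back to nohup.out when stdout is not already redirected, so sending the output to a file of your choosing still works; a minimal sketch, with output.log as a placeholder:

nohup yourExecutable > output.log 2>&1 &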

Outstanding! I've never been able to figure out how to open apps/files/resources/etc. and advance the shell prompt from a single command, until now. So thank you. Can you elaborate on how the shell interprets (command &) vs command &? Why does the latter wait for more input? Also, if I want to immediately close the terminal after opening a GUI app, is it enough to just pass something like (nautilus path/to/foo/bar/ &); exit; to the shell? Just want to make sure I am not introducing any new problems.


@NicholasCousar I'm not sure how other shells interpret ( command & ), but in Bash that runs the command in a subshell. (It might be POSIX-standard behavior.)
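A quick way to see the difference from an interactive bash session (sleep stands in for a long-running command):

$ (sleep 300 &)    # subshell: the background process is orphaned immediately
$ jobs             # prints nothing - the shell keeps no job entry for it
$ sleep 300 &      # plain backgrounding, for comparison
$ jobs
[1]+  Running                 sleep 300 &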

Though all of the suggestions work well, I've found my alternative is to use screen, a program that sets up a virtual terminal on your screen.

You might consider starting it with screen -S session_name. Screen can be installed on virtually all Linux and Unix derivatives. Hitting Ctrl + A and (lower case) C will open a second window. This allows you to toggle back and forth between the initial window by hitting Ctrl + A and 0, or the newer window by hitting Ctrl + A and 1. You can have up to ten windows in one terminal. I used to start a session at work, go home, ssh into my work machine, and then invoke screen -d -R session_name. This will reconnect you to that remote session.
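As a rough sketch of that workflow (session_name is just an example name):

screen -S session_name      # start a named session and work inside it
# press Ctrl+A then D to detach; everything inside keeps running
screen -ls                  # later (e.g. after ssh-ing back in), list the sessions
screen -d -R session_name   # detach it from anywhere else and reattach here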

screen -d -m command will start it within an already detached screen:

~# screen -d -m top
~# screen -ls
There is a screen on:
        10803..Server-station   (Detached)
1 Socket in /root/.screen.
~# screen -x

...and you're in.

I have a script (which I called run) to:

  • Run arbitrary commands in the background
  • Stop them from being killed with the terminal window
  • Suppress their output
  • Handle the exit status

I use it mainly for gedit, evince, inkscape etc. that all have lots of annoying terminal output. If the command finishes before TIMEOUT, nohup's exit status is returned instead of zero. Contents of run:

#!/bin/bash
TIMEOUT=0.1

# use nohup to run the command, suppressing its output and allowing the terminal to be closed
# also send nohup's output to /dev/null, suppressing nohup.out
# run nohup in the background so this script doesn't block
nohup "${@}" >/dev/null 2>&1 &
NOHUP_PID=$!

# kill this script after a short time, exiting with success status - the command is still running
# this is needed as there is no timeout argument for `wait` below
MY_PID=$$
trap "exit 0" SIGINT SIGTERM
sleep $TIMEOUT && kill $MY_PID 2>/dev/null &   # ignore "No such process" error if this exits normally

# if the command finishes before the above timeout, everything may be just fine or there could have been an error
wait $NOHUP_PID
NOHUP_STATUS=$?

# print an error if there was any. most commonly, there was a typo in the command
[ $NOHUP_STATUS != 0 ] && echo "Error ${@}"

# return the exit status of nohup, whatever it was
exit $NOHUP_STATUS
>>> run true && echo success || echo fail
success
>>> run false && echo success || echo fail
Error false
fail
>>> run sleep 1000 && echo success || echo fail
success
>>> run notfound && echo success || echo fail
Error notfound
fail


Bash script to start process, wait random, kill process, restart

I’m an absolute beginner and am trying to create a bash script to randomize the start and exit of a command line app. I plan to autostart the script on boot (Crunchbang) after a slight delay with the following in autostart.sh (found here: http://interwebworld.co.uk/2011/10/23/how-to-launch-programs-automatically-at-startup-in-crunchbang-linux/ )

(sleep 300s && /home/myuser/Scripts/randomizer.sh) & 
In pseudocode, what I want randomizer.sh to do is:

start applicationfile
wait a random period of time
if applicationfile is still running
    kill its process
    wait a random period of time
    exit this script and restart this script
else
    exit this script and restart this script

The randomizer.sh as I have it so far, and which I'd welcome some help with, is as follows (it still contains remnants of the pseudocode); the sleep delay trick is from here: http://blog.buberel.org/2010/07/howto-random-sleep-duration-in-bash.html

/path/to/applicationfile -s 111.222.333.444 -u username -p password
sleep $[ ( $RANDOM % 150 ) + 60 ]m
if applicationfile is still running
    kill $(ps aux | grep '[u]sername' | awk '{print $2}')
    sleep $[ ( $RANDOM % 150 ) + 60 ]m
    exec $randomizer.sh
else
    exec $randomizer.sh

I "think" the non-pseudo parts should work pretty much as they are, but please correct me or adjust if I'm wrong. The initial applicationfile command line works as it is, and I have already tested the process kill line and it works as expected. Applicationfile doesn't have a built-in way to end itself from the command line, but the dead connection on the remote machine will be dropped about 5 minutes after the local process is killed, so killing it locally is acceptable for my needs. What I don't have any idea how to handle is the line above the kill, which checks "if" the process is running in the first place. Sorry for the wall of text, but I wanted to show I've done as much as I could already.
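Since the open question is the "is it still running" check, here is a minimal sketch of what randomizer.sh could look like; the path, server address and credentials are placeholders copied from the draft above, and kill -0 is used to test whether the saved PID is still alive (pgrep -x applicationfile would work too). Note the application is started in the background with &, so the script can keep going; started in the foreground it would block until the application exits.

#!/bin/bash
# start the application in the background and remember its PID
/path/to/applicationfile -s 111.222.333.444 -u username -p password &
APP_PID=$!

# wait a random 60-209 minutes
sleep $(( (RANDOM % 150) + 60 ))m

# if the application is still running, kill it and wait again
if kill -0 "$APP_PID" 2>/dev/null; then
    kill "$APP_PID"
    sleep $(( (RANDOM % 150) + 60 ))m
fi

# restart this script from the top
exec "$0"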



Use SSH to start a background process on a remote server, and exit session

This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C. When I hit Ctrl-C, the remote process continues to run in the background. I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started. Is there a way to make this happen?

4 Answers

The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,

ssh -f user@host "echo foo; sleep 5; echo bar" 

If you type the above, you will get your shell prompt back immediately; you will then see "foo" output. Five seconds later you will see "bar" output. In the meantime, you could have been using the shell.

Note that this will keep your SSH session running in the background. If the session is disconnected the command will stop running.

When using nohup, make sure you also redirect stdin, stdout and stderr:

ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null >std.out 2> std.err &' 

In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host . since that will only put the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine, which will show you that the ssh call is still active, just put in the background.

In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.

