How can I pipe initial input into process which will then be interactive?
I’d like to be able to inject an initial command into the launching of an interactive process, so that I can do something like this:
echo "initial command" | INSERT_MAGIC_HERE some_tool tool> initial command [result of initial command] tool> [now I type an interactive command]
- Just piping the initial command in doesn’t work, as this results in stdin not being connected to the terminal
- Writing to /dev/pts/[number] sends the output to the terminal, not input to the process as if it were from the terminal
What would work, but with disadvantages:
- Make a command which forks a child, writes to its stdin and then forwards everything from its own stdin. Downside — terminal control things (like line vs character mode) won’t work. Maybe I could do something with proxying of pseudo terminals?
- Make a modified version of xterm (I’m launching one for this task anyway) with a command line option to inject additional commands after encountering a desired prompt string. Ugly.
- Make a modified version of the tool I’m trying to run so that it accepts an initial command on the command line. Breaks the standard installation.
(The tool of current interest, incidentally, is android’s adb shell — I want to open an interactive shell on the phone, run a command automatically, and then have an interactive session)
Does adb have options similar to -i, -c, as in the following command: python -i -c 'print "initial command"'? Here the initial command is print "initial command" and the -i option forces interactive mode afterwards.
I've written a program that creates a pipe, forks, dup2s the read end onto the child's stdin, puts its own stdin into raw mode, pushes an initial string through the pipe, and then proxies stdin through. It seems to work in this case, but I'm still wondering if there is a standard solution.
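A rough sketch of that approach, assuming a POSIX system; "some_tool" and the initial string below are placeholders and error handling is omitted:

#!/usr/bin/env python3
# Rough sketch of the pipe/fork/dup2 proxy described above (illustrative only;
# "some_tool" and the initial string are placeholders).
import os
import tty
import termios

INITIAL = b"initial command\n"     # placeholder initial input
TOOL = ["some_tool"]               # placeholder interactive program

read_end, write_end = os.pipe()

pid = os.fork()
if pid == 0:                       # child: stdin comes from the pipe
    os.close(write_end)
    os.dup2(read_end, 0)
    os.close(read_end)
    os.execvp(TOOL[0], TOOL)

os.close(read_end)
os.write(write_end, INITIAL)       # push the initial command first

saved = termios.tcgetattr(0)       # raw mode so keystrokes pass straight through
tty.setraw(0)
try:
    while True:
        data = os.read(0, 1024)
        if not data:               # our stdin closed
            break
        try:
            os.write(write_end, data)
        except BrokenPipeError:    # child exited
            break
finally:
    termios.tcsetattr(0, termios.TCSAFLUSH, saved)
    os.close(write_end)
    os.waitpid(pid, 0)

As noted in the question, the child still sees a pipe rather than a terminal, which is why the pty-based answer further down is the more complete fix.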
7 Answers
You don’t need to write a new tool to forward stdin — one has already been written ( cat ):
(echo "initial command" && cat) | some_tool
This does have the downside of connecting a pipe to some_tool , not a terminal.
Good idea. I had been wondering about something like that. This does work, but there are sometimes downsides to not having a terminal — such as not being able to send ctrl-C through the chain (for example to kill something running under the shell of the embedded device obtained from some_tool) or to do character-by-character access (I was surprised to discover I can painfully run vi through my experimental program). Still, this answer has real utility since it’s portable, requires nothing custom, and would be enough a lot of the time.
This slight modification of your answer seems to do what I want: stty raw -echo; (echo "initial command" && cat) | some_tool; stty sane
Very useful answer. Can this be extended to take arguments for multiple prompts in the interactive programme? For example, a programme with three questions, which will always be answered "n", "y", and "7"?
@ChrisStratton I'm trying your solution, but I can't do anything with the following example: stty raw -echo; (echo "test" && cat) | less; stty sane. When I enter it in my terminal, I can't kill/stop it in any way. It just hangs; I have to kill it externally.
The accepted answer is simple and mostly good.
But it has a disadvantage: the program gets a pipe as its input, not a terminal. This means that autocompletion will not work. In a lot of cases, this also disables pretty output, and I've heard some programs just refuse to work if stdin is not a terminal.
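(As an aside, this is roughly how programs notice the difference: they check whether stdin is a terminal, as in the hypothetical snippet below. The actual fix follows.)

import os
import sys

# Hypothetical check a program might perform before enabling interactive
# features such as completion or coloured output.
if sys.stdin.isatty() and os.isatty(sys.stdout.fileno()):
    print("running interactively")
else:
    print("stdin/stdout is a pipe or file; disabling interactive features")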
The ptypipe program below solves the problem. It creates a pseudoterminal and spawns a program connected to it, first feeding it the extra input passed on the command line, and then forwarding whatever the user types on stdin.
For example, ptypipe "import this" python3 makes Python execute "import this" first, and then it drops you to the interactive command prompt, with working completion and other stuff.
Likewise, ptypipe "date" bash runs Bash, which executes date and then gives you a shell. Again, with working completion, a colourized prompt and so on.
#!/usr/bin/env python3
import sys
import os
import pty
import tty
import select
import subprocess

STDIN_FILENO = 0
STDOUT_FILENO = 1
STDERR_FILENO = 2

def _writen(fd, data):
    while data:
        n = os.write(fd, data)
        data = data[n:]

def main_loop(master_fd, extra_input):
    fds = [master_fd, STDIN_FILENO]
    _writen(master_fd, extra_input)
    while True:
        rfds, _, _ = select.select(fds, [], [])
        if master_fd in rfds:
            data = os.read(master_fd, 1024)
            if not data:
                fds.remove(master_fd)
            else:
                os.write(STDOUT_FILENO, data)
        if STDIN_FILENO in rfds:
            data = os.read(STDIN_FILENO, 1024)
            if not data:
                fds.remove(STDIN_FILENO)
            else:
                _writen(master_fd, data)

def main():
    extra_input = sys.argv[1]
    interactive_command = sys.argv[2]

    if hasattr(os, "fsencode"):
        # convert them back to bytes
        # http://bugs.python.org/issue8776
        interactive_command = os.fsencode(interactive_command)
        extra_input = os.fsencode(extra_input)

    # add implicit newline (compare a one-byte slice, since indexing
    # a bytes object yields an int)
    if extra_input and extra_input[-1:] != b'\n':
        extra_input += b'\n'

    # replace LF with CR (shells like CR for some reason)
    extra_input = extra_input.replace(b'\n', b'\r')

    pid, master_fd = pty.fork()
    if pid == 0:
        os.execlp("sh", "/bin/sh", "-c", interactive_command)

    try:
        mode = tty.tcgetattr(STDIN_FILENO)
        tty.setraw(STDIN_FILENO)
        restore = True
    except tty.error:  # this is the same as termios.error
        restore = False

    try:
        main_loop(master_fd, extra_input)
    except OSError:
        pass

    if restore:
        tty.tcsetattr(0, tty.TCSAFLUSH, mode)

    os.close(master_fd)
    return os.waitpid(pid, 0)[1]

if __name__ == "__main__":
    main()
(Note: I’m afraid this solution contains a possible deadlock. You may want to feed extra_input in small chunks to avoid it)
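One way to address that, as a rough sketch reusing the helpers and names from the program above: keep the extra input in a buffer and only write it to the pty master when select() reports it writable, a small chunk at a time.

def main_loop(master_fd, extra_input):
    # Variant of main_loop above: feed extra_input incrementally instead of
    # all at once, so a child that is not yet reading cannot deadlock us.
    fds = [master_fd, STDIN_FILENO]
    pending = extra_input
    while True:
        check_writable = [master_fd] if pending else []
        rfds, wfds, _ = select.select(fds, check_writable, [])
        if pending and master_fd in wfds:
            n = os.write(master_fd, pending[:512])   # small chunk
            pending = pending[n:]
        if master_fd in rfds:
            data = os.read(master_fd, 1024)
            if not data:
                fds.remove(master_fd)
            else:
                os.write(STDOUT_FILENO, data)
        if STDIN_FILENO in rfds:
            data = os.read(STDIN_FILENO, 1024)
            if not data:
                fds.remove(STDIN_FILENO)
            else:
                _writen(master_fd, data)             # keystrokes are small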
How do I write data to an existing process's STDIN from an external process?
I'm looking for ways to write data to an existing process's STDIN from external processes, and I found the similar question How do you stream data into the STDIN of a program from different local/remote processes in Python? on Stack Overflow. In that thread, @Michael says that we can get the file descriptors of an existing process under paths like the ones shown below, and that on Linux we are permitted to write data into them.
So I created the simple script listed below to test writing data to the script's STDIN (and TTY) from an external process.
#!/usr/bin/env python
import os, sys

def get_ttyname():
    for f in sys.stdin, sys.stdout, sys.stderr:
        if f.isatty():
            return os.ttyname(f.fileno())
    return None

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > {}".format(get_ttyname()))
    print("$ echo 'foobar' > /proc/{}/fd/0".format(os.getpid()))
    print("read :: [" + sys.stdin.readline() + "]")
This test script shows the paths of its STDIN and TTY and then waits for someone to write to its STDIN. I launched the script and got the messages below.
Try commands below
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/3308/fd/0
So I executed the commands echo 'foobar' > /dev/pts/6 and echo 'foobar' > /proc/3308/fd/0 from another terminal. After executing both commands, the message foobar is displayed twice on the terminal the test script is running on, but that's all; the line print("read :: [" + sys.stdin.readline() + "]") was not executed. Is there any way to write data from external processes to an existing process's STDIN (or other file descriptors), i.e. to trigger execution of the line print("read :: [" + sys.stdin.readline() + "]") from another process?
Writing to stdin of a process
To me this looks like it just took the print "Hello"\n and wrote it to stdout, yet did not interpret it. Why is that not working, and what would I have to do to make it work?
The TIOCSTI ioctl can write to a terminal's stdin as if the data had been entered from the keyboard. For example github.com/thrig/scripts/blob/master/tty/ttywrite.c
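In Python, roughly the same thing can be done with fcntl.ioctl and termios.TIOCSTI. A sketch (the tty path is just an example, you need permission to open the target terminal, and newer kernels may restrict or disable TIOCSTI):

#!/usr/bin/env python3
# Sketch: push bytes into a terminal's input queue with TIOCSTI, as if
# they had been typed. TIOCSTI takes one byte per ioctl call.
import fcntl
import termios

tty_path = "/dev/pts/6"     # example target terminal
data = b"ls -l\n"           # bytes to "type" into it

with open(tty_path, "wb", buffering=0) as t:
    for i in range(len(data)):
        fcntl.ioctl(t, termios.TIOCSTI, data[i:i+1])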
3 Answers
Accessing /proc/PID/fd/0 doesn’t access file descriptor 0 of process PID, it accesses the file which PID has open on file descriptor 0. This is a subtle distinction, but it matters. A file descriptor is a connection that a process has to a file. Writing to a file descriptor writes to the file regardless of how the file has been opened.
If /proc/PID/fd/0 is a regular file, writing to it modifies the file. The data isn’t necessarily what the process will read next: it depends on the position attached to the file descriptor that the process is using to read the file. When a process opens /proc/PID/fd/0 , it gets the same file as the other process, but the file positions are independent.
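A quick self-contained way to see the independent positions, using /proc/self so one script can play both roles (the file name is arbitrary):

import os

path = "/tmp/fdpos_demo.txt"                  # throwaway example file
with open(path, "w") as f:
    f.write("hello world\n")

reader = open(path)                           # stands in for the other process's fd
reader.read(6)                                # its position is now 6

# Re-open the same file via /proc, the way an external process would
# open /proc/<pid>/fd/<n>.
other = open("/proc/%d/fd/%d" % (os.getpid(), reader.fileno()), "r+")
print(other.tell())                           # 0: a new, independent position
other.write("HELLO")                          # modifies the start of the file
other.flush()

print(repr(reader.read()))                    # 'world\n': reader continues at 6
reader.close()
other.close()
os.remove(path)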
If /proc/PID/fd/0 is a pipe, then writing to it appends the data to the pipe’s buffer. In that case, the process that’s reading from the pipe will read the data.
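The pipe case can be checked with a short script; here cat stands in for the target process, and the write goes through /proc rather than the pipe handle we already hold:

#!/usr/bin/env python3
# Sketch: when fd 0 of a process is a pipe, writing to /proc/<pid>/fd/0
# feeds that process's stdin.
import subprocess
import time

child = subprocess.Popen(["cat"], stdin=subprocess.PIPE)

# Write via /proc, as an unrelated process would.
with open("/proc/%d/fd/0" % child.pid, "w") as f:
    f.write("injected through /proc\n")

time.sleep(0.5)          # give cat a moment to echo the injected line
child.stdin.close()      # close our own write end so cat sees EOF
child.wait()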
If /proc/PID/fd/0 is a terminal, then writing to it outputs the data on a terminal. A terminal file is bidirectional: writing to it outputs the data, i.e. the terminal displays the text; reading from a terminal inputs the data, i.e. the terminal transmits user input.
Python is both reading from and writing to the terminal. When you run echo 'print "Hello"' > /proc/$(pidof python)/fd/0, you're writing print "Hello" to the terminal. The terminal displays print "Hello" as instructed. The python process doesn't see anything; it's still waiting for input.
If you want to feed input to the Python process, you have to get the terminal to do it. See crasic’s answer for ways to do that.
How do I attach a terminal to a detached process?
That terminal is now long closed, but the process is still running, and I want to send some commands to that process's stdin. Is that possible?
The easiest way (if you are still in the same terminal) is to run jobs (to see if the process is still running) and, if it is, use fg to bring it to the foreground. After that, you can start sending commands and you will also receive its stdout. PS: "sending it to the background again" can be done with CTRL+Z (suspend) and then running bg (run the last job in the background). See some tutorials on this topic to learn more.
5 Answers
Yes, it is. First, create a pipe: mkfifo /tmp/fifo . Use gdb to attach to the process: gdb -p PID
Then close stdin: call close (0) ; and open it again: call open ("/tmp/fifo", 0600)
Finally, write away (from a different terminal, as gdb will probably hang), for example: echo 'some input' > /tmp/fifo
@rustyx: Untested, but this should work: create a file rather than a pipe, touch /tmp/thefile . Stdout is 1, so call close (1) ; also, use the correct permissions for writing: call open ("/tmp/thefile", 0400) . The echo… is, of course, not needed.
This is great! I'm using this to send "y" or "n" responses to certain processes that have been completely detached. The detached process has its stdout in a separate window. When I do this trick, however, I can see that it does not "receive" the 'y' or 'n' as soon as I echo it; I must quit gdb and detach, and only then does it receive all of the echoes. Is there a way to do this without needing to quit gdb before the process receives the input from the fifo?
When the original terminal is no longer accessible:
Have a look at reptyr, which does exactly that. The github page has all the information.
reptyr — A tool for "re-ptying" programs.
reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don’t want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.
USAGE
reptyr PID
«reptyr PID» will grab the process with id PID and attach it to your current terminal.
After attaching, the process will take input from and write output to the new terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have to run «bg» or «fg» in the old terminal. This is likely impossible to fix in a reasonable way without patching your shell.)
An edit claims that "reptyr cannot grab a process which has subprocesses, nor the subprocess itself (reptyr version 0.6.2)." Limited support does exist; see the linked issues.