Linux file command in python

How do I execute a program or call a system command?

Another common way is os.system, but you shouldn't use it: it is unsafe if any part of the command comes from outside your program or can contain spaces or other special characters. Also, subprocess.run is generally more flexible (you can get the stdout, the stderr, the "real" status code, better error handling, etc.). Even the documentation for os.system recommends using subprocess instead.

On Python 3.4 and earlier, use subprocess.call instead of .run:
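For illustration, a minimal subprocess.run() call (Python 3.5+) might look like this; the echo command is just a stand-in for whatever program you need to run:

```python
import subprocess

# Run a command given as a list of arguments (no shell involved),
# capture its output, and raise CalledProcessError on a non-zero exit.
result = subprocess.run(
    ["echo", "Hello World"],
    capture_output=True,  # Python 3.7+; on 3.5/3.6 use stdout/stderr=subprocess.PIPE
    text=True,            # decode bytes to str
    check=True,
)
print(result.stdout)      # "Hello World" plus a trailing newline
print(result.returncode)  # 0
```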

Is there a way to use variable substitution? I.e., I tried to do echo $PATH by using call(["echo", "$PATH"]), but it just echoed the literal string $PATH instead of doing any substitution. I know I could get the PATH environment variable, but I'm wondering if there is an easy way to have the command behave exactly as if I had executed it in bash.

@KevinWheeler You should NOT use shell=True; for this purpose Python comes with os.path.expandvars. In your case you can write: os.path.expandvars("$PATH"). @SethMMorton please reconsider your comment -> Why not to use shell=True
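A sketch of the expandvars approach the comment suggests (assuming PATH is set in your environment):

```python
import os
import subprocess

# Expand environment variables in Python itself instead of asking a shell to.
expanded = os.path.expandvars("$PATH")

# The expanded value is passed as a literal argument; no shell is involved.
subprocess.run(["echo", expanded], check=True)
```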

The many-arguments version looks like this: subprocess.run(["balcon.exe", "-n", "Tatyana", "-t", "Hello world"])

Here is a summary of ways to call external programs, including their advantages and disadvantages:

    os.system passes the command and arguments to your system’s shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

os.system("some_command < input_file | another_command >output_file") 
print(subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read()) 
print(os.popen("echo Hello World").read()) 
return_code = subprocess.call("echo Hello World", shell=True) 

The subprocess module should probably be what you use.
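For example, a shell pipeline like the one shown for os.system can be reproduced without any shell by connecting two Popen objects; the commands here (echo and tr) are just stand-ins:

```python
import subprocess

# Equivalent of the shell pipeline: echo "hello world" | tr a-z A-Z
p1 = subprocess.Popen(["echo", "hello world"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits first
output = p2.communicate()[0]
print(output.decode())  # HELLO WORLD
```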

Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string, you are responsible for escaping it. There are serious security implications if any part of the string you pass cannot be fully trusted (for example, if a user is entering some or any part of the string). If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:

print(subprocess.Popen("echo %s" % user_input, shell=True, stdout=subprocess.PIPE).stdout.read()) 

and imagine that the user enters something like "my mama didnt love me && rm -rf /", which could erase the whole filesystem.
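The sketch below shows why the list form is the usual defense: the hostile string is handed to echo as one literal argument, and no shell ever sees the &&:

```python
import subprocess

user_input = "my mama didnt love me && rm -rf /"

# With a list and no shell=True, user_input is a single argv entry;
# the && and rm are never interpreted as shell syntax.
result = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(result.stdout)  # the literal string, nothing executed
```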

Nice answer/explanation. How is this answer justifying Python's motto as described in this article? fastcompany.com/3026446/… "Stylistically, Perl and Python have different philosophies. Perl's best-known motto is "There's More Than One Way to Do It". Python is designed to have one obvious way to do it." Seems like it should be the other way around! In Perl I know only two ways to execute a command — using a back-tick or open.


What one typically needs to know is what is done with the child process's STDOUT and STDERR, because if they are ignored, under some (quite common) conditions, eventually the child process will issue a system call to write to STDOUT (STDERR too?) that would exceed the output buffer provided for the process by the OS, and the OS will cause it to block until some process reads from that buffer. So, with the currently recommended ways, subprocess.run(..), what exactly does "This does not capture stdout or stderr by default." imply? What about subprocess.check_output(..) and STDERR?
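To sketch what the comment is asking about: by default subprocess.run() lets the child inherit the parent's stdout/stderr, so no pipe can fill up; with capture_output=True it reads both streams to completion internally, which also avoids the pipe-buffer deadlock. The child below writes more than a typical 64 KiB pipe buffer:

```python
import subprocess
import sys

# run() drains stdout and stderr itself, so even output far larger than
# the OS pipe buffer cannot deadlock the child.
completed = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stdout.write('x' * 200000)"],
    capture_output=True,
    text=True,
)
print(len(completed.stdout))  # 200000
```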

Which of the commands you recommend block my script? I.e., if I want to run multiple commands in a for loop, how do I do it without blocking my Python script? I don't care about the output of the commands; I just want to run lots of them.
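To sketch an answer to the loop question: Popen returns immediately, so the commands run concurrently; the sleep commands here are placeholders for your own:

```python
import subprocess

procs = []
for _ in range(3):
    # Popen does not wait; all three commands run at the same time.
    p = subprocess.Popen(["sleep", "0.1"],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    procs.append(p)

# Wait at the end only if you care about completion.
exit_codes = [p.wait() for p in procs]
print(exit_codes)  # [0, 0, 0]
```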

This is arguably the wrong way around. Most people only need subprocess.run() or its older siblings subprocess.check_call() et al. For cases where these do not suffice, see subprocess.Popen() . os.popen() should perhaps not be mentioned at all, or come even after «hack your own fork/exec/spawn code».

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print(line.decode(), end='')
retval = p.wait() 

You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters ( stdout= and stderr= ) and it’ll behave like os.system() .

.readlines() reads all lines at once, i.e., it blocks until the subprocess exits (closes its end of the pipe). To read in real time (if there are no buffering issues) you could: for line in iter(p.stdout.readline, b''): print(line)

Could you elaborate on what you mean by "if there are no buffering issues"? If the process blocks indefinitely, the subprocess call also blocks. The same could happen with my original example as well. What else could happen with respect to buffering?

the child process may use block-buffering in non-interactive mode instead of line-buffering so p.stdout.readline() (note: no s at the end) won’t see any data until the child fills its buffer. If the child doesn’t produce much data then the output won’t be in real time. See the second reason in Q: Why not just use a pipe (popen())?. Some workarounds are provided in this answer (pexpect, pty, stdbuf)

the buffering issue only matters if you want output in real time; it doesn't apply to your code, which doesn't print anything until all data is received
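A sketch of the real-time reading pattern discussed above, using iter() with a sentinel; python -u forces the child to flush so the example is deterministic:

```python
import subprocess
import sys

p = subprocess.Popen(
    [sys.executable, "-u", "-c", "print('one'); print('two')"],
    stdout=subprocess.PIPE,
    text=True,
)

lines = []
# readline() returns '' only at EOF, so the loop ends when the child exits.
for line in iter(p.stdout.readline, ""):
    lines.append(line.rstrip("\n"))
p.wait()
print(lines)  # ['one', 'two']
```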


This answer was fine for its time, but we should no longer recommend Popen for simple tasks. This also needlessly specifies shell=True . Try one of the subprocess.run() answers.

Some hints on detaching the child process from the calling one (starting the child process in background).

Suppose you want to start a long task from a CGI script. That is, the child process should live longer than the CGI script execution process.

The classical example from the subprocess module documentation is:

import subprocess
import sys

# Some code here

pid = subprocess.Popen([sys.executable, "longtask.py"])  # Call subprocess

# Some more code here 

The idea here is that you do not want to wait in the line ‘call subprocess’ until the longtask.py is finished. But it is not clear what happens after the line ‘some more code here’ from the example.

My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows API. If you happen to have installed pywin32, you can import the flag from the win32process module, otherwise you should define it yourself:

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid 

/* UPD 2015.10.27 @eryksun in a comment below notes, that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010) */

On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE,
                       stdin=subprocess.PIPE) 

I have not checked the code on other platforms and do not know the reasons for the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling on starting background processes in Python does not shed any light yet.
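On current Python versions a common POSIX approach (not verified on FreeBSD, so treat it as a sketch) is start_new_session=True, which runs setsid() in the child so it can outlive the parent:

```python
import subprocess
import sys

# start_new_session=True detaches the child into its own session (POSIX),
# so it is not terminated together with the parent's process group.
p = subprocess.Popen(
    [sys.executable, "-c", "pass"],  # stand-in for longtask.py
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)
p.wait()  # only so this demo is self-contained; a CGI script would not wait
```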


Execute shell commands in Python

I’m currently studying penetration testing and Python programming. I just want to know how I would go about executing a Linux command in Python. The commands I want to execute are:

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 8080 

If I just use print in Python and run it in the terminal, will it do the same as executing the command as if you were typing it yourself and pressing Enter?


6 Answers

You can use os.system() , like this:

os.system('echo 1 > /proc/sys/net/ipv4/ip_forward')
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 8080') 

Better yet, you can use subprocess's call; it is safer, more powerful, and likely faster:

from subprocess import call
call('echo "I like potatos"', shell=True) 

Or, without invoking the shell:

call(['echo', 'I like potatos']) 

If you want to capture the output, one way of doing it is like this:

import subprocess

cmd = ['echo', 'I like potatos']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
o, e = proc.communicate()

print('Output: ' + o.decode('ascii'))
print('Error: ' + e.decode('ascii'))
print('code: ' + str(proc.returncode)) 

I highly recommend setting a timeout in communicate, and also capturing the exceptions you can get when calling it. This is very error-prone code, so you should expect errors to happen and handle them accordingly.
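A sketch of that advice, with a deliberately slow child so the timeout path is exercised; the kill-then-communicate sequence follows the pattern in the subprocess docs:

```python
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(10)"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
try:
    out, err = proc.communicate(timeout=1)  # seconds
    timed_out = False
except subprocess.TimeoutExpired:
    proc.kill()                    # stop the child...
    out, err = proc.communicate()  # ...and reap it to avoid a zombie
    timed_out = True
print(timed_out)  # True: the child slept longer than the timeout
```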

@binarysubstrate, deprecated as in not supported or not available? I’ve been recently working on machine with 2.7 (not by choice), and os.system still works.

With Python 3.4, shell=True has to be stated, otherwise the call command will not work. By default, call will try to open a file specified by the string unless shell=True is set. It also looks like in Python 3.5 call is replaced with run.

The first command simply writes to a file. You wouldn’t execute that as a shell command because python can read and write to files without the help of a shell:

with open('/proc/sys/net/ipv4/ip_forward', 'w') as f:
    f.write("1") 

The iptables command is something you may want to execute externally. The best way to do this is to use the subprocess module.

import subprocess

subprocess.check_call(['iptables', '-t', 'nat', '-A', 'PREROUTING',
                       '-p', 'tcp', '--destination-port', '80',
                       '-j', 'REDIRECT', '--to-port', '8080']) 

Note that this method also does not spawn a shell, which would be unnecessary overhead.
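As a quick sketch of check_call's error behaviour (using the harmless true and false commands as stand-ins for iptables):

```python
import subprocess

# Returns 0 when the command succeeds.
rc = subprocess.check_call(["true"])

# Raises CalledProcessError when the exit status is non-zero.
try:
    subprocess.check_call(["false"])
    failed = False
except subprocess.CalledProcessError as exc:
    failed = True
    print(exc.returncode)  # 1
```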

import os

os.system("your command here") 

This isn’t the most flexible approach; if you need any more control over your process than «run it once, to completion, and block until it exits», then you should use the subprocess module instead.

As a general rule, you'd better use Python bindings whenever possible (better exception catching, among other advantages).

For the echo command, it’s obviously better to use python to write in the file as suggested in @jordanm’s answer.

For the iptables command, maybe python-iptables (PyPi page, GitHub page with description and doc) would provide what you need (I didn’t check your specific command).

This would make you depend on an external lib, so you have to weigh the benefits. Using subprocess works, but if you want to use the output, you'll have to parse it yourself and deal with output changes in future iptables versions.

