[python] How to execute a program or call a system command from Python

How do you call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script?

This question is related to: python, shell, terminal, subprocess, command

The answer is


There is also Plumbum

>>> from plumbum import local
>>> ls = local["ls"]
>>> ls
LocalCommand(<LocalPath /bin/ls>)
>>> ls()
u'build.py\ndist\ndocs\nLICENSE\nplumbum\nREADME.rst\nsetup.py\ntests\ntodo.txt\n'
>>> notepad = local["c:\\windows\\notepad.exe"]
>>> notepad()                                   # Notepad window pops up
u''                                             # Notepad window is closed by user, command returns

Here is how to call an external command and return or print the command's output.

Python's subprocess.check_output is good for this:

Run command with arguments and return its output as a byte string.

import subprocess
output = subprocess.check_output(['ipconfig', '/all'])
print(output)

I have written a wrapper that handles errors, redirects output, and other things.

import shlex
import subprocess
import sys

import psutil

def call_cmd(cmd, stdout=sys.stdout, quiet=False, shell=False, raise_exceptions=True, use_shlex=True, timeout=None):
    """Exec command by command line like 'ln -ls "/var/log"'
    """
    if not quiet:
        print("Run %s" % str(cmd))
    if use_shlex and isinstance(cmd, str):
        cmd = shlex.split(cmd)
    if timeout is None:
        process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
        retcode = process.wait()
    else:
        process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
        p = psutil.Process(process.pid)
        finish, alive = psutil.wait_procs([p], timeout)
        if len(alive) > 0:
            ps = p.children()
            ps.insert(0, p)
            print('waiting for timeout again due to child process check')
            finish, alive = psutil.wait_procs(ps, 0)
        if len(alive) > 0:
            print('process {} will be killed'.format([p.pid for p in alive]))
            for p in alive:
                p.kill()
            if raise_exceptions:
                print('External program timeout at {} {}'.format(timeout, cmd))
                raise subprocess.TimeoutExpired(cmd, timeout)
        retcode = process.wait()
    if retcode and raise_exceptions:
        print("External program failed %s" % str(cmd))
        raise subprocess.CalledProcessError(retcode, cmd)

You can call it like this:

cmd = 'ln -ls "/var/log"'
with open('out.txt', 'w') as out:
    call_cmd(cmd, stdout=out)

MOST OF THE CASES:

For most cases, a short snippet of code like this is all you are going to need:

import subprocess
import shlex

source = "test.txt"
destination = "test_copy.txt"

base = "cp {source} {destination}'"
cmd = base.format(source=source, destination=destination)
subprocess.check_call(shlex.split(cmd))

It is clean and simple.

subprocess.check_call runs a command with arguments and waits for it to complete.

shlex.split splits the string cmd using shell-like syntax.
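
For example, a quick interactive sketch of what shlex.split produces (the paths are illustrative):

>>> import shlex
>>> shlex.split('cp "my file.txt" "destination folder/"')
['cp', 'my file.txt', 'destination folder/']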

REST OF THE CASES:

If this does not work for some specific command, you most probably have a problem with the command-line interpreter: the operating system chose a default one that is not suitable for your type of program, or could not find an adequate one on the system executable path.

Example:

Using the redirection operator on a Unix system

input_1 = "input_1.txt"
input_2 = "input_2.txt"
output = "merged.txt"
base_command = "/bin/bash -c 'cat {input} >> {output}'"

command = base_command.format(input=input_1, output=output)
subprocess.check_call(shlex.split(command))

command = base_command.format(input=input_2, output=output)
subprocess.check_call(shlex.split(command))

As it is stated in The Zen of Python: Explicit is better than implicit

So, using Python >= 3.6 f-strings, it would look something like this:

import subprocess
import shlex

def run_command(cmd_interpreter: str, command: str) -> None:
    base_command = f"{cmd_interpreter} -c '{command}'"
    subprocess.check_call(shlex.split(base_command))


If you are not using user input in the commands, you can use this:

from os import getcwd
from subprocess import check_output

def sh(command):
    return check_output(command, shell=True, cwd=getcwd(), universal_newlines=True).strip()

And use it as

branch = sh('git rev-parse --abbrev-ref HEAD')

shell=True will spawn a shell, so you can use pipes and other shell features, e.g. sh('ps aux | grep python'). This is very handy for running hardcoded commands and processing their output. universal_newlines=True makes sure the output is returned as a string instead of bytes.

cwd=getcwd() will make sure that the command is run with the same working directory as the interpreter. This is handy for Git commands, like the Git branch name example above.

Some recipes

  • free memory in megabytes: sh('free -m').split('\n')[1].split()[1]
  • free space on / in percent: sh('df -m /').split('\n')[1].split()[4][0:-1]
  • CPU load: sum(map(float, sh('ps -ef -o pcpu').split('\n')[1:]))

But this isn't safe for user input, from the documentation:

Security Considerations

Unlike some other popen functions, this implementation will never implicitly call a system shell. This means that all characters, including shell metacharacters, can safely be passed to child processes. If the shell is invoked explicitly, via shell=True, it is the application’s responsibility to ensure that all whitespace and metacharacters are quoted appropriately to avoid shell injection vulnerabilities.

When using shell=True, the shlex.quote() function can be used to properly escape whitespace and shell metacharacters in strings that are going to be used to construct shell commands.

Even when using shlex.quote(), it is good to remain a little paranoid when using user input in shell commands. One option is to use a hardcoded command and filter its generic output by the user input. In any case, using shell=False will make sure that only the exact process you want to execute is executed, or you get a No such file or directory error.

Also, there is some performance impact with shell=True: from my tests it seems about 20% slower than shell=False (the default).

In [50]: timeit("check_output('ls -l'.split(), universal_newlines=True)", number=1000, globals=globals())
Out[50]: 2.6801227919995654

In [51]: timeit("check_output('ls -l', universal_newlines=True, shell=True)", number=1000, globals=globals())
Out[51]: 3.243950183999914

I use this for Python 3.6+:

import subprocess
def execute(cmd):
    """
        Purpose  : To execute a command and return its output and error
        Argument : cmd - command to execute
        Return   : result, error
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()
    rc = process.returncode
    if rc != 0:
        print("Error: failed to execute command: ", cmd)
        print(error.rstrip().decode("utf-8"))
    return result.rstrip().decode("utf-8"), error.rstrip().decode("utf-8")
# def


There are a number of ways to call an external command from Python. There are some functions and modules with good helper functions that can make it really easy, but the recommended one among all is the subprocess module.

import subprocess as s
s.call(["command.exe","..."])

The call function will start the external process, pass some command-line arguments, and wait for it to finish. When it finishes, you continue executing. Arguments to the call function are passed through the list: the first item in the list is the command, typically in the form of an executable file, and the subsequent items are whatever arguments you want to pass.

If you have called processes from the command line in Windows before, you'll be aware that you often need to quote arguments: you put quotation marks around them, if there's a space then there's a backslash, and there are some complicated rules. You can avoid a whole lot of that in Python by using the subprocess module, because the command is a list, each item is known to be distinct, and Python can get the quoting correct for you.

At the end, after the list, there are a number of optional parameters; one of these is shell, and if you set shell=True then your command is going to be run as if you had typed it at the command prompt.

s.call(["command.exe","..."], shell=True)

This gives you access to functionality like piping: you can redirect to files and chain multiple commands in one string.
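
A minimal sketch of what that enables, assuming a Unix shell (the command is illustrative):

import subprocess as s
# With shell=True the string is interpreted by the shell, so pipes and
# file redirection work exactly as they would at the prompt.
s.call("ls -l | grep '.py' > python_files.txt", shell=True)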

One more thing: if your script relies on the process succeeding, then you want to check the result, and the result can be checked with the check_call helper function.

s.check_call(...)

It is exactly the same as the call function; it takes the same arguments and the same list, and you can pass in any of the extra arguments. The difference is that it waits for the command to complete, and if the exit code of the command is anything other than zero, it will throw an exception in the Python script.
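
A small sketch of handling that exception (the failing command here is illustrative):

import subprocess as s
try:
    s.check_call(["ls", "/nonexistent-path"])
except s.CalledProcessError as e:
    # check_call raises this when the command exits with a non-zero status
    print("Command failed with exit code", e.returncode)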

Finally, if you want tighter control, there is the Popen constructor, which is also from the subprocess module. It takes the same arguments as the call and check_call functions, but it returns an object representing the running process.

p=s.Popen("...")

It does not wait for the running process to finish, and it's not going to throw any exception immediately, but it gives you an object that will let you do things like wait for it to finish, communicate with it, and redirect standard input and standard output if you want to display the output somewhere else, and a lot more.
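
For instance, a sketch of driving a process through the Popen object (the command is illustrative):

import subprocess as s
p = s.Popen(["ls", "-l"], stdout=s.PIPE, stderr=s.PIPE)
out, err = p.communicate()   # wait for the process and collect its output
print("exit code:", p.returncode)
print(out.decode())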


There are many ways to call a command.

  • For example:

Suppose and.exe needs two parameters. In cmd we can call and.exe like this: and.exe 2 3, and it shows 5 on the screen.

If we use a Python script to call and.exe, we can do it like...

  1. os.system(cmd,...)

    • os.system(("and.exe" + " " + "2" + " " + "3"))
  2. os.popen(cmd,...)

    • os.popen(("and.exe" + " " + "2" + " " + "3"))
  3. subprocess.Popen(cmd,...)
    • subprocess.Popen(("and.exe" + " " + "2" + " " + "3"))

That is clumsy, so we can build cmd by joining with spaces:

import os
cmd = " ".join([exename] + parameters)
os.popen(cmd)

Typical implementation:

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
for line in p.stdout.readlines():
    print(line, end='')
retval = p.wait()

You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().


For Python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as return code.

from subprocess import PIPE, run

command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)

There are a lot of different ways to run external commands in Python, and all of them have their own plus sides and drawbacks.

My colleagues and I have been writing Python system administration tools, so we need to run a lot of external commands, and sometimes you want them to block or run asynchronously, time out, update every second, etc.

There are also different ways of handling the return code and errors, and you might want to parse the output, and provide new input (in an expect kind of style). Or you will need to redirect stdin, stdout and stderr to run in a different tty (e.g., when using screen).

So you will probably have to write a lot of wrappers around the external command. So here is a Python module which we have written which can handle almost anything you would want, and if not, it's very flexible so you can easily extend it:

https://github.com/hpcugent/vsc-base/blob/master/lib/vsc/utils/run.py

Update: It doesn't work standalone and requires some of our other tools; it has gained a lot of specialised functionality over the years, so it might not be a drop-in replacement for you, but it can give you a lot of info on how the internals of running commands from Python work and ideas on how to handle certain situations.


After some research, I have the following code which works very well for me. It basically prints both stdout and stderr in real time.

import subprocess
import sys
import threading

stdout_result = 1
stderr_result = 1


def stdout_thread(pipe):
    global stdout_result
    while True:
        out = pipe.stdout.read(1)
        stdout_result = pipe.poll()
        if out == '' and stdout_result is not None:
            break

        if out != '':
            sys.stdout.write(out)
            sys.stdout.flush()


def stderr_thread(pipe):
    global stderr_result
    while True:
        err = pipe.stderr.read(1)
        stderr_result = pipe.poll()
        if err == '' and stderr_result is not None:
            break

        if err != '':
            sys.stdout.write(err)
            sys.stdout.flush()


def exec_command(command, cwd=None):
    if cwd is not None:
        print('[' + ' '.join(command) + '] in ' + cwd)
    else:
        print('[' + ' '.join(command) + ']')

    p = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd,
        universal_newlines=True
    )

    out_thread = threading.Thread(name='stdout_thread', target=stdout_thread, args=(p,))
    err_thread = threading.Thread(name='stderr_thread', target=stderr_thread, args=(p,))

    err_thread.start()
    out_thread.start()

    out_thread.join()
    err_thread.join()

    return stdout_result + stderr_result

This is how I run my commands. This code has pretty much everything you need.

from subprocess import Popen, PIPE

cmd = "ls -l ~/"
p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print("Return code: ", p.returncode)
print(out.decode().rstrip(), err.decode().rstrip())

Update:

subprocess.run is the recommended approach as of Python 3.5 if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease-of-use as Envoy. (Piping isn't as straightforward though. See this question for how.)

Here are some examples from the documentation.

Run a process:

>>> subprocess.run(["ls", "-l"])  # Doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)

Raise on failed run:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Capture output:

>>> subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')

Original answer:

I recommend trying Envoy. It's a wrapper for subprocess, which in turn aims to replace the older modules and functions. Envoy is subprocess for humans.

Example usage from the README:

>>> r = envoy.run('git config', data='data to pipe in', timeout=2)

>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''

Pipe stuff around too:

>>> r = envoy.run('uptime | pbcopy')

>>> r.command
'pbcopy'
>>> r.status_code
0

>>> r.history
[<Response 'uptime'>]



os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not call sh directly and is therefore more secure than os.system.

Get more information here.
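
A minimal sketch of the difference (the path is illustrative):

import os
import subprocess

# os.system hands the whole string to the shell, so metacharacters
# in it are interpreted:
os.system('ls -l /tmp')

# subprocess with an argument list runs the program directly,
# with no shell involved:
subprocess.call(['ls', '-l', '/tmp'])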


Invoke is a Python (2.7 and 3.4+) task execution tool and library. It provides a clean, high-level API for running shell commands:

>>> from invoke import run
>>> cmd = "pip install -r requirements.txt"
>>> result = run(cmd, hide=True, warn=True)
>>> print(result.ok)
True
>>> print(result.stdout.splitlines()[-1])
Successfully installed invocations-0.13.0 pep8-1.5.7 spec-1.3.1

Shameless plug, I wrote a library for this :P https://github.com/houqp/shell.py

It's basically a wrapper for popen and shlex for now. It also supports piping commands, so you can chain commands more easily in Python. So you can do things like:

ex('echo hello shell.py') | "awk '{print $2}'"



import subprocess

p = subprocess.run(["ls", "-ltr"], capture_output=True)
print(p.stdout.decode(), p.stderr.decode())




os.popen() is one of the easiest ways to execute a command (though not the safest, since the command runs through the shell). You can execute any command that you run on the command line. In addition, you will also be able to capture the output of the command using os.popen().read().

You can do it like this:

import os
output = os.popen('Your Command Here').read()
print(output)

An example where you list all the files in the current directory:

import os
output = os.popen('ls').read()
print(output)
# Outputs list of files in the directory

I always use fabric for things like this:

from fabric.operations import local
result = local('ls', capture=True)
print("Content:\n%s" % (result, ))

But this seems to be a good tool: sh (Python subprocess interface).

Look at an example:

from sh import vgdisplay
print(vgdisplay())
print(vgdisplay('-v'))
print(vgdisplay(v=True))

The very simplest way to run any command and get the result back (Python 2 only; the commands module was removed in Python 3):

from commands import getstatusoutput

try:
    status, output = getstatusoutput("ls -ltr")
except Exception:
    status, output = None, None

There are lots of different libraries which allow you to call external commands with Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l (list all files). If you want to find out more about any of the libraries, see the documentation linked for each of them.

Hopefully this will help you make a decision on which library to use :)

subprocess

Subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). Subprocess is the default choice for running commands, but sometimes other modules are better.

subprocess.run(["ls", "-l"]) # Run command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE) # This will run the command and return any output
subprocess.run(shlex.split("ls -l")) # You can also use the shlex library to split the command

os

os is used for "operating system dependent functionality". It can also be used to call external commands with os.system and os.popen (Note: There is also subprocess.Popen). os will always run the shell and is a simple alternative for people who don't need to, or don't know how to, use subprocess.run.

os.system("ls -l") # run command
os.popen("ls -l").read() # This will run the command and return any output

sh

sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times.

sh.ls("-l") # Run command normally
ls_cmd = sh.Command("ls") # Save command as a variable
ls_cmd() # Run command as if it were a function

plumbum

plumbum is a library for "script-like" Python programs. You can call programs like functions as in sh. Plumbum is useful if you want to run a pipeline without the shell.

ls_cmd = plumbum.local["ls"]["-l"] # get command
ls_cmd() # run command

pexpect

pexpect lets you spawn child applications, control them and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix.

pexpect.run("ls -l") # Run command as normal
child = pexpect.spawn('scp foo user@example.com:.') # Spawns child application
child.expect('Password:') # When this is the output
child.sendline('mypassword')

fabric

fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is a simple alternative for running commands in a secure shell (SSH).

fabric.operations.local('ls -l') # Run command as normal
fabric.operations.local('ls -l', capture=True) # Run command and receive output

envoy

envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess module.

r = envoy.run("ls -l") # Run command
r.std_out # get output

commands

commands contains wrapper functions for os.popen, but it has been removed from Python 3 since subprocess is a better alternative.



Often, I use the following function for external commands, and it is especially handy for long-running processes. The method below tails process output while the process is running, returns the output, and raises an exception if the process fails.

It detects that the process is done using the poll() method on the process.

import shlex
import subprocess
import sys

def exec_long_running_proc(command, args):
    # shlex.quote handles arguments containing spaces or other shell metacharacters
    cmd = "{} {}".format(command, " ".join(shlex.quote(str(arg)) for arg in args))
    print(cmd)
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    # Poll process for new output until finished
    output = ''
    while True:
        nextline = process.stdout.readline().decode('UTF-8')
        if nextline == '' and process.poll() is not None:
            break
        output += nextline
        sys.stdout.write(nextline)
        sys.stdout.flush()

    exitCode = process.returncode

    if exitCode == 0:
        return output
    else:
        raise Exception(command, exitCode, output)

You can invoke it like this:

exec_long_running_proc(command = "hive", args=["-f", hql_path])


Under Linux, in case you would like to call an external command that will execute independently (that will keep running after the Python script terminates), you can use a simple queue such as task spooler or the at command.

An example with task spooler:

import os
os.system('ts <your-command>')

Notes about task spooler (ts):

  1. You could set the number of concurrent processes to be run ("slots") with:

    ts -S <number-of-slots>

  2. Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done.


I would recommend the following run method; it will give us STDOUT, STDERR, and the exit status as a dictionary, and the caller can read the dictionary returned by run to know the actual state of the process.

import subprocess

def run(cmd):
    print("+ DEBUG exec({0})".format(cmd))
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, shell=True)
    (out, err) = p.communicate()
    ret = p.wait()
    out = list(filter(None, out.split('\n')))
    err = list(filter(None, err.split('\n')))
    ret = True if ret == 0 else False
    return dict({'output': out, 'error': err, 'status': ret})
#end



os.system does not allow you to store results, so if you want to store results in some list or something, a subprocess.call works.


Use subprocess.

...or for a very simple command:

import os
os.system('cat testfile')



As an example (in Linux):

import subprocess
subprocess.run('mkdir test.dir', shell=True)

This creates test.dir in the current directory. Note that this also works:

import subprocess
subprocess.call('mkdir test.dir', shell=True)

The equivalent code using os.system is:

import os
os.system('mkdir test.dir')

Best practice would be to use subprocess instead of os, with .run favored over .call. All you need to know about subprocess is here. Also, note that all Python documentation is available for download from here. I downloaded the PDF packed as .zip. I mention this because there's a nice overview of the os module in tutorial.pdf (page 81). Besides, it's an authoritative resource for Python coders.


Sultan is a recent-ish package meant for this purpose. It provides some niceties around managing user privileges and adds helpful error messages.

from sultan.api import Sultan

with Sultan.load(sudo=True, hostname="myserver.com") as sultan:
  sultan.yum("install -y tree").run()

You can use Popen, and then you can check the process's status:

from subprocess import Popen

proc = Popen(['ls', '-l'])
if proc.poll() is None:
    proc.kill()

Check out subprocess.Popen.




If you're writing a Python shell script and have IPython installed on your system, you can use the bang prefix to run a shell command inside IPython:

!ls
filelist = !ls

As some of the answers involve previous versions of Python or the os.system call, I post this answer for people like me who intend to use subprocess in Python 3.5+. The following did the trick for me on Linux:

import subprocess

# subprocess.run() returns a completed process object that can be inspected
c = subprocess.run(["ls", "-ltrh"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(c.stdout.decode('utf-8'))

As mentioned in the documentation, PIPE values are byte sequences, and decoding should be considered to show them properly. In later versions of Python, text=True and encoding='utf-8' can be added to the kwargs of subprocess.run().

The output of the abovementioned code is:

total 113M
-rwxr-xr-x  1 farzad farzad  307 Jan 15  2018 vpnscript
-rwxrwxr-x  1 farzad farzad  204 Jan 15  2018 ex
drwxrwxr-x  4 farzad farzad 4.0K Jan 22  2018 scripts
.... # Some other lines

A simple way is to use the os module:

import os
os.system('ls')

Alternatively, you can also use the subprocess module:

import subprocess
subprocess.check_call('ls')

If you want the result to be stored in a variable try:

import subprocess
r = subprocess.check_output('ls')

To fetch the network id from the OpenStack Neutron:

#!/usr/bin/python
import os
netid = "nova net-list | awk '/ External / { print $2 }'"
temp = os.popen(netid).read()  # Here temp also contains a newline (\n)
networkId = temp.rstrip()
print(networkId)

Output of nova net-list

+--------------------------------------+------------+------+
| ID                                   | Label      | CIDR |
+--------------------------------------+------------+------+
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1      | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External   | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal   | None |
+--------------------------------------+------------+------+

Output of print(networkId)

27a74fcd-37c0-4789-9414-9531b7e3f126

Check the "pexpect" Python library, too.

It allows interactive control of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:

child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')

Calling an external command in Python

Simple: use subprocess.run, which returns a CompletedProcess object:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

Why?

As of Python 3.5, the documentation recommends subprocess.run:

The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.

Here's an example of the simplest possible usage - and it does exactly as asked:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

run waits for the command to successfully finish, then returns a CompletedProcess object. It may instead raise TimeoutExpired (if you give it a timeout= argument) or CalledProcessError (if it fails and you pass check=True).
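
For example, a sketch of the timeout case (assuming a Unix sleep; traceback abbreviated):

>>> subprocess.run(['sleep', '10'], timeout=1)
Traceback (most recent call last):
  ...
subprocess.TimeoutExpired: Command '['sleep', '10']' timed out after 1 seconds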

As you might infer from the above example, stdout and stderr both get piped to your own stdout and stderr by default.

We can inspect the returned object and see the command that was given and the returncode:

>>> completed_process.args
'python --version'
>>> completed_process.returncode
0

Capturing output

If you want to capture the output, you can pass subprocess.PIPE to the appropriate stderr or stdout:

>>> cp = subprocess.run('python --version', 
                        stderr=subprocess.PIPE, 
                        stdout=subprocess.PIPE)
>>> cp.stderr
b'Python 3.6.1 :: Anaconda 4.4.0 (64-bit)\r\n'
>>> cp.stdout
b''

(I find it interesting and slightly counterintuitive that the version info gets put to stderr instead of stdout.)

Pass a command list

One might easily move from manually providing a command string (like the question suggests) to providing a string built programmatically. Don't build strings programmatically. This is a potential security issue. It's better to assume you don't trust the input.

>>> import textwrap
>>> args = ['python', textwrap.__file__]
>>> cp = subprocess.run(args, stdout=subprocess.PIPE)
>>> cp.stdout
b'Hello there.\r\n  This is indented.\r\n'

Note, only args should be passed positionally.

Full Signature

Here's the actual signature in the source and as shown by help(run):

def run(*popenargs, input=None, timeout=None, check=False, **kwargs):

The popenargs and kwargs are given to the Popen constructor. input can be a string of bytes (or unicode, if you specify encoding or universal_newlines=True) that will be piped to the subprocess's stdin.
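
A small sketch of that (assuming a Unix cat is available):

>>> cp = subprocess.run(['cat'], input=b'hello', stdout=subprocess.PIPE)
>>> cp.stdout
b'hello'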

The documentation describes timeout= and check=True better than I could:

The timeout argument is passed to Popen.communicate(). If the timeout expires, the child process will be killed and waited for. The TimeoutExpired exception will be re-raised after the child process has terminated.

If check is true, and the process exits with a non-zero exit code, a CalledProcessError exception will be raised. Attributes of that exception hold the arguments, the exit code, and stdout and stderr if they were captured.

and this example for check=True is better than one I could come up with:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Expanded Signature

Here's an expanded signature, as given in the documentation:

subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, 
shell=False, cwd=None, timeout=None, check=False, encoding=None, 
errors=None)

Note that this indicates that only the args list should be passed positionally. So pass the remaining arguments as keyword arguments.

Popen

When should you use Popen instead? I would struggle to find a use-case based on the arguments alone. Direct usage of Popen would, however, give you access to its methods, including poll, send_signal, terminate, and wait.
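
A quick sketch of that kind of process control on POSIX (the command is illustrative):

>>> p = subprocess.Popen(['sleep', '5'])
>>> p.poll() is None   # still running
True
>>> p.terminate()      # ask it to stop
>>> p.wait()           # reap it; negative value means killed by that signal
-15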

Here's the Popen signature as given in the source. I think this is the most precise encapsulation of the information (as opposed to help(Popen)):

def __init__(self, args, bufsize=-1, executable=None,
             stdin=None, stdout=None, stderr=None,
             preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS,
             shell=False, cwd=None, env=None, universal_newlines=False,
             startupinfo=None, creationflags=0,
             restore_signals=True, start_new_session=False,
             pass_fds=(), *, encoding=None, errors=None):

But more informative is the Popen documentation:

subprocess.Popen(args, bufsize=-1, executable=None, stdin=None,
                 stdout=None, stderr=None, preexec_fn=None, close_fds=True,
                 shell=False, cwd=None, env=None, universal_newlines=False,
                 startupinfo=None, creationflags=0, restore_signals=True,
                 start_new_session=False, pass_fds=(), *, encoding=None, errors=None)

Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function. The arguments to Popen are as follows.

Understanding the remaining documentation on Popen will be left as an exercise for the reader.



With the standard library

Use the subprocess module (Python 3):

import subprocess
subprocess.run(['ls', '-l'])

It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.

Note on Python version: If you are still using Python 2, subprocess.call works in a similar way.

ProTip: shlex.split can help you parse the command for run, call, and other subprocess functions, in case you don't want to (or can't!) provide them in the form of lists:

import shlex
import subprocess
subprocess.run(shlex.split('ls -l'))

With external dependencies

If you do not mind external dependencies, use plumbum:

from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())

It is the best subprocess wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum.

Another popular library is sh:

from sh import ifconfig
print(ifconfig('wlan0'))

However, sh dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh.


If you need the output from the command you are calling, then you can use subprocess.check_output (Python 2.7+).

>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'

Also note the shell parameter.

If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory. However, note that Python itself offers implementations of many shell-like features (in particular, glob, fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and shutil).
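
For instance, a shell wildcard can often be replaced with glob (a small sketch; the pattern is illustrative):

import glob

# Instead of: subprocess.check_output('ls *.py', shell=True)
py_files = glob.glob('*.py')   # expands the wildcard without any shell
print(py_files)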


It can be this simple:

import os
cmd = "your command"
os.system(cmd)


Using the Popen function of the subprocess Python module is the simplest way of running Linux commands. There, the Popen.communicate() function will give you your command's output. For example:

import subprocess

..
process = subprocess.Popen(..)   # Pass command and arguments to the function
stdout, stderr = process.communicate()   # Get command output and error
..

As of Python 3.7.0, released on June 27th 2018 (https://docs.python.org/3/whatsnew/3.7.html), you can achieve your desired result in the most powerful, and equally simple, way. This answer intends to show you the essential summary of the various options in a short manner. For in-depth answers, please see the other ones.


TL;DR in 2021

The big advantage of os.system(...) was its simplicity. subprocess is better and still easy to use, especially as of Python 3.5.

import subprocess
subprocess.run("ls -a", shell=True)

Note: This is the exact answer to your question: running a command like in a shell.


Preferred Way

If possible, remove the shell overhead and run the command directly (requires a list).

import subprocess
subprocess.run(["help"])
subprocess.run(["ls", "-a"])

Pass program arguments in a list. Don't include \"-escaping for arguments containing spaces.
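
For instance, an argument with spaces goes in as a single list item, with no quoting (the filename is illustrative):

import subprocess

# Each list element reaches the program as exactly one argument,
# so the space needs no escaping.
subprocess.run(["ls", "-l", "my file.txt"])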


Advanced Use Cases

Checking The Output

The following code speaks for itself:

import subprocess
result = subprocess.run(["ls", "-a"], capture_output=True, text=True)
if "stackoverflow-logo.png" in result.stdout:
    print("You're a fan!")
else:
    print("You're not a fan?")

result.stdout is all normal program output excluding errors. Read result.stderr to get them.

capture_output=True - turns capturing on. Otherwise result.stderr and result.stdout would be None. Available from Python 3.7.

text=True - a convenience argument added in Python 3.7 which converts the received binary data to Python strings you can easily work with.

Checking the returncode

Do

if result.returncode == 127:
    print("The program failed for some weird reason")
elif result.returncode == 0:
    print("The program succeeded")
else:
    print("The program failed unexpectedly")

If you just want to check if the program succeeded (returncode == 0) and otherwise throw an Exception, there is a more convenient function:

result.check_returncode()

But it's Python, so there's an even more convenient argument check which does the same thing automatically for you:

result = subprocess.run(..., check=True)

stderr should be inside stdout

You might want to have all program output inside stdout, even errors. To accomplish this, run

result = subprocess.run(..., stderr=subprocess.STDOUT)

result.stderr will then be None and result.stdout will contain everything.

using shell=False with an argument string

shell=False expects a list of arguments. You might however, split an argument string on your own using shlex.

import subprocess
import shlex
subprocess.run(shlex.split("ls -a"))

That's it.

Common Problems

Chances are high you just started using Python when you come across this question. Let's look at some common problems.

FileNotFoundError: [Errno 2] No such file or directory: 'ls -a': 'ls -a'

You're running a subprocess without shell=True. Either use a list (["ls", "-a"]) or set shell=True.

TypeError: [...] NoneType [...]

Check that you've set capture_output=True.

TypeError: a bytes-like object is required, not [...]

You always receive byte results from your program. If you want to work with it like a normal string, set text=True.

subprocess.CalledProcessError: Command '[...]' returned non-zero exit status 1.

Your command didn't run successfully. You could disable returncode checking or check your actual program's validity.

TypeError: __init__() got an unexpected keyword argument [...]

You're likely using a version of Python older than 3.7.0; update to the most recent one available. Otherwise, there are other answers in this Stack Overflow post showing you older alternative solutions.


Here are my two cents: In my view, this is the best practice when dealing with external commands...

These are the return values from the execute method...

success, stdout, stderr = execute(["ls", "-la"], "/home/user/desktop")

This is the execute method...

import subprocess

def execute(cmdArray, workingDir):

    stdout = ''
    stderr = ''

    try:
        try:
            process = subprocess.Popen(cmdArray,cwd=workingDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1)
        except OSError:
            return [False, '', 'ERROR : command(' + ' '.join(cmdArray) + ') could not get executed!']

        for line in iter(process.stdout.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except:
                echoLine = str(line)

            stdout += echoLine

        for line in iter(process.stderr.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except:
                echoLine = str(line)

            stderr += echoLine

    except (KeyboardInterrupt,SystemExit) as err:
        return [False,'',str(err)]

    process.stdout.close()

    returnCode = process.wait()
    if returnCode != 0 or stderr != '':
        return [False, stdout, stderr]
    else:
        return [True, stdout, stderr]

This can be done in 2 ways:

1. os.system

   import os

To run a system command:

   os.system('cls')

To execute a program:

   os.system('code1.py')

2. subprocess.call

   import subprocess

To run a system command:

   subprocess.call('cls',shell=True)

To execute a program:

   subprocess.call('code1.py',shell=True)

Here's a summary of the ways to call external programs and the advantages and disadvantages of each:

  1. os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

    os.system("some_command < input_file | another_command > output_file")  
    

However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation.

  2. stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the I/O slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. See the documentation.

  3. The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:

    print(subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read())
    

    instead of:

    print(os.popen("echo Hello World").read())
    

    but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.

  4. The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:

    return_code = subprocess.call("echo Hello World", shell=True)  
    

    See the documentation.

  5. If you're on Python 3.5 or later, you can use the new subprocess.run function, which is a lot like the above but even more flexible and returns a CompletedProcess object when the command finishes executing.

  6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

The subprocess module should probably be what you use.

Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string, you are responsible for escaping it. There are serious security implications if any part of the string that you pass cannot be fully trusted. For example, if a user is entering some/any part of the string. If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:

print subprocess.Popen("echo %s " % user_input, stdout=PIPE).stdout.read()

and imagine that the user enters "my mama didnt love me && rm -rf /", which could erase the whole filesystem.
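
A minimal sketch of the safe alternative, passing the untrusted value as a single list argument so the shell never interprets it:

import subprocess
user_input = "my mama didnt love me && rm -rf /"
# The whole string is handed to echo as one literal argument;
# the && is never interpreted, so nothing else is executed.
subprocess.run(["echo", user_input])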


I tend to use subprocess together with shlex (to handle escaping of quoted strings):

>>> import subprocess, shlex
>>> command = 'ls -l "/your/path/with spaces/"'
>>> call_params = shlex.split(command)
>>> print call_params
["ls", "-l", "/your/path/with spaces/"]
>>> subprocess.call(call_params)


The subprocess module described above by Eli is very powerful, but the syntax to make a bog-standard system call and inspect its output is unnecessarily prolix.

The easiest way to make a system call is with the commands module (Linux only).

> import commands
> commands.getstatusoutput("grep matter alice-in-wonderland.txt")
(0, "'Then it doesn't matter which way you go,' said the Cat.")

The first item in the tuple is the return code of the process. The second item is its standard output (and standard error, merged).


The Python devs have 'deprecated' the commands module, but that doesn't mean you shouldn't use it. Only that they're not developing it anymore, which is okay, because it's already perfect (at its small but important function).
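
For what it's worth, Python 3 offers the same convenience as subprocess.getstatusoutput, so the example above translates directly (reusing the same hypothetical file):

>>> import subprocess
>>> subprocess.getstatusoutput("grep matter alice-in-wonderland.txt")
(0, "'Then it doesn't matter which way you go,' said the Cat.")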


I'd recommend using the subprocess module instead of os.system because it avoids invoking the shell by default (so no shell escaping is needed) and is therefore much safer.

subprocess.call(['ping', 'localhost'])

Typical implementation:

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print line,
retval = p.wait()

You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().


Here are my two cents: In my view, this is the best practice when dealing with external commands...

These are the return values from the execute method...

success, stdout, stderr = execute(["ls","-la"],"/home/user/desktop")

This is the execute method...

import subprocess

def execute(cmdArray, workingDir):

    stdout = ''
    stderr = ''

    try:
        try:
            process = subprocess.Popen(cmdArray,cwd=workingDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1)
        except OSError:
            return [False, '', 'ERROR : command(' + ' '.join(cmdArray) + ') could not get executed!']

        for line in iter(process.stdout.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except UnicodeDecodeError:
                echoLine = str(line)

            stdout += echoLine

        for line in iter(process.stderr.readline, b''):

            try:
                echoLine = line.decode("utf-8")
            except UnicodeDecodeError:
                echoLine = str(line)

            stderr += echoLine

    except (KeyboardInterrupt,SystemExit) as err:
        return [False,'',str(err)]

    process.stdout.close()
    process.stderr.close()

    returnCode = process.wait()
    if returnCode != 0 or stderr != '':
        return [False, stdout, stderr]
    else:
        return [True, stdout, stderr]


There are many ways to call a command.

  • For example:

Suppose and.exe needs two parameters. In cmd we can call it with and.exe 2 3 and it shows 5 on the screen.

If we use a Python script to call and.exe, we can do it like:

  1. os.system(cmd,...)

    • os.system(("and.exe" + " " + "2" + " " + "3"))
  2. os.popen(cmd,...)

    • os.popen(("and.exe" + " " + "2" + " " + "3"))
  3. subprocess.Popen(cmd,...)
    • subprocess.Popen(("and.exe" + " " + "2" + " " + "3"))

Concatenating the pieces by hand is clumsy, so we can join the command and its parameters with spaces instead:

import os
cmd = " ".join([exename] + parameters)
os.popen(cmd)

Use subprocess.

...or for a very simple command:

import os
os.system('cat testfile')

Use:

import subprocess

p = subprocess.Popen("df -h", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")

It gives nice output which is easier to work with:

['Filesystem      Size  Used Avail Use% Mounted on',
 '/dev/sda6        32G   21G   11G  67% /',
 'none            4.0K     0  4.0K   0% /sys/fs/cgroup',
 'udev            1.9G  4.0K  1.9G   1% /dev',
 'tmpfs           387M  1.4M  386M   1% /run',
 'none            5.0M     0  5.0M   0% /run/lock',
 'none            1.9G   58M  1.9G   3% /run/shm',
 'none            100M   32K  100M   1% /run/user',
 '/dev/sda5       340G  222G  100G  69% /home',
 '']

Using the Popen class of the subprocess module is one of the simplest ways of running Linux commands. In that, the Popen.communicate() function will give you your command's output. For example

import subprocess

..
process = subprocess.Popen(..)   # Pass command and arguments to the function
stdout, stderr = process.communicate()   # Get command output and error
..
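For instance, a filled-in sketch of that skeleton (the date command is just an arbitrary stand-in):

import subprocess

process = subprocess.Popen(['date'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()  # waits for the process and collects both streams
print(stdout.decode())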

I always use fabric for things like this:

from fabric.operations import local
result = local('ls', capture=True)
print "Content:/n%s" % (result, )

But this seem to be a good tool: sh (Python subprocess interface).

Look at an example:

from sh import vgdisplay
print vgdisplay()
print vgdisplay('-v')
print vgdisplay(v=True)

If you need to call a shell command from a Python notebook (like Jupyter, Zeppelin, Databricks, or Google Cloud Datalab) you can just use the ! prefix.

For example,

!ls -ilF

os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not call sh directly and is therefore more secure than os.system.

Get more information here.


subprocess.check_call is convenient if you don't want to test return values. It throws an exception on any error.


You can use Popen, and then you can check the process's status:

from subprocess import Popen

proc = Popen(['ls', '-l'])
if proc.poll() is None:
    proc.kill()
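A slightly fuller sketch of the poll-based pattern, under the assumption that you want to give the process a few seconds before killing it rather than killing it immediately:

import time
from subprocess import Popen

proc = Popen(['ls', '-l'])
deadline = time.time() + 5            # allow up to five seconds
while proc.poll() is None and time.time() < deadline:
    time.sleep(0.1)                   # still running; check again shortly
if proc.poll() is None:               # did not finish in time
    proc.kill()
proc.wait()
print(proc.returncode)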

Check out subprocess.Popen.


import os
os.system("your command")

Note that this is dangerous, since the command isn't cleaned. I leave it up to you to google for the relevant documentation on the 'os' and 'sys' modules. There are a bunch of functions (exec* and spawn*) that will do similar things.


In Windows you can just import the subprocess module and run external commands by calling subprocess.Popen(), subprocess.Popen().communicate() and subprocess.Popen().wait() as below:

# Python script to run a command line
import subprocess

def execute(cmd):
    """
        Purpose  : To execute a command and return exit status
        Argument : cmd - command to execute
        Return   : exit_code
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()

    rc = process.wait()

    if rc != 0:
        print "Error: failed to execute command:", cmd
        print error
    return result
# def

command = "tasklist | grep python"
print "This process detail: \n", execute(command)

Output:

This process detail:
python.exe                     604 RDP-Tcp#0                  4      5,660 K

This can be done in 2 ways:

1. os.system

   import os

To run a system command:

   os.system('cls')

To execute a program:

   os.system('code1.py')

2. subprocess.call

   import subprocess

To run a system command:

   subprocess.call('cls', shell=True)

To execute a program:

   subprocess.call('code1.py', shell=True)


import os
cmd = 'ls -al'
os.system(cmd)

If you want to return the results of the command, you can use os.popen. However, this is deprecated since version 2.6 in favor of the subprocess module, which other answers have covered well.




It can be this simple:

import os
cmd = "your command"
os.system(cmd)

Calling an external command in Python

A simple way to call an external command is using os.system(...). And this function returns the exit value of the command. But the drawback is we won't get stdout and stderr.

import os

ret = os.system('some_cmd.sh')
if ret != 0:
    print 'some_cmd.sh execution returned failure'

Calling an external command in Python in background

subprocess.Popen provides more flexibility for running an external command rather than using os.system. We can start a command in the background and wait for it to finish. And after that we can get the stdout and stderr.

import subprocess

proc = subprocess.Popen(["./some_cmd.sh"], stdout=subprocess.PIPE)
print 'waiting for ' + str(proc.pid)
proc.wait()
print 'some_cmd.sh execution finished'
(out, err) = proc.communicate()
print 'some_cmd.sh output : ' + out

Calling a long running external command in Python in the background and stop after some time

We can even start a long running process in the background using subprocess.Popen and kill it after sometime once its task is done.

proc = subprocess.Popen(["./some_long_run_cmd.sh"], stdout=subprocess.PIPE)
# Do something else
# Now some_long_run_cmd.sh execution is no longer needed, so kill it
os.system('kill -15 ' + str(proc.pid))
print 'Output : ' + proc.communicate()[0]
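Rather than shelling out to kill, the Popen object exposes the same operation portably; a minimal sketch under the same assumptions (some_long_run_cmd.sh is your long-running script):

import subprocess

proc = subprocess.Popen(["./some_long_run_cmd.sh"], stdout=subprocess.PIPE)
# ... do something else ...
proc.terminate()              # sends SIGTERM on POSIX, equivalent to kill -15
out, _ = proc.communicate()   # reaps the process and collects its remaining output
print(out.decode())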

Use:

import os

cmd = 'ls -al'

os.system(cmd)

os - This module provides a portable way of using operating system-dependent functionality.

For the more os functions, here is the documentation.




os.popen() is one of the easiest ways to execute a command. You can execute any command that you run on the command line, and you can capture its output using os.popen().read(). Note that the command is passed to a shell, so the usual caveats about untrusted input apply.

You can do it like this:

import os
output = os.popen('Your Command Here').read()
print (output)

An example where you list all the files in the current directory:

import os
output = os.popen('ls').read()
print (output)
# Outputs list of files in the directory

Invoke is a Python (2.7 and 3.4+) task execution tool and library. It provides a clean, high-level API for running shell commands:

>>> from invoke import run
>>> cmd = "pip install -r requirements.txt"
>>> result = run(cmd, hide=True, warn=True)
>>> print(result.ok)
True
>>> print(result.stdout.splitlines()[-1])
Successfully installed invocations-0.13.0 pep8-1.5.7 spec-1.3.1






As of Python 3.7.0, released on June 27th, 2018 (https://docs.python.org/3/whatsnew/3.7.html), you can achieve your desired result in a way that is both powerful and simple. This answer intends to show you the essential summary of the various options in a short manner. For in-depth answers, please see the other ones.


TL;DR in 2021

The big advantage of os.system(...) was its simplicity. subprocess is better and still easy to use, especially as of Python 3.5.

import subprocess
subprocess.run("ls -a", shell=True)

Note: This is the exact answer to your question - running a command

like in a shell


Preferred Way

If possible, remove the shell overhead and run the command directly (requires a list).

import subprocess
subprocess.run(["help"])
subprocess.run(["ls", "-a"])

Pass program arguments in a list. Don't add shell-style quoting or backslash-escaping for arguments containing spaces; each list element is passed to the program verbatim.


Advanced Use Cases

Checking The Output

The following code speaks for itself:

import subprocess
result = subprocess.run(["ls", "-a"], capture_output=True, text=True)
if "stackoverflow-logo.png" in result.stdout:
    print("You're a fan!")
else:
    print("You're not a fan?")

result.stdout is all normal program output excluding errors. Read result.stderr to get them.

capture_output=True - turns capturing on. Otherwise result.stderr and result.stdout would be None. Available from Python 3.7.

text=True - a convenience argument added in Python 3.7 which converts the received binary data to Python strings you can easily work with.

Checking the returncode

Do

if result.returncode == 127: print("The program failed for some weird reason")
elif result.returncode == 0: print("The program succeeded")
else: print("The program failed unexpectedly")

If you just want to check if the program succeeded (returncode == 0) and otherwise throw an Exception, there is a more convenient function:

result.check_returncode()

But it's Python, so there's an even more convenient argument check which does the same thing automatically for you:

result = subprocess.run(..., check=True)

stderr should be inside stdout

You might want to have all program output inside stdout, even errors. To accomplish this, run

result = subprocess.run(..., stderr=subprocess.STDOUT)

result.stderr will then be None and result.stdout will contain everything.

using shell=False with an argument string

shell=False expects a list of arguments. You might, however, split an argument string on your own using shlex.

import subprocess
import shlex
subprocess.run(shlex.split("ls -a"))

That's it.

Common Problems

Chances are high you just started using Python when you come across this question. Let's look at some common problems.

FileNotFoundError: [Errno 2] No such file or directory: 'ls -a': 'ls -a'

You're running a subprocess without shell=True. Either use a list (["ls", "-a"]) or set shell=True.

TypeError: [...] NoneType [...]

Check that you've set capture_output=True.

TypeError: a bytes-like object is required, not [...]

You always receive byte results from your program. If you want to work with it like a normal string, set text=True.

subprocess.CalledProcessError: Command '[...]' returned non-zero exit status 1.

Your command didn't run successfully. You could disable returncode checking or check your actual program's validity.

TypeError: __init__() got an unexpected keyword argument [...]

You're likely using a version of Python older than 3.7.0; update it to the most recent one available. Otherwise there are other answers in this StackOverflow post showing you older alternative solutions.
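Putting the pieces together, a minimal sketch of a typical call that captures output, decodes it, enforces a timeout, and raises on failure (Python 3.7+, using only the flags described above):

import subprocess

result = subprocess.run(
    ["ls", "-a"],
    capture_output=True,  # fills result.stdout and result.stderr
    text=True,            # decode bytes to str
    timeout=10,           # raise TimeoutExpired if it hangs
    check=True,           # raise CalledProcessError on non-zero exit
)
print(result.stdout)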


To fetch the network id from the OpenStack Neutron:

#!/usr/bin/python
import os
netid = "nova net-list | awk '/ External / { print $2 }'"
temp = os.popen(netid).read()  # Here temp also contains a trailing newline (\n)
networkId = temp.rstrip()
print(networkId)

Output of nova net-list

+--------------------------------------+------------+------+
| ID                                   | Label      | CIDR |
+--------------------------------------+------------+------+
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1      | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External   | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal   | None |
+--------------------------------------+------------+------+

Output of print(networkId)

27a74fcd-37c0-4789-9414-9531b7e3f126
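The same lookup works without os.popen; a minimal sketch using subprocess under the same assumptions (the nova CLI is on the PATH and one row is labelled External):

import subprocess

# shell=True because the command is a shell pipeline (nova ... | awk ...)
cmd = "nova net-list | awk '/ External / { print $2 }'"
network_id = subprocess.check_output(cmd, shell=True).decode().rstrip()
print(network_id)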

As an example (in Linux):

import subprocess
subprocess.run('mkdir test.dir', shell=True)

This creates test.dir in the current directory. Note that this also works:

import subprocess
subprocess.call('mkdir test.dir', shell=True)

The equivalent code using os.system is:

import os
os.system('mkdir test.dir')

Best practice would be to use subprocess instead of os, with .run favored over .call. All you need to know about subprocess is here. Also, note that all Python documentation is available for download from here. I downloaded the PDF packed as .zip. I mention this because there's a nice overview of the os module in tutorial.pdf (page 81). Besides, it's an authoritative resource for Python coders.


Calling an external command in Python

Simple, use subprocess.run, which returns a CompletedProcess object:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

Why?

As of Python 3.5, the documentation recommends subprocess.run:

The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.

Here's an example of the simplest possible usage - and it does exactly as asked:

>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)

run waits for the command to successfully finish, then returns a CompletedProcess object. It may instead raise TimeoutExpired (if you give it a timeout= argument) or CalledProcessError (if it fails and you pass check=True).

As you might infer from the above example, stdout and stderr both get piped to your own stdout and stderr by default.

We can inspect the returned object and see the command that was given and the returncode:

>>> completed_process.args
'python --version'
>>> completed_process.returncode
0

Capturing output

If you want to capture the output, you can pass subprocess.PIPE to the appropriate stderr or stdout:

>>> cp = subprocess.run('python --version', 
                        stderr=subprocess.PIPE, 
                        stdout=subprocess.PIPE)
>>> cp.stderr
b'Python 3.6.1 :: Anaconda 4.4.0 (64-bit)\r\n'
>>> cp.stdout
b''

(I find it interesting and slightly counterintuitive that the version info gets put to stderr instead of stdout.)

Pass a command list

One might easily move from manually providing a command string (like the question suggests) to providing a string built programmatically. Don't build strings programmatically. This is a potential security issue. It's better to assume you don't trust the input.

>>> import textwrap
>>> args = ['python', textwrap.__file__]
>>> cp = subprocess.run(args, stdout=subprocess.PIPE)
>>> cp.stdout
b'Hello there.\r\n  This is indented.\r\n'

Note, only args should be passed positionally.

Full Signature

Here's the actual signature in the source and as shown by help(run):

def run(*popenargs, input=None, timeout=None, check=False, **kwargs):

The popenargs and kwargs are given to the Popen constructor. input can be a string of bytes (or unicode, if you specify encoding or universal_newlines=True) that will be piped to the subprocess's stdin.

The documentation describes timeout= and check=True better than I could:

The timeout argument is passed to Popen.communicate(). If the timeout expires, the child process will be killed and waited for. The TimeoutExpired exception will be re-raised after the child process has terminated.

If check is true, and the process exits with a non-zero exit code, a CalledProcessError exception will be raised. Attributes of that exception hold the arguments, the exit code, and stdout and stderr if they were captured.

and this example for check=True is better than one I could come up with:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Expanded Signature

Here's an expanded signature, as given in the documentation:

subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, 
shell=False, cwd=None, timeout=None, check=False, encoding=None, 
errors=None)

Note that this indicates that only the args list should be passed positionally. So pass the remaining arguments as keyword arguments.

Popen

When would you use Popen instead? I would struggle to find a use case based on the arguments alone. Direct usage of Popen would, however, give you access to its methods, including poll, send_signal, terminate, and wait.

Here's the Popen signature as given in the source. I think this is the most precise encapsulation of the information (as opposed to help(Popen)):

def __init__(self, args, bufsize=-1, executable=None,
             stdin=None, stdout=None, stderr=None,
             preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS,
             shell=False, cwd=None, env=None, universal_newlines=False,
             startupinfo=None, creationflags=0,
             restore_signals=True, start_new_session=False,
             pass_fds=(), *, encoding=None, errors=None):

But more informative is the Popen documentation:

subprocess.Popen(args, bufsize=-1, executable=None, stdin=None,
                 stdout=None, stderr=None, preexec_fn=None, close_fds=True,
                 shell=False, cwd=None, env=None, universal_newlines=False,
                 startupinfo=None, creationflags=0, restore_signals=True,
                 start_new_session=False, pass_fds=(), *, encoding=None, errors=None)

Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function. The arguments to Popen are as follows.

Understanding the remaining documentation on Popen will be left as an exercise for the reader.
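As a starting point for that exercise, a minimal sketch of the manual Popen lifecycle (launching, polling, and terminating; the sleep command is just a stand-in for real work):

import subprocess
import time

proc = subprocess.Popen(["sleep", "10"])
time.sleep(1)
if proc.poll() is None:   # None means it is still running
    proc.terminate()      # ask it to stop (SIGTERM on POSIX)
proc.wait()               # reap it and populate returncode
print(proc.returncode)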


If you need the output from the command you are calling, then you can use subprocess.check_output (Python 2.7+).

>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'

Also note the shell parameter.

If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory. However, note that Python itself offers implementations of many shell-like features (in particular, glob, fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and shutil).
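To illustrate replacing one shell feature with the stdlib modules named in that note, here is a sketch that expands a wildcard with glob instead of the shell (the *.py pattern is just an example):

import glob
import subprocess

# Instead of: subprocess.check_output("ls -l *.py", shell=True)
files = glob.glob("*.py")   # wildcard expansion done in Python, not the shell
output = subprocess.check_output(["ls", "-l"] + files)
print(output.decode())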






Python 3.7+ (capture_output was added in Python 3.7)

import subprocess

p = subprocess.run(["ls", "-ltr"], capture_output=True)
print(p.stdout.decode(), p.stderr.decode())



Check the "pexpect" Python library, too.

It allows for interactive controlling of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:

child = pexpect.spawn('ftp 192.168.0.24')

child.expect('(?i)name .*: ')

child.sendline('anonymous')

child.expect('(?i)password')

There are a lot of different ways to run external commands in Python, and all of them have their own plus sides and drawbacks.

My colleagues and I have been writing Python system administration tools, so we need to run a lot of external commands, and sometimes you want them to block or run asynchronously, time out, update every second, etc.

There are also different ways of handling the return code and errors, and you might want to parse the output, and provide new input (in an expect kind of style). Or you will need to redirect stdin, stdout and stderr to run in a different tty (e.g., when using screen).

So you will probably have to write a lot of wrappers around the external command. So here is a Python module which we have written which can handle almost anything you would want, and if not, it's very flexible so you can easily extend it:

https://github.com/hpcugent/vsc-base/blob/master/lib/vsc/utils/run.py

Update: It doesn't work standalone and requires some of our other tools. It has gained a lot of specialised functionality over the years, so it might not be a drop-in replacement for you, but it can give you a lot of information on how the internals of running commands from Python work and ideas on how to handle certain situations.


A simple way is to use the os module:

import os
os.system('ls')

Alternatively, you can also use the subprocess module:

import subprocess
subprocess.check_call('ls')

If you want the result to be stored in a variable try:

import subprocess
r = subprocess.check_output('ls')
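If you'd rather get a string than bytes back, a one-line sketch (Python 3.7+, where the text keyword is available):

import subprocess

r = subprocess.check_output(['ls'], text=True)  # returns str instead of bytes
print(r)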



I wrote a small library to help with this use case:

https://pypi.org/project/citizenshell/

It can be installed using

pip install citizenshell

And then used as follows:

from citizenshell import sh
assert sh("echo Hello World") == "Hello World"

You can separate stdout from stderr and extract the exit code as follows:

result = sh(">&2 echo error && echo output && exit 13")
assert result.stdout() == ["output"]
assert result.stderr() == ["error"]
assert result.exit_code() == 13

And the cool thing is that you don't have to wait for the underlying shell to exit before starting processing the output:

for line in sh("for i in 1 2 3 4; do echo -n 'It is '; date +%H:%M:%S; sleep 1; done", wait=False):
    print ">>>", line + "!"

will print the lines as they become available, thanks to wait=False:

>>> It is 14:24:52!
>>> It is 14:24:53!
>>> It is 14:24:54!
>>> It is 14:24:55!

More examples can be found at https://github.com/meuter/citizenshell


Some hints on detaching the child process from the calling one (starting the child process in background).

Suppose you want to start a long task from a CGI script. That is, the child process should live longer than the CGI script execution process.

The classical example from the subprocess module documentation is:

import subprocess
import sys

# Some code here

pid = subprocess.Popen([sys.executable, "longtask.py"]) # Call subprocess

# Some more code here

The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.

My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows API. If you happen to have installed pywin32, you can import the flag from the win32process module, otherwise you should define it yourself:

DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid

UPD 2015.10.27: @eryksun notes in a comment below that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010).

On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:

pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

I have not checked the code on other platforms and do not know the reasons of the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling on starting background processes in Python does not shed any light yet.
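On POSIX systems (such as the FreeBSD case above), the usual modern way to detach is to give the child its own session and its own standard streams; a minimal sketch, assuming Python 3.3+ (subprocess.DEVNULL arrived in 3.3, start_new_session in 3.2):

import subprocess
import sys

pid = subprocess.Popen(
    [sys.executable, "longtask.py"],
    stdin=subprocess.DEVNULL,    # do not share the parent's streams
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,      # setsid(): the child outlives the parent
).pid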


I quite like shell_command for its simplicity. It's built on top of the subprocess module.

Here's an example from the documentation:

>>> from shell_command import shell_call
>>> shell_call("ls *.py")
setup.py  shell_command.py  test_shell_command.py
0
>>> shell_call("ls -l *.py")
-rw-r--r-- 1 ncoghlan ncoghlan  391 2011-12-11 12:07 setup.py
-rw-r--r-- 1 ncoghlan ncoghlan 7855 2011-12-11 16:16 shell_command.py
-rwxr-xr-x 1 ncoghlan ncoghlan 8463 2011-12-11 16:17 test_shell_command.py
0





Just to add to the discussion, if you include using a Python console, you can call external commands from IPython. While in the IPython prompt, you can call shell commands by prefixing '!'. You can also combine Python code with the shell, and assign the output of shell scripts to Python variables.

For instance:

In [9]: mylist = !ls

In [10]: mylist
Out[10]:
['file1',
 'file2',
 'file3',]






Often, I use the following function for external commands, and it is especially handy for long-running processes. The method below tails process output while it is running, returns the output, and raises an exception if the process fails.

It detects that the process is done by using the poll() method on the process.

import shlex
import subprocess
import sys

def exec_long_running_proc(command, args):
    # shlex.quote handles spaces and other shell metacharacters safely
    cmd = "{} {}".format(command, " ".join(shlex.quote(str(arg)) for arg in args))
    print(cmd)
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    # Poll process for new output until finished, accumulating it as we go
    output = ''
    while True:
        nextline = process.stdout.readline().decode('UTF-8')
        if nextline == '' and process.poll() is not None:
            break
        output += nextline
        sys.stdout.write(nextline)
        sys.stdout.flush()

    exitCode = process.returncode

    if exitCode == 0:
        return output
    else:
        raise Exception(command, exitCode, output)

You can invoke it like this:

exec_long_running_proc(command = "hive", args=["-f", hql_path])



Sultan is a recent-ish package meant for this purpose. It provides some niceties around managing user privileges and adds helpful error messages.

from sultan.api import Sultan

with Sultan.load(sudo=True, hostname="myserver.com") as sultan:
  sultan.yum("install -y tree").run()





There is another difference here which is not mentioned previously.

subprocess.Popen executes the <command> as a subprocess. In my case, I need to execute file <a> which needs to communicate with another program, <b>.

I tried subprocess, and execution was successful. However <b> could not communicate with <a>. Everything is normal when I run both from the terminal.

One more: (NOTE: kwrite behaves differently from other applications. If you try the below with Firefox, the results will not be the same.)

If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried instead os.system("konsole -e kwrite"). This time the program continued to flow, but kwrite became a subprocess of the console.

I could not find a way to run kwrite without it being a subprocess (i.e., in the system monitor it should appear at the leftmost edge of the tree).




With the standard library

Use the subprocess module (Python 3):

import subprocess
subprocess.run(['ls', '-l'])

It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.

Note on Python version: If you are still using Python 2, subprocess.call works in a similar way.

ProTip: shlex.split can help you parse the command for run, call, and other subprocess functions in case you don't want (or can't!) provide them in the form of lists:

import shlex
import subprocess
subprocess.run(shlex.split('ls -l'))

With external dependencies

If you do not mind external dependencies, use plumbum:

from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())

It is the best subprocess wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum.

Another popular library is sh:

from sh import ifconfig
print(ifconfig('wlan0'))

However, sh dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh.








Use subprocess.call:

from subprocess import call

# Using list
call(["echo", "Hello", "world"])

# Single string argument varies across platforms so better split it
call("echo Hello world".split(" "))


Here's a summary of the ways to call external programs and the advantages and disadvantages of each:

  1. os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

    os.system("some_command < input_file | another_command > output_file")  
    

However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation.

  2. stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. See the documentation.

  3. The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:

    print subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read()
    

    instead of:

    print os.popen("echo Hello World").read()
    

    but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.

  4. The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:

    return_code = subprocess.call("echo Hello World", shell=True)  
    

    See the documentation.

  5. If you're on Python 3.5 or later, you can use the new subprocess.run function, which is a lot like the above but even more flexible and returns a CompletedProcess object when the command finishes executing.

  6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

The subprocess module should probably be what you use.



There are lots of different libraries which allow you to call external commands with Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l (list all files). If you want to find out more about any of the libraries, I've listed and linked the documentation for each of them.

Hopefully this will help you make a decision on which library to use :)

subprocess

Subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). Subprocess is the default choice for running commands, but sometimes other modules are better.

subprocess.run(["ls", "-l"]) # Run command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE) # This will run the command and return any output
subprocess.run(shlex.split("ls -l")) # You can also use the shlex library to split the command

os

os is used for "operating system dependent functionality". It can also be used to call external commands with os.system and os.popen (Note: There is also subprocess.Popen). os will always run the shell and is a simple alternative for people who don't need to, or don't know how to, use subprocess.run.

os.system("ls -l") # run command
os.popen("ls -l").read() # This will run the command and return any output

sh

sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times.

sh.ls("-l") # Run command normally
ls_cmd = sh.Command("ls") # Save command as a variable
ls_cmd() # Run command as if it were a function

plumbum

plumbum is a library for "script-like" Python programs. You can call programs like functions as in sh. Plumbum is useful if you want to run a pipeline without the shell.

from plumbum import local

ls_cmd = local["ls"]["-l"]  # Bind the command and its argument
ls_cmd()  # Run command

pexpect

pexpect lets you spawn child applications, control them and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix.

pexpect.run("ls -l") # Run command as normal
child = pexpect.spawn('scp foo [email protected]:.') # Spawns child application
child.expect('Password:') # When this is the output
child.sendline('mypassword')

fabric

fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is a simple alternative for running commands in a secure shell (SSH).

fabric.operations.local('ls -l') # Run command as normal
fabric.operations.local('ls -l', capture = True) # Run command and receive output

envoy

envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess module.

r = envoy.run("ls -l") # Run command
r.std_out # get output

commands

commands contains wrapper functions for os.popen, but it has been removed from Python 3 since subprocess is a better alternative.

The edit was based on J.F. Sebastian's comment.







Under Linux, in case you would like to call an external command that will execute independently (i.e., keep running after the Python script terminates), you can use a simple queue such as task spooler or the at command (see the sketch after the notes below).

An example with task spooler:

import os
os.system('ts <your-command>')

Notes about task spooler (ts):

  1. You could set the number of concurrent processes to be run ("slots") with:

    ts -S <number-of-slots>

  2. Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done.
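The at command mentioned above works similarly; a minimal sketch that submits a job to at through subprocess (this assumes the at daemon is installed and running, and the logged command is just an example):

import subprocess

# "at now" reads the command to run from stdin and schedules it immediately,
# detached from this Python process.
subprocess.run(["at", "now"], input=b"echo hello > /tmp/at_test.log\n")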

