[python] Disable output buffering

Is output buffering enabled by default in Python's interpreter for sys.stdout?

If the answer is positive, what are all the ways to disable it?

Suggestions so far:

  1. Use the -u command line switch
  2. Wrap sys.stdout in an object that flushes after every write
  3. Set PYTHONUNBUFFERED env var
  4. sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

Is there any other way to set some global flag in sys/sys.stdout programmatically during execution?

Tags: python, stdout, buffered

The answer is


Yes, it is.

You can disable it on the commandline with the "-u" switch.

Alternatively, you could call .flush() on sys.stdout after every write (or wrap it with an object that does this automatically).
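For example, a minimal sketch of such a wrapper (the class name Unbuffered is just illustrative, not from the original answer):

import sys

class Unbuffered(object):
    """Pass every write through to the wrapped stream, then flush it."""
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def writelines(self, lines):
        self.stream.writelines(lines)
        self.stream.flush()
    def __getattr__(self, attr):
        # Delegate everything else (fileno, isatty, encoding, ...) to the real stream.
        return getattr(self.stream, attr)

sys.stdout = Unbuffered(sys.stdout)
print("this appears immediately, even when stdout is a pipe")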


The following works in Python 2.6, 2.7, and 3.2:

import os
import sys

buf_arg = 0
if sys.version_info[0] == 3:
    # Python 3 refuses unbuffered (0) text-mode streams, so fall back to
    # line buffering (1); PYTHONUNBUFFERED is set for any child Python processes.
    os.environ['PYTHONUNBUFFERED'] = '1'
    buf_arg = 1
sys.stdout = os.fdopen(sys.stdout.fileno(), 'a+', buf_arg)
sys.stderr = os.fdopen(sys.stderr.fileno(), 'a+', buf_arg)

# reopen stdout file descriptor with write mode
# and 0 as the buffer size (unbuffered)
import io, os, sys
try:
    # Python 3, open as binary, then wrap in a TextIOWrapper with write-through.
    sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
    # If flushing on newlines is sufficient, as of 3.7 you can instead just call:
    # sys.stdout.reconfigure(line_buffering=True)
except TypeError:
    # Python 2
    sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

Credits: "Sebastian", somewhere on the Python mailing list.
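If flushing on newlines is enough, on Python 3.7+ you can skip the reopen entirely and reconfigure the existing stream in place (a small sketch of the reconfigure() call mentioned in the comment above):

import sys
sys.stdout.reconfigure(line_buffering=True)  # flush whenever a newline is written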


You can also use fcntl to change the file flags on the fly (here fd is any open file object, e.g. sys.stdout):

import fcntl, os

fl = fcntl.fcntl(fd.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC  # or os.O_DSYNC (if you don't care about file timestamp updates)
fcntl.fcntl(fd.fileno(), fcntl.F_SETFL, fl)

In Python 3, you can monkey-patch the print function so that it always passes flush=True:

_orig_print = print  # keep a reference to the built-in before shadowing it

def print(*args, **kwargs):
    kwargs.setdefault('flush', True)  # callers can still pass flush=False explicitly
    _orig_print(*args, **kwargs)

As pointed out in a comment, you can simplify this by binding the flush parameter to a value, via functools.partial:

import functools
print = functools.partial(print, flush=True)
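Rebinding print like this only affects the module where you do it. If you want it to apply everywhere (my own addition, Python 3 only), you can patch builtins.print instead:

import builtins
import functools
builtins.print = functools.partial(builtins.print, flush=True)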

You can also run Python under the stdbuf utility (-oL makes stdout line-buffered; use -o0 for fully unbuffered output):

stdbuf -oL python <script>


A variant that works without crashing (at least on win32; Python 2.7, IPython 0.12) when called repeatedly:

import os, sys

def DisOutBuffering():
    if sys.stdout.name == '<stdout>':
        sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

    if sys.stderr.name == '<stderr>':
        sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 0)

import gc, os, subprocess, sys

def disable_stdout_buffering():
    # Appending to gc.garbage is a way to stop an object from being
    # destroyed.  If the old sys.stdout is ever collected, it will
    # close() stdout, which is not good.
    gc.garbage.append(sys.stdout)
    sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

# Then this will give output in the correct order:
disable_stdout_buffering()
print "hello"
subprocess.call(["echo", "bye"])

Without saving the old sys.stdout, disable_stdout_buffering() isn't idempotent, and multiple calls will result in an error like this:

Traceback (most recent call last):
  File "test/buffering.py", line 17, in <module>
    print "hello"
IOError: [Errno 9] Bad file descriptor
close failed: [Errno 9] Bad file descriptor

Another possibility is:

def disable_stdout_buffering():
    # Duplicate the underlying file descriptor, close the Python-level file
    # object (which closes the original descriptor), restore the descriptor
    # with dup2, and reopen it unbuffered.
    fileno = sys.stdout.fileno()
    temp_fd = os.dup(fileno)
    sys.stdout.close()
    os.dup2(temp_fd, fileno)
    os.close(temp_fd)
    sys.stdout = os.fdopen(fileno, "w", 0)

(Appending to gc.garbage is not such a good idea because it's where unfreeable cycles get put, and you might want to check for those.)


Yes, it is enabled by default. You can disable it by using the -u option on the command line when calling python.


I would rather have put my answer in How to flush output of print function? or in Python's print function that flushes the buffer when it's called?, but since they were marked as duplicates of this one (which I do not agree with), I'll answer it here.

Since Python 3.3, print() supports the keyword argument "flush" (see documentation):

print('Hello World!', flush=True)

You can create an unbuffered file and assign this file to sys.stdout.

import sys
myFile = open("a.log", "w", 0)
sys.stdout = myFile

You can't magically change the system-supplied stdout, since it's supplied to your Python program by the OS.
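Note that passing 0 as the buffer size only works for binary-mode files on Python 3, which rejects unbuffered text I/O. A rough Python 3 equivalent, shown here as an illustrative sketch rather than part of the original answer, is a line-buffered text file:

import sys
myFile = open("a.log", "w", buffering=1)  # Python 3: buffering=1 means line-buffered in text mode
sys.stdout = myFile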


This relates to Cristóvão D. Sousa's answer, but I couldn't comment yet.

A straightforward way of using the flush keyword argument of Python 3 in order to always have unbuffered output is:

import functools
print = functools.partial(print, flush=True)

Afterwards, print will always flush the output directly (unless flush=False is given).

Note (a) that this answers the question only partially, since it doesn't redirect all output. But I guess print is the most common way of producing output on stdout/stderr in Python, so these two lines probably cover most of the use cases.

Note (b) that it only works in the module/script where you defined it. This can be an advantage when writing a module, since it doesn't mess with sys.stdout.

Python 2 doesn't provide the flush argument, but you can emulate a Python 3-style print function as described at https://stackoverflow.com/a/27991478/3734258.


It is possible to override just the write method of sys.stdout with one that calls flush. A suggested implementation is below.

def write_flush(args, w=stdout.write):
    w(args)
    stdout.flush()

The default value of the w argument keeps a reference to the original write method. After write_flush is defined, the original write can be overridden:

stdout.write = write_flush

The code assumes that stdout is imported this way: from sys import stdout.


(I've posted a comment, but it got lost somehow. So, again:)

  1. As I noticed, CPython (at least on Linux) behaves differently depending on where the output goes. If it goes to a tty, the output is flushed after each '\n'.
    If it goes to a pipe/process, it is buffered and you can use the flush()-based solutions or the -u option recommended above (see the sketch after this list for a way to handle both cases).

  2. Slightly related to output buffering:
    If you iterate over the lines in the input with

    for line in sys.stdin:
        ...

then the for implementation in CPython will collect the input for a while and then execute the loop body for a batch of input lines. If your script writes output for each input line, this might look like output buffering, but it is actually batching, and therefore none of the flush() etc. techniques will help. Interestingly, you don't get this behaviour in PyPy. To avoid it, you can use

while True:
    line = sys.stdin.readline()
    if not line: break
    ...
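On point 1, here is a small sketch of my own (not from the original comment) that checks where stdout goes and switches to line buffering only when it is not a tty:

import sys

if not sys.stdout.isatty():
    # stdout goes to a pipe or file, so CPython block-buffers it by default;
    # reopen it line-buffered (Python 3) so every line is flushed as it is written.
    sys.stdout = open(sys.stdout.fileno(), 'w', buffering=1, closefd=False)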


One way to get unbuffered output would be to use sys.stderr instead of sys.stdout or to simply call sys.stdout.flush() to explicitly force a write to occur.

You could easily redirect everything printed by doing:

import sys; sys.stdout = sys.stderr
print "Hello World!"

Or to redirect just for a particular print statement:

print >>sys.stderr, "Hello World!"

To reset stdout you can just do:

sys.stdout = sys.__stdout__