[python] High-precision clock in Python

Is there a way to measure time with high precision in Python --- more precise than one second? I doubt there is a cross-platform way to do that; I'm interested in high-precision time on Unix, specifically Solaris running on a Sun SPARC machine.

timeit seems capable of high-precision time measurement, but rather than measuring how long a code snippet takes, I'd like to access the time values directly.

This question is related to: python time


David's post was attempting to show what the clock resolution is on Windows. I was confused by his output, so I wrote some code showing that time.time() on my Windows 8 x64 laptop has a resolution of 1 ms:

import time

# measure the smallest time delta by spinning until the time changes
def measure():
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    return (t0, t1, t1 - t0)

samples = [measure() for i in range(10)]

for s in samples:
    print(s)

Which outputs:

(1390455900.085, 1390455900.086, 0.0009999275207519531)
(1390455900.086, 1390455900.087, 0.0009999275207519531)
(1390455900.087, 1390455900.088, 0.0010001659393310547)
(1390455900.088, 1390455900.089, 0.0009999275207519531)
(1390455900.089, 1390455900.09, 0.0009999275207519531)
(1390455900.09, 1390455900.091, 0.0010001659393310547)
(1390455900.091, 1390455900.092, 0.0009999275207519531)
(1390455900.092, 1390455900.093, 0.0009999275207519531)
(1390455900.093, 1390455900.094, 0.0010001659393310547)
(1390455900.094, 1390455900.095, 0.0009999275207519531)

And a way to average the delta over 1000 samples:

sum(measure()[2] for i in range(1000)) / 1000.0

Which, on two consecutive runs, output:

0.001
0.0010009999275207519

So time.time() on my Windows 8 x64 laptop has a resolution of 1 ms.

A similar run using time.clock() shows a resolution of 0.4 microseconds:

def measure_clock():
    t0 = time.clock()
    t1 = time.clock()
    while t1 == t0:
        t1 = time.clock()
    return (t0, t1, t1-t0)

sum(measure_clock()[2] for i in range(1000000)) / 1000000.0

Returns:

4.3571334791658954e-07

Which is ~0.4e-06 seconds, i.e. roughly 0.4 microseconds.

An interesting thing about time.clock() on Windows is that it returns the time since the function was first called, so if you want microsecond-resolution wall time you can do something like this:

class HighPrecisionWallTime:
    def __init__(self):
        # anchor: wall time and high-resolution clock at construction
        self._wall_time_0 = time.time()
        self._clock_0 = time.clock()

    def sample(self):
        # elapsed high-resolution time since construction
        dc = time.clock() - self._clock_0
        return self._wall_time_0 + dc

(This would probably drift after a while, but you could correct it occasionally; for example, resynchronizing whenever dc > 3600 would correct it every hour, as sketched below.)
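
A minimal sketch of that resynchronization idea (the class name and the one-hour threshold are mine, not from the original answer; note also that time.clock() was removed in Python 3.8, where time.perf_counter() plays the same role):

import time

class ResyncingWallTime:
    RESYNC_AFTER = 3600.0  # seconds; re-anchor roughly once per hour

    def __init__(self):
        self._resync()

    def _resync(self):
        # re-anchor wall time to limit accumulated drift
        self._wall_time_0 = time.time()
        self._clock_0 = time.clock()

    def sample(self):
        dc = time.clock() - self._clock_0
        if dc > self.RESYNC_AFTER:
            self._resync()
            dc = time.clock() - self._clock_0
        return self._wall_time_0 + dc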


You can also use time.clock(). It counts the time used by the process on Unix and the time since the first call to it on Windows. It's more precise than time.time(). (Note that time.clock() was deprecated in Python 3.3 and removed in Python 3.8; time.perf_counter() and time.process_time() replace it.)

It's the function usually used to measure performance.

Just call

import time
t_ = time.clock()
# Your code here
print('Time in function', time.clock() - t_)

EDIT: Oops, I misread the question; you want to know the exact time, not the time spent...


For those stuck on Windows (Server 2012 or later, or Windows 8 or later) and Python 2.7:

import ctypes

class FILETIME(ctypes.Structure):
    # Windows FILETIME: a 64-bit count of 100-nanosecond intervals since
    # 01/01/1601 (UTC), split into two 32-bit halves
    _fields_ = [("dwLowDateTime", ctypes.c_uint),
                ("dwHighDateTime", ctypes.c_uint)]

def time():
    """Accurate version of time.time() for Windows: returns UTC time in terms
    of seconds since 01/01/1601."""
    file_time = FILETIME()
    ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(file_time))
    # combine the two halves and convert 100 ns units to seconds
    return (file_time.dwLowDateTime + (file_time.dwHighDateTime << 32)) / 1.0e7

Reference: the GetSystemTimePreciseAsFileTime function.
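
A quick sanity check of the snippet above (assuming it is in the same script, so time() here is the ctypes version; 11644473600 is the number of seconds between 01/01/1601 and 01/01/1970, the same offset used in a later answer):

import time as stdlib_time

# shift the 1601 epoch to the Unix epoch so it's comparable to time.time()
unix_seconds = time() - 11644473600
print(unix_seconds)        # high-precision Unix timestamp
print(stdlib_time.time())  # standard-resolution timestamp; should agree closely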


Python 3.7 introduced six new time functions with nanosecond resolution. For example, instead of time.time() you can use time.time_ns() to avoid floating-point precision issues:

import time
print(time.time())
# 1522915698.3436284
print(time.time_ns())
# 1522915698343660458

These six functions are described in PEP 564:

  • time.clock_gettime_ns(clock_id)
  • time.clock_settime_ns(clock_id, time: int)
  • time.monotonic_ns()
  • time.perf_counter_ns()
  • time.process_time_ns()
  • time.time_ns()

These functions are similar to the versions without the _ns suffix, but return the time as a number of nanoseconds (a Python int).


Python tries hard to use the most precise time function for your platform to implement time.time():

/* Implement floattime() for various platforms */

static double
floattime(void)
{
    /* There are three ways to get the time:
      (1) gettimeofday() -- resolution in microseconds
      (2) ftime() -- resolution in milliseconds
      (3) time() -- resolution in seconds
      In all cases the return value is a float in seconds.
      Since on some systems (e.g. SCO ODT 3.0) gettimeofday() may
      fail, so we fall back on ftime() or time().
      Note: clock resolution does not imply clock accuracy! */
#ifdef HAVE_GETTIMEOFDAY
    {
        struct timeval t;
#ifdef GETTIMEOFDAY_NO_TZ
        if (gettimeofday(&t) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#else /* !GETTIMEOFDAY_NO_TZ */
        if (gettimeofday(&t, (struct timezone *)NULL) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#endif /* !GETTIMEOFDAY_NO_TZ */
    }

#endif /* !HAVE_GETTIMEOFDAY */
    {
#if defined(HAVE_FTIME)
        struct timeb t;
        ftime(&t);
        return (double)t.time + (double)t.millitm * (double)0.001;
#else /* !HAVE_FTIME */
        time_t secs;
        time(&secs);
        return (double)secs;
#endif /* !HAVE_FTIME */
    }
}

( from http://svn.python.org/view/python/trunk/Modules/timemodule.c?revision=81756&view=markup )
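
That listing is from an old revision of CPython; on Python 3.3 and later you can ask the interpreter directly which clock implementation and resolution it ended up with:

import time

info = time.get_clock_info('time')  # also: 'monotonic', 'perf_counter', 'process_time'
print(info.implementation)  # e.g. 'clock_gettime(CLOCK_REALTIME)' on Linux
print(info.resolution)      # advertised resolution in seconds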


If Python 3 is an option, you have two choices:

  • time.perf_counter, which always uses the most accurate clock on your platform. It does include time spent outside of the process (e.g. sleeping).
  • time.process_time, which returns CPU time. It does NOT include time spent outside of the process.

The difference between the two can be shown with:

from time import (
    process_time,
    perf_counter,
    sleep,
)

print(process_time())
sleep(1)
print(process_time())

print(perf_counter())
sleep(1)
print(perf_counter())

Which outputs:

0.03125
0.03125
2.560001310720671e-07
1.0005455362793145

The original question specifically asked for Unix, but multiple answers have touched on Windows, and as a result there is misleading information about it. The default timer resolution on Windows is 15.6 ms; you can verify that here.

Using a slightly modified script from cod3monk3y, I can show that the Windows timer resolution is ~15 milliseconds by default. I'm using a tool (available here) to modify the resolution.

Script:

import time

# measure the smallest time delta by spinning until the time changes
def measure():
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    return t1-t0

samples = [measure() for i in range(30)]

for s in samples:
    print(f'time delta: {s:.4f} seconds') 

[Screenshot: script output showing time deltas of ~0.0156 seconds with the default timer resolution]

[Screenshot: script output showing smaller time deltas after modifying the timer resolution with the tool]

These results were gathered on Windows 10 Pro 64-bit running Python 3.7 64-bit.
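
If an external tool is inconvenient, the same effect can be requested from within Python via the winmm timeBeginPeriod API (Windows-only; a minimal sketch, not from the original answer, and note that raising the timer resolution increases power consumption system-wide):

import ctypes

winmm = ctypes.WinDLL('winmm')
winmm.timeBeginPeriod(1)  # request 1 ms system timer resolution
try:
    pass  # ... timing-sensitive code here ...
finally:
    winmm.timeEndPeriod(1)  # each timeBeginPeriod call must be matched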


Here is a Python 3 solution for Windows, building on the answer posted above by CyberSnoopy (using GetSystemTimePreciseAsFileTime). We borrow some code from jfs:

Python datetime.utcnow() returning incorrect datetime

and get a precise timestamp (Unix time) in microseconds:

#! python3
import ctypes.wintypes

def utcnow_microseconds():
    system_time = ctypes.wintypes.FILETIME()
    #system call used by time.time()
    #ctypes.windll.kernel32.GetSystemTimeAsFileTime(ctypes.byref(system_time))
    #getting high precision:
    ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(system_time))
    large = (system_time.dwHighDateTime << 32) + system_time.dwLowDateTime
    # convert 100 ns units to microseconds, then shift the 1601 epoch to 1970
    return large // 10 - 11644473600000000

for ii in range(5):
    print(utcnow_microseconds()*1e-6)

References
https://docs.microsoft.com/en-us/windows/win32/sysinfo/time-functions
https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimepreciseasfiletime
https://support.microsoft.com/en-us/help/167296/how-to-convert-a-unix-time-t-to-a-win32-filetime-or-systemtime
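
If you need a datetime object rather than a raw number, the microsecond count converts exactly; going through datetime.timedelta avoids the float rounding that utcfromtimestamp(us * 1e-6) would reintroduce (a small usage sketch, not from the original answer):

import datetime

us = utcnow_microseconds()
dt = datetime.datetime(1970, 1, 1) + datetime.timedelta(microseconds=us)
print(dt)  # UTC, with genuine microsecond precision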


time.clock() shows 13 decimal places on Windows but only two on Linux. time.time() shows 17 decimal places on Linux and 16 on Windows, but the actual precision is different.

I don't agree with the documentation that time.clock() should be used for benchmarking on Unix/Linux. It is not precise enough there, so which timer to use depends on the operating system.

On Linux, the time resolution is high in time.time():

>>> time.time(), time.time()
(1281384913.4374139, 1281384913.4374161)

On Windows, however, time.time() seems to return the same value for repeated calls within one clock tick:

>>> time.time()-int(time.time()), time.time()-int(time.time()), time.time()-time.time()
(0.9570000171661377, 0.9570000171661377, 0.0)

Even if I write the calls on different lines in Windows, it still returns the same value, so the real precision is lower.

So for serious measurements, a platform check (import platform; platform.system()) has to be done in order to determine whether to use time.clock() or time.time().
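
A minimal sketch of that check (Python 2-era advice; time.clock() was removed in Python 3.8, where time.perf_counter() covers both platforms):

import platform
import time

if platform.system() == 'Windows':
    get_time = time.clock  # high resolution on Windows, relative to the first call
else:
    get_time = time.time   # high resolution on Unix/Linux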

(Tested on Windows 7 and Ubuntu 9.10 with python 2.6 and 3.1)


The comment left by tiho on Mar 27 '14 at 17:21 deserves to be its own answer:

In order to avoid platform-specific code, use timeit.default_timer()
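
For example (since Python 3.3, timeit.default_timer is simply time.perf_counter on all platforms; on older versions it picked time.clock on Windows and time.time elsewhere):

import timeit

start = timeit.default_timer()
# ... code to measure ...
elapsed = timeit.default_timer() - start
print(f'{elapsed:.6f} seconds')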


I observed that the resolution of time.time() is different between Windows 10 Professional and Education versions.

On a Windows 10 Professional machine, the resolution is 1 ms. On a Windows 10 Education machine, the resolution is 16 ms.

Fortunately, there's a tool that increases Python's time resolution in Windows: https://vvvv.org/contribution/windows-system-timer-tool

With this tool, I was able to achieve 1 ms resolution regardless of the Windows version. You will need to keep it running while executing your Python code.


On the same Windows 10 system, two distinct measurement approaches give mean deltas that differ by approximately 500 microseconds (see the means at the end). If you care about sub-millisecond precision, check my code below.

The code is based on code from users cod3monk3y and Kevin S.

Python: 3.7.3 (default, date, time) [MSC v.1915 64 bit (AMD64)]

import sys
import time

def measure1(mean):
    # sample time.time() every ~1 ms and accumulate the deltas
    for i in range(1, my_range + 1):
        x = time.time()

        td = x - samples1[i-1][2]
        if i - 1 == 0:
            td = 0  # no previous sample to diff against
        td = f'{td:.6f}'
        samples1.append((i, td, x))
        mean += float(td)
        print(mean)
        sys.stdout.flush()
        time.sleep(0.001)

    mean = mean / my_range

    return mean

def measure2(nr):
    # spin until time.time() changes, as in cod3monk3y's answer
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    td = t1 - t0
    td = f'{td:.6f}'
    return (nr, td, t1, t0)

samples1 = [(0, 0, 0)]
my_range = 10
mean1    = 0.0
mean2    = 0.0

mean1 = measure1(mean1)

for i in samples1: print (i)

print ('...\n\n')

samples2 = [measure2(i) for i in range(11)]

for s in samples2:
    #print(f'time delta: {s:.4f} seconds')
    mean2 += float(s[1])
    print (s)
    
mean2 = mean2/my_range

print ('\nMean1 : ' f'{mean1:.6f}')
print ('Mean2 : ' f'{mean2:.6f}')

The measure1 results:

nr, td, timestamp
(0, 0, 0)
(1, '0.000000', 1562929696.617988)
(2, '0.002000', 1562929696.6199884)
(3, '0.001001', 1562929696.620989)
(4, '0.001001', 1562929696.62199)
(5, '0.001001', 1562929696.6229906)
(6, '0.001001', 1562929696.6239917)
(7, '0.001001', 1562929696.6249924)
(8, '0.001000', 1562929696.6259928)
(9, '0.001001', 1562929696.6269937)
(10, '0.001001', 1562929696.6279945)
...

The measure2 results:

nr, td, t1, t0
(0, '0.000500', 1562929696.6294951, 1562929696.6289947)
(1, '0.000501', 1562929696.6299958, 1562929696.6294951)
(2, '0.000500', 1562929696.6304958, 1562929696.6299958)
(3, '0.000500', 1562929696.6309962, 1562929696.6304958)
(4, '0.000500', 1562929696.6314962, 1562929696.6309962)
(5, '0.000500', 1562929696.6319966, 1562929696.6314962)
(6, '0.000500', 1562929696.632497, 1562929696.6319966)
(7, '0.000500', 1562929696.6329975, 1562929696.632497)
(8, '0.000500', 1562929696.633498, 1562929696.6329975)
(9, '0.000500', 1562929696.6339984, 1562929696.633498)
(10, '0.000500', 1562929696.6344984, 1562929696.6339984)

End result:

Mean1 : 0.001001 # (measure1 function)

Mean2 : 0.000550 # (measure2 function)


This runs code at a fixed interval without cumulative drift: each sleep is computed from the initial timestamp rather than from the previous iteration, so scheduling errors do not accumulate. (The snippet is a method from a larger class; self.send_request is assumed to be defined elsewhere.)

import threading
import time

def start(self):
    sec_arg = 10.0
    cptr = 0
    time_start = time.time()
    time_init = time.time()
    while True:
        cptr += 1
        time_start = time.time()
        # sleep until the next multiple of sec_arg after time_init
        time.sleep((time_init + (sec_arg * cptr)) - time_start)

        # AND YOUR CODE .......
        t00 = threading.Thread(name='thread_request', target=self.send_request, args=([]))
        t00.start()