Get a timestamp in C in microseconds?

How do I get a microseconds timestamp in C?

I'm trying to do:

struct timeval tv;
gettimeofday(&tv,NULL);
return tv.tv_usec;

But this returns a nonsense value: if I get two timestamps, the second one can be smaller or bigger than the first (the second one should always be bigger). Is it possible to convert the magic integer returned by gettimeofday into a normal number that can actually be worked with?

Tags: c, time

Answers:


First, be aware of the range of microseconds: 000000 to 999999 (1,000,000 microseconds equals 1 second). tv.tv_usec holds a plain number from 0 to 999999, not a zero-padded string from 000000 to 999999, so if you print it next to the seconds you might get 2.1 seconds instead of 2.000001 seconds, because a bare 000001 is just 1. It's better to insert padding:

if (tv.tv_usec < 10)        /* 1 digit: pad with five zeros */
{
    printf("00000");
}
else if (tv.tv_usec < 100)  /* 2 digits: pad with four zeros */
{
    printf("0000");
}

and so on...
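A simpler way to get the same padding is printf's zero-fill width specifier; a minimal sketch (standard C, assuming a POSIX gettimeofday):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* %06ld zero-pads tv_usec to six digits, so 1 prints as 000001 */
    printf("%ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}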


You have two choices for getting a microsecond timestamp. The first (and best) choice is to use the timeval type directly:

#include <sys/time.h>

struct timeval GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv;
}

The second, and for me less desirable, choice is to build a uint64_t out of a timeval:

#include <stdint.h>
#include <sys/time.h>

uint64_t GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* Cast before multiplying so the arithmetic is done in 64 bits */
    return tv.tv_sec * (uint64_t)1000000 + tv.tv_usec;
}

struct timeval contains two components: the seconds and the microseconds. A timestamp with microsecond precision is represented as seconds since the epoch, stored in the tv_sec field, plus the fractional microseconds in tv_usec. Thus you cannot just ignore tv_sec and expect sensible results.

If you use Linux or *BSD, you can use timersub() to subtract two struct timeval values, which might be what you want.
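For instance, a minimal sketch of timing a stretch of code with timersub() (a macro from <sys/time.h> on Linux/*BSD, not ISO C):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end, diff;

    gettimeofday(&start, NULL);
    /* ... code to be timed ... */
    gettimeofday(&end, NULL);

    /* timersub(a, b, res) computes res = a - b, normalizing tv_usec */
    timersub(&end, &start, &diff);
    printf("elapsed: %ld.%06ld s\n", (long)diff.tv_sec, (long)diff.tv_usec);
    return 0;
}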


But this returns a nonsense value: if I get two timestamps, the second one can be smaller or bigger than the first (the second one should always be bigger).

What makes you think that? The value is probably OK. It’s the same situation as with seconds and minutes – when you measure time in minutes and seconds, the number of seconds rolls over to zero when it gets to sixty.

To convert the returned value into a “linear” number you could multiply the number of seconds by 1,000,000 and add the microseconds. But if I count correctly, one year is about 1e6*60*60*24*360 µsec, and that means you’ll need more than 32 bits to store the result:

$ perl -E '$_=1e6*60*60*24*360; say int log($_)/log(2)'
44

That’s probably one of the reasons to split the original returned value into two pieces.
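To make that concrete, here is a minimal sketch that folds both fields into one 64-bit microsecond count and compares two readings (keep in mind gettimeofday() reports wall-clock time, so it can still jump backwards if the system clock is adjusted):

#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>

static uint64_t now_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000 + tv.tv_usec;
}

int main(void)
{
    uint64_t t1 = now_usec();
    uint64_t t2 = now_usec();
    /* With tv_sec included, t2 >= t1 barring clock adjustments */
    printf("diff: %llu us\n", (unsigned long long)(t2 - t1));
    return 0;
}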


timespec_get from C11 returns a timestamp with up to nanosecond precision, rounded to the resolution of the implementation.

#include <time.h>

struct timespec ts;
timespec_get(&ts, TIME_UTC);

where struct timespec is defined as:

struct timespec {
    time_t   tv_sec;        /* seconds */
    long     tv_nsec;       /* nanoseconds */
};

More details here: https://stackoverflow.com/a/36095407/895245
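A complete sketch of turning that into a microsecond timestamp (C11; the division simply truncates nanoseconds to microseconds):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    /* timespec_get returns its base argument (TIME_UTC) on success */
    if (timespec_get(&ts, TIME_UTC) != TIME_UTC)
        return 1;
    printf("%lld.%06ld\n", (long long)ts.tv_sec, ts.tv_nsec / 1000);
    return 0;
}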


Use an unsigned long long (i.e. a 64-bit unsigned integer) to represent the system time:

#include <sys/time.h>

typedef unsigned long long u64;

u64 u64useconds;
struct timeval tv;

gettimeofday(&tv, NULL);
/* 1000000ULL keeps the multiply in 64-bit arithmetic */
u64useconds = (1000000ULL * tv.tv_sec) + tv.tv_usec;