Yes, this program has most likely used less than a millisecond. Try microsecond resolution with struct timeval, e.g.:
#include <stdio.h>
#include <sys/time.h>

struct timeval stop, start;
gettimeofday(&start, NULL);
// do stuff
gettimeofday(&stop, NULL);
// tv_sec holds whole seconds, tv_usec the microseconds within the current second
printf("took %lu us\n", (unsigned long)((stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec));
You can then read the difference (in microseconds) from stop.tv_usec - start.tv_usec. Note that this alone only works for sub-second intervals, because tv_usec wraps back to zero each second: from a start at 1.9 s to a stop at 2.1 s, the raw tv_usec difference is -800000. For the general case, combine tv_sec and tv_usec as in the printf above.
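For completeness, here is a minimal, self-contained sketch of the same measurement; the usleep() call is just a placeholder workload I picked for illustration, not part of the snippet above:

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void) {
    struct timeval stop, start;
    gettimeofday(&start, NULL);
    usleep(1500);  // placeholder workload: sleep roughly 1.5 ms
    gettimeofday(&stop, NULL);
    // combine seconds and microseconds into a single microsecond delta
    long delta_us = (stop.tv_sec - start.tv_sec) * 1000000L
                  + (stop.tv_usec - start.tv_usec);
    printf("took %ld us\n", delta_us);
    return 0;
}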
Edit 2016-08-19
A more appropriate approach on systems with clock_gettime support would be:
#include <stdint.h>
#include <time.h>

struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC_RAW, &start);
// do stuff
clock_gettime(CLOCK_MONOTONIC_RAW, &end);
uint64_t delta_us = (end.tv_sec - start.tv_sec) * 1000000 + (end.tv_nsec - start.tv_nsec) / 1000;
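Again as a minimal, compilable sketch; the nanosleep() workload is my own placeholder, not part of the original snippet:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC_RAW, &start);
    struct timespec req = { 0, 1500000 };  // placeholder workload: sleep ~1.5 ms
    nanosleep(&req, NULL);
    clock_gettime(CLOCK_MONOTONIC_RAW, &end);
    // seconds and nanoseconds combined into a microsecond delta
    uint64_t delta_us = (end.tv_sec - start.tv_sec) * 1000000
                      + (end.tv_nsec - start.tv_nsec) / 1000;
    printf("took %llu us\n", (unsigned long long)delta_us);
    return 0;
}

Note that CLOCK_MONOTONIC_RAW is Linux-specific; CLOCK_MONOTONIC is the portable POSIX choice, and on glibc older than 2.17 you also need to link with -lrt.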