Consider the following code:
#include &lt;inttypes.h&gt;
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;time.h&gt;

struct timespec ts;
uint64_t start_time;
uint64_t stop_time;

if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
    abort();
}
start_time = ts.tv_sec * UINT64_C(1000000000) + ts.tv_nsec;

/* some computation... */

if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
    abort();
}
stop_time = ts.tv_sec * UINT64_C(1000000000) + ts.tv_nsec;

printf("%" PRIu64 "\n", (stop_time - start_time + 500000000) / 1000000000);
In the vast majority of cases, the code works as I expected, i.e., it prints the number of seconds the computation took.
Very rarely, however, an anomaly occurs: the program reports a number of seconds like 18446743875, 18446743877, or 18446743962.
I figured this number roughly matched 2^64 nanoseconds: 2^64 ns ≈ 18,446,744,073 s, or about 584 years.
So I got the suspicion that ts.tv_nsec is sometimes equal to −1.
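A quick sanity check of that suspicion, using made-up timespec values (the 600000000 ns start value is arbitrary), does reproduce a number of the same magnitude:
#include &lt;inttypes.h&gt;
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;
#include &lt;time.h&gt;

int main(void) {
    /* Made-up samples: same second, but the stop sample carries
       tv_nsec == -1, which is what I suspect is happening. */
    struct timespec start = { .tv_sec = 1000, .tv_nsec = 600000000 };
    struct timespec stop  = { .tv_sec = 1000, .tv_nsec = -1 };

    uint64_t start_time = start.tv_sec * UINT64_C(1000000000) + start.tv_nsec;
    uint64_t stop_time  = stop.tv_sec * UINT64_C(1000000000) + stop.tv_nsec;

    /* stop_time ends up slightly less than start_time, so the
       unsigned subtraction wraps around modulo 2^64. */
    printf("%" PRIu64 "\n", (stop_time - start_time + 500000000) / 1000000000);
    /* prints 18446744073 */
    return 0;
}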
So my question is: What's wrong with my code? Where and why does adding 2^64 nanoseconds happen?
I don't see anything wrong with your code. I suspect your OS is occasionally delivering an anomalous value for CLOCK_REALTIME, although that surprises me, and I can't quite imagine what the anomaly might be.
I suggest rewriting your code along these lines, keeping both raw struct timespec samples and dumping them whenever the result can't possibly be right:
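struct timespec start_ts, stop_ts;
uint64_t start_time;
uint64_t stop_time;

if (clock_gettime(CLOCK_REALTIME, &start_ts) != 0) {
    abort();
}
start_time = start_ts.tv_sec * UINT64_C(1000000000) + start_ts.tv_nsec;

/* some computation... */

if (clock_gettime(CLOCK_REALTIME, &stop_ts) != 0) {
    abort();
}
stop_time = stop_ts.tv_sec * UINT64_C(1000000000) + stop_ts.tv_nsec;

printf("%" PRIu64 "\n", (stop_time - start_time + 500000000) / 1000000000);

/* An elapsed time can never be negative, so if stop_time is less than
   start_time, one of the two samples must be bad: dump both, raw. */
if (stop_time < start_time) {
    fprintf(stderr, "anomaly: start = %jd s, %ld ns; stop = %jd s, %ld ns\n",
            (intmax_t)start_ts.tv_sec, start_ts.tv_nsec,
            (intmax_t)stop_ts.tv_sec, stop_ts.tv_nsec);
}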
Then, if/when it happens again, you'll have more information to go on.
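In particular, you'll be able to see whether stop_ts.tv_sec ever goes backwards relative to start_ts.tv_sec, and whether either tv_nsec ever falls outside its valid range of 0 to 999999999. Either anomaly can make stop_time smaller than start_time, and since the difference is computed in unsigned 64-bit arithmetic, it wraps around modulo 2^64, which is exactly the magnitude you're seeing.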