How accurate is Golang's time.Sleep() on a Google Cloud pod when the duration is sub-second?


I added a loop that tries to read data from a database and, if the data is not yet available, sleeps for 250ms and tries again. It works, in the sense that it eventually retrieves the data, but the time it takes to do so increased by about 1 second compared to not having the loop.

Old code:

data, err := db.Read(key)
if err != nil {
    return err
}
...handle data...

New code:

var data string
var err error
for {
    data, err = db.Read(key)
    if len(data) > 0 {
        break // it worked
    }
    // keep polling only while the record is still missing
    if err != nil && !strings.Contains(err.Error(), "not found") {
        return err
    }
    time.Sleep(250 * time.Millisecond)
}
...handle data...

In the old case, it worked about 70% of the time, and the process took about 2s on success.

In the new case, it works about 95% of the time and takes about 3s on success.

I'm pretty sure the data becomes available within 250ms to 500ms, yet the total time to run the process clearly increased by about 1s. I suspect that time.Sleep() on a virtual machine (such as a Google Cloud pod running Docker) does not have access to a clock with such high precision.

My question is: has anyone verified a VM's clock to see whether a sub-second sleep works consistently (or not)? To me, it feels like such a sleep waits until the next second boundary and wakes up then, so at times it may be close to 250ms and many times it could be closer to 750ms, depending on where within the current second it starts.
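
One way to check this empirically is to run something like the following inside the pod and compare the requested and actual sleep durations. This is a minimal, self-contained sketch; it doesn't need the database at all:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Request a 250ms sleep several times and print how long each one actually took.
    const target = 250 * time.Millisecond
    for i := 0; i < 10; i++ {
        start := time.Now()
        time.Sleep(target)
        elapsed := time.Since(start)
        fmt.Printf("requested %v, slept %v, overshoot %v\n", target, elapsed, elapsed-target)
    }
}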


1 Answer

Answered by hobbs:

"I suspect that time.Sleep() on a virtual machine (such as a Google Cloud pod running Docker) does not have access to a clock with such high precision."

This is definitely not true. You may have more timing jitter in such a shared environment, but the timing resolution is the same as you'll find on any modern Linux system, certainly nowhere near as coarse as whole seconds.
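
To see where the extra second is really going, it may help to time each step of the retry loop. The sketch below assumes the same db.Read(key) call returning (string, error) as in the question, and uses the standard log, strings, and time packages; it is only an illustration, not the questioner's actual code:

// readWithRetry polls db.Read until data shows up, logging how long each
// read and each sleep actually takes. db and its Read method are assumed
// to be the same as in the question's code.
func readWithRetry(key string) (string, error) {
    start := time.Now()
    for attempt := 1; ; attempt++ {
        readStart := time.Now()
        data, err := db.Read(key)
        log.Printf("attempt %d: db.Read took %v", attempt, time.Since(readStart))
        if len(data) > 0 {
            log.Printf("data available after %v (%d attempts)", time.Since(start), attempt)
            return data, nil
        }
        if err != nil && !strings.Contains(err.Error(), "not found") {
            return "", err
        }
        sleepStart := time.Now()
        time.Sleep(250 * time.Millisecond)
        log.Printf("attempt %d: sleep took %v", attempt, time.Since(sleepStart))
    }
}

If the logged sleep durations come back close to 250ms, the extra second is coming from db.Read latency or from when the data actually becomes available, not from the sleep itself.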