I am running this program on Ubuntu with Python 3.10.12. I need to measure the time used to evaluate a probability function. Here is the code:
import time
import numpy as np

def likelihood(x, fun):
    if fun == 0:
        return rosenbrock(x)
    else:
        return rosenbrock_simplified(x)

def rosenbrock(x):
    for i in range(3000):
        # some calculation
        pass
    return logl

def time_evaluation():
    x = np.random.normal(size=2)
    start = time.time()
    value = likelihood(x, 0)
    stop = time.time()
    return stop - start
Running this code I get very small time values. If I instead measure the time inside the rosenbrock function, the values are noticeably higher. Is it possible that time.time() only measures the time needed for the likelihood function call itself?
As I said above, I noticed this discrepancy by moving the time.time() calls inside the rosenbrock function.
These time differences are most likely due to the very short time it takes to run your code; at that scale there will always be some fluctuation in the measurements.

You should run your function e.g. 1000 times (more or fewer, but the whole measurement should take at least a few minutes, and there should be no fewer than 10-20 calls), measure the time needed to complete all 1000 calls, and then divide that time by the number of calls to obtain the time of a single call.
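As a minimal sketch of that approach (assuming the likelihood function and x from your question; the helper name average_call_time and the default of 1000 calls are only illustrative):

import time
import numpy as np

def average_call_time(func, n_calls=1000):
    # Call func n_calls times and return the average wall-clock time per call.
    start = time.time()
    for _ in range(n_calls):
        func()
    stop = time.time()
    return (stop - start) / n_calls

# Example usage with the code from the question:
# x = np.random.normal(size=2)
# avg = average_call_time(lambda: likelihood(x, 0), n_calls=1000)
# print(f"average time per call: {avg:.6f} s")

Averaging over many calls smooths out the per-call fluctuations, so the result is much more stable than timing a single call.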