I recently started learning Python and I'm looking for a way to measure how fast my implementations are so I can compare them with others.
I stumbled across the timeit module and found it very useful for this purpose, but I'm struggling to understand the output I get from the timeit() function versus using default_timer.
Take this code example:
from timeit import timeit

def maxWealth(accounts):
    m = 0
    for account in accounts:
        tot = sum(account)
        if tot > m:
            m = tot
    return m

list1 = [[1, 2, 3, 5], [1, 2, 45, 6]]
print(timeit(lambda: maxWealth(list1)))
output: 0.41396119981072843
Then this example:
from timeit import default_timer as timer

def maxWealth(accounts):
    m = 0
    for account in accounts:
        tot = sum(account)
        if tot > m:
            m = tot
    return m

list1 = [[1, 2, 3, 5], [1, 2, 45, 6]]
start = timer()
result = maxWealth(list1)
stop = timer()
print(stop - start)
output: 6.400048732757568e-06 (which is 0.0000064s)
If I keep repeating the executions, the timeit measurements seem a little more consistent than the ones from default_timer, but the two methods still give really different values.
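For example, repeating each measurement a few times looks roughly like this (a simplified sketch; the repeat count of 5 is arbitrary, just to compare a few samples):

from timeit import timeit, default_timer as timer

def maxWealth(accounts):
    m = 0
    for account in accounts:
        tot = sum(account)
        if tot > m:
            m = tot
    return m

list1 = [[1, 2, 3, 5], [1, 2, 45, 6]]

# a few samples from timeit()
for _ in range(5):
    print("timeit:", timeit(lambda: maxWealth(list1)))

# a few samples from default_timer, timing a single call each time
for _ in range(5):
    start = timer()
    maxWealth(list1)
    stop = timer()
    print("default_timer:", stop - start)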
Which method is more accurate? From the docs I see that both approaches rely on the same default_timer under the hood, so if that's the case, why am I getting such different values?