I've had some problems using timeit() as an accurate benchmark when comparing longer (more than one line) code snippets for SL4A on Android: I get fairly high variation between runs (possibly something to do with how the Android/Dalvik VM allocates CPU time?).
Anyway, I wrote a script that uses a hypothesis test for means to analyse large (~1000) samples of times. Is there anything wrong with this approach?
from math import sqrt
import timeit

# statistics stuff
mean = lambda x: sum(x) / float(len(x))

def stdev(mean, dataset):
    # note: this actually returns the standard error of the mean,
    # not the sample standard deviation
    variance = ((x - mean) ** 2 for x in dataset)
    deviation = sqrt(sum(variance) / float(len(dataset) - 1))
    return deviation / sqrt(len(dataset))

def interval(mean, sampleDeviation, defaultZ=1.57):
    margin = sampleDeviation * defaultZ
    return (mean - margin, mean + margin)

def testnull(dataset1, dataset2, defaultZ=1.57):
    mean1, mean2 = mean(dataset1), mean(dataset2)
    sd1, sd2 = stdev(mean1, dataset1), stdev(mean2, dataset2)
    interval1, interval2 = interval(mean1, sd1, defaultZ), interval(mean2, sd2, defaultZ)
    inside = lambda x, y: x[0] <= y <= x[1]
    # the third check covers the case where interval2 entirely contains interval1,
    # which the two endpoint checks alone would miss
    if (inside(interval1, interval2[0]) or inside(interval1, interval2[1])
            or inside(interval2, interval1[0])):
        return True
    return False

# timer setup
# caveat: the generator in t1's setup is exhausted after timeit's first
# inner iteration, so subsequent iterations sum an empty iterator
t1 = timeit.Timer('sum(x)', 'x = (i for i in range(1000))')
t2 = timeit.Timer('sum(x)', 'x = list(range(1000))')
genData, listData = [], []
for i in range(10000):
    genData.append(t1.timeit())
    listData.append(t2.timeit())

# testing the interval
print('The null hypothesis is {0}'.format(testnull(genData, listData)))
I think that's sensible. What you want is to compare the confidence intervals of the two versions of the code for overlap, and Georges et al. (2007) give a complete description of the technique you are trying to use.
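For reference, the overlap test can be sketched in a few lines of standard-library Python. This is a minimal illustration, not the paper's exact procedure: the 95% z-value (1.96), the helper names, and the timing samples are all assumptions chosen for the example.

```python
from math import sqrt

def confidence_interval(sample, z=1.96):
    """Return a (low, high) confidence interval for the mean of `sample`.

    z=1.96 gives roughly a 95% interval under a normal approximation,
    which is reasonable for the ~1000-sample runs described above.
    """
    n = len(sample)
    m = sum(sample) / n
    variance = sum((x - m) ** 2 for x in sample) / (n - 1)
    sem = sqrt(variance / n)  # standard error of the mean
    return (m - z * sem, m + z * sem)

def overlaps(a, b):
    """True if intervals a and b overlap: we cannot reject the null hypothesis.

    This symmetric check also handles one interval containing the other.
    """
    return a[0] <= b[1] and b[0] <= a[1]

# hypothetical timing samples, seconds per run
fast = [1.01, 0.99, 1.02, 0.98, 1.00]
slow = [1.20, 1.25, 1.18, 1.22, 1.21]

print(overlaps(confidence_interval(fast), confidence_interval(slow)))  # False
```

If the intervals do not overlap, the difference is statistically significant at the chosen level; if they do overlap, the two snippets cannot be distinguished from the samples collected.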