Can somebody explain to me why this is happening?
aatiis@aiur ~ $ time python /usr/lib64/python2.7/timeit.py -n 1 \
-- 'x = 10**1000'
1 loops, best of 3: 0.954 usec per loop
real 0m0.055s
user 0m0.050s
sys 0m0.000s
aatiis@aiur ~ $ time python /usr/lib64/python2.7/timeit.py -n 1 \
-- 'x = 10**100000'
1 loops, best of 3: 0.954 usec per loop
real 0m0.067s
user 0m0.040s
sys 0m0.020s
aatiis@aiur ~ $ time python /usr/lib64/python2.7/timeit.py -n 1 \
-- 'x = 10**10000000'
1 loops, best of 3: 0.954 usec per loop
real 0m20.802s
user 0m20.540s
sys 0m0.170s
I get the exact same result from timeit, but time tells me that evaluating 10**10000000 takes more than 20 seconds. The same happens if I call timeit from the interpreter:
>>> t = time.time()
>>> print timeit.timeit('x = 10**10000000;', number=1)
5.00679016113e-06
>>> print time.time() - t
20.6168580055
Why is my timeit not working, or what am I doing wrong?
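To separate the two phases, here is a small experiment of my own (not from the transcripts above) that times compilation and execution independently. If the interpreter folds constant expressions like 10**N at compile time, the cost should land in compile(), while running the resulting code object stays cheap:

```python
import time

# Hypothetical experiment: time the compile step and the execution step
# separately. A constant expression like 10**100000 may be evaluated by
# the compiler (constant folding), so the big-int work happens in
# compile() and the code object merely loads the precomputed constant.
src = 'x = 10**100000'

t0 = time.time()
code = compile(src, '<string>', 'exec')
compile_secs = time.time() - t0

ns = {}
t0 = time.time()
exec(code, ns)  # just LOAD_CONST + STORE if the power was folded
exec_secs = time.time() - t0

print('compile: %.6fs  exec: %.6fs' % (compile_secs, exec_secs))
```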
Additional info:
>>> print sys.version
2.7.1+ (2.7:4f07cacb2c3b+, Mar 28 2011, 23:11:59)
[GCC 4.4.5]
>>> print sys.version_info
sys.version_info(major=2, minor=7, micro=2, releaselevel='alpha', serial=0)
UPDATE:
Here's another very interesting observation:
>>> def run():
... t = time.time()
... x = 10**10000000
... print time.time() - t
When I press Enter after defining this function, it takes about half a minute until I get the prompt back. And then:
>>> run()
2.14576721191e-06
Why is that happening? Is the function body being pre-compiled or optimized somehow?
Compare:
>>> import dis
>>> def run():
... return 10**100
...
>>> dis.dis(run)
3 0 LOAD_CONST 3 (100000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000L)
3 RETURN_VALUE
And
>>> def runvar():
... x = 10
... return x**100
...
>>> dis.dis(runvar)
3 0 LOAD_CONST 1 (10)
3 STORE_FAST 0 (x)
4 6 LOAD_FAST 0 (x)
9 LOAD_CONST 2 (100)
12 BINARY_POWER
13 RETURN_VALUE
Notice that BINARY_POWER is executed at runtime only in the second case. So timeit behaves as it should.