Tags: python, floating-point, decimal

Decimal Python vs. float runtime


Just a general question about what sort of runtime difference I should expect between these two data types.

My test:

import decimal

test = [100.0897463, 1.099999939393, 1.37382829829393,
        29.1937462874847272, 2.095478262874647474]
test2 = [decimal.Decimal('100.0897463'), decimal.Decimal('1.09999993939'),
         decimal.Decimal('1.37382829829'), decimal.Decimal('29.1937462875'),
         decimal.Decimal('2.09547826287')]

def average(numbers, ddof=0):
    # ddof: delta degrees of freedom subtracted from the count (numpy-style)
    return sum(numbers) / (len(numbers) - ddof)

%timeit average(test)
%timeit average(test2)

The runtimes (float version first, then Decimal) are:
1000000 loops, best of 3: 364 ns per loop
10000 loops, best of 3: 80.3 µs per loop

So using decimal was about 200 times slower than using floats. Is this type of difference normal and along the lines of what I should expect when deciding which data type to use?
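
For anyone without IPython's %timeit, here is a rough standalone version of the same benchmark using the stdlib timeit module (a minimal sketch: the loop counts are arbitrary, and for brevity the Decimal values are built from the floats' string form rather than the hand-typed strings above):

import decimal
import timeit

test = [100.0897463, 1.099999939393, 1.37382829829393,
        29.1937462874847272, 2.095478262874647474]

# Built from the floats for brevity; the values differ slightly
# from the hand-typed Decimal strings in the question.
test2 = [decimal.Decimal(str(x)) for x in test]

def average(numbers, ddof=0):
    return sum(numbers) / (len(numbers) - ddof)

# timeit.timeit returns the total seconds for `number` calls of the callable.
float_total = timeit.timeit(lambda: average(test), number=1000000)
decimal_total = timeit.timeit(lambda: average(test2), number=100000)

print("float:   %.0f ns per call" % (float_total / 1000000 * 1e9))
print("Decimal: %.2f us per call" % (decimal_total / 100000 * 1e6))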


Solution

  • Based on the time difference you are seeing, you are likely using Python 2.x. In Python 2.x, the decimal module is written in pure Python and is rather slow. Beginning with Python 3.3, the decimal module was rewritten in C and is much faster.

    Using Python 2.7 on my system, the decimal module is ~180x slower. Using Python 3.5, the decimal module is only ~2.5x slower.

    If you care about decimal performance, Python 3 is much faster.
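
    One way to confirm which implementation your interpreter is using (a sketch that assumes CPython, where the C accelerator lives in the private _decimal module) is:

    try:
        # On CPython 3.3+, the decimal module imports its fast C
        # implementation from _decimal when it is available.
        import _decimal
        print("decimal is backed by the C implementation")
    except ImportError:
        print("decimal falls back to the pure-Python implementation")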