python, python-3.x, gmpy

How to boost the efficiency of large-number modular multiplications in Python


I am using Python 3 without any specialized library for some simple arithmetic. The operation that dominates the running time is the multiplication of many 2048-bit values:

length = len(array)
res = 1
for x in range(length):
    res = res * int(array[x])
    ret = res % n2

To give you an idea, it takes ~3940 seconds to perform 10,000 multiplications, taking a modulus after every multiplication, on an:

Intel Core i5 CPU M 560 @ 2.67 GHz × 4 with 8 GB of memory, running 32-bit Ubuntu 12.04.

Would it make sense to speed this up using a library like gmpy2, or would there be no advantage?


Solution

  • You seem to be calculating the product of all the numbers first and then taking the remainder, rather than exploiting the properties of modular multiplication: a * b * c mod p == (a * b mod p) * c mod p. Reducing after every multiplication keeps the intermediate product small, so multiplying 10,000 2048-bit numbers modulo some n takes very little time at all:

    In [1]: import random
    
    In [2]: array = [random.randrange(2**2048) for i in range(10000)]
    
    In [3]: n = random.randrange(2**2048)
    
    In [4]: prod = 1
    
    In [5]: %%time
       ...: for e in array:
       ...:         prod *= e
       ...:         prod %= n
       ...: 
    CPU times: user 210 ms, sys: 4.07 ms, total: 214 ms
    Wall time: 206 ms
    

    For you, I would suggest:

    array = map(int, array)
    prod = 1
    for x in array:
        prod *= x
        prod %= n2
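
    As for gmpy2: the reduce-as-you-go fix above is the main win, but the same loop translates directly to gmpy2's mpz type if you want to try it. The following is a minimal sketch, assuming gmpy2 is installed and that array and n2 come from your code; whether it gives a further speed-up depends on your gmpy2 build:

    import gmpy2

    n2 = gmpy2.mpz(n2)                     # convert the modulus to an mpz once
    prod = gmpy2.mpz(1)
    for x in array:
        prod = (prod * gmpy2.mpz(x)) % n2  # reduce after every multiplication

    Either way, the dominant saving comes from keeping prod at roughly 2048 bits instead of letting the product grow with every factor.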