python · numpy · matrix-multiplication

What is the difference between the function numpy.dot(), the @ operator, and the method .dot() for matrix-matrix multiplication?


Is there any difference? If not, which is preferred by convention? The performance seems to be almost the same.

import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)
%timeit a.dot(b)     # 14.3 ms ± 374 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.dot(a, b) # 14.7 ms ± 315 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit a @ b        # 15.1 ms ± 779 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
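Beyond timing, it is worth confirming that the three spellings actually produce the same result for 2-D arrays. A minimal sketch (smaller matrices than above, variable names are mine):

```python
import numpy as np

a = np.random.rand(100, 100)
b = np.random.rand(100, 100)

# All three spellings compute the same matrix product for 2-D inputs.
r1 = a.dot(b)       # method call
r2 = np.dot(a, b)   # free function
r3 = a @ b          # operator, dispatches to np.matmul

print(np.allclose(r1, r2) and np.allclose(r1, r3))
```

For 2-D arrays all three agree to within floating-point tolerance, which matches the near-identical timings.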

Solution

  • For 2-D arrays they all compute the same matrix product: a.dot(b) and np.dot(a, b) are the same operation, and a @ b dispatches to np.matmul, which agrees with dot in the 2-D case — hence the near-identical timings. They differ for other shapes. Per NumPy's documentation for np.dot:

    • If both a and b are 1-D arrays, it is inner product of vectors (without complex conjugation).

    • If both a and b are 2-D arrays, it is matrix multiplication, but using matmul or a @ b is preferred.

    • If either a or b is 0-D (scalar), it is equivalent to multiply and using numpy.multiply(a, b) or a * b is preferred.

    • If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.
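The cases listed above can be sketched as follows (a quick illustration of np.dot's shape-dependent behavior; the variable names are mine):

```python
import numpy as np

# 1-D · 1-D: inner product of vectors
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
print(np.dot(v, w))   # 32.0, same as (v * w).sum()

# 2-D · 2-D: matrix multiplication (np.matmul / a @ b preferred)
A = np.eye(2)
B = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.dot(A, B))   # equals A @ B

# scalar operand: equivalent to elementwise multiply (a * b preferred);
# note that the @ operator rejects scalar operands, while np.dot accepts them
print(np.dot(2.0, B)) # equals 2.0 * B

# N-D · 1-D: sum product over the last axis of the first argument
M = np.arange(6.0).reshape(2, 3)
print(np.dot(M, v))   # shape (2,): each row of M dotted with v
```

The practical takeaway: for plain matrix-matrix multiplication the three spellings are interchangeable, and a @ b is the conventional choice; the differences only show up for scalars and higher-dimensional inputs.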