I am trying to calculate the dot product of two vectors using numpy. Here is what I know so far: I can use
np.inner(v, w)
to get the dot product of two vectors, but since 1D arrays have a shape of (m,), I prefer not to use them because of the trouble that lost second dimension causes. So I define my vectors as 2D column arrays instead:
v = np.array([[1], [3], [5]])
w = np.array([[2], [4], [6]])
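For comparison, a quick illustration of the difference (the numbers are just the flattened values of v and w above): with 1D arrays, np.inner gives the scalar directly, while on these column vectors it produces a 3x3 matrix rather than the dot product.
np.inner(np.array([1, 3, 5]), np.array([2, 4, 6]))  # 44, the scalar dot product
np.inner(v, w).shape                                # (3, 3), not a dot product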
We can calculate the dot product by transposing one of these vectors and then using matrix multiplication:
vdotw = v.T @ w
or:
vdotw = w.T @ v
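Either way, the result for these example vectors is a 1x1 array rather than a plain scalar, so .item() can be used to pull out the number if needed:
vdotw = v.T @ w      # array([[44]]) -- a 1x1 array
vdotw.item()         # 44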
My question is: is there any method in numpy that specifically calculates the dot product of two vectors (defined as 2D arrays, not those funny 1D arrays) directly, without transposing one of them?
Numpy's einsum() can be used to do this.
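For example, a minimal sketch for the column vectors in your question (the subscripts assume both inputs have shape (m, 1)):
import numpy as np

v = np.array([[1], [3], [5]])
w = np.array([[2], [4], [6]])

# multiply elementwise and sum over both axes -- no transpose needed
vdotw = np.einsum('ij,ij->', v, w)   # 44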
However, your request is very unusual. Why don't you want to use the transpose? The transpose operation has virtually zero cost: it does not make a copy of the underlying data, as I suspect you imagine it does.
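If you want to verify that for yourself, a quick check (reusing v from your question):
np.shares_memory(v, v.T)   # True -- the transpose is a view of the same buffer
v.T.base is v              # True -- no data is copied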