I have a set of vectors as a numpy array. I need to get the orthogonal distance of each of them from a plane defined by two vectors v1 and v2. For a single vector I can get this easily using the Gram-Schmidt process. Is there a way to do it fast for many vectors, without looping over each of them or using np.vectorize?
Thanks!
To make @Jaime's answer more explicit, you can construct the projection operator yourself:
def normalized(v):
    return v / np.sqrt(np.dot(v, v))

def ortho_proj_vec(v, uhat):
    '''Return the projection of the vector v onto the subspace
    orthogonal to uhat (which must be a unit vector) by subtracting off
    the appropriate multiple of uhat,
    i.e. np.dot(retval, uhat) == 0.
    '''
    return v - np.dot(v, uhat)*uhat

def ortho_proj_array(Varray, uhat):
    '''Compute the orthogonal projection for an entire array of vectors.
    @arg Varray: an array of vectors, one per row
        (i.e. Varray.shape[1] == len(uhat)).
    @arg uhat: a unit vector
    @retval: an array (same shape as Varray) in which each vector
        has had its component parallel to uhat removed.
    Postcondition: np.dot(retval[i, :], uhat) == 0.0 for all i.
    '''
    return Varray - np.outer(np.dot(Varray, uhat), uhat)
# We need to ensure that the vectors defining the subspace have unit norm.
v1hat = normalized(v1)

# To deal with v2, we project it into the subspace
# orthogonal to v1hat and normalize it.
v2hat = normalized(ortho_proj_vec(v2, v1hat))
# TODO: if np.dot(normalized(v2), v1hat) is close to 1.0, we probably
# have an ill-conditioned system (the vectors are close to parallel).

# Act on each of your data vectors with the projection,
# then take the norm of the resulting vector.
result = np.zeros(M.shape[0], dtype=M.dtype)
for idx in range(M.shape[0]):
    tmp = ortho_proj_vec(ortho_proj_vec(M[idx, :], v1hat), v2hat)
    result[idx] = np.sqrt(np.dot(tmp, tmp))
# The preceding loop could be avoided via
#tmp = ortho_proj_array(ortho_proj_array(M, v1hat), v2hat)
#result = np.sqrt(np.sum(tmp**2, axis=-1))
# but this creates temporary matrices the same size as M, so,
# initially, I prefer the loop approach on memory-usage grounds.
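If memory is not a concern, the whole computation can be done in one vectorized pass. Here is a minimal self-contained sketch; the vectors v1, v2 and the data matrix M are made-up example values (the plane here is the x-y plane, so the distances are just the |z| components):

```python
import numpy as np

def normalized(v):
    return v / np.sqrt(np.dot(v, v))

# Made-up example basis vectors and data matrix (one vector per row)
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [0.0, 0.0, 2.0]])

# Gram-Schmidt: orthonormalize the basis of the plane
v1hat = normalized(v1)
v2hat = normalized(v2 - np.dot(v2, v1hat) * v1hat)

# Because v1hat and v2hat are orthonormal, we can subtract both
# in-plane components from every row at once; the row-wise norms of
# the remainder are the orthogonal distances from the plane.
tmp = M - np.outer(np.dot(M, v1hat), v1hat) - np.outer(np.dot(M, v2hat), v2hat)
distances = np.sqrt(np.sum(tmp**2, axis=-1))
print(distances)  # -> [3. 6. 2.]
```

Note that subtracting both projections in a single step is only valid because v1hat and v2hat have been orthonormalized first.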
This is really just a generalization of the Gram-Schmidt orthogonalization procedure. Note that at the end of this process we have np.dot(v1hat, v1hat) == 1, np.dot(v2hat, v2hat) == 1, and np.dot(v1hat, v2hat) == 0 (to within numerical precision).
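Those three identities are easy to check numerically; a quick sketch with made-up plane vectors:

```python
import numpy as np

def normalized(v):
    return v / np.sqrt(np.dot(v, v))

# Made-up example plane vectors (not parallel)
v1 = np.array([3.0, 1.0, 2.0])
v2 = np.array([1.0, 4.0, 1.0])

v1hat = normalized(v1)
v2hat = normalized(v2 - np.dot(v2, v1hat) * v1hat)

print(np.dot(v1hat, v1hat))  # ~1.0
print(np.dot(v2hat, v2hat))  # ~1.0
print(np.dot(v1hat, v2hat))  # ~0.0
```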