I'm working on implementing the stochastic gradient descent algorithm for recommender systems using sparse matrices with SciPy.
This is what a first basic implementation looks like:
N = self.model.shape[0]  # number of users
M = self.model.shape[1]  # number of items
self.p = np.random.rand(N, K)
self.q = np.random.rand(M, K)
rows, cols = self.model.nonzero()
for step in range(steps):
    for u, i in zip(rows, cols):
        e = self.model - np.dot(self.p, self.q.T)  # calculate error for the gradient
        p_temp = learning_rate * (e[u, i] * self.q[i, :] - regularization * self.p[u, :])
        self.q[i, :] += learning_rate * (e[u, i] * self.p[u, :] - regularization * self.q[i, :])
        self.p[u, :] += p_temp
Unfortunately, my code is still pretty slow, even for a small 4x5 ratings matrix. I suspect this is due to the explicit for loop over the sparse matrix entries. I've tried expressing the q and p updates using fancy indexing, but since I'm still pretty new to SciPy and NumPy, I couldn't figure out a better way to do it.
Do you have any pointers on how I could avoid iterating over the rows and columns of the sparse matrix explicitly?
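For reference, here is a self-contained version of the snippet above that runs on its own; the class attributes become locals, and the shapes and hyperparameters are arbitrary picks just to reproduce the problem:

import numpy as np
from scipy import sparse

K, steps, learning_rate, regularization = 2, 500, 0.01, 0.02
model = sparse.random(4, 5, density=0.5, format='csr', random_state=0)  # toy 4x5 ratings

N, M = model.shape
p = np.random.rand(N, K)
q = np.random.rand(M, K)
rows, cols = model.nonzero()
for step in range(steps):
    for u, i in zip(rows, cols):
        e = model - np.dot(p, q.T)  # recomputed for every observed entry
        p_temp = learning_rate * (e[u, i] * q[i, :] - regularization * p[u, :])
        q[i, :] += learning_rate * (e[u, i] * p[u, :] - regularization * q[i, :])
        p[u, :] += p_temp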
I almost forgot everything about recommender systems, so I may have translated your code incorrectly, but you re-evaluate self.model - np.dot(self.p, self.q.T)
inside each iteration of the inner loop, while I am almost convinced it should be evaluated once per step.
Then it seems that you do the matrix multiplication by hand, which can probably be sped up with direct matrix multiplication (NumPy or SciPy will do it faster than you can by hand), something like this:
for step in range(steps):
    # evaluate the full error matrix once per step, not once per entry
    e = self.model - np.dot(self.p, self.q.T)
    p_temp = learning_rate * np.dot(e, self.q)
    # the shrink factor matching your per-entry update is (1 - learning_rate * regularization)
    self.q *= (1 - learning_rate * regularization)
    self.q += learning_rate * np.dot(e.T, self.p)
    self.p *= (1 - learning_rate * regularization)
    self.p += p_temp
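Note that e here is dense and spreads the error over every entry, including the unobserved ones (the sparse subtraction treats them as zero ratings). If you want the gradient taken only over the observed entries, as in your original loop, one way is to scatter the residuals back into a sparse matrix. A minimal sketch, assuming self.model is a scipy.sparse matrix in CSR format (the function name sgd_step and its signature are mine, not part of your class):

import numpy as np
from scipy import sparse

def sgd_step(model, p, q, learning_rate, regularization):
    # residuals at the observed entries only
    rows, cols = model.nonzero()
    vals = np.asarray(model[rows, cols]).ravel()    # observed ratings
    pred = np.einsum('ij,ij->i', p[rows], q[cols])  # predictions at the same entries
    e = sparse.coo_matrix((vals - pred, (rows, cols)), shape=model.shape)
    # same updates as above, but e is sparse, so each product costs O(nnz * K)
    p_temp = learning_rate * e.dot(q)
    q *= (1 - learning_rate * regularization)
    q += learning_rate * e.T.dot(p)
    p *= (1 - learning_rate * regularization)
    p += p_temp
    return p, q

Calling this once per step keeps the per-step cost proportional to the number of observed ratings rather than N*M.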