I have to factorize a matrix R[m×n] into two low-rank matrices, U[K×m] and V[K×n]; I'm doing this to predict the missing values of R from U and V.
The problem is that I can't use Matlab's built-in factorization methods for this, so I have to work with an objective function F that minimizes the sum of squared errors in order to get an accurate factorization. Details are shown below:
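Concretely, for R ≈ transpose(U)*V, the standard sum-of-squared-errors objective over the known entries of R is

$$F(U, V) = \sum_{(i,j) \in \Omega} \left( R_{ij} - U_i^\top V_j \right)^2$$

where Ω is the set of (i, j) pairs for which R_ij is known, and U_i, V_j are the i-th and j-th columns of U and V. A regularization term $\lambda \left( \|U\|_F^2 + \|V\|_F^2 \right)$ is often added to keep the factors small.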
My question in this post is: how do I minimize the function F in Matlab, using stochastic gradient descent, to decompose R into the U and V matrices?
Thanks for any help!
Finally I figured it out, with the help of this page :)
I'll explain the approach in a few steps:
1. Create U[K×m] and V[K×n] and fill them arbitrarily (e.g., with small random values).
2. Compute the derivatives of the objective function F with respect to Ui and Vj (worked out below).
3. Do gradient descent as follows:
```
while (stopping criterion on F is not yet met) {
    Ui = Ui + a * (U'i);
    Vj = Vj + a * (V'j);
    evaluate F using the new values of Ui and Vj;
}
```
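Here U'i and V'j are the derivative-based steps from step 2. Assuming the unregularized squared-error objective above, for a single known entry (i, j) with error $e_{ij} = R_{ij} - U_i^\top V_j$ the derivatives work out to

$$\frac{\partial F_{ij}}{\partial U_i} = -2\, e_{ij}\, V_j, \qquad \frac{\partial F_{ij}}{\partial V_j} = -2\, e_{ij}\, U_i$$

so taking U'i = e_ij · Vj and V'j = e_ij · Ui (folding the constant 2 into the step size a) makes each update a descent step on F.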
Once F reaches its minimum, take U and V, compute transpose(U)*V, and the result is the estimated R (a is the step size, i.e. the learning rate).
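Putting the steps together, here is a minimal MATLAB sketch of the whole loop. It's not the exact code I used: it assumes missing entries of R are stored as NaN, uses the unregularized update above, and the function and parameter names (sgd_factorize, K, a, maxIter) are just placeholders.

```matlab
% Minimal SGD sketch for R ~ transpose(U)*V.
% Assumes missing entries of R are NaN; names are placeholders.
function [U, V] = sgd_factorize(R, K, a, maxIter)
    [m, n] = size(R);
    U = 0.1 * randn(K, m);             % step 1: fill U and V arbitrarily
    V = 0.1 * randn(K, n);
    [I, J] = find(~isnan(R));          % indices of the known entries of R
    for iter = 1:maxIter
        for t = randperm(numel(I))     % visit known entries in random order
            i = I(t);  j = J(t);
            e = R(i,j) - U(:,i)' * V(:,j);     % prediction error e_ij
            Ui = U(:,i);                       % keep old Ui for Vj's update
            U(:,i) = U(:,i) + a * e * V(:,j);  % Ui = Ui + a * U'i
            V(:,j) = V(:,j) + a * e * Ui;      % Vj = Vj + a * V'j
        end
        Rhat = U' * V;                 % current estimate of R
        known = ~isnan(R);
        F = sum((R(known) - Rhat(known)).^2);  % evaluate F on known entries
        fprintf('iter %d: F = %g\n', iter, F);
    end
end
```

With something like `[U, V] = sgd_factorize(R, 10, 0.01, 100)` you would then take `U' * V` as the estimated R; in practice you'd replace the fixed maxIter with a check that F has stopped decreasing.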