Tags: python, pytorch, embedding, matrix-factorization

PyTorch matrix factorization embedding error


I'm trying to use a single-hidden-layer NN to perform matrix factorization. Specifically, I'm trying to solve for a tensor V with dimensions [9724 x 300], where 9724 is the number of items in inventory and 300 is an arbitrary number of latent features.

The data I have is a [9724 x 9724] matrix X whose entries count mutual likes (e.g., X[0, 1] is the number of users who like both item 0 and item 1). Diagonal entries are not of importance.

My goal is to use MSE loss so that the dot product of V[i, :] and V[j, :] is as close as possible to X[i, j].
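In other words, treating V and X as tensors, for each pair (i, j) I want roughly:

    # squared error between the dot product of the two
    # latent vectors and the observed mutual-like count
    loss_ij = (torch.dot(V[i, :], V[j, :]) - X[i, j]) ** 2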

Below is code that I've adapted from the following post:

https://blog.fastforwardlabs.com/2018/04/10/pytorch-for-recommenders-101.html

import torch
import torch.nn as nn
from torch.autograd import Variable

class MatrixFactorization(torch.nn.Module):
    def __init__(self, n_items=len(movie_ids), n_factors=300):
        super().__init__()

        # one n_factors-dimensional latent vector per item
        self.vectors = nn.Embedding(n_items, n_factors, sparse=True)

    def forward(self, i, j):
        # dot product of the latent vectors for items i and j
        return (self.vectors([i])*torch.transpose(self.vectors([j]))).sum(1)

    def predict(self, i, j):
        return self.forward(i, j)

    def predict(self, i, j):
        return self.forward(i, j)

model = MatrixFactorization(n_items=len(movie_ids), n_factors=300)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for i in range(len(movie_ids)):
    for j in range(len(movie_ids)):
        # get the mutual-like count for the item pair (i, j)
        rating = Variable(torch.FloatTensor([X[i, j]]))
        # predict
#         i = Variable(torch.LongTensor([int(i)]))
#         j = Variable(torch.LongTensor([int(j)]))
        prediction = model(i, j)
        loss = loss_fn(prediction, rating)

        # backpropagate
        loss.backward()

        # update weights
        optimizer.step()

The error returned is:

TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list

I'm very new to embeddings. I had tried replacing the embedding with a plain float tensor, but then the MatrixFactorization class I defined did not recognize the tensor as a model parameter to be optimized.
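For concreteness, that attempt looked roughly like the sketch below (the exact initialization is not the point):

    # a plain tensor attribute is not registered as a module parameter,
    # so model.parameters() has nothing for the optimizer to update
    self.vectors = torch.randn(n_items, n_factors, requires_grad=True)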

Any thoughts on where I'm going wrong?


Solution

  • You are passing a list to self.vectors:

        return (self.vectors([i])*torch.transpose(self.vectors([j]))).sum(1)

    nn.Embedding expects a tensor of indices (a LongTensor), not a Python list. Convert i and j to tensors before calling self.vectors(); that is what the two commented-out lines in your training loop were doing.
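A minimal corrected forward, assuming i and j arrive as plain Python ints from the training loop. Note that the element-wise product of the two equal-shaped embedding outputs, summed over the factor dimension, is already the dot product, so the transpose is not needed:

    def forward(self, i, j):
        # embedding lookups require (Long)Tensor indices, not lists
        i = torch.LongTensor([i])
        j = torch.LongTensor([j])
        return (self.vectors(i) * self.vectors(j)).sum(1)

Two smaller points: call optimizer.zero_grad() before each loss.backward(), otherwise gradients accumulate across pairs; and if you do want a raw tensor instead of nn.Embedding, wrap it in nn.Parameter so the optimizer can find it:

    # nn.Parameter registers the tensor with the module,
    # so it appears in model.parameters()
    self.vectors = nn.Parameter(torch.randn(n_items, n_factors))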