python-3.x, numpy, recommendation-engine, tf-idf, mxnet

MXNet - Dot Product of Sparse Matrices


I'm in the process of building a content recommendation model using MXNet. Although the dataset is only ~10K rows, out-of-memory errors are thrown under both CPU and GPU contexts. The current code is below.

```
import mxnet as mx
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

df = pd.read_csv("df_text.csv")

tf = TfidfVectorizer(analyzer="word",
                     ngram_range=(1, 3),
                     min_df=2,
                     stop_words="english")

tfidf_matrix = tf.fit_transform(df["text_column"])

mx_tfidf = mx.nd.array(tfidf_matrix, ctx=mx.gpu())

# Out-of-memory error occurs here.
cosine_similarities = mx.ndarray.dot(mx_tfidf, mx_tfidf.T)
```

I'm aware that the dot product involves a sparse matrix multiplied by a dense matrix, which may be part of the issue. That said, would the dot product have to be calculated across multiple GPUs in order to prevent out-of-memory issues?
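For scale, a back-of-the-envelope estimate shows why densifying the TF-IDF matrix is the likely culprit rather than the similarity matrix itself. The vocabulary size below is hypothetical (the real one depends on the corpus, and 1-3-grams with `min_df=2` can easily reach hundreds of thousands of features):

```python
# Hypothetical sizes: 10K documents, 500K TF-IDF features (1-3-grams).
rows, vocab = 10_000, 500_000

# Bytes needed if the sparse TF-IDF matrix is stored densely as float32.
dense_bytes = rows * vocab * 4
print(dense_bytes / 1e9)   # 20.0 (GB) — far beyond typical GPU memory

# The 10K x 10K similarity matrix is comparatively small.
sim_bytes = rows * rows * 4
print(sim_bytes / 1e6)     # 400.0 (MB)
```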


Solution

  • In MXNet (and, AFAIK, all other platforms) there is no magical "perform dot across GPUs" solution. One option is to use sparse matrices in MXNet (see this tutorial).

    Another option is to implement your own multi-GPU dot product by slicing your input array into multiple sub-matrices and performing part of the dot product on each GPU.
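The slicing idea can be sketched as follows. A multi-GPU MXNet run can't be demonstrated here, so this sketch uses SciPy sparse matrices to show the row-chunking itself; the matrix sizes and chunk width are illustrative, and a random CSR matrix stands in for the real `tf.fit_transform(...)` output:

```python
import scipy.sparse as sp

# Stand-in for the TF-IDF output: a small random CSR matrix.
# The row slicing below works identically on the real sparse TF-IDF matrix.
tfidf = sp.random(1000, 5000, density=0.01, format="csr", random_state=0)

chunk = 200  # rows per block; tune so each block fits in one device's memory
blocks = []
for start in range(0, tfidf.shape[0], chunk):
    # Each iteration computes one (chunk x n_rows) strip of the result,
    # keeping everything sparse. In a multi-GPU setup, each strip could be
    # dispatched to a different device (e.g. mx.gpu(i)) and gathered after.
    blocks.append(tfidf[start:start + chunk] @ tfidf.T)

similarities = sp.vstack(blocks)  # assemble the full n_rows x n_rows result
```

Since sklearn's `TfidfVectorizer` L2-normalizes rows by default, this linear kernel is already the cosine similarity, so no extra normalization step is needed.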