Tags: python, nlp, spacy, similarity, sentence-similarity

Efficient Way to Compute the Similarity of Multiple Documents Using spaCy


I have around 10k docs (mostly 1-2 sentences) and, for each of them, want to find the ten most similar docs in a collection of 60k docs. For this I want to use the spaCy library. Due to the large number of docs this needs to be efficient, so my first idea was to compute the document vector (https://spacy.io/api/doc#vector) for each of the 60k docs as well as for each of the 10k docs and save them in two matrices. These two matrices can be multiplied to get the dot product, which can be interpreted as the similarity (a rough sketch of this step is included at the end of the question). Now, I have basically two questions:

  1. Is this actually the most efficient way, or is there a clever trick that can speed up this process?
  2. If there is no better approach, is there at least a way to speed up computing the matrices of document vectors? Currently I am using a for loop, which obviously is not exactly fast:
import numpy as np
import spacy

nlp = spacy.load('en_core_web_lg')
doc_matrix = np.zeros((len(train_list), 300))
for i, text in enumerate(train_list):  # train_list contains the individual documents
    doc = nlp(text)
    doc_matrix[i] = doc.vector

Is there, for example, a way to parallelize this?
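
For reference, this is roughly what I mean by the matrix approach (a sketch only; query_matrix and doc_matrix stand for the 10k x 300 and 60k x 300 vector matrices, and the rows are normalized so the dot product corresponds to cosine similarity):

import numpy as np

# Illustrative names: query_matrix is 10k x 300, doc_matrix is 60k x 300.
# Normalizing the rows makes the dot product equal to cosine similarity.
query_norm = query_matrix / np.linalg.norm(query_matrix, axis=1, keepdims=True)
doc_norm = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
sims = query_norm @ doc_norm.T                 # 10k x 60k similarity matrix
top_ten = np.argsort(-sims, axis=1)[:, :10]    # indices of the ten most similar docs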


Solution

  • Don't do a big matrix operation; instead, put your document vectors in an approximate nearest neighbors store (annoy is easy to use) and query the nearest items for each vector. A rough sketch follows below.

    Doing a big matrix operation means computing every pairwise comparison (here, 10k x 60k), whereas approximate nearest neighbors techniques partition the space and perform far fewer calculations. That matters much more for overall runtime than anything you do with spaCy.

    That said, also check the spaCy speed FAQ.
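
    As a rough sketch (assuming the spotify/annoy package, installed with pip install annoy, and that train_list and query_list hold the 60k and 10k texts), the index could be built and queried like this; spaCy's nlp.pipe is used to batch the documents, which is usually faster than calling nlp() in a plain loop:

    import spacy
    from annoy import AnnoyIndex

    nlp = spacy.load('en_core_web_lg')

    # Index the 60k collection docs by their spaCy document vectors.
    index = AnnoyIndex(300, 'angular')  # angular distance ~ cosine similarity
    for i, doc in enumerate(nlp.pipe(train_list, batch_size=1000)):
        index.add_item(i, doc.vector)
    index.build(50)  # 50 trees; more trees improve accuracy at the cost of build time

    # For each of the 10k query docs, fetch the ten approximate nearest neighbors.
    top_ten = [index.get_nns_by_vector(doc.vector, 10)
               for doc in nlp.pipe(query_list, batch_size=1000)]

    On the parallelization question: in recent spaCy versions, nlp.pipe also accepts an n_process argument to spread the vector computation across multiple processes.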