Tags: python, machine-learning, scoring

Combine independent scores to make aggregate score in duplicate detection algorithm


I'm building a duplicate detector, and I have identified a few factors that should be correlated to being a duplicate:

  • Comparison of document lengths
  • Comparison of document titles
  • Comparison of document citations
  • Comparison of document texts using "gestalt pattern matching"

I can easily get a 0-1 value for any of these factors, but where I'm stuck is how to combine these factors into an aggregate.
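
For concreteness, the 0-1 scores could be computed along these lines (the dict keys below are placeholders for however the documents are actually stored; Python's standard-library difflib.SequenceMatcher implements gestalt pattern matching):

    import difflib

    def similarity_scores(doc_a, doc_b):
        """Return the four 0-1 factor scores for a pair of documents.

        doc_a and doc_b are assumed to be dicts with 'text', 'title'
        and 'citation' string fields -- adjust to the real schema.
        """
        len_a, len_b = len(doc_a['text']), len(doc_b['text'])
        # Length comparison: 1.0 for identical lengths, falling toward 0 as they diverge.
        length_score = min(len_a, len_b) / max(len_a, len_b) if max(len_a, len_b) else 1.0

        # Title and citation comparison via gestalt pattern matching (difflib).
        title_score = difflib.SequenceMatcher(None, doc_a['title'], doc_b['title']).ratio()
        citation_score = difflib.SequenceMatcher(None, doc_a['citation'], doc_b['citation']).ratio()

        # Full-text comparison; quick_ratio() is a cheaper upper bound if this is too slow.
        text_score = difflib.SequenceMatcher(None, doc_a['text'], doc_b['text']).ratio()

        return [length_score, title_score, citation_score, text_score]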

So, for example, if the length is spot on and the titles are very similar, I can probably assume it's a duplicate, even if the citation is fairly different, because citations are messy in this corpus. Or you can imagine similar kinds of things (length is off, but other factors are on; all factors are good but not great; etc.).

Ultimately what I'd like to do is have the system identify documents that might be duplicates (this part is easy), and then I say yay or nay. As I vote on these duplicates, it determines what kinds of scores should be expected in a valid duplicate, and learns how to proceed without my yays or nays.


Solution

  • You could use some kind of machine learning classification algorithm that uses your inputs as features.

    That is, what you're asking for is a black-box function that takes a 0-1 score for each of those factors and gives you an overall score as to whether the document pair should be considered a duplicate. You need to choose such a function based on a list of (input, output) pairs, where the inputs are those four features from above (or whatever other ones you think might make sense) and the outputs are either 0 (not duplicate) or 1 (duplicate).
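
    Concretely, each pair you vote on becomes one training row of the four factor scores plus a 0/1 label (the numbers below are just made-up illustrations):

        import numpy as np

        # One row per voted-on document pair:
        # [length_score, title_score, citation_score, text_score]
        X = np.array([
            [0.98, 0.95, 0.40, 0.91],   # voted "yay" despite the messy citation
            [0.97, 0.92, 0.88, 0.95],   # voted "yay"
            [0.55, 0.30, 0.10, 0.22],   # voted "nay"
            [0.90, 0.45, 0.35, 0.50],   # voted "nay"
        ])
        y = np.array([1, 1, 0, 0])      # 1 = duplicate, 0 = not a duplicate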

    This is exactly the standard setting for classification. Some options for accomplishing this include logistic regression, decision trees, neural networks, support vector machines, and many many more.

    Logistic regression might be a good choice; it's fairly easy and quick to implement but also quite powerful. Basically, it chooses a weight for each feature based on the training data, and then predicts by adding up the weighted features and passing that sum through the logistic function 1/(1+exp(-sum)) to give a probability of being a duplicate. Thresholding that probability at 0.5 amounts to selecting a separating hyperplane in the 4-dimensional space defined by your features: if the 4-dimensional input point lies on one side it's classified as a duplicate, on the other side as a non-duplicate.
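
    scikit-learn isn't mentioned in the question, but if it's available, the whole train-and-predict step is only a few lines (reusing the X and y arrays sketched above):

        from sklearn.linear_model import LogisticRegression

        clf = LogisticRegression()
        clf.fit(X, y)                    # learns one weight per factor plus an intercept

        # Probability that a new pair is a duplicate, given its four factor scores:
        new_pair = [[0.96, 0.89, 0.20, 0.93]]
        duplicate_probability = clf.predict_proba(new_pair)[0, 1]

        # e.g. auto-accept confident predictions and only ask for a vote on the rest
        if duplicate_probability > 0.9:
            print("treat as duplicate")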

    If you want a simple numpy implementation to look at for reference, here's one that I wrote for a class assignment.


    Note that this approach only tells you what to do for a pairwise comparison: unless your number of documents is quite small, you probably won't want to run it on every pair of documents (the fuzzy content matching at least is probably fairly expensive to compute, though with logistic regression the actual prediction is cheap). You'll probably need some heuristic for deciding which pairs to even consider as candidate duplicates (based on, say, a nearest-neighbors title search, citation matching, or TF-IDF scores).
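
    As a sketch of that kind of pre-filtering (again leaning on scikit-learn, with the neighbor count and field names as things to adapt): a TF-IDF index over the titles plus a nearest-neighbors lookup gives a short candidate list per document, and only those pairs need the expensive text comparison and classification.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.neighbors import NearestNeighbors

        titles = [doc['title'] for doc in documents]   # 'documents' = your corpus
        tfidf = TfidfVectorizer().fit_transform(titles)

        # For each document, find the 5 titles closest in cosine distance (includes itself).
        nn = NearestNeighbors(n_neighbors=5, metric='cosine').fit(tfidf)
        _, indices = nn.kneighbors(tfidf)

        candidate_pairs = set()
        for i, neighbors in enumerate(indices):
            for j in neighbors:
                if i != j:
                    candidate_pairs.add((min(i, j), max(i, j)))
        # Only these pairs get scored and run through the classifier.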