Tags: cluster-analysis, data-mining, distance, similarity

Combining different similarities to build one final similarity


I'm fairly new to data mining and recommendation systems, and I'm now trying to build a recommendation system for users that have the following attributes:

  • city
  • education
  • interest

To calculate the similarity between users, I'm going to apply cosine similarity and discrete similarity. For example:

  • city: if x = y then d(x,y) = 0; otherwise, d(x,y) = 1.
  • education: here I will use cosine similarity over the words that appear in the name of the department or bachelor's degree.
  • interest: there will be a hardcoded number of interests a user can choose from, and cosine similarity will be calculated over two vectors like this:

1 0 0 1 0 0 ... n
1 1 1 0 1 0 ... n

where 1 means the presence of the interest and n is the total number of all interests.
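To make the setup concrete, here is a minimal Python sketch of the two similarity functions described above, using the example interest vectors from the question:

    import math

    # Discrete city similarity: 1.0 if the cities match, else 0.0
    # (the complement of the distance d(x, y) defined above).
    def city_similarity(x, y):
        return 1.0 if x == y else 0.0

    # Cosine similarity between two equal-length numeric vectors.
    def cosine_similarity(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Interest vectors from the example (1 marks a chosen interest):
    u = [1, 0, 0, 1, 0, 0]
    v = [1, 1, 1, 0, 1, 0]
    print(cosine_similarity(u, v))  # ~0.354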

My question is: how should I combine those 3 similarities in an appropriate way? I mean, just summing them doesn't sound quite smart, does it? Also, I would like to hear comments on my "newbie similarity system", hah.


Solution

  • There are no hard-and-fast answers, since these things depend greatly on your input and problem domain. A lot of the work of machine learning is the art (not science) of preparing your input, for this reason. I can give you some general ideas to think about. You have two issues: making a meaningful similarity out of each of these attributes, and then combining them.

    The city similarity sounds reasonable but really depends on your domain. Is it really the case that being in the same city means everything, and being in neighboring cities means nothing? For example, does being in similarly-sized cities count for anything? In the same state? If they do, your similarity should reflect that.
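    As a sketch of what a graded city similarity could look like (the same-state lookup and the 0.5 partial credit are illustrative assumptions, not recommended values):

        # Hypothetical graded city similarity: full credit for the same
        # city, partial credit (0.5, an arbitrary choice) for the same
        # state, nothing otherwise. state_of maps city -> state.
        def city_similarity(a, b, state_of):
            if a == b:
                return 1.0
            if state_of.get(a) == state_of.get(b):
                return 0.5
            return 0.0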

    Education: I understand why you might use cosine similarity, but that is not going to address the real problem here, which is handling different tokens that mean the same thing. You need "eng" and "engineering" to match, and "ba" and "bachelors", things like that. Once you prepare the tokens that way, it might give good results.
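    For example, one way to prepare the tokens is a simple normalization pass before computing cosine similarity; the synonym map here is made up and would have to be built from your real data:

        # Illustrative synonym map; a real one must come from your data.
        SYNONYMS = {"eng": "engineering", "ba": "bachelors", "b.a.": "bachelors"}

        def normalize(text):
            # Lowercase, split on whitespace, and map each token to its
            # canonical form if one is known.
            return [SYNONYMS.get(t, t) for t in text.lower().split()]

        print(normalize("BA Eng"))  # ['bachelors', 'engineering']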

    Interest: I don't think cosine will be the best choice here; try a simple Tanimoto coefficient similarity (just size of intersection over size of union).
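    A sketch of the Tanimoto coefficient, with interests represented as sets of indices:

        # Tanimoto (Jaccard) coefficient: |intersection| / |union|.
        def tanimoto(a, b):
            a, b = set(a), set(b)
            union = a | b
            return len(a & b) / len(union) if union else 0.0

        # The question's interest vectors, written as index sets:
        print(tanimoto({0, 3}, {0, 1, 2, 4}))  # 1/5 = 0.2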

    You can't just sum them, as I assume you still want a value in the range [0,1]. You could average them. That assumes the outputs of each of these are directly comparable, that they're in the same "units", if you will. They aren't here; for example, it's not as if they are probabilities.

    It might still work OK in practice to average them, perhaps with weights. Under a plain average, for example, being in the same city counts exactly as much as having exactly the same interests. Is that true, or should it be less important?
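    A weighted average might look like the sketch below; the weights are placeholders that you would tune against your data, not recommended values:

        # Weighted average of the three per-attribute similarities.
        # The weights are arbitrary placeholders to be tuned.
        def combined_similarity(sim_city, sim_edu, sim_interest,
                                weights=(0.2, 0.3, 0.5)):
            w_city, w_edu, w_int = weights
            total = w_city + w_edu + w_int
            return (w_city * sim_city
                    + w_edu * sim_edu
                    + w_int * sim_interest) / total

        print(combined_similarity(1.0, 0.4, 0.2))  # 0.42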

    You can try and test different variations and weights, as hopefully you have some scheme for testing against historical data. I would point you to our project, Mahout, as it has a complete framework for recommenders and evaluation.

    However, all these sorts of solutions are hacky and heuristic. I think you might want to take a more formal approach to feature encoding and similarities. If you're willing to buy a book and like Mahout, Mahout in Action has good coverage in the clustering chapters on how to select and encode features and then how to make one similarity out of them.