I have the following user behavior data:

1. like
2. dislike
3. rating
4. product viewed
5. product purchased
Spark MLlib supports implicit behavioral data with a confidence score of 0 or 1; see http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html.
For example, if user 1 viewed product A, the record would look like:

1,A,1 (userId, productId, binary confidence score)
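In MLlib terms, I understand that single event would become something like the sketch below (product "A" is assumed here to be mapped to the integer ID 100, since Rating takes integer user and product IDs):

```scala
import org.apache.spark.mllib.recommendation.Rating

// Sketch only: product "A" is assumed to be mapped to integer ID 100.
// The third field is the confidence for this single view event.
val viewOfProductA = Rating(1, 100, 1.0)
```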
But given the nature of these behaviors, a product that was liked carries stronger confidence than one that was merely viewed, and a product that was bought carries stronger confidence than one that was viewed.
How can one model the data based on the type of behavior?
Actually, implicit data does not have to be 0 or 1. The values are treated as a confidence or strength of association rather than a rating. You can simply model actions that show a higher association between user and item as having a higher confidence: a like is stronger than a view, and a purchase is stronger than a like.
In fact, negative confidence fits into this framework too (and I know MLlib implements that): a dislike can map to a negative value.
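For example, here is a minimal sketch of how that weighting could be wired into MLlib's implicit ALS. The event names, IDs, and weights are placeholders to illustrate the idea, not recommended values:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.recommendation.{ALS, Rating}

val sc = new SparkContext("local[*]", "implicit-confidence-example")

// Illustrative weights only -- tune these for your data.
// A dislike gets a negative value, which MLlib's implicit ALS accepts.
val weight = Map("view" -> 1.0, "like" -> 10.0, "purchase" -> 50.0, "dislike" -> -10.0)

// Hypothetical input: (userId, productId, eventType) triples.
val events = sc.parallelize(Seq(
  (1, 100, "view"), (1, 100, "like"),
  (2, 100, "purchase"), (2, 101, "dislike")
))

// Sum the weights per (user, product) so repeated events reinforce each other,
// then train an implicit-feedback ALS model on the weighted data.
val ratings = events
  .map { case (user, product, event) => ((user, product), weight(event)) }
  .reduceByKey(_ + _)
  .map { case ((user, product), confidence) => Rating(user, product, confidence) }

val model = ALS.trainImplicit(ratings, 10, 10) // rank = 10, 10 iterations
```

Summing the weights per (user, product) means someone who both viewed and liked an item ends up with a higher confidence than someone who only viewed it.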
Exactly what the values are is up to you to tune, really. If you have no better idea, I think it's reasonable to pick values that correspond to relative frequency. For example, if there are generally 50x more page views than likes, maybe a like's value is 50x that of a page view.
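If you'd rather derive that ratio from the data than hand-pick it, a sketch along these lines would do it (same hypothetical (userId, productId, eventType) RDD as above):

```scala
import org.apache.spark.rdd.RDD

// Weight a like by how much rarer it is than a view: 50x more views
// than likes means a like counts as much as 50 views.
def likeWeight(events: RDD[(Int, Int, String)]): Double = {
  val counts = events.map { case (_, _, event) => (event, 1L) }
                     .reduceByKey(_ + _)
                     .collectAsMap()
  counts.getOrElse("view", 1L).toDouble / math.max(counts.getOrElse("like", 1L), 1L)
}
```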