I am building a recommendation system to recommend trainings to employees based on user features and item features, and according to its documentation LightFM is a great algorithm for this.
My user dataframe:
User-Id name age los ou gender skills
0 1 Luis 21 IFS architecture M python
1 2 Peter 22 ADV pmo M pm
2 3 Jurgen 23 IFS architecture M sql
3 4 Bart 24 IFS architecture M python
4 5 Cristina 25 ADV pmo F pm
5 6 Lambert 33 IFS development M sql
6 7 Rahul 44 IFS development M python
My trainings dataframe:
Training-Id training name main skill
0 1 basic python python
1 2 advanced python python
2 3 basic scrum pm
3 4 advanced scrum pm
4 5 basic sql sql
5 6 advanced sql sql
My training-taken dataframe (10 means a user took that training, so my weights are only 10s):
User-Id Training-Id TrainingTaken
0 1 1 10
1 1 2 10
2 2 3 10
3 2 4 10
4 3 5 10
5 3 6 10
6 4 1 10
7 4 2 10
I found this great helper to create the matrices: https://github.com/Med-ELOMARI/LightFM-Dataset-Helper
So:
items_column = "Training-Id"
user_column = "User-Id"
ratings_column = "TrainingTaken"
items_feature_columns = [
"training name",
"main skill"
]
user_features_columns = ["name","age","los","ou", "gender", "skills"]
dataset_helper_instance = DatasetHelper(
users_dataframe=usersdf,
items_dataframe=trainingsdf,
interactions_dataframe=trainingstakendf,
item_id_column=items_column,
items_feature_columns=items_feature_columns,
user_id_column=user_column,
user_features_columns=user_features_columns,
interaction_column=ratings_column,
clean_unknown_interactions=True,
)
# run the routine
# you can also run the steps separately one by one | the routine function simplifies the flow
dataset_helper_instance.routine()
The above helper returns the interaction matrix, the weight matrix, etc.
dataset_helper_instance.weights.todense()
matrix([[10., 10., 0., 0., 0., 0.],
[ 0., 0., 10., 10., 0., 0.],
[ 0., 0., 0., 0., 10., 10.],
[10., 10., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.]], dtype=float32)
dataset_helper_instance.interactions.todense()
matrix([[1., 1., 0., 0., 0., 0.],
[0., 0., 1., 1., 0., 0.],
[0., 0., 0., 0., 1., 1.],
[1., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]], dtype=float32)
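As a quick sanity check (values copied from the dense outputs above), the weight matrix is just the interaction matrix scaled by 10:

```python
import numpy as np

# Interaction matrix as printed above (7 users x 6 trainings).
interactions = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
], dtype=np.float32)

# Every TrainingTaken value is 10, so the weights are 10 * interactions.
weights = 10 * interactions
```

The last three rows are all zeros because users 5, 6 and 7 have no rows in the training-taken dataframe.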
Then I train/test split and fit the model:
from lightfm import LightFM
from lightfm.cross_validation import random_train_test_split
(train, test) = random_train_test_split(interactions=dataset_helper_instance.interactions, test_percentage=0.2)
model = LightFM(loss='warp')
model.fit(
interactions=dataset_helper_instance.interactions,
sample_weight=dataset_helper_instance.weights,
item_features=dataset_helper_instance.item_features_list,
user_features=dataset_helper_instance.user_features_list,
verbose=True,
epochs=50,
num_threads=20,
)
Then I check the AUC and precision:
from lightfm.evaluation import precision_at_k
from lightfm.evaluation import auc_score
train_precision = precision_at_k(model, train,item_features=dataset_helper_instance.item_features_list, user_features=dataset_helper_instance.user_features_list , k=10).mean()
test_precision = precision_at_k(model, test, item_features=dataset_helper_instance.item_features_list, user_features=dataset_helper_instance.user_features_list,k=10).mean()
train_auc = auc_score(model, train,item_features=dataset_helper_instance.item_features_list, user_features=dataset_helper_instance.user_features_list).mean()
test_auc = auc_score(model, test,item_features=dataset_helper_instance.item_features_list, user_features=dataset_helper_instance.user_features_list).mean()
print('Precision: train %.2f, test %.2f. '% (train_precision, test_precision))
print('AUC: train %.2f, test %.2f.' % (train_auc, test_auc))
Precision: train 0.15, test 0.10.
AUC: train 0.90, test 1.00.
Then I make predictions for an existing user:
scores = model.predict(user_ids=6, item_ids=[1,2,3,5,6])
print(scores)
[ 0.01860116 -0.20987387 0.06134995 0.08332028 0.13678455]
Great, I can get some predicted trainings to follow for user ID 6.
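By the way, to turn those raw scores into an actual ranked list of training ids, a numpy argsort does it (ids and scores copied from the predict call above):

```python
import numpy as np

# Item ids queried and the scores model.predict returned for user 6.
item_ids = np.array([1, 2, 3, 5, 6])
scores = np.array([0.01860116, -0.20987387, 0.06134995, 0.08332028, 0.13678455])

# Sort the ids by descending score to get recommendations, best first.
ranked = item_ids[np.argsort(-scores)]
# ranked -> [6 5 3 1 2]
```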
Now I want to predict for new users (cold start). I tried the following:
dataset = Dataset()
new_user_feature = [8,{'name:John', 'Age:33', 'los:IFS','ou:development', 'skills:sql'} ]
new_user_feature = [8,new_user_feature]
new_user_feature = dataset.build_user_features([new_user_feature])
#predict new users User-Id name age los ou gender skills
model.predict(0, item_ids=[1,2,3,5,6], user_features=new_user_feature)
However I get this error:
ValueError: user id 8 not in user id mappings.
What am I missing here?
I cannot test it, but I think the problem is in these two lines:
new_user_feature = [8,{'name:John', 'Age:33', 'los:IFS','ou:development', 'skills:sql'} ]
new_user_feature = [8,new_user_feature]
According to the documentation, dataset.build_user_features(..) wants an iterable of the form (user id, [list of feature names]) or (user id, {feature name: feature weight}).
In your case, I think you should replace the two lines above with just:
new_user_feature = [8,{'name':'John', 'Age':33, 'los':'IFS','ou':'development', 'skills':'sql'} ]
# Is the gender missing?
If that doesn't work, maybe the input format is something like this:
new_user_feature = [8,['John', 33, 'IFS', 'development', 'sql'] ]
Let me know if it solves the issue.
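One more note on why the error appears at all: your new Dataset() was never fitted, so it has no user-id or feature mappings, and build_user_features can only encode ids and features it already knows. Whatever input format turns out to be right, what predict ultimately needs is a sparse user-features row over the fitted feature vocabulary. A rough pure-scipy sketch of such a row (the feature_map below is made up for illustration; in LightFM it is created by Dataset.fit / fit_partial):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical feature vocabulary; LightFM builds this mapping when you
# call Dataset.fit(...) / fit_partial(...) with the user_features list.
feature_map = {"los:IFS": 0, "ou:development": 1, "skills:sql": 2, "skills:python": 3}

def user_feature_row(feature_names, feature_map):
    """Build a 1 x n_features one-hot CSR row for a single new user,
    roughly the shape build_user_features produces per user."""
    cols = [feature_map[name] for name in feature_names]
    data = np.ones(len(cols), dtype=np.float32)
    rows = np.zeros(len(cols), dtype=np.int64)
    return csr_matrix((data, (rows, cols)), shape=(1, len(feature_map)))

row = user_feature_row(["los:IFS", "ou:development", "skills:sql"], feature_map)
# row.toarray() -> [[1. 1. 1. 0.]]
```

You would then pass a matrix like this as user_features to model.predict, with user_ids being the row index inside that matrix, not the raw User-Id.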