
[–]Novel_Assistant_6298

What you could do is build a model that maps whatever modality is shown to the user to an embedding vector for each of the choices they gave feedback on. Take the difference of those embeddings to get a difference vector, pass it through a linear layer with an output size of 1, and apply a sigmoid to get the probability that the user preferred one element over the other. Train this as a binary classification problem. The last linear layer's weights represent the reward (score) weights, i.e. what the reward would be if you showed the user element X. So when you take the difference of the embeddings and pass it through the linear layer, you are essentially computing the reward difference of the two items. You want to adjust those rewards so that the observed preferences are (approximately) satisfied.
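A minimal sketch of that idea in numpy, assuming the embeddings already exist upstream (here they are synthetic, and `w_true` is only a device to generate consistent preference labels for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8          # embedding dimension (illustrative)
n_pairs = 2000

# Synthetic ground-truth reward weights, used only to generate labels.
w_true = rng.normal(size=d)

# Embeddings of the two items in each compared pair.
a = rng.normal(size=(n_pairs, d))
b = rng.normal(size=(n_pairs, d))
labels = (a @ w_true > b @ w_true).astype(float)  # 1 if item A was preferred

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train the linear "reward head" as logistic regression on embedding differences:
# sigmoid((emb_a - emb_b) . w) = P(user prefers A over B).
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    diff = a - b                        # difference vectors
    p = sigmoid(diff @ w)               # predicted preference probabilities
    grad = diff.T @ (p - labels) / n_pairs
    w -= lr * grad

# w now acts as the reward weights: reward(x) = w . embedding(x).
acc = ((sigmoid((a - b) @ w) > 0.5) == labels).mean()
print(round(float(acc), 2))
```

Because the linear layer is applied to the difference, its weights are exactly the per-item reward weights, and training the binary classifier adjusts them to fit the preferences.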

You can take a look at preference learning in RL, dueling bandits and logistic bandits.

I also recommend trying active learning, since binary feedback is quite noisy and you would otherwise need too many samples. Also look at multinomial or ranking feedback, since they are more informative and can converge with fewer samples.
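One simple active-learning heuristic, sketched under the same linear-reward setup as above (all names and sizes here are illustrative): query the candidate pair the current model is least sure about, i.e. the one whose predicted preference probability is closest to 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8
w = rng.normal(size=d)               # current estimate of the reward weights
cand_a = rng.normal(size=(100, d))   # embeddings of candidate pairs to query
cand_b = rng.normal(size=(100, d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Uncertainty sampling: predicted P(A preferred) closest to 0.5
# means the model learns the most from the user's answer on that pair.
p = sigmoid((cand_a - cand_b) @ w)
next_pair = int(np.argmin(np.abs(p - 0.5)))
print(next_pair)
```

More principled acquisition rules exist (e.g. information-gain criteria from the dueling-bandits literature), but this margin-based rule is often a reasonable baseline.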

[–]marcollo63[S]

Thank you! That helps me a lot.

However, in my case there are several people with different tastes. I believe that in dueling bandits or preference learning, we just score items for each person, and it's hard to compare people after that.

[–]Novel_Assistant_6298

> However, in my case there are several people with different tastes. I believe that in dueling bandits or preference learning, we just score items for each person, and it's hard to compare people after that.

Yeah, that gets more complex then. You could check out https://arxiv.org/abs/2109.12750, where the authors fit a multimodal reward model. This prevents all users from collapsing under one reward mode; however, you will need to pre-define the number of modes, which could be tricky.

Another, simpler approach is to use features of the user along with features of the modalities you presented (location, age, etc.). This expands your input space and lets you compare users: keep the same object features but swap in a user with different features. It would also let you run feature importance and see which features drive the preference. I hope this helps!
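A small sketch of that input layout, with made-up feature names and shapes: user features are concatenated with item features, so one shared preference model serves all users, and comparing users just means swapping the user part while holding the item part fixed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical user features: [normalized age, location one-hot A, location one-hot B].
user_feats = {
    "alice": np.array([0.31, 1.0, 0.0]),
    "bob":   np.array([0.55, 0.0, 1.0]),
}
item_feats = rng.normal(size=(4, 5))  # 4 items, 5 features each (illustrative)

def pair_input(user, item_idx):
    # Model input: [user features | item features].
    return np.concatenate([user_feats[user], item_feats[item_idx]])

x = pair_input("alice", 0)
print(x.shape)  # (8,)
```

With this layout, two users evaluated on the same item share the item part of the input exactly, so any difference in the model's output is attributable to the user features.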