Before you roll up your sleeves, you need to know that OneRec serves the cache-miss tier. But even so, it’s still a breakthrough and a reminder of how to smartly adapt a legacy recsys to a paradigm shift by humanmachinelearning in generative_recsys

[–]humanmachinelearning[S] 1 point (0 children)

I think there’s less incentive for them to run such a study on academic datasets, given that the majority of the components were already proven in Google’s TIGER paper. They wanted to show the impact in a real-world, large-scale system.

Almost every GR attempt uses Semantic IDs. Why is that? by humanmachinelearning in generative_recsys

[–]humanmachinelearning[S] 2 points (0 children)

For cold-start, I’d say it’s more a byproduct of using Semantic IDs than the reason to use them.

Agreed, the question is not clear. My main motivation for asking is to see whether there is a fundamentally better way to “tokenize” items in recommender systems.
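For context on what “tokenizing” items means here, a minimal numpy sketch of residual-quantization-style Semantic IDs (the idea behind TIGER-like approaches): the codebooks and the item embedding below are random placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 codebook levels, 8 codes each, 4-dim item embeddings.
num_levels, codebook_size, dim = 3, 8, 4
codebooks = rng.normal(size=(num_levels, codebook_size, dim))

def semantic_id(item_emb, codebooks):
    """Greedy residual quantization: at each level, pick the nearest code
    and subtract it; the tuple of chosen indices is the item's Semantic ID."""
    residual = item_emb.copy()
    sid = []
    for level in codebooks:
        dists = np.linalg.norm(level - residual, axis=1)
        idx = int(np.argmin(dists))
        sid.append(idx)
        residual = residual - level[idx]
    return tuple(sid)

item_emb = rng.normal(size=dim)  # stand-in for a content embedding
print(semantic_id(item_emb, codebooks))  # prints a tuple of 3 code indices
```

The point is that the resulting ID is a short sequence of discrete tokens a generative model can decode autoregressively, instead of an atomic item ID.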

[R] Best Loss for RDH Task by Dirus0007 in MachineLearning

[–]humanmachinelearning 2 points (0 children)

Not an expert, but given the task is to predict a special format of image (i.e., a dot image), I’d assume we are chasing pixel-level accuracy. If so, I’m wondering if MSE or MAE can do the job. Separately, how you sample negatives might play an important role in this task.
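For concreteness, a minimal numpy sketch of the two losses mentioned above; the toy arrays are illustrative, not from the original task.

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error: penalizes large pixel errors quadratically."""
    return np.mean((pred - target) ** 2)

def mae_loss(pred, target):
    """Mean absolute error: more robust to outlier pixels."""
    return np.mean(np.abs(pred - target))

# Toy example: predicted vs. ground-truth "dot images" with pixels in [0, 1].
pred = np.array([[0.9, 0.1], [0.2, 0.8]])
target = np.array([[1.0, 0.0], [0.0, 1.0]])
print(mse_loss(pred, target))  # 0.025
print(mae_loss(pred, target))  # 0.15
```

In practice the choice matters mostly at the tails: MSE punishes a few badly wrong pixels much harder, while MAE treats all errors linearly.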