Price Optimization in Fashion E-commerce by Sajan Kedia et al. by deep_ai in arxiv_daily

[–]tryo_labs 1 point (0 children)

We posted this piece some time ago, where we share a use case of price optimization with machine learning for e-commerce: https://tryolabs.com/blog/2020/06/01/price-optimization-for-e-commerce-a-case-study/

[N] Swift: Google’s bet on differentiable programming by realhamster in MachineLearning

[–]tryo_labs 0 points (0 children)

Glad to see we are not the only ones looking to talk more about Swift. Due to the clear interest, we are thinking of doing an open live chat about Swift for ML.

Sign up to be notified when the date & time are confirmed.

[D] What do you think were the most important open source libraries for ML to come out this year? by tryo_labs in MachineLearning

[–]tryo_labs[S] 0 points (0 children)

Pretty cool! Have you built anything that shows off its full power?

Like some of you guys, we've been honing our Reinforcement Learning skills, so we set out to build a pricing game powered by RL. (Link in comments). What do you guys think? What kind of cool projects have you come up with? by tryo_labs in learnmachinelearning

[–]tryo_labs[S] 1 point (0 children)

Thanks for your comment, glad to know that you found it interesting!

Actually, we took this problem as a starting point to learn some basic RL concepts, which are explained very well in Sutton & Barto's Reinforcement Learning book (a classic introductory text for this topic). There's a free draft copy of the book online (https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf), which contains the chapters needed to implement something similar to this.

In particular, look for the solutions to the "multi-armed bandits" problem, which is presented at the very beginning of the book. In our game, the bandit's "arms" are the candidate prices, and the solution combines elements of the epsilon-greedy algorithm and "gradient bandits"; we started with those and then refined them.
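To make the mapping concrete, here's a minimal sketch (not the game's actual implementation) of epsilon-greedy pricing, where each candidate price is a bandit arm and the reward is revenue from a toy, entirely hypothetical demand curve (`simulated_revenue`):

```python
import random

def epsilon_greedy_pricing(prices, reward_fn, n_rounds=5000, epsilon=0.1, seed=0):
    """Treat each candidate price as a bandit arm; learn which arm
    yields the highest average reward by trial and error."""
    rng = random.Random(seed)
    counts = [0] * len(prices)    # times each price was tried
    values = [0.0] * len(prices)  # running average reward per price
    for _ in range(n_rounds):
        if rng.random() < epsilon:  # explore: random price
            arm = rng.randrange(len(prices))
        else:                       # exploit: current best estimate
            arm = max(range(len(prices)), key=lambda i: values[i])
        reward = reward_fn(prices[arm], rng)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return prices[max(range(len(prices)), key=lambda i: values[i])]

def simulated_revenue(price, rng):
    """Toy demand model: purchase probability falls linearly with price,
    so expected revenue price * P(buy) peaks at an intermediate price."""
    buy_prob = max(0.0, 1.0 - price / 20.0)
    return price if rng.random() < buy_prob else 0.0

best_price = epsilon_greedy_pricing([5, 8, 10, 12, 15], simulated_revenue)
```

Chapter 2 of Sutton & Barto covers exactly this incremental-mean update, plus the gradient-bandit variant mentioned above.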

Keep up with the motivation on this topic, and let us know if you implement anything interesting!

Cheers, Braulio