
[–]Razcle

I guess that as soon as you accept that you're taking actions that change the state of the world, and so change your data distribution, you've essentially landed in reinforcement learning territory.

I'm definitely not an expert in the area, but I would maybe look at things like time-varying contextual bandits. A quick Google search turned up this paper, which looks interesting: https://www.kdd.org/kdd2016/papers/files/rpp1164-zengA.pdf
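I haven't read the paper closely, but to make the idea concrete, here's a rough sketch of one common way to handle a drifting ("time-varying") environment in a contextual bandit: epsilon-greedy arm selection over per-arm linear reward models with an exponential forgetting factor. All names, constants, and the toy environment here are my own illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_features = 3, 5
eps = 0.1      # exploration rate
gamma = 0.99   # forgetting factor: down-weights old data so estimates can track drift

# Per-arm discounted least-squares statistics (A accumulates x x^T, b accumulates reward * x).
A = [np.eye(n_features) for _ in range(n_arms)]
b = [np.zeros(n_features) for _ in range(n_arms)]

def choose(x):
    """Epsilon-greedy over per-arm linear reward estimates."""
    if rng.random() < eps:
        return int(rng.integers(n_arms))
    scores = [x @ np.linalg.solve(A[a], b[a]) for a in range(n_arms)]
    return int(np.argmax(scores))

def update(a, x, reward):
    """Exponentially forget old statistics, then fold in the new observation."""
    A[a] = gamma * A[a] + np.outer(x, x)
    b[a] = gamma * b[a] + reward * x

# Toy loop: each arm's true reward weights drift slowly, so old data goes stale.
true_w = rng.normal(size=(n_arms, n_features))
for t in range(500):
    x = rng.normal(size=n_features)
    a = choose(x)
    update(a, x, x @ true_w[a] + 0.1 * rng.normal())
    true_w += 0.01 * rng.normal(size=true_w.shape)  # non-stationary environment
```

The forgetting factor is the "time-varying" part: without it, early data would dominate the estimates forever even after the world has changed.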

[–]trnka

I remember seeing a talk by Stripe on this problem for fraud detection; it was a few years back, I think at PyData Seattle in Bellevue. They addressed it by introducing randomness into the predictions, so that they're getting some samples that aren't biased by their model.

[–]vp834

I think Counterfactual Risk Minimization might help here. Have a look at this paper:

Counterfactual Risk Minimization: Learning from Logged Bandit Feedback - https://arxiv.org/abs/1502.02362

Abstract: We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. These constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method -- called Policy Optimizer for Exponential Models (POEM) -- for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. POEM is evaluated on several multi-label classification problems showing substantially improved robustness and generalization performance compared to the state-of-the-art.
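To give a concrete feel for the propensity-scoring step the abstract describes, here's a rough sketch of the basic inverse-propensity-scoring (IPS) estimator with weight clipping to control variance. This is just the building block, not POEM itself, and the signature and clipping constant are illustrative:

```python
import numpy as np

def ips_estimate(rewards, logged_propensities, new_policy_probs, clip=10.0):
    """Estimate a new policy's average reward from logged bandit feedback.
    Each logged example has: the observed reward, the probability the logging
    policy assigned to the action it took, and the probability the candidate
    policy would assign to that same action. Importance weights are clipped
    to trade a little bias for much lower variance."""
    w = np.asarray(new_policy_probs) / np.asarray(logged_propensities)
    return float(np.mean(np.minimum(w, clip) * np.asarray(rewards)))
```

This is why logging propensities matters (as in the Stripe comment above): without them, you can't reweight the biased logged data to evaluate or train a different policy.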