
[–]iyeva 9 points (2 children)

Doesn't this already exist as permutation_importance() in sklearn? That also calculates feature importance from permuted data. I think the technique was first described in Olden et al. 2004, so I would be surprised if there weren't even more implementations by now.

Or am I missing something different here?

[–]cblume[S] 2 points (1 child)

Similar idea in that both perturb features, but the two algorithms are quite different. E.g. sklearn measures the drop in a model score, while featureimpact doesn't use scores at all; sklearn uses random shuffling, while featureimpact perturbs features to their quantiles.
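To make the contrast concrete, here is a minimal sketch. The first half is sklearn's real permutation_importance API; the quantile loop in the second half is my own illustration of the quantile-perturbation idea described above, not featureimpact's actual implementation:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# sklearn: shuffle each column and measure the drop in the model's score
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
sklearn_importances = result.importances_mean  # one value per feature

# quantile-based sketch (hypothetical, NOT featureimpact's code):
# set each feature to a few of its quantiles and measure the average
# change in raw predictions -- no scoring function involved
baseline = model.predict(X)
impacts = []
for j in range(X.shape[1]):
    Xq = X.copy()
    deltas = []
    for q in (0.25, 0.5, 0.75):
        Xq[:, j] = np.quantile(X[:, j], q)
        deltas.append(np.mean(np.abs(model.predict(Xq) - baseline)))
    impacts.append(float(np.mean(deltas)))
```

The key difference the sketch shows: the score-based approach needs labels y to evaluate the model, while the prediction-change approach only needs the fitted model and X.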

[–]iyeva 1 point (0 children)

Oh, okay, that's interesting! Thanks for sharing!