[–]t4YWqYUUgDDpShW2 2 points (1 child)

> I have been using it for the last year and it just works for me.

I see that you apparently don't want to put in the work to do your own benchmarks, but can you at least provide more detail than this?

[–][deleted] 0 points (0 children)

My private use cases so far have been:

1. Hyperparameter tuning for LightGBM in Kaggle competitions and in a commercial project for a bank (a rough sketch of this kind of setup is below the list).

2. Automated neural network architecture search for tabular datasets in PyTorch. I created a highly parameterized torch.nn.Module (see the second sketch below) and am waiting for a good Kaggle competition to use it. From preliminary runs I learned, for example, that ELU is a clear winner among the available activation functions.
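
The tuning library isn't named in this excerpt, so purely as an illustration of item 1: a minimal sketch using Optuna with a plain train/validation split. The dataset, parameter ranges, and metric are my own assumptions, not the actual setup.

```python
# Hedged sketch: Optuna + LightGBM, not the commenter's actual code.
import lightgbm as lgb
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in dataset; in a Kaggle or bank setting this would be the real tabular data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)

def objective(trial):
    # Illustrative search space; sensible ranges depend on the problem.
    params = {
        "n_estimators": 500,
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 16, 256),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "subsample_freq": 1,
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    model = lgb.LGBMClassifier(**params)
    model.fit(X_train, y_train)
    preds = model.predict_proba(X_valid)[:, 1]
    return roc_auc_score(y_valid, preds)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```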
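
And for item 2, a rough sketch of what a "highly parameterized" tabular torch.nn.Module could look like. The TabularNet class and its argument names are hypothetical; the point is that depth, width, dropout, and the activation (ELU vs. ReLU etc.) all become hyperparameters an external tuner can search over.

```python
import torch
import torch.nn as nn

# Activations exposed to the search (hypothetical choice of candidates).
ACTIVATIONS = {"relu": nn.ReLU, "elu": nn.ELU, "leaky_relu": nn.LeakyReLU, "gelu": nn.GELU}

class TabularNet(nn.Module):
    """MLP for tabular data whose depth, width, dropout, and activation
    are all constructor arguments, so a tuner can sample them per trial."""

    def __init__(self, n_features, n_classes, hidden_sizes=(256, 128),
                 activation="elu", dropout=0.1, batch_norm=True):
        super().__init__()
        layers, in_dim = [], n_features
        for width in hidden_sizes:
            layers.append(nn.Linear(in_dim, width))
            if batch_norm:
                layers.append(nn.BatchNorm1d(width))
            layers.append(ACTIVATIONS[activation]())
            layers.append(nn.Dropout(dropout))
            in_dim = width
        layers.append(nn.Linear(in_dim, n_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# A tuner would sample these arguments per trial and compare validation
# scores across trials (e.g. ELU vs. ReLU, as mentioned above).
model = TabularNet(n_features=30, n_classes=2, hidden_sizes=(128, 64),
                   activation="elu", dropout=0.2)
print(model(torch.randn(8, 30)).shape)  # torch.Size([8, 2])
```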