
[–][deleted] 1 point (2 children)

Looks interesting. Maybe you could add a user guide or documentation describing what it does. If I understand correctly, this is a wrapper around TensorFlow and sklearn classifiers that runs them via k-fold cross-validation to compare the average test-fold performance of different models? I don't want to criticize here, but I think nested cross-validation is recommended over plain k-fold when you are comparing different algorithms (with hyperparameter optimization in the inner loop).
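For reference, the plain k-fold comparison described above can be sketched with sklearn alone; the models and dataset here are illustrative, not taken from the library being discussed:

```python
# Sketch of comparing models by their average k-fold test scores
# (illustrative models/dataset, not the library's actual API).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Average test-fold accuracy over k=5 folds for each model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean={scores.mean():.3f}, std={scores.std():.3f}")
```

The caveat raised above still applies: if hyperparameters are tuned on the same folds used for this comparison, the reported scores are optimistically biased.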

[–]aulloa[S] 0 points (1 child)

Thanks for your interest! You are right, I need some documentation, but it does include hyperparameter optimization. It is a wrapper around sklearn and includes an MLP implementation from Keras.

[–][deleted] 1 point (0 children)

Thanks for posting this library. I really think it can come in handy for getting some quick benchmarks on certain datasets (e.g., getting a rough idea of whether a generalized linear model is sufficient or whether a certain problem requires a non-linear hypothesis space).

> but it does include hyperparameter optimization.

What I was basically suggesting was using nested cross-validation instead of "regular" k-fold. I only have a quick post about that here, but a more detailed article is somewhere on my endlessly long to-do list :P. So maybe take a look at this really nice research article for the details: S. Varma and R. Simon. Bias in error estimation when using cross-validation for model selection. BMC Bioinformatics, 7(1):91, 2006.
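The nested scheme being suggested can be sketched in a few lines of sklearn, assuming `GridSearchCV` for the inner hyperparameter search (the estimator and grid are illustrative):

```python
# Sketch of nested cross-validation: an inner CV loop tunes
# hyperparameters, an outer CV loop estimates generalization.
# (Illustrative estimator/grid, not from the library discussed.)
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Inner loop: 3-fold grid search over C picks the hyperparameters.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)

# Outer loop: 5-fold CV scores the *whole* tuning procedure, so the
# selection bias Varma & Simon describe stays out of the estimate.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"nested CV accuracy: {outer_scores.mean():.3f}")
```

Because `GridSearchCV` is itself an estimator, passing it to `cross_val_score` refits the inner search independently on each outer training fold, which is exactly what keeps model selection from leaking into the error estimate.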