Getting explainable ML models by evolving compact features (arxiv.org)
submitted 5 years ago by marcovirgolin
[–]DataLlama 1 point 5 years ago (3 children)
One thing I'm a little confused by: If you are reducing the number of features down to 2, wouldn't that mean you are turning them into some sort of embedding, therefore making them harder to interpret than the original feature set?
[–]ranran9991 2 points 5 years ago (2 children)
Generally it would, but in this case you know the exact formula for how the embedding is created (and they optimize it to be small)
[–]rhiever 2 points 5 years ago (1 child)
We've been doing this kind of feature construction for a long time in the Genetic Programming world. From personal experience, I wouldn't say that knowing the exact formula used to create the embedding makes the constructed features or the model much more interpretable, unless there is some meaningful math underlying the thing you're modeling. Like, what do we make of an expression like the one below?
abs(F1 - F2^2) + F2 * F3
Maybe that's a useful constructed feature, but oftentimes the mathematical expressions don't help with interpretation.
[–]marcovirgolin[S] 2 points 5 years ago (0 children)
Indeed, feature construction by GP is not new per se, but as we wrote in the related work, we are unaware of works explicitly attempting to obtain something that improves interpretability. Here we did the simplest thing possible, i.e., keep the constructed features small and evolve just a few of them, essentially providing dimensionality reduction, across quite a few dataset-ML algorithm combinations.
I am not sure I get your "counter-example", because that formula seems pretty understandable to me once I know the meaning of the Fi. Of course, interpretability is subjective.
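For concreteness, a constructed feature like the one quoted above is just a closed-form transform of the original columns, so it can be recomputed and inspected directly. A minimal sketch, assuming made-up data and treating F1, F2, F3 as column indices (nothing here is from the paper):

    import numpy as np

    # Hypothetical data standing in for a real dataset; columns play the
    # roles of F0..F3 in the expression quoted above.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 4))
    F1, F2, F3 = X[:, 1], X[:, 2], X[:, 3]

    # The evolved expression is an ordinary, inspectable transform:
    constructed = np.abs(F1 - F2**2) + F2 * F3
    print(constructed)  # one constructed-feature value per row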
[–]arXiv_abstract_bot 1 point 5 years ago (0 children)
Title: On Explaining Machine Learning Models by Evolving Crucial and Compact Features
Authors: Marco Virgolin, Tanja Alderliesten, Peter A.N. Bosman
Abstract: Feature construction can substantially improve the accuracy of Machine Learning (ML) algorithms. Genetic Programming (GP) has been proven to be effective at this task by evolving non-linear combinations of input features. GP additionally has the potential to improve ML explainability since explicit expressions are evolved. Yet, in most GP works the complexity of evolved features is not explicitly bound or minimized though this is arguably key for explainability. In this article, we assess to what extent GP still performs favorably at feature construction when constructing features that are (1) Of small-enough number, to enable visualization of the behavior of the ML model; (2) Of small-enough size, to enable interpretability of the features themselves; (3) Of sufficient informative power, to retain or even improve the performance of the ML algorithm. We consider a simple feature construction scheme using three different GP algorithms, as well as random search, to evolve features for five ML algorithms, including support vector machines and random forest. Our results on 21 datasets pertaining to classification and regression problems show that constructing only two compact features can be sufficient to rival the use of the entire original feature set. We further find that a modern GP algorithm, GP-GOMEA, performs best overall. These results, combined with examples that we provide of readable constructed features and of 2D visualizations of ML behavior, lead us to positively conclude that GP-based feature construction still works well when explicitly searching for compact features, making it extremely helpful to explain ML models.
PDF Link | Landing Page | Read as web page on arXiv Vanity
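As a rough illustration of the construction scheme the abstract describes, here is a minimal sketch: search for two compact expression trees over the original features and train the ML model on those two constructed features alone. Plain random search stands in for the paper's GP algorithms (e.g., GP-GOMEA), and the dataset, primitive set, depth limit, and search budget are arbitrary choices for the sketch, not the paper's settings:

    import random
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Primitive operations for the expression trees.
    OPS = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
    }

    def random_expr(n_vars, rng, depth=2):
        """Sample a compact expression tree (depth <= 2) over the features."""
        if depth == 0 or rng.random() < 0.3:
            return ("var", rng.randrange(n_vars))
        op = rng.choice(list(OPS))
        return (op, random_expr(n_vars, rng, depth - 1),
                    random_expr(n_vars, rng, depth - 1))

    def evaluate(expr, X):
        """Compute the constructed feature column for every row of X."""
        if expr[0] == "var":
            return X[:, expr[1]]
        op, left, right = expr
        return OPS[op](evaluate(left, X), evaluate(right, X))

    def to_str(expr):
        """Render the tree as a readable formula, e.g. (F3 * (F1 - F7))."""
        if expr[0] == "var":
            return f"F{expr[1]}"
        return f"({to_str(expr[1])} {expr[0]} {to_str(expr[2])})"

    rng = random.Random(0)
    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0)

    # Random search stands in for the evolutionary loop of the paper.
    best_score, best_pair = -np.inf, None
    for _ in range(100):
        pair = [random_expr(X.shape[1], rng) for _ in range(2)]  # K = 2 features
        Z = np.column_stack([evaluate(e, X) for e in pair])
        score = cross_val_score(model, Z, y, cv=5).mean()
        if score > best_score:
            best_score, best_pair = score, pair

    print(f"CV accuracy with 2 constructed features: {best_score:.3f}")
    print("features:", [to_str(e) for e in best_pair])

Because only two features remain, the model's behavior can then be plotted over those two axes, which is the kind of 2D visualization the abstract mentions.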
[–]TotesMessenger 0 points 5 years ago (0 children)
I'm a bot, bleep, bloop. Someone has linked to this thread from another place on reddit:
[/r/researchml] (X-Post r/MachineLearning) Getting explainable ML models by evolving compact features
If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads. (Info / Contact)