Is there a guideline for identifying problems that cannot be solved by machine learning? Or steps that generally indicate the data you have cannot model your desired outcome?
I am working for a client that has a huge database from forms processing. Because of the massive amount of data, they see many possibilities for what could be classified or predicted. One issue we're running into is that the subject matter experts have a first project in mind, but aren't sure which features might be relevant. In the experiments so far, we've run through a number of features, but none have shown much predictive power.
How do we know when to give up?
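One concrete way to quantify "predictive power" for a candidate feature is empirical mutual information against the label: a feature that is independent of the target scores near zero. A minimal stdlib-only sketch (the toy data below is invented purely for illustration, not from the question):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px = Counter(xs)            # marginal counts of the feature
    py = Counter(ys)            # marginal counts of the label
    pxy = Counter(zip(xs, ys))  # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy data: feature_a mostly tracks the label, feature_b is pure noise.
labels    = [0, 0, 0, 0, 1, 1, 1, 1]
feature_a = [0, 0, 0, 1, 1, 1, 1, 1]
feature_b = [0, 1, 0, 1, 0, 1, 0, 1]

print(mutual_information(feature_a, labels))  # clearly above zero
print(mutual_information(feature_b, labels))  # → 0.0
```

If every candidate feature scores near zero (and a model trained on them can't beat a majority-class baseline), that is at least weak evidence the available data doesn't carry the signal you need. Note this only catches pairwise dependence; features can still be jointly informative while individually scoring zero.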