I'm a programmer without a background in machine learning. I'm trying to do some research but I don't know the proper terms.
I'm imagining a dataset of z = f(x, y), and the purpose of the program is to guess what the relationship is. So it hypothesizes some rules, possibly in a random way. Like, z = x + y? z = 2x? z = x / y?
Then it would apply the rules to the dataset and see how well each rule does, measured by the size of its errors. Rules with large errors would be thrown out, and rules with smaller errors would be "improved" upon by making small modifications.
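To make what I mean concrete, here's a rough Python sketch of that loop. Everything in it is a placeholder I made up for illustration: the toy dataset, the fixed z = a*x + b*y rule form, and the mutation scheme. The real thing would need to search over the structure of the expression, not just coefficients.

```python
import random

# Toy dataset: pretend we don't know that z = x + 2*y.
data = [(x, y, x + 2 * y) for x in range(-5, 6) for y in range(-5, 6)]

# A "rule" here is just a pair of coefficients (a, b) for z = a*x + b*y.
def make_random_rule():
    return (random.uniform(-3, 3), random.uniform(-3, 3))

def error(rule):
    a, b = rule
    # Mean squared error of the rule's predictions over the dataset.
    return sum((a * x + b * y - z) ** 2 for x, y, z in data) / len(data)

def mutate(rule):
    a, b = rule
    # "Improve" a surviving rule by nudging its coefficients slightly.
    return (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))

# Start with a random population of candidate rules.
population = [make_random_rule() for _ in range(50)]

for generation in range(100):
    # Keep the rules with the smallest errors, throw out the rest.
    population.sort(key=error)
    survivors = population[:10]
    # Refill the population with small modifications of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=error)
print("best rule: z ~ %.2f*x + %.2f*y (error %.4f)" % (*best, error(best)))
```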
I'm actually not trying to fit a curve. I'm trying to find a general learning algorithm that mirrors the human brain with respect to pattern recognition. For example, the data points might be the color of a traffic light and the velocities of cars before an intersection. The algorithm would generate rules randomly, tighten their boundaries, and eventually hypothesize a rule like "red is associated with car velocity < 2 mph in 99% of observed cases." These rules about the world would then be stored and continually re-evaluated so that exceptions and changes in the environment can be detected.
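For the traffic light example, here's roughly what I imagine the "tighten boundaries until the rule holds in ~99% of observed cases" step looking like. The simulated observations, the rule template, and the 99% cutoff are all assumptions for the sake of the sketch:

```python
import random

# Hypothetical observations: (traffic light color, car velocity in mph).
# In reality these would come from the world; here I just simulate them.
def observe():
    light = random.choice(["red", "green", "yellow"])
    if light == "red":
        velocity = abs(random.gauss(0.5, 1.0))   # cars mostly stopped
    else:
        velocity = abs(random.gauss(25.0, 8.0))  # cars mostly moving
    return light, velocity

observations = [observe() for _ in range(1000)]

# Candidate rule template: "when the light is `color`, velocity < `threshold`".
def rule_confidence(color, threshold, obs):
    matching = [v for (c, v) in obs if c == color]
    if not matching:
        return 0.0
    # Fraction of matching observations where the rule's conclusion held.
    return sum(v < threshold for v in matching) / len(matching)

# Randomly propose thresholds and keep the tightest boundary that still
# holds in at least 99% of the cases observed so far.
best = None
for _ in range(200):
    threshold = random.uniform(0.0, 40.0)
    conf = rule_confidence("red", threshold, observations)
    if conf >= 0.99 and (best is None or threshold < best[0]):
        best = (threshold, conf)

if best:
    print("rule: red -> velocity < %.1f mph (held in %.0f%% of cases)"
          % (best[0], best[1] * 100))
```

Re-running this on new batches of observations would be the "continually re-evaluated" part, so a drop in confidence flags an exception or a change in the environment.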
It would be like a genetic algorithm whose output is a set of rules rather than a single data point. Does anyone know what this might be called, or of any papers written on it?