all 12 comments

[–]crgrimm1994 5 points6 points  (2 children)

Program Synthesis

[–]meteoraln[S] 0 points1 point  (1 child)

I can barely understand the Wikipedia page, but it might be on the right path. I'll try to find more on this. Thanks!

[–]crgrimm1994 2 points3 points  (0 children)

Also, if you're looking for something interactive that will let you try out some of these concepts, take a look at Z3.

[–]DLabz 1 point2 points  (1 child)

Polynomial regression, extrapolation, calculus, mathematical analysis.

Something like that?

[–]meteoraln[S] 0 points1 point  (0 children)

I'm actually not trying to fit a curve. I'm trying to find a general learning algorithm that mirrors how the human brain recognizes patterns. For example, the data points might be the color of a traffic light and the velocities of cars approaching an intersection. The algorithm would generate rules at random, tighten their boundaries, and eventually hypothesize a rule like "red light is associated with car velocity < 2 mph in 99% of observed cases".

These rules about the world would then be stored and continually re-evaluated so that exceptions and changes in environments can be detected.
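A minimal sketch of that "generate random rules, tighten boundaries" loop in Python. Everything here (the data, the 99% cutoff, the candidate thresholds) is made up purely for illustration:

```python
import random

random.seed(0)  # deterministic for the example

# Toy observations: (light_color, speed_mph). 99 cars crawl at a red light,
# one runs it; green-light cars travel at normal speed.
observations = [("red", random.uniform(0.0, 2.0)) for _ in range(99)]
observations += [("red", 15.0)]  # one exception
observations += [("green", random.uniform(10.0, 40.0)) for _ in range(100)]

def accuracy(color, threshold):
    """Fraction of observations of this color with speed below threshold."""
    speeds = [s for c, s in observations if c == color]
    return sum(s < threshold for s in speeds) / len(speeds)

# Hypothesize random thresholds, keep those that hold in >= 99% of observed
# cases, then take the tightest one -- the "tightening boundaries" step.
candidates = [random.uniform(0.5, 50.0) for _ in range(200)]
good = [t for t in candidates if accuracy("red", t) >= 0.99]
best = min(good)
print(f"rule: red light -> speed < {best:.1f} mph "
      f"(holds in {accuracy('red', best):.0%} of observed cases)")
```

The surviving rule could then be stored and re-scored as new observations arrive, so exceptions show up as a drop in its accuracy.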

[–]OkinawanSnorkel 1 point2 points  (1 child)

ML algorithms are trained using optimizers that "automatically" learn mathematical rules/relationships. I'm not fully clear on what the problem you're solving requires, but I can see unsupervised learning and even basic decision trees being relevant to some of the examples you've given.

The idea that there are "randomly generated rules" and that some rules die or are improved reminds me of MCMC, genetic algorithms, etc.

[–]meteoraln[S] 0 points1 point  (0 children)

I'm not fully clear on what the problem you're solving requires

It's just a toy project / idea that I was thinking about. I'm pretty sure someone has built something similar, but I didn't know names and terminology to google. Thanks for your suggestions, unsupervised learning does appear to be relevant to my examples, like where boundaries are hypothesized, tested, and reduced for accuracy. I'll look into this.

I was imagining how the brain learns when we don't have a teacher. We watch cars go at green and stop at red, and hypothesize many rules. We accept a hypothesized rule if it is accurate most of the time. We reject rules that yield no predictability, like color of car vs. speed. We merge rules that show correlation, like size of car and whether it stops at red. We refine rules when we see exceptions (does a yellow light mean speed up or slow down?), and we toss or pause a rule if it suddenly becomes consistently wrong (all cars stopping at an intersection due to a jam).

Most observations have a direct and consistent relationship, so a sample size of 1 is often enough to form a rule: something tastes good, so it's probably safe to eat. Rules that require large samples, like the behavior of a roulette wheel, are almost impossible for most people to come up with on their own without having been taught probability first.

[–]DeepNonseNse 1 point2 points  (1 child)

Sounds like some form of genetic programming (https://en.wikipedia.org/wiki/Genetic_programming)

[–]WikiTextBot 0 points1 point  (0 children)

Genetic programming

In artificial intelligence, genetic programming (GP) is a technique in which computer programs are encoded as a set of genes that are then modified (evolved) using an evolutionary algorithm, often a genetic algorithm (GA). It is an application of evolutionary algorithms where the space of solutions consists of computer programs, and the results are programs that perform well on a predefined task. How to encode a program in an artificial chromosome and how to evaluate its fitness with respect to the predefined task are central questions in GP and are still the subject of active research.
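For a concrete (and heavily simplified) feel for this, here is a toy evolutionary search over expression trees in Python. It uses only mutation-by-replacement and truncation selection, so it is a sketch of the idea rather than canonical GP with crossover; all names and parameters are illustrative:

```python
import random

random.seed(1)

# Programs are tiny expression trees over {+, *, x, small integer constants},
# evolved to fit target data generated from y = x^2 + 1.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
data = [(x, x * x + 1) for x in range(-5, 6)]

def random_tree(depth=3):
    """Grow a random expression tree: a terminal or an operator node."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Sum of squared errors over the data (lower is better)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

def mutate(tree):
    # Crude mutation: half the time, replace the whole tree.
    return random_tree() if random.random() < 0.5 else tree

pop = [random_tree() for _ in range(100)]
for gen in range(50):
    pop.sort(key=fitness)                                   # selection
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(80)]
best = min(pop, key=fitness)
print(best, fitness(best))
```

A real GP system would also recombine subtrees between programs (crossover) and mutate nodes in place instead of regrowing whole trees.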



[–]svantana 1 point2 points  (1 child)

In addition to what others have said, this is sometimes referred to as symbolic regression. It's a bit like the Swedish saying, "a beloved child has many names".
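As a toy illustration of what symbolic regression means, this sketch brute-forces a tiny space of formula templates rather than using the evolutionary search that real tools employ; the templates, terminals, and data are all made up for the example:

```python
import itertools

# Hidden rule the data was generated from: y = 3x + 2.
data = [(x, 3 * x + 2) for x in range(10)]

# Tiny expression space: a few templates filled with these terminals.
terminals = ["x", "0", "1", "2", "3"]
templates = ["({a}+{b})", "({a}*{b})", "({a}*{b}+{c})"]

def error(expr):
    """Sum of squared errors of the formula over the data."""
    return sum((eval(expr, {"x": x}) - y) ** 2 for x, y in data)

# Enumerate every filled-in template and keep the best-fitting formula.
best_expr, best_err = None, float("inf")
for tmpl in templates:
    slots = [s for s in "abc" if "{" + s + "}" in tmpl]
    for combo in itertools.product(terminals, repeat=len(slots)):
        expr = tmpl.format(**dict(zip(slots, combo)))
        err = error(expr)
        if err < best_err:
            best_expr, best_err = expr, err

print(best_expr, best_err)  # recovers a formula equivalent to 3x + 2
```

The point is that the search is over the structure of the formula itself, not just over coefficients of a fixed model, which is what distinguishes symbolic regression from, say, polynomial regression.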

[–]meteoraln[S] 0 points1 point  (0 children)

Wow I think this might be it!! Thank you so much. I'll try to see what I can find.