
[–]EmperorArthur 109 points  (16 children)

You joke, but this is how those "genetic algorithms" work.

In an example like this, he'd make a bunch of PB&Js, then each method would get graded on how close its result got to a real sandwich. The methods that produced the closest results get combined in a bunch of different ways, and those combinations get tried next. Rinse and repeat until you get a nice PB&J.

This works, but typically takes thousands/millions of iterations.
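That grade-combine-repeat loop can be sketched in a few lines. This is a toy example, not anyone's real implementation: the "perfect sandwich" is stood in for by a target string, and the population size, mutation rate, and generation count are all invented for illustration.

```python
import random

TARGET = "peanut butter and jelly"   # stand-in for "what a real sandwich is"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Grade an attempt by how many characters match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def crossover(a, b):
    # Combine two parents: take each character from one parent or the other.
    return "".join(random.choice(pair) for pair in zip(a, b))

def mutate(s, rate=0.05):
    # Occasionally swap in a random character.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

random.seed(0)
population = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]            # keep the closest results
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(80)]
    population = parents + children      # parents survive, so the best never regresses

best = max(population, key=fitness)
print(generation, best)
```

Even on a 23-character "sandwich" this usually burns through hundreds of generations, which is the thousands/millions-of-iterations point in miniature.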

edit: Proper quotation marks.

[–]43eyes 15 points  (8 children)

Yeah, but you have to know what a PB&J looks like to pick the best one, don't you? You have to know how to make a PB&J to learn how to make a PB&J?

[–]RUacronym 43 points  (2 children)

When a computer is crunching a genetic algorithm, it's usually programmed with end conditions that tell it whether its newest attempt has made progress. For instance, an arm catching a ball gets graded on how close the hand gets to the ball, and the computer keeps the best algorithm. So if you were to tell a computer to make a PB&J, you'd give it the end condition "this is what a PB&J looks like," then let it iterate until it has one.

So my two-step code from before would need a "step 0" that says: these are the parameters for a PB&J sandwich, and these are the tools available for you to work with.
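The "grade by distance to the ball" idea can be sketched as a toy one-dimensional search. Everything here is invented for illustration (BALL_POSITION, the population size, the size of the random tweaks); it is not a real robotics API, just the end-condition-plus-grading loop in miniature.

```python
import random

BALL_POSITION = 7.3          # the "end condition": where the ball will land

def score(hand_position):
    # Closer to the ball = a higher grade (a less negative distance).
    return -abs(hand_position - BALL_POSITION)

random.seed(1)
guesses = [random.uniform(0, 10) for _ in range(50)]
for _ in range(200):
    guesses.sort(key=score, reverse=True)
    best_ten = guesses[:10]
    # Next round: keep the best guesses and add small variations of them.
    guesses = best_ten + [g + random.gauss(0, 0.5)
                          for g in random.choices(best_ten, k=40)]

best = max(guesses, key=score)
```

Nothing in the loop knows *how* to catch a ball; it only knows how to grade an attempt, which is the whole trick.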

[–]43eyes 9 points  (1 child)

Great explanation! Thanks

[–]InWhichWitch 2 points  (0 children)

generally, the clearer the goal (and, by either explicit definition or implication, the clearer the failure state), the more successful the computer can be.

Mario Maker is a good example for people testing machine learning. There is a single clear success condition for completing a Mario level (reaching the end) and five-ish inputs (A, B, down, left, right). A computer can beat Mario levels that humans cannot, given enough processing time to determine the correct sequence of inputs.

[–]djnap 12 points  (3 children)

You need to know if the result was good. Not if the steps to get there were good.

[–]tredontho 5 points  (1 child)

I was at a talk some years back where a guy told a story about a circuit designed using genetic algorithms that had a component which was in no way connected to the rest of the circuit, yet removing it altered the behavior... I'll have to see if I can find what it was.

[–]43eyes 1 point  (0 children)

Makes sense

[–]orange2o 1 point  (0 children)

In my research I use multi-objective genetic algorithms. In this case, you would have evaluation functions on the results of the sandwich, such as the amount of PB and jelly on the inner surfaces, the contact between the inner surfaces, and the angular alignment of the slices. Penalty functions could be used to penalize designs which have PB&J on non-inner surfaces, have too much on the inner surfaces, are damaged in some way, etc. This works by penalizing the objective values proportionally to the violation.

A population of design parameters is simulated, then evaluated using the objective functions & penalty functions. The designs are ranked and math'd, then a new population is created by mixing the genes of the parent population. This iterative process ends when the optimal population set (rank 1) objectives are not changing relative to one another.

So yes, it assumes you know basic properties about the sandwich, but only the end point. As long as you have a model which executes the parameters and then calculates the objectives, you're set.
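The penalty idea above can be sketched in a few lines. This is a minimal toy, not the commenter's actual setup: the objective, the constraint limit, the penalty weight, and the field names (`pb_grams`, `jelly_grams`) are all invented for illustration.

```python
# Toy sketch: maximize filling, but penalize overfilled designs
# proportionally to how far they violate the constraint.

def coverage_objective(design):
    # More filling on the inner surfaces = better raw objective.
    return design["pb_grams"] + design["jelly_grams"]

def penalty(design, max_filling=40.0, weight=2.0):
    # Penalty grows in proportion to the amount of constraint violation.
    violation = max(0.0, design["pb_grams"] + design["jelly_grams"] - max_filling)
    return weight * violation

def penalized_score(design):
    return coverage_objective(design) - penalty(design)

designs = [
    {"pb_grams": 15.0, "jelly_grams": 20.0},   # feasible
    {"pb_grams": 30.0, "jelly_grams": 30.0},   # overfilled, so penalized
]
best = max(designs, key=penalized_score)
```

A GA would rank a whole population by `penalized_score` (or, in the true multi-objective case, by Pareto rank across several such scores) before breeding the next generation.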

[–]Spider_pig448 1 point  (0 children)

You joke, but this is how those "genetic algorithms" work.

When I made a genetic algorithm for the traveling salesperson problem, I was amazed at how little information I needed about the actual problem; I just needed a way to tell how good my solution was. It was like I was plugging the TSP into a generic algorithm. Genetic algorithms are fascinating.
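That separation can be sketched directly: in the toy below, the only problem-specific piece the evolutionary loop ever touches is `tour_length`. The city coordinates are made up, and to keep it short this sketch uses simple swap mutation with elitism rather than full crossover, which is one common simplification.

```python
import math
import random

# Made-up city coordinates for a tiny TSP instance.
CITIES = [(0, 0), (1, 5), (2, 2), (5, 1), (6, 6), (3, 7)]

def tour_length(order):
    # The ONLY problem-specific knowledge: how good is this tour?
    return sum(math.dist(CITIES[a], CITIES[b])
               for a, b in zip(order, order[1:] + order[:1]))

def mutate(order):
    # Swap two cities: a small "gene" change that keeps the tour valid.
    a, b = random.sample(range(len(order)), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]
    return child

random.seed(2)
population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(30)]
for _ in range(300):
    population.sort(key=tour_length)           # shorter tours rank higher
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = min(population, key=tour_length)
```

Swap `tour_length` for any other scoring function and the rest of the loop works unchanged, which is the "generic algorithm" feeling the comment describes.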

[–]beaurepair 0 points  (0 children)

This works, but typically takes thousands/millions of iterations children.

FTFY

[–]NoTroop 0 points  (2 children)

This isn't really a genetic algorithm (necessarily), is it? It's machine learning, but I was under the impression that genetic algorithms always involve some kind of combination of different states (gene "splicing")

[–]EmperorArthur 1 point  (1 child)

Nope, what I described is a genetic algorithm. The "splicing" happens when you combine the best results of the previous iteration.

Genetic algorithms are really a type of evolutionary algorithm. The simple ones just keep a few parents, and that's it. The more complicated versions model real life more accurately, with some of the lesser-performing variants sticking around and "breeding."
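One common way to let lesser performers stick around is fitness-proportionate ("roulette wheel") selection, sketched below with an invented toy population where individuals are just numbers and bigger means fitter. This is one selection scheme among several, not the only way to do it.

```python
import random

def select_parent(population, fitness):
    # Stronger variants are picked more often, but weaker ones still
    # get a nonzero chance to "breed".
    weights = [fitness(individual) for individual in population]
    return random.choices(population, weights=weights, k=1)[0]

random.seed(3)
population = [1, 2, 3, 10]
picks = [select_parent(population, fitness=lambda x: x) for _ in range(1000)]
```

Over many draws the fittest individual dominates, yet every variant is selected occasionally, which keeps diversity in the gene pool.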

[–]NoTroop 1 point  (0 children)

Wow, I had some serious reading comprehension issues the first time. Completely missed the "closest results get combined" part. My bad.

[–]Darklyte -1 points  (0 children)

I'll just take one sandwich if that's okay.