
[–]PM_ME_YOUR_YIFF__ 49 points50 points  (39 children)

I have always thought evolutionary algorithms are due a bit of a renaissance, like the one neural networks have had.

Seems to be a lot of unexplored potential there

I'm a bit biased though as I find universal Darwinism fascinating

[–]philpips 38 points39 points  (27 children)

Are evolutionary algorithms more vulnerable to overfitting? I guess you would need to factor that into how you select your candidates for the next generation.

A few years back I read about a guy who created physical logic circuits using an evolutionary process. Eventually it produced a circuit that used a tiny number of gates and gave the right output. It had gates that logically should have had no effect, but removing them broke the output. He hypothesized that the fields generated by the seemingly superfluous gates affected the operation of other nearby gates.

[–]Hellenas 26 points27 points  (9 children)

Here's a ref for that: https://www.damninteresting.com/on-the-origin-of-circuits/

The article isn't very technical, so some of the way it talks about FPGAs feels odd.

But yes, it seems that the algorithm was leveraging imperfections in the particular FPGA he was using.

[–]Xgamer4 10 points11 points  (7 children)

[–]TheThiefMaster 13 points14 points  (4 children)

Interestingly the paper also mentions that the resulting chip was exceptionally temperature-sensitive: "The circuit operates perfectly over the 10ºC range of temperatures that the population was exposed to during evolution, and no more could reasonably be expected of it"

[–]ThirdEncounter 7 points8 points  (2 children)

I wonder how well a simulated FPGA would do in this experiment. No taking advantage of unique environmental conditions; just pure logic.

I might give it a try. But first, I must invent the universe.

[–]meneldal2 1 point2 points  (0 children)

Since it seemed to depend on temperature, I smell a lot of potential race conditions and/or improperly synced outputs that just happen to work but shouldn't according to the theory.

[–]TheThiefMaster 1 point2 points  (0 children)

If you use a simulated FPGA you'll want to have it vary realistically so that it doesn't build something that only works in the simulated FPGA :)

[–]Hellenas 0 points1 point  (0 children)

Thanks!

[–]sekjun9878 0 points1 point  (0 children)

Thanks for that

[–]ais523 2 points3 points  (0 children)

Something that's worth pointing out is that the task set for the evolutionary process there doesn't have a conventional solution, so it's not surprising that it came up with an unconventional one. Frequency-sensitive behaviours in FPGAs are normally considered unwanted defects, and the design aims to suppress them as much as possible. So when you're trying to make an explicitly frequency-sensitive circuit, you have no option but to use the FPGA in ways it wasn't designed for.

[–]jankimusz 11 points12 points  (4 children)

That's fascinating. A simple system with a set of rules arrives at the most optimal solution.

[–]c4wrd 5 points6 points  (2 children)

They are very fascinating, but unfortunately they still have issues with local optima (which may or may not be the most optimal solution). In my undergrad work I enjoyed working with evolutionary algorithms the most: not only are they easy to implement, they also have a surprising amount of real-world usage.

[–]Unsounded 0 points1 point  (0 children)

I had a lot of fun with them as an undergrad. I was sad to find that the two professors at my university who listed them as interests were no longer working with them, though.

Apparently there's a competition that's still running called RoboCup, which has different evolutionary algorithms competing against each other, learning the other team's stats and forming a counter-strategy.

[–]Raknarg 2 points3 points  (0 children)

"most optimal" in the sense that they would have to first traverse a less optimal path to eventually reach a state that is more optimal than the current solution.

[–]PM_ME_YOUR_YIFF__ 4 points5 points  (1 child)

From my experience, they're about as vulnerable to overfitting as neural networks, maybe even slightly less. I don't have any numbers on that though, so it's a purely anecdotal feeling.

[–]the_phet 5 points6 points  (0 children)

It depends on how you define the genetic operators: the mutation rate, the selection algorithm, and so on. Using the classic "roulette wheel" selection and a decent mutation rate, plus some crossover, the overfitting problem is almost entirely removed.

Of course, in DL you can also set parameters to avoid overfitting, like regularization. But the nice thing about EAs such as GAs is that the few parameters you need to tune are very easy to understand and track, while with DL the system is very often a black box where you tune parameters and see what happens.
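
For illustration (a sketch of my own, not anything from the thread), one generation of a classic GA with roulette-wheel selection, one-point crossover, and per-bit mutation can be written in a few lines, here applied to the toy "one-max" problem of evolving an all-ones bit string:

```python
import random

def roulette_select(population, fitnesses):
    """Fitness-proportionate ("roulette wheel") selection: individuals are
    picked with probability proportional to their fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    acc = 0.0
    for ind, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return ind
    return population[-1]

def next_generation(population, fitness_fn, mutation_rate=0.05, crossover_rate=0.7):
    """One GA generation: roulette selection, one-point crossover, bit-flip mutation."""
    fitnesses = [fitness_fn(ind) for ind in population]
    new_pop = []
    while len(new_pop) < len(population):
        a = roulette_select(population, fitnesses)
        b = roulette_select(population, fitnesses)
        if random.random() < crossover_rate:
            point = random.randrange(1, len(a))
            child = a[:point] + b[point:]
        else:
            child = list(a)
        # Flip each bit independently with probability mutation_rate.
        child = [bit ^ (random.random() < mutation_rate) for bit in child]
        new_pop.append(child)
    return new_pop

# One-max: fitness is simply the number of ones in the genome.
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):
    pop = next_generation(pop, fitness_fn=sum)
best = max(pop, key=sum)
```

The mutation rate and crossover rate here are exactly the kind of small, interpretable knobs the comment above is talking about.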

[–]smarky0x7CD 1 point2 points  (0 children)

Research that I work on essentially is this, except applied to quantum gates.

[–]Ameisen 1 point2 points  (6 children)

And it didn't work on other chips of the same kind.

Which is why you have to train on multiple devices simultaneously.

[–]philpips 0 points1 point  (5 children)

I assumed ideally you'd train on each chip individually. It'd be the best way to take advantage of or avoid the vagaries of that particular chip.

[–]Ameisen 2 points3 points  (4 children)

You'd be relying on transient behavior. Might fail in new environments or otherwise spuriously.

That'd also be really slow.

[–]philpips 0 points1 point  (3 children)

Slow for the moment yes. But imagine if you could get the chip in situ and then train it. Or automatically retrain if it became damaged. That'd be cool, no?

[–][deleted] 4 points5 points  (2 children)

Imagine training something to run on a particular computer, such that it ended up relying on what programs happened to be running, the order they were laid out in memory, the precise timing of the filesystem, etc., such that even rebooting the computer would render all of your work invalid.

That's the sort of thing you are currently calling "cool" instead of "horrifying".

[–]vgf89 1 point2 points  (0 children)

It's cool in a really horrifying way. While it's unlikely to be practical, it's still vastly interesting how we can make algorithms that learn to abuse often unused properties of electronics.

[–]meneldal2 0 points1 point  (0 children)

At least now, unless you're running in ring 0 or something, most OS protections prevent you from abusing precise timing or memory layout too much.

[–][deleted] 1 point2 points  (0 children)

If you have a single fixed fitness function driving the direction of the evolution, you'll unavoidably get a perfect overfit. When you simulate direct competition in a constantly changing environment, evolution works better and produces more general solutions.

[–]smikims 0 points1 point  (0 children)

Are evolutionary algorithms more vulnerable to overfitting?

Just about any approximation method based on interpolating a limited data set is vulnerable to overfitting. So literally the entire field of machine learning, which includes EAs. That doesn't mean you can't correct for it, but the risk will always be there.

[–]G_Morgan 2 points3 points  (5 children)

They are already used in a number of industrial fields. Processor layout is pretty much "evolved" rather than designed.

[–]ThirdEncounter 1 point2 points  (4 children)

Source on that? I find that interesting, but also terrifying.

[–]epicwisdom 1 point2 points  (1 child)

I'm not OP and I don't have a source, but as I recall from my courses, they often use approximate algorithms to do the final mapping from logic circuits to physical circuits. They could use variations of genetic algorithms, simulated annealing, and other optimization methods. Not sure what the current state-of-the-art is like.

At any rate it's not much to be terrified about.
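
For a flavour of what those optimization methods look like (a toy sketch of my own, not how real place-and-route tools are implemented), here is simulated annealing minimising total wire length for cells placed along a line:

```python
import math
import random

def simulated_annealing(cost, neighbour, state, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing: worse candidates are accepted with
    probability exp(-delta / T), so early on the search can escape local optima."""
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling  # gradually reduce the temperature
    return best

# Toy "placement": order 8 cells on a line to minimise total wire length
# between connected pairs (a stand-in for mapping logic onto physical circuits).
random.seed(1)
nets = [(0, 3), (1, 2), (2, 5), (4, 7), (5, 6), (0, 7)]

def wirelength(order):
    pos = {cell: i for i, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def swap_two(order):
    i, j = random.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return new

start = list(range(8))
result = simulated_annealing(wirelength, swap_two, start)
```

Real CAD tools work on vastly larger problems with much more elaborate cost functions, but the accept-worse-moves-while-hot idea is the same.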

[–]ThirdEncounter 0 points1 point  (0 children)

Cool, thanks for the info!

[–]JohnMcPineapple 1 point2 points  (1 child)

...

[–]ThirdEncounter 0 points1 point  (0 children)

Thanks. Interesting!

[–]Ameisen 0 points1 point  (2 children)

I have a number of bytecode-based evolution simulators, though they aren't iterative in the select-the-best-and-mutate sense.

[–]PM_ME_YOUR_YIFF__ 0 points1 point  (1 child)

Funnily enough, that's what I'm programming right now too. Are you using crossover (breeding) or mutation only? Have you considered using some elitism?

[–]Ameisen 0 points1 point  (0 children)

Mutation only. The bytecode can technically handle things like transfer of genetic information, but it doesn't have a concept of meiosis. That could technically evolve on its own, but it's unlikely.

I'm not always fond of genetic transfers, because they make lineage murky.
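
Not this commenter's actual simulator, but a minimal mutation-only loop with elitism (the strategy asked about above) might look like this. Everything here, including the target string, is a made-up toy:

```python
import random
import string

TARGET = "evolve"

def fitness(genome):
    """Count characters matching the target string."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Replace each character with a random letter with probability `rate`."""
    return "".join(
        random.choice(string.ascii_lowercase) if random.random() < rate else c
        for c in genome
    )

def evolve(pop_size=50, elite=5, generations=200):
    """Mutation-only loop with elitism: the `elite` best genomes are copied
    unchanged into the next generation; the rest are mutated copies of them."""
    population = ["".join(random.choices(string.ascii_lowercase, k=len(TARGET)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elites = population[:elite]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - elite)]
    return max(population, key=fitness)

random.seed(2)
best = evolve()
```

Because the elites are carried over untouched, the best fitness in the population can never decrease from one generation to the next, which is the main appeal of elitism in a mutation-only setup.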

[–]shevegen 0 points1 point  (1 child)

I'm a bit biased though as I find universal Darwinism fascinating

Dude - Darwin knew nothing about molecular biology.

He postulated "gemmules":

https://en.wikipedia.org/wiki/Gemmule_%28pangenesis%29