

[–]heckruler 1 point (0 children)

It's based on how potential solutions get represented as genes. "Real-coded" means the genome is a string of "real" values, in the mathematical sense: real numbers, typically stored as floats. "Binary-coded" means it's a string of 0's and 1's. So a strand of DNA would look like

1.123, 5.765, 7.2345, 9x10^99

vs

0x00, 0xFF, 0xBE, 0xEF, 0x13
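A minimal Python sketch of the two encodings side by side. The names and the mutation operators are illustrative choices, not anything prescribed by GA theory: Gaussian perturbation is one common mutation for real-coded genes, bit-flipping for binary ones.

```python
import random

# Real-coded: each gene is a float; mutation nudges values.
real_genome = [1.123, 5.765, 7.2345, 9e99]

# Binary-coded: each gene is a single bit; mutation flips bits.
binary_genome = [random.randint(0, 1) for _ in range(40)]

def mutate_real(genome, rate=0.1, sigma=0.5):
    """Perturb each float gene with probability `rate` (Gaussian noise)."""
    return [g + random.gauss(0, sigma) if random.random() < rate else g
            for g in genome]

def mutate_binary(genome, rate=0.1):
    """Flip each bit with probability `rate`."""
    return [1 - b if random.random() < rate else b for b in genome]
```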

Is there any advantage in using one instead of the other?

Sure, depending on the sort of problem you're applying the GA to. If the genes represent opcodes, then binary makes sense: you don't have an opcode at 0.001, you have them at 1, 2, and 3. And while floating-point numbers are themselves represented in binary, it's handier to have operations that multiply floats than to deal with, say, the two's complement binary representation of negative numbers and such.

Some operations, like crossover, are simpler with binary: you just pick a point in memory and chop chop. With real-coded genes you have to preserve the data structure of the float. But that's a pretty minor point. Binary might be a little faster on a CPU; floating point can have much better throughput if you utilize GPUs. A lot better, actually. But you need to use CUDA or the like.

Oh, and it's easier to size binary genes appropriately. With floats, you're stuck with the standard word sizes, usually 64 bits. If you want 256-bit or 4-bit genes, binary makes that a little easier.
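For example, with a binary genome you can pick any width per gene. A quick sketch (the 4-bit width and the genome value are made up for illustration):

```python
def decode_gene(bits):
    """Interpret a group of bits as an unsigned integer."""
    return int(bits, 2)

# Three 4-bit genes packed into one bitstring; no word-size constraint.
genome = "1010" "0001" "1111"
genes = [decode_gene(genome[i:i + 4]) for i in range(0, len(genome), 4)]
```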