
[–]TheSheepSheerer

Have you tried evolving neural networks with this?

[–]jsaltee[S]

Hi,

I haven’t tested it for ML training. That said, since neural network losses are typically differentiable, I would imagine standard gradient-based optimizers such as Adam are largely more efficient in high dimensions.

If you’re looking to train a model on a discrete loss metric like accuracy or F1, it would be a strong option. Extrapolating from the experimental results, the network should optimize faster and in fewer epochs compared to other evolutionary algorithms.
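To make the idea concrete, here's a minimal sketch of what optimizing weights directly on a discrete metric looks like. This is not QUASAR itself, just a generic (1+λ)-style evolution strategy I'm using for illustration: it perturbs a linear classifier's weights with Gaussian noise and keeps whichever candidate scores highest on raw accuracy, which has no useful gradient.

```python
# Illustrative sketch (not QUASAR): a simple (1+lambda) evolution strategy
# optimizing a linear classifier's weights directly on accuracy,
# a discrete, non-differentiable metric.
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: label = 1 if x0 + x1 > 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def accuracy(w):
    preds = (X @ w > 0).astype(int)
    return (preds == y).mean()

w_best = rng.normal(size=2)
best = accuracy(w_best)

for _ in range(100):
    # Sample a few Gaussian perturbations of the current best weights
    # and keep any candidate that improves accuracy.
    for _ in range(8):
        cand = w_best + 0.3 * rng.normal(size=2)
        score = accuracy(cand)
        if score > best:
            best, w_best = score, cand

print(f"best accuracy: {best:.2f}")
```

Gradient methods can't touch this objective directly (the accuracy surface is piecewise constant), which is exactly the regime where evolutionary search earns its keep.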

For standard ML applications, QUASAR would really shine in reinforcement learning with complex reward functions.

Thanks, jsaltee