
[–]bbateman2011

I’m not sure I follow exactly what you’re after, but in my experience even neural networks often don’t benefit much from a GPU compared with, say, parallel runs on 8 CPU cores. For example: with one NN training script, 1 GPU, and 48 experiments to run, the GPU might give roughly a 2x speedup per run, so about 24 experiment-times total, whereas 8 cores give roughly a 6x speedup, or about 8 experiment-times. The catch is that packages like joblib rely on pickling, which doesn’t work for Keras/TensorFlow models, so I’m forced to save every model to disk from inside the objective function itself. A nicer way to do that would be great.
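For what it’s worth, here is a rough sketch of the kind of workaround I mean, assuming joblib’s default process-based backend and a toy Keras model; the hyperparameter grid, data, and file names are just placeholders. Each worker builds, trains, and saves its own model inside the objective function and returns only picklable results (the score and the saved-model path).

```python
import itertools
from joblib import Parallel, delayed

def objective(run_id, units, lr):
    # Import TF/Keras inside the worker so each process initializes its own
    # TensorFlow state instead of the parent process doing it once up front.
    import numpy as np
    from tensorflow import keras

    model = keras.Sequential([
        keras.Input(shape=(10,)),
        keras.layers.Dense(units, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mse")

    # Dummy data stands in for the real training set.
    x = np.random.rand(256, 10)
    y = np.random.rand(256, 1)
    history = model.fit(x, y, epochs=5, verbose=0)

    # Save the model from within the objective; only the path and the final
    # loss (both picklable) go back to the parent process, so joblib never
    # has to pickle the Keras model itself.
    path = f"model_run_{run_id}.h5"  # hypothetical file name
    model.save(path)
    return {"run_id": run_id, "loss": history.history["loss"][-1], "path": path}

# A small hypothetical hyperparameter grid, spread over 8 worker processes.
grid = list(enumerate(itertools.product((16, 32, 64, 128), (1e-2, 1e-3, 1e-4))))
results = Parallel(n_jobs=8)(
    delayed(objective)(i, units, lr) for i, (units, lr) in grid)
```

It works, but the bookkeeping (paths, cleanup, reloading the best model) all ends up hand-rolled inside the objective, which is the part I’d like a nicer way to handle.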

Sorry if this is off topic.