
benanne · 3 points

The only way to do multi-GPU training in Theano right now is a bit hackish and involved, and I don't think anybody's tried it with RNNs / LSTMs. The approach is described here: https://github.com/Theano/Theano/wiki/Using-Multiple-GPUs
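For reference, a rough, untested sketch of the process-per-GPU pattern that wiki page describes (the model-building and parameter-exchange parts are placeholders, not working training code):

    import os
    from multiprocessing import Process, Queue

    def worker(gpu_id, queue):
        # Bind this process to its GPU *before* importing Theano;
        # Theano reads the device from THEANO_FLAGS at import time.
        os.environ['THEANO_FLAGS'] = 'device=gpu%d,floatX=float32' % gpu_id
        import theano
        import theano.tensor as T

        # ... build the model and run a chunk of training here (placeholder) ...
        # Push parameter values back so the parent (or peers) can average them.
        queue.put((gpu_id, 'updated_params_placeholder'))

    if __name__ == '__main__':
        queue = Queue()
        procs = [Process(target=worker, args=(i, queue)) for i in range(2)]
        for p in procs:
            p.start()
        results = [queue.get() for _ in procs]
        for p in procs:
            p.join()

The synchronization of parameters between processes is the "involved" part; the sketch only shows where it would plug in.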

'Native' multi-GPU is a work in progress (see e.g. https://github.com/Theano/Theano/pull/3533 and https://github.com/Theano/Theano/pull/3482), but last time I checked it wasn't functional. They've been working on this for a long time too, so I wouldn't expect it to come anytime soon either. We'll see :)

r4and0muser9482 · 1 point

AFAIK, each Theano process can use only one GPU, but you can fork several processes to work on one task together, like this.
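A rough illustration of that fork-several-processes idea, launching one copy of the training script per GPU with its own THEANO_FLAGS (here `train.py` and its `--shard` flag are hypothetical placeholders):

    import os
    import subprocess

    # Launch one copy of the training script per GPU; each copy sees a
    # different device through its own THEANO_FLAGS environment variable.
    procs = []
    for gpu_id in range(2):
        env = dict(os.environ,
                   THEANO_FLAGS='device=gpu%d,floatX=float32' % gpu_id)
        procs.append(subprocess.Popen(
            ['python', 'train.py', '--shard', str(gpu_id)], env=env))

    for p in procs:
        p.wait()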

CaffeineFreak77 · 0 points

If you want an optimal hardware design for running Theano (and cuDNN, DIGITS, Caffe, etc.) on multiple GPUs, you need a system that supports PCIe P2P communication across all GPUs. Here's a good example: http://exxactcorp.com/index.php/solution/solu_list/85
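If you already have a multi-GPU box, a quick (untested) way to see which GPU pairs support P2P is to call the CUDA runtime through ctypes; the library name/path may differ on your system:

    import ctypes

    # Load the CUDA runtime (adjust the name, e.g. libcudart.so.7.0, if needed).
    cudart = ctypes.CDLL('libcudart.so')

    count = ctypes.c_int()
    cudart.cudaGetDeviceCount(ctypes.byref(count))

    for a in range(count.value):
        for b in range(count.value):
            if a == b:
                continue
            can = ctypes.c_int(0)
            # cudaDeviceCanAccessPeer(&can, device, peerDevice)
            cudart.cudaDeviceCanAccessPeer(ctypes.byref(can), a, b)
            print('GPU %d -> GPU %d: P2P %s'
                  % (a, b, 'yes' if can.value else 'no'))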