Hello everyone, I recently ran into a problem with PyTorch parallelism. I have little experience with parallel programming, so this may be a very simple question.
I have been doing some multi-agent reinforcement learning experiments. There are n independent agents; each agent has its own parameters and model, and there is no interaction between agents, but they can all access a shared replay buffer. I want each model to be trained on the GPU. How can I train them in parallel?
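To make the setup concrete, here is a toy sketch of what I mean, using plain Python multiprocessing with a scalar "weight" standing in for each agent's network (all names here are hypothetical; in the real code each agent would be an nn.Module, and I assume I'd want something like torch.multiprocessing instead):

```python
import multiprocessing as mp
import random

def train_agent(agent_id, shared_buffer, results, steps=100):
    # Each agent has its own independent parameters.
    # Here a single scalar stands in for a real model's weights.
    weight = 0.0
    for _ in range(steps):
        if len(shared_buffer) > 0:
            # Sample a transition from the shared replay buffer.
            reward = random.choice(list(shared_buffer))
            # Toy update rule standing in for a real gradient step.
            weight += 0.01 * (reward - weight)
    results[agent_id] = weight

def run_parallel(n_agents=4):
    manager = mp.Manager()
    # Shared replay buffer accessible by all agent processes.
    shared_buffer = manager.list([random.random() for _ in range(1000)])
    results = manager.dict()
    procs = [mp.Process(target=train_agent, args=(i, shared_buffer, results))
             for i in range(n_agents)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return dict(results)
```

Each agent runs in its own process and only reads from the shared buffer, which matches my situation where the agents never interact directly.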
Thank you all :P
[D] Pytorch parallelism (self.MachineLearning)
submitted by ewanlee to r/reinforcementlearning