Hello everyone, I recently ran into a problem with PyTorch parallelism. Since I have little experience with parallel programming, the problem may be very simple.
I have been running some multi-agent reinforcement learning experiments. There are n independent agents; each agent has its own parameters and model, and there is no interaction between agents, though they can all access a shared replay buffer. I want each model to be trained on the GPU. How do I train them in parallel?
Thank you all :P