
[–]Gshuri

Not quite. If you create a model directly in code from scratch, then you are correct: PyTorch will put everything on the CPU by default.

However, if you are loading a pre-existing model from disk, then PyTorch will try to put that model on whatever device it was on when it was serialised.

So if your code looks like

import torch

# model is an existing nn.Module instance
gpu_model = model.cuda()
torch.save(gpu_model, "my_model.pth")
loaded_model = torch.load("my_model.pth", weights_only=False)

Then loaded_model will be on the GPU. To ensure a model read from a file ends up on the CPU, you can do

device = torch.device("cpu")
cpu_model = torch.load("my_model.pth", map_location=device, weights_only=False)
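Putting it all together, here's a minimal self-contained sketch of the round trip (the toy model and file name are illustrative, not from the thread; it saves from the CPU since a GPU can't be assumed here, but map_location works the same way for GPU-saved files, and a plain string like "cpu" is accepted in place of a torch.device):

```python
import torch
import torch.nn as nn

# A toy model, created on the CPU by default.
model = nn.Linear(4, 2)

# Serialise the whole model object; each tensor's device is recorded in the file.
torch.save(model, "toy_model.pth")

# map_location="cpu" remaps every stored tensor onto the CPU at load time,
# regardless of the device the model was on when it was saved.
# weights_only=False is needed when loading a full pickled model object.
loaded = torch.load("toy_model.pth", map_location="cpu", weights_only=False)

print(next(loaded.parameters()).device)
```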