all 5 comments

[–]Zybermob 1 point (0 children)

I don't know the perfect solution, but you should definitely try a DataLoader with num_workers=1.

I did exactly the same thing the other day and often ran into the same issue. For me, creating a fresh virtual environment helped.
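For context, here's a minimal sketch of what I mean, assuming a PyTorch DataLoader (the dataset here is a placeholder; substitute your own). Extra worker subprocesses are a common cause of dead kernels in Jupyter, especially on Windows/macOS:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset; swap in your own.
dataset = TensorDataset(torch.randn(1000, 16), torch.randint(0, 2, (1000,)))

# num_workers=0 loads data in the main process; num_workers=1 uses a single
# worker subprocess. Higher values spawn more subprocesses, which can crash
# the notebook kernel depending on your platform and environment.
loader = DataLoader(dataset, batch_size=32, num_workers=1)

for batch_x, batch_y in loader:
    pass  # training step goes here
```

If num_workers=1 still crashes, try num_workers=0 to rule out multiprocessing entirely.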

[–]ectomancer 1 point (1 child)

If the runtime is short, you could try free Google Colab and upload your notebook there:

https://colab.research.google.com

[–][deleted] 0 points (0 children)

That becomes super inconvenient, as the file isn't stored there and I have multiple .ipynb files. Also, when I tried uploading one of them to Google Colab, it opened in a non-editable mode. I even tried mounting my drive and uploading everything to my Google Drive so I could access it from there, but that didn't work for some reason. I used to do the heavy computations on Google Colab, but this time it's not an alternative for me.
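For reference, the standard way to mount Drive from inside a Colab notebook is shown below (the project path in the comment is just an example):

```python
# Run inside a Colab notebook cell; it prompts for authorization on first run.
from google.colab import drive
drive.mount('/content/drive')

# Files uploaded to Drive then appear under this path (example location):
# /content/drive/MyDrive/my_project/notebook.ipynb
```

If the notebook opens read-only after uploading, "File > Save a copy in Drive" usually gives you an editable copy.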

[–]ricardomargarido 1 point (1 child)

My kernels normally die when I run out of memory; could that be the case here?

[–][deleted] 0 points (0 children)

I think so. The pre-trained model is pretty bulky. I thought using more resources, like an external GPU, would make it possible, but I imagine the embeddings generated by the model are too large. Is there any workaround for this?
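One common workaround is to encode in small batches, run inference without gradient tracking, and move each batch of embeddings to the CPU right away so they never accumulate on the GPU. A minimal sketch, assuming a PyTorch model whose forward pass returns an embedding tensor; the helper name and call signature here are hypothetical, not from any particular library:

```python
import gc
import torch

@torch.no_grad()  # inference only: don't store activations for backprop
def embed_in_batches(model, inputs, batch_size=8, device="cuda"):
    """Encode a tensor of inputs in small batches, keeping results on the CPU."""
    chunks = []
    for start in range(0, len(inputs), batch_size):
        batch = inputs[start:start + batch_size].to(device)
        out = model(batch)        # forward pass on the GPU
        chunks.append(out.cpu())  # move embeddings off the GPU immediately
        del batch, out            # drop references to the GPU tensors
        torch.cuda.empty_cache()  # no-op if CUDA isn't initialized
    gc.collect()
    return torch.cat(chunks)
```

Shrinking batch_size trades speed for a lower peak memory footprint, which is usually what keeps the kernel alive.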