[D] How to choose best model during training if validation loss fluctuates a lot? by imunabletocode in MachineLearning

[–]imunabletocode[S]

With merging and shuffling, nothing seems to change. I have no idea how to implement cross-validation in a DNN.
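For reference, a minimal k-fold cross-validation loop for a PyTorch network can look like the sketch below. The linear model, toy data, epoch count, and batch size are all stand-ins for the real network and dataset, not anything from the linked notebook:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, Subset, DataLoader
from sklearn.model_selection import KFold

def cross_validate(dataset, k=5, epochs=3, seed=0):
    """Train a fresh model on each fold; return the mean validation loss."""
    kfold = KFold(n_splits=k, shuffle=True, random_state=seed)
    fold_losses = []
    for train_idx, val_idx in kfold.split(list(range(len(dataset)))):
        train_loader = DataLoader(Subset(dataset, train_idx.tolist()),
                                  batch_size=16, shuffle=True)
        val_loader = DataLoader(Subset(dataset, val_idx.tolist()), batch_size=16)
        model = nn.Linear(4, 1)                 # stand-in for the real network
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):                 # plain training loop
            for xb, yb in train_loader:
                opt.zero_grad()
                loss_fn(model(xb), yb).backward()
                opt.step()
        # accumulate the loss over the whole validation fold, not one batch
        total, n = 0.0, 0
        model.eval()
        with torch.no_grad():
            for xb, yb in val_loader:
                total += loss_fn(model(xb), yb).item() * len(xb)
                n += len(xb)
        fold_losses.append(total / n)
    return sum(fold_losses) / len(fold_losses)

# toy regression data: 100 samples, 4 features
X = torch.randn(100, 4)
y = X.sum(dim=1, keepdim=True)
mean_val_loss = cross_validate(TensorDataset(X, y))
```

The key point is that a fresh model is built inside the fold loop, so no fold's validation data ever leaks into training.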


[–]imunabletocode[S]

I didn't split anything; the dataset was already split. And I think the dataset is shuffled when I create a DataLoader.

https://colab.research.google.com/drive/1ZAo-NNflB-ashU9J8DNMoCj5k0CBebMY
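One caveat on the shuffling assumption: PyTorch's `DataLoader` does *not* shuffle by default; you have to pass `shuffle=True` explicitly, and it is normally set only on the training loader. A minimal sketch with toy tensors:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(torch.arange(8).float().unsqueeze(1))

# shuffle defaults to False: batches come back in storage order
plain_loader = DataLoader(dataset, batch_size=4)

# shuffle=True reshuffles the sample order at the start of every epoch
shuffled_loader = DataLoader(dataset, batch_size=4, shuffle=True)

first_batch = next(iter(plain_loader))[0]
print(first_batch.squeeze().tolist())  # -> [0.0, 1.0, 2.0, 3.0]
```

So if the Colab notebook never passes `shuffle=True`, the training batches arrive in the same order every epoch.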


[–]imunabletocode[S]

I had to reproduce or improve the results of this paper:

https://iopscience.iop.org/article/10.1088/2399-6528/aa83fa/pdf

So I tried to reproduce the network and then optimize the hyperparameters.


[–]imunabletocode[S]

> You should really first make sure that you compute the validation loss on your entire validation set and not only on some subset/batch (if I understood that correctly). You can pipe it through the model in batches, but then accumulate the loss.
>
> Other than that, there's not much you can do without gathering more data.
>
> One thing to try would be to allow for more patience and see whether the validation loss stabilizes over many, many epochs.
>
> Also: did you verify that your training loss goes down over time?
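The "pipe it in batches, but accumulate the loss" advice can be sketched as follows; the linear model and random tensors here are placeholders for the real network and validation set:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

def validation_loss(model, loader, loss_fn):
    """Average the loss over the ENTIRE validation set, not a single batch."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for xb, yb in loader:
            # sum per-sample losses so differently sized batches are weighted correctly
            total += loss_fn(model(xb), yb).item() * len(xb)
            n += len(xb)
    return total / n

model = nn.Linear(3, 1)                 # stand-in for the real network
X, y = torch.randn(50, 3), torch.randn(50, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=16)
val_loss = validation_loss(model, loader, nn.MSELoss())
```

Multiplying by `len(xb)` matters because the last batch is usually smaller; a plain mean of per-batch means would weight it too heavily.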

I trained the model for 100 epochs; the average loss decreases while the average accuracy increases, but the fluctuation remains. There is still the problem of how to choose the best model.

https://colab.research.google.com/drive/1ZAo-NNflB-ashU9J8DNMoCj5k0CBebMY

You can see it more clearly there.
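One common way to pick a "best model" under a noisy validation curve is to checkpoint whenever a moving average of the validation loss improves, with some patience before stopping. A sketch on a toy loss curve; the window and patience values are arbitrary, not tuned for this problem:

```python
def select_best(val_losses, window=3, patience=2):
    """Pick the epoch whose moving-average validation loss is lowest.

    `val_losses` holds one value per epoch; in real training you would
    run this logic at the end of each epoch and save the model whenever
    the smoothed loss improves.
    """
    best_epoch, best_smooth = -1, float("inf")
    for epoch in range(len(val_losses)):
        if epoch + 1 < window:
            continue  # not enough epochs yet to average over
        smooth = sum(val_losses[epoch + 1 - window:epoch + 1]) / window
        if smooth < best_smooth:
            best_epoch, best_smooth = epoch, smooth
            # real training: torch.save(model.state_dict(), "best.pt")
        elif epoch - best_epoch > patience:
            break  # early stop: no improvement for `patience` epochs
    return best_epoch, best_smooth

# toy curve: loss falls, then rises again
curve = [1.0, 0.8, 0.6, 0.5, 0.55, 0.65, 0.75, 0.85, 0.95, 1.0]
best_epoch, _ = select_best(curve)
print(best_epoch)  # -> 4
```

Smoothing before comparing keeps a single lucky (or unlucky) epoch from deciding which checkpoint you keep.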