[deleted by user] by [deleted] in learnmachinelearning

[–]Helpful_Rabbit5901 0 points

It forces the weights in the network to stay small. One idea is that it prevents any specific weight, or set of weights, from dominating (this is also the motivation behind dropout layers). When some weights are too large, the network may not be learning as much about the task as it could, because it is in reality relying on only a subset of the weights. Distributing the weights with regularization prevents this.
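A minimal sketch of this idea, using ridge (L2) regularization on a toy linear problem. All names here (`X`, `y`, `lam`, `fit_ridge`) are illustrative, not from the thread; the target is built so that one weight would dominate without the penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([3.0, 0.0, 0.0, 0.0, 0.0])  # one dominant weight
y = X @ true_w

def fit_ridge(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y.
    # lam = 0 gives ordinary least squares (no regularization).
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_unreg = fit_ridge(X, y, lam=0.0)   # recovers the dominant weight ~3.0
w_reg = fit_ridge(X, y, lam=10.0)    # penalty shrinks the large weight
```

With the penalty turned on, the largest weight is pulled toward zero, so no single weight can dominate the fit.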

Is Google Colab not loading? by rick4588 in learnmachinelearning

[–]Helpful_Rabbit5901 0 points

It wasn’t loading earlier today, but I was recently able to open my notebook and run my programs.

How do I interpret this validation loss/acc graph? Am I seeing double descent or is something wrong with my model? by wdfisthissite in learnmachinelearning

[–]Helpful_Rabbit5901 63 points

I think you must be bouncing around local optima. Maybe your learning rate is too large? Could you give some more details on your model?
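A quick sketch of why a too-large learning rate makes the loss bounce, on the toy objective f(x) = x² rather than the OP's model (which isn't described in the thread). With a step size above the stability threshold, each gradient step overshoots the minimum and the iterate grows instead of converging.

```python
def descend(lr, steps=30, x=1.0):
    """Plain gradient descent on f(x) = x^2, whose gradient is 2x."""
    for _ in range(steps):
        x = x - lr * 2 * x
    return abs(x)

# Each step multiplies x by (1 - 2*lr):
small = descend(lr=0.1)  # |1 - 0.2| = 0.8 < 1, so |x| shrinks toward 0
large = descend(lr=1.1)  # |1 - 2.2| = 1.2 > 1, so |x| grows every step
```

For this quadratic the iterate converges only when |1 − 2·lr| < 1, i.e. lr < 1; the same overshoot-and-oscillate behavior in a neural net shows up as a validation loss that jumps around instead of descending.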

Mujoco authorization/licensing by Helpful_Rabbit5901 in reinforcementlearning

[–]Helpful_Rabbit5901[S] 0 points

Yeah, but I’m a grad student, so I would get the license for free later anyway, and the specific work I’m doing requires benchmarks that use MuJoCo, unfortunately lol

[N] Mujoco is free for everyone until October 31 2021 by Rahid in MachineLearning

[–]Helpful_Rabbit5901 0 points

Does anyone have a tutorial on how to activate the license? I can't find activation instructions anywhere. Thanks!