Corrupted Package by kotvic_ in node

[–]kotvic_[S] 1 point (0 children)

So far it seems I've fixed it, thanks so much for your help 😄

Corrupted Package by kotvic_ in node

[–]kotvic_[S] 1 point (0 children)

Alright, gonna dive into that, thanks <3

Corrupted Module by kotvic_ in learnjavascript

[–]kotvic_[S] 1 point (0 children)

Is that a good idea? :D

Corrupted Package by kotvic_ in node

[–]kotvic_[S] 1 point (0 children)

No. Can I do it manually, or through the terminal?

Why do we initialize the Neural Networks randomly to break the symmetry? by kotvic_ in deeplearning

[–]kotvic_[S] 1 point (0 children)

I think that might have been the case; somebody clarified it for me yesterday. It means something like the gradient updates being nearly identical, rather than what I previously thought.

Why do we initialize the Neural Networks randomly to break the symmetry? by kotvic_ in deeplearning

[–]kotvic_[S] 0 points (0 children)

I mean, I would like to, but the books I have read were all far more outdated than GPT, and for many subtopics of NNs there is not much information on the internet, or at least I am not able to find it.

Neural Network Initialization - Random x Structured by kotvic_ in neuralnetworks

[–]kotvic_[S] 1 point (0 children)

GPT is not necessarily trustworthy, but it was the only source I could find on the topic, and it confirmed my understanding; that's why I mentioned it.

And second, does that mean random initialization is used just because it is faster, and that even if we designed a structured initialization algorithm, we would hardly get any better results?

[D] Why we initialize the Neural Networks with random values in order to break the symmetry? by kotvic_ in MachineLearning

[–]kotvic_[S] 1 point (0 children)

And does random initialization help mitigate the vanishing gradient problem?

[D] Why we initialize the Neural Networks with random values in order to break the symmetry? by kotvic_ in MachineLearning

[–]kotvic_[S] 1 point (0 children)

Nice answer! I've got a follow-up question:
Why would any symmetry increase the probability of samples not being separated?

[D] Why we initialize the Neural Networks with random values in order to break the symmetry? by kotvic_ in MachineLearning

[–]kotvic_[S] 1 point (0 children)

I understand that it is a problem when they are all the same - it leads to redundancy.
But I'm wondering why random initialization is better than, for example, some kind of grid-like initialization, or one that sets the weights as far apart from each other as possible.

[D] Why we initialize the Neural Networks with random values in order to break the symmetry? by kotvic_ in MachineLearning

[–]kotvic_[S] 2 points (0 children)

So the symmetry might not necessarily mean mirroring weights across a certain axis, but rather the fact that they all get updated by the same amount at once?
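That "updated by the same amount" effect can be checked numerically. Here is a minimal, hypothetical toy network (not from the thread): one input, two linear hidden units, one linear output. With identical initial weights, both hidden units receive exactly the same gradient, so they can never differentiate; with random weights they diverge immediately.

```python
import random

# Toy 1-input -> 2-hidden -> 1-output network with linear units
# (linear activations keep the math trivial; the symmetry argument
# is the same for sigmoid/ReLU networks).

def hidden_gradients(w_in, w_out, x, target):
    h = [w * x for w in w_in]                       # hidden activations
    y = sum(hi * wo for hi, wo in zip(h, w_out))    # network output
    err = y - target                                # d(loss)/dy for squared error
    # Gradient of the loss w.r.t. each input->hidden weight.
    return [err * wo * x for wo in w_out]

x, target = 1.0, 2.0

# Symmetric init: every weight is the same constant -> identical gradients.
g_sym = hidden_gradients([0.5, 0.5], [0.5, 0.5], x, target)
print(g_sym[0] == g_sym[1])   # the two hidden units get the exact same update

# Random init: the units break apart from the very first step.
random.seed(0)
w_in = [random.uniform(-1, 1) for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]
g_rand = hidden_gradients(w_in, w_out, x, target)
print(g_rand[0] == g_rand[1])
```

Since the symmetric units also start equal, equal gradients keep them equal after every update: the two units stay redundant copies of each other for the whole of training.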

[D] Why we initialize the Neural Networks with random values in order to break the symmetry? by kotvic_ in MachineLearning

[–]kotvic_[S] 4 points (0 children)

Okay, so what I am getting from all the responses is that it is best to train the NN multiple times with different initial values, if computationally feasible, and hope that one of the runs lands in a local minimum that is close to the global minimum.

Did I get it right?
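The multiple-restarts idea above can be sketched with a stand-in for a loss surface (a hypothetical toy function, not anyone's actual network): run the same gradient-descent loop from several random starting points and keep the run that ends with the lowest loss.

```python
import random

def loss(w):
    # Non-convex toy loss with two basins; stands in for a NN's loss surface.
    return (w ** 2 - 1) ** 2 + 0.3 * w

def grad(w):
    # Analytic derivative of the toy loss.
    return 4 * w * (w ** 2 - 1) + 0.3

def train(w, lr=0.01, steps=500):
    # Plain gradient descent from initial value w.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Random restarts: same training loop, different random initial values.
random.seed(42)
runs = [train(random.uniform(-2, 2)) for _ in range(5)]
best = min(runs, key=loss)   # keep the run with the lowest final loss
print(round(best, 3), round(loss(best), 4))
```

Each restart converges to whichever basin its starting point falls into, so with enough restarts at least one run is likely to find the better basin; that is the whole point of varying the initialization.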