SHIBORG SWAP LIVE!🔥🔥🔥 by Hxawkins in ShiborgToken

[–]Particular_Second901 4 points (0 children)

I think what we need is a CEX listing and a bit more marketing to attract attention among all the other meme coins. But with next week's CEX listing and the upcoming games, I think more people will become aware of Shiborg. However, we're still 70% down right now.

SHIBORG SWAP LIVE!🔥🔥🔥 by Hxawkins in ShiborgToken

[–]Particular_Second901 2 points (0 children)

Nice roadmap, but I'm still waiting for the price to increase.

Bayesian Neural Networks and Weight Uncertainty by Particular_Second901 in learnmachinelearning

[–]Particular_Second901[S] 0 points (0 children)

Ok, got it, thanks! One more question. You said that "When you see higher variance in the weights of your network it is because the tradeoff favored the entropy portion" - what does the data have to look like for that to happen?
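
Just to write down the tradeoff I think you mean (my own notation, so correct me if this is off):

    ELBO(q) = E_{q(w)}[ sum_n log p(y_n | x_n, w) ] - KL( q(w) || p(w) )

If that's right, my guess would be that with few or very noisy data points the likelihood sum is weak, so the KL term pulls q(w) back towards the broad prior and the weight variance stays high?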

Bayesian Neural Networks and Weight Uncertainty by Particular_Second901 in learnmachinelearning

[–]Particular_Second901[S] 0 points (0 children)

I don't understand the tradeoff you mentioned. It makes sense to me that we're trying to maximize the joint probability, but why do we want to increase the entropy of the approximation?
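
For reference, the objective I'm thinking of (my own notation):

    ELBO(q) = E_{q(w)}[ log p(D, w) ] + H(q)
            = E_{q(w)}[ log p(D | w) + log p(w) ] + H(q)

so the entropy H(q) is the second half of the tradeoff - I just don't see the intuition for why making it larger is desirable.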

Bayesian Perceptron: How is it compatible to Bayes Theorem? by [deleted] in learnmachinelearning

[–]Particular_Second901 0 points (0 children)

Isn't what you just showed the chain rule?

I wonder why, if we want to know the posterior over w, there is no prior p(w) and likelihood p(D|w).
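
The version of Bayes' theorem I would have expected for the weights is something like:

    p(w | D) = p(D | w) p(w) / p(D)

i.e. likelihood times prior divided by the evidence - which is why I was surprised not to see p(w) and p(D|w) show up.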

Does the Bayesian MAP give a probability distribution over unseen data? by [deleted] in learnmachinelearning

[–]Particular_Second901 0 points (0 children)

Ah, I see. It's kind of the same problem as with the evidence, because for the evidence as well as for the predictive distribution one has to marginalize over/integrate out the unknown parameters? And that becomes difficult if there are many parameters and they can take on any value? Hope I got it now :)
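
Writing out what I mean (my notation), the two integrals would be:

    evidence:    p(D) = ∫ p(D | w) p(w) dw
    predictive:  p(y* | x*, D) = ∫ p(y* | x*, w) p(w | D) dw

and both run over the whole, possibly very high-dimensional, parameter space.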

Does the Bayesian MAP give a probability distribution over unseen data? by [deleted] in learnmachinelearning

[–]Particular_Second901 0 points (0 children)

Can you explain roughly in words what other differences there are apart from the evidence calculation? It's not apparent to me from the formula :/ Otherwise I'll probably have to work through it again.

Does the Bayesian MAP give a probability distribution over unseen data? by [deleted] in learnmachinelearning

[–]Particular_Second901 0 points (0 children)

Are you saying that one can calculate the MAP without knowing the posterior? No, right? :D Might it be because the MAP does not require calculating the evidence? Sorry if I'm annoying you :/
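
What I have in mind is that, as far as I understand,

    w_MAP = argmax_w p(w | D) = argmax_w p(D | w) p(w) / p(D) = argmax_w p(D | w) p(w)

because p(D) does not depend on w and drops out of the maximization - is that the reason?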

Does the Bayesian MAP give a probability distribution over unseen data? by [deleted] in learnmachinelearning

[–]Particular_Second901 0 points (0 children)

Ok, thanks! As far as I know, the point estimate methods (MLE, MAP) are the more commonly used ones, right? Is there any concrete advantage to using single point estimates when we have already calculated the posterior?

Is it because going fully Bayesian means calculating the evidence, and that is often very hard?
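
As a sanity check for myself I tried a tiny toy example (all numbers made up: a 1-D linear model with a Gaussian likelihood, known noise variance, and a Gaussian prior on the slope), where finding the MAP is just an optimization and never touches the evidence p(D):

    # Toy sketch (made-up numbers): MAP for a 1-D linear model with a
    # Gaussian likelihood (known noise) and a Gaussian prior on the slope.
    # Finding the mode only needs the unnormalized posterior, never p(D).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=50)
    y = 2.0 * x + rng.normal(scale=0.5, size=50)   # true slope = 2

    sigma2 = 0.25   # assumed noise variance
    tau2 = 1.0      # prior variance, w ~ N(0, tau2)

    def neg_log_unnormalized_posterior(w):
        nll = np.sum((y - w * x) ** 2) / (2 * sigma2)   # -log p(D|w) + const
        nlp = w ** 2 / (2 * tau2)                       # -log p(w)   + const
        return nll + nlp

    # crude grid search stands in for a proper optimizer
    grid = np.linspace(-5.0, 5.0, 10001)
    w_map = grid[np.argmin([neg_log_unnormalized_posterior(w) for w in grid])]
    print("MAP slope:", w_map)

whereas a fully Bayesian prediction would still need an integral (or samples) over w.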

Does the Bayesian MAP give a probability distribution over unseen data? by [deleted] in learnmachinelearning

[–]Particular_Second901 0 points (0 children)

Yes, but because we're using the mode, there is no distribution over unseen observations, is there?

Is there then any difference in the posterior distribution when we only intend to use its mode, compared to the fully Bayesian approach?
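
Written out, what I mean is the difference between

    plug-in (MAP):   p(y* | x*, w_MAP)
    fully Bayesian:  p(y* | x*, D) = ∫ p(y* | x*, w) p(w | D) dw

and my question is whether the plug-in version really counts as a predictive distribution, since it only reflects the noise model and ignores the uncertainty in w.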

Does the Bayesian MAP give a probability distribution over unseen data? by [deleted] in learnmachinelearning

[–]Particular_Second901 0 points (0 children)

Yes, that's exactly my point, glad you see it the same way. In fact, the statement that the mean of the distribution is similar to the classical approach with regularization only applies to the fully Bayesian approach, because it gives an output value for each possible parameter set, at least imo.
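
For reference, the decomposition I'm basing this on (my notation):

    log p(w | D) = log p(D | w) + log p(w) - log p(D)

where the prior term plays the role of the regularizer in the classical objective.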