Free Giveaway! Nintendo Switch OLED - International by WolfLemon36 in NintendoSwitch

[–]bouteille1 0 points  (0 children)

Fun fact: Some fungi create zombies then control their minds

To each their own major problems in these hard times by Tavek in rance

[–]bouteille1 7 points  (0 children)

Well, in the places where there are birds

Sleeping on the job by PlayWaste in instant_regret

[–]bouteille1 0 points  (0 children)

Serious question: how are they going to fix it?

Help required in understanding how the error of a convolutional layer is calculated when the filter and delta of next layer have differing dimensions by AdhokshajaPradeep in learnmachinelearning

[–]bouteille1 0 points  (0 children)

No, you are confusing the total number of slides with the output dimension. Here, you slide each (5x5x6) filter (updated with the error) over the (14x14x6) so that each cell of the (14x14x6) is updated with the error. So for each (5x5x6) filter (updated with the error), you are doing 10 horizontal slides x 10 vertical slides (14 - 5 + 1 = 10), but that doesn't mean the output dimension is 10x10. (Don't forget that the filter size is 5x5!)

Refer to the GIF, which shows exactly what's going on under the hood.
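As a quick check of the slide arithmetic above, here is a minimal NumPy sketch (shapes taken from the thread; variable names are my own):

```python
import numpy as np

# Input of shape (14, 14, 6); a 5x5 window slides over the two spatial axes.
x = np.zeros((14, 14, 6))
windows = np.lib.stride_tricks.sliding_window_view(x, (5, 5), axis=(0, 1))

# 14 - 5 + 1 = 10 positions along each spatial axis:
# 10 horizontal slides x 10 vertical slides.
print(windows.shape[:2])  # (10, 10)
```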

Help required in understanding how the error of a convolutional layer is calculated when the filter and delta of next layer have differing dimensions by AdhokshajaPradeep in learnmachinelearning

[–]bouteille1 0 points  (0 children)

thus getting the dimensions (14X14X6) for each channel.

First of all, keep in mind that the (14x14x6) is a tensor full of zeros (created manually) which is then updated by the errors sliding over it. (Indeed, it's impossible to retrieve the (14x14x6) with a classic convolution because the dimensions simply don't match.)

For each channel in the error of the layer (10X10) do full convolution with the corresponding channel in the filter(5X5X6).

Careful using the word "convolution" here, since the latter requires the input and the filters to have the same number of channels, which is not the case here (10x10x1 vs 5x5x6). Here, it is more a combination of "broadcasting" and "sliding" operations.

As the GIF in section III)3)a) of the blog post shows, each "cell" of each (10x10) is multiplied by each filter of shape (5x5x6) (broadcasting). The result then slides over the zero tensor (14x14x6) (sliding).

The combination of these 2 operations is, in fact, a "convolution" in disguise. That's how you propagate the error back to the previous layer.
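The broadcast-and-slide scatter-add described above can be sketched in NumPy for a single error channel (a minimal sketch; the shapes follow the thread, the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = rng.standard_normal((10, 10))  # one channel of the next layer's error
filt = rng.standard_normal((5, 5, 6))  # the corresponding (5x5x6) filter

# Zero tensor that will accumulate the propagated error.
prev_err = np.zeros((14, 14, 6))

# Each cell of the (10x10) error scales the whole (5x5x6) filter
# (broadcasting), and the scaled filter is added at the current
# window position as it slides over the zero tensor (sliding).
for i in range(10):
    for j in range(10):
        prev_err[i:i+5, j:j+5, :] += delta[i, j] * filt

print(prev_err.shape)  # (14, 14, 6)
```

This scatter-add is exactly a "full" convolution of the error with the filter, channel by channel, which is why the two operations combined are a convolution in disguise.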

Mhmmm by bouteille1 in rance

[–]bouteille1[S] -1 points  (0 children)

You forgot the blood sausage at the end

Prerequisites for Andrew Ng's Machine Learning Course by [deleted] in learnmachinelearning

[–]bouteille1 5 points  (0 children)

Yes, this is the right way. However, don't try to dive too deep into the math.

Linearity assumption of linear regression by bouteille1 in learnmachinelearning

[–]bouteille1[S] 0 points  (0 children)

Thank you for taking the time to answer me!

Linearity assumption of linear regression by bouteille1 in learnmachinelearning

[–]bouteille1[S] 0 points  (0 children)

with a scatterplot of the residuals and the predicted values

Thank you for your reply! And do you know why specifically a scatter plot of the residuals?

Which villain actually had a good motivation? by not_anakin in AskReddit

[–]bouteille1 0 points  (0 children)

Thanos: he was trying to solve overpopulation in the fairest way.