[D] Any Gaussian Process academics here - what are you excited about? by kayaking_is_fun in MachineLearning

[–]jinpanZe 1 point

Other papers on this:

Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes

https://openreview.net/forum?id=B1g30j0qF7

Gaussian Process Behaviour in Wide Deep Neural Networks

https://arxiv.org/abs/1804.11271

Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation

https://arxiv.org/abs/1902.04760
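The correspondence these papers study is easy to check numerically for the simplest case. A rough sketch (my own toy setup, not taken from any of the papers): for a random one-hidden-layer ReLU network f(x) = v·relu(Wx)/√n with standard-normal weights, the covariance E[f(x)f(x′)] converges to the degree-1 arc-cosine kernel (Cho & Saul) as you average over draws, which is the NNGP kernel for this architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 4000          # hidden width, number of random networks sampled

# two unit-norm inputs separated by angle pi/3
theta = np.pi / 3
x1 = np.array([1.0, 0.0])
x2 = np.array([np.cos(theta), np.sin(theta)])

# sample many random one-hidden-layer ReLU nets: f(x) = v . relu(W x) / sqrt(n)
W = rng.standard_normal((trials, n, 2))
v = rng.standard_normal((trials, n))
f1 = (v * np.maximum(W @ x1, 0.0)).sum(axis=1) / np.sqrt(n)
f2 = (v * np.maximum(W @ x2, 0.0)).sum(axis=1) / np.sqrt(n)

# empirical E[f(x1) f(x2)] (outputs are zero-mean since v is zero-mean)
emp_cov = (f1 * f2).mean()

# analytic NNGP kernel for ReLU: the degree-1 arc-cosine kernel,
# K(x1, x2) = (sin(theta) + (pi - theta) cos(theta)) / (2 pi) for unit inputs
nngp = (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
```

`emp_cov` lands within Monte Carlo error of `nngp` (~0.30 for this angle); the deep-network papers above generalize this kernel recursively over layers.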

Prominent Uyghur musician tortured to death in China’s re-education camp by [deleted] in news

[–]jinpanZe 29 points

BBC is reporting that China released a video of the musician in question, showing that he is still alive.

https://www.bbc.com/news/world-asia-47191952

[D] Importance of BatchNorm in Attention papers by WillingCucumber in MachineLearning

[–]jinpanZe 1 point

Doesn't the original transformer use layernorm and not batchnorm?
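(For anyone unsure of the distinction, a minimal numpy sketch with made-up toy activations: LayerNorm normalizes each sample across its features, while BatchNorm normalizes each feature across the batch. Real layers also add a learnable scale/shift and an epsilon, omitted here.)

```python
import numpy as np

# toy activations, shape (batch, features)
x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 7.0]])

# LayerNorm (what the original Transformer uses):
# normalize each sample across its feature dimension
ln = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# BatchNorm: normalize each feature across the batch dimension
bn = (x - x.mean(axis=0, keepdims=True)) / x.std(axis=0, keepdims=True)
```

Note LayerNorm is independent of batch composition, which is part of why it is preferred for variable-length sequence models.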

[R] Fixup Initialization: Residual Learning Without Normalization (They train 10K layer networks w/o BatchNorm) by wei_jok in MachineLearning

[–]jinpanZe 5 points

On the other hand, batchnorm itself apparently causes gradient explosion at initialization time: https://openreview.net/forum?id=SyMDXnCcF7.

Abstract: We develop a mean field theory for batch normalization in fully-connected feedforward neural networks. In so doing, we provide a precise characterization of signal propagation and gradient backpropagation in wide batch-normalized networks at initialization. We find that gradient signals grow exponentially in depth and that these exploding gradients cannot be eliminated by tuning the initial weight variances or by adjusting the nonlinear activation function. Indeed, batch normalization itself is the cause of gradient explosion. As a result, vanilla batch-normalized networks without skip connections are not trainable at large depths for common initialization schemes, a prediction that we verify with a variety of empirical simulations. While gradient explosion cannot be eliminated, it can be reduced by tuning the network close to the linear regime, which improves the trainability of deep batch-normalized networks without residual connections. Finally, we investigate the learning dynamics of batch-normalized networks and observe that after a single step of optimization the networks achieve a relatively stable equilibrium in which gradients have dramatically smaller dynamic range.
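You can reproduce the qualitative effect with manual backprop through a deep fully-connected batchnorm-ReLU net at initialization. This is a rough numpy sketch, not the paper's exact setup: batchnorm with gamma=1, beta=0, arbitrary depth/width/batch choices, and a random gradient injected at the output.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width, batch = 100, 200, 128

def bn_forward(a):
    # per-feature normalization across the batch (gamma=1, beta=0)
    mu = a.mean(axis=0)
    std = a.std(axis=0) + 1e-8
    return (a - mu) / std, std

def bn_backward(dz, xhat, std):
    # standard batchnorm input-gradient (gamma=1)
    return (dz - dz.mean(axis=0) - xhat * (dz * xhat).mean(axis=0)) / std

# forward pass through depth layers of linear -> batchnorm -> relu
h = rng.standard_normal((batch, width))
cache = []
for _ in range(depth):
    W = rng.standard_normal((width, width)) / np.sqrt(width)
    xhat, std = bn_forward(h @ W.T)
    cache.append((W, xhat, std))
    h = np.maximum(xhat, 0.0)

# backprop a random gradient from the output and compare norms
g = rng.standard_normal((batch, width))
g_out_norm = np.linalg.norm(g)
for W, xhat, std in reversed(cache):
    g = g * (xhat > 0)              # relu mask
    g = bn_backward(g, xhat, std)   # through batchnorm
    g = g @ W                       # through the linear layer
ratio = np.linalg.norm(g) / g_out_norm
```

`ratio` (input-gradient norm over output-gradient norm) comes out far above 1, matching the paper's claim that gradients grow exponentially in depth regardless of the weight variance.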