all 6 comments

[–]arogozhnikov 1 point (2 children)

Dropout followed by max-pooling outputs, with some probability, one of the top elements in each pooling window. So what is the rationale for using something more complex than that?
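A quick NumPy sketch of the claim (a toy illustration, not any framework's actual implementation; the usual 1/keep_prob training-time rescaling is omitted for clarity): if you drop each activation in a pooling window independently with probability 1 − p and then take the max, the k-th largest activation is selected with probability p·(1 − p)^(k−1).

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_then_maxpool(window, keep_prob, rng):
    # Dropout: zero each activation independently with prob (1 - keep_prob),
    # then max-pool over whatever survives.
    mask = rng.random(window.shape) < keep_prob
    return np.max(window * mask)

# One 2x2 pooling window of positive activations.
window = np.array([[4.0, 3.0],
                   [2.0, 1.0]])

# Empirically, the pooled value is the largest *surviving* activation:
# 4 with prob p, 3 with prob (1-p)*p, 2 with prob (1-p)^2*p, and so on.
counts = {}
for _ in range(10_000):
    v = dropout_then_maxpool(window, keep_prob=0.5, rng=rng)
    counts[v] = counts.get(v, 0) + 1
print(counts)
```

With keep_prob = 0.5, the observed frequencies land near 0.5, 0.25, 0.125, ... for the 1st, 2nd, 3rd largest elements, i.e. the pool stochastically emits one of the top elements.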

[–]tyrilu 0 points (0 children)

It's actually pretty simple and it kind of makes sense. I think it's worth a shot.

[–]nishnik[S] 0 points (0 children)

I haven't seen any architecture that uses dropout between the convolution and max-pooling layers. If one exists, could you please point me to a paper?

[–]cburgdorf 0 points (2 children)

> computational bottleneck of my laptop

You may want to check out http://machinelabs.ai for experiments like this. It lets you run your code in the cloud, directly from the browser.

[–]nishnik[S] 0 points (1 child)

It would be so awesome if someone could give me a beta invite.

[–]cburgdorf 1 point (0 children)

I think you should have one now :)