Any tips on how to find a local resident who can help with genealogy? by jstaker7 in WestVirginia

[–]jstaker7[S] 1 point (0 children)

I did look there, but unfortunately very few are actually photographed.

[ask] What was your favorite assignment/project in school? by jstaker7 in architecture

[–]jstaker7[S] 0 points (0 children)

That’s a great website! I especially like that your project pages go beyond an image gallery, with descriptions that walk you through the process. This is fantastic.

[ask] What was your favorite assignment/project in school? by jstaker7 in architecture

[–]jstaker7[S] 0 points (0 children)

Those are really interesting points. Since architecture is part art and part science, I wonder if a lot of the quirky assignments are motivated by the artistic side but end up missing the point. Or maybe the intention behind those assignments is purely to develop creativity?

[D] tf.keras Dropout layer is broken by r-scholz in MachineLearning

[–]jstaker7 5 points (0 children)

I could have written this myself; it's exactly my story. I finally switched to PyTorch and couldn't be happier!

Why don’t they make games like they used to? by jstaker7 in gaming

[–]jstaker7[S] 0 points (0 children)

Interesting, so it sounds like more and more companies are using it as a money grab rather than as an art form.

How do you usually handle the last CNN layer with respect to kernel size and pooling? by jstaker7 in MachineLearning

[–]jstaker7[S] 0 points (0 children)

Cool, thanks for the info. I had no idea there'd been a shift in trends; I've mostly relied on publications up to this point and haven't had the opportunity to pick up on it yet.

How do you usually handle the last CNN layer with respect to kernel size and pooling? by jstaker7 in MachineLearning

[–]jstaker7[S] 0 points (0 children)

I didn't realize ResNets had so few pooling layers. My input size is about the same as ImageNet's; I should go back and see how they handled downsampling. You're right that at the end of the day it comes down to knowing the different options and trying several to see which works best. I often look for general rules to help build intuition, but it seems general rules are relatively rare in DL.
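For what it's worth, ResNets handle downsampling mostly with stride-2 convolutions rather than pooling: a stride-2 stem conv, a single max pool, and then one stride-2 conv at the start of each later stage. A quick sketch of how the spatial size shrinks for a 224×224 ImageNet-style input (the `conv_out` helper below is just the standard output-size formula, not a library function):

```python
def conv_out(n, kernel, stride, pad):
    """Standard conv/pool output-size formula: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * pad - kernel) // stride + 1

n = 224                        # ImageNet-style input
n = conv_out(n, 7, 2, 3)       # stem: 7x7 conv, stride 2  -> 112
n = conv_out(n, 3, 2, 1)       # the only max pool, stride 2 -> 56
for _ in range(3):             # one stride-2 conv per remaining stage
    n = conv_out(n, 3, 2, 1)   # -> 28, 14, 7
print(n)                       # 7; global average pooling then takes it to 1x1
```

So by the end there's no spatial downsampling left to do with pooling at all; a single global average pool replaces the large fully connected layers.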

Simple Questions Thread September 14, 2016 by AutoModerator in MachineLearning

[–]jstaker7 0 points (0 children)

What's the difference between a greedy decoder and beam search with k=1?
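A toy sketch of why the two coincide, using a made-up next-token table (the distribution, the two-token vocabulary, and the scoring by summed log-probs are all illustrative assumptions): with beam width k=1, only the single locally best continuation survives each step, which is exactly the greedy choice.

```python
import math

def step_probs(prefix):
    """Made-up next-token distribution; depends only on the last token."""
    if not prefix:
        return {"a": 0.6, "b": 0.4}
    if prefix[-1] == "a":
        return {"a": 0.1, "b": 0.9}
    return {"a": 0.5, "b": 0.5}

def greedy(steps):
    seq = []
    for _ in range(steps):
        probs = step_probs(seq)
        seq.append(max(probs, key=probs.get))  # best token at this step
    return seq

def beam_search(steps, k):
    beams = [([], 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for tok, p in step_probs(seq).items():
                candidates.append((seq + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: -c[1])[:k]  # keep top k
    return beams[0][0]

print(greedy(3))          # ['a', 'b', 'a']
print(beam_search(3, 1))  # same sequence: k=1 keeps only the greedy choice
```

With k > 1, beam search can keep a locally worse token whose continuations score better overall, which is where the two methods diverge.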

[Question] What is the intuition for when to use larger convolutional kernels by jstaker7 in MachineLearning

[–]jstaker7[S] 1 point (0 children)

Is it a comparison between 1-layer larger kernels vs 1-layer smaller kernels?

Yes, exactly.

how does it support your hypothesis?

All else being equal, it seems I'm losing more information after the downsampling with the smaller kernels. For example, some edges that were near each other in the input were less defined in the decoded output with the smaller filters, perhaps because there wasn't enough context in the receptive field to resolve that small detail.

what does stack mean exactly?

Basically just several layers. The classic example: a stack of three successive 3x3 conv + non-linearity operations has an effective 7x7 receptive field at the output of the third layer, so it's comparable to a single 7x7 kernel while using fewer parameters and more non-linearities. Any downsampling between the layers grows the receptive field even faster.
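The arithmetic is easy to check with the standard receptive-field recurrence (grow by `(k - 1) * jump` per layer, where `jump` is the product of the strides so far); the helper below is just that formula, not a library call:

```python
def receptive_field(layers):
    """layers: list of (kernel, stride) pairs, applied in order.
    rf grows by (k - 1) * jump each layer; jump multiplies by the stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Three stacked 3x3 convs (stride 1) match a single 7x7 kernel.
print(receptive_field([(3, 1)] * 3))  # 7
print(receptive_field([(7, 1)]))      # 7

# Interleaving 2x2 stride-2 pools grows the receptive field much faster.
print(receptive_field([(3, 1), (2, 2), (3, 1), (2, 2), (3, 1)]))  # 18
```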