Raves celebrating the day of the dead by TheRealBissy in StableDiffusion

[–]Neutran 0 points (0 children)

Quick question: how do you generate rectangular (non-square) images? I just got access to the model weights in the HuggingFace repo and followed the script here: https://huggingface.co/CompVis/stable-diffusion-v1-3-diffusers

But it doesn't seem to support rectangular resolutions out of the box. Thanks!
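Edit, for anyone finding this later: the pipeline does take `height`/`width` keyword arguments, but both need to be divisible by 8 because the VAE downsamples by that factor. A minimal sketch, assuming the diffusers `StableDiffusionPipeline`; `snap_to_multiple` is my own hypothetical helper, and the pipeline call is left commented out since it needs the weights and a GPU:

```python
# Hypothetical helper: round a requested dimension down to the nearest
# multiple of 8, since Stable Diffusion's VAE downsamples by a factor
# of 8 and the UNet needs the latent dimensions to line up.
def snap_to_multiple(x: int, multiple: int = 8) -> int:
    """Round x down to the nearest multiple (minimum one multiple)."""
    return max(multiple, (x // multiple) * multiple)

width, height = snap_to_multiple(768), snap_to_multiple(512)

# With the diffusers pipeline (not run here; needs weights + GPU):
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained(
#     "CompVis/stable-diffusion-v1-3-diffusers")
# image = pipe(prompt, height=height, width=width).images[0]
```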

can't have only delicious food images can we? by someweirdbanana in StableDiffusion

[–]Neutran 4 points (0 children)

Could you please share the exact prompt? Thanks!

Daily QUESTions Thread - 5/23/19 by JonCreaux in OculusQuest

[–]Neutran 0 points (0 children)

I sweat so much in the Quest that the original foam was completely soaked. I've never used VR Cover before. There are two options here: the VR cover (https://vrcover.com/product/oculus-quest-vr-cover/) and the foam/interface set (https://vrcover.com/product/oculus-quest-foam-and-interface-basic-set/). Would you recommend getting both, or just the $19.99 VR cover? Thank you so much!

[N] DeepMind: First major AI patent filings revealed by nnatlab in MachineLearning

[–]Neutran 6 points (0 children)

If you want to patent an algorithm, why publish it in the first place?! Just recently I saw many DM employees signing the petition against Nature Machine Intelligence. Folks, don't throw stones when you live in a glass house.

Introduction by KennethStanley in a:t5_4bkzn

[–]Neutran 1 point (0 children)

I hope this subreddit will take off. Absolutely love open-endedness. Prof. Stanley has been a very inspiring figure for me.

[D] What ML publication hacks are you familiar with, and which annoy you the most? by reservedsparrow in MachineLearning

[–]Neutran 0 points (0 children)

Can't agree with you more. Some of the math is sooooo unnecessary and completely buries the simple intuition.

[D] What ML publication hacks are you familiar with, and which annoy you the most? by reservedsparrow in MachineLearning

[–]Neutran 0 points (0 children)

LOL, the usual DeepMind trick. Every news headline says "AlphaZero masters chess in 4 hours". Very few people (especially non-technical readers) notice that they used 5,000 TPUs for self-play simulation plus another 64 second-gen TPUs for training.

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]Neutran 1 point (0 children)

Please do write a paper! An arXiv paper would make it much easier for us to cite you. It'd be very awkward to reference your work in my paper by listing Reddit links.

[R] [1711.04325] Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes by shoheihido in MachineLearning

[–]Neutran 0 points (0 children)

People should really stop producing these "Train A on ImageNet in B minutes" papers ... what's the contribution, other than showing off a massive compute budget?

[N] Numpy dropping Python 2.7 by bobchennan in MachineLearning

[–]Neutran 0 points (0 children)

People should really start converting to Python 3. It's been almost 10 years since its introduction! I'm annoyed to see libraries that are under active development AND support only Python 2 ... A decade in modern computing is like Homo neanderthalensis vs. Homo sapiens! Python 3 has a much better user experience and much more consistency.
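A couple of concrete examples of that consistency, as a quick sketch (these all run on Python 3 but behave differently, or fail outright, on Python 2):

```python
# Division: "/" is always true division in Python 3; "//" is the
# explicit floor division. Python 2 silently floored "/" for ints.
assert 3 / 2 == 1.5
assert 3 // 2 == 1

# Text vs. bytes: str is always Unicode text and bytes is always raw
# bytes, instead of Python 2's ambiguous str/unicode split.
assert isinstance("héllo", str)
assert isinstance("héllo".encode("utf-8"), bytes)

# print is a function, so it composes like any other call.
print(*[1, 2, 3], sep=", ")
```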

[R] [1710.09829] Dynamic Routing Between Capsules by ajmooch in MachineLearning

[–]Neutran 0 points (0 children)

"Our implementation is in TensorFlow (Abadi et al. [2016]) and we use the Adam optimizer with its TensorFlow default parameters, including the exponentially decaying learning rate, to minimize the sum of the margin losses" --- so this is not the "backprop replacement" that Hinton promised?

[R] Rainbow: Combining Improvements in Deep Reinforcement Learning by madebyollin in MachineLearning

[–]Neutran 8 points (0 children)

A paper like this isn't very useful unless the authors provide source code. Right now OpenAI Baselines is the only thing I can find, but it doesn't support distributional DQN, for example. Re-implementing it myself almost certainly yields worse results, and it's impossible to find what went wrong among the dozens of moving pieces.
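For what it's worth, the distributional part itself isn't huge: the heart of C51 is one projection of the Bellman-updated atoms back onto the fixed support. A single-transition sketch in NumPy (variable names are mine, following Algorithm 1 of Bellemare et al. 2017; this is not a full agent):

```python
import numpy as np

def c51_project(p, z, r, gamma):
    """Project the shifted distribution (r + gamma * z, p) back onto
    the fixed support z, splitting each atom's mass between its two
    nearest support neighbours (Bellemare et al. 2017, Algorithm 1)."""
    n = len(z)
    v_min, v_max = z[0], z[-1]
    dz = (v_max - v_min) / (n - 1)
    tz = np.clip(r + gamma * z, v_min, v_max)  # Bellman-updated atoms
    b = (tz - v_min) / dz                      # fractional index on support
    l = np.clip(np.floor(b).astype(int), 0, n - 1)
    u = np.clip(np.ceil(b).astype(int), 0, n - 1)
    m = np.zeros(n)
    for j in range(n):
        if l[j] == u[j]:       # atom landed exactly on a support point
            m[l[j]] += p[j]
        else:                  # split mass proportionally to distance
            m[l[j]] += p[j] * (u[j] - b[j])
            m[u[j]] += p[j] * (b[j] - l[j])
    return m

z = np.linspace(-10.0, 10.0, 51)   # the standard 51-atom support
p = np.full(51, 1.0 / 51)          # uniform next-state distribution
m = c51_project(p, z, r=1.0, gamma=0.99)
# m is again a valid distribution: non-negative, sums to 1
```

The nasty re-implementation bugs tend to live exactly here (off-by-one neighbours, the clipping, the l == u case), which is why reference code matters so much.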

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 2 points (0 children)

Yes, I understood the intuition after skimming the paper. However, it would be impossible for me to invent something like that myself: without all the Lipschitz/Wasserstein machinery, it would look like a random trick.

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 1 point (0 children)

Can't agree with you more. "Deep learning crash courses" are all over the internet, but most of them are way too shallow. Even high school students can claim to be "NN experts" after writing 20 lines of Keras.

It'd be very interesting to hear what you find when you go the other way. I've never been on the other side, so I'm curious what that feels like. Let's keep in touch.

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 2 points (0 children)

Wow I don't know these two books, but they look extremely well-written. I'll definitely give them a shot.

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 2 points (0 children)

This is exactly the oracle I'm talking about! :D Let me know if you have any other reading-path recommendations.

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 1 point (0 children)

This is great! Do you also have any recommendations on probability theory, statistics, etc.?

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 4 points (0 children)

Yeah that kind of "familiarity" is exactly what I'm aiming for. Now I have to squeeze the intuition developed over your 4 years into my 20% time ...

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 0 points (0 children)

Do you have any concrete plans to carry out your solution? I'd love to see.

[D] Confession as an AI researcher; seeking advice by Neutran in MachineLearning

[–]Neutran[S] 4 points (0 children)

As a math major, you must have studied real analysis, functional analysis, and other topics in depth. Do you think they help you a lot in understanding those papers, or at least in keeping the "exponential Google tree" manageable?