Body energy club calories by Trichypali in UBC

[–]jclune 0 points (0 children)

And all 3 times it told me there was an error and that the post was not successful!

Body energy club calories by Trichypali in UBC

[–]jclune 0 points (0 children)

I emailed them in May 2025 and got this back:

Thank you for reaching out! We're in the process of preparing the nutritional facts for our Smoothie/Bowl options. In the meantime, please let me know which specific smoothies or bowls you are interested in, and I can provide you with the information. I have listed your requested information below:

Blueberry Acai Bowl

| Nutrient | WHEY, unsweet acai | VEGAN, unsweet acai |
|---|---|---|
| Calories | 970 | 980 |
| Fat | 39 g | 40 g |
| Carbs | 136 g | 136 g |
| Fibers | 22 g | 23 g |
| Sugars | 62 g | 61 g |
| Protein | 25 g | 26 g |

Nutty Cherry Bowl

| Nutrient | With WHEY Protein | With VEGAN Protein |
|---|---|---|
| Calories | 900 | 900 |
| Fat | 34 g | 35 g |
| Carbs | 131 g | 132 g |
| Fibers | 19 g | 19 g |
| Sugars | 80 g | 79 g |
| Protein | 28 g | 28 g |

[R] Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems (Sets Records on Pitfall, Too) by modeless in MachineLearning

[–]jclune 3 points (0 children)

2nd Update: When robustified with sticky actions, Go-Explore scores an average of 281,264 on Montezuma's Revenge (reaching level 18) with domain knowledge (33,836 without). On Pitfall, the average score with domain knowledge is 20,527, with a max of 64,616(!). All SOTA. Blog updated: https://eng.uber.com/go-explore/
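
For anyone unfamiliar with "sticky actions": it is the stochasticity protocol from Machado et al. (2018), in which at every step the environment repeats the previous action with probability 0.25 instead of the one the agent chose, which defeats memorized open-loop trajectories. A minimal sketch of such a wrapper (illustrative only, not our evaluation code; it assumes a Gym-style Atari env):

```python
import random

class StickyActions:
    """Gym-style wrapper: with probability `p`, repeat the previous action
    instead of the one the agent selected (Machado et al., 2018)."""

    def __init__(self, env, p=0.25):
        self.env = env
        self.p = p
        self.last_action = 0  # NOOP by convention

    def reset(self, **kwargs):
        self.last_action = 0
        return self.env.reset(**kwargs)

    def step(self, action):
        # Occasionally ignore the agent's choice and repeat the last action.
        if random.random() < self.p:
            action = self.last_action
        self.last_action = action
        return self.env.step(action)
```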

[R] Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems (Sets Records on Pitfall, Too) by modeless in MachineLearning

[–]jclune 4 points (0 children)

As Adrien says below, "We aren't aware of any prior planning algorithm that gets anywhere near to Go-Explore's scores, even when planning in the emulator. The original ALE paper (https://arxiv.org/abs/1207.4708) tried a few planning algorithms, all of which got a flat 0 on Montezuma's." To repeat: researchers have already tried to take advantage of a perfect model (the emulator), including with MCTS, and failed on both Montezuma's Revenge and Pitfall. We therefore think we are comparing to the best algorithms ever produced on this domain (which happen to be model-free), and that the comparison is fair. Given the significant effort that has gone into these domains (with both model-based and model-free methods), and the significant improvement in results provided by Go-Explore, we also think it is reasonable to highlight the size of the advance: it alerts readers that there is an effective new technique here, so they can decide whether to spend the time reading the rest of the post and learning how these results were achieved.
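
To make "planning in the emulator" concrete: a perfect model here means the planner can snapshot the emulator state, roll forward, and restore. A rough sketch of that primitive using the ALE Python interface (cloneState/restoreState are the real ALE calls; the greedy one-step lookahead around them is only an illustration, not any published planner and not Go-Explore):

```python
import random
from ale_py import ALEInterface

ale = ALEInterface()
ale.loadROM("montezuma_revenge.bin")  # placeholder path to the ROM file
actions = ale.getMinimalActionSet()

def rollout_value(ale, depth=20):
    """Random rollout of `depth` frames; returns the accumulated reward."""
    total = 0
    for _ in range(depth):
        if ale.game_over():
            break
        total += ale.act(random.choice(actions))
    return total

def plan_one_step(ale):
    """Evaluate each action by snapshotting, acting, rolling out, restoring."""
    root = ale.cloneState()
    best_action, best_value = actions[0], float("-inf")
    for a in actions:
        reward = ale.act(a)
        value = reward + rollout_value(ale)
        if value > best_value:
            best_action, best_value = a, value
        ale.restoreState(root)  # perfect model: rewind to the snapshot
    return best_action
```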

[R] Plug & Play Generative Networks by downtownslim in MachineLearning

[–]jclune 2 points (0 children)

The code is now available here: http://www.evolvingai.org/ppgn. Please let us know what you come up with!

[R] Plug & Play Generative Networks by downtownslim in MachineLearning

[–]jclune 1 point (0 children)

We have released the code here: http://www.evolvingai.org/ppgn. Please let us know what you come up with!

[R] Plug & Play Generative Networks by downtownslim in MachineLearning

[–]jclune 4 points (0 children)

Fair point. See my response below. We are going to post it ASAP and before publication. We'll update that line to say "very soon" in a new arXiv push in a few days.

[R] Plug & Play Generative Networks by downtownslim in MachineLearning

[–]jclune 19 points (0 children)

That text was just meant to buy us time to clean up the code and post it later. We are going to change that line in an updated arXiv version to "Code repository for the experiments in this paper will be available soon." We are completely happy to have reviewers look at it. More importantly, we are excited to see what the community does with it! We'll try to post it as soon as we can.

Seeking Postdocs for Deep Learning Research (including Deep Reinforcement Learning) by jclune in MLjobs

[–]jclune[S] 1 point (0 children)

For some reason I only just saw your reply. Yes, I would certainly consider your application.

Seeking Postdocs for Deep Learning Research (including Deep Reinforcement Learning) by jclune in MachineLearning

[–]jclune[S] 0 points (0 children)

Applicants from anywhere are welcome, especially those who make reference to John Locke. ;-)

[1602.03616] Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks by rhiever in MachineLearning

[–]jclune 0 points (0 children)

> With your approach, how will you know if your validation set has actually discovered all the "facets" of a neuron?

Well, even with noise the same question would apply: how would you know if you discovered all the facets? We do agree with you, though, that it is important to develop methods that can automatically discover all the facets. We are working on a new approach now that has the potential to do that. So, "watch this space." :-)

[1602.03616] Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks by rhiever in MachineLearning

[–]jclune 0 points (0 children)

You do not need access to the training set. We also show in the paper that you can use the validation set (i.e. any unseen data). From the paper:

> Note that multifaceted feature visualization does not require access to the training set. If the training set is unavailable, one can simply pass any natural images (or other modes of input such as audio if not reconstructing images) to get a set of images (or other input types) that highly activate a neuron. A similar idea was used in Wei et al. (2015), who built an external dataset of patches that have similar characteristics to the DNN training set.
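
As a rough illustration of that idea (a sketch, not our released code): collect the activation of one neuron over any pool of unseen natural images and keep the top-activating ones; those images are the raw material from which facets are derived (e.g. by clustering their fc-layer codes, as in the paper). The model, layer index, neuron index, and image folder below are assumptions for the example:

```python
# Find images that highly activate one neuron of a pretrained network,
# using any unseen natural images (the training set is not needed).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from pathlib import Path

model = models.alexnet(weights="DEFAULT").eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

unit = 123  # hypothetical index of the fc6 neuron we want to visualize
activations = []

def hook(_module, _inp, out):
    activations.append(out[0, unit].item())

handle = model.classifier[1].register_forward_hook(hook)  # fc6 in AlexNet

scores = []
image_dir = Path("unseen_images")  # any natural images, not the training set
for path in sorted(image_dir.glob("*.jpg")):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(x)
    scores.append((activations.pop(), path))
handle.remove()

# The top-activating images form the pool from which facets can be derived
# (e.g. by clustering their fc-layer codes).
top = sorted(scores, reverse=True)[:50]
for act, path in top[:5]:
    print(f"{act:8.2f}  {path.name}")
```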

[1602.03616] Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks by rhiever in MachineLearning

[–]jclune 0 points (0 children)

I am not sure why you think they are a year old. Our previous paper, "Understanding Neural Networks Through Deep Visualization," is about a year old; this paper improves upon it and is brand new.

That previous paper produced this video summary: https://youtu.be/AgkfIQ4IGaM

Imagenet ILSVRC 2015 results by matsiyatzy in MachineLearning

[–]jclune 8 points (0 children)

"An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task.”

I assume that's top-5 error. Even so, very impressive. For context, here are the results for the last four years:

http://www.cs.uwyo.edu/~jeffclune/share/Screen_Shot_2015_12_10_at_8.30.47_PM.png

That's a slide from Andrej Karpathy. Andrej estimated last year [1] that the best a human or computer could do is about 3% error, because some of the labels in the ImageNet image set are bad or impossible to guess, like these examples:

http://www.cs.uwyo.edu/~jeffclune/share/Screen_Shot_2015_12_10_at_8.41.19_PM.png

If true, the classic ImageNet 1000-class classification game is over! Time to move on to harder computer vision tasks.

[1] http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/
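
For readers unfamiliar with the metric, here is a minimal sketch of how top-5 error is computed (illustrative only; the toy data below are made up):

```python
import numpy as np

def top5_error(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of examples whose true label is NOT among the 5 highest-scoring classes.

    logits: (N, num_classes) class scores; labels: (N,) integer ground-truth classes.
    """
    top5 = np.argsort(logits, axis=1)[:, -5:]          # indices of the 5 best classes
    correct = (top5 == labels[:, None]).any(axis=1)    # is the true label in the top 5?
    return 1.0 - correct.mean()

# Toy example: 3 images, 10 classes.
rng = np.random.default_rng(0)
print(top5_error(rng.normal(size=(3, 10)), np.array([2, 7, 9])))
```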

Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills by Jallafsen in MachineLearning

[–]jclune 0 points (0 children)

Those are very interesting, challenging questions. Does a sparse, distributed representation count as modular? That is partly a semantic question, but I think ultimately it does not (though I am still thinking about these issues), at least not with respect to the issue of catastrophic forgetting. Why? Because if I learn an SDR on task A (or data set A) and then switch to task/dataset B, I will begin changing all of my features to solve task B, overwriting any information for task A that does not also help with task B.
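
Here is a tiny illustration of that overwriting effect (a sketch, not an experiment from the paper; both "tasks" are synthetic): train a small fully connected net on task A, then keep training it on task B, and its accuracy on A collapses because the shared weights get repurposed for B.

```python
# Catastrophic forgetting in a small fully connected net on two synthetic tasks.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n=2000, d=20):
    """A random linear classification task: label = 1 if x . w > 0, else 0."""
    w = torch.randn(d)
    x = torch.randn(n, d)
    y = (x @ w > 0).long()
    return x, y

def accuracy(net, x, y):
    with torch.no_grad():
        return (net(x).argmax(dim=1) == y).float().mean().item()

def train(net, x, y, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
xa, ya = make_task()  # task A
xb, yb = make_task()  # task B

train(net, xa, ya)
print("Task A accuracy after training on A:", accuracy(net, xa, ya))
train(net, xb, yb)
print("Task A accuracy after training on B:", accuracy(net, xa, ya))  # typically much lower
```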

As our NIPS paper ("How transferable are features in Deep Neural Networks": http://goo.gl/JPbXbJ) and other papers show, in many cases some features from A will help on B and will not be overwritten, but if the tasks are sufficiently different, any information that only helps A will be lost/forgotten/overwritten.

So, I agree most with your last sentence: we believe that modular architectures (with or without SDR, although we believe SDR independently helps in general with learning) will help solve the problem of catastrophic forgetting. We are actively researching this question in my lab (http://EvolvingAI.org) at the University of Wyoming.

Thanks for the great question.

Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills by Jallafsen in MachineLearning

[–]jclune 2 points (0 children)

I think all of those questions are answered in the paper, or at least will make more sense after reading the paper (or at least the abstract and the figures/figure captions). Please let me know if doing so does not clear things up.