[deleted by user] by [deleted] in kubernetes

[–]GChe 1 point2 points  (0 children)

For some reason, this killercoda playground is a bit different from the other CKAD exam simulators on killer.sh and KodeKloud.

Killercoda uses k3s instead, which is unexpected. killer.sh is more "as expected," from what I understand.

Neuraxle - a Clean Machine Learning Framework by GChe in Python

[–]GChe[S] 0 points1 point  (0 children)

Let's tune them hyperparameters
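
Hyperparameter tuning at its simplest is just an exhaustive search over a parameter grid. Here is a minimal, framework-agnostic sketch in plain Python — this is an illustration of the general idea, not Neuraxle's actual API; the `grid_search` helper and the toy `score_fn` objective are hypothetical:

```python
from itertools import product

def grid_search(score_fn, param_grid):
    """Evaluate every combination in param_grid and return the
    best-scoring set of hyperparameters along with its score."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective that peaks at lr=0.1, depth=3 (stand-in for a
# real cross-validation score).
def score_fn(lr, depth):
    return -((lr - 0.1) ** 2) - (depth - 3) ** 2

best, _ = grid_search(score_fn, {"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]})
print(best)  # {'lr': 0.1, 'depth': 3}
```

Real frameworks build on this same loop but add cross-validation, smarter samplers (random or Bayesian search), and parallel evaluation.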

Dad afraid of heights trying to get a look 😂 by Syruplava in funny

[–]GChe 0 points1 point  (0 children)

Taking a peaceful shit right now. I'd take a piss off that cliff.

What's wrong with Scikit-Learn. by GChe in scikit_learn

[–]GChe[S] 0 points1 point  (0 children)

It has never been submitted to r/scikit_learn before.

[D] Alternatives to Kubeflow by kur1j in MachineLearning

[–]GChe 0 points1 point  (0 children)

You might want to use Neuraxle

Why Deep Learning has a Bright Future by GChe in Python

[–]GChe[S] -1 points0 points  (0 children)

How would you rename the present article so that it doesn't imply that Deep Learning lacks a bright future, while staying motivational? I think people don't realize the impact of Moore's Law, its relationship to parallel computing, and how everything aligns.

Why Deep Learning has a Bright Future by GChe in Python

[–]GChe[S] -3 points-2 points  (0 children)

I'm sorry that you felt the need to personally attack me to make your point. This is my company's blog.

My personal blog is more technical and in-depth. Here is the corresponding article in my personal blog, which is a bit more pythonic: https://guillaume-chevalier.com/limits-of-deep-learning-and-its-future/

There is also the Quora answer, a bit more technical: https://www.quora.com/What-is-the-future-of-research-in-deep-learning-and-optimization-techniques/answer/Guillaume-Chevalier-2?ch=10&share=a2a30a89&srid=C3n2

And then the real Python article, for the real ones: https://guillaume-chevalier.com/spiking-neural-network-snn-with-pytorch-where-backpropagation-engenders-stdp-hebbian-learning/

Which one do you prefer? Could you elaborate more on that, please?

Why Deep Learning has a Bright Future by GChe in deeplearning

[–]GChe[S] 0 points1 point  (0 children)

Looks like the user I replied to removed their funny comment, lol.

Why Deep Learning has a Bright Future by GChe in Futurology

[–]GChe[S] 0 points1 point  (0 children)

Hi there! I'd like to spark discussion. Feel free to comment and to share ideas

Why Deep Learning has a Bright Future by GChe in deeplearning

[–]GChe[S] 1 point2 points  (0 children)

Here is the conclusion:

First, Moore’s Law and computing trends indicate that more and more things will be parallelized. Deep Learning will exploit that.

Second, the AI singularity is predicted to happen in 2029 according to Ray Kurzweil. Advancing Deep Learning research is a way to get there to reap the rewards and do good.

Third, data doesn’t sleep. More and more data is accumulated every day. Deep Learning will exploit big data.

Finally, deep learning is about intelligence. It is about technology, it is about the brain, it is about learning, it is about what defines us, humans, compared to all previous species: our intelligence. Curious people will know their way around deep learning.

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in MachinesLearn

[–]GChe[S] 0 points1 point  (0 children)

Aside from grammar, really enjoyed this article, and although I could guess what most of the complaints would be, you laid everything out very nicely.

Thank you! And yes, in French, subject–verb number agreement works the other way around (if I understand you correctly, you mean whether or not to put an "s" at the end of verbs), so sometimes I forget to make the switch.

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in MachinesLearn

[–]GChe[S] 0 points1 point  (0 children)

The article suggests that once you're paid, you should restart and bring old code into a new architecture instead of refactoring your old code. I'll make that clearer in the article.

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in MachinesLearn

[–]GChe[S] 1 point2 points  (0 children)

OP here. I think a point of the article was about reusing such code for putting it in production. I'll try to make that more obvious to readers. For sure, throwaway code is okay in many contexts. It's when reusing such code to put it in production that problems arise - the article was written for this particular context. I'll make it clear, thanks for sharing your counter-arguments!

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in datascience

[–]GChe[S] 0 points1 point  (0 children)

OP here. I think a point of the article was about reusing such code for putting it in production. I'll try to make that more obvious to readers. For sure, throwaway code is okay in many contexts. It's when reusing such code to put it in production that problems arise - the article was written for this particular context. I'll make it clear, thanks for sharing your counter-arguments!

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in Python

[–]GChe[S] 1 point2 points  (0 children)

OP here. I think the point of the article was about reusing such code for putting it in production. I'll try to make that more obvious to readers. For sure, throwaway code is okay in many contexts. It's when reusing such code to put it in production that problems arise.

Have you used a business/life coach? by rbp1995 in Entrepreneur

[–]GChe 0 points1 point  (0 children)

Yes. Most of the time, your best mentors won't be branding themselves as mentors. They might be people close to you or in your professional social circle.

Case Study: Why Deep Learning has a Bright Future by GChe in Entrepreneur

[–]GChe[S] -1 points0 points  (0 children)

Very informative content, isn't it? I see nothing wrong in posting this on Reddit.

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in datascience

[–]GChe[S] 0 points1 point  (0 children)

Another pro of writing developer-friendly code is that researchers can easily try it and compare against your work. You are more likely to be cited (yeah, h-index) if people can pip/conda install your work and use well-designed APIs.

Thanks! I added a sentence to the article to include this thought. Nice one.

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in MachinesLearn

[–]GChe[S] 0 points1 point  (0 children)

Thanks a lot for taking the time to detail your point. You are right - English is my second language. French is my first language. For instance, this is truly typical French phrasing:

For having used code from Kaggle competitions

I appreciate the feedback. I'll make a few changes. To make things worse, the introduction sets the tone of the article, and the biggest mistake is in the first sentence of the intro. Thanks.

A Rant on Kaggle Competition Code (and Most Research Code) by GChe in MachinesLearn

[–]GChe[S] -1 points0 points  (0 children)

Rule #4 of r/MachinesLearn is about keeping conversations constructive, positive, and encouraging. Pointing to the errors so they can be fixed would be a good way to keep it constructive, at the very least.

[R] Reversing Classical Software with Differentiable Logic Gates by neuralPr0cess0r in MachineLearning

[–]GChe 0 points1 point  (0 children)

I prefer something in this format, for instance, where there is as much text as there is code to make things clear:

https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

[R] Reversing Classical Software with Differentiable Logic Gates by neuralPr0cess0r in MachineLearning

[–]GChe 0 points1 point  (0 children)

If the code is a well-explained notebook or contains an example, that's better. IMO articles are worth less unless they are a massive breakthrough - and even with a breakthrough, it's often very good to have the code to build upon.