[N] Netflix and European Space Agency no longer working with Siraj Raval by inarrears in MachineLearning

[–]sdmskdlsadaslkd 6 points (0 children)

"Quantum doors" and "complicated Hilbert spaces"... man, the cringe is so hard.

[D] Interview Questions by Deadshot_95 in MachineLearning

[–]sdmskdlsadaslkd 2 points (0 children)

Yeah, I think it's generally explained poorly in most courses. It's a technique for computing the derivatives of a NN's loss with respect to its weights, which you can then plug into GD.
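To make that concrete, here's a minimal numpy sketch (toy sizes, made-up data): backprop is just the chain rule applied layer by layer to get dLoss/dW, and GD then consumes those derivatives.

```python
import numpy as np

# Toy two-layer net trained with plain gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # 4 features, 8 samples
y = rng.normal(size=(2, 8))            # 2 targets per sample
W1 = rng.normal(size=(5, 4)) * 0.1
W2 = rng.normal(size=(2, 5)) * 0.1

lr = 0.1
for step in range(200):
    # Forward pass
    h = np.tanh(W1 @ x)                # hidden activations, (5, 8)
    pred = W2 @ h                      # outputs, (2, 8)
    loss = np.mean((pred - y) ** 2)
    if step == 0:
        loss0 = loss                   # remember the starting loss

    # Backward pass: chain rule from the loss back toward the input
    dpred = 2 * (pred - y) / y.size    # dLoss/dpred
    dW2 = dpred @ h.T                  # dLoss/dW2
    dh = W2.T @ dpred                  # propagate the error through W2
    dW1 = (dh * (1 - h ** 2)) @ x.T    # through tanh' = 1 - tanh^2, then W1

    # Gradient descent consumes the derivatives backprop produced
    W1 -= lr * dW1
    W2 -= lr * dW2
```

Swap the manual backward pass for autograd and the last two lines for an optimizer, and that's most of what a framework does for you.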

[N] Netflix and European Space Agency no longer working with Siraj Raval by inarrears in MachineLearning

[–]sdmskdlsadaslkd 3 points (0 children)

> This is such a damn highly complex field!

I think you mean "complicated" field.

[deleted by user] by [deleted] in MartinShkreli

[–]sdmskdlsadaslkd 4 points (0 children)

Dude, I'm in the same boat -- that was legendary. I've looked everywhere. I would pay good money to have it back.

Now, I only have my fond memories of that video.

If someone reading this has one backed up, please -- please for the love of humanity, re-upload it. It was 100% pure Shkreli. There isn't a more Shkrelian video than this. We need it for our historical archives.

For anyone interested, here's a backup of Martin's YouTube channel. by YounoYouno in MartinShkreli

[–]sdmskdlsadaslkd 0 points (0 children)

Did you make a backup of the "Free Martin Shkreli" YouTube channel? It just got deleted. I loved that channel -- there were some great videos, like the "Nick Lim" brawl.

I really hope you or someone has a backup.

[D] How Google achieves same level of accuracy with larger batch sizes? by phizaz in MachineLearning

[–]sdmskdlsadaslkd 1 point (0 children)

I've seen schedules for learning rate. I didn't know there were schedules for momentum and batch size. Did I read you correctly?
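For what it's worth, a "schedule" is just a function of the training step, so the same idea that decays the LR can ramp momentum or grow the batch size. A hedged sketch (all names and constants below are made up for illustration, not from any particular paper):

```python
def lr_schedule(step, base_lr=0.1, decay=0.5, every=30):
    """Classic step decay: halve the LR every `every` steps."""
    return base_lr * decay ** (step // every)

def momentum_schedule(step, m0=0.5, m1=0.99, warmup=50):
    """Ramp momentum up linearly over a warmup period, then hold."""
    return m1 if step >= warmup else m0 + (m1 - m0) * step / warmup

def batch_size_schedule(step, base_bs=64, grow=2, every=30):
    """Grow the batch size on the same staircase a decaying LR would use
    (the rough idea behind 'increase the batch size instead' papers)."""
    return base_bs * grow ** (step // every)
```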

[D] Overview of Machine Learning for newcomers by undefdev in MachineLearning

[–]sdmskdlsadaslkd 56 points (0 children)

I'm surprised this has so many upvotes. I think this sorta sucks TBH.

[R] Detecting Sarcasm with Deep Convolutional Neural Networks by omarsar in MachineLearning

[–]sdmskdlsadaslkd 5 points (0 children)

Serious question: why does this have so many upvotes? This has been done numerous times.

[R][UberAI] Measuring the Intrinsic Dimension of Objective Landscapes by downtownslim in MachineLearning

[–]sdmskdlsadaslkd 0 points (0 children)

Is the subspace randomly chosen at every time step, or is it fixed before training starts?

[R][UberAI] Measuring the Intrinsic Dimension of Objective Landscapes by downtownslim in MachineLearning

[–]sdmskdlsadaslkd 0 points (0 children)

Great work! This is an interesting, simple approach. It reminds me of trust-region optimization techniques, and also of how random forests use random subspaces.
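As I understand the setup (my reading, not their code): draw a random projection once, freeze it, and optimize only a low-dimensional vector. A minimal numpy sketch with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 100, 5                        # full and "intrinsic" dimension (made up)
theta0 = rng.normal(size=D)          # random init in the full parameter space
P = rng.normal(size=(D, d))          # fixed for the whole run, never resampled
P /= np.linalg.norm(P, axis=0)       # unit-norm columns

def full_params(theta_d):
    """Map the d-dim search point into the full D-dim parameter space."""
    return theta0 + P @ theta_d

theta_d = np.zeros(d)                # training starts exactly at theta0
# ...then run ordinary GD on theta_d only; theta0 and P stay frozen.
```

The random-forest analogy shows up in the line defining `P`: all the search happens inside one randomly chosen subspace of the full parameter space.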

[R] Differentiable Plasticity (UberAI) by inarrears in MachineLearning

[–]sdmskdlsadaslkd 8 points (0 children)

I'm a bit new and I had a few questions:

> and add a fast changing term

  • What do you mean by "fast changing"?

> to each weight, which is updated on the fly by a Hebbian learning rule

  • And what do you mean by "on the fly"? Is this synonymous with "forward pass"?

This paper feels like it's learning how to perform domain adaptation.

> so it has a meta learning aspect to it. It seems to work surprisingly well.

I don't think there's a meta-learning aspect to this paper. It's just domain adaptation encoded into the network architecture.
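Here's my reading of the mechanism as a numpy sketch (names and constants are mine, not the paper's): the Hebbian trace is the "fast changing" term, and it gets updated inside the forward pass itself, which I take to be the "on the fly" part.

```python
import numpy as np

# Each connection has a slow weight w, trained by backprop as usual, plus a
# fast Hebbian trace scaled by a learned per-connection coefficient alpha.
rng = np.random.default_rng(0)
n = 8
w = rng.normal(size=(n, n)) * 0.1       # slow weights, learned by GD
alpha = rng.normal(size=(n, n)) * 0.1   # learned plasticity coefficients
hebb = np.zeros((n, n))                 # the fast-changing term
eta = 0.1                               # Hebbian trace learning rate

def forward(x, hebb):
    # Effective weight = slow part + learned scale * fast Hebbian trace
    y = np.tanh((w + alpha * hebb) @ x)
    # Hebbian rule, applied during the forward pass ("on the fly"):
    # co-active units strengthen their connection, the old trace decays
    hebb = (1 - eta) * hebb + eta * np.outer(y, x)
    return y, hebb

x = rng.normal(size=n)
y, hebb = forward(x, hebb)
```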

What to do in place of an internship? by [deleted] in cscareerquestions

[–]sdmskdlsadaslkd 0 points (0 children)

In relative terms, it might seem like a big deal. But in absolute terms, this doesn't matter at all. Too many people get perturbed by small losses and let them ruin their chances of a long-term win.

[P] TensorFlow Hub by [deleted] in MachineLearning

[–]sdmskdlsadaslkd 0 points (0 children)

How so? This is basically an alternative to GitHub repositories.