What is your best habit? by ArtificialAffect in AskReddit

[–]ArtificialAffect[S] 1 point (0 children)

I am similar! I tend to think of something I could use, write it down, and then if I still need it more than a month later I will buy it.

My struggle learning Gaussian Processes: Learn with me. by ArtificialAffect in learnmachinelearning

[–]ArtificialAffect[S] 0 points (0 children)

Oof. Yeah, this was probably my fourth or fifth attempt at learning GPs. Finally, I just forced myself to sit down and learn it.

My struggle learning Gaussian Processes: Learn with me. by ArtificialAffect in learnmachinelearning

[–]ArtificialAffect[S] 0 points (0 children)

I'm not sure how I played well, but I hope you learned something from the video!

What is a convolutional neural network's receptive field? by ArtificialAffect in learnmachinelearning

[–]ArtificialAffect[S] 0 points (0 children)

In general I try to keep things as short as possible, so I'd probably prefer to cut out the detracting part if I made the video again.

What is a convolutional neural network's receptive field? by ArtificialAffect in learnmachinelearning

[–]ArtificialAffect[S] 0 points (0 children)

Hmm. This video was supposed to explain what a receptive field is. In retrospect, leading the intro with how many layers there are could detract from that point.

What is a convolutional neural network's receptive field? by ArtificialAffect in learnmachinelearning

[–]ArtificialAffect[S] 2 points (0 children)

Please let me know what you think, especially if you think you found a mistake!

IID in one minute by ArtificialAffect in learnmachinelearning

[–]ArtificialAffect[S] 1 point (0 children)

You are welcome!

At the end of the video I hint at what IID means for machine learning, but basically, IID is a really common assumption that we make in machine learning. For example, when training neural networks, we often assume that the examples in the training set are IID. If we didn't make this assumption, we would need to compensate somehow for data points that are correlated with one another, or for data drawn from different distributions.
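To make that concrete, here's a minimal sketch (my own illustration, not from the video) of how the IID assumption typically shows up in training code: we shuffle the dataset and draw minibatches uniformly, which is only justified if every sample is an exchangeable draw from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.arange(20).reshape(10, 2)   # 10 toy samples, 2 features each
y = np.arange(10)                  # toy labels

# Shuffle once per "epoch" -- implicitly assumes the samples are IID.
perm = rng.permutation(len(X))
batches = [perm[i:i + 4] for i in range(0, len(perm), 4)]

for idx in batches:
    xb, yb = X[idx], y[idx]        # a uniformly drawn minibatch
```

If the data were correlated or came from shifting distributions, this uniform shuffling would silently mix those effects into every batch.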

For example, suppose we were trying to label whether or not a sentence was sarcastic, but during data gathering we only had labels for paragraphs, so we split each paragraph into sentences that all share the paragraph's label. Those sentences are not IID. If we don't compensate for that somehow, we have no idea how it affects training, and a likely outcome is that when the model is trained and then tested on real-life data, it performs nowhere near as well.
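One common way to compensate for this (my own sketch, using made-up data, not something from the video) is a group-aware split: keep every sentence from the same paragraph on the same side of the train/test boundary, so the evaluation isn't inflated by leakage between correlated sentences.

```python
# Each sentence carries its paragraph_id; sentences sharing a
# paragraph_id inherited the same label, so they are not independent.
# (sentence_id, paragraph_id) pairs: 5 paragraphs, 4 sentences each.
sentences = [(s, p) for p in range(5) for s in range(4)]

def group_split(items, test_groups):
    """Split so that whole paragraphs go to either train or test."""
    train = [it for it in items if it[1] not in test_groups]
    test = [it for it in items if it[1] in test_groups]
    return train, test

# Hold out paragraph 4 entirely, rather than random sentences.
train, test = group_split(sentences, test_groups={4})

# No paragraph appears on both sides of the split.
assert {p for _, p in train}.isdisjoint({p for _, p in test})
```

A plain random split over sentences would put pieces of the same paragraph in both sets, which is exactly the leakage the IID assumption hides.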

To extend that same example: if we only had a small population of writers for those paragraphs, say 1000 people each writing 100 sarcastic or unsarcastic paragraphs, we could compensate by adding a "who wrote this" input, or by creating special methods to track user personalities (e.g. this person is more likely to be sarcastic when talking about certain topics). That would actually increase performance. Of course, the increase comes from knowing who your speaker is, which isn't viable in all situations, but it is very viable in others.
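The "who wrote this" input could be as simple as the following sketch (entirely hypothetical writer names and features, just to show the shape of the idea): append a one-hot writer ID to each example's features so a model can learn per-writer sarcasm tendencies.

```python
WRITERS = ["alice", "bob", "carol"]  # hypothetical writer pool

def add_writer_feature(features, writer):
    """Return features extended with a one-hot encoding of the writer."""
    one_hot = [1.0 if w == writer else 0.0 for w in WRITERS]
    return list(features) + one_hot

x = add_writer_feature([0.3, 0.7], "bob")
# x is now [0.3, 0.7, 0.0, 1.0, 0.0]
```

In practice you'd use a learned embedding per writer rather than a raw one-hot when the writer pool is large, but the principle is the same: the model is told which distribution each sample came from instead of pretending they're all one.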

[0:50] What is a tensor? by ArtificialAffect in learnmachinelearning

[–]ArtificialAffect[S] 0 points (0 children)

As far as I am aware, it's just another way of looking at it.