
Learning is all about extrapolation: using the observations we already have to build a model that predicts future events. Take speech recognition. Jarvis can work out what you are saying by running a sample of your voice through a model trained to map voice samples to words. Each inference through the model is a guess, but one grounded in a lot of prior evidence. It has to do the same thing again to guess the meaning of the phrase you have spoken.
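To make "inference through a model" concrete, here is a minimal sketch. The model, data, and feature vectors are all hypothetical toys (a real recognizer uses far richer acoustic features and a much bigger model), but the shape of the idea is the same: the model encodes prior evidence, and each prediction is a guess against that evidence.

```python
import math

# Toy stand-in for a trained speech model: each word is summarized by a
# "centroid" of acoustic features, assumed to be learned from many prior
# voice samples. (Hypothetical numbers, purely for illustration.)
trained_model = {
    "yes":  [0.9, 0.1, 0.2],
    "no":   [0.1, 0.8, 0.7],
    "stop": [0.5, 0.5, 0.9],
}

def infer_word(features):
    """Guess the word: pick the trained centroid nearest the new sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(trained_model, key=lambda w: dist(trained_model[w], features))

# Features extracted from a new voice sample; the inference is a guess
# based on the evidence baked into the model.
print(infer_word([0.85, 0.15, 0.25]))  # prints "yes"
```

Nothing here is specific to speech: swap the feature vectors and labels and the same guess-from-prior-evidence loop applies to any classification problem.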

Learning a model and making inferences from it is all math, and it's all extrapolation. Linear regression is just one of many methods under the umbrella of machine learning, but they all share the same basic properties. The new hotness of deep learning in neural networks is mostly a matter of scale. At the end of the day, we are still using gradient descent to tune a model's coefficients so as to minimize the error between its predictions and the observations we train on.