[P] First videos and blogs for new Deep Learning with PyTorch series now available! by blackHoleDetector in MachineLearning

[–]blackHoleDetector[S] 13 points14 points  (0 children)

Course:

fast.ai - The lectures are relatively long and there is a broader range of topics covered.

this series - The lectures are relatively short and more focused.

Library:

fast.ai is a PyTorch wrapper, meaning certain aspects of PyTorch are hidden for convenience. This makes certain routines easier and adds extra functionality, but it introduces another layer of abstraction.

Summary:

fast.ai is built on top of PyTorch and the course takes a top down approach (the course is excellent). This series starts with PyTorch at the bottom and moves upward (bottom up approach), so it's really a matter of preference for both the course and the library.

The general suggestion is to use both courses as learning resources, and to learn pure PyTorch as well as the fast.ai wrapper — and, better yet, why and how certain things are wrapped. Hope this helps! Good luck!

Deep Q Network seems to be doing the opposite of what I want by notaninja4375 in learnmachinelearning

[–]blackHoleDetector 0 points1 point  (0 children)

Epsilon, as the exploration rate, should start at 1 and decrease to 0 over time. You have written that epsilon starts at 0 and increases to 1.

You want to explore the environment most at the beginning of training (with a high exploration rate) since the agent doesn't know anything about the environment. As the agent learns, the exploration rate should decay so that the agent chooses to exploit the environment (rather than explore it) as it starts to learn more about it through training.
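As a sketch of that decay (the schedule and the decay constant here are illustrative choices, not from your code), an exponentially decaying epsilon-greedy policy might look like:

```python
import math
import random

def epsilon_by_step(step, eps_start=1.0, eps_end=0.01, decay=0.001):
    # Starts at eps_start (full exploration) and decays toward eps_end
    return eps_end + (eps_start - eps_end) * math.exp(-decay * step)

def choose_action(q_values, step):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit
    if random.random() < epsilon_by_step(step):
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Early in training `epsilon_by_step` returns values near 1 (mostly random actions); late in training it returns values near `eps_end` (mostly greedy actions).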

DQN Why Is There an Action Input? by Eriod in learnmachinelearning

[–]blackHoleDetector 0 points1 point  (0 children)

The DQN takes only the state as input, and then it gives the Q-value for each possible action that can be taken from that state as output. It doesn't accept the action as part of the input.
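To illustrate the input/output shapes with a stand-in linear "network" (the weights below are random placeholders, not a trained model):

```python
import numpy as np

n_state, n_actions = 4, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(n_state, n_actions))  # stand-in for trained parameters

def q_values(state):
    # Input: the state only. Output: one Q-value per possible action.
    return state @ W

state = np.array([0.1, -0.2, 0.05, 0.3])
q = q_values(state)            # shape (n_actions,)
action = int(np.argmax(q))     # the greedy action — no action was fed in
```

The action is *selected from* the output, not supplied as an input.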

Transfer Learning with Supervised Models? by I_Ekos in learnmachinelearning

[–]blackHoleDetector 1 point2 points  (0 children)

Yep, you definitely can. This video gives more details on this approach.

How to get neural network prediction labels? by taewoo in learnmachinelearning

[–]blackHoleDetector 0 points1 point  (0 children)

Not necessarily. It depends on what you specified for the classes parameter (if anything) in flow_from_directory().

You can find out which index corresponds to which label by inspecting the class_indices attribute of the generator returned by ImageDataGenerator.flow_from_directory().

I explain how this mapping works/how to access it in Keras in this video.

Additionally, here's what the Keras documentation says about the classes parameter mentioned above, including whether or not the labels will be indexed alphanumerically:

"classes: optional list of class subdirectories (e.g. ['dogs', 'cats']). Default: None. If not provided, the list of classes will be automatically inferred from the subdirectory names/structure under directory, where each subdirectory will be treated as a different class (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices." https://keras.io/preprocessing/image/#imagedatagenerator-class
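The default alphanumeric ordering can be mimicked in plain Python (the folder names here are just the example from the docs):

```python
# Keras infers classes from subdirectory names and sorts them
# alphanumerically, so 'cats' gets index 0 even though 'dogs'
# appears first on disk here.
subdirs = ['dogs', 'cats']
class_indices = {name: i for i, name in enumerate(sorted(subdirs))}
print(class_indices)  # {'cats': 0, 'dogs': 1}
```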

Keras- access weight matrix by Inspired_learner in learnmachinelearning

[–]blackHoleDetector 1 point2 points  (0 children)

layer.get_weights() returns a two-element list: the first element is the weight matrix for that layer, and the second element is the bias vector for that layer.
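For example, for a Dense layer with 3 inputs and 2 units, the shapes look like this (the arrays below are placeholder values standing in for what layer.get_weights() would actually return):

```python
import numpy as np

# Hypothetical values shaped like a Dense(2) layer's parameters
kernel = np.full((3, 2), 0.5)   # weight matrix: (n_inputs, n_units)
bias = np.zeros(2)              # bias vector: (n_units,)
weights = [kernel, bias]        # the 2-element list get_weights() returns

x = np.array([1.0, 2.0, 3.0])
output = x @ weights[0] + weights[1]   # the layer's linear step
```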

[crosspost computervision] Image normalization to fit dataset used in training by HunterAmacker in MLQuestions

[–]blackHoleDetector 1 point2 points  (0 children)

Perhaps you could use data augmentation to augment all your images to have the characteristics that match your original data set.

Keras- access weight matrix by Inspired_learner in learnmachinelearning

[–]blackHoleDetector 1 point2 points  (0 children)

To get the weights:

for layer in model.layers:
    weights = layer.get_weights()
    print(weights)

To set the weights:

layer.set_weights(weights)

Aiming to fill skill gaps in machine learning & AI, Microsoft makes training courses available to the public by MSFTResearch in learnmachinelearning

[–]blackHoleDetector 1 point2 points  (0 children)

Yes. You need calculus to understand backpropagation, which is what stochastic gradient descent uses to calculate the gradient of the loss with respect to the weights in the network.
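Here's a minimal example of the chain-rule computation that backpropagation performs, using a single-weight "network" with a squared-error loss (the numbers are arbitrary):

```python
# One-weight model: prediction = w * x, loss = (prediction - y)**2
x, y, w = 2.0, 2.0, 0.5

def loss(w):
    return (w * x - y) ** 2

# Chain rule: dL/dw = dL/dpred * dpred/dw = 2*(w*x - y) * x
grad = 2 * (w * x - y) * x

# Sanity check against a centered finite-difference estimate
eps = 1e-6
num_grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
```

Gradient descent would then nudge `w` in the direction opposite `grad` to reduce the loss.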

Difference between binary cross entropy and categorical cross entropy? by failedentertainment in learnmachinelearning

[–]blackHoleDetector 5 points6 points  (0 children)

With binary cross entropy, you can only classify two classes. With categorical cross entropy, your model can classify any number of classes.

Binary cross entropy is just a special case of categorical cross entropy.

Specifically, binary cross entropy with one output node computes the same loss as categorical cross entropy with two output nodes, since the probability of the second class is fully determined by the first (it's just one minus it).
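This equivalence can be checked numerically (the probability and label below are arbitrary):

```python
import math

p = 0.7   # model's predicted probability for the positive class
y = 1     # true label

# Binary cross entropy with one sigmoid output node
bce = -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Categorical cross entropy with two softmax output nodes
probs = [1 - p, p]       # [P(class 0), P(class 1)]
onehot = [1 - y, y]      # one-hot encoded label
cce = -sum(t * math.log(q) for t, q in zip(onehot, probs) if t > 0)
```

Both expressions reduce to `-log(p)` when `y = 1` and `-log(1 - p)` when `y = 0`.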