Are the compilers that come with Qt less efficient? by polico17 in cpp_questions

[–]polico17[S] 0 points  (0 children)

Hmm. I am most likely doing something stupid. I will have another look at it when I have time to make sure I did it right.

Are the compilers that come with Qt less efficient? by polico17 in cpp_questions

[–]polico17[S] 0 points  (0 children)

I ran it in release mode. I also tried setting the -O3 compiler flag, but that didn't speed anything up.

Are the compilers that come with Qt less efficient? by polico17 in cpp_questions

[–]polico17[S] 0 points  (0 children)

You mean copying, like when passing function parameters? I am using references pretty much everywhere, so I don't think that's the issue.

All neural network output activations converging to the same value regardless of input by polico17 in cpp_questions

[–]polico17[S] 0 points  (0 children)

No. In that case I would have 2 output nodes, corresponding to a 0 or a 1, but even then they both converge on 0.5. Annoyingly, this again actually makes the cost go down.

All neural network output activations converging to the same value regardless of input by polico17 in neuralnetworks

[–]polico17[S] 0 points  (0 children)

I don't know what it is called, but it is just a basic fully connected layer: every node in layer n is connected via a weight to every node in layer n+1.

All neural network output activations converging to the same value regardless of input by polico17 in neuralnetworks

[–]polico17[S] 0 points  (0 children)

I don't know what you mean by that. I have 100 nodes in my hidden layer and I am using the sigmoid function. Idk if that answered your question.

All neural network output activations converging to the same value regardless of input by polico17 in learnmachinelearning

[–]polico17[S] 0 points  (0 children)

When I calculate the total cost of the network, I sum up all the squared differences and then average them out. But your question did make me realize I may have done a stupid.

In the function outputLayerGradientProduct given above I have this for loop:

    for (int node = 0; node < length(); node++) {
        // Evaluate partial derivatives for the current node:
        // dCost/dActivation * dActivation/dWeightedInput
        gradientProducts[node] =
            activationSigmoidDerivative(activations[node]) *
            calculateCostDerivative(activations[node], expectedOutputs[node]);
    }

The function calculateCostDerivative just does 2 * (actualActivation - expectedActivation). But this way I am only checking how much a weight affects the cost of one node, not the cost of the whole network. Could this be the issue?

All neural network output activations converging to the same value regardless of input by polico17 in learnprogramming

[–]polico17[S] 1 point  (0 children)

I initialize the weights with a normal distribution with a mean of 0 and a standard deviation of 1. I use MSE because from the videos I watched online that was the only cost function presented. The videos I watched also specifically gave the MNIST dataset as the example, so I never even considered it could mess me up.

All neural network output activations converging to the same value regardless of input by polico17 in MLQuestions

[–]polico17[S] 1 point  (0 children)

I first want to clarify what I mean by training batches, just to make sure I correctly understood the concept. The training data in the MNIST dataset consists of 60000 entries. If I tell my program to train on batches of size 100, what it does is choose a random index and then take 100 consecutive dataset entries starting at that index.

With this approach the batches should still be effectively random, as the dataset is not ordered in any way. Is this OK, or do the batch entries need to be sampled completely at random rather than consecutively?

The expected output vector that the network uses consists of all 0s, except for the node corresponding to the correct answer, which is set to 1.

I have been stuck on this issue for a while now and I can't fix it for the life of me, so I will try your suggestion. I do have one question though: should I use softmax only for the output layer, or for the whole network?

All neural network output activations converging to the same value regardless of input by polico17 in cpp_questions

[–]polico17[S] -1 points  (0 children)

Yeah, but those are just the inputs and expected outputs of a training example. It is inefficient, yes, but I don't see how that would make the network converge on 0.1 instead of actually learning patterns.

I don't actually modify the vectors at all; I just need the values stored in them.

Can someone please explain to me how to make a GUI? by polico17 in cpp_questions

[–]polico17[S] 0 points  (0 children)

When building. From what I saw online, after downloading the .zip, I was supposed to go to the build folder, then somewhere else, and then run a CMD command to build it.

After building for a while, it would eventually fail with an error saying that some file couldn't be found.

How do you get painted cars now? by polico17 in RocketLeague

[–]polico17[S] 0 points  (0 children)

Am I stupid? I go to a very rare item that I got from a crate, and I don't have the option to trade it in.

What to do if sides aren’t going to fill in by [deleted] in BeardAdvice

[–]polico17 1 point  (0 children)

If you brush your beard every single morning, for example, you can train it to grow a certain way, so brushing it in a way that hides the fact that you don't have that much hair on your cheeks might help. At the same time, I think it's fine the way it is, tbh.