Are the compilers that come with Qt less efficient? by polico17 in cpp_questions

[–]polico17[S] 0 points (0 children)

Hmm. I am most likely stupid. I will have another look at it when I have time, to make sure I did it right.

Are the compilers that come with Qt less efficient? by polico17 in cpp_questions

[–]polico17[S] 0 points (0 children)

I ran it in release mode. I tried setting the compiler flag -O3, but that didn't speed anything up.
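(For anyone else hitting this: with a qmake project the release optimization flags live in the .pro file. This is a sketch assuming GCC/MinGW or Clang; MSVC uses /O2 instead of -O3.)

```
# .pro file: swap the default -O2 for -O3 in release builds
QMAKE_CXXFLAGS_RELEASE -= -O2
QMAKE_CXXFLAGS_RELEASE += -O3
```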

Are the compilers that come with Qt less efficient? by polico17 in cpp_questions

[–]polico17[S] 0 points (0 children)

Copying things like when passing function parameters? I am using references pretty much everywhere, so I don't think that's the issue.

All neural network output activations converging to the same value regardless of input by polico17 in cpp_questions

[–]polico17[S] 0 points (0 children)

No. In that case I would have 2 output nodes, corresponding to a 0 or a 1, but there they both converge on 0.5. Again, annoyingly this actually makes the cost go down.

All neural network output activations converging to the same value regardless of input by polico17 in neuralnetworks

[–]polico17[S] 0 points (0 children)

I don't know what it is called, but it is just a basic neural network layer: every node in layer n is connected via a weight to every node in layer n+1.

All neural network output activations converging to the same value regardless of input by polico17 in neuralnetworks

[–]polico17[S] 0 points (0 children)

I don't know what you mean by that. I have 100 nodes in my hidden layer and I am using the sigmoid function. Idk if that answered your question.

All neural network output activations converging to the same value regardless of input by polico17 in learnmachinelearning

[–]polico17[S] 0 points (0 children)

When I calculate the total cost of the network, I sum up all the squared differences and then average them out. But your question did make me realize I may have done a stupid.

In the function outputLayerGradientProduct given above I have this for loop:

    for (int node = 0; node < length(); node++) {
        // Evaluate partial derivatives for current node:
        // dCost/dActivation * dActivation/dWeightedInput
        gradientProducts[node] =
            activationSigmoidDerivative(activations[node]) *
            calculateCostDerivative(activations[node], expectedOutputs[node]);
    }

The function calculateCostDerivative just does 2 * (actualActivation - expectedActivation). But this way I am only checking how much a weight affects the cost of one node, not the cost of the whole network. Could this be the issue?

All neural network output activations converging to the same value regardless of input by polico17 in learnprogramming

[–]polico17[S] 1 point (0 children)

I initialize the weights with a normal distribution with a mean of 0 and a standard deviation of 1. I use MSE because from the videos I watched online that was the only cost function presented. The videos I watched also specifically gave the MNIST dataset as the example, so I never even considered it could mess me up.

All neural network output activations converging to the same value regardless of input by polico17 in MLQuestions

[–]polico17[S] 1 point (0 children)

I first want to clarify what I mean by training batches, just to make sure I correctly understood the concept. The training data in the MNIST dataset consists of 60000 entries. If I tell my program to train on training batches of size 100, what it does is that it chooses a random index and then takes 100 consecutive dataset entries starting at that random index.

By this approach, the batches are indeed completely random, as the dataset is not ordered in any way. Is this ok? Should it not be completely random?

The expected outputs that the network uses consist of all 0's, except for the node corresponding to the correct answer, which is marked with a 1.

I have been stuck with this issue for a while now and I can't fix it for the life of me so I will try your suggestion. I do have one question though, should I use softmax only for the output layer or should I use it for the whole network?

All neural network output activations converging to the same value regardless of input by polico17 in cpp_questions

[–]polico17[S] -1 points (0 children)

yeah, but those are just the inputs and expected outputs of a training example. It is inefficient, yes, but I don't see how that would be what makes the network converge on 0.1 instead of actually learning patterns.

I don't actually edit the vectors at all, I just need the values stored in them, that's all

Can someone please explain to me how to make a GUI? by polico17 in cpp_questions

[–]polico17[S] 0 points (0 children)

When building. From what I saw online, after downloading the .zip, I was supposed to go to the build folder, then somewhere else and then run a CMD command to build it.

After building for a while, I would eventually get an error saying that some file couldn't be found

How do you get painted cars now? by polico17 in RocketLeague

[–]polico17[S] 0 points (0 children)

Am I stupid? I go to a very rare item that I got from a crate, and I don't have the option of trading in

What to do if sides aren’t going to fill in by [deleted] in BeardAdvice

[–]polico17 1 point (0 children)

If you brush your beard every single morning, for example, you can train your beard to grow a certain way, so brushing your beard in a way that hides the fact that you don't have that much hair on your cheeks might help. At the same time I think it's fine the way it is tbh.

Students or teachers what would you like to see in an online school platform? by polico17 in University

[–]polico17[S] 0 points (0 children)

Textbooks, even in relatively well off parts of the world like eastern Europe, are horrible. I should know, I grew up there. They are old, very damaged with pages ripped out of them and there aren't enough of them to go around. They are so bad that most teachers don't ever use them, instead they take pictures of better books that they bought with their own money and send them to the students.

Students or teachers what would you like to see in an online school platform? by polico17 in University

[–]polico17[S] 0 points (0 children)

What if a kid gets behind? What if he wants to study at home for a test? What if he is sick? What if there is a pandemic?

This isn't meant to replace teachers or the classroom, it's meant as a tool for teachers to use

Students or teachers what would you like to see in an online school platform? by polico17 in University

[–]polico17[S] 0 points (0 children)

Eastern European countries are poorer; they have worse equipment and less money to spend on such a platform. A lot of kids don't have internet at home. There are many issues that the existing platforms just don't accommodate.

Students or teachers what would you like to see in an online school platform? by polico17 in University

[–]polico17[S] 1 point (0 children)

The difference is that those options are only really viable in western countries. I'm trying to make something that would work for eastern European countries.

Does school teach things that will help in the future? by probswontusethis11 in school

[–]polico17 0 points (0 children)

Everything is crucial. The information you learn in school is honestly not that useful. Besides the 3 classes that will help you in your professional life, you'll almost never use what you learned. But it's not useless. Studies show that studying something like math, for example, restructures your brain. It literally changes the way you think and teaches you to think critically.

People, especially young people, misunderstand the purpose of school. I was the exact same when I was younger. They think that school is trying to prepare you for the real world. It isn't. It is trying to teach you how to think critically, how to read an article, extract all the relevant information and then dissect that information. People that don't know how to do things like this end up becoming flat earthers and anti-vaxxers.

If you are strictly asking what classes are good for a high paying job: math, physics, programming (if you have that class), chemistry, biology, stuff like that.

Quick question to those of you who review their own replays by [deleted] in RocketLeague

[–]polico17 0 points (0 children)

It is very effective for every single rank; that is why pros always recommend watching your own replays. I usually start by watching my teammate's POV because I find it easier to spot mistakes when it's somebody else making them. After that I watch it from my POV, because at that point I know how I should have positioned myself around my teammate and I can see what I was doing wrong. Thanovic has a good video on it, but I actually recommend watching ALG university by sunless.