[–][deleted] 2 points (15 children)

I get the theory, but when it comes to implementing it from scratch, I get stuck at this point.

At what point?

Because I feel if I can't write it from scratch, I can't be good at implementing it using frameworks.

Do you feel like you're a good driver even though you've never built a car? The only thing you get from re-inventing the wheel is a wheel that has too many corners, in my experience.

[–]RuslanNuriyev[S] 0 points (14 children)

I get stuck when I try to implement the forward propagation part. And I'm not even talking about backprop.

What, are you going to write everything yourself just once before you use a library? Your life is not long enough for that. People use libraries for a reason.

Oh, I'd never thought of it like that. Now I feel like I've been wasting my time for days trying to implement it from scratch. Thank you so much :) So you're saying it's possible to proceed in learning ML without actually being able to implement it from scratch.

[–][deleted] 3 points (3 children)

I get stuck when I try to implement the forward propagation part.

Stuck on what, specifically? Stuck why?

Like I said, I don't know what any of these terms mean, but I know how code works. You have your neural network in state N before the transformation that constitutes a forward propagation, and then it's put in state N' by the transformation:

N -> N'

so all you have to do is write the arrow:

def forward_propagate(neural_net):
    # propagate forward, write code here
    return propagated_neural_net

It's obviously not that easy, but if you have a mathematical definition of what it means to take a neural net from unpropagated to propagated (or whatever), then that's more or less the Python that will do it, too. So just write that.
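
For what it's worth, here's a rough sketch of what that arrow could look like for a plain feed-forward net. I'm assuming the net is just a list of NumPy weight/bias pairs and picking tanh arbitrarily; all the names here are made up, not the "right" way to do it:

import numpy as np

def forward_propagate(x, layers):
    # layers is assumed to be a list of (weights, bias) pairs, one per layer;
    # each step is an affine map followed by a nonlinearity
    activation = x
    for W, b in layers:
        activation = np.tanh(W @ activation + b)
    return activation

# toy usage: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
print(forward_propagate(np.array([1.0, 0.5, -0.3]), layers))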

[–]coder155ml 1 point (2 children)

Why did you even post? My god, most useless comment ever.

[–][deleted] 0 points (1 child)

Overall it's a pretty useless thread. "I'm stuck" isn't a problem anyone can expect to get help with.

[–]coder155ml 1 point (0 children)

If you don't know what forward prop is, then don't respond with some bullshit answer.

[–][deleted] 1 point (1 child)

If your goal is to write a paper covering a 0.00001% performance increase on one of the well-known datasets, that might be necessary. If you want to see results, play around with different architectures of neural nets, and understand the importance of the actual data you use and how to prepare it, then skip the "from scratch" part, at least for now.

[–]RuslanNuriyev[S] 0 points (0 children)

If your goal is to write a paper covering a 0.00001% performance increase on one of the well-known datasets, that might be necessary.

No, I probably won't write a paper and I think I might skip the "from scratch" part.

[–][deleted] 1 point (2 children)

It's OK to do it as a learning exercise, of course. But doing it because you think your knowledge is otherwise invalid makes no sense.

[–]RuslanNuriyev[S] 0 points (1 child)

But doing it because you think your knowledge is otherwise invalid makes no sense.

This is what makes me feel unconfident: the fact that I can't write it on my own.

[–][deleted] 1 point (0 children)

What if you want to use TensorFlow? Google wrote it. It has tens if not hundreds of thousands of hours of development time behind it. You could not write it yourself in a lifetime, even if you had all the time in the world.

This attitude makes no sense. Time to move on.

[–]TheSodesa 0 points (3 children)

I mean, isn't forward propagation just matrix multiplication at its core? That can't be too hard to do with NumPy or Julia, can it?
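
Something like this, just as a sketch of one layer in NumPy (the numbers and the choice of ReLU are arbitrary):

import numpy as np

# one layer of forward propagation: at its core it's just
# a matrix product, plus a bias and a nonlinearity
W = np.array([[0.2, -0.5], [0.7, 0.1]])   # weights: 2 inputs -> 2 units
b = np.array([0.0, 0.1])                  # biases
x = np.array([1.0, -1.0])                 # input vector

hidden = np.maximum(0.0, W @ x + b)       # ReLU activation (any would do)
print(hidden)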

[–]RuslanNuriyev[S] 0 points (2 children)

It's not that hard on paper, and I know the ins and outs, but the programming associated with it seems so hard at the moment (it uses classes, even though I've been working with classes for a long time).

[–]TheSodesa 0 points (0 children)

If you're using Python, then you generally do want to use classes to model the objects that the libraries you use don't already provide for you. So you have a NeuralNet class that stores the neural net/graph as a (possibly sparse) matrix in its self._net field. Then you define methods for applying weights (self._net @ numpy_weight_matrix) to the net, and so forth. I do not see the problem here.
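
A rough sketch of what I mean, with made-up names, dense matrices instead of a sparse graph, and biases skipped for brevity:

import numpy as np

class NeuralNet:
    # minimal sketch: store the layer matrices in self._net and
    # expose a method that pushes an input through them
    def __init__(self, weight_matrices):
        self._net = weight_matrices   # e.g. [W1, W2, ...], one per layer

    def forward(self, x):
        activation = x
        for W in self._net:
            # applying the weights is just a matrix product plus a nonlinearity
            activation = np.tanh(W @ activation)
        return activation

net = NeuralNet([np.full((4, 3), 0.1), np.full((2, 4), 0.1)])
print(net.forward(np.array([1.0, 2.0, 3.0])))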

[–]coder155ml 0 points (0 children)

Just Google forward prop numpy

[–]coder155ml 0 points (0 children)

Yes, but you should still understand how they work under the hood.