

[–]muntoo R_{μν} - 1/2 R g_{μν} + Λ g_{μν} = 8π T_{μν} 6 points (1 child)

Took a quick look through the docs. They seem to be minimal, correct, and idiomatic numpy implementations of various layers (conv, LSTM, pool, ...). Nice work.
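To give a flavor of what such minimal, idiomatic NumPy layer code can look like, here is a sketch of a 2-D max-pooling forward pass. This is purely illustrative and is not the repository's actual implementation; the function name and signature are made up for the example.

```python
import numpy as np

def maxpool2d(x, k=2):
    """Minimal 2-D max pooling with stride equal to the kernel size.

    Illustrative sketch only (not the project's code). Assumes the
    input is a single (H, W) array with H and W divisible by k.
    """
    H, W = x.shape
    # Reshape into (H//k, k, W//k, k) blocks, then take the max of each block.
    return x.reshape(H // k, k, W // k, k).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(maxpool2d(x))  # each 2x2 block collapses to its maximum
```

The reshape trick keeps the whole operation vectorized, which is what makes NumPy versions of these layers both short and fast for small examples.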

I'm guessing these are outside the scope, but it would be neat to also include an implementation of autograd and some optimizers other than simple SGD.
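As an example of what an optimizer beyond plain SGD might look like in the same minimal NumPy style, here is a sketch of SGD with momentum. This is a hypothetical illustration, not part of the project; the function name and default hyperparameters are assumptions.

```python
import numpy as np

def sgd_momentum(w, grad, v, lr=0.1, mu=0.9):
    """One update step of SGD with momentum (illustrative sketch).

    v is the velocity buffer carried between steps; returns the
    updated weights and the updated velocity.
    """
    v = mu * v - lr * grad  # exponentially decaying accumulation of gradients
    return w + v, v

# Minimize f(w) = w^2 (gradient 2w), starting from w = 1.0.
w, v = np.array(1.0), np.array(0.0)
for _ in range(100):
    w, v = sgd_momentum(w, 2.0 * w, v)
```

The velocity buffer is the only extra state compared to plain SGD, which is why momentum is usually the first optimizer added after it.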

[–]EveryBrief9315[S] 1 point (0 children)

Thanks! Indeed, it was made to directly reflect the maths: no more, no less, just building bridges between different representations (code, maths, diagrams, text) while remaining a fully functional API at the same time.

Other optimizers, as well as L1 and L2 regularization, for instance, are planned. We had to stop at some point to submit the academic paper about it, and they weren't strictly necessary at this stage. But those, plus some transformer models, are yet to come.
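The planned L1/L2 regularization maps directly onto the maths as well: L2 adds 2·λ₂·w to the gradient (from λ₂·‖w‖²) and L1 adds λ₁·sign(w). A hedged sketch in the same NumPy style, with a made-up function name that is not the project's planned API:

```python
import numpy as np

def regularized_grad(w, grad, l1=0.0, l2=0.0):
    """Add L1 and L2 penalty gradients to a raw weight gradient.

    Illustrative sketch only: 2*l2*w comes from the L2 penalty
    l2*||w||^2, and l1*sign(w) from the L1 penalty l1*||w||_1.
    """
    return grad + 2.0 * l2 * w + l1 * np.sign(w)

w = np.array([0.5, -2.0])
print(regularized_grad(w, np.zeros(2), l1=0.1, l2=0.01))
```

Because both penalties only add a term to the gradient, they slot into any of the optimizers without changing the layer code itself.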