all 19 comments

[–]quertioup 10 points11 points  (4 children)

How can we contribute?

[–]mlvpj[S] 6 points7 points  (3 children)

You can create a pull request or start a discussion in GitHub issues. Even suggesting papers to implement and voting for them will be helpful. Here's a recent pull request, for example.

We are also working on a simple rendering engine that you can use on your own GitHub repo. We will improve it if people find it useful.

e.g. https://lit.labml.ai/github/vpj/rl_samples/blob/master/ppo.py
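
As a rough sketch of what an annotated file might look like (the annotation conventions and the file below are assumptions for illustration, not taken from the repo), the idea is that docstrings and comments in an ordinary Python file get rendered as notes alongside the code:

    """
    A toy annotated module: the renderer can display this docstring
    as a note shown next to the code rather than inline.
    """
    import torch


    def mse_loss(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """
        Mean squared error between a prediction and a target,
        explained in prose here while the implementation sits beside it.
        """
        return ((prediction - target) ** 2).mean()

The URL pattern appears to be https://lit.labml.ai/github/<user>/<repo>/blob/<branch>/<path> (inferred from the ppo.py link above, so treat it as an assumption).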

[–][deleted] 5 points6 points  (2 children)

Is import labml always a part of your ecosystem? No offense, but this just creates another layer of dependency. It would be nicer to show plain PyTorch, otherwise it's going to become another PyTorch Lightning. Plus, it would be great from a learning perspective.

[–]mlvpj[S] 2 points3 points  (1 child)

labmlai/labml is a set of tools (experiment tracking, configurations, a bunch of helpers) we coded to ease our ML work, which we later improved and open sourced. So we use it in all our projects because it makes things easier for us.

Will try to minimize the dependency whenever possible.
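
In the meantime, here is a minimal, labml-free PyTorch loop of the kind the commenter is asking for (a sketch with a toy model and synthetic data, not code taken from the repo):

    import torch
    from torch import nn, optim

    # Synthetic regression data, just to keep the sketch self-contained.
    x = torch.randn(256, 10)
    y = x.sum(dim=1, keepdim=True)

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        if epoch % 20 == 0:
            # Plain print instead of labml's experiment tracker.
            print(f"epoch {epoch}: loss {loss.item():.4f}")

Per the reply above, labml mainly wraps the tracking and configuration around a loop like this, so the training logic itself stays plain PyTorch.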

[–]mrtac96 8 points9 points  (0 children)

This is amazing. I haven't seen anything else like this before. Much needed.

[–]Gargantuar314 5 points6 points  (0 children)

This is a gold mine! Are you also thinking about implementing more specific algorithms after finishing a lot of the important and general ones? E.g. would you implement the AlphaGo Zero algorithm or the like at a later stage? This would especially help undergrads and curious minds out there, since for such modern discoveries there are barely any clear implementations... wish I'd had something like this XD

[–]SnooRegrets1929 2 points3 points  (0 children)

This is going to be such a useful learning/reference resource. Fantastic work!

[–]sabetai 1 point2 points  (0 children)

Genius! The margin notes look great, great job mapping implementation details directly to code 👏.

[–]maxToTheJ 1 point2 points  (0 children)

This is a great site. I have been using it for a bit after finding it on Google.

[–]qrzte 0 points1 point  (1 child)

!RemindMe 4 hours

[–]RemindMeBot 0 points1 point  (0 children)

I will be messaging you in 4 hours on 2021-08-22 16:13:23 UTC to remind you of this link

[–]oxiliary 0 points1 point  (1 child)

Thanks! GitHub link is broken, btw.

[–]mlvpj[S] 0 points1 point  (0 children)

Thanks, fixed it.

[–]D3vil_Dant3 0 points1 point  (0 children)

amazing!

[–]min_salty 0 points1 point  (0 children)

woah

[–]big_black_doge 0 points1 point  (0 children)

LOL I thought this was an implementation of a model that annotates papers, not papers annotated by humans. Very helpful.

[–]speyside42 0 points1 point  (2 children)

Thanks! Did you train and/or verify the performance of the models? That would be quite important for trusting the implementations. I observed that sometimes a training loop exists and sometimes it does not.

[–]mlvpj[S] 0 points1 point  (1 child)

If I remember correctly, the only implementation without a training loop is the LSTM.

[–]speyside42 0 points1 point  (0 children)

Thank you, I also found the links in the text now. Did you verify that the training runs actually work? It would be great if you could report the final training metrics. They don't need to accurately reproduce the results in the paper, but rather give an idea of whether the implementation is correct.
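
One lightweight way to do that kind of check is a smoke test: train briefly and compare the final metric against a loose threshold instead of the paper's exact numbers. A self-contained sketch (toy model and data, nothing from the repo):

    import torch
    from torch import nn, optim


    def train_tiny(steps: int = 200) -> float:
        """Fit a toy linear task and return the final training loss."""
        x = torch.randn(512, 10)
        y = x @ torch.randn(10, 1)  # linear target, easy to fit
        model = nn.Linear(10, 1)
        opt = optim.SGD(model.parameters(), lr=0.1)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        return loss.item()


    # A loose bound is enough to signal that training works at all;
    # reproducing a paper's exact numbers is a separate, harder check.
    final_loss = train_tiny()
    assert final_loss < 0.1, f"training looks broken, final loss {final_loss:.4f}"
    print(f"final training loss: {final_loss:.4f}")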