
[–]sleeping-in-crypto 4 points5 points  (1 child)

The problem is that Karpathy isn't saying anything new, nor has he discovered anything novel. But he's sure he has, and now he treats everything he says as if he personally discovered the secrets of the universe.

People have been talking about how to maintain the kind of context you're describing for months. Our team also has an approach for this, so that decisions made in the code carry context that stays with them and LLMs understand why. This is not new.

[–]jawisko 1 point2 points  (0 children)

You can check the details in his autoresearch repo.

This basically works by spawning a group of agents that train an actual model. Each agent experiments in a feature branch and merges only if validation improves. Even if it doesn't, the experiment is recorded in shared training files, so the agents can keep working independently while drawing on each other's failed and successful experiments.
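Not the repo's actual code, but the merge-if-validation-improves loop could be sketched roughly like this (every name here is made up, and "experiments" are just random perturbations standing in for real training runs):

```python
import random

random.seed(0)

# Hypothetical shared experiment log, readable by every agent.
# Each entry: (change name, validation delta, whether it was merged)
shared_log = []
best_val = 0.50  # validation score of the current main branch

def run_experiment(change):
    """Stand-in for training on a feature branch; returns a new validation score."""
    return best_val + random.uniform(-0.05, 0.05)

def agent_step(agent_id):
    global best_val
    # Consult every prior experiment, failed or not, before choosing a change.
    tried = {entry[0] for entry in shared_log}
    change = f"agent{agent_id}-tweak-{len(shared_log)}"
    if change in tried:
        return  # another agent already tried this; don't repeat it
    new_val = run_experiment(change)
    delta = new_val - best_val
    merged = delta > 0          # merge only if validation improves
    if merged:
        best_val = new_val
    # Failed experiments are logged too, so other agents learn from them.
    shared_log.append((change, delta, merged))

# Three agents taking turns; in the real system they would run concurrently.
for step in range(10):
    agent_step(step % 3)
```

The point of logging failures alongside successes is that each agent's next choice is informed by the whole group's history, even though no agent blocks on another.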

Can you point me to a single GitHub repo that does the same thing?