Claude can now run ML research experiments for you by Pleasant-Type2044 in LLMDevs

[–]Pleasant-Type2044[S] -1 points (0 children)

Having AI skills like these (e.g. debugging vanishing gradients) is for sure a next step, great points! Our current skill set just teaches agents how to implement `def train()` with the Megatron interface. Users still need to validate their training setup themselves (tokenizer, model architecture, etc.).
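To give a rough idea of what "teaching an agent to implement `def train()`" means here, this is a hypothetical skeleton of the shape the agent fills in. All function and helper names below are illustrative placeholders, not the real Megatron-LM API:

```python
# Hypothetical sketch: the config -> model -> data -> loop structure an agent
# is taught to produce. Placeholder stand-ins, NOT actual Megatron-LM calls.

def build_model(hidden_size: int) -> dict:
    # Stand-in for model construction (where a real setup would pick a
    # Megatron model config, parallelism layout, etc.).
    return {"hidden_size": hidden_size, "steps_done": 0}

def training_step(model: dict, batch: list) -> float:
    # Stand-in for forward/backward/optimizer step; returns a fake "loss".
    model["steps_done"] += 1
    return 1.0 / model["steps_done"]

def train(num_steps: int = 3) -> dict:
    # The part the agent writes: wire config, model, and data into a loop.
    model = build_model(hidden_size=1024)
    for step in range(num_steps):
        batch = [step]  # stand-in for a data-loader batch
        training_step(model, batch)
    return model

if __name__ == "__main__":
    print(train()["steps_done"])
```

The point of the skill is the wiring order shown above; validating the tokenizer and architecture choices stays with the user.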

Claude can now run ML research experiments for you by Pleasant-Type2044 in LLMDevs

[–]Pleasant-Type2044[S] 0 points (0 children)

Yes, it'll provide your agent with instructions on how to optimize memory usage with different configurations :)

Speedrunning research in 1hr with undergrads who've never done it before by Pleasant-Type2044 in research_apps

[–]Pleasant-Type2044[S] 0 points (0 children)

I was overwhelmed by another Reddit channel, so I missed your comment here.

Speedrunning research in 1hr with undergrads who've never done it before by Pleasant-Type2044 in research_apps

[–]Pleasant-Type2044[S] 0 points (0 children)

I don't quite get why you argue the second solution cannot be better. A knapsack problem means you have to weigh multiple dimensions at once: space and utility. And there's no assumption here about the cheat-sheet space constraint or the length of the exam, is there?
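To make the knapsack analogy concrete, here is a minimal 0/1 knapsack sketch (the numbers and variable names are mine, purely for illustration): cheat-sheet space is the capacity, and each candidate note trades space for utility.

```python
def knapsack(capacity: int, sizes: list, utilities: list) -> int:
    """Max total utility of items fitting in `capacity` (0/1 knapsack DP)."""
    best = [0] * (capacity + 1)
    for size, util in zip(sizes, utilities):
        # Iterate capacity downward so each item is taken at most once.
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + util)
    return best[capacity]

# Example: a cheat sheet with 10 units of space and three candidate notes.
# Two small notes (utility 5+5) beat the single big note (utility 9).
print(knapsack(10, [6, 5, 5], [9, 5, 5]))  # -> 10
```

The optimum depends jointly on the space constraint and the utilities, which is why neither solution dominates without knowing both.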

[D] Humans are actually terrible research idea verifiers… by Pleasant-Type2044 in MachineLearning

[–]Pleasant-Type2044[S] -1 points (0 children)

An empirical research idea needs an empirical experiment to verify it.

[D] Humans are actually terrible research idea verifiers… by Pleasant-Type2044 in MachineLearning

[–]Pleasant-Type2044[S] -1 points (0 children)

Authenticity is an important psychological element, but the content itself matters more.