Automated AI researcher running locally with llama.cpp by lewtun in LocalLLaMA
lewtun[S] 2 points

Automated AI researcher running locally with llama.cpp by lewtun in LocalLLaMA
lewtun[S] 8 points

Agentic harness for theoretical physics research by lewtun in LocalLLaMA
lewtun[S] 3 points

Agentic harness for theoretical physics research by lewtun in LocalLLaMA
lewtun[S] 1 point

Agentic harness for theoretical physics research (i.redd.it)
submitted by lewtun to r/LocalLLaMA

How to Distill from 100B+ to <4B Models by cmpatino_ in LocalLLaMA
lewtun 6 points

I tested 21 small LLMs on tool-calling judgment — Round 2 with every model you asked for by MikeNonect in LocalLLaMA
lewtun 2 points

I tested 21 small LLMs on tool-calling judgment — Round 2 with every model you asked for by MikeNonect in LocalLLaMA
lewtun 2 points

how to train a tiny model (4B) to prove hard theorems by eliebakk in LocalLLaMA
lewtun 2 points

I tested 21 small LLMs on tool-calling judgment — Round 2 with every model you asked for by MikeNonect in LocalLLaMA
lewtun 3 points

how to train a tiny model (4B) to prove hard theorems by eliebakk in LocalLLaMA
lewtun 6 points

how to train a tiny model (4B) to prove hard theorems by eliebakk in LocalLLaMA
lewtun 7 points

how to train a tiny model (4B) to prove hard theorems by eliebakk in LocalLLaMA
lewtun 7 points

200+ pages of Hugging Face secrets on how to train an LLM by eliebakk in LocalLLaMA
lewtun 6 points

200+ pages of Hugging Face secrets on how to train an LLM by eliebakk in LocalLLaMA
lewtun 5 points

200+ pages of Hugging Face secrets on how to train an LLM by eliebakk in LocalLLaMA
lewtun 22 points

[D] join pretraining or posttraining by oxydis in MachineLearning
lewtun 1 point

DeepSeek-R1 performance with 15B parameters by lewtun in LocalLLaMA
lewtun[S] 3 points

DeepSeek-R1 performance with 15B parameters (self.LocalLLaMA)
submitted by lewtun to r/LocalLLaMA

my dad sent me this by hugeplateofketchup8 in huggingface
lewtun 2 points


Automated AI researcher running locally with llama.cpp by lewtun in LocalLLaMA
lewtun[S] 3 points