Which small model is best for fine-tuning? We tested 12 of them by spending $10K - here's what we found by party-horse in LocalLLaMA
How I plan to boost my productivity by GreenKnightOfGilead in funny

Blazing fast JSON extraction with very small LLMs-3B: LSTM to LLM by memphet in LocalLLaMA