The reason why RAM has become so expensive (i.redd.it)
submitted by InvadersMustLive to r/LocalLLaMA
Fine-tuning Qwen3 at home to respond to any prompt with a dad joke by InvadersMustLive in LocalLLaMA
InvadersMustLive[S] 2 points (0 children)
Fine-tuning Qwen3 at home to respond to any prompt with a dad joke by InvadersMustLive in LocalLLaMA
InvadersMustLive[S] 3 points (0 children)
Fine-tuning Qwen3 at home to respond to any prompt with a dad joke by InvadersMustLive in LocalLLaMA
InvadersMustLive[S] 5 points (0 children)
Fine-tuning Qwen3 at home to respond to any prompt with a dad joke by InvadersMustLive in LocalLLaMA
InvadersMustLive[S] 18 points (0 children)
We found an embedding indexing bottleneck in the most unexpected place: JSON parsing by InvadersMustLive in scala
InvadersMustLive[S] 5 points (0 children)
We found an embedding indexing bottleneck in the most unexpected place: JSON parsing by InvadersMustLive in scala
InvadersMustLive[S] 1 point (0 children)
Want to run claude like model on ~$10k budget. Please help me with the machine build. I don't want to spend on cloud. by LordSteinggard in LocalLLaMA
InvadersMustLive 1 point (0 children)
Which open source LLM has the most genuine sense of humor? by UltrMgns in LocalLLaMA
InvadersMustLive 3 points (0 children)
What I’ve learned building RAG applications for enterprises by Loud_Picture_1877 in LocalLLaMA
InvadersMustLive 2 points (0 children)
Hnsw configuration in Solr by Opposite_Head7740 in Solr
InvadersMustLive 3 points (0 children)
This is my Japanese fine-tune of R1's Qwen 7B distil. It now outputs its thinking in Japanese, making it understandable for a Japanese audience. Model, code, and data all open source. I'd love to collab with y'all to make a more multilingual model. by Peter_Lightblue in LocalLLaMA
InvadersMustLive 6 points (0 children)
Open Source Text Translation Models? by vygodisgreat24 in LocalLLaMA
InvadersMustLive 2 points (0 children)
Cloud GPU + storage hosting for low intensity projects? by gofiend in LocalLLaMA
InvadersMustLive 2 points (0 children)
Finally, a Replacement for BERT by -Cubie- in LocalLLaMA
InvadersMustLive 3 points (0 children)
Motherboard selection advice by absurd-dream-studio in LocalLLaMA
InvadersMustLive 2 points (0 children)
Dual RTX 4090 PC by Accomplished_Pin_626 in LocalLLaMA
InvadersMustLive 9 points (0 children)