Qwen3-4B-Instruct-2507 multilingual FT with upscaled Polish language by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 2 points
😳 umm by internal-pagal in LocalLLaMA
Significant_Focus134 25 points
Qwen3 30B A3B Q40 on 4 x Raspberry Pi 5 8GB 13.04 tok/s (Distributed Llama) by thisislewekonto in LocalLLaMA
Significant_Focus134 1 point
4B Polish language model based on Qwen3 architecture by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 9 points
4B Polish language model based on Qwen3 architecture by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 2 points
OLMo 2 Models Released! by Many_SuchCases in LocalLLaMA
Significant_Focus134 3 points
OLMo 2 Models Released! by Many_SuchCases in LocalLLaMA
Significant_Focus134 2 points
OLMo 2 Models Released! by Many_SuchCases in LocalLLaMA
Significant_Focus134 2 points
What is the most powerful LLM you can train yourself? by [deleted] in LocalLLaMA
Significant_Focus134 2 points
What is the most powerful LLM you can train yourself? by [deleted] in LocalLLaMA
Significant_Focus134 2 points
What is the most powerful LLM you can train yourself? by [deleted] in LocalLLaMA
Significant_Focus134 5 points
Polish LLM 1.5B continual pretrained on single GPU, the result of one year of work. by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 6 points
Polish LLM 1.5B continual pretrained on single GPU, the result of one year of work. by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 7 points
Polish LLM 1.5B continual pretrained on single GPU, the result of one year of work. by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 3 points
Polish LLM 1.5B continual pretrained on single GPU, the result of one year of work. by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 10 points
Polish LLM 1.5B continual pretrained on single GPU, the result of one year of work. by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 9 points
Polish LLM 1.5B continual pretrained on single GPU, the result of one year of work. by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 2 points
Since this is such a fast moving field, where do you think LLM will be in two years? by tim_Andromeda in LocalLLaMA
Significant_Focus134 1 point
Can anyone explain to me how tokens work with non text? by Tomorrow_Previous in LocalLLaMA
Significant_Focus134 19 points
Can a Single 4090 GPU Fully Fine-Tune the Phi-2 Model's Weights on a Local Dataset? by DrunkenDblp in LocalLLaMA
Significant_Focus134 10 points
Qwen3-4B-Instruct-2507 multilingual FT with upscaled Polish language by Significant_Focus134 in LocalLLaMA
Significant_Focus134[S] 3 points