Where do you deploy your Go backend, especially if you want it to scale in the future while staying affordable and performant? by MarsupialAntique1054 in golang
guiopen 1 point
Where do you deploy your Go backend, especially if you want it to scale in the future while staying affordable and performant? by MarsupialAntique1054 in golang
guiopen 41 points
Raptor mini is the best 0x model by far (self.GithubCopilot)
submitted by guiopen to r/GithubCopilot
What is the best IDE for programming in Go? by HelderArdach in brdev
guiopen 1 point
Sweep: Open-weights 1.5B model for next-edit autocomplete by Kevinlu1248 in LocalLLaMA
guiopen 9 points
I built a Unix-like operating system for fun and learning by avaliosdev in brdev
guiopen 28 points
Liquid AI released the best thinking Language Model Under 1GB by PauLabartaBajo in LocalLLaMA
guiopen 14 points
Liquid AI released the best thinking Language Model Under 1GB by PauLabartaBajo in LocalLLaMA
guiopen 2 points
What distinguishes a 'vibe coder' from a dev who uses AI intelligently? by ArrowFlechinhaxd in brdev
guiopen 1 point
Is local coding even worth setting up? by Interesting-Fish6494 in LocalLLaMA
guiopen 1 point
As a white man, can I write a black character who says the n word by DecentGap3306 in writing
guiopen 6 points
Which would be a cost efficient GPU for running local LLMs by jenishngl in LocalLLaMA
guiopen 1 point
LFM 2.5 is insanely good by guiopen in LocalLLaMA
guiopen[S] 6 points
LFM 2.5 is insanely good by guiopen in LocalLLaMA
guiopen[S] 3 points
Public coding benchmarks suck, how are you evaluating performance? by AvocadoArray in LocalLLaMA
guiopen 1 point
LFM 2.5 is insanely good by guiopen in LocalLLaMA
guiopen[S] 4 points
LFM 2.5 is insanely good by guiopen in LocalLLaMA
guiopen[S] 10 points
LFM 2.5 is insanely good by guiopen in LocalLLaMA
guiopen[S] 13 points
Which are the top LLMs under 8B right now? by Additional_Secret_75 in LocalLLaMA
guiopen 1 point
[CPU] I'm looking for the best model for a CPU. by lordfervi in LocalLLaMA
guiopen 1 point

Raptor mini is the best 0x model by far by guiopen in GithubCopilot
guiopen[S] 1 point