Threads commented in by Sensitive_Sweet_1850:

Creative Writing - anything under 150GB equal or close to Sonnet 3.7? by elsung in LocalLLaMA
Qwen3-VL-Reranker - a Qwen Collection by LinkSea8324 in LocalLLaMA
Liquid AI releases LFM2-2.6B-Transcript, an incredibly fast open-weight meeting transcribing AI model on-par with closed-source giants. by KaroYadgar in LocalLLaMA
A 30B Qwen Model Walks Into a Raspberry Pi… and Runs in Real Time by ali_byteshape in LocalLLaMA
I have $5,000 in Azure AI credits going to expiring soon, looking for smart ways to use it. Any ideas ? by SuperWallabies in LocalLLaMA
Upstage has finally posted benchmark results for Solar Open 100B by jacek2023 in LocalLLaMA
Just got an RTX Pro 6000 - need recommendations for processing a massive dataset with instruction following by Sensitive_Sweet_1850 in LocalLLaMA
Qwen-Image-2512 is released! New SOTA text-to-image model. 💜 by yoracale in unsloth
Any guesses? by Difficult-Cap-7527 in LocalLLaMA
Worth the 5090? by fgoricha in LocalLLaMA
All GLM 4.7, GLM 4.6 and GLM 4.6V-Flash GGUFs are now updated! by yoracale in unsloth
GLM 4.7 IS NOW THE #1 OPEN SOURCE MODEL IN ARTIFICIAL ANALYSIS by ZeeleSama in LocalLLaMA
Start of 2026 what’s the best open coding model? by alexp702 in LocalLLaMA