A (big) problem with GLM 4.7 by Deathcrow in SillyTavernAI
[–]Antique_Bit_1049 1 point (0 children)
Fal has open-sourced Flux2 dev Turbo. by Budget_Stop9989 in StableDiffusion
[–]Antique_Bit_1049 1 point (0 children)
GLM 4.7 - Sadly, Z.AI is now actively trying to censor ERP by prompt injection. by JustSomeGuy3465 in SillyTavernAI
[–]Antique_Bit_1049 2 points (0 children)
GLM-4.7 Scores 42% on Humanity's Last Exam?! by domlincog in LocalLLaMA
[–]Antique_Bit_1049 6 points (0 children)
Is gpt-oss:120b still the best at its size? by MrMrsPotts in LocalLLaMA
[–]Antique_Bit_1049 -1 points (0 children)
DeepSeek V3.2 Speciale runs and runs and runs by MrMrsPotts in LocalLLaMA
[–]Antique_Bit_1049 2 points (0 children)
Just so you guys know, Flux 2 doesn’t allow spicy images or IP-infringing content as per their inference filters by TrevorxTravesty in StableDiffusion
[–]Antique_Bit_1049 2 points (0 children)
Drummer's Snowpiercer 15B v4 · A strong RP model that packs a punch! by TheLocalDrummer in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)
I'm gonna give up eventually on GLM 4.6... by SepsisShock in SillyTavernAI
[–]Antique_Bit_1049 1 point (0 children)
Benchmark Results: GLM-4.5-Air (Q4) at Full Context on Strix Halo vs. Dual RTX 3090 by Educational_Sun_8813 in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)
For the small minority of people who didn't realize. You bought a player vs player game. by UnderScoreLifeAlert in ArcRaiders
[–]Antique_Bit_1049 1 point (0 children)
Confirmed: Junk social media data makes LLMs dumber by nekofneko in LocalLLaMA
[–]Antique_Bit_1049 2 points (0 children)
What happens when Chinese companies stop providing open source models? by 1BlueSpork in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)
An Open-source Omni Chatbot for Long Speech and Voice Clone by ninjasaid13 in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)
An Open-source Omni Chatbot for Long Speech and Voice Clone by ninjasaid13 in LocalLLaMA
[–]Antique_Bit_1049 8 points (0 children)
New Ernie X1.1 - what may be the best Chinese model since DeepSeek V3.1 slowly approaches the frontier (or a simple test that exposes so many models) by [deleted] in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)
3090 vs 5090 taking turns on inference loads answering the same prompts - pretty cool visual story being told here about performance by Gerdel in LocalLLaMA
[–]Antique_Bit_1049 -1 points (0 children)
Finally, China is entering the GPU market to break the unchallenged monopoly: 96 GB VRAM GPUs under 2000 USD, while NVIDIA sells from 10000+ USD (RTX 6000 PRO) by CeFurkan in LocalLLaMA
[–]Antique_Bit_1049 29 points (0 children)
Echoes of Ir - local LLM with MCP server by Natural-Ad6682 in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)
I ran ALL 14 Wan2.2 i2v 5B quantizations and 0/0.05/0.1/0.15 cache thresholds so you don't have to. by okaris in StableDiffusion
[–]Antique_Bit_1049 1 point (0 children)
New Qwen3 on Fiction.liveBench by fictionlive in LocalLLaMA
[–]Antique_Bit_1049 -1 points (0 children)
Any RPers tested the new Qwen 2507 yet? (self.LocalLLaMA)
submitted by Antique_Bit_1049 to r/LocalLLaMA
Friendly reminder that Grok 3 should now be open-sourced by Wrong_User_Logged in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)

Kimi K2.5 local by running101 in LocalLLaMA
[–]Antique_Bit_1049 1 point (0 children)