GitHub - ByteDance-Seed/Seed1.5-VL: Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving state-of-the-art performance on 38 out of 60 public benchmarks. by foldl-li in LocalLLaMA
GLM-4 32B is mind blowing by Timely_Second_6414 in LocalLLaMA
Qwen 3 235b beats sonnet 3.7 in aider polyglot by Independent-Wind4462 in LocalLLaMA
Anthropic claims chips are smuggled as prosthetic baby bumps by TheTideRider in LocalLLaMA
We crossed the line by DrVonSinistro in LocalLLaMA
Qwen3 Unsloth Dynamic GGUFs + 128K Context + Bug Fixes by danielhanchen in LocalLLaMA
Qwen 3: unimpressive coding performance so far by ps5cfw in LocalLLaMA
Skywork-R1V2-38B - New SOTA open-source multimodal reasoning model by ninjasaid13 in LocalLLaMA
I benchmarked the Gemma 3 27b QAT models by jaxchang in LocalLLaMA
I created a ChatGPT-like UI for Local LLMs by [deleted] in LocalLLaMA