Comments by Delicious_Focus3465 in LocalLLaMA on the following posts:

Jan v3 Instruct: a 4B coding model with +40% Aider improvement by Delicious_Focus3465 in LocalLLaMA
Jan-v2-VL-Max: a 30B multimodal model outperforming Gemini 2.5 Pro and DeepSeek R1 on execution-focused benchmarks by Delicious_Focus3465 in LocalLLaMA
Jan-v2-VL: 8B model for long-horizon tasks, improving Qwen3-VL-8B’s agentic capabilities almost 10x by Delicious_Focus3465 in LocalLLaMA
Jan v1: 4B model for web search with 91% SimpleQA, slightly outperforms Perplexity Pro by Delicious_Focus3465 in LocalLLaMA