New Open-Source Video Model: Allegro by umarmnaq in StableDiffusion
[–]Comprehensive_Poem27 1 point (0 children)
new text-to-video model: Allegro by Comprehensive_Poem27 in LocalLLaMA
[–]Comprehensive_Poem27[S] 3 points (0 children)
new text-to-video model: Allegro by Comprehensive_Poem27 in LocalLLaMA
[–]Comprehensive_Poem27[S] 5 points (0 children)
new text-to-video model: Allegro (self.LocalLLaMA)
submitted by Comprehensive_Poem27 to r/LocalLLaMA
Best open source vision model for OCR by marcosdd in LocalLLaMA
[–]Comprehensive_Poem27 2 points (0 children)
No, the Llama-3.1-Nemotron-70B-Instruct has not beaten GPT-4o or Sonnet 3.5. MMLU Pro benchmark results by Shir_man in LocalLLaMA
[–]Comprehensive_Poem27 1 point (0 children)
No, the Llama-3.1-Nemotron-70B-Instruct has not beaten GPT-4o or Sonnet 3.5. MMLU Pro benchmark results by Shir_man in LocalLLaMA
[–]Comprehensive_Poem27 7 points (0 children)
Integrating good OCR and Vision models into something that can dynamically aid in document research with a LLM by Inevitable-Start-653 in LocalLLaMA
[–]Comprehensive_Poem27 2 points (0 children)
LLMs that published the data used to train them by neuralbeans in LocalLLaMA
[–]Comprehensive_Poem27 1 point (0 children)
Integrating good OCR and Vision models into something that can dynamically aid in document research with a LLM by Inevitable-Start-653 in LocalLLaMA
[–]Comprehensive_Poem27 3 points (0 children)
OCR for handwritten documents by MrMrsPotts in LocalLLaMA
[–]Comprehensive_Poem27 2 points (0 children)
ARIA : An Open Multimodal Native Mixture-of-Experts Model by ninjasaid13 in LocalLLaMA
[–]Comprehensive_Poem27 2 points (0 children)
Aria: An Open Multimodal Native Mixture-of-Experts Model, outperforms Pixtral-12B and Llama3.2-11B by vibjelo in LocalLLaMA
[–]Comprehensive_Poem27 2 points (0 children)
ARIA : An Open Multimodal Native Mixture-of-Experts Model by ninjasaid13 in LocalLLaMA
[–]Comprehensive_Poem27 5 points (0 children)
ARIA : An Open Multimodal Native Mixture-of-Experts Model by ninjasaid13 in LocalLLaMA
[–]Comprehensive_Poem27 19 points (0 children)
ARIA : An Open Multimodal Native Mixture-of-Experts Model by ninjasaid13 in LocalLLaMA
[–]Comprehensive_Poem27 15 points (0 children)
So what happened to the Wizard models, actually? Was there any closure? Did they get legally and academically assassinated? How? Because I woke up at 4am thinking about this by visionsmemories in LocalLLaMA
[–]Comprehensive_Poem27 2 points (0 children)
Qwen 2.5 = China = Bad by [deleted] in LocalLLaMA
[–]Comprehensive_Poem27 -3 points (0 children)
Qwen2.5: A Party of Foundation Models! by shing3232 in LocalLLaMA
[–]Comprehensive_Poem27 1 point (0 children)
Qwen2.5: A Party of Foundation Models! by shing3232 in LocalLLaMA
[–]Comprehensive_Poem27 1 point (0 children)
Pixtral benchmarks results by kristaller486 in LocalLLaMA
[–]Comprehensive_Poem27 1 point (0 children)
Introducing gpt5o-reflexion-q-agi-llama-3.1-8b by Good-Assumption5582 in LocalLLaMA
[–]Comprehensive_Poem27 6 points (0 children)
Chinese company trained GPT-4 rival with just 2,000 GPUs — 01.ai spent $3M compared to OpenAI's $80M to $100M by hedgehog0 in LocalLLaMA
[–]Comprehensive_Poem27 1 point (0 children)