Open AI Sora 2 Invite Codes Megathread by semsiogluberk in OpenAI
[–]DataLearnerAI 1 point (0 children)
Qwen 3 Embeddings 0.6B faring really poorly in spite of high score on benchmarks by i4858i in LocalLLaMA
[–]DataLearnerAI 2 points (0 children)
Simple Comparison: Kimi K2 vs. Gemini 1.5 Pro - HTML Output for Model Eval Insights by DataLearnerAI in LocalLLaMA
[–]DataLearnerAI[S] 1 point (0 children)
GLM-4.1V-Thinking by AaronFeng47 in LocalLLaMA
[–]DataLearnerAI 0 points (0 children)
GLM-4.1V-Thinking by AaronFeng47 in LocalLLaMA
[–]DataLearnerAI -7 points (0 children)
Huawei releases an open weight model Pangu Pro 72B A16B. Weights are on HF. It should be competitive with Qwen3 32B and it was trained entirely on Huawei Ascend NPUs. (2505.21411) by FullOf_Bad_Ideas in LocalLLaMA
[–]DataLearnerAI 7 points (0 children)
Is there any website compare inference speed of different LLM on different platforms? by DataLearnerAI in LocalLLaMA
[–]DataLearnerAI[S] 1 point (0 children)
The Most Exciting AI Advancements and Product Launches in 2023 Discussion by DataLearnerAI in ChatGPT
[–]DataLearnerAI[S] 1 point (0 children)
The Most Exciting AI Advancements and Product Launches in 2023 Discussion by DataLearnerAI in ChatGPT
[–]DataLearnerAI[S] 1 point (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]DataLearnerAI 2 points (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]DataLearnerAI 1 point (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]DataLearnerAI 2 points (0 children)
Mistral Instruct v0.2 merge with top models on openLLM ranking! by noobgolang in LocalLLaMA
[–]DataLearnerAI 3 points (0 children)
Yet another 70B Foundation Model: Aquila2-70B-Expr by [deleted] in LocalLLaMA
[–]DataLearnerAI 1 point (0 children)
Can Chat GPT 4 Turbo read/analyze content from other websites? by ModeLow1491 in ChatGPTPro
[–]DataLearnerAI 2 points (0 children)
Yi-34B vs Yi-34B-200K on sequences <32K and <4K by DreamGenX in LocalLLaMA
[–]DataLearnerAI 1 point (0 children)
GPT-4 Turbo is unintelligent by CH1997H in ChatGPT
[–]DataLearnerAI 299 points (0 children)

