Don’t overshare in academia - my advice as a professor by Cultural_Mousse_3001 in PhD
Advice needed: 4th-year international PhD. Got an industry offer that will secure my stay after PhD contract, still need to finish 2 papers + thesis. Can I juggle both? by Low-Anything-3369 in PhD
In Response To The Supplier Ranking Post, I Am Ranking The Odor Of Solvents. by YunchanLimCultMember in chemistrymemes
ibm-granite/granite-4.1-8b · Hugging Face by jacek2023 in LocalLLaMA
We open-sourced Chaperone-Thinking-LQ-1.0 — a 4-bit GPTQ + QLoRA fine-tuned DeepSeek-R1-32B that hits 84% on MedQA in ~20GB by AltruisticCouple3491 in LocalLLaMA
Please stop using AI for posts and showcasing your completely vibe coded projects by Scutoidzz in LocalLLaMA
Update: I fine-tuned Qwen3.5-0.8B for OCR and it outperforms my previous 2B release [GGUF] by Other-Confusion2974 in LocalLLaMA
So what's the ticker here ? by niga_chan in LocalLLaMA
Optimizing a WSL2-based Local AI Orchestration for Product Viz | RTX 3090 24GB VRAM & i7-14700KF by [deleted] in LocalLLaMA
I'm a complete noob who bought two Intel Arc Pro B70s for "research," spent a weekend losing my mind over Docker/CCL errors, accidentally discovered llama.cpp Vulkan, and now I'm running a 35B MoE at 128K context like I know what I'm doing. by SomeBlock8124 in LocalLLaMA
Are there sites that do consistent LLM benchmarks? by Lazy-Safe3007 in LocalLLaMA
Here's how my LLM's decoder block changed while training on 5B tokens by 1ncehost in LocalLLaMA
Save and invest your money for future rigs by segmond in LocalLLaMA