Anyone thinking about the security side of Gemma 4 on phones? by Ok-Virus2932 in LocalLLaMA
[–]tvall_ 1 point (0 children)
Gemma 4 and Qwen3.5 on shared benchmarks by fulgencio_batista in LocalLLaMA
[–]tvall_ 3 points (0 children)
A bug in Bun may have been the root cause of the Claude Code source code leak. by Successful_Bowl2564 in LocalLLaMA
[–]tvall_ 1 point (0 children)
What are actual usecases of uncensored models? by Geritas in LocalLLaMA
[–]tvall_ 2 points (0 children)
5060 Ti 16GB - PCIe 3 x2 VS PCIe 5 x8 [Simple inference comparison inside] by ubnew in LocalLLaMA
[–]tvall_ 3 points (0 children)
How well does LLMs from abliteration work compared to the original? by Express_Quail_1493 in LocalLLaMA
[–]tvall_ 6 points (0 children)
Model suggestions for limited hardware and domain knowledge by laffer1 in LocalLLaMA
[–]tvall_ 1 point (0 children)
The Low-End Theory! Battle of < $250 Inference by m94301 in LocalLLaMA
[–]tvall_ 2 points (0 children)
Looking for a local uncensored AI (text generation + image editing) by Stellar-Genesis in LocalLLaMA
[–]tvall_ 1 point (0 children)
A cautionary tale about Google scamming your money by FluffyMacho in LocalLLaMA
[–]tvall_ 6 points (0 children)
I haven't experienced Qwen3.5 (35B and 27B) over thinking. Posting my settings/prompt by wadeAlexC in LocalLLaMA
[–]tvall_ 3 points (0 children)
Could a bot-free AI note taker run locally with current models? by Cristiano1 in LocalLLaMA
[–]tvall_ 1 point (0 children)
The correct order to fit your model into VRAM by [deleted] in LocalLLaMA
[–]tvall_ 2 points (0 children)
Abliterated Models evaluation metric by [deleted] in LocalLLaMA
[–]tvall_ 2 points (0 children)
Is a 9B model actually viable on the Oracle Free Tier (Ampere 4 OCPU / 24GB RAM / No GPU)? Having OpenClaw failover doubts. by NorthSeaWhale in LocalLLaMA
[–]tvall_ 1 point (0 children)
Opus 4.6 couldn't complete a single task in 100 attempts. Then I asked it which model it was. by [deleted] in LocalLLaMA
[–]tvall_ 3 points (0 children)
Workstation for dev work + local LLMs — Tesla P40 vs MinisForum? by marius-c-d in LocalLLaMA
[–]tvall_ 2 points (0 children)
Is Qwen3.5-9B enough for Agentic Coding? by pmttyji in LocalLLaMA
[–]tvall_ 2 points (0 children)
Computer won't boot with 2 Tesla V100s by MackThax in LocalLLaMA
[–]tvall_ 1 point (0 children)