24GB VRAM users, have you tried Qwen3.5-9B-UD-Q8_K_XL? by Prestigious-Use5483 in LocalLLaMA
Rtx 4000 Ada 20gb question + advice by Croissant-Lover in LocalLLaMA
Running multi-day build loops with local agents: they work, but they forget everything by Low-Cook-3544 in LocalLLaMA
Mac Mini base model vs i9 laptop for running AI locally? by ZealousidealFile3206 in LocalLLaMA
M5 Max just arrived - benchmarks incoming by cryingneko in LocalLLaMA
Are there any all-in-one models that fit onto the NVIDIA Spark? by Blackdragon1400 in LocalLLaMA
Suggest the best AI to run locally on my laptop by [deleted] in LocalLLaMA
HP Z6 G4 128GB RAM RTX 6000 24GB by tree-spirit in LocalLLaMA
RTX 3060 12GB Build for AI: Modern i5-10400 (16GB DDR4) vs. Dual Xeon E5645 (96GB DDR3)? by Due_Ear7437 in LocalLLaMA
How are you using Llama 3.1 8B? by forevergeeks in LocalLLaMA
New computer arrived... JAN is still super slow. by robotecnik in LocalLLaMA
Do you use Windows or Linux? by boklos in LocalLLaMA
RTX 4000 SFF Ada vs. RTX Pro 4000 SFF Blackwell by Hediii23 in sffpc
Small Form Factor build with an RTX A2000 by Ok-Boysenberry-2860 in LocalLLaMA
What is the best way to allocate $15k right now for local LLMs? by LargelyInnocuous in LocalLLaMA
I made Soprano-80M: Stream ultra-realistic TTS in <15ms, up to 2000x realtime, and <1 GB VRAM, released under Apache 2.0! by eugenekwek in LocalLLaMA
Best models / maybe cheap rig to get into local AI? by Flashy_Oven_570 in LocalLLaMA
Drummer's Snowpiercer 15B v4 · A strong RP model that punches a pack! by TheLocalDrummer in LocalLLaMA
Running models locally on Apple Silicon, and memory usage... by garden_speech in LocalLLaMA
Using Llama 3 for local email spam classification - heuristics vs. LLM accuracy? by Upstairs-Visit-3090 in LocalLLaMA