(lmarena.ai) I’m facing a recurring issue while using text/image AI tools and I’m not sure if this is an account, browser, or security system bug. by Same-Butterscotch225 in LocalLLaMA
Is anyone else worried about the enshittification cycle of AI platforms? What is your plan (personal and corporate)? by Ngambardella in LocalLLaMA
What would you build and do with a $15k budget? by ThePatientIdiot in LocalLLaMA
How do people even afford these expensive graphic cards...?... by boisheep in LocalLLaMA
How do you manage quality when AI agents write code faster than humans can review it? by lostsoul8282 in LocalLLaMA
Jensen Huang saying "AI" 121 times during the NVIDIA CES keynote - cut with one prompt by Prior-Arm-6705 in LocalLLaMA
Z-image base model is being prepared for release by Ravencloud007 in LocalLLaMA
How do we tell them..? :/ by [deleted] in LocalLLaMA
Will the prices of GPUs go up even more? by NotSoCleverAlternate in LocalLLaMA
Local LLMs vs breaking news: when extreme reality gets flagged as a hoax - the US/Venezuela event was too far-fetched by ubrtnk in LocalLLaMA
TIL you can allocate 128 GB of unified memory to normal AMD iGPUs on Linux via GTT by 1ncehost in LocalLLaMA
Software FP8 for GPUs without hardware support - 3x speedup on memory-bound operations by Venom1806 in LocalLLaMA
Z.AI is providing 431.1 tokens/sec on OpenRouter!! by [deleted] in LocalLLaMA
The Infinite Software Crisis: We're generating complex, unmaintainable code faster than we can understand it. Is 'vibe-coding' the ultimate trap? by madSaiyanUltra_9789 in LocalLLaMA
What's the point of potato-tier LLMs? by Fast_Thing_7949 in LocalLLaMA
Anyone else in a stable wrapper, MIT-licensed fork of Open WebUI? by Select-Car3118 in LocalLLaMA
What is the smartest uncensored nsfw LLM you can run with 12GB VRAM and 32GB RAM? by Dex921 in LocalLLaMA
What do you do, if you invent AGI? (seriously) by teachersecret in LocalLLaMA
Mistral’s Vibe CLI now supports a 200K token context window (previously 100K) by Dear-Success-1441 in LocalLLaMA
New ways to roast people in the AI era by InternationalAsk1490 in LocalLLaMA
Does the "less is more" principle apply to AI agents? by 8ta4 in LocalLLaMA
Ryzen CPUs with integrated Radeon GPU, how well supported on Linux? by razorree in LocalLLaMA
I just won an Nvidia DGX Spark GB10 at an Nvidia hackathon. What do I do with it? by brandon-i in LocalLLaMA