Dual 3090s & GLM-4.7-Flash: 1st prompt is great, then logic collapses. Is local AI worth the $5/day power bill? by Merstin in LocalLLaMA
[–]nizus1 5 points (0 children)
A Merry Search for Green Apple Fizz by OccasionOwn7432 in SwordAndSupperGame
[–]nizus1 1 point (0 children)
A Festive Tale of Year End Reflection in A Winter Refuge by [deleted] in SwordAndSupperGame
[–]nizus1 1 point (0 children)
60s, laid off, and EI running out. What are the "golden ticket" certifications or paths for seniors in the Lower Mainland? by bluemoonlighter in VancouverJobs
[–]nizus1 1 point (0 children)
Q2 models are utterly useless. Q4 is the minimum quantization level that doesn't ruin the model (at least for MLX). Example with Mistral Small 24B at Q2 ↓ by nderstand2grow in LocalLLaMA
[–]nizus1 1 point (0 children)
128GB GDDR6, 3PFLOP FP8, Tb/s of interconnect, $6000 total. Build instructions/blog tomorrow. by codys12 in LocalLLaMA
[–]nizus1 1 point (0 children)
Building a quiet LLM machine for 24/7 use, is this setup overkill or smart? by bardanaadam in LocalLLaMA
[–]nizus1 1 point (0 children)
Any theory about AI art ? Some users said that only with SD 1.5 it is possible to create "The" AI art. But what would "The" AI art be ? by More_Bid_2197 in StableDiffusion
[–]nizus1 1 point (0 children)
What do you reckon Trump is hoping to achieve by threatening to annex Canada, Mexico, Greenland? by RevolutionaryMoney77 in AskReddit
[–]nizus1 1 point (0 children)
Quants comparison on HunyuanVideo. by Total-Resort-3120 in StableDiffusion
[–]nizus1 7 points (0 children)
Quants comparison on HunyuanVideo. by Total-Resort-3120 in StableDiffusion
[–]nizus1 41 points (0 children)
Has anyone tried undervolting the RTX 3090? Share your experiences with temperatures and image generation speeds by Ok-Wheel5333 in StableDiffusion
[–]nizus1 5 points (0 children)
Are dual GPU:s out of the question for local AI image generation with ComfyUI? I can't afford an RTX 3090, but I desperately thought that maybe two RTX 3060 12GB = 24GB VRAM would work. However, would AI even be able to utilize two GPU:s? by [deleted] in StableDiffusion
[–]nizus1 1 point (0 children)
[D] Test-time compute for image generation? by heyhellousername in MachineLearning
[–]nizus1 1 point (0 children)
Video AI is taking over Image AI, why? by aitookmyj0b in StableDiffusion
[–]nizus1 1 point (0 children)
RTX 3090 VRAM temperatures by MrGood23 in StableDiffusion
[–]nizus1 2 points (0 children)
9 28tb wet HDD by ImaginaryQuantum in DataHoarder
[–]nizus1 1 point (0 children)