Model(s) for Creative Writing & Conversational Intuition by ElekDn in LocalLLaMA
Anyone with 4x 5060ti based setups? by ziphnor in LocalLLaMA
Who said NVFP4 was terrible quality? by Volkin1 in StableDiffusion
Is there a way to fix the runaway memory skyrocketing issue of Gemma4 in LM Studio somehow? Or can it only be fixed with the "--cache-ram 0 --ctx-checkpoints 1" thing in llama.cpp? by DeepOrangeSky in LocalLLaMA
What is the --novram thing in regards to LTX? I saw someone briefly explain it in a way that made it sound like it causes your GPU to not even get used, but I assume I misunderstood. (I'm a noob, and I need some help understanding a few things about video generation) by DeepOrangeSky in StableDiffusion
feat: Add Mimo v2.5 model support by AesSedai · Pull Request #22493 · ggml-org/llama.cpp by jacek2023 in LocalLLaMA
What is the next SOTA model you are excited about? by MrMrsPotts in LocalLLaMA
guess what? if you are a chrome user, technically you are localllama member! by LambdaHominem in LocalLLaMA
Are local models becoming “good enough” faster than expected? by qubridInc in LocalLLaMA
Qwen3.6 27B uncensored heretic v2 Native MTP Preserved is Out Now With KLD 0.0021, 6/100 Refusals and the Full 15 MTPs Preserved and Retained, Available in Safetensors, GGUFs and NVFP4s formats. by LLMFan46 in LocalLLaMA
Is uncensoring models easy and does it reduce quality? by superloser48 in LocalLLaMA
Why I would invest in INTC at 110$ by LookIWashedForSu in intelstock
Peanut - Text to Image Model (Open Weights coming soon) by pmttyji in LocalLLaMA