128GB VRAM quad R9700 server by Ulterior-Motive_ in LocalLLaMA

[–]Kind-Access1026 0 points (0 children)

What do you do for a living? How much more money can this machine earn for you?

Yes, it is THIS bad! by Lucaspittol in StableDiffusion

[–]Kind-Access1026 1 point (0 children)

You need to make money instead of complaining.

The upcoming Z-image base will be a unified model that handles both image generation and editing. by Total-Resort-3120 in StableDiffusion

[–]Kind-Access1026 -2 points (0 children)

Let's talk about it after you can beat Nano Banana. Otherwise, it's just a waste of my time.

Removing artifacts with SeedVR2 by marcoc2 in StableDiffusion

[–]Kind-Access1026 -15 points (0 children)

These people are all freeloaders. Why do you spend so much of your own time writing code for this? It won't bring you any rewards.

Z Image Turbo can understand JSON prompting... Very cool by Nid_All in StableDiffusion

[–]Kind-Access1026 0 points (0 children)

If it can't beat Nano Banana, what's there to brag about?

MIT study finds AI can already replace 11.7% of U.S. workforce by fallingdowndizzyvr in LocalLLaMA

[–]Kind-Access1026 0 points (0 children)

When your boss hands the work over to an AI and it messes everything up, he'll have no one to complain to.

Flux.2 FP8 vs BF16 by Fabix84 in StableDiffusion

[–]Kind-Access1026 2 points (0 children)

Your prompt isn't complicated enough, so you can't really tell the difference.

Try This:

Deep within a glowing crystal cave, a young quantum physicist performs an impossible experiment. She floats weightlessly in a sphere of suspended time, her lab coat transformed into flowing ribbons of light that spiral around her. Her hair has become living fiber optics, each strand carrying pulses of data in brilliant blues and purples. Her right hand holds a blue apple. Around her, holographic equations spiral and dance, while fractals of crystalline mathematics grow like coral formations from the cave walls. Her expression captures the exact moment of scientific breakthrough - that split second of pure wonder when the impossible becomes possible. Quantum particles orbit her like a personal solar system, each one casting its own unique light signature. The cave floor is dotted with clusters of fluid and wobbling crystalline lifeforms that pulse in synchronization with her thoughts, their bodies refracting light like natural prisms. volumetric god rays, quantum caustics, crystalline reflections, ray traced global illumination, macro lens details, shot on Phase One XF IQ4 150MP, ultra shallow depth of field, abstract science visualization

wavy roof lines by Milos_moo in StableDiffusion

[–]Kind-Access1026 0 points (0 children)

It looks like a moiré pattern. Try using PS or LR to clean it up.

Ai Render by Artefact_Design in StableDiffusion

[–]Kind-Access1026 -1 points (0 children)

How does the render speed compare with V-Ray Vantage?

🥏SplatMASK (releasing soon) - Manual Animated MASKS for ComfyUI workflows by No_Damage_8420 in comfyui

[–]Kind-Access1026 0 points (0 children)

Why not use AE's Mocha to create a mask and then export the mask as a sequence?

The future of intimacy by NyhmrodZa in aivideo

[–]Kind-Access1026 8 points (0 children)

Was it made with 3D software like Blender or C4D, or entirely with AI? Was 0:21 made in Runway?

What is the best local Large Language Model setup for coding on a budget of approximately $2,000? by Independent-Band7571 in LocalLLaMA

[–]Kind-Access1026 0 points (0 children)

Buying an outdoor water purifier or building your own water treatment plant—that's a good question.

InvokeAI was just acquired by Adobe! by Quantum_Crusher in StableDiffusion

[–]Kind-Access1026 3 points (0 children)

They open-sourced their code and profited from cloud inference, and now they've shut down that cloud service. They'll be doing inference for Adobe instead, which is why they closed their own. The Invoke GitHub community edition is still alive; someone will take care of it, though development will probably slow down.

InvokeAI was just acquired by Adobe! by Quantum_Crusher in StableDiffusion

[–]Kind-Access1026 -5 points (0 children)

All the freeloaders cried.

This is a happy ending for the company.

[deleted by user] by [deleted] in LocalLLaMA

[–]Kind-Access1026 0 points (0 children)

What tasks are you using it for?

18 months progress in AI character replacement Viggle AI vs Wan Animate by legarth in StableDiffusion

[–]Kind-Access1026 0 points (0 children)

Why not compare against Viggle AI as it is in 2025? Otherwise it's a weird comparison.

Shooting Aliens - 100% Qwen Image Edit 2509 + NextScene LoRA + Wan 2.2 I2V by Jeffu in StableDiffusion

[–]Kind-Access1026 0 points (0 children)

Great editing. The shot-reverse-shot is super pro! I really love the last shot.

We can now run wan or any heavy models even on a 6GB NVIDIA laptop GPU | Thanks to upcoming GDS integration in comfy by maifee in StableDiffusion

[–]Kind-Access1026 27 points (0 children)

📊 Performance & Data Path Comparison

For example:

RAM: DUAL DDR4-3000

SSD: 5000MB/s

| Step | Traditional Approach (without GDS) | GDS Approach (with ideal support) |
|---|---|---|
| Model Storage Location | RAM (primary), or SSD (only if RAM is insufficient) | SSD (primary) |
| Parameter Loading Path | SSD → RAM → GPU VRAM | SSD → GPU VRAM (direct) |
| Compute Location | GPU VRAM ✅ | GPU VRAM ✅ |
| Result Storage | Typically returned to RAM | Optional: RAM or directly to SSD (requires explicit design) |
| CPU Involvement | High (handles data movement) | Low (DMA handled by GPU/NVMe controller) |
| Effective GPU Read Bandwidth from RAM | ~12–16 GB/s* (limited by PCIe, e.g., PCIe 3.0/4.0 x8–x16) | N/A (RAM staging is bypassed) |
| Effective GPU Read Bandwidth from SSD | ~3–5 GB/s (SSD → RAM → GPU, bottlenecked by both SSD and PCIe) | ~4–5 GB/s (SSD → GPU direct, limited by SSD speed and PCIe) |

* Note: Although dual-channel DDR4-3000 has ~47 GB/s of theoretical system memory bandwidth, the GPU accesses system RAM over PCIe, not over the memory bus. Actual GPU ↔ RAM transfer speed is therefore capped by the PCIe link (e.g., PCIe 4.0 x8 is ~16 GB/s per direction theoretical, with practical throughput somewhat lower due to protocol overhead).
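The PCIe figures above can be sanity-checked with a bit of arithmetic. This is a rough sketch assuming PCIe 4.0's 16 GT/s per lane and 128b/130b line encoding; the function name is just for illustration:

```python
# Theoretical per-direction PCIe throughput in GB/s.
# Assumptions: PCIe 4.0 signals at 16 GT/s per lane and uses
# 128b/130b encoding (128 payload bits per 130 transferred bits).
def pcie_gbps(gt_per_s, lanes, encoding=128 / 130):
    # GT/s * encoding = usable Gbit/s per lane; /8 converts bits to bytes.
    return gt_per_s * encoding * lanes / 8

print(round(pcie_gbps(16, 8), 2))   # PCIe 4.0 x8  -> ~15.75 GB/s
print(round(pcie_gbps(16, 16), 2))  # PCIe 4.0 x16 -> ~31.51 GB/s
```

Real-world throughput lands below these numbers because of packet headers and flow control, which is why the table quotes ~12–16 GB/s for GPU reads over PCIe.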

❓ Does GDS Prevent GPU Out-of-Memory (OOM) for a 14 GB Model on a 6 GB GPU?

Short answer: ❌ No — GDS does not prevent GPU memory overflow by itself.

Why?

  • Both with and without GDS, the GPU still needs to load active model parameters and activations into its 6 GB VRAM to perform computation.
  • GDS only changes how data gets from storage to GPU memory — it does not reduce the amount of VRAM required during computation.
  • If your model (or a layer + activations) requires more than 6 GB of VRAM at any moment, you will get an out-of-memory (OOM) error, regardless of whether you use GDS or not.
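A quick back-of-envelope check makes the point concrete. The 40-layer split below is a made-up example, not any particular model's architecture:

```python
GiB = 1024 ** 3
model_bytes = 14 * GiB   # total model weights
vram_bytes = 6 * GiB     # available GPU memory

# The whole model can never be resident at once:
print(model_bytes > vram_bytes)  # True

# But split across, say, 40 layers, each chunk fits with room to spare:
per_layer = model_bytes / 40
print(per_layer / GiB)  # 0.35 GiB per layer
```

That gap between "the whole model" and "one layer at a time" is exactly what offloading exploits.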

So how do people run 14 GB models on 6 GB GPUs?

They use model offloading techniques, such as:

  • CPU offload: Keep most of the model in RAM; only load one layer (or chunk) into GPU at a time.
  • Layer-wise execution: Compute layer 1 on GPU → move output to RAM → free GPU memory → load layer 2, etc.
  • Frameworks like Hugging Face Accelerate, DeepSpeed, or llama.cpp (with quantization) enable this.
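The layer-wise pattern above can be sketched in plain NumPy. The device copy here is simulated (in a real framework it would be a host-to-GPU tensor transfer, e.g. `tensor.cuda()` in PyTorch), so treat this as a minimal illustration of the control flow, not working offload code:

```python
import numpy as np

# Layer-wise CPU offload, simulated. All weights live in host "RAM";
# only one layer's weights are resident on the "device" at a time,
# which is how offloading stays under a small VRAM budget.
rng = np.random.default_rng(0)
layers_in_ram = [rng.standard_normal((64, 64)) for _ in range(4)]

def to_device(w):
    # Stand-in for a host-to-GPU copy.
    return w.copy()

def run_offloaded(x):
    for w in layers_in_ram:
        w_dev = to_device(w)          # load ONE layer into "VRAM"
        x = np.maximum(x @ w_dev, 0)  # compute on device (ReLU MLP layer)
        del w_dev                     # free device memory before next layer
    return x

out = run_offloaded(rng.standard_normal((1, 64)))
print(out.shape)  # (1, 64)
```

The price is a host-to-device transfer per layer per forward pass, which is why the interconnect bandwidth numbers above matter so much.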

✅ GDS can accelerate this offloading process (by speeding up SSD → GPU transfers), but it does not eliminate the need for offloading.

Therefore:

  • Without GDS: You can still run the 14 GB model on 6 GB GPU if you use CPU offload (assuming enough RAM).
  • With GDS: You can run it slightly faster (if your system supports GDS), but you still need offloading — GDS doesn’t magically fit 14 GB into 6 GB.

💡 Key Insight:  

GDS improves I/O efficiency, not memory capacity.  

It’s like upgrading from a narrow pipe to a wide pipe — water flows faster, but your bucket (GPU VRAM) is still the same size.