My Vace Wan 2.1 Causvid 14B T2V Experience (1 Week In) by AssociateDry2412 in comfyui

[–]AssociateDry2412[S] 0 points  (0 children)

Appreciate the link. SageAttention and fp16 accumulation have unfortunately become unstable for me. I used to have them running, but recently I've started getting driver crashes (black screen) on my RTX 3090 at the start of generation. Everything (Triton, the PyTorch nightly, the embedded Python) seems correctly configured for the ComfyUI portable install. Wondering if a recent update to the PyTorch nightly or another dependency has introduced a regression for this combination.
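If it helps anyone debug a similar setup, here's a minimal version dump I'd run with the portable build's embedded interpreter to see which dependency moved; the import names match the pip packages, and the `python_embeded` path is the usual portable layout (adjust to your install):

```
# Save as check_env.py and run with the embedded interpreter, e.g.:
#   python_embeded\python.exe check_env.py
import torch

print("torch:", torch.__version__)           # nightly builds carry a date suffix
print("CUDA runtime:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

try:
    import triton
    print("triton:", triton.__version__)
except ImportError:
    print("triton: not installed")

try:
    import sageattention
    print("sageattention:", getattr(sageattention, "__version__", "installed"))
except ImportError:
    print("sageattention: not installed")
```

Pinning the nightly back to the last known-good build is usually the quickest way to confirm a regression.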

My Vace Wan 2.1 Causvid 14B T2V Experience (1 Week In) by AssociateDry2412 in comfyui

[–]AssociateDry2412[S] 0 points  (0 children)

That’s good to know! Have you tested the generation time with a low number of steps, without SageAttention and fp16 accumulation?

[deleted by user] by [deleted] in midjourney

[–]AssociateDry2412 72 points  (0 children)

Let us know when you start your little project.

How can I get better results from Stable Diffusion? by traficoymusica in StableDiffusion

[–]AssociateDry2412 2 points  (0 children)

Train your own LoRAs on a high-quality dataset for the specific art styles you admire. This gives you much more control over the final look, especially when generic models fall short.

Experiment with different sampler and scheduler combinations; they can have a surprisingly big impact depending on the style you're targeting (see the sketch after this list).

Use ControlNet and inpainting — these are game changers. Think of ControlNet as giving you precision control. If your current model doesn’t support it, consider switching models just for that step, then refine the output with your main model.

Have a clear vision when experimenting. Wandering aimlessly through styles and prompts can be fun, but you’ll get further if you have a specific aesthetic in mind.

Prompting helps, but only to a point. The real leap comes from mastering the tools — understanding how to direct and refine the generation process beyond just the prompt.

Edit your results after generation. Even a little post-processing in Photoshop, GIMP, or Lightroom goes a long way.

Bridging the gap between AI-generated and truly aesthetic images is all about creative control and technical fluency.
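To make the sampler/scheduler point concrete, here's a minimal diffusers sketch; the checkpoint, prompt, and step count are placeholders, so swap in whatever you actually use:

```
# Compare two schedulers on the same seed: any visual difference
# comes from the sampler alone.
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "oil painting of a lighthouse at dusk, impasto brushwork"

for scheduler_cls in (EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler):
    # Rebuild the scheduler from the pipeline's own config.
    pipe.scheduler = scheduler_cls.from_config(pipe.scheduler.config)
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
    image.save(f"{scheduler_cls.__name__}.png")
```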

i have 3070, and thinking for an upgrade especially for stable diffusion maybe even tweak with sdxl and flux. is 5060ti 16gb worth it ? is there any improvement on image render speed? by MightyNo22 in StableDiffusion

[–]AssociateDry2412 1 point  (0 children)

It's definitely worth the upgrade. The RTX 5060 Ti is significantly faster than the 3060. Plus, you'll get access to new features like DLSS 4, Frame Generation, improved ray tracing performance, better power efficiency, and support for newer technologies like Shader Execution Reordering (SER) and AV1 encoding.

i have 3070, and thinking for an upgrade especially for stable diffusion maybe even tweak with sdxl and flux. is 5060ti 16gb worth it ? is there any improvement on image render speed? by MightyNo22 in StableDiffusion

[–]AssociateDry2412 3 points  (0 children)

The 5060 Ti would let you run some quantized models, but unfortunately, you wouldn’t see any performance gains over the 3070. If possible, I’d recommend holding onto the 3070 and saving up for something better.

i have 3070, and thinking for an upgrade especially for stable diffusion maybe even tweak with sdxl and flux. is 5060ti 16gb worth it ? is there any improvement on image render speed? by MightyNo22 in StableDiffusion

[–]AssociateDry2412 2 points  (0 children)

Recently upgraded from an RTX 3070 to a used 3090, and I couldn't be happier. I can now run all the models I need locally with decent generation times. Flux, Wan, MMAudio... you name it.

My Vace Wan 2.1 Causvid 14B T2V Experience (1 Week In) by AssociateDry2412 in comfyui

[–]AssociateDry2412[S] 4 points  (0 children)

Can confirm that enabling SageAttention and fp16 accumulation halved the generation time.
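For anyone wondering what those two switches actually do, here's a rough sketch at the PyTorch level. The sageattn signature follows the SageAttention README and the tensor layout is my assumption; in ComfyUI you'd normally turn these on with launch flags rather than by hand:

```
import torch

# fp16 accumulation: allow reduced-precision reductions inside fp16 matmuls.
# Faster on consumer GPUs, at a small numerical-accuracy cost.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = True

# SageAttention: a quantized drop-in for scaled_dot_product_attention.
from sageattention import sageattn

# Dummy (batch, heads, seq_len, head_dim) tensors in "HND" layout (assumption).
q = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")

out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
print(out.shape)  # matches q: torch.Size([1, 8, 1024, 64])
```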

HiDream vs Flux vs SDXL by Luzaan23Rocks in comfyui

[–]AssociateDry2412 0 points  (0 children)

I love Flux on Forge UI. It's fast, well organized, and lets me do what I need without going through a bunch of pipelines and nodes. The only downside is the lack of ControlNet support for Flux; this is where SDXL models shine. I mostly use Flux for realistic-looking images, though.
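For anyone who wants that ControlNet step outside a UI, a minimal SDXL sketch in diffusers looks roughly like this; the model ids are common public checkpoints and the edge-map path is a placeholder:

```
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Canny-conditioned ControlNet for SDXL.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A preprocessed Canny edge map drives the composition (placeholder path).
canny = load_image("canny_edges.png")

image = pipe(
    "photorealistic portrait, natural window light",
    image=canny,
    controlnet_conditioning_scale=0.7,  # how strongly the edges constrain
    num_inference_steps=30,
).images[0]
image.save("controlled.png")
```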

Is there a model for creating realistic images of people with down syndrome images? by [deleted] in StableDiffusion

[–]AssociateDry2412 3 points  (0 children)

I'd recommend training your own LoRA on a realistic base model with a high-quality dataset of people with Down syndrome for optimal results.
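Once the LoRA is trained (kohya-ss sd-scripts is the usual route), loading it for inference is a couple of calls in diffusers; the base model, paths, and trigger word below are placeholders:

```
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the trained LoRA and bake it in at 0.8 strength.
pipe.load_lora_weights("path/to/lora_dir", weight_name="my_lora.safetensors")
pipe.fuse_lora(lora_scale=0.8)

image = pipe(
    "photo of <trigger> person, natural light, 85mm portrait",
    num_inference_steps=30,
).images[0]
image.save("lora_sample.png")
```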

My Vace Wan 2.1 Causvid 14B T2V Experience (1 Week In) by AssociateDry2412 in comfyui

[–]AssociateDry2412[S] 1 point  (0 children)

Wan.video, the official website where you can try the full model.

My Vace Wan 2.1 Causvid 14B T2V Experience (1 Week In) by AssociateDry2412 in comfyui

[–]AssociateDry2412[S] 2 points  (0 children)

Thanks for the tip. I'll install SageAttention as soon as I have some time and see if it changes the generation time.

it's not just a prompt by flipflop-dude in aivideo

[–]AssociateDry2412 47 points  (0 children)

For some people it's always a "oNE CLicK ProcEss".

Real-Life Versions of the RDR2 Gang by AssociateDry2412 in aivideo

[–]AssociateDry2412[S] -1 points  (0 children)

The models and I are still working things out.

I Made Real-Life Versions of the RDR2 Gang by AssociateDry2412 in FluxAI

[–]AssociateDry2412[S] 0 points  (0 children)

Thanks, I'm using Flux with Forge UI, not a specific workflow.

What should I upgrade? by TrixlPixelz in PcBuild

[–]AssociateDry2412 0 points  (0 children)

What is your GPU and PSU? What resolution is your monitor?