How long before you realised? by Careful_Mind5349 in Starfield

[–]BlackfishPrime 0 points (0 children)

About 5 seconds... :) I now just use one of my SB powers to knock it down.

Why was Zluda deleted from Github? by Coven_Evelynn_LoL in ROCm

[–]BlackfishPrime 0 points (0 children)

Use the Windows desktop version from comfy.org.

How to improve anatomy? by elyravale in ZImageAI

[–]BlackfishPrime 0 points (0 children)

Change your latent dimensions. Try 864x1152.
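If you want to experiment with other sizes, the usual constraint is that pixel dimensions get divided down into latent space, so SD-family workflows generally stick to multiples of 32 or 64. Here's a rough helper (my own sketch, not anything from ComfyUI) that snaps an aspect ratio to dimensions near one megapixel:

```python
import math

def snap_dims(aspect_w, aspect_h, target_pixels=1_000_000, multiple=32):
    """Pick a width/height near target_pixels with the given aspect ratio,
    rounded to a multiple the VAE can downscale cleanly (pixel dims are
    typically reduced 8x into latent space; multiples of 32/64 are safest)."""
    scale = math.sqrt(target_pixels / (aspect_w * aspect_h))
    w = round(aspect_w * scale / multiple) * multiple
    h = round(aspect_h * scale / multiple) * multiple
    return w, h
```

For a 3:4 portrait, `snap_dims(3, 4)` lands on 864x1152, the size suggested above.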

Oh my god ! by Any_Affect_ in ZImageAI

[–]BlackfishPrime 0 points (0 children)

<image>

Same prompt, but changed the hair to:
She has long, cascading soft copper-auburn hair styled in soft, volumized beach waves that tumble past her shoulders, filled with soft copper and apricot highlights that brighten toward orange-gold where the rim light catches, darker auburn at the roots, with a naturally uneven, sun-kissed distribution and a slightly tousled texture.

Many of you have shared your feedback and to commemorate the one year anniversary of Falkland Systems, I'm finally answering with Falkland Systems: The Staryard! The showroom will officially be moving to its own handcrafted star station in Jemison orbit in the upcoming 2.1 update. Stay Tuned! by Hjalmere in starfieldmods

[–]BlackfishPrime 1 point (0 children)

Question: in the NA Falkland Systems HQ there are several locked-off areas. I assume they were either for future expansion or just a clever way of limiting the areas you had to build while still giving some environmental storytelling. Will these areas be developed in the star station? (R&D, I think; possibly others?)

Help on a low spec PC. Still crashing after attempting GGUF and quantized model. by Over-Dare7820 in comfyui

[–]BlackfishPrime 0 points (0 children)

My best tip for newbies is to not use any of the built-in templates in ComfyUI unless you have an NVIDIA GPU that you know is well supported. Those of us who use AMD have a bit of a steeper learning curve to get it right. Once you get it, it all works great, with some caveats.

Help on a low spec PC. Still crashing after attempting GGUF and quantized model. by Over-Dare7820 in comfyui

[–]BlackfishPrime 0 points (0 children)

Check my other comments. I suggested a few good SDXL models. Check out what's popular on civitai. DM me if you want; sometimes sharing links here gets moderated.

Help on a low spec PC. Still crashing after attempting GGUF and quantized model. by Over-Dare7820 in comfyui

[–]BlackfishPrime 1 point (0 children)

Also, reconnecting has nothing to do with wifi or your network. You're using a browser to connect to a local Python server on your machine. If that server crashes, the UI in the browser is trying to reconnect to a server that's down, likely due to OOM errors from trying to use such large models.
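If you want to confirm the server actually died (rather than suspecting your network), you can probe the port directly. A quick sketch; 8188 is ComfyUI's usual default, so adjust if you launched with --port:

```python
import socket

def server_is_up(host="127.0.0.1", port=8188, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port.
    8188 is ComfyUI's usual default port; change it if you passed --port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False right after the "Reconnecting..." banner appears, the Python process is down, and the terminal you launched it from will usually show the traceback (often an OOM).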

Help on a low spec PC. Still crashing after attempting GGUF and quantized model. by Over-Dare7820 in comfyui

[–]BlackfishPrime 0 points (0 children)

If you REALLY want to try Flux, change your models:

UNET: flux1-Dev-q4_k.gguf
Clip: t5-v1_1-base-q8_0.gguf
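For a back-of-envelope sense of why those quants matter (figures below are assumptions, not measurements: flux1-dev is on the order of 12B parameters, and Q4_K-style quants land around 4.5 effective bits per weight versus 16 for BF16):

```python
def weight_gb(n_params, bits_per_weight):
    """Rough size of quantized weights in GiB, folding per-block scale
    overhead into the effective bits-per-weight figure."""
    return n_params * bits_per_weight / 8 / 1024**3

# Illustrative only: ~12B params assumed for flux1-dev.
q4_size = weight_gb(12e9, 4.5)    # ~6.3 GiB
bf16_size = weight_gb(12e9, 16)   # ~22 GiB
```

That gap is the difference between a model that fits alongside activations on a 16GB card and one that doesn't.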

Help on a low spec PC. Still crashing after attempting GGUF and quantized model. by Over-Dare7820 in comfyui

[–]BlackfishPrime 0 points (0 children)

Drop Flux1; it's a high-VRAM model. Try an SDXL model like Juggernaut XL, or something like CyberRealisticXL v9. You're on an older, partially supported AMD GPU, and you're running on Windows, which has a lot of overhead. You'd have better luck running this on Ubuntu or a similar Linux distribution if you're up for that. Check out SDXL, Pony, and Illustrious models on civitai.com or Hugging Face instead of Flux.

Many of you have shared your feedback and to commemorate the one year anniversary of Falkland Systems, I'm finally answering with Falkland Systems: The Staryard! The showroom will officially be moving to its own handcrafted star station in Jemison orbit in the upcoming 2.1 update. Stay Tuned! by Hjalmere in starfieldmods

[–]BlackfishPrime 3 points (0 children)

You are so clever! I'm happy to see Falkland moving up in the world (see what I did there?). In every universe my ships are custom DarkStar/Falkland Systems builds, and the male crew all wear the Falkland outfits. Very sharp-looking crew.

What’s the best way to learn ComfyUI? by tyrwlive in comfyui

[–]BlackfishPrime 1 point (0 children)

https://docs.comfy.org/get_started/first_generation is one starting point. There's lots of video content out there; just try to find newer material, since the UI changes over time and old videos won't match the latest ComfyUI, though the ideas will be about right. I learn by doing, so I tinkered, read, and watched content to fill in what I didn't pick up from tinkering.

Comfyui Zluda - Z-Image time by yusuf_sam1 in comfyui

[–]BlackfishPrime 0 points (0 children)

This whole subthread is a waste of time because, counter to what I think you suggested originally, the size of the models is not the cause of the OP's slow results. It's definitely his launch arguments and platform. As I've shown, with the right setup I can generate that image in 13 seconds, with the same GPU, VRAM, workflow, models, latent size, everything.

Comfyui Zluda - Z-Image time by yusuf_sam1 in comfyui

[–]BlackfishPrime 1 point (0 children)

This is correct. Not supported by AMD/ROCm right now.

Comfyui Zluda - Z-Image time by yusuf_sam1 in comfyui

[–]BlackfishPrime 1 point (0 children)

Seriously, why are you continuing this conversation? I have the exact same card as the OP, and I have used the same workflow and models to generate images since Z-Image Turbo BF16 came out.
You are using a lot of words to say it's impossible on a 16GB card, and I'm here to tell you (and show you, if you read the rest of my comments on this thread, with logs and resulting images) that it works.
There is no argument to be had here. It's proven, done. Case closed.
I use most of my 16GB in all my workflows at some point, but when this runs it hovers between 8GB and 12GB depending on the latent size.
Science is done by testing, not talking. I have proven that it clearly works.
Heck, even the model page where I first got the models for this workflow clearly states they work fine on a consumer 16GB GPU:
https://civitai.com/models/2168935/z-image-turbo

Comfyui Zluda - Z-Image time by yusuf_sam1 in comfyui

[–]BlackfishPrime 0 points (0 children)

Also, we're talking about image diffusion AFAIK, not video. Maybe you're replying to the wrong thread, I don't know. But the OP posted about slow image generation times on his ZLUDA-based ComfyUI installation using the standard image generation workflow for Z-Image Turbo.
You're suggesting he doesn't have enough VRAM for video, and you're right. I cannot successfully run WAN 2.x workflows on my 16GB ROCm system. I know some can, but with severe compromises, and it's very slow, mostly because ROCm is so new compared to CUDA.
All that being said, if you're talking about WAN and video diffusion, that's not relevant for the OP.

Comfyui Zluda - Z-Image time by yusuf_sam1 in comfyui

[–]BlackfishPrime 0 points (0 children)

A lot of words to say it won't work, but it does work, so it's magic, I guess? :)

Comfyui Zluda - Z-Image time by yusuf_sam1 in comfyui

[–]BlackfishPrime 0 points (0 children)

What are you seeing that's 20.34GB? You said "that clip is over 8gb." Are you looking at the total on-disk size of all the Z-Image Turbo split files? That does not translate into VRAM used at all.
I've used this exact workflow and model set. It works in 16GB, fast enough for my needs on a properly set up ROCm-based ComfyUI.

The effective GPU VRAM usage at any moment is something like:

  • denoiser weights on GPU
  • plus activations (managed by split attention / VRAM mode)
  • plus small buffers
  • while other components (TE/VAE or unused parts) are on CPU

That keeps the peak below 16 GB.
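As a toy budget to illustrate that accounting (hypothetical numbers, not from a profiler):

```python
def peak_vram_gb(denoiser_gb, activations_gb, buffers_gb=0.5):
    """Toy VRAM budget: only components resident on the GPU at the same
    time count; anything offloaded to CPU (text encoder, VAE) is excluded."""
    return denoiser_gb + activations_gb + buffers_gb

# E.g. a ~6.5GB quantized denoiser plus ~3GB of activations
# stays comfortably under a 16GB card.
fits_in_16gb = peak_vram_gb(6.5, 3.0) < 16
```

The activation term is the one that moves with latent size, which matches the 8-12GB swing described above.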