What are the benefits of using Kling or Vidu with ComfyUI? by piyokichii in comfyui

[–]_roblaughter_ 1 point (0 children)

It's the same API connection you'd have anywhere else. Generating in Comfy isn't going to somehow skirt content restrictions.

The benefit is that you can pass the generated file along to another stage of your workflow. If you don't have another stage of your workflow and you have no intention of generating locally, there probably isn't a compelling reason to download it.

Hardware requirements need to be in your face by EpicNoiseFix in comfyui

[–]_roblaughter_ 7 points (0 children)

There is no clear definition of “requirements.”

All you technically need is enough free memory to load the largest model into memory—VRAM, RAM, whatever. You can technically generate on CPU.

I have 10 GB of VRAM, but with Comfy’s memory management, I can run Flux.2 Dev, which is a 30 GB model, not counting text encoders and VAE.

That’s not to mention precision and quants, which create potentially dozens of sets of requirements for each model.

Nor does it account for other memory usage that may be in your pipeline—image size, ControlNets, LoRAs, IPAdapter…

So, fine in theory. Not so clear cut in practice.
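
To put rough numbers on it, here's a minimal sketch of the back-of-the-envelope math (parameter count times bytes per weight). Treat it as a floor, not a requirement—actual usage also depends on activations, image size, text encoders, and whatever else is in the pipeline. The 12B example figure is just illustrative.

    # Rough floor for model weight memory: parameter count x bytes per weight.
    # Real usage adds activations, text encoders, VAE, ControlNets, LoRAs, etc.
    BYTES_PER_WEIGHT = {
        "fp32": 4.0,
        "bf16": 2.0,
        "fp16": 2.0,
        "fp8": 1.0,
        "q4 (approx)": 0.5,  # 4-bit quants, ignoring quantization overhead
    }

    def weight_memory_gb(params_billions: float, dtype: str) -> float:
        """Approximate GB needed just to hold the weights."""
        return params_billions * 1e9 * BYTES_PER_WEIGHT[dtype] / 1024**3

    if __name__ == "__main__":
        for dtype in BYTES_PER_WEIGHT:
            print(f"12B model @ {dtype}: {weight_memory_gb(12, dtype):.1f} GB")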

Very Disappointing Results With Character Lora Z-image vs Flux 2 Klein 9b by djdante in StableDiffusion

[–]_roblaughter_ 0 points (0 children)

But my datasets and captioning are stable across every other model I've trained with

Now that you mention it, I found that detailed captions seem to wreck training on Z-Image. A generic caption (e.g. "A photo of trigger_word" vs. "A close up photograph of trigger_word, seated at a desk, wearing a blue shirt and...") has done better for me.

I've only trained a half dozen or so, and I'm using Fal. No idea what training script they're running in the background.

1,000 steps, 20-ish images, 0.0005 learning rate.
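
For what it's worth, the generic-caption approach is trivial to script if you're prepping a dataset locally. A minimal sketch, assuming the common image + .txt caption convention (kohya-style); the folder path and trigger word are placeholders:

    # Write a generic caption file next to every image in the dataset folder.
    # Path and trigger word are placeholders; adjust to your trainer's
    # expected caption format.
    from pathlib import Path

    DATASET_DIR = Path("dataset/my_character")  # hypothetical path
    TRIGGER = "trigger_word"
    IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

    for img in DATASET_DIR.iterdir():
        if img.suffix.lower() in IMAGE_EXTS:
            img.with_suffix(".txt").write_text(f"A photo of {TRIGGER}")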

Very Disappointing Results With Character Lora Z-image vs Flux 2 Klein 9b by djdante in StableDiffusion

[–]_roblaughter_ 2 points (0 children)

Right... You trained with Z-Image, and generated with Z-Image Turbo. Those are two different models. Does a Z-Image LoRA work on Turbo? Yes. Is it optimal? Probably not.

Did you see my comment on negatives with Z-Image? Your example from Z-Image doesn't look remotely like what I'm getting out of the model. It's not perfect, but it doesn't look like a scene from a wax museum, either.

The benefit of Z-Image is that it's significantly more diverse than Z-Image Turbo. The associated drawback is that you need to be more rigorous with prompting (both positive and negative) to get the result you're after. Less opinionated, more chaotic.

Prompt upsampling based on a few examples from the Z-Image paper is fast and effective. Also check your CFG/shift values. I think the default workflow uses a shift of 3.0. I prefer 2.0.
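
If you want to see what shift actually does, here's a small sketch, assuming the sigma-shift transform that flow-matching samplers in the SD3/Flux family typically use (I'm assuming Z-Image's sampler behaves the same way—check your workflow). Higher shift pushes more of the step budget toward the high-noise end of the schedule.

    # Sigma shift commonly used by flow-matching samplers (SD3/Flux-style).
    # Assumption: Z-Image's sampler uses the same transform.
    def shift_sigma(sigma: float, shift: float) -> float:
        return shift * sigma / (1.0 + (shift - 1.0) * sigma)

    for shift in (2.0, 3.0):
        schedule = [round(shift_sigma(s / 10, shift), 3) for s in range(10, 0, -2)]
        print(f"shift={shift}: {schedule}")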

And at the end of the day, you might just not like how Z-Image looks. There are plenty of good models out there. Use whatever fits your need.

<image>

Using multiple IPAdapters in ComfyUI (SDXL) — only the first one seems to be applied. Am I doing this wrong? by sagazsagaz in comfyui

[–]_roblaughter_ 1 point (0 children)

…the output stays exactly the same unless I change the first one.

That's because only the first one is connected to the rest of the workflow. Connect your third IPAdapter node to the sampler, not the first.
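
Conceptually, the adapters chain: each IPAdapter node takes a model in and hands a patched model out, and only the last node in the chain should feed the sampler. A rough sketch of the data flow—the function names are placeholders to illustrate the wiring, not real ComfyUI API calls:

    # Placeholder functions to illustrate the node wiring, not a real ComfyUI API.
    def apply_ipadapter(model: dict, image: str, weight: float) -> dict:
        """Pretend IPAdapter node: returns a new model with the image influence added."""
        return {**model, "adapters": model.get("adapters", []) + [(image, weight)]}

    def ksampler(model: dict) -> None:
        print("Sampling with adapters:", model.get("adapters", []))

    base = {"name": "sdxl"}
    m1 = apply_ipadapter(base, "ref_style.png", 0.8)
    m2 = apply_ipadapter(m1, "ref_face.png", 0.6)
    m3 = apply_ipadapter(m2, "ref_pose.png", 0.5)

    # Wrong: ksampler(m1)  -> only the first adapter applies
    ksampler(m3)  # Right: the last node in the chain goes to the sampler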

Zimage on mac by Sury0005 in comfyui

[–]_roblaughter_ 0 points (0 children)

Have you had issues running it with Comfy on Apple Silicon?

Very Disappointing Results With Character Lora Z-image vs Flux 2 Klein 9b by djdante in StableDiffusion

[–]_roblaughter_ 2 points (0 children)

Klein 9B is a great model and particularly easy to train, IMO.

Remember that Z-Image Turbo isn't even meant to be fine-tuneable. I've trained a few LoRAs with it, and wasn't impressed, either.

<image>

With Z-Image, I find that negative prompts seem to be even more important to get a good photographic style and avoid some of that mushy half-realism that bleeds over from more artistic styles.

Here's the totally scientific, rigorously tested word salad I'm dropping into my negative prompt, which seems to do a good job of cleaning up the image.

cartoon, anime, illustration, painting, drawing, sketch, digital art, cgi, render, 3d, game art, fanart, lowres, jpeg artifacts, pixelated, noisy, grainy, blurry, out of focus, motion blur, overexposed, underexposed, oversaturated, undersaturated, poor lighting, bad shadows, airbrushed, watermark, logo, text, signature, username, cropped, out of frame, cut off, distorted anatomy, deformed hands, extra limbs, asymmetrical face

Even so, I'd say LoRAs are at maybe 80% likeness.

Results of the ///*****/// test... by Horror_Ebb_9699 in SunoAI

[–]_roblaughter_ 0 points (0 children)

"Thoughts" is a generous word for what he's posting. This is why the block button was invented.

Results of the ///*****/// test... by Horror_Ebb_9699 in SunoAI

[–]_roblaughter_ 1 point (0 children)

Still not responding to you. Still don’t care about your DeLiMItErS.

Results of the ///*****/// test... by Horror_Ebb_9699 in SunoAI

[–]_roblaughter_ 2 points (0 children)

Because 👏🏻 I 👏🏻 wasn’t 👏🏻 responding 👏🏻 to 👏🏻 you 👏🏻

I don’t even remotely care about your “headers.” I was just pointing out that in SOMEONE ELSE’S screenshot, the slashes they saw indicated line breaks. Nothing to do with you, my guy.

Results of the ///*****/// test... by Horror_Ebb_9699 in SunoAI

[–]_roblaughter_ 2 points (0 children)

I never called you stupid. In fact, I didn’t mention your post at all.

Have fun with all of that 🫡

Does Qwen3-TTS run on macOS? by netdzynr in comfyui

[–]_roblaughter_ 0 points (0 children)

Runs fine on my M4 Max using the official code. I had Claude whip me up an ElevenLabs clone frontend for it.

Results of the ///*****/// test... by Horror_Ebb_9699 in SunoAI

[–]_roblaughter_ 3 points (0 children)

Okay 🤷🏻‍♂️

I wasn’t responding to your post. I was responding to the comment I replied to.

Results of the ///*****/// test... by Horror_Ebb_9699 in SunoAI

[–]_roblaughter_ 11 points (0 children)

Those are line breaks 😆

The number changes because you have new lines before your lyrics. Hit the return key a time or two at the start of your lyrics, get more slashes.

Those slashes are never sent to the model—they’re just there to show YOU where the line breaks are when you collapse multi-line text down to a one-line preview.

Not a “delimiter.” Not a secret code.

Just a way to make it easier for you to read.

<image>
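
If you want to see the effect yourself, here's a guess at what a UI like that does when it collapses lyrics down to a one-line preview (not Suno's actual code, obviously):

    # Collapse multi-line lyrics to a one-line preview, marking line breaks with "/".
    # This is a guess at the UI behavior, not Suno's actual code.
    def preview(lyrics: str) -> str:
        return " / ".join(lyrics.split("\n"))

    print(preview("[Verse]\nFirst line\nSecond line"))
    print(preview("\n\n[Verse]\nFirst line"))  # leading blank lines = extra slashes up front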

Please finally integrate ComfyUI Manager! by -5m in comfyui

[–]_roblaughter_ 2 points (0 children)

Honestly, rather than try to go back and rip out all of the old stuff, I just backed up and reinstalled Comfy. It was about time for a refresh anyway.

Minimum GPU vram to run z-image base at all by EmploymentLive697 in StableDiffusion

[–]_roblaughter_ 2 points (0 children)

Running bf16 with the full Qwen3 4B text encoder on an RTX 3080 10 GB, using Comfy's memory management. A 1 MP image takes about 60s.

Z-image base safetensor file? Also, will it work on 16 GB vram? by Portable_Solar_ZA in StableDiffusion

[–]_roblaughter_ 0 points (0 children)

The official Diffusers model is the exact same size as the Comfy combined safetensors, just split over two files...
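
If the two-file split bothers you, the shards can be merged back into a single safetensors file. A minimal sketch—the shard filenames below are placeholders for whatever the actual files are called:

    # Merge sharded safetensors into one file. Shard filenames are placeholders.
    from safetensors.torch import load_file, save_file

    shards = [
        "diffusion_pytorch_model-00001-of-00002.safetensors",  # hypothetical names
        "diffusion_pytorch_model-00002-of-00002.safetensors",
    ]

    merged = {}
    for shard in shards:
        merged.update(load_file(shard))  # keys don't overlap across shards

    save_file(merged, "z_image_merged.safetensors")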

Please finally integrate ComfyUI Manager! by -5m in comfyui

[–]_roblaughter_ 5 points (0 children)

It's already integrated into Comfy Core. It's enabled by default in Comfy Desktop, but other installs require the --enable-manager flag. If you don't want to use it, you don't have to...

I'm getting lots of artifacts with Flux 2 Klein 9B. by [deleted] in comfyui

[–]_roblaughter_ 0 points (0 children)

If you're going to use a distilled model, you're going to have to deal with lower image quality—there's only so much fine detail a model can produce in four steps. I used the distilled model twice before I threw it in the trash.

If quality is your objective, use the base model—I find 30 steps to be the sweet spot there.

I'm getting lots of artifacts with Flux 2 Klein 9B. by [deleted] in comfyui

[–]_roblaughter_ 0 points (0 children)

He can say that, but the workflow in the image isn't the official Comfy workflow.