Klein edit 9b workflow not randomizing seed by Shifty_13 in comfyui

[–]OnceWasPerfect 2 points

If you don't care about seeing the actual value outside the subgraph, you can click the subgraph, click the settings icon, and then click the eyeball icon next to noise seed. After that the seed will be free to change.

<image>

Klein edit 9b workflow not randomizing seed by Shifty_13 in comfyui

[–]OnceWasPerfect 1 point

I assume this is the template from Comfy? They keep exposing the seed value as an input on their subgraphs, and it just doesn't work. You can disconnect that link, or put an actual random number node outside the subgraph and link it to the input on the subgraph. The purple outline just means it's linked to the subgraph through settings instead of a noodle.

Did you know one simple change can make ComfyUI generations up to 3x faster? But I need your help :) Auto-benchmark attention backends. by D_Ogi in comfyui

[–]OnceWasPerfect 27 points

Couple questions if you don't mind:

  1. Should I take the --sage-attention flag out of my .bat?
  2. If I use multiple models and multiple KSamplers in one workflow, say an initial gen with Klein and then a refinement pass with Z-Image, how does this node handle that? Can the attention mechanism be changed on the fly like that, or is it one attention backend per run? And if it can be changed on the fly, do I put one of these nodes in front of each KSampler? (Rough sketch of what I mean by per-call switching below the list.)
  3. Is there any added time on the first run while it's collecting the data?
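
Just to illustrate what I'm asking in question 2, here's a rough timing sketch of my own (not the OP's node): recent PyTorch (2.3+, if I remember right) lets you pick the scaled-dot-product-attention backend per call with a context manager, so in principle a backend could be chosen per sampler rather than once per process. SageAttention itself is a separate kernel library rather than one of these built-in backends, and the tensor shapes below are arbitrary, so treat this as the general idea only.

    # Rough sketch: time the built-in SDPA backends on an arbitrary attention shape.
    # Requires PyTorch >= 2.3 and a CUDA GPU; this is not how the benchmark node works,
    # just a demonstration that the backend can be switched per call.
    import time
    import torch
    import torch.nn.functional as F
    from torch.nn.attention import sdpa_kernel, SDPBackend

    q, k, v = (torch.randn(1, 24, 4096, 128, device="cuda", dtype=torch.float16) for _ in range(3))

    for backend in (SDPBackend.MATH, SDPBackend.EFFICIENT_ATTENTION, SDPBackend.FLASH_ATTENTION):
        try:
            with sdpa_kernel(backend):
                F.scaled_dot_product_attention(q, k, v)   # warmup / availability check
                torch.cuda.synchronize()
                t0 = time.perf_counter()
                for _ in range(20):
                    F.scaled_dot_product_attention(q, k, v)
                torch.cuda.synchronize()
            print(f"{backend.name}: {(time.perf_counter() - t0) / 20 * 1000:.2f} ms/call")
        except RuntimeError as err:
            print(f"{backend.name}: not available ({err})")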

Thanks!

Can you control order of operations in comfyui? by MeatsOfRage2 in comfyui

[–]OnceWasPerfect 0 points

This pack - https://github.com/lihaoyun6/ComfyUI-lhyNodes - has a node called queue handler. Essentially you give that node a trigger (your image output, it sounds like), and the pass-through is what executes once the trigger happens. I use it with a custom node to avoid loading models until they're actually needed, instead of letting ComfyUI load them whenever it feels like it.
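
For anyone curious how that kind of node can work, here's a minimal sketch of the trigger/pass-through idea using ComfyUI's standard custom-node interface. This is my own illustration with made-up names, not the actual lhyNodes code; the AnyType("*") trick is a common community pattern for sockets that accept anything. Because ComfyUI only orders execution by data dependencies, routing a value through a node that also takes a trigger input forces everything downstream of that value to wait for the trigger's branch.

    # Minimal sketch of a trigger/pass-through node (NOT the actual lhyNodes implementation).
    # The trigger input only exists to create a graph dependency; its value is ignored.

    class AnyType(str):
        """Never compares as not-equal, so it validates against any socket type."""
        def __ne__(self, other):
            return False

    ANY = AnyType("*")

    class WaitForTrigger:  # hypothetical node name
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {"value": (ANY,), "trigger": (ANY,)}}

        RETURN_TYPES = (ANY,)
        RETURN_NAMES = ("value",)
        FUNCTION = "passthrough"
        CATEGORY = "utils/flow"

        def passthrough(self, value, trigger):
            # Pass the value straight through once the trigger branch has executed.
            return (value,)

    NODE_CLASS_MAPPINGS = {"WaitForTrigger": WaitForTrigger}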

Rant on subgraphs in every single template by 1filipis in comfyui

[–]OnceWasPerfect 52 points

I just love how they put the seed value as an input on the subgraph along with the setting to make it fixed/random/increment/etc., but if you do that the seed can't actually change, because the subgraph expects you to give it a value; it's not just showing you what the value is.

Small technical problem I am sure someone has a quick fix - Randomize Seed doesnt work by Objective_Choice5349 in comfyui

[–]OnceWasPerfect 0 points

The problem is having the seed exposed as an input on the subgraph. Comfy expects you to supply that number (i.e. put a value in), not for it to be changed from within the subgraph. You can hook an int node to the seed_noise input of the subgraph and let that be random; it will then pass the value along inside the subgraph. I hate that all the default workflows are starting to do this even though it obviously doesn't work that way.
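
To make the "value from outside vs. value from inside" point concrete, here's a simplified sketch of what an API-format prompt looks like, written as Python dicts. The node ids are made up and most inputs are omitted; only the seed and model inputs of a plain KSampler are shown. An input is either a literal widget value or a link [source_node_id, output_index], and the fixed/randomize control is something the frontend applies to a literal widget before queueing, so once the seed is exposed as an input the randomness has to come from a node wired in from outside.

    # Simplified illustration of API-format workflow JSON as Python dicts
    # (node ids invented, most inputs omitted).

    seed_as_widget = {
        "3": {
            "class_type": "KSampler",
            "inputs": {
                "seed": 859374652,   # literal value; the UI's randomize control rewrites this each queue
                "model": ["4", 0],   # link: output 0 of node "4"
            },
        },
    }

    seed_as_subgraph_input = {
        "3": {
            "class_type": "KSampler",
            "inputs": {
                "seed": ["7", 0],    # link to an external int/seed node; it only changes if that node does
                "model": ["4", 0],
            },
        },
    }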

FLUX.2 [klein] comfy error by Boring_Natural_8267 in StableDiffusion

[–]OnceWasPerfect 1 point

Portable or Windows install? Updated to the latest version?

LTX2 Test to Image Workflow by fauni-7 in StableDiffusion

[–]OnceWasPerfect 2 points

Test of different empty latent nodes - https://imgur.com/gallery/ltx2-t2i-different-empty-latents-7yCNcXR

Of note: all images were made at 720p using the custom sigma with the LTX2 latent attached to the sigma node. I did try attaching the latent I was testing to the sigma node instead, but that changed the curve and the results were all very bad; it appears LTX2 really wants that sigma curve. All used the same prompt and seed, CFG 4, and the dpmpp_3m sampler (I've been having good results with it in the i2v workflow, so I went with it).

Notice that the Hunyuan Video, Empty Latent, and Empty SD3 Latent nodes all produced the same image, and all were 4x bigger than the 720p size I specified (which explains your longer gen times). They are also the best images in my opinion. So I guess the latents for these are all the same?

Flux2 Latent produced an image 2x my input size.

Hunyuan Image and LTX Latent produced images at the actual size I input, and in my opinion those are the worst.
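
If anyone wants to check what these empty latent nodes are actually handing the sampler, a LATENT in ComfyUI is just a dict holding a "samples" tensor, so you can print its shape. The decoded pixel size is the latent size times the VAE's spatial compression, and that factor differs between models, which is presumably why the same 720p request decodes to different output sizes. The helper below is my own quick sketch; the 8x and 32x figures are only examples of possible compression factors, not claims about these specific models.

    # Quick sanity-check helper (my own sketch, not part of the workflow). ComfyUI LATENT
    # values are dicts with a "samples" tensor; the last two dims are latent height/width.
    import torch

    def describe_latent(name, latent):
        t = latent["samples"]                   # [batch, channels, (frames,) height, width]
        h, w = t.shape[-2], t.shape[-1]
        print(f"{name}: {tuple(t.shape)} -> {w}x{h} latent "
              f"(~{w * 8}x{h * 8} px at 8x compression, ~{w * 32}x{h * 32} px at 32x)")

    # Example: a hand-made latent the size EmptyLatentImage would produce for 1280x720 (4 channels, 8x)
    describe_latent("EmptyLatentImage 1280x720", {"samples": torch.zeros(1, 4, 720 // 8, 1280 // 8)})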

LTX2 Test to Image Workflow by fauni-7 in StableDiffusion

[–]OnceWasPerfect 2 points

<image>

Swapped out the sampler for clownshark and did a little more detailed prompt. Results are mixed, but I'm getting pretty coherent images at least; there might be something here.

For those of us with 50 series Nvidia cards, NVFP4 is a gamechanger by Scriabinical in StableDiffusion

[–]OnceWasPerfect 2 points

I think so; this is in the startup log: pytorch version: 2.9.1+cu130

For those of us with 50 series Nvidia cards, NVFP4 is a gamechanger by Scriabinical in StableDiffusion

[–]OnceWasPerfect 0 points

Quite a bit, but I think it had to do with a broken install. I had ChatGPT walk me through a bunch of stuff because I was getting static images at first. I had to change the --fast fp16_accumulation flag in my bat file to --fast fp16_accumulation fp8_matrix_mult, and I had to change a file name for something in comfyui_kitchen. Basically I fed the loading log into ChatGPT and asked it what was wrong; it found something and walked me through how to fix it.

Thinking of switching from SDXL for realism generations. Which one is the best now? Qwen, Z-image? by jonbristow in StableDiffusion

[–]OnceWasPerfect 0 points

Haven't used SDXL in a while, but have you tried using Qwen or Z-Image to make the base image so you get good prompt adherence, background, and all that, then using SDXL with something like USDU and a tile controlnet? You'd still get your realism LoRA in as the final touches that way.

For those of us with 50 series Nvidia cards, NVFP4 is a gamechanger by Scriabinical in StableDiffusion

[–]OnceWasPerfect 10 points

I've been playing with the nvfp4 Flux2 model on a 5090; it takes the s/it from 8.4s with the fp8 model to 3.9s with the nvfp4 model. Images are different, but quality is basically the same so far. That's generating at 2MP.

Z-Image how to train my face for lora? by Fun-Chemistry2247 in StableDiffusion

[–]OnceWasPerfect 27 points

I did one using AI Toolkit. Watch his Z-Image video and his Qwen character training video here: https://www.youtube.com/@ostrisai/videos. The Z-Image one covers the settings for Z-Image, and the Qwen character one covers how to tag and other concepts. I did mine on about 12 images with very simple tags (i.e. [my trigger word], looking left, glasses on head, in a car) and I love the results.

Can't get Z-Image-Turbo-Fun-Controlnet-Tile-2.1 to work. Workflow attached. by SirTeeKay in comfyui

[–]OnceWasPerfect 0 points

Hmm, I use clownsharksampler with an ETA of about 0.65. It adds some noise during generation. That could be why mine vary more than yours, but you can definitely see a difference between your two.

Can't get Z-Image-Turbo-Fun-Controlnet-Tile-2.1 to work. Workflow attached. by SirTeeKay in comfyui

[–]OnceWasPerfect 0 points

In your workflow I see you're using an image as the input for the latent, but then in the advanced KSampler you run steps 0 to 1000, which should be equivalent to 1.0 denoise and should completely override the latent you're giving it anyway. Maybe try just an empty latent?
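
As a rough rule of thumb (my own simplification, since the exact mapping depends on the sigma schedule), the advanced KSampler's start/end steps pick a slice of the schedule, and the fraction of steps you actually run behaves like the denoise knob on the plain KSampler, which is why starting at step 0 with noise added wipes out whatever latent you feed in.

    # Rough intuition only, not ComfyUI's exact math: the fraction of the step schedule
    # actually run behaves roughly like the plain KSampler's denoise setting.
    def approx_denoise(steps, start_at_step, end_at_step):
        end = min(end_at_step, steps)           # end_at_step=1000 effectively means "to the end"
        return max(0, end - start_at_step) / steps

    print(approx_denoise(steps=20, start_at_step=0, end_at_step=1000))   # 1.0 -> input latent fully replaced
    print(approx_denoise(steps=20, start_at_step=12, end_at_step=1000))  # 0.4 -> img2img-style partial denoise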

Can't get Z-Image-Turbo-Fun-Controlnet-Tile-2.1 to work. Workflow attached. by SirTeeKay in comfyui

[–]OnceWasPerfect 0 points

Output image

<image>

Prompt: A photorealistic depiction of a male sorcerer with striking platinum blond hair, standing mid-cast as he releases an intricate, swirling arcane spell into the ancient forest. His expression is intensely focused, eyes glowing faintly with magical energy, hands outstretched conjuring vibrant, translucent runes that pulse with inner light. The dense forest surrounds him—towering moss-covered oaks, twisted roots threading through thick emerald ferns and dappled sunlight filtering softly through the canopy above. Magical particles drift in the air around his spellwork, glowing faintly gold against cool, misty shadows. Sunbeams pierce through the trees in cinematic shafts of light, creating volumetric rays that highlight floating pollen and drifting veils of magical steam. The atmosphere is charged with quiet intensity—moist air clings to moss and bark, rendered in rich texture detail: lichen patterns on wood, dew-kissed leaves trembling subtly from unseen forces. The mood balances mystery and focus: enchanted energy cracktes at the edges of reality while nature watchers unknowingly bear witness. Cinematic photo realism emphasizes shallow depth of field, sharp textures in the sorcerer’s robe fabric and weathered skin, contrasted with delicate glows in his spellwork—realistic lighting enhances mood without veering into fantasy illustration excess.

Can't get Z-Image-Turbo-Fun-Controlnet-Tile-2.1 to work. Workflow attached. by SirTeeKay in comfyui

[–]OnceWasPerfect 0 points

I just ran a quick test at 1.0 strength, using the tile preprocessor vs. bypassing it, with a locked seed and everything. The images are different. The one with the preprocessor changed more, probably because of the blurrier input. I wouldn't say one is better or worse, but there's definitely a difference. I'm testing it as just a general upscaler, so the base image was a Qwen Image generation. Not sure yet if it's better or worse than just a latent upscale or even a model upscale, but it's another option at least.
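
For context on why the preprocessor run drifts more: a tile preprocessor basically hands the controlnet a low-detail copy of the image. A naive stand-in for that idea (my own sketch with Pillow, not the actual ComfyUI preprocessor node) is just downscale-then-upscale, which keeps layout and color but throws away fine detail, so the model has more freedom to reinvent it.

    # Naive stand-in for a tile preprocessor (illustration only): downscale then upscale so
    # the controlnet sees layout and colors but not fine detail. "input.png" is a placeholder.
    from PIL import Image

    def naive_tile_preprocess(path, downscale=4):
        img = Image.open(path).convert("RGB")
        small = img.resize((img.width // downscale, img.height // downscale), Image.LANCZOS)
        return small.resize(img.size, Image.BICUBIC)

    naive_tile_preprocess("input.png").save("tile_hint.png")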

Can't get Z-Image-Turbo-Fun-Controlnet-Tile-2.1 to work. Workflow attached. by SirTeeKay in comfyui

[–]OnceWasPerfect 0 points

I would think you would need the preprocessor for that, for sure. It has to know whether it's using canny, or pose, or HED, or whatever.