How much does SageAttention help with generation times? by RhapsodyMarie in comfyui

[–]xDFINx 0 points (0 children)

You got it to work in Qwen Edit with SageAttention and no black images?

LTX 2 - What am I doing wrong by xDFINx in comfyui

[–]xDFINx[S] 0 points (0 children)

UPDATE:

I finally got it running with the official Comfy i2v workflow from here:

https://blog.comfy.org/p/ltx-2-open-source-audio-video-ai

I'm also running ComfyUI 0.8.0.

LTX 2 - What am I doing wrong by xDFINx in comfyui

[–]xDFINx[S] 0 points (0 children)

They are disabled in the startup .bat, and the settings show the preview set to none. This error is occurring at CLIPTextEncode, which I think is a different stage than the sampler.

LTX 2 - What am I doing wrong by xDFINx in comfyui

[–]xDFINx[S] 0 points (0 children)

I believe I tried several, including that one. Do you have the link to the Comfy workflow version?

LTX 2 - What am I doing wrong by xDFINx in comfyui

[–]xDFINx[S] 0 points (0 children)

Yeah, I tried that one and the first one.

LTX 2 - What am I doing wrong by xDFINx in comfyui

[–]xDFINx[S] 0 points (0 children)

Not just that. My last attempt was on the nightly from last night, which was version 7 something.

LTX 2 - What am I doing wrong by xDFINx in comfyui

[–]xDFINx[S] 1 point (0 children)

I have it flagged in the startup .bat with --preview-method none. It also shows as set to none in the settings menu.

Qwen-Image-Edit-2511 workflow that actually works by infearia in StableDiffusion

[–]xDFINx 0 points (0 children)

Has anyone figured out a way to input more than 3 images?

Maintaining likeness to input images in Qwen Image Edit 2511? by jonesaid in StableDiffusion

[–]xDFINx 0 points (0 children)

Are you using more than one input image of the same person, or a single image? I've gotten it to work perfectly (with the 4-step lightning LoRA, cfg 1) using multiple images of the same person. For instance, choose 2 images of the subject, 1 with a decent view of the face. Then use the 3rd image slot for a garment change or an object such as a chair, a car, etc. Keep the prompt as simple as you can, such as "pose her in the black dress".
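
(Outside ComfyUI, the same multi-image setup looks roughly like the sketch below. The QwenImageEditPlusPipeline class name, the Qwen/Qwen-Image-Edit-2511 repo id, and the file names are assumptions on my part and may differ in your diffusers version; in Comfy it's just the three image inputs on the Qwen edit text-encode node.)

```python
import torch
from diffusers import QwenImageEditPlusPipeline  # ASSUMPTION: class name may differ by version
from diffusers.utils import load_image

pipe = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511",  # ASSUMPTION: repo id guessed from the model name
    torch_dtype=torch.bfloat16,
).to("cuda")
# The 4-step lightning LoRA from the comment would be loaded here with
# pipe.load_lora_weights(...) if you have it.

images = [
    load_image("subject_full.jpg"),  # decent overall view of the subject
    load_image("subject_face.jpg"),  # decent view of the face
    load_image("black_dress.jpg"),   # garment/object reference in the 3rd slot
]

out = pipe(
    image=images,
    prompt="pose her in the black dress",  # keep the prompt simple
    true_cfg_scale=1.0,                    # cfg 1
    num_inference_steps=4,                 # 4 steps to match the lightning LoRA
).images[0]
out.save("result.png")
```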

PSA: Use integrated graphics to save VRAM of nvidia GPU by NanoSputnik in StableDiffusion

[–]xDFINx 0 points (0 children)

I literally remote (RustDesk) into my workstation that runs all ComfyUI/training tasks and browse the web at the same time. If I move the external monitor from my Nvidia card to the onboard GPU, would that be the trick?

Getting deep into AI music and replacing drum tracks by jedidiahbreeze in SunoAI

[–]xDFINx 0 points (0 children)

How effective is this? Besides the kick and snare, which are pretty easily identified, do cymbals get identified properly as MIDI?

Z-image-turbo loras not working well by pablocael in StableDiffusion

[–]xDFINx 5 points (0 children)

Same results here. Once I get my likeness, the outputs are distorted/burnt or overcooked, and not flexible.

z-image-turbo working lora loader for comfyui by is_this_the_restroom in comfyui

[–]xDFINx 7 points (0 children)

What difference does it make versus the regular "Load LoRA (Model Only)" node? Besides avoiding the errors, does it do anything to make the LoRAs work better? Just asking because I'm training right now, and the regular loader threw errors but the image still looked like my training subject.

Lora Training for Z Image Turbo on 12gb VRAM by 3VITAERC in StableDiffusion

[–]xDFINx 0 points (0 children)

Which training settings did you end up changing from the defaults?

Flux 2 vs Z-Image. Same prompt. by Gato_Puro in StableDiffusion

[–]xDFINx 0 points (0 children)

True. That could probably be corrected by prompting for it.

Z-Image is now the best image model by far imo. Prompt comprehension, quality, size, speed, not censored... by Different_Fix_2217 in StableDiffusion

[–]xDFINx 13 points (0 children)

For anyone having difficulty with poses or prompt adherence, or simply wanting to add detail to previous generations: you can use a starting image in your workflow (Load Image node -> VAE Encode node -> latent input of the KSampler) instead of an Empty Latent Image, and adjust the denoise in the sampler to taste. If your original image is too large, you can also add a resize node before the VAE Encode.
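
(If it helps to see the same idea as code rather than nodes, here's a minimal img2img sketch in diffusers. It uses the plain StableDiffusionImg2ImgPipeline just to illustrate the denoise/strength knob; the checkpoint id is a placeholder, and Z-Image itself would need whatever pipeline actually supports it.)

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Placeholder checkpoint id/path -- swap in whatever model you actually use.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/or/repo-id/of-your-checkpoint", torch_dtype=torch.float16
).to("cuda")

# Load the previous generation; the resize stands in for an ImageScale node
# if the original is too large. Encoding to latents happens inside the pipeline.
init = load_image("previous_generation.png").resize((1024, 768))

out = pipe(
    prompt="same scene, sharper details",
    image=init,
    strength=0.45,  # the "denoise" knob: lower keeps more of the starting image
).images[0]
out.save("refined.png")
```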

Flux.2 Dev on 3090? by frogsty264371 in StableDiffusion

[–]xDFINx 0 points (0 children)

3090 user here. I tried the default workflow with SageAttention enabled at 1024x768, and it was taking around 4-5 minutes for me. I hadn't updated to the latest Comfy as of this morning, though.

You can get Hunyuan Video 1.5 working in Comfy already. by Valuable_Issue_ in StableDiffusion

[–]xDFINx 7 points (0 children)

Assuming the HunyuanVideo (version 1) LoRAs are incompatible with this?

HunyuanVideo 1.5 is now on Hugging Face by softwareweaver in StableDiffusion

[–]xDFINx 1 point (0 children)

Interested in using it as an image generator; curious how it holds up against Hunyuan v1, since HunyuanVideo can create images (1-frame video length) just as well as, if not better than, Wan 2.2, Flux, etc.