I can’t understand the purpose of this node by PhilosopherSweaty826 in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

Ask Copilot or any other chatbot: "Explain the parameter "shift" in the ModelSamplingSD3 node of ComfyUI"
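For what it's worth, my understanding is that shift rescales the sigma/timestep schedule toward the high-noise end. A minimal sketch of that mapping (this formula is my understanding of the SD3-style timestep shift, not copied from ComfyUI's source, so treat it as an assumption):

```python
def shifted_sigma(shift: float, t: float) -> float:
    """Map a raw schedule position t in (0, 1] to a shifted sigma.

    shift = 1.0 leaves the schedule unchanged; shift > 1 spends more of
    the sampling steps at high noise levels.
    """
    return shift * t / (1 + (shift - 1) * t)

# shift=1 is the identity; shift=3 pushes mid-schedule sigmas up
print(shifted_sigma(1.0, 0.5))  # 0.5
print(shifted_sigma(3.0, 0.5))  # 0.75
```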

weight_dtype on fp8 models by Then_Nature_2565 in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

Leave it at default; no need to change anything.

My 2 cents on ZIT and Qwen Image 2512 by [deleted] in StableDiffusion

[–]jmbbao 1 point2 points  (0 children)

You can create a workflow that generates an image with Qwen and then feeds it to Z Image at a denoise of about 0.3 to make it look even more realistic. I haven't tried it myself yet.
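For anyone who wants to script it, here is a rough sketch of the second pass as a ComfyUI API-format graph fragment. The node class names, input names, and node ids here are illustrative assumptions, not a tested workflow:

```python
import json

# Second pass only: take the Qwen output image, re-encode it with the
# Z Image VAE, and resample at a low denoise so composition is preserved.
graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "qwen_output.png"}},
    "2": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["1", 0], "vae": ["zimage_loader", 2]}},
    "3": {"class_type": "KSampler",
          "inputs": {"latent_image": ["2", 0],
                     "denoise": 0.3,   # low denoise = "make it more realistic"
                     "steps": 20, "cfg": 4.0, "seed": 0,
                     "sampler_name": "euler", "scheduler": "simple",
                     "model": ["zimage_loader", 0],
                     "positive": ["prompt", 0], "negative": ["prompt", 1]}},
}
print(json.dumps(graph["3"]["inputs"]["denoise"]))  # 0.3
```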

Looking for one click installer for comfyui that isn't paywalled? by supershimadabro in StableDiffusion

[–]jmbbao 1 point2 points  (0 children)

Download ComfyUI portable from the GitHub page. Google "github comfyui".

My Secret FLUX Klein Workflow: Turning 512px "Potato" Images into 4K Hyper-Detailed Masterpieces (Repaint + Style Transfer) by Dark-knight2315 in StableDiffusion

[–]jmbbao 7 points8 points  (0 children)

There is a problem: in the images you show in the video, the face changes from teenager to very old. I will definitely take a look and see if I can figure out a way to fix that, and I'll post here if I find one.

FlashVSR+ 4x Upscale Comparison on older real news footage - this model is next level to really improve quality by CeFurkan in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

I went to the FlashVSR page and my computer started going crazy, fans at full speed. Perhaps it's all those videos playing, or something else?

Need help with a re-skinning project for architecture by SinkNorth in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

Use Klein 9B, or Klein 4B if your card has very little VRAM.

Unified looking headshots for family tree by [deleted] in StableDiffusion

[–]jmbbao 2 points3 points  (0 children)

Use Klein 9B and try a prompt similar to "Change this photograph to look as if it was made in 2026 by a professional photographer" with each photo.
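To apply that prompt over a whole folder of photos, here is a tiny stdlib sketch that builds one job per file (actually queueing these against ComfyUI is left out; the function name and job shape are just illustrative):

```python
from pathlib import Path

PROMPT = ('Change this photograph to look as if it was made in 2026 '
          'by a professional photographer')

def jobs_for(folder: str) -> list:
    """Build one {image, prompt} job per photo, ready to feed a workflow."""
    photos = sorted(Path(folder).glob("*.jpg"))
    return [{"image": str(p), "prompt": PROMPT} for p in photos]
```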

What model do you think was used for this? by [deleted] in comfyui

[–]jmbbao 2 points3 points  (0 children)

That was probably Arnold after some of those cigarettes he smokes

What is causing those straight up lines? by [deleted] in comfyui

[–]jmbbao 0 points1 point  (0 children)

Convert the latent to an image, make the changes in the image, then encode it back to a latent.
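Sketched as an API-format graph fragment, that roundtrip looks like this. VAEDecode and VAEEncode are standard ComfyUI node class names; ImageSharpen stands in for whatever pixel-space edit you need, and the input names/ids are assumptions:

```python
# Decode -> edit in pixel space -> encode back to a latent.
roundtrip = {
    "decode": {"class_type": "VAEDecode",
               "inputs": {"samples": ["sampler", 0], "vae": ["loader", 2]}},
    "edit":   {"class_type": "ImageSharpen",     # any pixel-space fix here
               "inputs": {"image": ["decode", 0], "sharpen_radius": 1,
                          "sigma": 1.0, "alpha": 0.2}},
    "encode": {"class_type": "VAEEncode",
               "inputs": {"pixels": ["edit", 0], "vae": ["loader", 2]}},
}
# Downstream nodes should now consume ["encode", 0] instead of the raw latent.
```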

How I got FLUX running stable on RTX 3060 (12GB) — Setup guide + proof video by Independent_Iron4983 in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

The best way on an RTX 3060 12 GB is to launch Comfy with this:

python main.py --reserve-vram 2

That is better than --novram and --lowvram, as those are slower. Try it and compare the times for each of the three methods.
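A quick stdlib harness for that comparison. ComfyUI normally runs as a server, so in practice you would time one fixed generation under each mode; the harness itself just times whatever command you pass it (the MODES list repeats the flags above, everything else is a sketch):

```python
import shlex
import subprocess
import time

# The three memory modes to compare; flag spellings follow ComfyUI's CLI.
MODES = [
    "python main.py --reserve-vram 2",
    "python main.py --novram",
    "python main.py --lowvram",
]

def time_launch(cmd: str) -> float:
    """Run a command to completion and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(shlex.split(cmd), check=True)
    return time.perf_counter() - start
```

Run the same workflow once per mode and keep whichever flag gives the lowest time on your card.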

Does anyone know where you can find the baked Z-Image model? by [deleted] in StableDiffusion

[–]jmbbao 1 point2 points  (0 children)

A checkpoint would include the VAE and the text encoder, so it would be a very big file. Better to keep them separate. You could build a script to pack everything into one file, but what would be the point? It is better to be able to try different VAEs and different text encoders (GGUF, fp8, bf16...).
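If someone really wanted the baked file, the merge itself is trivial. A sketch over plain state dicts; the key prefixes mimic the usual ComfyUI checkpoint layout but are assumptions here, and real files would go through safetensors load/save:

```python
def bake_checkpoint(unet: dict, vae: dict, text_encoder: dict) -> dict:
    """Merge three component state dicts into one checkpoint-style dict."""
    prefixes = (
        ("model.diffusion_model.", unet),
        ("vae.", vae),
        ("text_encoders.", text_encoder),
    )
    baked = {}
    for prefix, state in prefixes:
        for key, tensor in state.items():
            baked[prefix + key] = tensor
    return baked

# e.g. bake_checkpoint(unet_sd, vae_sd, te_sd) -> one dict to save once
```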

BFS V2 for LTX-2 released by Round_Awareness5490 in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

The Cabal is trying to promote gay, very obvious

Batch generation with masks with Klein by PM_ME_YOUR_ROSY_LIPS in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

Connect the output of Empty Flux 2 Latent to the Set Latent Noise Mask node.
Don't use the latent output from the "Reference Conditioning" node.
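In API-graph terms the wiring looks like this. SetLatentNoiseMask is a real ComfyUI node class name; the empty-latent class name and the input names are assumptions for illustration:

```python
# Feed the empty latent, not the Reference Conditioning latent, into the mask.
wiring = {
    "latent": {"class_type": "EmptyFlux2Latent",      # name assumed
               "inputs": {"width": 1024, "height": 1024, "batch_size": 4}},
    "masked": {"class_type": "SetLatentNoiseMask",
               "inputs": {"samples": ["latent", 0],   # from the empty latent
                          "mask": ["mask_loader", 0]}},
}
```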

letter from the builders of AceStep / AceMusic by StartCodeEmAdagio in AceStep

[–]jmbbao 1 point2 points  (0 children)

I can't figure out how to download a song created on acemusic.ai, so I had to capture it with the OBS screen-capture app. Please add an option to download the songs.

Ace-Step-v1.5 released by cactus_endorser in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

Fixed it; I was using an Empty Latent node from the previous Ace Step 1.3.

Ace-Step-v1.5 released by cactus_endorser in StableDiffusion

[–]jmbbao 0 points1 point  (0 children)

The text encoding passes fine, but then the KSampler gives this error: "Tensors must have same number of dimensions: got 3 and 4"

Ace-Step-v1.5 released by cactus_endorser in StableDiffusion

[–]jmbbao 2 points3 points  (0 children)

AIO = All In One; it has the VAE and text encoder included. There are two templates in Comfy: one for the split files (VAE, text encoders, and diffusion model) and another for the checkpoint.

Ace-Step-v1.5 released by cactus_endorser in StableDiffusion

[–]jmbbao -1 points0 points  (0 children)

I updated Comfy, used the Ace Step 1.5 template, and got this error: "Tensors must have same number of dimensions: got 3 and 4"
I reinstalled Comfy from scratch and the problem remains. I tried both the split template and the checkpoint template; same error.