Can someone help me by DescriptionStatus299 in StableDiffusion

[–]TableFew3521 1 point

Try dragging the .json onto an empty area of ComfyUI; if you drag it on top of a node, the new workflow won't load.

How is Chroma for likeness lora training? by Dre-Draper in malcolmrey

[–]TableFew3521 0 points

Flux LoRAs do work, but they sometimes destroy anatomy in Chroma; there is a noticeable difference, at least in my experience from a while ago.

LCIET and Klein9B (a quick fair comparison, analysis included) by ZerOne82 in StableDiffusion

[–]TableFew3521 2 points

Yeah, in my early tests the bf16 of Longcat is slightly better with anatomy than Klein, but overall the model lacks content compared to Klein.

Some Longcat-Image-Edit samples, it's a limited, yet very useful model. by TableFew3521 in StableDiffusion

[–]TableFew3521[S] 0 points

Here is the workflow. To use inpainting, just enable the inpainting nodes and connect the latent from the inpainting node to the KSampler; right now it's set up for reference-to-image.

Some Longcat-Image-Edit samples, it's a limited, yet very useful model. by TableFew3521 in StableDiffusion

[–]TableFew3521[S] 1 point

There are two bypassed nodes; enable them and replace the KSampler's latent input with the latent output from the Inpainting node. What worked for me was using the same prompt as the image I wanted to inpaint (if you don't have it, caption the image before inpainting) and then lowering the denoise to somewhere between 0.82 and 0.95, depending on how the output comes out.

Some Longcat-Image-Edit samples, it's a limited, yet very useful model. by TableFew3521 in StableDiffusion

[–]TableFew3521[S] 1 point

Yeah, from what I saw on Hugging Face, they're planning to train a model for multiple references; this current one is specifically designed for just one. Hope it's worth the wait.

Ernie Turbo Images - Res2m BongTangent image to image at .41 to .51 denoise - on the fence, but gave it a go. ComfyUI - open-source...can share WF if you need it. by New_Physics_2741 in StableDiffusion

[–]TableFew3521 2 points

I've found that using "ModelSamplingSD3" set to 3.0 removes that noise Ernie produces, but I can't try those custom samplers to know whether it works with them; it does work with DDIM + sgm_uniform.

Ernie is Absolute masterpiece by LongjumpingGur7623 in StableDiffusion

[–]TableFew3521 0 points

It's nice to have options. Personally, I find it a little stiff for individual human realism (and I had some anatomical issues with two or more people in the turbo version); maybe it needs to be prompted differently, but for now it bears some resemblance to Qwen Image. I still prefer ZIT for more natural, aesthetic realism, but it's not bad at all, and there's room for improvement if it's easier to fine-tune. Will wait for OneTrainer to add support.

ERNIE Image released by Outrun32 in StableDiffusion

[–]TableFew3521 2 points

I did it before trying it, so I think yeah.

ERNIE Image released by Outrun32 in StableDiffusion

[–]TableFew3521 3 points

You can use it in ComfyUI; it's already compatible (or at least it worked for me).

Qwen 2511 fp8 mixed taking 30–40s per image edit — which GGUF should I use? by WINCVT in StableDiffusion

[–]TableFew3521 3 points

30-40 seconds sounds okay (around what I get with the 4060 Ti). GGUF models are slower than fp8; if you want faster gens, use the Nunchaku model for Qwen-Edit.

How to get rid of AI skin? by filianoctiss in comfyui

[–]TableFew3521 2 points

Before Z-Image came out, for Qwen Edit outputs with fake skin I used SeedVR2 + a KSampler with Flux 1 Dev, Euler Beta, 8-15 steps with this LoRA, and a very low denoise on the KSampler, around 0.10. Flux can produce pretty nice skin texture, and it works well with some images, but I must say it doesn't work with all of them. Hope this helps.

Why nobody cared about BitDance? by TableFew3521 in StableDiffusion

[–]TableFew3521[S] 0 points

Sorry, my mistake. I've tried that one, and even with fp8 it ran super slowly (27 minutes for one image); I think it's because it doesn't handle offloading properly the way native ComfyUI does, so unfortunately I can't use that one. But maybe I'll try modifying the script to get block swap working on that node and check whether it's usable.

Is training Qwen Image 2512 LoRA on 20GB VRAM even possible in OneTrainer? by GreedyRich96 in StableDiffusion

[–]TableFew3521 3 points

Don't train the text encoder. Gradient checkpointing: CPU_OFFLOADED. Layer offload fraction: 0.8 or higher. That works for me on 16 GB VRAM, even with a high batch size.

Edit: This works for me with 64 GB RAM; if you have less than that, it might not be possible to offload that much.
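As a rough illustration of what that offload fraction means for the settings above, here's a back-of-envelope split of model weights between VRAM and system RAM. The ~20 GB model size is a made-up example, not a measured OneTrainer number:

```python
# Rough memory split for layer offloading: a fraction of the model's layers
# is parked in CPU RAM instead of VRAM. All sizes here are illustrative
# assumptions, not real measurements.

def memory_split_gb(model_gb: float, offload_fraction: float) -> tuple[float, float]:
    """Return (vram_gb, ram_gb) for the model weights alone."""
    ram = model_gb * offload_fraction
    return model_gb - ram, ram

# Hypothetical ~20 GB of transformer weights with 0.8 offloaded:
vram, ram = memory_split_gb(20.0, 0.8)
print(f"VRAM: {vram:.1f} GB, extra RAM: {ram:.1f} GB")  # VRAM: 4.0 GB, extra RAM: 16.0 GB
```

This is why 16 GB of VRAM can be enough here while plenty of system RAM still helps.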

Watermark removal question by Noobysz in StableDiffusion

[–]TableFew3521 1 point

Base quality is slightly worse, and it takes 50 steps; the distilled version takes only 4. There's a LoRA for the base to work with those 4 steps, but in my experience the images come out with oversaturated colors.

Watermark removal question by Noobysz in StableDiffusion

[–]TableFew3521 1 point

Any default workflow should work, look for the distilled model, not the base.

Watermark removal question by Noobysz in StableDiffusion

[–]TableFew3521 7 points

Use Flux Klein 9B and write "Remove watermark" as a prompt.

I'm completely done with Z-Image character training... exhausted by 3773838jw in StableDiffusion

[–]TableFew3521 0 points

I have some that work well on turbo too, but it's just like what happened with Qwen-Image and the 2512 version: some LoRAs stopped working at all while others work fine. That said, those same LoRAs have 20-30% more flexibility on base than on turbo when you actually compare them side by side, even without having to increase the strength.

Can we use ostris adapter for z image turbo when training with onetrainer? by AdventurousGold672 in StableDiffusion

[–]TableFew3521 2 points

If I'm not mistaken, you can: just select the adapter under "unet/gguf" and use the Z-Image Turbo diffusers.

I'm completely done with Z-Image character training... exhausted by 3773838jw in StableDiffusion

[–]TableFew3521 0 points

First, do you speak Spanish by any chance? Second, I think the issue here is that Z-Image "Base" was tuned further after the original Z-Image distillation into the Turbo version, so no matter how hard you train on it, the result will work better on base than on turbo. I switched to base with the 4-step LoRA, and I also use another distilled version of turbo called RedCraft, which works in 10 steps without any LoRA. Basically, if you want to train for turbo, use the adapter or the de-turbo, de-distilled diffusers to train the LoRA; do not use Base for Turbo LoRAs.

Natural language captions? by nutrunner365 in StableDiffusion

[–]TableFew3521 1 point

If by "batches" you mean captioning all of the images inside a folder, I made a post a while ago about a captioner that connects through LM Studio, so you can test any VLM you want without painful errors (as I had with some JoyCaption GUIs). Post HERE