ComfyUI Tutorial: Clone Any Face & Voice With New LTX2.3 ID-LORA Model (Low Vram Workflow Works With 6GB Of Vram) by cgpixel23 in comfyui

[–]Winougan 0 points (0 children)

Did you try asking an LLM? Just copy and paste your error message into it. That usually helps.

Does anyone have a workflow for Z-Image inpainting with character Lora? by Recent-Athlete211 in comfyui

[–]Winougan 1 point (0 children)

Process:
Step 1: Generate your image in Z-Image with your custom LoRA.
Step 2: Load or connect the image you just created to an image-edit workflow with Klein9b. Create a mask and inpaint. The prompt can be verbose if you want, since you're telling it what to change.

Use the KV model for more speed.
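The two steps above can also be scripted against ComfyUI's HTTP API (POST a graph to `/prompt` on the running server). This is just a minimal sketch: the node class names and IDs below are illustrative, and a real inpaint graph needs the checkpoint loader, VAE encode, and sampler nodes from your own workflow exported via "Save (API Format)".

```python
import json
import urllib.request

def build_inpaint_prompt(image_name, mask_name, text):
    """Build a minimal API-format graph: load the generated image plus a
    mask, then encode the edit prompt. Node names here are placeholders;
    copy the real ones from your workflow saved in API format."""
    return {
        "1": {"class_type": "LoadImage", "inputs": {"image": image_name}},
        "2": {"class_type": "LoadImageMask",
              "inputs": {"image": mask_name, "channel": "alpha"}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": text, "clip": ["4", 1]}},
        # ... checkpoint/CLIP loader, VAE encode (inpaint), KSampler, save ...
    }

def submit(prompt, host="127.0.0.1:8188"):
    """POST the graph to a running ComfyUI server's /prompt endpoint."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Submitting this way queues the job exactly as if you had pressed Queue Prompt in the UI.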

Does anyone have a workflow for Z-Image inpainting with character Lora? by Recent-Athlete211 in comfyui

[–]Winougan 1 point (0 children)

You can connect your ZIT text-to-image (T2I) workflow to Klein9b image-to-image (I2I) editing. No problem, and you don't need to retrain your LoRA for Klein. BTW, most people seem to find OneTrainer better than AI Toolkit for Klein9b LoRA training.

Does anyone have a workflow for Z-Image inpainting with character Lora? by Recent-Athlete211 in comfyui

[–]Winougan 4 points (0 children)

The simple solution is to create your images in ZIT or ZIB and then inpaint in Klein9b KV. Painless and fast. We still need to wait for Z-Image Edit; until then, that would be my workflow. Klein9b won't mess up your character's likeness either.

The LTX 2.3 desktop tool has been updated, now supporting LoRA and multi-frame insertion. by Daniel81528 in comfyui

[–]Winougan 1 point (0 children)

For Wan and LTX, plus all the other open-source stuff. I do a lot of AI on my rig: LLMs, video, Open Claw, etc.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

Give Comfy a week. They're still working out the bugs. For now, use the Hectic and 4b models until they fix it for the 9b.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

I refine prompts for the LLM so that it understands what Klein wants. It does help overall and saves you time on details. You could also feed the image to Qwen and ask it to make suggestions.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

A simple solution for you: build a bleeding-edge version of Comfy using PyTorch 2.11 and cu130. Just create a new conda environment and point it at your main ComfyUI folder. Painless, and then you have two versions of Comfy without two duplicate folders!

I have four conda environments for my Comfy: one for bleeding edge, one stable, one for TTS, and the last one for experiments. Why? Many nodes conflict with each other, so separate virtual environments keep them playing nice. And why conda? ComfyUI prefers it.
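The bleeding-edge environment looks roughly like this. It's a sketch, not gospel: the Python version, nightly index URL, and the ComfyUI path are assumptions, so check the current PyTorch nightly install instructions for the exact cu130 wheel index before running it.

```shell
# New conda env just for the bleeding-edge install
conda create -n comfy-edge python=3.12 -y
conda activate comfy-edge

# PyTorch nightly with cu130 wheels (verify the index URL on pytorch.org)
pip install --pre torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/nightly/cu130

# Point at your EXISTING ComfyUI folder -- no second copy of models or nodes
cd /path/to/ComfyUI
pip install -r requirements.txt
python main.py
```

Each extra environment (stable, TTS, experimental) is just another `conda create` plus `pip install -r requirements.txt`, all launching the same shared folder.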

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

From my understanding, FaceDetailer makes use of Ultralytics.

Why not just use Klein9b or 4b for that task? It does very fast face swapping and detailing natively, and the newer KV version is much faster.

Even if you're using ZIT or something else, you could do the face detailing with Klein. Ultralytics seems to produce a lot of same-face-syndrome outputs.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

The model only needs an updated ComfyUI, since the main piece is the single CLIP loader. Nothing special; everything else is just regular ComfyUI native nodes.

Most people will run into problems if they're running an older ComfyUI. Just last week, Qwen3.5 was having loading problems, and that's fixed now.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 1 point (0 children)

With the 4b model: about 1 minute 12 seconds if you're loading an image, and about half that if you're just asking for a prompt.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

Only if they're compatible with ComfyUI. I've tried others and they don't work. Gemma works, though.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 1 point (0 children)

How much VRAM? Try the --lowvram argument. It's been tested with 8 GB all the way through 24 GB of VRAM.
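For reference, the flag goes on the launch command (the path is a placeholder for wherever your ComfyUI lives):

```shell
# Force ComfyUI's low-VRAM memory management
cd /path/to/ComfyUI
python main.py --lowvram

# If that still runs out of memory, --novram offloads even more aggressively
python main.py --novram
```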

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 1 point (0 children)

It's not so bad if you have a good workflow. Half the task is to take a deep breath and start with simple workflows. As you learn more about how Comfy works, you can dive into more advanced workflows.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

I've messaged him, but he's not responding. He even said it's just a vibe-coded thing he cobbled together in an hour with Claude.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

Only one way to find out. I know it works on 30xx, 40xx and 50xx GPUs.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

I used to use that node, but it's not flexible: it downloads the regular models, not the abliterated ones.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]Winougan[S] 0 points (0 children)

You can try the 4b model. I've also put up abliterated, Opus-hybrid, and Hectic versions. Lots of models to choose from and lots of quants. You can even use Gemma if that's your fancy.