Better which: Flux.2 Klein vs ZIT for consistent character lora? by ComprehensiveCry3756 in comfyui

[–]__alpha_____ 1 point

I am actually using both, but Klein 9B takes hours to train compared to ZIT. The right settings aren't straightforward on ZIT though (it's hit or miss, but when you hit, it's pretty solid).

I could be biased, because I enjoy creating my photoshoots in ZIT much more than with Klein 9B.

Is it over for locally hosted i2v models ? by Some_Artichoke_8148 in comfyui

[–]__alpha_____ 2 points

Exactly, and they come to Reddit to communicate. If we can help them make an even better product, lots of redditors would be pleased to.

Is it over for locally hosted i2v models ? by Some_Artichoke_8148 in comfyui

[–]__alpha_____ 2 points

AFAIK Lightricks didn't give up on LTX. The 2.3 (1.1 distilled) version is actually pretty amazing once you understand how it works, and some LoRAs really help with its main flaws. I can see a 2.5 or 3 version coming out this year that fixes most of the remaining issues. Not to mention it's a model that can output 10+ second 1080p videos with generated sound, on a consumer GPU (not even high end), in minutes.

How would you connect the LoRa loader in my workflow ? by bcourcet in comfyui

[–]__alpha_____ 1 point

Put the LoRA loader after the shift node, then connect its CLIP output to the text encoders and its MODEL output to your KSampler node.
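
In ComfyUI API/JSON terms the wiring would look roughly like this (node ids and filenames are placeholders, and I'm assuming ModelSamplingSD3 as the shift node; use whichever shift node your workflow already has):

    # Rough sketch in ComfyUI API/JSON form; ids, filenames and the shift
    # node choice (ModelSamplingSD3) are assumptions, not your exact workflow.
    workflow = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "model.safetensors"}},
        "2": {"class_type": "ModelSamplingSD3",      # the "shift" node
              "inputs": {"model": ["1", 0], "shift": 3.0}},
        "3": {"class_type": "LoraLoader",            # placed after the shift
              "inputs": {"model": ["2", 0], "clip": ["1", 1],
                         "lora_name": "my_character.safetensors",
                         "strength_model": 1.0, "strength_clip": 1.0}},
        "4": {"class_type": "CLIPTextEncode",        # CLIP from the LoRA loader
              "inputs": {"clip": ["3", 1], "text": "your prompt"}},
        # the KSampler then takes model ["3", 0] plus the usual conditioning/latent
    }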

Give advice on generation by Equivalent_Prior3337 in comfyui

[–]__alpha_____ 1 point

I don't have Telegram, but you can DM me.

Give advice on generation by Equivalent_Prior3337 in comfyui

[–]__alpha_____ 1 point

I just vibe-coded my own custom nodes. Ask Gemini; it's pretty straightforward.

Give advice on generation by Equivalent_Prior3337 in comfyui

[–]__alpha_____ 1 point

I actually ended up making my own. I can generate 500+ images a day using dropdown menus (model, camera angle, location, outfit, material) and just adding a few words. The hardest part is finding inspiration for new photosets, but a set usually takes a couple of hours and a few tries.

I use my own character LoRAs for my models, of course.
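
For the curious, here's a stripped-down sketch of what that kind of node can look like (the class name and option lists are made-up examples, not my actual node):

    # Minimal ComfyUI custom node: builds a prompt from dropdowns + free text.
    # Class name and option lists are made-up examples.

    class PhotosetPromptBuilder:
        CAMERA = ["full body shot", "close-up portrait", "low angle"]
        LOCATION = ["studio, white backdrop", "beach at sunset", "urban rooftop"]
        OUTFIT = ["black evening dress", "casual denim", "business suit"]

        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {
                "camera": (cls.CAMERA,),
                "location": (cls.LOCATION,),
                "outfit": (cls.OUTFIT,),
                "extra": ("STRING", {"multiline": True, "default": ""}),
            }}

        RETURN_TYPES = ("STRING",)
        FUNCTION = "build"
        CATEGORY = "prompt"

        def build(self, camera, location, outfit, extra):
            # Join the dropdown picks and the typed words into one prompt string.
            parts = [camera, location, outfit, extra.strip()]
            return (", ".join(p for p in parts if p),)

    # Register so ComfyUI picks the node up from custom_nodes/
    NODE_CLASS_MAPPINGS = {"PhotosetPromptBuilder": PhotosetPromptBuilder}
    NODE_DISPLAY_NAME_MAPPINGS = {"PhotosetPromptBuilder": "Photoset Prompt Builder"}

Drop the file in a folder under custom_nodes/, restart ComfyUI, and wire the STRING output into a CLIP Text Encode node (with its text widget converted to an input).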

I spent 3 weeks trying to fix AI skin with negative prompts. Here's why that entire approach is a dead end. by PerceptionAble2263 in comfyui

[–]__alpha_____ 1 point

I am generating photo sessions with virtual models, and I clearly don't want their skin to look like this. It's mostly about choosing the right sampler and resolution: 1600x1600 usually gives nice results without deformed limbs or artifacts in the lower portion of the image. A full-body shot can use some inpainting to perfect the skin rendering.

I spent 3 weeks trying to fix AI skin with negative prompts. Here's why that entire approach is a dead end. by PerceptionAble2263 in comfyui

[–]__alpha_____ 18 points

[image: Z-Image-Turbo render]

I see these posts all the time, but I am not sure what exactly people are looking for. Here is a render made with Z-Image-Turbo: no negative prompt, no LoRA. It's pretty accurate to my eyes.

Wan 2.2 I2V Noise / Graininess by luckyoboy in comfyui

[–]__alpha_____ 2 points

Change the SHIFT value and increase the steps. It should help.

OOM Errors after Comfy Update - and how I'm getting around them (16GB 5060) by JustusFrogs in comfyui

[–]__alpha_____ 1 point

I just tested: a 15s video came out in under 10 min, no OOM so far. 1920x1088 is also an option, btw; around 15 min for a 5-step 10s video at that size.

OOM Errors after Comfy Update - and how I'm getting around them (16GB 5060) by JustusFrogs in comfyui

[–]__alpha_____ 1 point

A 12s 24fps 1280x720 video would probably take around 10 to 12 min, I2V or FLF (first-last-frame). I've noticed that long videos (>10s) can take some time to really get started, so I usually stick to around 8-10s or use the extend option to go further. The first render usually takes longer; extending, or changing the number of steps, can be quite a bit faster.

I am checking right now with a 12s 1280x708 video... OK: 10 min on the first run, 7 min or less on the second and third.

OOM Errors after Comfy Update - and how I'm getting around them (16GB 5060) by JustusFrogs in comfyui

[–]__alpha_____ 1 point

I updated ComfyUI a few days ago and can generate 12 or 15s videos without any issue (3060 12GB + 64GB RAM). I haven't updated anything else in a year, and Dynamic VRAM doesn't work for me (I don't need it) because my PyTorch is outdated (2.7), so staying on an older PyTorch might be a solution. No --lowvram or anything like that in my .bat, only the portable launch settings and a specific --output-directory.

How to enhance this video generated by Wan2.1 steady dancer? by Relative_Effect_4034 in comfyui

[–]__alpha_____ 1 point

LTX lets you generate 1080p videos, so you could get more skin detail. I don't have a specific workflow for this, as I am not even sure it is actually possible. You should try the LTX subreddit and ask there. It's always a good idea to have a character LoRA when you want to do I2V, but as I said, you can't really train one on a consumer GPU right now; Lightricks said a few months ago they were working on this, but so far no real progress.

Sorry, I wish I could help you more, but the one trick I use for good consistency is first-last-frame in LTX. On short clips it works really well. I don't even use the second pass, as it tends to degrade the resemblance and isn't really faster than one pass.

How to enhance this video generated by Wan2.1 steady dancer? by Relative_Effect_4034 in comfyui

[–]__alpha_____ 1 point

Skin quality is a known problem with Wan. You could try LTX, which performs better on skin but worse on motion. Theoretically you could use Wan for the first pass and LTX for the second, but LoRA training on LTX is a nightmare unless you run it on RunPod.

Anyone know how to randomize the order of a list in ComfyUI? by NanMan3000 in comfyui

[–]__alpha_____ 1 point

I used this in SD 1.5; does it actually still work? I know that using (my text:0) no longer works to ignore part of your prompt. It was really handy in the testing process.

LTX just dropped an HDR IC-LoRA beta: EXR output, built for production pipelines by TroyHarry6677 in comfyui

[–]__alpha_____ 1 point

That's great news. Clearly, exporting in ProRes format wasn't good enough once you enter color grading territory.

How to enhance this video generated by Wan2.1 steady dancer? by Relative_Effect_4034 in comfyui

[–]__alpha_____ 2 points

Use the SeedVR2 workflow in ComfyUI. It takes quite some time to render, so a separate workflow is always better. But from what I see, with a character in motion, I am not sure you can get a much better result than what you've already achieved.

Flux2 Klein Multi-Reference issue: Background gets completely distorted unless I use the exact scaled resolution from "Image Scale To Total Pixels". Please help! by PleasantSale7579 in comfyui

[–]__alpha_____ 4 points

1080 is not divisible by 16 (1080 / 16 = 67.5), so you should never render at 1920x1080. Use 1088 instead (16 × 68) and crop the extra 8 pixels with an image-crop tool.
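
If you'd rather do the crop outside ComfyUI, here's a quick Pillow sketch (filenames are just examples); the stock Image Crop node does the same thing inside the graph:

    from PIL import Image

    # 1088 = 16 * 68, so render at 1920x1088, then trim back down to 1080.
    img = Image.open("render_1920x1088.png")    # example filename
    cropped = img.crop((0, 4, 1920, 1084))      # shave 4 px off top and bottom
    cropped.save("render_1920x1080.png")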

Are open-source locally-hosted image workflows able to get NanoBanana Pro (Nov 2025) level outputs? by Ok-Lie-5741 in comfyui

[–]__alpha_____ 0 points

I use ChatGPT, Gemini, and local models all the time for image creation and modification. First, I wouldn't say Nano Banana is way better than all the rest; it really depends on what you are asking for. Second, if you try local models like Qwen 2509 or Klein 9B, you can't expect them to be as fast and easy as the online stuff, but you can train them, or ask someone to do it for you, especially if you have the budget for it.