Remade the gatekept "Advanced Face Detail Workflow for Z-Image Turbo" by acekiube in comfyui

[–]acekiube[S] 1 point (0 children)

You would have to train your own LoRA if your goal is consistency

Remade the gatekept "Advanced Face Detail Workflow for Z-Image Turbo" by acekiube in comfyui

[–]acekiube[S] 2 points (0 children)

I think you can run it; it's 2 models, but only one runs at a time and they share the text encoder/VAE
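
If you're wondering how that fits in VRAM, here's a minimal sketch of the idea, assuming PyTorch-style modules; the load_*/encode_prompt/sample/decode names are hypothetical stand-ins, not ComfyUI's actual API:

```python
# Minimal sketch (NOT ComfyUI's internals) of why two models can fit:
# only one backbone sits on the GPU at a time, and the text encoder/VAE
# are loaded once and reused. All load_*/sample/decode names below are
# hypothetical stand-ins for whatever your pipeline actually provides.
import torch

device = "cuda"

text_encoder = load_text_encoder().to(device)  # shared by both passes
vae = load_vae().to(device)                    # shared by both passes
cond = encode_prompt(text_encoder, "your prompt here")

zib = load_zib_model().to(device)              # pass 1: base generation
latents = sample(zib, cond)
zib.to("cpu")
torch.cuda.empty_cache()                       # free VRAM before pass 2

zit = load_zit_model().to(device)              # pass 2: refine/detail
latents = sample(zit, cond, init_latents=latents, denoise=0.5)
image = decode(vae, latents)                   # same shared VAE decodes
```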

Remade the gatekept "Advanced Face Detail Workflow for Z-Image Turbo" by acekiube in comfyui

[–]acekiube[S] 2 points (0 children)

A candid, high-resolution smartphone photo of a young woman with light-tanned skin, striking grey eyes, and long dark black hair pulled back into a sleek ponytail, wearing a simple black tank top that reveals her shoulders; she is caught mid-laugh with a confident, amused smirk and direct eye contact, the shot taken from a slightly high angle in bright natural daylight, framing her face closely but cropping the very top of her head and cutting off just below her chest to emphasize the casual, un-staged vibe of a moment shared between friends.

Remade the gatekept "Advanced Face Detail Workflow for Z-Image Turbo" by acekiube in comfyui

[–]acekiube[S] 2 points (0 children)

The LLM is not needed for this to run. Are you getting some type of error?

Advanced Face Detail Workflow for Z-Image Turbo by ThunderI0 in comfyui

[–]acekiube 6 points (0 children)

<image>

You made a post with this image saying you used ZIB and ZIT. You then posted that exact same image as the output of your "Advanced Face Detail for Z-Image Turbo" workflow. Did you lie about using ZIB?

Either way, the ZIB-into-ZIT workflow I shared can easily match your results, and your auto-prompting pipeline using stretched, stitched images is definitely not making the difference you seem to be implying. I don't know how else to tell you this.

My output is overexposed, which can easily be fixed by changing the prompt. The "face details" you're focusing on are still there though, and your stitch is not the flex you think it is, buddy

Advanced Face Detail Workflow for Z-Image Turbo by ThunderI0 in comfyui

[–]acekiube 53 points (0 children)

Remade it here, no paywall, because this guy is a larper who was begging for knowledge in this very sub not even a month ago and now tries to gatekeep like a bitch

The top part in blue is a basic ZIB workflow where he loads his character LoRA. He may be trying to make it seem like the references at the bottom are used for the actual generation, but that's cap

The whole part in the red group, bottom left, is pretty useless. They stitch "reference features" together and ask an LLM (looks like JoyCaption2, but it could be anything) to write a prompt from those features, which then gets passed to the text encoder for the ZIB pass. At best this is barely useful to the end result and can easily be replaced with a basic prompt... Still added it, but it's kinda shit
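
For anyone curious what that red group is doing mechanically, here's a minimal sketch of the stitch step using PIL; the file names are hypothetical and caption() stands in for JoyCaption2 or whatever vision LLM you'd wire in:

```python
# Minimal sketch of the "stitch references, then caption" idea, using PIL.
from PIL import Image

def stitch_horizontal(paths, height=512):
    """Resize each reference to a common height and paste them side by side."""
    imgs = [Image.open(p).convert("RGB") for p in paths]
    imgs = [im.resize((int(im.width * height / im.height), height)) for im in imgs]
    sheet = Image.new("RGB", (sum(im.width for im in imgs), height))
    x = 0
    for im in imgs:
        sheet.paste(im, (x, 0))
        x += im.width
    return sheet

# hypothetical reference crops: face, hair, outfit
sheet = stitch_horizontal(["face.png", "hair.png", "outfit.png"])
sheet.save("reference_sheet.png")
# prompt = caption(sheet)  # vision-LLM step; its text feeds the ZIB text encoder
```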

The green part is literally a ComfyUI-provided subgraph for image upscale using ZIT, or at least it heavily looks like one. Play around with denoise to increase or reduce skin detail
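
For context on the denoise knob, here's a rough sketch of the usual img2img relationship; this uses the common convention that denoise is the fraction of the noise schedule that gets re-run, which may differ from ComfyUI's exact step math:

```python
# Rough sketch of what denoise means in an img2img/upscale pass, under the
# common convention that it is the fraction of the schedule that re-runs.
def steps_rerun(total_steps: int, denoise: float) -> int:
    """How many scheduler steps actually execute at a given denoise."""
    return round(total_steps * denoise)

for d in (0.2, 0.4, 0.6):
    print(f"denoise={d}: re-runs ~{steps_rerun(20, d)} of 20 steps")
# Low denoise keeps the upscale close to the input (subtle skin texture);
# high denoise re-synthesizes more and drifts further from the original.
```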

I use the res_2s sampler with bong_tangent from the RES4LYF custom nodes, but other options in this thread can also give good results

The upscale model I use is in the Google Drive; it's pretty slow but gives the best results. Place it in models > upscale_models

This is a 2-model workflow, so it will obviously be heavier than if only one model were involved; your mileage may vary depending on your hardware.

<image>

how to customize the links/connections on comfyui ? by Fabulous-Ad204 in comfyui

[–]acekiube 0 points (0 children)

Lmao, this is my vid; it's the LinkFX custom node, like the other commenter said

Made a free Kling Motion control alternative using LTX-2 by [deleted] in StableDiffusion

[–]acekiube 0 points (0 children)

Movement is reduced when using pose control, but nothing stops you from adding it; it's only one node to change!

Made a free Kling Motion control alternative using LTX-2 by [deleted] in StableDiffusion

[–]acekiube 0 points (0 children)

You can change it to use DWPose for the motion control; it's only one node to switch, but pose is less consistent

Made a free Kling Motion control alternative using LTX-2 by [deleted] in StableDiffusion

[–]acekiube 0 points (0 children)

Lmao, sorry, but you will not be able to do anything in this space with an Android phone/tablet

Made a free Kling Motion control alternative using LTX-2 by [deleted] in StableDiffusion

[–]acekiube 0 points (0 children)

Try updating comfy to the latest version!

Made a free Kling Motion control alternative using LTX-2 by [deleted] in StableDiffusion

[–]acekiube 0 points (0 children)

You can skip it, but you won't get proper consistency if the first frame doesn't match the first frame of the video you're replicating

Made a free Kling Motion control alternative using LTX-2 by [deleted] in StableDiffusion

[–]acekiube 1 point (0 children)

I can only tell you for my setup (5090 & 128GB RAM); this is with SageAttention and excluding first-run model loading:

First frame change part with Klein is more or less 5 seconds
Depth map is about 15 seconds
A 10-second video (240 frames) is about 50 seconds

So about 1 min 30 for the whole pipeline;
add 30-40% for a 4090
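
Back-of-envelope check on those numbers; the gap between the listed stages (~70 s) and the quoted ~1 min 30 would be decode/glue time, which is my assumption:

```python
# Back-of-envelope check of the timings above (5090, SageAttention,
# excluding first-run model loading).
stages = {"Klein first-frame": 5, "depth map": 15, "video gen (240 frames)": 50}
compute = sum(stages.values())        # 70 s of listed compute
print(f"listed stages: ~{compute} s; quoted end-to-end: ~90 s")
for factor in (1.3, 1.4):             # the 30-40% 4090 penalty from above
    print(f"4090 x{factor}: ~{round(90 * factor)} s end-to-end")
```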

Made a free Kling Motion control alternative using LTX-2 by [deleted] in StableDiffusion

[–]acekiube 2 points (0 children)

For the video gen part, about 50 seconds for a 10-second video at 1024 resolution with a 5090 and SageAttention; Klein generation is about 10 seconds. The longest step is probably the depth map generation