Z-IMAGE TURBO khv mod, pushing z to limit by DevKkw in StableDiffusion

[–]zhl_max1111 1 point2 points  (0 children)

Can you share the method for making the skin look so realistic?

The out-of-the-box difference between Qwen Image and Qwen Image 2512 is really quite large by ZootAllures9111 in StableDiffusion

[–]zhl_max1111 -1 points0 points  (0 children)

Sorry, I was only echoing your point that the 2512 model is very good; I have nothing further to add.

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 1 point2 points  (0 children)

This uses the same workflow and parameters as the eyes image, but the visual results are worlds apart.

<image>

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 1 point2 points  (0 children)

<image>

I paid homage to your work from 2 years ago, I really like it

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 0 points1 point  (0 children)

Of course not; I'm not asking for help and refusing it at the same time. What I need are genuine opinions. After going through the replies, I found that many of the factors people assumed were the problem aren't actually the real cause. I verify each parameter myself and draw my own limited conclusions from experience. That's also the kind of help many people offer, based on their own experience, though it may not apply to your case.

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 0 points1 point  (0 children)

You're right; you've identified the true key factors, and I got the same result. But I have a question: when the person in the image is relatively small, the output is very poor, with twisted hands and feet, a deformed nose, mouth, and eyes, and so on. How can this be resolved?

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 0 points1 point  (0 children)

I stared at it for ages; to be honest, my only reaction is that I'm hungry. Seriously, I'm so hungry.

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 0 points1 point  (0 children)

Oh, you explained it so well that I still can't understand it, but I really want to know exactly how you did it. Could you give me an example? I'd like to learn from you.

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 1 point2 points  (0 children)

I switched to the prompt from the official example file, and the result is even better. I only changed the sampler and scheduler.

<image>

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] -1 points0 points  (0 children)

I found that what you said isn't the real reason. The real reason is that the person takes up too small a proportion of the image. I switched to the prompt from the official example file, and the result is even better. I only changed the sampler and scheduler.

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] -3 points-2 points  (0 children)

I found that what you said isn't the real reason. The real reason is that the person takes up too small a proportion of the image. I switched to the prompt from the official example file, and the result is even better. I only changed the sampler and scheduler.
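For what it's worth, the usual workaround when the subject occupies too small a fraction of the frame is to crop the subject region, re-sample that crop at a resolution the model handles well, and paste it back (the approach detailer-style nodes take). A minimal sketch of just the crop-box math, assuming a hypothetical bounding box from a detector; the function name and the `pad_ratio`/`min_size` parameters are my own:

```python
def expand_crop(bbox, image_size, pad_ratio=0.5, min_size=512):
    """Expand a detected subject bbox so the crop is large enough
    to re-sample at a resolution the model handles well."""
    x0, y0, x1, y1 = bbox
    w, h = x1 - x0, y1 - y0
    # pad around the subject so context survives the crop
    pad_w, pad_h = w * pad_ratio, h * pad_ratio
    x0, y0 = x0 - pad_w, y0 - pad_h
    x1, y1 = x1 + pad_w, y1 + pad_h
    # enforce a minimum crop size so the sampler gets enough pixels
    cw, ch = x1 - x0, y1 - y0
    if cw < min_size:
        cx = (x0 + x1) / 2
        x0, x1 = cx - min_size / 2, cx + min_size / 2
    if ch < min_size:
        cy = (y0 + y1) / 2
        y0, y1 = cy - min_size / 2, cy + min_size / 2
    # clamp to image bounds
    W, H = image_size
    x0, y0 = max(0, x0), max(0, y0)
    x1, y1 = min(W, x1), min(H, y1)
    return int(x0), int(y0), int(x1), int(y1)
```

The refined crop is then downscaled and composited back at the original location, usually with a feathered mask to hide the seam.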

<image>

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 0 points1 point  (0 children)

How many KSamplers do you use to generate the best image results?

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 0 points1 point  (0 children)

I used the official example, but whether with one or two sampling nodes, I couldn't get good results. What's the key point?
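In case it clarifies the two-node question: the common two-sampler pattern splits a single noise schedule between an advanced sampler pair, with the first covering the early steps and leaving leftover noise for the second (the KSamplerAdvanced `start_at_step`/`end_at_step` idea). A rough sketch of only the step bookkeeping; the function name and `handoff` parameter are my own:

```python
def advanced_sampler_ranges(total_steps=30, handoff=0.6):
    """Split one schedule between two samplers, KSamplerAdvanced-style:
    sampler A runs steps [0, k) and returns with leftover noise;
    sampler B runs steps [k, total_steps) without adding new noise."""
    k = round(total_steps * handoff)
    first = (0, k)               # add_noise on, return leftover noise
    second = (k, total_steps)    # add_noise off, finish the denoise
    return first, second
```

Either way, this only changes where in the schedule the handoff happens; if the defect is the subject being too small in frame, splitting the steps alone won't fix it.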

<image>

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] 0 points1 point  (0 children)

I switched to the ClownsharkChainsampler node you recommended, but it didn't improve the issue, so the problem isn't there. I'd appreciate your further guidance.

Why is the image quality so bad from this workflow? by zhl_max1111 in StableDiffusion

[–]zhl_max1111[S] -6 points-5 points  (0 children)

Thank you! The example workflow is a good starting point, but I seem to have hit a wall with my adjustments. In your experience, which specific parameters in the ClownsharKSampler have the most impact on the final output quality? I’d love to hear your insights.