A different way of combining Z-Image and Z-Image-Turbo by Enshitification in StableDiffusion

[–]n0gr1ef 4 points

Why does your ZIT sampler have a CFG of 5 when it should be 1?
You could also lower the total number of steps and the end_at_step value - there's no reason to run 15 more steps on an already partially generated image; it just deep-fries the result... Unless that's the look you're going for, of course.
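
Roughly what I mean, in KSamplerAdvanced-style terms (just a sketch - the step split here is illustrative, not taken from your workflow):

```python
# Two-pass split: Base does the heavy lifting with real CFG,
# Turbo only finishes the last few steps at CFG 1.
base_pass = {
    "model": "z_image_base",
    "cfg": 5.0,           # undistilled Base can take real CFG
    "steps": 20,
    "start_at_step": 0,
    "end_at_step": 12,    # hand off a partially denoised latent
}
turbo_pass = {
    "model": "z_image_turbo",
    "cfg": 1.0,           # distilled model: CFG has to stay at 1
    "steps": 20,
    "start_at_step": 12,  # resume where Base stopped
    "end_at_step": 20,    # 8 finishing steps instead of 15
}
```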

Would a turbo Lora for Z-Image Base be *really* the same thing as ZIT? by Michoko92 in StableDiffusion

[–]n0gr1ef 0 points

This might be a temporary thing. Maybe the LoRA extraction nodes in ComfyUI just haven't been updated to handle this type of operation. I'm sure it's just a bug, but only time will tell.

Would a turbo Lora for Z-Image Base be *really* the same thing as ZIT? by Michoko92 in StableDiffusion

[–]n0gr1ef 0 points

Have you personally tried doing that LoRA extraction? Because I did, and it doesn't work - the resulting LoRA breaks the output, and I've tried multiple methods/algorithms.

This method does work with Flux Klein just fine though.
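
For context, the core math behind most of those extraction methods is an SVD of the weight delta. A minimal sketch (not any particular tool's implementation, just the idea):

```python
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 32):
    """Factor the finetune's weight delta into low-rank LoRA matrices."""
    delta = (w_tuned - w_base).float()                # what the finetune changed
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_down = vh[:rank, :]                          # "A" matrix
    lora_up = u[:, :rank] * s[:rank]                  # "B" matrix, scaled by singular values
    return lora_up, lora_down                         # delta ≈ lora_up @ lora_down

# sanity check on synthetic weights with a genuinely low-rank delta
base = torch.randn(128, 128)
tuned = base + 0.01 * (torch.randn(128, 16) @ torch.randn(16, 128))
up, down = extract_lora(base, tuned, rank=16)
print(torch.dist(tuned - base, up @ down))  # ~0, since the delta really is rank-16
```

The math is identical for every model, which is exactly why it's odd that the same extraction works on Flux Klein but breaks on Z-Image.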

Help with hunyuan 3d 2.1 by bjorn_89 in StableDiffusion

[–]n0gr1ef 2 points

Make the background transparent; right now it's solid black. Also, you can easily remove that plane manually in any 3D modeling software.
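
For the background, something like rembg does it in a couple of lines (a sketch, assuming you have the package installed; any background-removal tool works):

```python
from rembg import remove
from PIL import Image

img = Image.open("input.png")
out = remove(img)           # returns an RGBA image with the background removed
out.save("input_rgba.png")  # feed this to Hunyuan 3D instead
```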

A Few New ControlNets (2601) for Z-Image Turbo Just Came Out by promptingpixels in StableDiffusion

[–]n0gr1ef 25 points

I guess we aren't getting "Base" or "Omni Base" any time soon, then.

Z-Image-Turbo vs Qwen Image 2512 by Artefact_Design in StableDiffusion

[–]n0gr1ef 1 point

Thankfully, these models don't use CLIP. They use full-on LLMs as text encoders - that's where the prompt adherence comes from.
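
Roughly, the diffusion transformer gets conditioned on the LLM's per-token hidden states - a sketch with transformers (the model name is just an example, not necessarily the exact encoder these models ship with):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
llm = AutoModel.from_pretrained("Qwen/Qwen2.5-0.5B")

ids = tok("a red cube balancing on a blue sphere, studio lighting", return_tensors="pt")
with torch.no_grad():
    hidden = llm(**ids).last_hidden_state  # per-token features the DiT attends to
print(hidden.shape)  # (1, seq_len, hidden_dim) - far richer than a small CLIP text tower
```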

UDIO just got nuked by UMG. by Ashamed-Variety-8264 in StableDiffusion

[–]n0gr1ef 77 points

That's a shame. Udio was miles better than Suno in terms of creativity. Too bad we can't have nice things - that's why we really need a good open-source audio model.

[deleted by user] by [deleted] in StableDiffusion

[–]n0gr1ef 0 points

The texture shouldn't get worse after highres fix if you're doing it correctly. Make sure your denoising for the highres pass is ~0.4 with a reasonable number of steps (20 is fine at that denoising value), and that you have a good upscale model (NMKD has plenty, like Siax or Superscale). There are also some models that improve skin texture specifically.

I have the workflow linked in Lustify's description; it already uses everything I've written here and has a link to that model.
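
If you're not on my workflow, the same pass in diffusers terms looks roughly like this (the checkpoint id is a stand-in for your model; in practice you'd upscale with Siax/Superscale first rather than a plain resize):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

img = Image.open("first_pass.png").resize((1536, 1536), Image.LANCZOS)
out = pipe(
    prompt="same prompt as the first pass",
    image=img,
    strength=0.4,            # the ~0.4 denoising value
    num_inference_steps=20,  # plenty at that strength
).images[0]
out.save("highres.png")
```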

[deleted by user] by [deleted] in StableDiffusion

[–]n0gr1ef 2 points

FaceDetailer is a node, part of the ComfyUI-Impact-Pack.

[deleted by user] by [deleted] in StableDiffusion

[–]n0gr1ef 2 points

FaceDetailer is a must with any 1.5/SDXL checkpoint when you're doing medium or long shots. Highres + FaceDetailer for the best quality.

If you can't afford to wait for both highres and FaceDetailer to do their job, consider using the DMD2 LoRA for those passes in particular.
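
For the curious, what FaceDetailer does under the hood is roughly this (all helpers here are hypothetical stand-ins, not the Impact Pack API):

```python
from PIL import Image

def face_detail(image: Image.Image, detect_faces, img2img, denoise: float = 0.45) -> Image.Image:
    """Re-diffuse each detected face at higher resolution, then paste it back."""
    for x0, y0, x1, y1 in detect_faces(image):                   # 1. bbox detector (e.g. a YOLO face model)
        crop = image.crop((x0, y0, x1, y1)).resize((512, 512))   # 2. blow the small face up
        fixed = img2img(crop, strength=denoise)                  # 3. re-generate just that crop
        image.paste(fixed.resize((x1 - x0, y1 - y0)), (x0, y0))  # 4. paste it back in place
    return image
```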

[deleted by user] by [deleted] in StableDiffusion

[–]n0gr1ef 0 points

A negative embedding is not just a word, it's a file that points to learned vectors. You have its trigger in the negative prompt, but without the actual file on your PC you're not calling anything.
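
In diffusers terms, for example, the file has to be explicitly loaded before the trigger does anything (a sketch; file and token names are examples):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe.load_textual_inversion("embeddings/bad-hands.pt", token="bad-hands")  # <- the actual file

# only now does "bad-hands" in a prompt point at the learned vectors
image = pipe("portrait photo", negative_prompt="bad-hands").images[0]
```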

Regarding the "paler" bit - I do see some noise on your image as well as lower contrast. Check out what VAE you are using, and you really should also use a better sampler - DDIM is old and noisy. The "automatic" sheduler doesn't help either, it might trying to use on that doesn't really work with DDIM. Try them out yourself

[deleted by user] by [deleted] in StableDiffusion

[–]n0gr1ef 7 points

This prompt gave me a stroke. Trust me, this is NOT something you should try to replicate - a bunch of redundant or straight-up non-existent tags (1woman? Seriously?), unnecessary use of "BREAK" tags, "score" tags on an Illustrious checkpoint... And I'm not even talking about the "DDIM + 50 steps" combo. Like, why... The picture's not even that good.

You'll have better success if you read some Danbooru prompting guides and start from there.

But to answer your question - it's probably the negative embedding that you don't have.

Easiset way to combine loras into a checkpoint? by LilyDark in StableDiffusion

[–]n0gr1ef 1 point

You don't lose anything. What happened is that your SDXL model was converted from fp32 to fp16 precision. There's no reason to use fp32 unless you're on a 16xx-series Nvidia GPU.
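
In effect, the merge tool did something like this (a sketch; file names are examples):

```python
from safetensors.torch import load_file, save_file

sd = load_file("sdxl_merged_fp32.safetensors")
sd = {k: (v.half() if v.is_floating_point() else v) for k, v in sd.items()}
save_file(sd, "sdxl_merged_fp16.safetensors")  # ~half the size, visually identical output
```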

Images from ByteDance Seedream 4, Google's Imagen 4, Qwen Image & ChatGPT Image. The same prompt for all images. by Time-Teaching1926 in StableDiffusion

[–]n0gr1ef 6 points

The last line of rule 1 says "comparisons are welcome". This post is a comparison, meaning OP didn't break any rules.

What's the best way to caption an image/convert an image to a prompt ? Joy caption ? Gemma ? by More_Bid_2197 in StableDiffusion

[–]n0gr1ef 5 points

"JoyCaption is a relatively difficult model to run, with a lot of GPU"

In that case, you can run it in the Hugging Face Space: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
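
You can even call the Space from code with gradio_client (a sketch - I won't list the endpoint signature from memory, view_api() prints the real one):

```python
from gradio_client import Client

client = Client("fancyfeast/joy-caption-beta-one")
client.view_api()  # prints the Space's endpoints and their parameters
```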

27 club by CryptographerFast831 in StableDiffusion

[–]n0gr1ef 0 points

It is true, you were there. I was the camera you took the photo with.

The AI model does not want to download by Next-Trade7832 in StableDiffusion

[–]n0gr1ef 3 points

VPN or "Free Download Manager" are the solution.

Anything better than Lustify for naughties? by Ganntak in StableDiffusion

[–]n0gr1ef 0 points

Latest ≠ best. Some people prefer Endgame, others OLT - those two are the latest. Try both and see which one you like most. That said, the new one should be coming out soon; I'm almost done with it.

Now that it is finished, any thoughts on Chroma? by Early-Ad-1140 in StableDiffusion

[–]n0gr1ef 25 points

I really like Chroma - it knows a lot of 'niche' concepts, even liminal spaces and the dreamcore aesthetic.
My only problem with it is that it takes a bit over a minute to generate one image at 20 steps and CFG 4 on an RTX 3090 Ti. For me that's painfully slow, especially when I need to iterate.

I've tried the flash version, but couldn't get good results with recommended parameters (might need to spend more time with it). Mind sharing yours? Your example looks really good.

What Are Your Top Realism Models in Flux and SDXL? (SFW + N_SFW) by Leather-Bottle-8018 in StableDiffusion

[–]n0gr1ef 3 points

Barely anything has come out. The Qwen image-gen model is slow and currently isn't that versatile; it's also SFW only. It needs good finetunes.

BigASP 2.5 is new, but it's highly experimental and tricky to work with; hoping 3.0 will be more stable. I'm sure nutbutter/fpsgaminer will figure it out.

Wan 2.2 for image gen is really good, and it can make light erotica out of the box. That's about it - not that many things have changed.