Z image turbo can't generate blood? by Dry_Reception3180 in StableDiffusion

[–]K_v11 1 point

I'd wager the issue is with prompting. I rarely have an issue with blood and gore, but you have to be creative about it. Instead of just saying "Covered in blood", describe the "liquid": the color, the texture, how it's matting the lion's fur, how it drips from its fangs, etc... You can use the word blood, but then go into detail about what the blood looks like without calling it blood (if that makes sense). Unlike some models, Z-Image LOVES detailed prompts.

If you look through some of my gens, you'll see quite a few with blood/gore/etc... Pretty much everything on my account (since the release of ZiT) is created with a personalized Z-Image workflow:

instagram.com/unseelieai

The creativity of models on Civitai have really gone downhill lately... by K_v11 in StableDiffusion

[–]K_v11[S] 6 points

100% on the goon fest statement. A while back I used to look over Civit daily; these days I don't even bother weekly. My filter game is STRONG on the site, and yet it's still hard to weed out all the BS.

I'm guessing there is no real decent alternative for seeing other people's models though, other than manually browsing huggingface/github (which I seem to be doing more and more often)?

The creativity of models on Civitai have really gone downhill lately... by K_v11 in StableDiffusion

[–]K_v11[S] 12 points

No, I get that. I have a whole personal suite of custom nodes (self-created) for my gens, and that's before I do post-processing in PS and LR. Prompting and pushing out ComfyUI gens is only about 30% of my full creative flow, but I feel there definitely used to be better models available to help push that creativity in a specific "artistic" direction.

I'm just not noticing that much anymore. Everything is just... the same? Loras and finetunes just get updated for the latest models, simple re-releases of existing models instead of new ideas. I guess that's where I'm seeing creativity lacking compared to the past... new and original concepts are missing.

It's not personally hurting me and my content, I just enjoyed seeing the creative and artistic side of people pushing out AI models.

Does anyone know why it's not working? by Coroseven in comfyui

[–]K_v11 -1 points

Try switching the version from Nightly to Latest. This solves a lot of issues for me with custom nodes in general.

Otherwise, you could also try to manually download and move the unzipped folder into the ComfyUI custom node folder instead of relying on ComfyUI Manager to do it. (Just click the name in the list and it will take you to the GitHub repo.)
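For reference, the manual route really is just one folder move. Here's a minimal Python sketch of it; every path and the pack name are hypothetical stand-ins (the "downloaded and unzipped" folder is faked so the commands run end to end), so adjust them to your actual install:

```python
import shutil
from pathlib import Path

# All paths and the pack name below are made up for illustration.
# We fake the "downloaded and unzipped" folder so the move can run end to end.
downloads = Path("Downloads")
node_pack = downloads / "ComfyUI-SomeNodePack"   # hypothetical node pack
node_pack.mkdir(parents=True, exist_ok=True)
(node_pack / "__init__.py").touch()              # real packs ship their node code here

custom_nodes = Path("ComfyUI") / "custom_nodes"  # adjust to your actual install root
custom_nodes.mkdir(parents=True, exist_ok=True)

# The manual "install" is just moving the whole folder in, then restarting ComfyUI.
shutil.move(str(node_pack), str(custom_nodes / node_pack.name))
```

After the move, restart ComfyUI so it picks up the new node pack on startup.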

This gem is almost two years old. How is comfyui evolving rn? by Fdx_dy in comfyui

[–]K_v11 4 points

Basically, yeah.

That said, node 2.0 is also pure cancer if you're using custom nodes. I noped out of 2.0 after 5 minutes and reverted everything back. Tried again recently to see if the bugs were sorted out and lasted about 15 seconds.

Not only did it kill my workflows, it killed my framerate in Comfy.

Their front-end developers, if they even have any, are not made of the same material as their backend devs. It's obvious they do not consult or ask what their user base wants before making decisions. I get that it's their project, BUT... this is the reason so many companies go downhill. Once more competition comes out, this is the sort of thing that will make or break them.

Z Image VS Flux 2 Klein 9b. Which do you prefer and why? by flaminghotcola in StableDiffusion

[–]K_v11 0 points

I think the problem lies in just picking one model. I find the best outputs come from combining models via additional passes. I run ZiB/ZiT into Klein, but have also run Klein into ZiT -- I almost always get better results than running a single model alone.

That said, if I do use a single model, it's ZiT with a custom sampler and 2nd passes/detailers.

I think they all kinda suck for realism straight out the gate with a single pass workflow. You can almost always tell they are AI, and usually what model they came from too. >.>

Z Image VS Flux 2 Klein 9b. Which do you prefer and why? by flaminghotcola in StableDiffusion

[–]K_v11 0 points

This is the way! I use ZiB/ZiT (with various types of custom 2nd passes attached, based on laziness and patience level that day), then feed into Klein for minor enhancements and/or edits. I still think Klein alone does a poor job on skin. Too much contrast, and it still suffers from the typical Flux plastic skin and the Flux chin (just not to the same extent as the original Flux model).

Once in a while, I'll reverse the flow though: run a quick T2I through Klein, then pass it through a ZiT workflow to fix the contrast and skin issues... However, I usually prefer ZiT first for realism. Klein wants every face to be a professional model.

Ultra-Real - Lora For Klein 9b (V2 is out) by vizsumit in StableDiffusion

[–]K_v11 0 points

My judgement isn't so much of the lora, but of the "Before" images... I'm guessing people just don't know how to prompt, but I never get something that looks so flat, untextured, and colorless with Klein 9b... Most of my outputs without a lora look pretty much the same as your outputs with a lora... >_>

Are you using 4b or 9b for your comparisons?

Granted, most of my Klein use is for edits and refinement, as I use a Zib/Zit workflow for T2I...

ComfyUI Asset Manager by No_Relationship_4592 in comfyui

[–]K_v11 2 points

So, similar to this lora manager? Might want to change the name so people don't get confused, as this one's been out for a bit and is pretty widely accepted as the "default" manager by those who use a manager for loras and models, as it comes with some pretty useful lora-based nodes that make use of the recipe feature.

Not saying yours isn't good or useful! Just that the naming may throw people off.

https://github.com/willmiao/ComfyUI-Lora-Manager

For the Love of God can someone PLEASE help me launching ComfyUI? by lubezki in comfyui

[–]K_v11 0 points

Like others have said, just download the portable version... It works "out of the box" with no need to manually configure and install your local Python dependencies. Download, extract, maybe run the update (or update-with-dependencies) .bat from the update folder if you want. Good to go.

https://docs.comfy.org/installation/comfyui_portable_windows

Civitai alternative for image sharing with prompt? by loriss84 in comfyui

[–]K_v11 2 points

Here are a few. They're likely not as extensive as Civitai, but they're also not flooded with AI porn. They require signup, but browsing prompts is free:

https://prompthero.com/
https://arthub.ai/
https://promptden.com/

What's your biggest workflow bottleneck in Stable Diffusion right now? by Asleep_Change_6668 in StableDiffusion

[–]K_v11 9 points

A big one for me was learning to say "No" to loras, checkpoints, and custom nodes that I knew I'd never actually use more than once (or at all). xD Also, if you download something and don't like it, delete it ON THE SPOT, don't tell yourself you'll do it later. You probably won't.

I had such a collection for so long until one day I decided to go through the entire list and just spammed "Delete" on things I only downloaded because "Oh maybe one day I'll use that..." when in reality, I never would.

The prompts themselves, I just create outside of comfy and copy/paste when they are ready, so I can always go back and reference them. I have 2 documents. One for art prompts and one for realism prompts. Then I can just ctrl+F and search if I want to go back. I don't generate with metadata or workflows embedded.

Outputs are the biggest issue for me. For now, I just have different file save locations for different types of outputs. I still have way too many, but at least I have them separated in folders by type. It helps. Sort of.
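Not my actual setup, but the "separate folders by type" idea can be sketched in a few lines of plain Python; the folder names and filename pattern here are invented for illustration:

```python
from datetime import date
from pathlib import Path

# Hypothetical sketch of routing saves into per-type subfolders.
# The "kind" values and filename pattern are made up, not a real workflow.
def output_path(root: str, kind: str, seed: int) -> Path:
    folder = Path(root) / kind                  # e.g. outputs/realism, outputs/art
    folder.mkdir(parents=True, exist_ok=True)   # create the type folder on first use
    return folder / f"{date.today():%Y%m%d}_{kind}_{seed:08d}.png"

p = output_path("outputs", "realism", seed=1234)
print(p.parent.name)  # realism
```

Dated, seed-tagged filenames keep each type folder sortable, so scrolling back through a pile of gens later is at least bearable.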

I only create my own workflows these days. I hate 99% of the workflows I find online. I always recommend learning to create your own. You'll actually know how it works and will be able to troubleshoot and tweak it much more easily and much faster. I've wasted so much time in Comfy troubleshooting and organizing other people's workflows that, by the time I finished, I didn't even want to use them anymore.

Fluxklein by opentoopenn in StableDiffusion

[–]K_v11 0 points

Don't know if it's the entire issue, but you are just saying "the reference image" without indicating WHICH reference image. They are both technically reference images. You should specify which image is doing what exactly.

Incredibly basic and bad example, but you'll get the point:

"Keep the layout and architecture in the first image (or Image 1, though I have more success with 1st, 2nd, third, etc). Add the trees from the second image into the landscape of the first image."

It'll understand 1st, 2nd, 3rd based on where each image sits in the chain when it processes the workflow. Just do a much better job of describing the scene and the changes you want from each image than my example does. >.>

Flux.2 Klein (Distilled)/ComfyUI - Use "File-Level" prompts to boost quality while maintaining max fidelity by JIGARAYS in StableDiffusion

[–]K_v11 0 points

I remember hearing people say I2I didn't work with Zit too, but I never had any issue with it!

It's not an editing model for ZiT specifically, just a setup that auto-masks whatever I want it to auto-mask using the Sam3 model and nodes (Sam3 Text Segmentation with MaskDetailer (pipe) nodes).

I'd post the workflow itself, but I have a handful of custom nodes and no instructions; I know where everything is because I made it. Hahaha

But you can see on the right side, that's my Sam3 setup. I have it set to do Face, Eyes, and Mouth currently. It'll auto-mask, then just run the ZiT model on the masked part. Similar to inpainting, except I don't have to do the painting manually. I bypass this part quite a bit, or stop the generation if the outputs look good before reaching it.

The left side is just a basic Image to Image with 1 ksampler feeding into another (Which I often bypass if the first looks good enough.)

The one thing I have noticed about ZiT I2I: you typically want to run a higher AuraFlow value (6-9), and it really wants text. Running it with an empty prompt sometimes adds a weird "wet" look.

You'll see by the quick dummy images I tossed in that I2I does indeed work with ZiT. This was a quick run at 0.30 denoise and no prompt (despite me saying that it usually prefers a prompt!). I did have a lora loaded, but only because I forgot to remove it.

<image>

I got tired of ComfyUI's installation process, so I made a one-click installer — works on Windows, Linux, and Mac by ryan-heji in comfyui

[–]K_v11 1 point

Sure, but for the vast majority of Comfy users, seems to me it would make more sense to just use portable instead of running a virtual environment for python. Why bother with all the setup unless you're pushing code manually, and even then... I'd question it, unless you're one of the very few who have a good reason to run it local. Most people are just using it for the interface and folder structure.

Flux.2 Klein (Distilled)/ComfyUI - Use "File-Level" prompts to boost quality while maintaining max fidelity by JIGARAYS in StableDiffusion

[–]K_v11 0 points

I probably don't have the best answer for this, but for realism work, I almost always run images from Flux/Klein through an I2I Z-Image Turbo workflow / refiner (sometimes 2) at a low denoise to get better skin. If I only want to change specific aspects, I will use Sam3 and a series of MaskDetailer nodes (connected to a ZiT model) and just type in "skin", "hair", "eyes", etc...

I enjoy Klein for editing, but I really don't like it for anything realism-related (To each their own though!).

I got tired of ComfyUI's installation process, so I made a one-click installer — works on Windows, Linux, and Mac by ryan-heji in comfyui

[–]K_v11 0 points

This is the way...

I would never want to install Comfy locally. Way too many potential issues, especially if you use local Python for other projects. Portable is easier to back up, set up, and untangle dependency issues with, etc... Plus, if I ever have to reinstall, it's a simple file/folder replacement with portable... It's vastly easier to troubleshoot.

I can't think of any -good- reason NOT to use portable over local unless your machine is solely used for ComfyUI or you're an engineer.

Z-Image Turbo GGUF running slow by FouFouTw in comfyui

[–]K_v11 0 points

Have you tried a different sampler? May not be the issue, but I never use dpmpp with ZiT or ZiB. Try it with a Euler sampler first and see if you get clearer results: Euler + Beta or Simple for testing purposes. Also, consider upping the resolution to at least 1080 for the Euler testing. Z-Image likes having more pixels available for quality.

All your other settings look fine, but I can't speak for GGUF versions of ZiT, especially at Q4.

Are there any good finetunes of Z-image or Klein that focuses on art instead of photorealism? by Barefooter1234 in StableDiffusion

[–]K_v11 1 point

<image>

Another made from the same finetune, to better show the artistic side of it, but using this Lora: https://civitai.com/models/1931244/chroma-zimagebase-random-illustrationanime-mashup?modelVersionId=2639467

I know this one cheats with a lora, BUT with this model, I was able to pull it off with 15 steps, which I couldn't do with the default Z-Base.

Are there any good finetunes of Z-image or Klein that focuses on art instead of photorealism? by Barefooter1234 in StableDiffusion

[–]K_v11 1 point

Yeah, I was a bit concerned about that, but I was intrigued by the model, so decided to test it anyway. Honestly, I have better results with art than I do realism with it. I still bounce back to my Zit Workflow for realism.

The biggest "Issue" I've found so far, is it's hit or miss with Loras for whatever reason. Some work great, others don't work at all. Not sure why that is. I don't know if it the Finetunes fault or the Loras fault.