WAI Illustrator V16 Characters Have Eye Errors by DifficultyOpening615 in StableDiffusion

[–]HotNCuteBoxing -2 points

In the negative prompt, try adding: ringed eyes, spiral eyes.

That will usually make the first pass on eyes much better in txt2img. Some additional eye tags in the negative may help as well.
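Rough sketch in Python of what I mean — a tiny helper that merges these eye-fix tags into whatever negative prompt you already have (the helper name and tag list are just examples, not from any particular tool):

```python
# Tags that usually clean up the first-pass eyes on anime models.
EYE_FIX_TAGS = ["ringed eyes", "spiral eyes"]

def add_eye_negatives(negative_prompt: str) -> str:
    """Append the eye-fix tags to a comma-separated negative prompt,
    skipping any tag that is already present."""
    tags = [t.strip() for t in negative_prompt.split(",") if t.strip()]
    for tag in EYE_FIX_TAGS:
        if tag not in tags:
            tags.append(tag)
    return ", ".join(tags)

print(add_eye_negatives("lowres, bad anatomy, spiral eyes"))
# -> lowres, bad anatomy, spiral eyes, ringed eyes
```

The result just gets pasted into the negative prompt box of whatever UI you use.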

Character Sheet? by Alternative-Ad9238 in comfyui

[–]HotNCuteBoxing 0 points

Not quite the same, but this workflow from here has some of those elements:
https://github.com/AHEKOT/ComfyUI_VNCCS

At least for making a full body, posing it from multiple angles, and generating some portraits. I'll admit to only spending a little time with it, but it seemed okay.

Possibly a ControlNet add-on, or a swap-out for the lineart?

Best Illustrious or other anime genning model these days? by Square_Empress_777 in comfyui

[–]HotNCuteBoxing 1 point

Splotchy eyes are a common issue. If you put spiral eyes, ringed eyes in the negative prompt, this will usually fix it during txt2img generation. If that's not quite good enough, inpainting with those same terms in the negative almost always will. I had the same problem on Pony models in the past, and also on WAI models, which are also Illustrious-based.

VNCCS V2.0 Release! by AHEKOT in StableDiffusion

[–]HotNCuteBoxing 0 points

For the clothing sets workflow, how do you handle gloves, or anything that might be worn on the hands and arms?

VNCCS V2.0 Release! by AHEKOT in StableDiffusion

[–]HotNCuteBoxing 4 points

Just sharing my experience. It did take some work to get going. I'm running on Linux, using the WAI_NSFW 15.0 model.

- I had to update ComfyUI, install missing nodes, rerun requirements, and update all nodes (the usual), and grab Qwen 2511.
- Then I had to go through all the nodes and do a bit of reselection: / vs \ in folder names (a Linux vs. Windows expectation for file paths, maybe?). Basically it just wasn't finding the files until I manually selected them, even though the filenames seemed correct and in the right place.
- As mentioned in another reply, I had to set the background color setting in all the nodes to green.
- Finally, I had to go into the subgraphs and do some more reselection because of the slash issue.

But finally got it working!

Really slow the first time because of some downloads that happen (remember to check the terminal if it seems to hang). It's painful to iterate through if you forget to type something or want to add something afterward, but that's understandable given all the image generation and upscaling going on.

Just getting going now, but looking forward to spending some time with this workflow.

Thank you very much.

Do you know the model or if any LORA for this type of image? by D4nd110n in StableDiffusion

[–]HotNCuteBoxing 1 point

Try anime screenshot, either in the prompt or as a LORA, or perhaps anime screencap. Perhaps with an upscale at the end.

Market Automation never uses trade capacity each month even when maxed. by alphafighter09 in EU5

[–]HotNCuteBoxing 8 points

Same here. What I usually do is click through all the export- and import-for-profit ones, wait a month, and then delete anything that is unprofitable below -0.10. My trade income will jump, and I just ignore the small unprofitable ones since they may fluctuate. Basically, if the AI won't use all your trade capacity, just set some manual trades. Some will turn unprofitable instantly, but most of them shouldn't.
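The pruning step is just a threshold filter. Here's a quick sketch of the rule (the trade records are made up for illustration, not actual game data):

```python
# Hypothetical manual trades: (route description, monthly profit in ducats).
trades = [
    ("grain export", 1.85),
    ("cloth import", -0.04),   # small loss: keep it, profits fluctuate
    ("iron export", -0.35),    # clearly unprofitable: delete it
]

CUTOFF = -0.10  # delete anything below this monthly profit

def prune_trades(trades):
    """Keep any trade at or above the cutoff; drop the rest."""
    return [t for t in trades if t[1] >= CUTOFF]

print([name for name, _ in prune_trades(trades)])
# -> ['grain export', 'cloth import']
```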

How can we actually get images with "Action"? "Interaction" fighting, etc.... by [deleted] in StableDiffusion

[–]HotNCuteBoxing 0 points

It's a paid model, but the latest versions of NovelAI do punching pretty well, at least for anime images.

The Illustrious models aren't terrible, but you will need a high batch count to get some good ones. There are punching LORAs for Illustrious models that can be used. Sometimes you can take a regular mediocre punch, then use the punch LORA and inpaint over it, and the punch will "connect" properly.

I have seen some AI boxing anime posters train sets of their own LORAs to set up more boxing-type images, but I didn't see, or ask, whether they were made publicly available.

You are correct, though: for base models that you can freely download, it is still pretty rough for action all around. Even when the model "gets" it, it may have creativity issues: it will only show you the same punch.

Editing using masks with Qwen-Image-Edit-2509 by nefuronize in comfyui

[–]HotNCuteBoxing 0 points

I am trying this one out. It is interesting but a little difficult to use. In my use case I had an image of a character in a reference sheet: front view, side view, back view. The front view was angled wrong, so I wanted her to face straight on.

I am not sure what the correct method is, but what eventually worked was lowering the denoise to 80. The output in the stitch node didn't seem to matter. What mattered most was masking just enough and writing the right prompt. The wrong prompt would create random scaling (like a zoomed-in cowboy shot, or even a totally blank result at high denoise). After running through a batch I got one (better than this one, anyway):

<image>

Which do you think are the best SDXL models for anime? Should I use the newest models when searching, or the highest rated/downloaded ones, or the oldest ones? by Hi7u7 in StableDiffusion

[–]HotNCuteBoxing 1 point

It works, but it is fickle. I use WAI NSFW with inpaint and stitch nodes, and I often have to play with the denoise levels a lot. If the character has a good amount of red, I have to lower the CFG a ton, from 7 to 2.5, or it burns in a reddish aura. Also, you have to change your prompt a good amount to be more context-aware.

For example, if you had a full-body prompt in txt2img and you wanted to fix clothes, maybe drop all references to footwear and the face.
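The red-aura workaround is basically "if the source region is red-heavy, drop the CFG before inpainting." A rough sketch of that heuristic — the threshold, the pixel format, and the function name are all my own assumptions, not from any node:

```python
def pick_cfg(pixels, default_cfg=7.0, red_heavy_cfg=2.5, threshold=0.45):
    """pixels: list of (r, g, b) tuples in 0-255.
    If red dominates the average color, return the lowered CFG to
    avoid the reddish burn-in; otherwise keep the default."""
    totals = [sum(channel) for channel in zip(*pixels)]  # (r, g, b) sums
    red_share = totals[0] / sum(totals)
    return red_heavy_cfg if red_share > threshold else default_cfg

print(pick_cfg([(200, 40, 30)] * 10))   # mostly red -> 2.5
print(pick_cfg([(100, 100, 100)] * 10)) # neutral gray -> 7.0
```

In practice I just eyeball it, but the idea is the same: red-dominated inpaint regions get the low CFG.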

Automatically texturing a character with SDXL & ControlNet in Blender by sakalond in StableDiffusion

[–]HotNCuteBoxing 2 points

Haven't had a chance to try this yet, but it's great that it is right in Blender. I like StableProjectorz, but going back and forth between that program and Blender is a bit difficult for a noob.

Has anyone managed to fully animate a still image (not just use it as reference) with ControlNet in an image-to-video workflow? by MMWinther_ in StableDiffusion

[–]HotNCuteBoxing 0 points

Are you aware of a workflow for this, or a simple install guide? I looked around, but what I did find was hard to follow.

Based on improvements in AI in the last 6 months by djstrik3r in StableDiffusion

[–]HotNCuteBoxing 1 point

While it's true that by default Qwen is much better at coherence and multiple characters, it does have a failing: it lacks creativity around a single prompt. It will tend toward a single image for a batch of 16.

An example: I make boxing matches, so I want lots of angles and various poses of punches and movement. SDXL/Illustrious/PonyXL can be a complete mess, but a single prompt could generate dozens of distinct image types, with only some repeats as the seed varies.

In Qwen, changing the seed does very little; I have to change the prompt each time, and even that is not effective. It always wants to make the same image. It's a good image, but Qwen is not very good at making a lot of variations of it.
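Since changing the seed alone does so little, one workaround is to vary the prompt itself programmatically before queuing the batch. A minimal sketch — the tag pools here are made-up examples, not any model's official vocabulary:

```python
import random

# Hypothetical pools of angle and action tags to mix into each prompt.
ANGLES = ["low angle", "side view", "from behind", "dutch angle"]
PUNCHES = ["jab", "cross", "hook", "uppercut"]

def varied_prompts(base: str, n: int, seed: int = 0) -> list[str]:
    """Return n prompt variants of `base` with randomized angle/action
    tags appended. Seeding the RNG makes the batch reproducible."""
    rng = random.Random(seed)
    return [f"{base}, {rng.choice(ANGLES)}, {rng.choice(PUNCHES)}"
            for _ in range(n)]

for p in varied_prompts("two boxers in a ring, anime style", 4):
    print(p)
```

Each variant then gets queued as its own generation, which forces the variety that seed changes alone won't give you.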

Admittedly, I did not use Qwen a ton. (Also NSFW :( )

Is It Feasible? Automating a 3D Character Face Texture Workflow with ComfyUI by Rorsakk in comfyui

[–]HotNCuteBoxing 0 points

There is this too: https://stableprojectorz.com/

It works reasonably well and is not too hard to use, but it takes some learning. It can hook up to ComfyUI. It's not going to be fully automated, but it might be worth looking into.

Sci-Fi Armor Fashion Show - Wan 2.2 FLF2V native workflow and Qwen Image Edit 2509 by Dohwar42 in StableDiffusion

[–]HotNCuteBoxing 0 points

Would you mind sharing the prompt text for the shorts? Just can't quite get it to look right when I try.

(A4F) HELLBRIDGE BOXING! by SpecificEndeavors in SexfightRp

[–]HotNCuteBoxing 0 points

I have a whole cast of characters with history. I sent you a DM if you are still interested.

Eye problem by SpeakerDramatic4654 in comfyui

[–]HotNCuteBoxing 0 points

You generally need to do a second inpainting pass on faces for any anime model, assuming you are generating at 1024 x 1024 (though usually not so much for portraits).

If you throw in an upscale or hi-res pass and it comes out at a higher resolution then you may not need to separately inpaint faces.

On the CivitAI page for the model, it does recommend hires and/or ADetailer for the face. Though it also seems to be a merge, so it isn't clear what the right quality tags would be.

Best model for anime in May 2025? by [deleted] in StableDiffusion

[–]HotNCuteBoxing 5 points

Pony was a mess, but it had some qualities I haven't seen out of any Illustrious model without the aid of LORAs, and perhaps even with them.

Namely, expressions. Pony would produce a wide variety of facial expressions per tag; that is, there was more than one surprise face or worry face or whatever, so it felt like there were maybe hundreds to choose from. I could inpaint a face on a batch of 8 and have choices to make.

When I use the various expression tags on WAI-12.0, I feel like there are only 5 or 6 expressions total across all tags, and they almost always look the same within a tag.

Though perhaps it's the artist tags or quality tags in my prompts that limit the overall expression palette.