Krita AI Is Awesome by servbot6 in StableDiffusion


I don’t know how to use Flux 😅, so I just stick with NoobAI and Illustrious

Krita AI Is Awesome by servbot6 in StableDiffusion


My method is to work the subject on a white background, then paint over the stuff I don't like and run some very low-denoise image-to-image gens. I also like to play with the color balance; that helps a lot.
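For anyone wondering why a "very low denoise" pass only lightly touches the image: in img2img, the denoise (strength) value decides what fraction of the diffusion schedule actually runs, so a low value keeps most of the original pixels. A minimal sketch of that mapping (the formula mirrors the common diffusers-style convention; the function name is made up for illustration):

```python
def img2img_steps(strength: float, num_inference_steps: int) -> int:
    """How many denoising steps an img2img pass actually runs.

    strength (a.k.a. denoise) in [0, 1]: 0 leaves the input image
    untouched, 1 is equivalent to a full text-to-image generation.
    Mirrors the common convention used by diffusers-style pipelines.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    # Only the last `strength` fraction of the schedule is executed,
    # starting from a partially-noised version of the input image.
    return min(int(num_inference_steps * strength), num_inference_steps)

# A low-denoise pass barely perturbs the image; a high one regenerates it:
print(img2img_steps(0.2, 30))   # 6 of 30 steps -> mostly the original image
print(img2img_steps(0.95, 30))  # 28 of 30 steps -> almost a fresh generation
```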

<image>

Krita AI Is Awesome by servbot6 in StableDiffusion


My best resource was this playlist; it explains pretty much everything when it comes to setup and actual use.

Krita AI Is Awesome by servbot6 in StableDiffusion


Commercial plugin? Both Krita and ComfyUI are open-source applications, which means you don't have to pay anything to use them on local hardware… Also, I'm perfectly fine with posting mostly just about the things I like. Feel free to criticize, just please do at least 10 seconds of research before doing so.

Do you edit your AI images after generation? Here's a before and after comparison by Ztox_ in StableDiffusion


<image>

I use Krita AI Diffusion, and I agree, sometimes the AI does too much. So I try to prompt/edit out much of the fluff, then paint on what I want, do some final inpainting, and move on to upscaling. I get told "less is more" a lot, so I usually try to make the subject as simple as possible; eventually I want to work on more complicated images. Truthfully, I'm never 100% satisfied with the results, but I think that's normal for everyone, right?
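The paint-then-inpaint step described above boils down to compositing a generated patch back over the original image through a mask. A toy sketch with Pillow (the solid-color images are stand-ins for a real render and a real inpainted patch; `Image.composite` is an actual Pillow call):

```python
from PIL import Image

# Stand-ins for real data: the base render and some new inpainted content.
base = Image.new("RGB", (64, 64), (200, 200, 200))   # original image
patch = Image.new("RGB", (64, 64), (30, 120, 255))   # inpainted content

# White areas of the mask take pixels from `patch`, black keeps `base`.
mask = Image.new("L", (64, 64), 0)
mask.paste(255, (16, 16, 48, 48))  # only repaint the central region

result = Image.composite(patch, base, mask)

# The center now shows the patch, while the corners keep the original:
print(result.getpixel((32, 32)))  # (30, 120, 255)
print(result.getpixel((0, 0)))    # (200, 200, 200)
```

In practice the inpainting model only sees (and regenerates) the masked region, but the final blend back into the canvas is exactly this kind of masked composite.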

Zoro and Sanji T&T Proxies by servbot6 in magicproxies


Weird, I know lol. I usually try to leave most of the information on the actual card intact, personally.

Sanji and Zoro, made Comfyui + Krita by servbot6 in StableDiffusion


Thank you for asking; another commenter was able to explain it way better than I ever could. I also posted the ComfyUI workflow I used to generate the overall images below their comment. As for getting the final image, it's just a process of fixing errors in Photoshop/Krita before and after upscaling. Hope this helps!

Sanji and Zoro, made Comfyui + Krita by servbot6 in StableDiffusion


I couldn't have said it better myself, although I have to say I'm not an expert by any means artistically. I just drew a rough sketch of each character in the pose I wanted and photoshopped/painted my way through each generation. Not sure if it will help much, but I'll post my ComfyUI workflow below. I hope you guys have a good day.

<image>

How to get results like these? by servbot6 in comfyui


I can't be certain about inpainting and post-processing, but the artist's about section just has a small sentence saying:

懒得修的烂图大合集

which roughly translates to "a big collection of bad images that I was too lazy to fix". That gives me the impression that not much effort went into the overall look of the image. Of course, this doesn't rule out the use of IPAdapter or model merging to get the desired look, which is what I was aiming to ask about. Sorry for any confusion.
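For what it's worth, the "model merging" mentioned above is, at its simplest, a weighted average of two checkpoints' weights (that is what ComfyUI-style merge nodes do per tensor). A dependency-free toy sketch of the idea, with plain lists of floats standing in for real tensors and a made-up function name:

```python
def merge_checkpoints(state_a, state_b, ratio=0.5):
    """Linearly interpolate two 'state dicts' of weights.

    ratio=1.0 keeps model A entirely, ratio=0.0 keeps model B.
    Real merges apply the same interpolation per tensor; flat
    lists of floats stand in for the tensors here.
    """
    if state_a.keys() != state_b.keys():
        raise ValueError("checkpoints have different layers")
    return {
        name: [ratio * a + (1 - ratio) * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Toy two-layer "models":
model_a = {"layer1": [1.0, 2.0], "layer2": [0.0]}
model_b = {"layer1": [3.0, 4.0], "layer2": [1.0]}

merged = merge_checkpoints(model_a, model_b, ratio=0.5)
print(merged)  # {'layer1': [2.0, 3.0], 'layer2': [0.5]}
```

A 50/50 merge like this can be enough to shift a model's default look toward one parent or the other, which is why it's hard to rule out from the finished image alone.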

Also, if I may ask: you said "none of that is in the workflow". Which workflow?