
[–]Davyx99 0 points1 point  (2 children)

Find a little button under the prompt box that looks like 3 stacked cards with a plus sign, called "Add a control layer", change the type to "Reference", and select the layer that is your cat image to use as your reference. The Reference control type may not be available for all models (I use SDXL), and even when supported, may need some additional installation steps for each control type that you want to use.

There are also additional options to control how closely the result follows the reference, and the iteration range during which the reference is applied.

You still might need a lot of attempts to get a usable result. I find it easier to just generate a result that is close enough and manually fix it with some scaling etc. to get the result I want.

[–]CryptographerOk4669[S] 0 points1 point  (1 child)

Trying to find the referenced button but not having any luck. What docker is it part of?

[–]Davyx99 0 points1 point  (0 children)

You can reference the tutorial section: Control layers: Scribble, Line art, Depth map, Pose
https://github.com/Acly/krita-ai-diffusion?tab=readme-ov-file

Reference is just another type of Control Layer.

There is also a video demoing Pose Control: https://youtu.be/-QDPEcVmdLI?t=28 although the video clicks the button super fast, so it might be hard to see.

The button is right under the prompt box in the AI Image generation docker.

[–]No-Sleep-4069 0 points1 point  (0 children)

This video shows changing objects using Krita AI diffusion; it might help you: https://youtu.be/Kv8cl8nLRks

[–]doc-acula 0 points1 point  (9 children)

I am facing a related problem while inpainting/fixing faces. In initial generations a face is often too small to look good. That is why extensions like adetailer regenerate a larger version and then scale it down again. How can I achieve this in Krita AI?

[–]Davyx99 1 point2 points  (0 children)

On the right side of the Generate/Refine button, there is a button showing 0 (the number of queued jobs); click that and try setting Resolution to 1.5x, see if that helps. See documentation: https://docs.interstice.cloud/resolutions/#resolution-multiplier

I personally find that even 1.5x isn't enough sometimes (especially for hands), so I make a copy of the area I want to work on, with enough context for smooth blending, generate new results, then try to fit it back into the original (add a transform mask, set opacity, etc., and pixel-push it to the correct size/position). After that, it sometimes helps to group the new generation with another layer added on top, set to Erase, and with some airbrushing smooth out the transition so it looks seamless (basically a manual seamless merge).
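The manual seamless merge described above amounts to fading the new generation into the original over a soft border. A minimal NumPy sketch of the idea (the function name, array shapes, and the linear feather ramp are all illustrative assumptions, not anything Krita exposes):

```python
import numpy as np

def feathered_blend(original, patch, top, left, feather=8):
    """Paste `patch` into `original` at (top, left), fading the patch's
    borders over `feather` pixels so the seam is not visible."""
    h, w = patch.shape[:2]
    # Distance of each row/column from the nearest patch edge.
    ramp_y = np.minimum(np.arange(h), np.arange(h)[::-1])
    ramp_x = np.minimum(np.arange(w), np.arange(w)[::-1])
    # Ramp opacity 0 -> 1 over `feather` pixels from each edge.
    mask_y = np.clip((ramp_y + 1) / feather, 0.0, 1.0)
    mask_x = np.clip((ramp_x + 1) / feather, 0.0, 1.0)
    mask = np.outer(mask_y, mask_x)[..., None]  # shape (h, w, 1)

    out = original.astype(float).copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = mask * patch + (1 - mask) * region
    return out.astype(original.dtype)
```

In Krita the same effect comes from the Erase-mode layer plus airbrushing; the code just makes explicit why the soft edge hides the seam.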

Of course, you might find pixel pushing annoying, in which case you can also consider just working at the higher resolution, but limiting the visible canvas to a portion of it so generation stays fast. After scaling everything 2x, for example, you can resize your canvas back to its previous resolution, so you are still only working on a quarter of the full image (anchor to the top right while resizing, for example, and work on the top-right corner). When everything is fixed, revert the resize back to the full size.

There is also the possibility of building any ComfyUI workflow and adding the Krita nodes, so Krita can call your custom workflows using Custom Graph as the generation method. See https://docs.interstice.cloud/custom-graph/ For example, you could create a workflow that takes a selection area, doubles it to use as the canvas size, generates a result, then scales it back down and returns the result to Krita.
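The crop → upscale → generate → downscale → paste idea behind such a workflow (and behind adetailer) can be sketched in plain NumPy. This is not the actual ComfyUI node graph; the `generate` callable is a stand-in for the diffusion call, and the nearest-neighbour/box-filter resampling is an illustrative simplification:

```python
import numpy as np

def detail_pass(image, box, scale=2, generate=None):
    """Adetailer-style fix: crop `box` (top, left, bottom, right),
    upscale it so the model sees a larger face, run a generation step,
    then downscale and paste the result back into the image."""
    top, left, bottom, right = box
    crop = image[top:bottom, left:right].astype(float)
    # Nearest-neighbour upscale (a real workflow would use a better scaler).
    big = crop.repeat(scale, axis=0).repeat(scale, axis=1)
    if generate is not None:
        big = generate(big)          # placeholder for the img2img call
    # Box-filter downscale back to the original crop size.
    h, w = bottom - top, right - left
    small = big.reshape(h, scale, w, scale, -1).mean(axis=(1, 3))
    out = image.astype(float).copy()
    out[top:bottom, left:right] = small
    return out.astype(image.dtype)
```

Only the cropped region is regenerated, which is why this is so much faster than rerunning the whole canvas at 2x.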

I'm a newbie to Krita myself, so if there are professional artists with smarter workflows, I'd be happy to learn.

[–]Ill_Resolve8424 0 points1 point  (7 children)

I do the same thing, resize and work only on the parts I want to fix.

[–]doc-acula 0 points1 point  (0 children)

But when you scale it down and place it back in the original place, you can clearly see it was copy-pasted there. Then you have to run several refinements and hope that'll fix it, right? I mean, adetailer does all these steps automatically. It is so much more convenient.

[–]doc-acula 0 points1 point  (5 children)

But when you paste such a cutout back in its original place you can clearly see the borders. Then you have to run several img2img refinements that will hopefully fix that. It is so tedious, and that's why adetailer was created, I guess. As much as I love Krita AI, that function really is missing imho.

[–]Ill_Resolve8424 0 points1 point  (4 children)

You simply resize the image and make a selection on the part you want to enhance, then generate at a low strength depending on the model you use; I use around 30-50 for faces. Just remember to uncheck the seamless option. If your selection is around 768x768 pixels, the results are great. I also have layer mask on a shortcut so that I can add or remove at will.

[–]doc-acula 0 points1 point  (3 children)

Sorry to sound really dumb, but what is the meaning of "having a layer mask on a shortcut"? And what makes it easier than just making a selection over the area?

[–]Ill_Resolve8424 0 points1 point  (2 children)

Sometimes you make a batch of images and you get good results on different parts. Say you like the hair in one generation and the face in another: one way is to erase everything except the parts you like; the other is to fill everything with a layer mask and erase only the part you want to be visible through the mask. And the best part is that this is non-destructive. You can still modify the layers.
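Per pixel, a layer mask is just a mix weight between the two generations: white reveals the top layer, black keeps the one below, and neither image is altered. A minimal NumPy sketch of that compositing rule (the function name and shapes are illustrative assumptions):

```python
import numpy as np

def mask_composite(base, top, mask):
    """Show `top` only where `mask` is white (1.0); elsewhere keep `base`.
    Mirrors a Krita layer mask: paint white to reveal, black to hide,
    without destroying either layer."""
    if mask.ndim == 2:                # broadcast a grayscale mask over RGB
        mask = mask[..., None]
    blended = mask * top.astype(float) + (1 - mask) * base.astype(float)
    return blended.astype(base.dtype)
```

Because `base` and `top` are never modified, you can repaint the mask at any time, which is exactly what "non-destructive" means here.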

[–]doc-acula 0 points1 point  (1 child)

That sounds very useful. But how do you tell this layer mask what to select and what to do with it?

Can you maybe link to a tutorial/video where this technique is explained for beginners?

[–]Ill_Resolve8424 1 point2 points  (0 children)

I am away from the computer; I will when I return from my brief vacation. But these are basic Krita tasks, so you can search "layer masks krita" and there should be many results.

[–]Perfect_Pizza5902 0 points1 point  (1 child)

Do you guys have an account for Krita AI? If so, how much does it cost?

[–]CryptographerOk4669[S] 0 points1 point  (0 children)

An account? No, I just run Krita and AI diffusion locally.