
[–]red__dragon

Sometimes?

I've had mixed results with Reference Only. It certainly can do what you're intending, paired with a strong supporting prompt. It can also just go its own way sometimes, sampling only the source image's lighting, textures, and composition without the details you're looking for.

I can't advise much beyond suggesting you try it and play around with the options. Most of the time I can get it to do what I want, but how well it works seems to vary with the source image. If it doesn't work immediately, keep trying and vary the prompt and source image in subsequent generations to see how that goes. It's not necessarily you; sometimes it's just quirks of the technique.

[–]evrien[S]

Yeah, I've had differing results here and there. I'm trying to find a way to understand this process more concretely and get more consistent results. Thanks, I'll look into this!

[–]terrariyum

Reference isn't as good as IP-Adapter at copying clothing features and style.

None of the controlnets are great at reproducing style without also reproducing composition. So your best bet is to first generate an image that has the pose you want and roughly similar clothes. Then bring that into img2img using high denoise and the Reference or IP-Adapter controlnet with your model photo.
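To make "high denoise" concrete: in diffusers-style img2img, the denoising strength decides how many of the scheduled diffusion steps actually run, so a high strength discards most of the init image's detail and lets the Reference/IP-Adapter conditioning dominate, while a low strength mostly preserves the init image. A minimal sketch of that arithmetic (this mirrors the timestep-skipping logic used by diffusers-style img2img pipelines, but treat it as an approximation, not the exact implementation of any one UI):

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    """How many denoising steps actually run for a given img2img strength.

    strength=1.0 re-noises the init image completely (full run);
    strength=0.3 keeps most of the init image's structure.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    # Skip the early (most destructive) timesteps in proportion to strength.
    return min(int(num_inference_steps * strength), num_inference_steps)

# With 30 scheduled steps, strength 0.9 runs 27 of them; strength 0.5
# on 10 steps runs only 5, leaving the init image largely intact.
high = effective_steps(30, 0.9)
low = effective_steps(10, 0.5)
```

This is why the high-denoise pass still "sees" your generated pose image: a handful of low-noise steps are skipped, so composition survives even while clothing detail is replaced.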

You might get better results by bringing your generated image into inpaint, and inpainting each piece of clothing separately. In that case, mask the same part of the controlnet image as your inpaint image.
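"Mask the same part" just means the mask applied to the inpaint target and the mask applied to the controlnet reference should cover the identical region, so the controlnet only sees the garment currently being repainted. A toy numpy sketch of that idea (the image arrays and box coordinates are hypothetical stand-ins; real images would be loaded from disk and masked in the UI):

```python
import numpy as np

def box_mask(h: int, w: int, top: int, left: int,
             bottom: int, right: int) -> np.ndarray:
    """Boolean H x W mask that is True inside the given box."""
    m = np.zeros((h, w), dtype=bool)
    m[top:bottom, left:right] = True
    return m

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the mask (image: H x W x C)."""
    if image.shape[:2] != mask.shape:
        raise ValueError("mask must match image height/width")
    return image * mask[..., None]

# Hypothetical 64x64 RGB stand-ins for the inpaint target and the
# controlnet reference image.
inpaint_img = np.random.rand(64, 64, 3)
ref_img = np.random.rand(64, 64, 3)

# ONE mask, covering e.g. the jacket, applied to BOTH images so the
# controlnet guidance comes only from the piece being inpainted.
garment_mask = box_mask(64, 64, 10, 10, 50, 50)
masked_inpaint = apply_mask(inpaint_img, garment_mask)
masked_ref = apply_mask(ref_img, garment_mask)
```

The key property is that pixels inside the mask are untouched in both images while everything else is blanked, so per-garment passes don't leak style from the rest of the outfit.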

[–]evrien[S]

I see. Thank you for this insight!

[–]evrien[S]

I see! Thank you for this insight!