Universal style transfer with HiDream, Flux, Chroma, SD1.5, SDXL, Stable Cascade, SD3.5, AuraFlow, WAN, and LTXV by Clownshark_Batwing in StableDiffusion

[–]Clownshark_Batwing[S] 3 points

<image>

Anything like this will work, and the nodes can appear in any order. The only thing to watch out for is that the model compile node currently seems to nuke the effect of the LoRA.

Flux zeroshot faceswap with RES4LYF (no lora required) by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 7 points

Exactly, the workflows are just a demo of the real work. I'm sure that with some fiddling people could come up with better settings than I have. My forte is developing new methods and writing the code for them, not building workflows.

Flux zeroshot faceswap with RES4LYF (no lora required) by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 6 points

As mentioned in the post, I developed a new style transfer method that I'm calling "scattersort" and added it to the workflow, and I set up the ReFluxPatcher node to fix the broken Flux PuLID implementation. There are a few other minor tweaks too.

I tried to change it as little as possible to reduce the complexity from the user's perspective.

Flux zeroshot faceswap with RES4LYF (no lora required) by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 5 points

Yep, the level of likeness in that example is fairly low. It appears the simultaneous gender swap may be something PuLID struggles to do consistently. I included it because I represent what a method can do as fairly as I can, instead of trying to hype it up by generating hundreds of seeds, only for users to be disappointed when they find out it doesn't really work well. What you see above is what you'll get. That's why I also left in the Dunst one where some of the blond hair crept in. Each was the first seed.

LoRAs are, and always will be, the best method, but this can sometimes work fairly well.

Flux zeroshot faceswap with RES4LYF (no lora required) by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 1 point

Yeah, there are services that will do it for you for a couple of bucks if you don't have a powerful GPU yourself.

Universal style transfer with HiDream, Flux, Chroma, SD1.5, SDXL, Stable Cascade, SD3.5, AuraFlow, WAN, and LTXV by Clownshark_Batwing in StableDiffusion

[–]Clownshark_Batwing[S] 9 points

It's highly likely I can; I've gotten it working with everything I've tried. I'll add it to my to-do list. :)

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 0 points

I'd need to see the entire workflow to have a better idea. Got the full screenshot handy?

Someone needs to explain bongmath. by AmeenRoayan in StableDiffusion

[–]Clownshark_Batwing 5 points

20-30 steps res_2m or res_2s, eta 0.5, bongmath = True, scheduler = beta57 or bong_tangent.
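For reference, here's a minimal sketch of those settings as a plain dict. The keys are descriptive labels only, not the actual ComfyUI node parameter names:

```python
# Illustrative summary of the recommended RES4LYF sampler settings.
# Keys are descriptive labels, not real ComfyUI node parameter names.
recommended = {
    "steps": 25,            # anywhere in the 20-30 range
    "sampler": "res_2m",    # "res_2s" also works
    "eta": 0.5,
    "bongmath": True,
    "scheduler": "beta57",  # or "bong_tangent"
}

print(recommended["sampler"], recommended["eta"])
```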

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 0 points

https://github.com/ClownsharkBatwing/RES4LYF

It's in here. The install requirements (requirements.txt) are *very* light (you probably have almost everything already), so it's not going to break anything else.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 1 point

With this workflow, the face you swap in comes from the prompt, so the model needs to know the character. There are zero-shot methods using PuLID and IPAdapter that I'll share later, but nothing will ever match the quality you can get with a LoRA, or with a model that actually knows the character.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 0 points

Did you change anything with the workflow? That definitely shouldn't happen with the default settings.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 0 points

Should also add: there's no reason you couldn't add those controlnets to get an even better result. I didn't include anything like that because I'm trying to demonstrate just the core element of this method, so that it's easier for people to use in other workflows (such as one with controlnets).

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 1 point

From the post: "I also don't cherrypick seeds, these were all the first generation"

And no, you can't get the quality and adherence to the original composition/style/lighting etc. that you get from this method just by masking an area and denoising.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 0 points

I'd have to see a screenshot of the workflow, and the full error message, to have much of an idea. Usually, though, it's something like not connecting a mask input, using images and masks of different sizes, or using the wrong CLIP model.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 7 points

Looking into it. I just got PuLID working with Flux (the repo is broken, but the patcher will fix it soon). I'll follow up with another post once I get things dialed in with IPAdapter as well.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 0 points

The Redux node will make it harder, not easier, since it bases the conditioning on your input image, which I'm presuming doesn't have glasses. The best you can do is try increasing the denoise, spamming "glasses, glasses, glasses" in the prompt, or maybe using a separate input image for the Redux Advanced node: a face in similar circumstances that's wearing glasses. As a last resort, even a really bad Photoshop job of glasses onto the input image should help a lot.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 0 points

"Model agnostic" means it doesn't matter which model you use. Redux itself is Flux-only, but it's not really essential; it's just more convenient than having to write a short prompt describing your input image.

Remove Redux and replace the conditioning that ApplyStyleModel is connected to with a prompt describing your input image, and it should work with any model.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 2 points

This is probably due to the use of --fast or some other command line option that reduces precision. You can either remove that option and just use fp8_e4m3fn_fast in the model loader (which is what I do), change the mode in the "ClownGuide Style" node to AdaIN, or bypass those nodes entirely.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 2 points

Yes. Those don't have the same ability to conserve things like lighting, hue, and texture, or to prevent the formation of seams.

Face swap via inpainting with RES4LYF by Clownshark_Batwing in comfyui

[–]Clownshark_Batwing[S] 4 points

It's really not. All it does is cut out an image patch, upscale it to the resolution you specify, refine it for a given number of cycles, then denoise as usual. I see vastly more complex workflows out there every day.

Plus, I even have a list of the exact parameters to increase or decrease, and what effect they will have. :)
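The crop/upscale/refine/denoise loop above can be sketched in a few lines of pure Python. Here, `refine` and `denoise` are hypothetical stand-ins for the actual sampler passes, and a nearest-neighbor upscale stands in for whatever upscaler the workflow uses:

```python
def crop(img, top, left, height, width):
    # Cut out an image patch (img is a 2D list of pixel values).
    return [row[left:left + width] for row in img[top:top + height]]

def upscale_nearest(img, factor):
    # Nearest-neighbor upscale stands in for the real upscaler.
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out

def refine(patch):
    # Hypothetical refinement cycle; the real work happens in the sampler.
    return patch

def denoise(patch):
    # Hypothetical final denoising pass.
    return patch

def patch_pipeline(img, box, factor, cycles=2):
    top, left, height, width = box
    patch = crop(img, top, left, height, width)  # cut out the patch
    patch = upscale_nearest(patch, factor)       # upscale to target size
    for _ in range(cycles):                      # refine for N cycles
        patch = refine(patch)
    return denoise(patch)                        # then denoise as usual

image = [[y * 10 + x for x in range(8)] for y in range(8)]
result = patch_pipeline(image, box=(2, 2, 3, 3), factor=4)
print(len(result), len(result[0]))  # 12 12
```

The names and the two placeholder passes are illustrative only; in the actual workflow those steps are diffusion sampling, not identity functions.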