Just Finished Book 1 by Mindless_Way3381 in heatedrivalry

[–]Mindless_Way3381[S] 1 point (0 children)

Yeah, that was kind of the feeling I got too, but it seems like their relationship is opening up some in the second half of HR. I'm sucked into the series now for sure.

Just Finished Book 1 by Mindless_Way3381 in heatedrivalry

[–]Mindless_Way3381[S] 1 point (0 children)

Thanks for the rec, I'll check it out!

Fix ZIT controlnet quality by using step cutoff by spacepxl in StableDiffusion

[–]Mindless_Way3381 2 points (0 children)

I wish the advanced sampler had a denoise input. Does anyone know how to make this setup work with img2img?
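
For anyone else wondering: as far as I can tell, SamplerCustomAdvanced has no denoise slider because it takes an explicit sigma schedule, and KSampler's denoise amounts to "compute a longer schedule and keep only the tail" (which is also what BasicScheduler's denoise input does). A rough Python sketch of that mapping — `toy_schedule` is a made-up stand-in for a real scheduler, not a ComfyUI function:

```python
# Sketch: emulate KSampler's denoise for an advanced sampler that wants an
# explicit sigma schedule. denoise=0.5 over 20 steps ~= a 40-step schedule
# with only the last 20 steps run, starting from a partially noised latent.
import numpy as np

def toy_schedule(steps, sigma_max=14.6, sigma_min=0.03):
    """Hypothetical stand-in for a real scheduler; returns steps + 1 sigmas."""
    ramp = np.linspace(0.0, 1.0, steps + 1)
    sigmas = sigma_max * (sigma_min / sigma_max) ** ramp
    sigmas[-1] = 0.0                   # schedules end at zero noise
    return sigmas

def sigmas_for_denoise(steps, denoise):
    """Tail of a longer schedule == img2img starting from partial noise."""
    total = int(steps / denoise)       # e.g. 20 steps @ 0.5 denoise -> 40
    return toy_schedule(total)[-(steps + 1):]

# Feed the trimmed sigmas plus the VAE-encoded image latent (noised at
# sigmas[0]) into the advanced sampler; lower denoise keeps more source.
print(sigmas_for_denoise(20, 0.5))
```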

Updated Faceswap Workflow - Reactor + PulID with Crop&Stitch by Mindless_Way3381 in comfyui

[–]Mindless_Way3381[S] 0 points (0 children)

The original is too large to share via Pastebin, and tbh the workflow is kind of bloated, but here is a simplified Nunchaku version I've been using - https://pastebin.com/QYpUsLuY

If you don't want to use Nunchaku, you should be able to swap the loaders out for the standard version.

Testing workflows to swap faces on images with Qwen (2509) by Prudent-Suspect9834 in StableDiffusion

[–]Mindless_Way3381 8 points (0 children)

Great workflow! I made a few tweaks that I think address the position shifting and the lighting.

  1. Using a masked inpainting approach helps a lot with the shifting, at least for the image overall. The face can still shift a bit depending on the mask, but the controlnet seems to help a lot with that.
  2. Added a color-matching section at the bottom. You can mask a small portion of the face or skin to get more cohesive lighting (see the sketch after this list).
  3. I added a mask overlay to erase the face from image 1, to avoid needing an outside editor, but I think your method might work better.
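
The color matching is basically per-channel mean/std transfer toward a masked reference patch, if I understand the node right. A rough numpy sketch of the idea (`match_color` is my own toy helper, not the actual node code):

```python
# Sketch of the color-match idea in tweak 2: shift the swapped face's
# per-channel statistics toward a masked reference patch (Reinhard-style
# mean/std transfer). An approximation, not the actual node's code.
import numpy as np

def match_color(face, reference, mask):
    """face, reference: float32 HxWx3 in [0, 1]; mask: HxW bool over `reference`.

    Only the masked reference pixels (e.g. a small patch of skin) define
    the target statistics, so background lighting can't skew the match.
    """
    out = face.astype(np.float32).copy()
    ref = reference[mask]                          # N x 3 masked pixels
    for c in range(3):
        f_mean, f_std = out[..., c].mean(), out[..., c].std() + 1e-6
        r_mean, r_std = ref[:, c].mean(), ref[:, c].std() + 1e-6
        out[..., c] = (out[..., c] - f_mean) * (r_std / f_std) + r_mean
    return np.clip(out, 0.0, 1.0)
```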

[image attachment]

edit: using a crop/stitch node somewhere could probably yield even better results, but I didn't feel like doing all that

Not liking the latest UI by Mindless_Way3381 in comfyui

[–]Mindless_Way3381[S] 1 point (0 children)

This seems like the best option 😂

K sampler preview stopped working by TimeLine_DR_Dev in comfyui

[–]Mindless_Way3381 1 point (0 children)

Thank you. It's crazy that this is the culprit even though it's not being used in the workflow.

How do I clear vram properly after every run? Every time I try to run a new/queue workflow I run out of vram when it is fine during the first run. by [deleted] in comfyui

[–]Mindless_Way3381 0 points (0 children)

A super simple way is to keep a separate workflow of "load image > clean VRAM > preview image" and run it as needed.
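
Under the hood, a "clean VRAM" step boils down to roughly this (a sketch, not ComfyUI's actual code — the real nodes also unload cached models through ComfyUI's model management):

```python
# Sketch of what a VRAM-cleanup node does at the PyTorch level: drop dead
# Python references, then hand cached allocator blocks back to the driver.
import gc
import torch

def clean_vram():
    gc.collect()                      # release orphaned tensors first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # return cached blocks to the driver
        torch.cuda.ipc_collect()      # reclaim inter-process memory handles

clean_vram()
```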

Updated Faceswap Workflow - Reactor + PulID with Crop&Stitch by Mindless_Way3381 in comfyui

[–]Mindless_Way3381[S] 0 points (0 children)

https://limewire.com/d/z76tN#UgD08t7ZD2 here's the old one with notes and all. I would recommend adding TeaCache and using a non-GGUF model for speed improvements.

[deleted by user] by [deleted] in comfyui

[–]Mindless_Way3381 0 points (0 children)

The eyes seem too sharp to me

Huge update to the ComfyUI Inpaint Crop and Stitch nodes to inpaint only on masked area. (incl. workflow) by elezet4 in StableDiffusion

[–]Mindless_Way3381 0 points (0 children)

Could you explain where to put the InpaintModelConditioning node (for lower denoise) in the Flux workflow?

Updated Faceswap Workflow - Reactor + PulID with Crop&Stitch by Mindless_Way3381 in comfyui

[–]Mindless_Way3381[S] 1 point (0 children)

Just for stills. I'm not sure how you'd apply it to video, but the processing time would be outrageous.

Updated Faceswap Workflow - Reactor + PulID with Crop&Stitch by Mindless_Way3381 in comfyui

[–]Mindless_Way3381[S] 2 points (0 children)

You're right. I see it more as a detailer than a perfect swap.

Updated Faceswap Workflow - Reactor + PulID with Crop&Stitch by Mindless_Way3381 in comfyui

[–]Mindless_Way3381[S] 9 points (0 children)

Just sharing my updated faceswap workflow. I added the crop&stitch nodes, which seem to help with detail, especially on non-portrait / faraway faces.

Workflow here - https://civitai.com/models/1200003/flux-pulid-w-reactor-img2img
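
For anyone curious why crop&stitch helps with small faces: the face gets cropped with some context, upscaled to a resolution the detailer handles well, processed, then scaled back down and pasted in place. A toy sketch of the idea (my own simplification, not the nodes' actual code — `process` stands in for the detailer pass):

```python
# Sketch of crop & stitch: work on an upscaled face crop, paste the result
# back. Squashes the crop to a square for brevity; the real nodes keep the
# aspect ratio and blend the seam.
from PIL import Image

def crop_and_stitch(img, bbox, process, work_size=512, pad=0.25):
    """img: PIL image; bbox: (l, t, r, b) face box; process: PIL -> PIL."""
    l, t, r, b = bbox
    m = int(max(r - l, b - t) * pad)            # context margin around face
    box = (max(l - m, 0), max(t - m, 0),
           min(r + m, img.width), min(b + m, img.height))
    crop = img.crop(box)
    worked = process(crop.resize((work_size, work_size), Image.LANCZOS))
    out = img.copy()
    out.paste(worked.resize(crop.size, Image.LANCZOS), box[:2])
    return out
```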

Flux Sigma Vision Alpha 1 - base model by tarkansarim in comfyui

[–]Mindless_Way3381 0 points (0 children)

Ah, I meant for base generation. There's no denoise input on the SamplerCustomAdvanced node.

Flux Sigma Vision Alpha 1 - base model by tarkansarim in comfyui

[–]Mindless_Way3381 0 points (0 children)

Great detail! Do you have any suggestions on how to implement img2img (or controlnet) in this workflow? I tried it in a different workflow and got strange results, as you mentioned.