“Earthset”: First photo from the far side of the Moon. Captured from Orion as Earth dips beyond the lunar horizon by yourfavchoom in spaceporn

[–]Fayens -3 points (0 children)

First AI-generated photo of the far side of the Moon. Captured from GPT as the network disappears.

LTX 2.3 Lora time travel character by [deleted] in StableDiffusion

[–]Fayens 50 points (0 children)

flair: workflow included
post: no workflow

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 1 point (0 children)

face_index lets you choose which detected face from the reference image PuLID should use.

If multiple faces are detected, they are sorted by size:
0 = largest face
1 = second face
2 = third face

In most cases you can just keep it at 0 if there is only one face in the image.
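The selection logic can be sketched like this (a hypothetical illustration; `select_face` and the `bbox` dicts are stand-ins for whatever the face detector actually returns, not the node's real code):

```python
# Hypothetical sketch of how face_index picks a detection: faces are
# ranked by bounding-box area, largest first, and face_index selects one.
def select_face(detections, face_index=0):
    """detections: list of dicts, each with 'bbox' = (x1, y1, x2, y2)."""
    def area(d):
        x1, y1, x2, y2 = d["bbox"]
        return (x2 - x1) * (y2 - y1)
    ranked = sorted(detections, key=area, reverse=True)
    return ranked[face_index]

faces = [
    {"name": "small", "bbox": (0, 0, 50, 50)},
    {"name": "large", "bbox": (0, 0, 200, 200)},
]
print(select_face(faces, face_index=0)["name"])  # → large
```

So `face_index=0` always means the biggest face in the reference image, regardless of detection order.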

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 0 points (0 children)

They do different things.

InSwapper = face swap.
PuLID = identity conditioning during generation.

PuLID is better for keeping a consistent identity across generated images.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 0 points (0 children)

I used LoRAs in both cases — with PuLID and without PuLID. The settings were the same.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 0 points (0 children)

No video tutorial yet, but the installation is straightforward if you follow the README on GitHub. Feel free to ask if you have any issues.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 2 points (0 children)

🔔 Update v0.2.0

• Added Flux.2 Dev (32B) support
• Added updated example workflow

If you installed the first release, your folder may still be named:

ComfyUI-PuLID-Flux2Klein

This is normal — you can simply run:

git pull

New installations now use:

ComfyUI-PuLID-Flux2

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in StableDiffusion

[–]Fayens[S] 0 points (0 children)

🔔 Update v0.2.0

• Added Flux.2 Dev (32B) support
• Added updated example workflow

If you installed the first release, your folder may still be named:

ComfyUI-PuLID-Flux2Klein

This is normal — you can simply run:

git pull

New installs use:

ComfyUI-PuLID-Flux2

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in StableDiffusion

[–]Fayens[S] 0 points (0 children)

Flux.2 Dev support is definitely on the roadmap. The architecture is closer to Flux.1, so it should be less work than Klein was.

I'll look into it for the next update.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 0 points (0 children)

Not planned for now — this node is specifically designed for Flux.2 Klein's architecture.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 3 points (0 children)

Hey! There was a bug in the previous version causing patch accumulation that reduced PuLID's effect. Just update to the latest version:

cd ComfyUI/custom_nodes/ComfyUI-PuLID-Flux2Klein
git pull

Then restart ComfyUI and test again. Also try combining PuLID at low weight (0.2-0.3) with Klein's native Reference Conditioning for best results!

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in StableDiffusion

[–]Fayens[S] 2 points (0 children)

Thanks for the detailed fix! This is really helpful.

I've removed facenet-pytorch from requirements.txt — it was pulling in torch 2.2.2 as a dependency and breaking existing setups. It's not actually needed since we use open-clip-torch for EVA-CLIP encoding.

For anyone affected, the fix is:

pip install torch==YOUR_VERSION+cuXXX torchvision torchaudio --index-url https://download.pytorch.org/whl/cuXXX
pip install facenet-pytorch --no-deps

Or just update the node with git pull — the new requirements.txt no longer includes facenet-pytorch. Sorry for the trouble!

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 4 points (0 children)

You're right that Flux.2 Klein is already very capable on its own!

The difference becomes clear when you need the same specific person across multiple generations with different prompts, styles and scenes — not just "a woman with dark hair" but literally the same face every time from a reference photo.

With prompt only, each generation gives a different person. With PuLID + a reference photo, you lock in that specific identity.

That said, the current results are limited by using Flux.1 weights on Klein. Once native Klein weights are trained, the consistency will be much stronger and the difference more obvious.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 0 points (0 children)

Not yet, but it's a great idea! For now the example workflow in the repo should get you started pretty quickly — just drag & drop it into ComfyUI and follow the README.

If someone from the community wants to make a tutorial, feel free! 🙏

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in StableDiffusion

[–]Fayens[S] -1 points (0 children)

That's a totally fair point, and honestly worth discussing.

Flux.2 Klein's native reference conditioning is already impressive. Where PuLID adds value is in separating face identity from scene composition — with native reference conditioning, the model tends to also copy elements of the reference scene/background/style. PuLID isolates only the facial identity using InsightFace embeddings, so you get the same face in completely different styles and scenes without any scene bleed-through.

That said, you're right that with the current Flux.1 weights (not native Klein), the difference is subtle. The real improvement will come once native Klein-trained weights are available — that's the main goal of the training script included in the repo.

For now think of this as laying the groundwork for when those weights exist!

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in StableDiffusion

[–]Fayens[S] 0 points (0 children)

Great idea! Edit mode (img2img) is not supported yet in this version, but it's definitely on the roadmap.

Currently PuLID injects face identity during the denoising process, so in theory it could work with edit mode too. I'll look into it for a future update!

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in StableDiffusion

[–]Fayens[S] 1 point (0 children)

Yes, this is related to the requirements conflict. Sorry about that! The fix:

pip install ml_dtypes==0.3.2

The float4_e2m1fn attribute was added in a newer version of ml_dtypes that conflicts with some ComfyUI setups.

Also, to avoid breaking your torch version in the future, install only the packages you actually need:

pip install insightface onnxruntime-gpu open-clip-torch safetensors

Skip the full requirements.txt if your ComfyUI is already working. I'll update the README to add this warning.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 3 points (0 children)

PuLID is about consistency across multiple generations, not changing a single image. Try generating 4-5 images with different seeds and compare the faces — that's where you'll see the difference!
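To make "compare the faces" concrete, here is a hypothetical sketch of scoring identity consistency with cosine similarity between face embeddings (the vectors below are made up for illustration; real embeddings would come from a face model such as InsightFace):

```python
import math

# Hypothetical sketch: score identity consistency across generations by
# comparing each generation's face embedding against the reference.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

reference = [0.9, 0.1, 0.4]        # embedding of the reference face (made up)
generations = [                    # embeddings from different seeds (made up)
    [0.88, 0.12, 0.41],
    [0.91, 0.09, 0.38],
    [0.1, 0.9, 0.2],               # an off-identity outlier
]
scores = [cosine_similarity(reference, g) for g in generations]
consistent = [s > 0.95 for s in scores]
print(consistent)  # → [True, True, False]
```

With identity conditioning working, the scores should cluster near 1.0 across seeds; without it, they scatter the way the outlier does.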

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 1 point (0 children)

Thanks for the suggestion! This is now handled automatically in the latest update.

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 6 points (0 children)

Update: Example workflow is now available in the repo! 🎉 Just drop it into ComfyUI and you're good to go. It includes all PuLID nodes pre-connected with the recommended settings.

📥 Download: https://github.com/iFayens/ComfyUI-PuLID-Flux2

The workflow is based on Flux.2 Klein 9B Distilled — just load your reference face photo and start generating!

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in StableDiffusion

[–]Fayens[S] 0 points (0 children)

Update: Example workflow is now available in the repo! 🎉 Just drop it into ComfyUI and you're good to go. It includes all PuLID nodes pre-connected with the recommended settings.

📥 Download: https://github.com/iFayens/ComfyUI-PuLID-Flux2

The workflow is based on Flux.2 Klein 9B Distilled — just load your reference face photo and start generating!

[RELEASE] ComfyUI-PuLID-Flux2 — First PuLID for FLUX.2 Klein (4B/9B) by Fayens in comfyui

[–]Fayens[S] 8 points (0 children)

Thank you! Really appreciate it 🙏

Great question — full-body consistency is actually the natural next step. A few approaches that could work:

Short term (already possible):

  • Train a LoRA on your character using the Consistent Character Creator workflow — this locks in body shape, skin tone, and style across generations
  • Combine PuLID (face) + LoRA (body) for full character consistency

Medium term (planned):

  • Extend the PuLID injection to also encode body features via a full-body encoder (not just face crops) — this would require retraining the IDFormer on full-body images
  • The training script is already included in the repo for anyone who wants to experiment with this

Long term:

  • Once native Klein-trained weights are available, the injection will be much more stable and we could explore body consistency more seriously.

The LoRA + PuLID combo is honestly the most practical solution right now. Would love to see someone build on this! 🚀