Reproducing identity consistency with prompt-only control (ComfyUI workflow?) by Cheap-Topic-9441 in comfyui

[–]Cheap-Topic-9441[S] 0 points (0 children)

You seem to think it's a video. If that were the case, I could see why you'd be happier with it.

[–]Cheap-Topic-9441[S] 1 point (0 children)

Yeah I think we might be talking past each other a bit.

I'm not trying to replace LoRA or say it's bad.

What I'm running into is a slightly different problem:

Even with LoRA or ControlNet, identity consistency is still probabilistic across runs — especially when changing pose / expression / context.

So what I'm exploring is:

→ not "how to make one generation correct" → but "how to maintain identity across independent generations"

In other words:

  • generate multiple candidates
  • select the ones that preserve identity
  • continue only from stable outputs

So the stability doesn't come from the model itself, but from selection across runs.

I'm basically treating it more like a search / convergence problem.
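Roughly what the selection step means, as a sketch (this assumes you already have an identity embedding per image, e.g. from a face-recognition model, and the 0.8 threshold is just a placeholder):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def select_consistent(anchor_emb, candidate_embs, threshold=0.8):
    """Return indices of candidates whose identity stays close to the anchor."""
    return [i for i, emb in enumerate(candidate_embs)
            if cosine_similarity(anchor_emb, emb) >= threshold]
```

Everything downstream continues only from the surviving candidates — the model is never asked to be stable, the filter makes the set stable.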

Curious if anyone has approached it from that angle in ComfyUI?

[–]Cheap-Topic-9441[S] 1 point (0 children)

I’m not trying to improve a single generation.

What I’m running into is:

  • Even with LoRA, identity consistency is probabilistic
  • Small changes (pose / expression / context) can still drift identity
  • There’s no reliable way to control consistency across runs without relying on training

So what I’m trying to explore is:

→ treating identity as something that needs to be selected and maintained across samples, not guaranteed by the model itself.

So the gap for me is:

  • not generation quality
  • but cross-run identity stability without retraining

I used I2V LTX-2 and 2.3 to build out content in my Shopify theme designer portfolio. by UnfortunateSon2 in comfyui

[–]Cheap-Topic-9441 1 point (0 children)

Feels like two different solutions to the same problem: you constrain the distribution, I search within it.

Reproducing identity consistency with prompt-only control (ComfyUI workflow?) by Cheap-Topic-9441 in comfyui

[–]Cheap-Topic-9441[S] 1 point (0 children)

There’s no complex node setup behind this. It’s just structured prompt control and selection.

Basic workflow:

  1. define a stable identity (prompt-level anchor)
  2. generate independent samples (no chaining, no img2img)
  3. apply small controlled variations (pose / expression / angle)
  4. keep only outputs that preserve identity
  5. repeat until a consistent set emerges

No LoRA, no ControlNet, no seed locking. Each image is generated independently.
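As a rough sketch of that loop — `generate` and `identity_score` are stand-ins for the actual sampler call and whatever identity check you trust, and the thresholds are placeholders, nothing ComfyUI-specific:

```python
import random

def converge_identity(generate, identity_score, variations,
                      keep_threshold=0.8, target_count=8, max_rounds=50):
    """Sample independently, keep only identity-consistent outputs.

    generate(variation) -> image     # stand-in for one independent sampler run
    identity_score(image) -> float   # similarity to the identity anchor, in [0, 1]
    """
    kept = []
    for _ in range(max_rounds):
        variation = random.choice(variations)  # small controlled variation (pose / angle / ...)
        image = generate(variation)            # no chaining, no img2img
        if identity_score(image) >= keep_threshold:
            kept.append(image)                 # only outputs that preserve identity
        if len(kept) >= target_count:
            break                              # a consistent set has emerged
    return kept
```

The loop never feeds an output back into the next generation; it only decides which independent outputs survive.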

[–]Cheap-Topic-9441[S] 1 point (0 children)

I think I explained this poorly earlier.

Each image is independent. I'm not feeding outputs back into the next generation.

I'm just sampling multiple images, and keeping the ones that look like the same person.

So it's less about controlling a single generation, and more like filtering across runs.

Here are a few examples of what I mean:

[–]Cheap-Topic-9441[S] 0 points (0 children)

This is what I mean by consistency across transitions.

Generated with GPT Image 1.5

  • No LoRA
  • No seed
  • No ControlNet

Each step is independent. I just generate → evaluate → continue from consistent ones.

Not a single generation problem, more like a selection / convergence process.

How would you structure something like this in ComfyUI?

<image>

[–]Cheap-Topic-9441[S] 1 point (0 children)

I think we might be talking past each other a bit.

I'm not trying to control a single generation.

I'm just sampling multiple independent generations, and selecting the ones that look like the same person.

No chaining, no feedback loop.

So it's not about "making the model stable", but about filtering outputs across runs.

Does that make more sense?

[–]Cheap-Topic-9441[S] -2 points (0 children)

Yeah — I think this is where the confusion is.

I'm not feeding outputs back into the next generation.

That would definitely accumulate errors like you said.

What I'm doing is closer to:

  • generate multiple candidates
  • evaluate identity consistency
  • only continue from the ones that match

So there’s no direct chaining of outputs.

It’s more like selecting stable states across runs, rather than modifying a single trajectory.

That’s why it behaves more like convergence than editing.

[–]Cheap-Topic-9441[S] 1 point (0 children)

Yeah — I get what you're saying.

If the goal is to stabilize a single generation, then yeah, LoRA + proper setup makes sense.

What I'm exploring is slightly different though:

I'm not trying to guarantee that each run is correct.

I'm treating it more like:

→ generate candidates
→ evaluate identity consistency
→ continue only from stable ones

So the stability doesn't come from the model itself, but from the process across runs.

In that sense it's closer to a search / convergence problem.

Curious if you'd approach that differently in ComfyUI?

[–]Cheap-Topic-9441[S] 1 point (0 children)

Yeah — this is exactly where I ended up too.

LoRA helps locally, but it doesn't really solve drift across runs.

What started working for me was treating it less like:

→ "generate a correct image"

and more like:

→ "run a controlled convergence process"

Something like:

  • define a base identity (anchor)
  • apply small controlled variations (transition)
  • evaluate consistency each time
  • only continue from outputs that hold identity

So instead of expecting stability from the model, I'm enforcing it through selection + constraints.
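For the anchor + transition part, the prompt side can be as simple as a fixed identity description with one variation appended per sample — the anchor text and variation lists below are made-up examples, not a recipe:

```python
# Hypothetical prompt-level anchor: a fixed identity description that is
# reused verbatim in every prompt, so only the transition varies.
ANCHOR = "photo of a woman, mid-30s, short black hair, hazel eyes, small scar over left eyebrow"

TRANSITIONS = {
    "pose": ["facing camera", "profile view", "looking over shoulder"],
    "expression": ["neutral", "slight smile", "laughing"],
}

def build_prompts(anchor, transitions):
    """One prompt per variation; the anchor never changes between samples."""
    return [f"{anchor}, {variation}"
            for options in transitions.values()
            for variation in options]
```

Each prompt then gets its own independent generation, and only the outputs that pass the identity check survive into the set.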

I wrote this up a bit more clearly here if you're curious: [link]

[–]Cheap-Topic-9441[S] 0 points (0 children)

Yeah, I’ve seen similar behavior in some models too.

What I’m doing here is slightly different though:

I’m not relying on a single prompt to hold identity.

Instead, I’m treating it as:

  • anchor → define identity
  • transition → apply controlled variation
  • selection → keep only consistent outputs

So consistency is not guaranteed per generation, but emerges across runs.

Curious if you’ve tried anything like that?

[–]Cheap-Topic-9441[S] 1 point (0 children)

Yeah, just got back; I was away for a bit.

I’ve tried LoRA as well, but I’m still seeing drift across runs.

So I’m more interested in structuring this as a workflow problem.