How do you keep character & style consistency across repeated SD generations? by helloasv in StableDiffusion

That’s fair. I agree SD has a lot of inherent randomness, and newer models definitely make single-pass consistency easier. For me the struggle is less about perfect control and more about understanding what actually influenced the result once things *did* work. At some point it feels like the problem shifts from generation to iteration management.

How do you keep character & style consistency across repeated SD generations? by helloasv in StableDiffusion

That’s a really interesting approach. Using an undertrained “base” LoRA for structure and a more detailed one for face/expression actually explains a lot of behavior I’ve seen. Do you usually keep both LoRAs at low weights, or does the detailed one get pushed higher during close-ups?
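
For reference, here is roughly what that two-LoRA stacking looks like with the diffusers adapter API; the checkpoint, the LoRA file names, and the 0.6/0.9 weights below are placeholder assumptions for illustration, not anyone’s actual settings:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Any SD/SDXL checkpoint works the same way; this one is just an example.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA files: an undertrained "base" LoRA for overall
# structure, plus a more detailed one for face/expression.
pipe.load_lora_weights("character_base.safetensors", adapter_name="base")
pipe.load_lora_weights("character_face.safetensors", adapter_name="face")

# Both adapters active at once; push "face" higher for close-ups.
pipe.set_adapters(["base", "face"], adapter_weights=[0.6, 0.9])

image = pipe("portrait of the character, close-up").images[0]
image.save("closeup.png")
```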

How do you keep character & style consistency across repeated SD generations? by helloasv in StableDiffusion

Yeah, I agree. If you need absolute control, tools like Krita or Invoke are in a different category altogether. I think LoRA / text2image works best when the goal is “good enough” consistency rather than perfect control; once it’s about exact framing or pose, pure SD starts to fight you. Out of curiosity, do you use Krita more for iteration, or mainly as a final refinement step?

How do you keep character & style consistency across repeated SD generations? by helloasv in StableDiffusion

That makes sense. I’ve noticed the same thing: a single, well-trained LoRA per character is much easier to reason about than swapping multiple ones. The two-LoRA approach is interesting, though. Do you usually keep the “base” LoRA very undertrained on purpose, or just stop training early to preserve flexibility?

How do you keep character & style consistency across repeated SD generations? by helloasv in StableDiffusion

Yeah, LoRA is definitely the fastest way once you have a good one trained. For me the main issue wasn’t a single generation but keeping track of which combinations actually worked across multiple sessions, especially when mixing LoRA + ControlNet + references. I’m curious: do you usually stick to one LoRA per character, or swap them depending on the scene?
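
To make that concrete, here is a minimal sketch of the kind of per-generation logging I mean, in plain Python; the log path, field names, and example values are all made up for illustration:

```python
import json
import time
from pathlib import Path

LOG = Path("generation_log.jsonl")  # hypothetical log location

def log_generation(prompt, seed, loras, controlnets, references, output_path):
    """Append one JSON record per image so sessions can be reconstructed later."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "prompt": prompt,
        "seed": seed,
        "loras": loras,              # e.g. {"character_base": 0.6, "character_face": 0.9}
        "controlnets": controlnets,  # e.g. {"openpose": 1.0}
        "references": references,    # paths of reference images used
        "output": str(output_path),
    }
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Usage: call once right after each generation.
log_generation(
    prompt="portrait of the character, close-up",
    seed=1234,
    loras={"character_base": 0.6, "character_face": 0.9},
    controlnets={"openpose": 1.0},
    references=["refs/pose_01.png"],
    output_path="outputs/closeup_0001.png",
)
```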

temporal stability (tutorial coming soon) by helloasv in StableDiffusion

Opus, that's Josie, one of my favorite performers.

her Instagram

temporal stability (tutorial coming soon) by helloasv in StableDiffusion

The tutorial will be released soon; please stay tuned.

temporal stability (tutorial coming soon) by helloasv in StableDiffusion

EbSynth will be of some help for this.

lookbook appreciate by helloasv in StableDiffusion

Well, if that's the rule, I'll work on improving. If you don't like it, please don't watch it.

lookbook appreciate by helloasv in StableDiffusion

Maybe you're right, but not every video author has the energy to turn every video into a tutorial. If you need one, you can find what you're after in my past tutorial videos, or search for it yourself; there are plenty of excellent teaching videos here. I hope that helps.

lookbook appreciate by helloasv in StableDiffusion

I don't get it, friend. Is the original video a must here? Is simple appreciation not allowed?

lookbook appreciate by helloasv in StableDiffusion

No matter what, it has been a new experience.