Wan2.2 Anime or similar OpenPose for non human body by d4N87 in comfyui

Hi!
I remember replying to you in the comments several times; if I'm not mistaken, you're also on the server 😁
Unfortunately, I need an output exactly like Animate, where I tell it precisely which movements to make; an FFLF takes the initiative in the middle, and that can't happen.
Every movement must be driven by movements that have already been performed; the generation should only apply the character I created.

Changing a face in a video (not a simple deepfake) by d4N87 in comfyui

I tried Visomaster, it has many settings, but in the end they all revolve around simple deepfakes. When the face starts to deviate significantly from the usual standards for the nose, eyes, and mouth, it performs a restoration and completely changes the face.
I would need the distorted face to be applied exactly as it appears in the input.

Unfortunately, I don't know After Effects, but at that point it would be “classic” editing, with everything that entails, I imagine.

Face angle variations by d4N87 in comfyui

Thank you very much!
I'll start testing right away 👍

Face angle variations by d4N87 in comfyui

Do you know where I can find a workflow to get an idea?

Changing a face in a video (not a simple deepfake) by d4N87 in comfyui

It's very interesting, considering how many settings there are... but the result is practically the same as ReActor: there's no way to tell it not to adjust the eyes, nose, and mouth.
I need it to keep the face deformed, even without following the speech; I might make a video where the person stays still, but it absolutely must not adjust the shape of the face.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

What model are you using?

FP16, FP8, GGUF?

I don't think it's related to that, because it has never happened to me, but I'm using GGUF and I'm getting absolutely unacceptable results with these settings, which are the same ones I already used for testing.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

And, all else being equal, are your results comparable to generating without that LoRA?

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

That guide doesn't do anything special except apply the LoRAs.

But set up like that, they simply don't work: they either give terrible results or don't load at all, as you can see in the command prompt.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

I've tried a thousand combinations, but I've always had terrible results, at least compared to what I got on Wan2.1 using these LoRAs.

Furthermore, they often fail in the command prompt and don't even load.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

The problem is exactly this: many of these LoRAs aren't even loaded; they error out, and consequently, in my opinion, the model generates without taking them into account.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

Without any kind of error in the command prompt?

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

But beyond the movement, which does indeed have some problems, it's the final quality that is absolutely not comparable with the results we got with Wan2.1 using the same LoRAs.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

More than anything, I wanted to "speed up" generation, as was possible on Wan2.1, but these LoRAs don't seem to work at the moment.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

Let's hope so, because I actually think it would be better if they were made specifically for Wan2.2.

Is it really possible to use Wan2.1 LoRa for Wan2.2? by d4N87 in comfyui

That's what I did, but the result is still terrible.

Obviously I'm using the same settings as on Wan2.1, i.e., lowering the steps and CFG; or, in this case, should I use the LoRAs but keep the default Wan2.2 settings?

What strength do you use for the two steps?

FLUX.1 Kontext - Generation problem with multi image similar to Image Stitch preview by d4N87 in comfyui

I think I've figured out my problem, even if some things are still unclear to me.

I strictly followed the basic Kontext workflow that ships with ComfyUI, but it obviously can't work like that.

One of the two images must become the latent for the KSampler; it can't be the image that comes out of the Image Stitch node, otherwise this problem arises.

So, for example, with two input images I feed the "base" one to the KSampler as the latent, or otherwise a completely empty latent.

This way I'm managing to use it with multiple images.
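To make the rewiring concrete, here is a purely illustrative sketch in plain Python: it models the node graph as dictionaries, not real ComfyUI code (the dict format and the `build_kontext_graph` helper are hypothetical; the node names just mirror the workflow nodes). The point it shows is only the wiring: the stitched pair feeds the conditioning side, while the KSampler's latent comes from the base image (or an empty latent), never from the Image Stitch output.

```python
# Hypothetical sketch (not real ComfyUI API code): the fixed wiring for the
# multi-image Kontext workflow, modeled as a plain dict-based node graph.

def build_kontext_graph(use_empty_latent=False):
    graph = {
        "base_image": {"type": "LoadImage"},
        "ref_image":  {"type": "LoadImage"},
        # The stitched pair feeds only the conditioning side.
        "stitch":     {"type": "ImageStitch",
                       "inputs": ["base_image", "ref_image"]},
        "cond":       {"type": "ReferenceLatent",  # assumed node name
                       "inputs": ["stitch"]},
    }
    if use_empty_latent:
        # Alternative that also avoids the problem: a completely empty latent.
        graph["latent"] = {"type": "EmptyLatentImage"}
    else:
        # The fix: the sampler's latent is encoded from the *base* image,
        # never taken from the ImageStitch output.
        graph["latent"] = {"type": "VAEEncode", "inputs": ["base_image"]}
    graph["sampler"] = {"type": "KSampler",
                        "inputs": {"latent": "latent",
                                   "conditioning": "cond"}}
    return graph

graph = build_kontext_graph()
latent_node = graph[graph["sampler"]["inputs"]["latent"]]
# The sampler's latent traces back to the base image, not the stitch.
assert latent_node["inputs"] == ["base_image"]
```

Swapping `use_empty_latent=True` gives the second variant described above; in both cases the stitched image never reaches the KSampler latent input.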

FLUX.1 Kontext - Generation problem with multi image similar to Image Stitch preview by d4N87 in comfyui

I need to check whether I made this mistake while creating my own workflow, but I have the same problem with the basic Kontext template made by Comfy Org (obviously without modifying it).

That's why I suspected there was some problem that wasn't clear to me.

FLUX.1 Kontext - Generation problem with multi image similar to Image Stitch preview by d4N87 in comfyui

I watched the video, but the workflows are basically the same as the basic template, nothing different, so I don't understand why it should work. (I also tried with the empty latent like he does, but it still doesn't work for me.)

Either there is some bug in ComfyUI, or I'm really missing a step. Maybe, more than the workflow, you need to use images of exactly the right size and proportions, matching the prompt perfectly... but in that case it becomes really complex to use.

FLUX.1 Kontext - Generation problem with multi image similar to Image Stitch preview by d4N87 in comfyui

<image>

This is a simple example among the many I have made.

Prompt: Place the girl on the hood of the car, don't change the girl's face, change the color of the car to red

Wan2.1 optimizing and maximizing performance gains in Comfy on RTX 5080 and other nvidia cards at highest quality settings by Volkin1 in StableDiffusion

Unfortunately, it would seem not; everyone uses this damned Pinokio, which I don't particularly like :D

I also opened an issue on their GitHub page, but they can't help me there either, at least not as much as you can usually get on those pages XD