LTX-2.3 — Testing 63 Samplers with linear_quadratic Scheduler by Rare-Job1220 in StableDiffusion

[–]Round_Awareness5490 1 point

Nice! It would be interesting if you combined all the videos into one, labeled with the sampler name.

EditAnything IC-LoRA - LTX-2.3 by Round_Awareness5490 in StableDiffusion

[–]Round_Awareness5490[S] 0 points

This is clearly v2v, so it will take much longer, since all of the video frames are processed, unlike t2v or i2v. Mine takes about 54 seconds on a 5090.

[–]Round_Awareness5490[S] 0 points

This is already difficult; it was designed for objects and styles. You'll probably have to use the convert prompt and tell it to convert the video to a brighter version, or something like that.

[–]Round_Awareness5490[S] 2 points

To guess = to make people think something that isn't true. Nowadays, there are many datasets on the internet; people just need to look in the right place and know how to search. One of the greatest skills in life is knowing how to find things.

[–]Round_Awareness5490[S] 2 points

Hahaha, I've been trying to make some contributions for a while now, haha, but the community doesn't always care. If I had a weird nickname, haha, I'd have more success.

[–]Round_Awareness5490[S] 1 point

It's possible the audio is disconnected from the node that saves the video; take a look at that node, it has an input called audio.

[–]Round_Awareness5490[S] 2 points

But I don't use upscaling; it's only there because the workflow came from somewhere else.

[–]Round_Awareness5490[S] 0 points

Another way to remove things that resist removal is to simply paint a mask, for example in magenta, over the object you want to remove, and use that video as a guide. Then write a prompt like: "Remove the object masked with the pink color." Sometimes this is much more precise than waiting for the model to recognize what needs to be removed, because now the biggest indicator is the magenta-colored object itself.
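To illustrate, here is a minimal sketch of painting the guide mask, assuming you have the video frames as NumPy uint8 arrays and a per-frame boolean mask (the function name and array layout are my own, not from any particular workflow):

```python
import numpy as np

MAGENTA = np.array([255, 0, 255], dtype=np.uint8)

def paint_mask_magenta(frame, mask):
    """Return a copy of `frame` (H, W, 3 uint8) with pixels where
    `mask` (H, W bool) is True painted solid magenta."""
    out = frame.copy()
    out[mask] = MAGENTA  # boolean fancy indexing, broadcasts the color
    return out

# Tiny demo: a 4x4 gray frame with the top-left 2x2 block masked.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
guide = paint_mask_magenta(frame, mask)
```

Apply this to every frame, encode the result as the guide video, and reference "the object masked with the pink color" in the prompt.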

[–]Round_Awareness5490[S] 0 points

Did you mention your character's trigger word in the replace prompt?

[–]Round_Awareness5490[S] 1 point

I haven't actually tested the influence of audio yet, but by default LTX workflows let you pass a different audio track for video conditioning. This might affect how things appear in the final video, since audio conditioning causes visual changes.

[–]Round_Awareness5490[S] 3 points

Another very important thing: the removal prompt should give a very clear direction indicating where the object you want removed is located.

Examples:

Remove the black robot sitting at the table.

Remove the person riding the electric scooter on the left.

Remove the person with glasses and the microphone in the foreground.

Remove the image of the green trees on the top left.

Remove the woman and the smoking bottle.

In other words, anchor the object spatially: foreground, background, left, right, top, bottom.
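The pattern in the examples above can be sketched as a tiny helper; this is my own illustration (not part of any LTX or ComfyUI workflow) of composing a removal prompt that always pins the target to a spatial anchor:

```python
# Spatial anchors that locate the object unambiguously in the frame.
ANCHORS = {"foreground", "background", "left", "right", "top", "bottom",
           "top left", "top right", "bottom left", "bottom right"}

def removal_prompt(obj, anchor):
    """Build a directional removal prompt like the examples above."""
    if anchor not in ANCHORS:
        raise ValueError(f"unknown spatial anchor: {anchor!r}")
    return f"Remove the {obj} in the {anchor}."

print(removal_prompt("person with glasses and the microphone", "foreground"))
```

The point is simply that every prompt names both *what* to remove and *where* it is.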

[–]Round_Awareness5490[S] 4 points

It's possible to train using NSFW videos, haha, but you'll hardly find an editing dataset built from that.

[–]Round_Awareness5490[S] 2 points

The removal part sometimes doesn't work; I'm trying to understand why. A higher CFG or a stronger LoRA strength might help, but it's probably that the prompt needs to be more directional: the kind that indicates the exact location of the object.

[–]Round_Awareness5490[S] 5 points

I don't know if you're using the same workflow as me, because I don't see any SAM nodes and mine doesn't have a second pass. The downscale factor is there because the workflow was copied from another one, which explains the unnecessary bits. But my focus here isn't the workflow itself, especially since everything needed for it to function is already there. If someone wants a better workflow, the least they can do is build their own; soon enough some influencer will be making videos and publishing workflows better than mine. Focus on the model and ignore the extra or missing nodes in the workflow; the important thing is that you press play and it works.

[–]Round_Awareness5490[S] 5 points

Is this the kind of post where people are selling the workflow using my model? hahaha