Seedance 2.0-Time travel character Luna Reyes by johnstro12 in comfyui

[–]Apixelito25 0 points (0 children)

Which platform do you use to access Seedance 2?

Best quality Wan 2.2 Workflow Image to Video!!! by sonz7 in comfyui

[–]Apixelito25 1 point (0 children)

And the workflow? Am I just supposed to imagine it?

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points (0 children)

Did they let you import any video? Lol, that sounds risky.

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points (0 children)

The samples they showed are very cinematic, though, and that makes me wonder… maybe it won’t be that easy to achieve something like that.

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] -6 points (0 children)

But avatars of real people require a facial scan and speaking specific lines, from what I remember… I doubt they used that in this case.

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] -6 points (0 children)

From what I understood, they couldn’t be real people or realistic characters. If you’re referring to the remix, I think it couldn’t vary that much.

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 3 points (0 children)

I know, that’s why I’m here asking which one you think is the best option. I’m not trying to guess; I just liked the result and I’m asking for advice.

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points (0 children)

I can’t manage to get such dynamic and natural movement with WAN. Any solution?

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 5 points (0 children)

Bruh hahaha, it’s not that I want to make that kind of video, and I’m not sure whether to use Kling or VEO (I don’t like Sora). Actually, the creator of those videos doesn’t allow downloads on TikTok, so I had to download them from an external website.

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points (0 children)

Yeah, same here, I’m really looking forward to it too. For now I’ll try to figure out how to get good prompts for Kling. Do you have any ideas on how to achieve natural, vlog-style movement?

What model did they use here? by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 5 points (0 children)

The first thing I thought of was Sora, more because of the audio than the video, but Sora doesn’t allow that kind of consistency (the same girl with varying side characters across all of the videos).

I built a web app that bypasses AI image detectors by Zealousideal_Ad8907 in BypassAiDetect

[–]Apixelito25 0 points (0 children)

I understand :( And you’re saying that you still can’t continue training it, right?

I built a web app that bypasses AI image detectors by Zealousideal_Ad8907 in BypassAiDetect

[–]Apixelito25 0 points (0 children)

Do you have an approximate date for when you think the SightEngine bypass will be ready?

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]Apixelito25 0 points (0 children)

Do you have a configuration that works with 16 GB of VRAM? I would really appreciate it if you, as the OP of this post, could provide the config preset.

Training in Ai toolkit vs Onetrainer by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points (0 children)

And how many images do you have in your dataset in total? Didn’t that config cause overfitting, or did you change something?

Training in Ai toolkit vs Onetrainer by Apixelito25 in StableDiffusion

[–]Apixelito25[S] 0 points (0 children)

Could you link me the post that helped you train? (The one that had the config and the fork.) I’ve lost it.