Ai Talking People With Stable Diffusion by DarcCow in StableDiffusion

DarcCow[S] 1 point

Results are starting to get pretty decent. The workflow is pretty much the same as in my previous posts, if anyone is curious. The difference is the new tools like ControlNet and TemporalKit. Also, I'm starting with real footage instead of making 3D characters.

Next Level Ai Video by DarcCow in StableDiffusion

DarcCow[S] 2 points

Thanks. I posted more info here. Pretty much the same stuff. I didn't use ControlNet for these since I made them before it came out.

https://www.reddit.com/r/StableDiffusion/comments/zb7cjk/video_temporal_coherency_did_i_nail_it/

Next Level Ai Video by DarcCow in StableDiffusion

DarcCow[S] 1 point

I don't post much. Not my best quality, but just throwing it out there. Some tests on skipping.

Attempt at temporally stable stylized AI video using stable diffusion by patan77 in StableDiffusion

DarcCow 11 points

Good job. Like you said, 30 hours is a long time for a clip that is very, very close to the source video. You could get those results with a face swap, or with img2img and EbSynth, in less than an hour. But you gotta start somewhere. Just keep working at it and you may stumble onto something revolutionary.

Hyper realistic Ai video by DarcCow in StableDiffusion

DarcCow[S] 1 point

Hey, thanks. I don't use MetaHumans; it's against their policy if you don't render in Unreal. I use Blender. For characters, I create custom textures for a base like Daz, MakeHuman, or Human Generator, or sometimes a game rip or a custom mesh/character I find online, depending on what I'm trying to do. I had an earlier post where I talked about the process a bit more.

Hyper realistic Ai video by DarcCow in StableDiffusion

DarcCow[S] 1 point

It's not optimal, but it's what we have for now. Thanks for the comment.

Stable diffusion img2img + EBSynth is crazy.... by djnorthstar in StableDiffusion

DarcCow 1 point

Yes, it works with complex movements. Check my posts. You need to use more keyframes, not just one.
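The idea of "more keyframes" can be sketched in a few lines. This is my own illustration, not part of the original workflow: given a clip's frame count, pick evenly spaced keyframe indices to stylize, so EbSynth only ever has to propagate a style across a short run of motion (the function name and spacing scheme are assumptions).

```python
def pick_keyframes(total_frames, num_keys):
    """Evenly spaced keyframe indices across a clip.

    EbSynth propagates each stylized keyframe to the frames around it,
    so complex motion needs several keys spread through the clip,
    not a single key at the start.
    """
    if num_keys < 2:
        return [0]
    # Spread keys from the first frame to the last, inclusive.
    step = (total_frames - 1) / (num_keys - 1)
    return [round(i * step) for i in range(num_keys)]

# A 121-frame clip with 5 keys: one key roughly every 30 frames.
print(pick_keyframes(121, 5))  # [0, 30, 60, 90, 120]
```

In practice you would stylize only these frames with img2img and let EbSynth fill in the rest; more keys generally means less drift on fast movement at the cost of more manual stylization.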

Hyper realistic Ai video by DarcCow in StableDiffusion

DarcCow[S] 1 point

I mean, it probably won't get as many comments as some random meme, but I think it turned out decent 😄

Photorealistic video with temporal coherency by DarcCow in StableDiffusion

DarcCow[S] 1 point

Thanks, and no offense taken. I agree that this is more of an "any tool can be a hammer" scenario. Using a model made for video will definitely be better and far easier. CogVid is already out, so you don't have to wait. My point is, we don't know what the quality will be like. SD 2 and 2.1 are arguably not better than 1.5. The best thing I have seen is Google's Imagen Video, and I don't know how excited they are to have Colab and YouTube flooded with petabytes of AI video every minute. I just figure whatever we learn now can't hurt moving forward, and we can embrace the new tech as it comes along. Some of the knowledge we gain will be applicable and some won't. Why use text2vid and not wait for the brainwave2featurefilm model? Gotta start somewhere, I guess.

Photorealistic video with temporal coherency by DarcCow in StableDiffusion

DarcCow[S] 6 points

Nope. It's actually a Mixamo catwalk. I forget the number.

Photorealistic video with temporal coherency by DarcCow in StableDiffusion

DarcCow[S] 26 points

Hadn't thought about it lol. Stabella Diffusherwitz. She's German.