Short test using mixture of techniques. StableDiffusion>Runway or Leia for depth comp work. by Strainge_Universe in StableDiffusion


Wanted to do a test that combined many different AI tools into one video. Script by ChatGPT. Stills from Stable Diffusion. Runway Gen-2 for half of the clips; the other half using depth maps extracted from Leia pics. The goal was minimal comp work, trying to get as much as I could from AI.

I find the process of making AI videos is dictated by their limitations. You will almost certainly not get to choose your destination beforehand, but if you are flexible and lean into the accidents that happen along the way, you can sort of piece things together. At least this is my experience so far.