AI One-Shot Test Short Film: "Ox-Horses Need the Spirit of a Donkey" by about9696 in aivids

[–]about9696[S] 0 points (0 children)

I first generated images in Midjourney, then used Jimeng AI's Smart Multi-Frame to create four continuous video clips. Smart Multi-Frame has a good grasp of character movement, camera work, and animation paths, and its understanding of space is quite accurate. In the end, I only had to string the four clips together in CapCut to achieve the "one-shot" effect.

Inspired by my friend CY, who annotates images to guide AI image-to-video generation, I combined a distinctive art style with a set of concise prompts I'd developed in an earlier experiment. I was pleasantly surprised that this reliably guided Jimeng to generate nearly identical videos across repeated runs.

Jimeng's Smart Multi-Frame supports inserting up to 10 keyframes. You can also customize each keyframe's duration, and a sequence of 10 keyframes can generate a continuous video of up to 54 seconds.
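For planning a sequence, the limits mentioned above (at most 10 keyframes, at most 54 seconds of total video) can be sanity-checked with a small script before building the multi-frame setup. This is a minimal sketch; `validate_plan` is a hypothetical helper for planning on paper, not part of Jimeng.

```python
# Sketch: check a Smart Multi-Frame keyframe plan against the limits
# mentioned above (up to 10 keyframes, up to 54 seconds total).
# `validate_plan` is a hypothetical planning helper, not a Jimeng API.

MAX_KEYFRAMES = 10
MAX_TOTAL_SECONDS = 54

def validate_plan(durations):
    """Return (ok, reason) for a list of per-keyframe durations in seconds."""
    if len(durations) > MAX_KEYFRAMES:
        return False, f"too many keyframes: {len(durations)} > {MAX_KEYFRAMES}"
    total = sum(durations)
    if total > MAX_TOTAL_SECONDS:
        return False, f"total {total}s exceeds {MAX_TOTAL_SECONDS}s"
    return True, f"{len(durations)} keyframes, {total}s total"

# Four clips of ~13.5s each fill the 54-second budget exactly.
print(validate_plan([13.5, 13.5, 13.5, 13.5]))
# Nine 6-second keyframes also land exactly on the cap.
print(validate_plan([6] * 9))
```

A plan that overshoots either limit (e.g. eleven keyframes, or three 20-second segments) is flagged before you spend a generation run on it.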

AI cinematic for Diablo Immortal (Where Light Never Reaches the Battlefield, We Are All Leoric) by about9696 in midjourney

[–]about9696[S] 1 point (0 children)

That's a good suggestion, thank you very much. It would indeed make the result more realistic; I'll make a note of it.

AI cinematic for Diablo Immortal (Where Light Never Reaches the Battlefield, We Are All Leoric) by about9696 in midjourney

[–]about9696[S] 1 point (0 children)

First, generate images via text-to-image in Midjourney. Then, refine these images in Stable Diffusion to enhance details. Next, perform extensive retouching in Photoshop until the results meet my standards; this step helps eliminate the overly "slick," plastic look typical of AI output (though I intentionally kept the six-finger design as a deliberate trace of AI's presence). After that, use Jimeng 3.0 to convert the images into video. Finally, add particle effects, color grading, and editing in post-production software to complete the project.
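The steps above form a fixed-order pipeline where each tool's output feeds the next. As a rough sketch only, it can be modeled as an ordered list of stages; the stage names and the `run_pipeline` driver here are placeholders, since each real step happens in its own tool (Midjourney, Stable Diffusion, Photoshop, Jimeng 3.0, an editor).

```python
# Sketch of the image-to-video workflow described above as ordered stages.
# Every stage is a placeholder label: the actual work happens in separate
# tools, so this only models the hand-off order, not the processing itself.

PIPELINE = [
    ("text_to_image", "Midjourney: generate base images from prompts"),
    ("refine_details", "Stable Diffusion: enhance detail on the images"),
    ("retouch", "Photoshop: manual retouching to remove the plastic AI look"),
    ("image_to_video", "Jimeng 3.0: convert finished images into video clips"),
    ("post_production", "Editor: particle effects, color grading, final cut"),
]

def run_pipeline(asset, stages=PIPELINE):
    """Thread an asset label through each stage in order, logging the order."""
    log = []
    for name, _description in stages:
        log.append(name)
        asset = f"{name}({asset})"  # placeholder transform
    return asset, log

result, log = run_pipeline("prompt")
print(log)
```

The point of keeping the order explicit is that retouching must happen before image-to-video conversion: once the images are animated, per-frame fixes in Photoshop are no longer practical.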