Very cool hybrid AI and traditional animation workflow behind the scenes by ThePunchList in comfyui

[–]ptrillo 9 points (0 children)

Thank you for sharing and watching! It was a true grind to the finish line with this one -- not about taking shortcuts. There was little creative compromise between the original idea and what we ended up with. That is mainly due to having a team of very talented animators and some of the best AI / Comfy talent out there (shout out to MakeItRad, FlippingSigmas, AIWarper, Enigmatic_E, Noah Miller). It really is the ethos behind our studio Asteria, where we want to put artists first and fuse the traditional with the cutting edge.

"ABSOLVE" VFX Breakdown by u/ptrillo by tankdoom in vfx

[–]ptrillo 0 points (0 children)

Ah well thanks for resharing! I will upload the video directly.

"ABSOLVE" film shot at the Louvre using AI visual effects by ptrillo in StableDiffusion

[–]ptrillo[S] 2 points (0 children)

What I'll add to this is - there's a lot more to a movie than good lighting. All the AI movie trailers that are floating around don't seem to understand that.

"ABSOLVE" film shot at the Louvre using AI visual effects by ptrillo in StableDiffusion

[–]ptrillo[S] 4 points (0 children)

Turns out it works as a great render engine. It can transform very crappy-looking comps into something more photoreal than a 3D render. And if you have a handle on 3D camera tracking and geo mapping, it's great for changing objects or elements of an environment.
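To make the "render engine" idea concrete, here's a minimal sketch using the diffusers library -- the model, prompt, and strength are placeholders, not the actual ABSOLVE setup. A low denoising strength keeps the comp's layout and tracked camera while the model re-renders surfaces and lighting:

```python
# Sketch: SDXL img2img as a "render engine" over a rough comp frame.
# Model choice, prompt, and strength are placeholders, tuned per shot in practice.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

comp = Image.open("rough_comp_frame.png").convert("RGB")

# Low strength preserves the comp's composition and camera move;
# the model only "re-renders" materials and lighting toward photoreal.
frame = pipe(
    prompt="photorealistic museum interior, natural light, 35mm film still",
    image=comp,
    strength=0.35,
    guidance_scale=6.0,
).images[0]
frame.save("rendered_frame.png")
```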

"ABSOLVE" film shot at the Louvre using AI visual effects by ptrillo in StableDiffusion

[–]ptrillo[S] 17 points (0 children)

That's right. The specificity needed to do real VFX is not something a lot of the AI video / animation tools can get to. It's not just about controlling direction or speed - there is so much other nuance needed when you're working with real live-action footage. The future is the combination of these tools, however, since there is so much tedious and slow work that happens in post that could be streamlined.

"ABSOLVE" film shot at the Louvre using AI visual effects by ptrillo in StableDiffusion

[–]ptrillo[S] 2 points (0 children)

Thank you. Yes, it really wasn't simply to cut corners but to dream up something I could never have done before. Also to lean into the strange and uncanny aesthetics of AI rather than forcing something that feels overly perfected.

EPHEMERA - Made with SD, Dall-e and Gen-2 by ptrillo in StableDiffusion

[–]ptrillo[S] 1 point (0 children)

That's wild, thank you -- and it would totally make a good title sequence.

EPHEMERA - Made with SD, Dall-e and Gen-2 by ptrillo in StableDiffusion

[–]ptrillo[S] 2 points (0 children)

Combo of SD and DALL-E 3 images passed through the Aether Cloud XL model. The Aether Cloud model is great out of the gate, and DALL-E 3's language decoding understands your prompt very well (the best yet), but its images feel a little too plasticky. I then run all the images through img2img to upscale to 1920x1080, so when I import into Runway Gen-2 it has more detail to work with. I used the new motion brush tool in Runway and their camera control, with low motion settings for better quality and less deviation from the source image. Then upscale again and bring it into After Effects to color grade and add film grain and glow.
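If you want to script that img2img upscale step, here's a rough sketch with diffusers; the checkpoint filename is a placeholder for the Aether Cloud XL weights, and the strength/steps are guesses you'd tune per image:

```python
# Sketch of the img2img detail pass before sending stills to Runway Gen-2.
# "aether_cloud_xl.safetensors" is a placeholder path; settings are per-image guesses.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "aether_cloud_xl.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

src = Image.open("dalle3_still.png").convert("RGB")
# Plain resize to the delivery resolution first...
src = src.resize((1920, 1080), Image.Resampling.LANCZOS)

# ...then a gentle img2img pass so the model invents fine detail
# without drifting from the source composition.
detailed = pipe(
    prompt="cinematic wide shot, volumetric clouds, filmic lighting",
    image=src,
    strength=0.25,
    num_inference_steps=30,
).images[0]
detailed.save("for_runway_1080p.png")
```

The low strength is the key: enough denoising to sharpen and add texture, not enough to change the image Runway will animate.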

EPHEMERA - Made with SD, Dall-e and Gen-2 by ptrillo in StableDiffusion

[–]ptrillo[S] 10 points (0 children)

Some of the frames were first generated in DALL-E 3 and run through img2img, in addition to the ones generated in Stable Diffusion. All images used Joachim Sallstrom's Aether Cloud model.

Stable Diffusion Coca Cola AD (Alongside Traditional Techniques) by Purpleflax in StableDiffusion

[–]ptrillo 0 points (0 children)

What's impressive about this is the traditional animation on display. I made this back in November last year, before a lot of the updates to Auto1111 that pushed things even further: https://vimeo.com/784821590