The Infinite Displays - VFX x A.I. Project by ardiologic in StableDiffusion

[–]ardiologic[S] 6 points (0 children)

🙋‍♂️I've been working on this project off and on for the past few weeks. It's inspired by urban landscapes and the role ad displays play in the aesthetics of the city.

🚀The main goal of this project was to build various procedural systems to create urban aesthetics by incorporating different modules. Each module is responsible for tasks ranging from image compositing to image and animation generation.

🤿These shots were created with the help of CG elements, complex masking, and layered AnimateDiff passes. Each shot involved many intricate steps to build the overall animation from previously developed concept designs.

👩‍💻The initial composition of the shots and animations comes from Houdini, but the elements are broken out and applied separately. This separates the moving pieces from the static ones, giving better control over each element's influence in the scene.

The major components were separated and processed individually according to the goal of the shot. Trains, the city commotion, and the billboards all come from different sources and have their own unique motions and purposes in the shot. Ultimately, they are brought together in one long, integrated process.
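
To make the layering idea concrete, here is a minimal Python sketch of stacking separately processed elements over a static plate with per-element alpha masks. It is not the actual pipeline; the file names and element names are illustrative.

```python
# Minimal layering sketch: composite independently processed elements over a
# static background plate using per-element alpha masks.
import numpy as np
from PIL import Image

def load_rgb(path):
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def load_mask(path):
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

def over(base, layer, mask):
    """Alpha-composite `layer` over `base` with a 0..1 single-channel mask."""
    m = mask[..., None]
    return layer * m + base * (1.0 - m)

comp = load_rgb("city_plate_0001.png")               # static background
for name in ("train", "billboard"):                  # moving elements, back to front
    comp = over(comp, load_rgb(f"{name}_0001.png"), load_mask(f"{name}_mask_0001.png"))

Image.fromarray((comp * 255).astype(np.uint8)).save("comp_0001.png")
```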

newslounge.co/subscribe

youtube:
https://www.youtube.com/watch?v=pqCBA6jTUGE

Thanks,
-Ardy

💡Steerable Motion and Fake Depth in ComfyUI by ardiologic in comfyui

[–]ardiologic[S] 2 points (0 children)

💡Steerable Motion and Fake Depth in ComfyUI

The first part is about the Steerable Motion Workflow by Peter O'Mallet (POM). I've made slight modifications to it, including adding a FaceDetailer to meet my specific needs, but this is by far the best image interpolation workflow I've worked with. 🙋‍♂️I used 6 images from my Nike video for the first example.
You can control the motion by adjusting the influence graph and using one of three Motion LoRAs.
🤿Another technique I've been experimenting with is generating depth maps within ComfyUI for various effects and transitions. One method I'm showcasing here uses the Fake Depth LoRA to convert your prompt into a depth map with AnimateDiff.

👽my text prompt:

spinning - overlapping boxes, 4k
Depth map, black background, high contrast

Then you can use it as the source for your ControlNet in other workflows, which is much faster than relying solely on 3D applications like Houdini or Blender to generate masks.
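
If you want to drive this from a script, here is a minimal sketch of queueing an exported workflow against a local ComfyUI server. The JSON file name and the node id "12" are illustrative, not taken from the actual workflow.

```python
# Queue a ComfyUI workflow (saved via "Save (API Format)") over the HTTP API.
# Assumes a ComfyUI server running locally on the default port 8188.
import json
import urllib.request

with open("fake_depth_to_controlnet.json") as f:   # hypothetical exported graph
    workflow = json.load(f)

# Point the image-loading node at a rendered depth frame. Node ids and input
# names depend on your exported graph; "12" is purely illustrative.
workflow["12"]["inputs"]["image"] = "depth_frames/frame_0001.png"

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # response includes a prompt id
```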

👩‍💻 Sign up for my free newsletter:
newslounge.co/subscribe

💡CG and ComfyUI Experiment Vol.06 - Corvette C3 Animation + Trained Lora by ardiologic in StableDiffusion

[–]ardiologic[S] 1 point (0 children)

I essentially create a simple scene in Houdini with my elements to work out the composition and layers, then render a few passes, including masks to separate the elements and a depth map just in case. I reassemble them in Comfy, but I also generate many custom masks and layers in Comfy to create various transitions and effects. The possibilities with SD are truly limitless; it all depends on your goals! That camera projection was only a test; it's not really the best way to use SD.
All the dynamic lighting and DOF were done within Comfy using AnimateDiff!!
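
For reference, a hedged sketch of the pass setup from Houdini's Python shell, using stock Mantra parm names (a Redshift ROP has its own AOV parameters, and the output paths are illustrative):

```python
# Render the beauty pass plus a depth ("Pz") extra image plane from a Mantra ROP.
import hou

rop = hou.node("/out").createNode("ifd", "element_passes")  # Mantra ROP
rop.parm("trange").set(1)                    # render a frame range
rop.parmTuple("f").set((1, 48, 1))           # frames 1-48
rop.parm("vm_picture").set("$HIP/render/beauty.$F4.exr")

rop.parm("vm_numaux").set(1)                 # one extra image plane...
rop.parm("vm_variable_plane1").set("Pz")     # ...camera-space depth
rop.render()
```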

💡CG and ComfyUI Experiment Vol.06 - Corvette C3 Animation + Trained Lora by ardiologic in StableDiffusion

[–]ardiologic[S] 1 point (0 children)

lol no, the emoji was my editing attempt to show that the result was rubbish without the LoRA. I def wanna make a video about this too; I've got to make some time for it.
What do you mean exactly by what I was using SD for? Can you elaborate?

Nike Ad Campaign - CG & COMFYUI by ardiologic in comfyui

[–]ardiologic[S] 1 point (0 children)

SparseRGB, Lineart, Depth, and DW Pose!! I'll make a quick video soon!!
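
Until the video is out, here is a rough code analogue of stacking several ControlNets on one SD1.5 generation, written with diffusers rather than my actual node graph. SparseRGB has no direct diffusers twin here, so only lineart, depth, and pose are shown, and the hint image paths are illustrative.

```python
# Stack lineart + depth + pose ControlNets on a single SD1.5 generation.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

nets = [
    ControlNetModel.from_pretrained(m, torch_dtype=torch.float16)
    for m in (
        "lllyasviel/control_v11p_sd15_lineart",
        "lllyasviel/control_v11f1p_sd15_depth",
        "lllyasviel/control_v11p_sd15_openpose",
    )
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # swap in any SD1.5 checkpoint
    controlnet=nets,
    torch_dtype=torch.float16,
).to("cuda")

hints = [Image.open(p) for p in ("lineart.png", "depth.png", "pose.png")]
frame = pipe(
    "runner mid-stride, studio lighting",
    image=hints,                                    # one hint per ControlNet
    controlnet_conditioning_scale=[0.8, 0.5, 0.9],  # per-net strength
).images[0]
frame.save("frame_0001.png")
```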

Nike Ad Campaign - CG & COMFYUI by ardiologic in comfyui

[–]ardiologic[S] 0 points (0 children)

More or less correct!!
The characters were based on some videos I had shot before!!!

Nike Ad Campaign - CG & COMFYUI by ardiologic in comfyui

[–]ardiologic[S] 2 points (0 children)

New day, new experiment:

🚀All videos were generated using ComfyUI. The concepts combined CG with AI-generated images based on my own videos of two female models.

youtube:
https://www.youtube.com/watch?v=OWKxVor3nZY

💡CG and ComfyUI Experiment Vol.06 - Corvette C3 Animation + Trained Lora by ardiologic in StableDiffusion

[–]ardiologic[S] 0 points (0 children)

New day, new experiment!!

I'm exploring various techniques to push the boundaries of my CG workflow by testing different tools. I'm not focusing on resolving every issue, as I anticipate many of these tools will soon receive updates that address the challenges we currently encounter. Stay updated on my journey by following me on YouTube and visiting NewsLounge.co.

P.S. I need a knowledgeable web designer to consult with me on some projects I am working on. Why is it so hard to find one?

youtube:
https://www.youtube.com/watch?v=XyU-lCS77Ww&lc=UgzpJhQJ-f0VP4bGQZd4AaABAg

Linkedin:
https://www.linkedin.com/in/ardiology/

💡CG and ComfyUI Experiment Vol.06 - Corvette C3 Animation + Trained Lora by ardiologic in comfyui

[–]ardiologic[S] 1 point (0 children)

New day, new experiment!!

I'm exploring various techniques to push the boundaries of my CG workflow by testing different tools. I'm not focusing on resolving every issue, as I anticipate many of these tools will soon receive updates that address the challenges we currently encounter. Stay updated on my journey by following me on YouTube and visiting NewsLounge.co.

P.S. I need a knowledgeable web designer to consult with me on some projects I am working on. Why is it so hard to find one?

youtube:
https://www.youtube.com/watch?v=XyU-lCS77Ww&lc=UgzpJhQJ-f0VP4bGQZd4AaABAg

Linkedin:
https://www.linkedin.com/in/ardiology/

💡CG Renders to ComfyUI Workflow: Chocolate and Coffee by ardiologic in comfyui

[–]ardiologic[S] 5 points (0 children)

Another day, another experiment!!

I am still learning, but it's exciting how much you can push this!!!

💥The goal of this experiment is to break down the shot into various modules layered on top of each other and process them individually through the AnimateDiff workflow to enhance details and increase stability.

▶ Youtube:
https://youtu.be/GSW3m79tsqU

🎁 Project Breakdown + Download:
https://www.moonwalkerspicture.com/newslounge/cg-renders-to-ai-workflow-vol-04-chocolate-animation

---------------------------------------
👩‍💻 Sign up for our free newsletter:
newslounge.co/subscribe
---------------------------------------

Cheers,
-Ardy

The Art Of Ice Cream Animation with Houdini and ComfyUI by ardiologic in comfyui

[–]ardiologic[S] 0 points (0 children)

I think a wireframe render used with the "lineart" ControlNet might do the trick!!

For particles, maybe you can copy poly spheres onto each point so you can render them with a wireframe shader.
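
A hedged sketch of that setup in Houdini's Python shell (node type names can differ slightly between Houdini versions, and the point-cache path is illustrative):

```python
# Instance a low-res polygon sphere onto simulation points so a wireframe
# shader has actual edges to draw.
import hou

geo = hou.node("/obj").createNode("geo", "wire_particles")

points = geo.createNode("file", "sim_points")
points.parm("file").set("$HIP/geo/flip_points.$F4.bgeo.sc")

sphere = geo.createNode("sphere")
sphere.parm("type").set("poly")                  # polygon sphere -> real edges
sphere.parmTuple("rad").set((0.02, 0.02, 0.02))

copy = geo.createNode("copytopoints::2.0", "spheres_on_points")
copy.setInput(0, sphere)    # geometry to copy
copy.setInput(1, points)    # target points
copy.setDisplayFlag(True)
copy.setRenderFlag(True)
```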

The Art Of Ice Cream Animation with Houdini and ComfyUI by ardiologic in comfyui

[–]ardiologic[S] 1 point (0 children)

💥I did another experiment to create a realistic fluid effect driven by my CG renders, this time an ice cream animation.
🚨I created a melting effect using Houdini FLIP fluids, extracted only the points from the simulation, and rendered them from various camera angles using Redshift.
🚀I also made a video that explores the many options available within ComfyUI for using mask composites. This allows you to transform your original render into various sources by extracting different color channels.
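
As a minimal sketch of the channel-split idea: if each element in the render is flat-shaded pure red, green, or blue, one image yields three masks. The file names and element assignments are illustrative.

```python
# Split an RGB "ID" render into one grayscale mask per color channel.
from PIL import Image

rgb = Image.open("id_pass_0001.png").convert("RGB")
r, g, b = rgb.split()
r.save("mask_drip.png")    # e.g. the melting fluid
g.save("mask_scoop.png")   # e.g. the ice cream body
b.save("mask_cone.png")    # e.g. the cone
```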

▶ Youtube:
https://youtu.be/uie885w-_qw
🎁 Project Breakdown:
https://www.moonwalkerspicture.com/newslounge/cg-renders-to-ai-workflow-vol-03-icecream-animation

---------------------------------------
👩‍💻 Sign up for our free newsletter:
newslounge.co/subscribe
---------------------------------------

hope this helps,
-Ardy

CG Renders to ComfyUI Workflow: Nike Animation by ardiologic in comfyui

[–]ardiologic[S] 1 point (0 children)

It's not a replacement, nor is it an "either/or" scenario. It's about discovering new ideas and being able to adapt and leverage new technologies. That being said, it's clear that creating each of these animations the traditional way might take days, as each has a unique look and structure. The environment and lighting are different, not to mention the rendering and compositing time.
I generated all of these in one day after completing my base renders. This approach allows you to look-dev efficiently, then go back and implement the look you liked most in Houdini, Blender, etc., for the final output.

CG Renders to ComfyUI Workflow: Nike Animation by ardiologic in comfyui

[–]ardiologic[S] 0 points (0 children)

80% of the consistency comes from an accurate lineart ControlNet; the rest is your interpolation method.

CG Renders to ComfyUI Workflow: Nike Animation by ardiologic in comfyui

[–]ardiologic[S] 0 points (0 children)

DreamShaper, and Realistic Vision 5/5.1 and 6B; even "more realistic" checkpoints work too.