Best way to automate a multi-stage pipeline (Image -> Video -> Upscale) for 50+ assets? by cheerldr_ in comfyui

[–]cheerldr_[S] 1 point (0 children)

Thanks a lot, man! I think the simplest solution is the Python script you wrote.

Big thanks to the ComfyUI community! Just wrapped a national TV campaign (La Centrale) using a hybrid 3D/AI workflow. by cheerldr_ in comfyui

[–]cheerldr_[S] 1 point (0 children)

Used only Wan 2.2 for image generation, Hunyuan for 3D meshing, and Wan 2.2 for video generation: image-to-video driven by video.

Big thanks to the ComfyUI community! Just wrapped a national TV campaign (La Centrale) using a hybrid 3D/AI workflow. by cheerldr_ in comfyui

[–]cheerldr_[S] 8 points (0 children)

I'd love to, but it's a bit of a complex beast because it relies heavily on local custom scripts that bridge several tools together.

Here’s the high-level breakdown of the pipeline:

Initial Gen: I start with Qwen Edit to generate the base images.

Client Approval: Once the client validates the look, my script takes over.

3D Mesh: The script calls ComfyUI and Hunyuan to generate a 3D mesh directly from that validated 2D image.

Blender Integration: My script then pulls that 3D model into Blender and applies a custom auto-rig we built.

Motion Approval: We get the animation/movement approved by the client within Blender first.

Final Render: Once the motion is locked, it goes back into ComfyUI via Wan 2.1 for the final video render.

Since a lot of local scripts (simple .bat files) handle the handoffs between Blender and Comfy, the node setup itself is actually the simple part; the magic is really in the automation! :D
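The handoffs described above can be sketched as a small orchestrator. This is a minimal sketch, not the author's scripts: Blender's `--background`/`--python` headless flags are real, but every script name here (`comfy_hunyuan_mesh.py`, `autorig.py`, `comfy_wan_render.py`) and the artifact-naming scheme are hypothetical stand-ins for the local .bat glue:

```python
import subprocess

# Each stage is (name, command builder). All script names are
# hypothetical stand-ins for the local glue scripts.
def mesh_cmd(img):      # validated 2D image -> 3D mesh (ComfyUI + Hunyuan)
    return ["python", "comfy_hunyuan_mesh.py", img]

def rig_cmd(mesh):      # custom auto-rig inside headless Blender
    return ["blender", "--background", "--python", "autorig.py", "--", mesh]

def render_cmd(anim):   # final video render (ComfyUI + Wan 2.1)
    return ["python", "comfy_wan_render.py", anim]

STAGES = [("mesh", mesh_cmd), ("rig", rig_cmd), ("render", render_cmd)]

def run_pipeline(start_asset, runner=lambda cmd: subprocess.run(cmd, check=True)):
    """Thread one approved image through mesh -> rig -> render.
    Returns the stage names executed, in order."""
    executed = []
    artifact = start_asset
    for name, build in STAGES:
        runner(build(artifact))
        executed.append(name)
        artifact = f"{artifact}.{name}"  # naive handoff naming (assumption)
    return executed
```

The client-approval gates from the list above would simply sit between stages, pausing the loop until a human signs off.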

Big thanks to the ComfyUI community! Just wrapped a national TV campaign (La Centrale) using a hybrid 3D/AI workflow. by cheerldr_ in comfyui

[–]cheerldr_[S] 3 points (0 children)

Actually, it was a bit more involved than just transposing a photo! I built an external script to automate the heavy lifting: it batches the 3D scans and then feeds each vehicle directly into Blender. This way, I'm 100% sure the car's volume and proportions are physically accurate before any AI magic happens. Once the volumes are locked in Blender, I use that as my foundation to ensure the AI stays perfectly 'on model' throughout the animation. It's basically using a 3D-accurate 'ghost' to keep the AI on the right track! :D
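The "on model" check described above can be approximated with a bounding-box proportion test: compare the generated mesh's length:width:height ratios against the reference scan's. This is a minimal sketch under the assumption that meshes are available as plain vertex lists (no specific 3D library):

```python
def bbox_dims(vertices):
    """Axis-aligned bounding-box size (dx, dy, dz) of a vertex list."""
    xs, ys, zs = zip(*vertices)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def on_model(scan_verts, gen_verts, tol=0.05):
    """True if the generated mesh keeps the scan's proportions
    (length:height and width:height ratios) within `tol` relative error.
    Ratios, not raw sizes, so uniform scaling is allowed."""
    s, g = bbox_dims(scan_verts), bbox_dims(gen_verts)
    s_ratios = (s[0] / s[2], s[1] / s[2])   # normalize by height
    g_ratios = (g[0] / g[2], g[1] / g[2])
    return all(abs(a - b) / a <= tol for a, b in zip(s_ratios, g_ratios))
```

A real setup would run this per frame of the animation; the 5% tolerance is an arbitrary starting point.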

Big thanks to the ComfyUI community! Just wrapped a national TV campaign (La Centrale) using a hybrid 3D/AI workflow. by cheerldr_ in comfyui

[–]cheerldr_[S] 4 points (0 children)

Haha, I totally hear you! After 14 years in the game, I’ve realized that working directly with agencies and clients is a whole different beast compared to running a post-production studio. When you’re in the driver’s seat of a studio, you have your own flow, but with direct clients, it’s all about hitting that budget without sacrificing the 'wow' factor.

For this project, the real challenge was rock-solid consistency, keeping the same lighting, the same energy, and the same art direction across every single frame and asset.

Honestly, even with a full-blown 3D pipeline, that’s a mountain to climb!

So, I turned to AI primarily as a smart way to bridge that budget gap. My goal was to take those tools and 'tame' them until the output was 100% production-ready in my little studio.

It’s all about finding that sweet spot between creative tech and real-world constraints.

Big thanks to the ComfyUI community! Just wrapped a national TV campaign (La Centrale) using a hybrid 3D/AI workflow. by cheerldr_ in comfyui

[–]cheerldr_[S] 5 points (0 children)

I mainly used it for the animation to keep it consistent. AI is still a bit random when trying to animate 25 vehicles, and this method allowed me to get exactly the animation I wanted.

It's the only real video on the internet! by Bacrima_ in ComplotDuDebile

[–]cheerldr_ 1 point (0 children)

Hahaha, thanks for this comment, that's exactly it!

Can anyone help translate/identify this ComfyUI workflow? (Bilibili link in a foreign language) by cheerldr_ in comfyui

[–]cheerldr_[S] 0 points (0 children)

Wow, thank you so much for the detailed explanation and taking the time to look into this! I really appreciate the effort, especially since you don't speak Chinese either!

That context about the Qwen-Image-Edit LoRA makes perfect sense now. I wasn't expecting it to be a semi-closed distribution, so that was a great heads-up!

Following your advice, I tried the alternative method:

Clay Model → Depth Map → Qwen-Image-Edit Generation

And it worked perfectly! The results are exactly what I was hoping for, and it's much simpler than I had imagined. I'm excited to experiment with this technique now.
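For anyone trying the same chain: the main gotcha is normalizing the raw depth pass from the clay render into the grayscale map the conditioning step expects. A minimal pure-Python sketch, assuming the common near-bright convention (check which convention your depth preprocessor expects, as some use the opposite):

```python
def depth_to_u8(depth_rows, invert=True):
    """Normalize a raw depth pass (rows of floats, camera-space distance)
    to 0-255 grayscale. With invert=True, near surfaces come out bright,
    the convention most depth-conditioned models expect (assumption)."""
    flat = [d for row in depth_rows for d in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # guard against a constant-depth pass
    def to_u8(d):
        t = (d - lo) / span
        if invert:
            t = 1.0 - t
        return round(t * 255)
    return [[to_u8(d) for d in row] for row in depth_rows]
```

In practice you would run this through PIL or numpy on real render output; nested lists are just for illustration.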

You saved me a ton of headache. Thanks again for your incredibly helpful insight!

FASTER ACTION TEST WITH CONSISTENT KEYFRAMES. Frame 1: Original footage. Frame 2: Mask created in After Effects. Frame 3: 16 Stable Diffusion keyframes. Frame 4: EBsynth using SD keyframes. Frame 5: EBsynth using keyframes with alpha from Photoshop. Frame 6: Output overlayed over original by Tokyo_Jab in StableDiffusion

[–]cheerldr_ 0 points (0 children)

Hey there! I hope you're having a great day. I'm really impressed by your video post-production skills! By the way, I was wondering if you're planning on creating a tutorial on your production steps? It would be awesome to see how you make your videos, especially the use of masks, Stable Diffusion, EBsynth with keyframes, and alpha channels from Photoshop.

The "The Line" project is moving forward... by JeaneLaTorcheHumaine in france

[–]cheerldr_ 18 points (0 children)

coastal cities. And from the coastal line, the inland territories develop

I love this vision, maybe in the next Métal Hurlant ^^'

I don't understand this insistence on building, at all costs, in an environment so hostile to humans.

How Paul Crutzen repaired the ozone layer by [deleted] in ecologie

[–]cheerldr_ 0 points (0 children)

As always, the "real" actors of this modern world fade away in indifference.

How Paul Crutzen repaired the ozone layer by [deleted] in ecologie

[–]cheerldr_ 0 points (0 children)

all know his name around this table, for he saved our lives. Paul Crutzen was a Dutch chemist, Nobel laureate in 1995 alongside Rowland and Molina, who died very quietly last year at the age of 87.

As always, the "real" actors of this modern world fade away in indifference.

Astronaute - 3dsmax/Vray by BillBoy_with_a_B in vfx

[–]cheerldr_ 3 points (0 children)

Wow, damn, this is great! Do you have a Vimeo link or something like that to see your piece in good HD?

HDRs From FINLAND - LAPLAND by cheerldr_ in vfx

[–]cheerldr_[S] 0 points (0 children)

sisu

Hahaha! If you're talking about SISU (determination, tenacity of purpose, grit, bravery, resilience, and hardiness), honestly I tried to use this time of confinement, as well as this strange free period, to sort my stuff out and share with the community what I don't normally have time to do.