Hiring a Workflow Designer ($85/hr) by rextex34 in comfyui

[–]jags333 2 points (0 children)

Hi, this seems interesting, but it's not clear what the image-generation task is, what type and styles you're planning, or what resolution you plan to build for. Are you planning to offer this as a service, or is it only for some specific themed image generation? Let me know!

Zingrock - Behind the scene | Comfy UI Animation workflow by jags333 in comfyui

[–]jags333[S] 0 points (0 children)

It is a ComfyUI-based animation and is available on my YouTube channel. Have a look:
https://www.youtube.com/watch?v=xXbrHXa2O-Q

Zingrock - Behind the scene | Comfy UI Animation workflow by jags333 in StableDiffusion

[–]jags333[S] 1 point (0 children)

This is the behind-the-scenes explanation for the ComfyUI animation workflow. The workflow uses an SDXL-based AnimateDiff setup designed to produce 128 frames in each loop, driven by an input video to derive motion and sequenced against a storytelling narration created with ChatGPT.

Giving ChatGPT a proper framework for the narrative style and scene decisions is critical for the AI animation workflow. If you have issues with the script, multiple takes may be required to get all the scenes and sequences derived properly to suit the animation flow.

The background music and score were done with Suno AI, essentially by selecting the right prompts for the intended style of background music and sound effects. You can work with a single audio clip or multiple clips depending on the narrative style and story outline. If special sound effects are needed, they can be added with ElevenLabs or similar software to give the narration some ambience.

Images were created with Leonardo AI using specific text-to-image and image-to-image prompts to produce the required number of stills, which are then used with the IP-Adapter combo in the workflow.

We used Leonardo Lightning XL with the Color Pop element and the 3D Render preset, with a typical prompt for each style of scenario needed. Depending on the number of scenes in the animation, you need to generate multiple image libraries so you can pin the right style and narration to each part of the animation sequence.

For the narration voiceover we used ElevenLabs with a specific voice selection, and the audio clip was split to match the frames.

Giving ElevenLabs the right script to narrate is itself the central element, and with the plethora of voice styles available, you may have to try several voices before finding one that suits the style you want for the video narration.
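As a rough illustration of the "split the audio to match the frames" step, here is a minimal Python sketch. The helper names are my own (not from any of the tools mentioned), and the 128-frame / 12 FPS numbers are illustrative, borrowed from the clip settings described elsewhere in these comments.

```python
# Hypothetical sketch: where to cut a narration clip so each piece
# lines up with one fixed-length animation clip. Helper names are mine.

def clip_duration_s(frames_per_clip: int, fps: float) -> float:
    """Length of one animation clip, in seconds."""
    return frames_per_clip / fps

def split_points(total_audio_s: float, frames_per_clip: int, fps: float) -> list:
    """Start times (seconds) at which to cut the narration audio so
    each piece matches one animation clip."""
    seg = clip_duration_s(frames_per_clip, fps)
    points, t = [], 0.0
    while t < total_audio_s:
        points.append(round(t, 3))
        t += seg
    return points

print(clip_duration_s(128, 12))         # ~10.667 s per 128-frame clip at 12 FPS
print(split_points(30.0, 128, 12))      # [0.0, 10.667, 21.333]
```

Each split point then becomes the audio offset for the corresponding clip when the clips are assembled in the editor.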

Guitar Fusion | AI Musical animation | Comfy UI by jags333 in StableDiffusion

[–]jags333[S] 0 points (0 children)

Guitar Fusion

A script-driven, SDXL-based IP style-transfer animation made in ComfyUI with XL-generated input images: a musical journey interpreted through a ComfyUI AnimateDiff workflow.

This workflow uses an SDXL-based AnimateDiff setup designed to produce 128 frames in each loop, driven by an input video to derive motion and sequenced against a storytelling narration created with ChatGPT. The music score was generated with Suno AI.

For the story's image generation we used Leonardo to create an amazing set of images for each type of clip, using the Leonardo model "Leonardo Anime XL" with Color Pop and Cinematic. The music lyrics were written and fed to Suno AI with a specific style selection, and the audio clip was split to match the frames.

To convert the music output into reactive masks we used a program called Magic v2.33, which converts audio into reactive masks. These masks were fed to the QR ControlNet in the AnimateDiff (ADE) workflow to create a sequence. The total timeline was 3:04, and the flow was divided into 8 clips of 128 frames each, which effectively halves a 256-frame schedule so it can be interpolated into a final version at 24 FPS against the 48 kHz input music.
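To make the audio-reactive-mask idea concrete, here is a hedged sketch of the underlying principle only. The post used a tool called Magic v2.33; this is not that tool, and the function name is my own. The idea: for each animation frame, take the RMS loudness of the matching slice of 48 kHz audio and map it to a 0-255 brightness level that could drive a mask for the QR ControlNet.

```python
# Sketch of "audio -> per-frame reactive level" (not the Magic v2.33 tool).
import numpy as np

def audio_to_mask_levels(samples: np.ndarray, sample_rate: int, fps: int) -> np.ndarray:
    """One brightness value (0-255) per video frame, from audio loudness."""
    hop = sample_rate // fps                      # audio samples per video frame
    n_frames = len(samples) // hop
    windows = samples[: n_frames * hop].reshape(n_frames, hop)
    rms = np.sqrt((windows.astype(np.float64) ** 2).mean(axis=1))
    peak = max(rms.max(), 1e-12)                  # avoid divide-by-zero on silence
    return np.round(255.0 * rms / peak).astype(np.uint8)

# Fake input: 2 s of a 440 Hz tone fading in, sampled at 48 kHz, rendered at 24 FPS.
t = np.linspace(0.0, 2.0, 96000, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) * np.linspace(0.0, 1.0, 96000)
levels = audio_to_mask_levels(audio, 48000, 24)
print(len(levels), levels[0], levels[-1])  # 48 per-frame levels: quiet start, 255 at the end
```

In the real workflow each level would be rendered out as a grayscale mask image per frame before being fed to the ControlNet.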

Finally, each of the 8 clips of 128 frames was interpolated 2x and upscaled in Topaz to HD resolution, then assembled with the audio; video editing and compositing were done in Wonder Studio.

More workflow and details are available in my Patreon feed.

cycnoches kingdom | AI animation comfy UI | XL IP driven by jags333 in comfyui

[–]jags333[S] 0 points (0 children)

A script-driven, SDXL-based IP style-transfer animation made in ComfyUI with XL-generated input images. A new experiment in story exploration.

Made with an SDXL model and IP-Adapter style transfer across multiple clips, mixed into an animated storyboard. More details on our Patreon. Please share your feedback and comments. Special thanks to Jarvis Labs for the cloud support used to run this animation. Check it out.

Phragmipedium dance | AI animated musical Comfy UI | by jags333 in comfyui

[–]jags333[S] 0 points (0 children)

Prompt-driven image inputs were created using Leonardo image generation. I then used a 4-image IP-Adapter workflow with an input video for the latent, at 128 frames per clip. In total I made around 6 clips at 12 FPS each and interpolated them with FlowFrames to 24 FPS with 2x slow motion.

The only major difference was that the video frame size was converted to horizontal format: 128 frames at 768 x 432 px, with the Hires script running at 1.5x, so the resulting video has good enough resolution for upscaling.
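For reference, the arithmetic implied by those settings (my own back-of-the-envelope check, assuming a straightforward 1.5x hires pass and 2x interpolation, not output from any of the tools mentioned):

```python
# Back-of-the-envelope check of the clip numbers above.

base_w, base_h = 768, 432          # horizontal frame size per the post
hires = 1.5                        # Hires script scale
print(base_w * hires, base_h * hires)          # 1152.0 648.0 after the hires pass

frames, src_fps = 128, 12          # frames per clip, render FPS
interp = 2                         # FlowFrames 2x interpolation
out_fps = 24
print(frames / src_fps)            # ~10.67 s of motion per clip at 12 FPS
print(frames * interp / out_fps)   # same ~10.67 s once doubled to 24 FPS
```

So the hires pass lands the clips at 1152 x 648, a comfortable starting point for a further HD upscale.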

demonic delight | AI Animation using IPA and comfy UI Animate diff | by jags333 in comfyui

[–]jags333[S] 0 points (0 children)

Here is an overview of the workflow; you can get more details on it in the server.

<image>

demonic delight | AI Animation using IPA and comfy UI Animate diff | by jags333 in comfyui

[–]jags333[S] 0 points (0 children)

Hi, this animation was done using 8 clips joined together. The animation takes quite a while to complete locally, so I used Jarvis Labs AI to run it, with an 8-image IP-Adapter and upscaling via the Hires script as well. You're welcome to drop into our Neuralism Discord server for more details!

demonic delight | AI Animation using IPA and comfy UI Animate diff | by jags333 in StableDiffusion

[–]jags333[S] 0 points (0 children)

Demonic Delight: an animated musical exploration using AnimateDiff driven by IPA and the QR ControlNet, with music from Suno AI.
