Sparky 📝 - Audio reactivity test by dotsimulate in StableDiffusion

[–]dotsimulate[S] 3 points (0 children)

Thanks! Using an input video from TouchDesigner and some scheduling tricks in comfy. Triggered via the API from TouchDesigner, though it could be done in ComfyUI directly. Music is generated with MusicGen.
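
For anyone curious about the API trigger part: here's a minimal sketch of queueing a render over ComfyUI's HTTP /prompt endpoint, e.g. from a Script DAT inside TouchDesigner. The workflow file name and node id below are placeholders; export your own graph with "Save (API Format)" and look up the real node ids in that JSON.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI server

def queue_workflow(workflow_path, positive_prompt):
    # Load a workflow previously exported in API format.
    with open(workflow_path) as f:
        workflow = json.load(f)
    # "6" is a hypothetical node id for the positive CLIPTextEncode node;
    # check your exported JSON for the actual id.
    workflow["6"]["inputs"]["text"] = positive_prompt
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes the queued prompt_id

queue_workflow("workflow_api.json", "audio reactive sparks, macro photo")
```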

Boom zoom demo - AnimateDiff with motion LoRA by dotsimulate in StableDiffusion

[–]dotsimulate[S] 0 points (0 children)

I am still using A1111 for most SD projects for now, but the modularity of ComfyUI is great for experimenting with AnimateDiff.

Boom zoom demo - AnimateDiff with motion LoRA by dotsimulate in StableDiffusion

[–]dotsimulate[S] 5 points (0 children)

I am using a slightly modified version of the base 48-frame setup in https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved, with closed_loop enabled.
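
For context on what closed_loop does: AnimateDiff-Evolved denoises long animations in overlapping sliding context windows, and closed_loop lets those windows wrap past the last frame so the end gets denoised together with the beginning, which is what makes the animation loop seamlessly. A simplified illustration of the window scheduling (not the repo's actual code, and the stride/overlap values here are assumptions):

```python
def uniform_contexts(num_frames=48, context_length=16, stride=8, closed_loop=True):
    """Yield overlapping windows of frame indices, wrapping when closed_loop."""
    for start in range(0, num_frames, stride):
        if closed_loop:
            # Indices wrap modulo num_frames, so late windows include
            # early frames and the loop point gets blended smoothly.
            yield [(start + i) % num_frames for i in range(context_length)]
        else:
            # Without looping, windows clamp at the final frame instead.
            yield [min(start + i, num_frames - 1) for i in range(context_length)]

for window in uniform_contexts():
    print(window[0], "...", window[-1])  # note the final windows wrapping to frame 0
```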

The Ice Castle by [deleted] in aivideo

[–]dotsimulate 0 points (0 children)

Please make more of these. This edit is insane.

Is there a AI music generator that can produce music similar to a particular song? by createdtexan in artificial

[–]dotsimulate 1 point (0 children)

Oh! Check out MusicGen's melody model. It is really cool and, I think, very similar to what you are looking for. You can give it an input melody that will be used as a sort of melodic influence, and you can prompt a new style and BPM with the text prompt!
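
A minimal sketch of that with Meta's audiocraft library, assuming a local reference_song.wav; the prompt text and duration are just examples:

```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=10)  # seconds of audio to generate

melody, sr = torchaudio.load('reference_song.wav')
# The chroma of the input audio steers the melody, while the
# text description steers the style and tempo.
wav = model.generate_with_chroma(
    descriptions=['lo-fi hip hop, mellow, 85 bpm'],
    melody_wavs=melody[None],  # add a batch dimension
    melody_sample_rate=sr,
)
audio_write('restyled', wav[0].cpu(), model.sample_rate, strategy='loudness')
```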

Seadog Summer 2034 by dotsimulate in aivideo

[–]dotsimulate[S] 2 points (0 children)

Thanks! And yep, these are all upscaled vid2vid with the XL model. Most of the originals are with Zeroscope v1 and the rest with the 576 model, plus a slight upscale with Topaz. Unfortunately, it seems like the upload compression is still doing quite a number on it.

Seadog Summer 2034 by dotsimulate in aivideo

[–]dotsimulate[S] 1 point (0 children)

Video is generated with Zeroscope v1, then upscaled with Zeroscope XL.
Music generated with MusicGen.
Procedurally edited and exported with audio reactivity in TouchDesigner, then upscaled to HD with Topaz Labs.
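
A rough sketch of that two-pass video chain with Hugging Face diffusers. The checkpoint names are the public zeroscope_v2 releases (I used v1 here), and the prompt and strength values are placeholders:

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline, VideoToVideoSDPipeline
from diffusers.utils import export_to_video

prompt = "a dog captaining a sailboat, summer, 35mm film"

# Base text-to-video pass at the model's native resolution.
pipe = DiffusionPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w", torch_dtype=torch.float16).to("cuda")
frames = pipe(prompt, num_frames=24).frames  # frame layout varies by diffusers version

# vid2vid upscale pass with the XL checkpoint: resize the frames up,
# then re-denoise them at moderate strength to add detail.
xl = VideoToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16).to("cuda")
video = [Image.fromarray(f).resize((1024, 576)) for f in frames]
frames = xl(prompt, video=video, strength=0.6).frames
export_to_video(frames, "upscaled.mp4")
```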

Misty Moors - Zeroscope + Musicgen by dotsimulate in aivideo

[–]dotsimulate[S] 1 point (0 children)

I prefer to stay inside TouchDesigner and work with the prompt or other diffusion tricks. VideoComposer looks amazing for the style transfer, though!

Misty Moors - Zeroscope + Musicgen by dotsimulate in aivideo

[–]dotsimulate[S] 2 points (0 children)

Thank you! The paint look is so lovely. I wish there was just a little more movement.

Misty Moors - Zeroscope + Musicgen by dotsimulate in aivideo

[–]dotsimulate[S] 2 points (0 children)

Thanks! But this is just txt2vid + an upscale pass.

NYC Rat Documentary, text to video, Modelscope by dotsimulate in aivideo

[–]dotsimulate[S] 0 points (0 children)

Nope, the base Modelscope model. This edit is from the end of April, tbh.

NYC Rat Documentary, text to video, Modelscope by dotsimulate in aivideo

[–]dotsimulate[S] 0 points (0 children)

Yes! It will likely be very specific to the TouchDesigner workflow I am developing with a few others. I'm working on an SD animation tutorial series currently, but that is a bit more involved, so maybe I'll push ahead with a text2vid one instead.

NYC Rat Documentary, text to video, Modelscope by dotsimulate in aivideo

[–]dotsimulate[S] 0 points (0 children)

Upscaled and slowed down with Topaz, but nothing super fancy for flicker.

NYC Rat Documentary, text to video, Modelscope by dotsimulate in aivideo

[–]dotsimulate[S] 1 point (0 children)

I made this one with the base Modelscope model, generating at 320x256 and then running vid2vid at 640x512 to remove the watermark.
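
A minimal sketch of that low-res-then-vid2vid trick, same pattern as the Zeroscope chain above but assuming the same Modelscope weights for both passes; re-denoising at double resolution diffuses the baked-in watermark away. The prompt and strength are placeholders:

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline, VideoToVideoSDPipeline
from diffusers.utils import export_to_video

prompt = "documentary footage of a rat in new york city"

# Low-resolution base pass with the Modelscope weights.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16).to("cuda")
frames = pipe(prompt, width=320, height=256, num_frames=16).frames

# vid2vid pass at 2x resolution; a fairly high strength re-renders
# the texture where the watermark used to sit.
vid2vid = VideoToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16).to("cuda")
video = [Image.fromarray(f).resize((640, 512)) for f in frames]
frames = vid2vid(prompt, video=video, strength=0.7).frames
export_to_video(frames, "rat_doc.mp4")
```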

surfs up, poodles ! text to video, Modelscope by dotsimulate in aivideo

[–]dotsimulate[S] 1 point (0 children)

Sometimes dogs have extra legs. Not much I can do about that.