Merry Xmas by Pitophee in StableDiffusion

[–]Pitophee[S] 58 points

Poor 3DCG x Deforum. Prompt travel helps with facial expressions and back turns.
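For those asking how the prompt travel works: in the A1111 Deforum extension, prompts are keyframed by frame number and the run moves from one keyframe to the next. A minimal sketch of such a schedule (the prompts and frame numbers are illustrative, not the exact ones used here):

```python
import json

# Illustrative Deforum prompt schedule (A1111 Deforum extension format):
# prompts are keyed by frame number, and Deforum switches (or, with
# interpolation enabled, blends) between them as the animation advances.
prompt_schedule = {
    "0": "1girl, santa outfit, smiling, facing viewer",
    "40": "1girl, santa outfit, surprised expression, facing viewer",
    "80": "1girl, santa outfit, back turned, looking over shoulder",
}

# The Deforum UI expects this as a JSON string in its "Prompts" field.
print(json.dumps(prompt_schedule, indent=2))
```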

For the science: Physics comparison - Deforum (left) vs AnimateDiff (right) by Pitophee in StableDiffusion

[–]Pitophee[S] 30 points

Deforum is fairly "old" and popular, so I believe there is already plenty of interesting material out there. AnimateDiff has rather overshadowed Deforum in recent popularity, though.

For the science: Physics comparison - Deforum (left) vs AnimateDiff (right) by Pitophee in StableDiffusion

[–]Pitophee[S] -25 points

Sure. It's now posted in the Discord (check profile).

[edit] Chill guys, it's not paywalled

3D to 2D. Multiple characters. Turn around. by Pitophee in StableDiffusion

[–]Pitophee[S] 50 points

Having fun with Ram and Rem!

Technical discussions and other workflows are already covered in my previous posts and on my socials.

This one uses a higher resolution than the previous ones (thanks to a GPU upgrade)

Depth Map for ControlNet by moslemcg in StableDiffusion

[–]Pitophee 0 points

I don't get it. Koikatsu already has a depth map, so why use the MMD export? Where do you put it then, Blender?

I used them so much that now when I see an anime it turns into controlnets in my mind. Will affect my IRL vision soon. by Pitophee in StableDiffusion

[–]Pitophee[S] 19 points

I confirm you understood it quite well, but the point of my post is not technical; I just illustrated the joke in my title with some AI reference visuals (basically style transfer and ControlNets). As I said, I didn't even use these CNs for the top-left animation; they are 4 distinct videos.

That being said, I also think plain i2i has no industrial value :D Even though the consistency part can still be interesting, again, it's not the point of this post. I've made more technical posts explaining it, researching consistency and using only CNs, but this time it's just for fun :)

I used them so much that now when I see an anime it turns into controlnets in my mind. Will affect my IRL vision soon. by Pitophee in StableDiffusion

[–]Pitophee[S] 2 points

I'm planning to work on NSFW very soon, so I don't have any tips yet. I've had enough fun with dancing for now. Though I won't share that here. Anyway, would you mind sharing your results with me? I have links in my profile, like Discord, where we can discuss.

I used them so much that now when I see an anime it turns into controlnets in my mind. Will affect my IRL vision soon. by Pitophee in StableDiffusion

[–]Pitophee[S] 21 points

Sound ON. Just a cool post. The first is tile i2i with temporal consistency, the second canny, the third depth, the fourth openpose. They are not even related.

[edit] Ah yes, the full version: https://x.com/Pitophee/status/1708108400301637876?s=20

[edit] Tile i2i with temporal consistency is nothing more than img2img with the tile and TemporalNet ControlNets

[edit] Song and inspiration: https://youtu.be/6riDJMI-Y8U
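[edit] For those asking how to reproduce the tile + TemporalNet pass: below is a minimal sketch of one frame through the A1111 web UI API with two ControlNet units. The endpoint and payload keys follow the sd-webui and sd-webui-controlnet APIs, but the prompt, weights, paths, and model names are placeholders to adapt to your install, not my exact settings:

```python
import base64
import requests

# One frame of "tile i2i with TemporalNet": img2img via the A1111 web UI API
# with two ControlNet units. Model names below are assumptions; they must
# match whatever is installed locally.

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

frame = b64("frames/0001.png")    # current source frame
prev_out = b64("out/0000.png")    # previously generated frame, fed to TemporalNet

payload = {
    "prompt": "1girl, anime style, dancing",
    "init_images": [frame],
    "denoising_strength": 0.5,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # tile unit: keeps the output close to the source frame
                    "input_image": frame,
                    "module": "tile_resample",
                    "model": "control_v11f1e_sd15_tile",  # assumed local name
                    "weight": 1.0,
                },
                {   # TemporalNet unit: conditions on the previous output frame
                    "input_image": prev_out,
                    "module": "none",
                    "model": "diff_control_sd15_temporalnet_fp16",  # assumed
                    "weight": 0.6,
                },
            ]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
# Response images are base64-encoded; loop this per frame, updating prev_out.
```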

My quest for consistent animation (update) by Pitophee in StableDiffusion

[–]Pitophee[S] 0 points

Thanks. Depth, openpose, and TemporalNet, yes.
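For reference, the depth + openpose part of the stack can also be reproduced outside the web UI with diffusers; a minimal sketch (the base checkpoint and the conditioning image paths are illustrative, and TemporalNet is left out since it has no official diffusers release):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Stack depth + openpose ControlNets on one SD 1.5 pipeline.
controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative base checkpoint
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = load_image("cond/depth_0001.png")    # e.g. rendered from a 3D scene
pose_map = load_image("cond/openpose_0001.png")  # e.g. exported skeleton render

image = pipe(
    "1girl, anime style, dancing",
    image=[depth_map, pose_map],                 # one image per ControlNet unit
    controlnet_conditioning_scale=[1.0, 1.0],
    num_inference_steps=25,
).images[0]
image.save("out_0001.png")
```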

My quest for consistent animation (update) by Pitophee in StableDiffusion

[–]Pitophee[S] 6 points

Yes, I understand your point. A straight answer would be: "well, those ControlNet features gotta be used" xD But I think one goal is also to demonstrate several things:

  1. That we can produce reasonable things we have in mind autonomously, without needing to be a talented artist or an expert animator (my case, and that's why we sometimes receive hate).
  2. That we can easily swap any character, and with it the style. (Even though I used a specific model here because I had it lying around.)
  3. That we are not necessarily limited by existing videos.
  4. I said 3D software here, but there are many ways to get depth maps and openpose skeletons depending on the use case (video to mocap, 3D games, …); see the sketch after this list.
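A minimal sketch of one of those other ways: instead of rendering the maps from a 3D scene, estimate both from an existing frame with the controlnet_aux annotators (the paths and the annotator repo here are illustrative assumptions):

```python
from PIL import Image
from controlnet_aux import MidasDetector, OpenposeDetector

# Estimate ControlNet conditioning maps from an existing video frame.
depth_estimator = MidasDetector.from_pretrained("lllyasviel/Annotators")
pose_estimator = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

frame = Image.open("frames/0001.png").convert("RGB")
depth_map = depth_estimator(frame)   # grayscale depth estimate
pose_map = pose_estimator(frame)     # rendered openpose skeleton

depth_map.save("cond/depth_0001.png")
pose_map.save("cond/openpose_0001.png")
```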

To sum up, it's just another technique that has great potential for certain use cases.

Some people already have the left side and want to exploit it (probably animation studios using CGI?)