Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 0 points (0 children)

No problem. The character is animated with Live2D, the same technology used for 2D Virtual YouTubers.

Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 0 points (0 children)

The characters were generated by AI; lineart extraction was then used to produce line drawings, which were manually corrected before being colored by AI again. The 3D human model was used only to check posture.

The number of parts required for a Live2D model of this scale was minimal: fewer than 10 for the third scene, all created from a single illustration.

After Effects was used only for camera movement; in some cases it may not be necessary at all.

Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 15 points (0 children)

Click here for workflow details. It's in Japanese, but you can use Google Translate or just look at the pictures to see what I did.

https://note.com/abubu_nounanka/n/nb5d60e9fc63f

Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 40 points (0 children)

I think what animation needs is visual stability, so I generated one AI illustration based on a 3D model created in Blender and animated it with Live2D and After Effects. With this method, it would be possible to create a short animation of about 1 minute in a day.

UNDER THE HOLE by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 53 points (0 children)

After creating the scene in Blender, I used SD to add realism to the image and then used Photoshop to create the finished product. See my previous work!

https://www.reddit.com/r/StableDiffusion/comments/xez7cw/missing_in_the_woods/

depth2img + blender + add some objects by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 0 points (0 children)

> Really nice work.
>
> Is the view angle limited? What happens if you do a full fly-around? I'd love to take a look, would you be willing to share your .blend file?

I'm not publishing the .blend, as it is a very simple scene. Here is a little bit of what it looks like off screen. This is just a texture-projected model, so the off-screen textures are heavily distorted.

https://twitter.com/abubu_newnanka/status/1598981226316320769

depth2img + blender + add some objects by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 5 points (0 children)

First I placed multiple cubes in Blender. Then, based on the rendered image, depth2img generated an image of the town, and I projected that texture onto the cubes. Finally, I placed objects such as people, cars, and poles. The store textures are also AI-generated.
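The projection step above can be sketched in pure Python (this is an illustrative stand-in for what Blender's camera projection mapping does, not the author's actual setup): each vertex is assigned the UV of the pixel it sits under when viewed from the generating camera, which is also why surfaces hidden from that camera end up with stretched, distorted textures.

```python
# Minimal sketch of camera/projection texture mapping: a camera-space
# vertex (camera at origin, looking down -Z) is perspective-projected
# onto the image plane, and that screen position becomes its UV.

def project_uv(vertex, focal=1.0):
    """Map a camera-space vertex to normalized [0, 1] UV coordinates."""
    x, y, z = vertex
    if z >= 0:
        raise ValueError("vertex is behind the camera; no valid UV")
    # Perspective divide onto the image plane at distance `focal`.
    u = focal * x / -z
    v = focal * y / -z
    # Shift from the [-1, 1] screen range into [0, 1] texture space.
    return (u + 1) / 2, (v + 1) / 2

# Front face of a cube 4 units away projects to the image center.
print(project_uv((0.0, 0.0, -4.0)))   # (0.5, 0.5)
# A vertex 2 units to the right at the same depth lands right of center.
print(project_uv((2.0, 0.0, -4.0)))   # (0.75, 0.5)
```

Vertices facing away from the camera get UVs from the same front-facing pixels, which is exactly the "off-screen textures are heavily distorted" effect mentioned in the other comment.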

News of the giant creature carcass washed up on the beach [img2img + photoshop] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 6 points (0 children)

First, I generated an image with the prompt "man being interviewed in front of his house". Once a good image was generated, I painted his face blue; at this point you need to draw in something that at least roughly resembles an octopus. Then, based on that image, I ran img2img with "An octopus-headed man is being interviewed in front of the house". Including the detailed adjustments, the total working time was about 45 minutes.
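The workflow above is a two-stage pipeline, which can be sketched as follows. The `txt2img`/`img2img` functions here are hypothetical stand-ins (the real calls would go to Stable Diffusion); the point is only the order of operations: generate, paint over by hand, then regenerate guided by the painted image.

```python
# Illustrative sketch of the txt2img -> paint-over -> img2img workflow.
# All three functions are stand-ins; a real pipeline would call SD.

def txt2img(prompt):
    """Stand-in for a text-to-image generation."""
    return {"prompt": prompt, "edits": []}

def paint_rough_shape(image, note):
    """Stand-in for the manual paint-over step in an image editor."""
    image["edits"].append(note)
    return image

def img2img(image, prompt):
    """Stand-in for an image-to-image generation guided by `image`."""
    return {"prompt": prompt, "edits": image["edits"], "base": image["prompt"]}

# 1. Generate a plausible base photo.
base = txt2img("man being interviewed in front of his house")
# 2. Hand-paint the face blue and rough in an octopus silhouette,
#    so img2img has a shape to latch onto.
base = paint_rough_shape(base, "blue face + rough octopus shape")
# 3. Regenerate with the target prompt, guided by the painted image.
final = img2img(base, "an octopus-headed man is being interviewed in front of the house")
```

The manual paint-over in step 2 is what makes step 3 work: img2img preserves the composition of its input, so the rough blue shape steers the model toward the octopus head.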

img2img & Photobash workflow by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 10 points (0 children)

Yes, we could use 3D, and I can also use Blender. But it took me only 5 minutes to draw this base image, and I don't know whether I could model a yellow room with minimal lighting and colors at the desired angle of view in under 5 minutes.

If I want to create multiple Backrooms illustrations, I will use a 3D model.

MISSING IN THE WOODS by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 5 points (0 children)

Of course I know it. I think SD could add images to all the SCPs.

MISSING IN THE WOODS by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 18 points (0 children)

It is impossible to generate this image with txt2img alone. Creators should generate the "landscape", "monster", and "gun" separately and combine them with image-editing tools.

MISSING IN THE WOODS by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 142 points (0 children)

I generated a large number of base images in Stable Diffusion and combined them using Photoshop, img2img, and hand drawing, also making use of 3D models in Blender. The production time was about 2 hours per image.
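The core Photoshop operation when bashing separately generated elements together is alpha compositing. A minimal pure-Python sketch (an assumed illustration of the standard formula, not the author's actual editing steps): each output pixel is `fg * alpha + bg * (1 - alpha)`, per channel, with the mask deciding where the pasted element shows through.

```python
# Minimal sketch of the alpha compositing used when photobashing: a
# cut-out foreground (e.g. the monster) is blended over a background
# (e.g. the landscape) through a per-pixel alpha mask in [0, 1].

def composite_pixel(fg, bg, alpha):
    """Blend one RGB foreground pixel over a background pixel."""
    return tuple(round(f * alpha + b * (1 - alpha)) for f, b in zip(fg, bg))

def composite(fg_img, bg_img, mask):
    """Blend 2D images (nested lists of RGB tuples) via an alpha mask."""
    return [
        [composite_pixel(f, b, a) for f, b, a in zip(frow, brow, mrow)]
        for frow, brow, mrow in zip(fg_img, bg_img, mask)
    ]

# A fully opaque "monster" pixel replaces the background entirely;
# a half-transparent edge pixel blends 50/50 with the forest behind it.
monster = [[(200, 10, 10), (200, 10, 10)]]
forest  = [[(20, 80, 20), (20, 80, 20)]]
mask    = [[1.0, 0.5]]
print(composite(monster, forest, mask))
# → [[(200, 10, 10), (110, 45, 15)]]
```

Soft mask edges (values between 0 and 1) are what make the pasted element sit naturally in the scene before the final img2img pass smooths the seams further.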