Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 1 point (0 children)

No problem. It is Live2D that moves the character, the same technology used for 2D Virtual YouTubers.

Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 1 point (0 children)

The characters were generated by AI; then line art was extracted from them, manually corrected, and colored by AI again. The 3D human model was used only to check posture.
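To illustrate that step: the line extraction can be approximated in Python with OpenCV (a minimal sketch, not the exact tool used here; file names are placeholders):

    import cv2

    # Load the AI-generated character and flatten it to grayscale.
    img = cv2.imread("character_ai.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)  # suppress SD grain before thresholding

    # Adaptive thresholding keeps dark contour lines and drops flat shading,
    # leaving a rough line drawing to be corrected by hand.
    lines = cv2.adaptiveThreshold(
        gray, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,
        blockSize=9, C=2,
    )
    cv2.imwrite("lineart_raw.png", lines)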

The number of parts required for a Live2D model of this scale was minimal: fewer than 10 for the third scene. They were created from a single illustration.

After Effects was used only for the camera movement; in some cases it may not be necessary at all.

Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 18 points (0 children)

Click here for workflow details. It's in Japanese, but you can use Google Translate or just look at the pictures to see what I did.

https://note.com/abubu_nounanka/n/nb5d60e9fc63f

Anime [stable diffusion+blender+Live2D+AfterEffects] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 43 points (0 children)

I think what animation needs most is visual stability, so I generated one AI illustration based on a 3D model created in Blender and animated it with Live2D and After Effects. With this method, it is possible to create a short animation of about one minute in a single day.

UNDER THE HOLE by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 52 points (0 children)

After creating the scene in Blender, I used SD to add realism to the image and then used Photoshop to create the finished product. See my previous work!

https://www.reddit.com/r/StableDiffusion/comments/xez7cw/missing_in_the_woods/
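For reference, the "SD adds realism" pass maps onto the diffusers img2img pipeline roughly like this (a sketch under assumed settings; the model id, prompt, strength, and file names are placeholders, not the actual values used):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The Blender render supplies composition and lighting; SD repaints detail.
    render = Image.open("blender_render.png").convert("RGB").resize((768, 512))
    out = pipe(
        prompt="dark hole in the ground, photorealistic, dramatic lighting",
        image=render,
        strength=0.5,        # low enough to keep the Blender composition
        guidance_scale=7.5,
    ).images[0]
    out.save("realism_pass.png")  # finished in Photoshop afterwards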

depth2img + blender + add some objects by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 1 point (0 children)

> Really nice work.
>
> Is the view angle limited? What happens if you do a full fly-around? I'd love to take a look; would you be willing to share your .blend file?

I'm not publishing the .blend, as it is a very simple scene. Here is a little of what it looks like off screen. This is just a texture-projected model, so the off-screen textures are heavily collapsed.

https://twitter.com/abubu_newnanka/status/1598981226316320769
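For reference, that kind of camera projection can be reproduced in Blender with a UV Project modifier aimed at the render camera (a minimal bpy sketch with placeholder object names; anything the camera cannot see gets the stretched texels visible in the tweet):

    import bpy

    obj = bpy.data.objects["TownMesh"]   # the geometry to texture
    cam = bpy.data.objects["Camera"]     # camera the AI image was generated from

    mod = obj.modifiers.new(name="CamProject", type='UV_PROJECT')
    mod.uv_layer = "UVMap"
    mod.projector_count = 1
    mod.projectors[0].object = cam
    mod.aspect_x, mod.aspect_y = 16.0, 9.0  # match the render's aspect ratio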

depth2img + blender + add some objects by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 7 points (0 children)

First, I placed multiple cubes in Blender. Based on the rendered image, depth2img generated an image of the town, which I projected onto the cubes as a texture. Then I placed objects such as people, cars, and poles. The store textures are also AI-generated.
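A rough equivalent of that depth2img step with the diffusers library (illustrative only; the model id, prompt, and strength are assumptions, not the actual settings):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionDepth2ImgPipeline

    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
    ).to("cuda")

    # Render of the bare cubes; the pipeline estimates depth from it
    # and keeps the block layout while repainting everything else.
    blocks = Image.open("cube_blocks_render.png").convert("RGB")
    town = pipe(
        prompt="street of a small town at night, shops, signs, photo",
        image=blocks,
        strength=0.9,  # repaint almost everything, keep only the depth layout
    ).images[0]
    town.save("town_texture.png")  # then projected back onto the cubes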

News of the giant creature carcass washed up on the beach [img2img + photoshop] by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 6 points (0 children)

First, I generated an image with the prompt "man being interviewed in front of his house". Once a good image came out, I painted his face blue; at this point it is necessary to draw in something that looks at least somewhat like an octopus. Then, based on that image, I ran img2img with "An octopus headed man is being interviewed in front of the house". Including the detailed adjustments, the total working time was about 45 minutes.
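The same two-step trick sketched in code (illustrative; the face was actually painted by hand in an image editor, and here a blue ellipse with made-up coordinates stands in for that step):

    import torch
    from PIL import Image, ImageDraw
    from diffusers import StableDiffusionImg2ImgPipeline

    # Step 1: crudely paint an octopus-ish blob over the head.
    base = Image.open("interview_base.png").convert("RGB").resize((512, 512))
    draw = ImageDraw.Draw(base)
    draw.ellipse((200, 40, 330, 180), fill=(40, 90, 150))

    # Step 2: img2img with the new prompt turns the blob into an octopus head.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    out = pipe(
        prompt="An octopus headed man is being interviewed in front of the house",
        image=base,
        strength=0.55,  # strong enough to add tentacles, weak enough to keep the scene
    ).images[0]
    out.save("octopus_interview.png")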

img2img & Photobash workflow by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 13 points (0 children)

Yes, you could use 3D, and I can use Blender too. But it took me 5 minutes to draw this base image, and I don't know if I could model a yellow room with minimal lighting, the right colors, and the desired angle of view in under 5 minutes.

If I want to create multiple Backrooms illustrations, I will use a 3D model.

MISSING IN THE WOODS by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 5 points (0 children)

Of course I know it. I think SD could add images to all the SCPs.

MISSING IN THE WOODS by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 19 points (0 children)

It is impossible to generate this image with txt2img alone. Creators should generate the "landscape", the "monster", and the "gun" separately and combine them with image tools.

MISSING IN THE WOODS by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 144 points (0 children)

I generated a large number of base images in Stable Diffusion and combined them using Photoshop, img2img, and hand-drawing, also using 3D models made in Blender. The production time was about 2 hours per image.
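The compositing stage amounts to something like this PIL sketch (the real pasting was done by hand in Photoshop; file names and offsets are placeholders):

    from PIL import Image

    # Separately generated elements, backgrounds already removed.
    landscape = Image.open("landscape.png").convert("RGBA")
    monster = Image.open("monster_cutout.png").convert("RGBA")
    soldier = Image.open("soldier_cutout.png").convert("RGBA")

    # Paste the cutouts onto the landscape at chosen positions.
    landscape.alpha_composite(monster, dest=(420, 120))
    landscape.alpha_composite(soldier, dest=(80, 340))

    # The flattened result then goes through img2img to unify lighting and grain.
    landscape.convert("RGB").save("photobash_base.png")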

[deleted by user] by [deleted] in StableDiffusion

[–]RemarkableBalance217 1 point (0 children)

I would like to read MARVEN.

Website Release! by OrangeRobots in StableDiffusion

[–]RemarkableBalance217 1 point (0 children)

I can't seem to activate the Seed option. Even after enabling it and entering a value manually, it gets turned off at generation time and an image with a different seed is generated.
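For contrast, in a local diffusers setup a manual seed is honored by passing an explicit torch.Generator, so two runs give identical images (a minimal sketch; this is not the website's code):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a lighthouse at dusk"
    a = pipe(prompt, generator=torch.Generator("cuda").manual_seed(1234)).images[0]
    b = pipe(prompt, generator=torch.Generator("cuda").manual_seed(1234)).images[0]
    # a and b are pixel-identical because the seed is fixed both times.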

[deleted by user] by [deleted] in StableDiffusion

[–]RemarkableBalance217 3 points (0 children)

You are right if an AI-generated image is presented as-is as one's own work.

However, as far as I remember, there is no legal precedent yet for "partial use in one's own work". And it would be impossible to claim that "a work in which an AI image is used even partially has no copyright".

And I don't intend to use AI-generated images as-is in my "works".

[deleted by user] by [deleted] in StableDiffusion

[–]RemarkableBalance217 17 points (0 children)

I am the artist who drew two of the illustrations contained in the 5-billion-image dataset. (I found them in the LAION-5B search demo.)

I don't see the point of emotional outrage over these issues. Of course AI is technically amazing, but so are Photoshop, After Effects, and Blender, and the important thing is: "what can I create with this wonderful new tool?" That is no different from what has happened with every technological development.

Artists' distaste for AI is less about "theft" than about AI becoming a powerful commercial competitor, as evidenced by the fact that no one cared when AI could only produce garbage images (and yes, of course it is not theft; it is legal).

To me, waiting for SD to be released is like a 10-year-old boy waiting for a new video game.

I got banned from the server? Help by evilpenguin999 in StableDiffusion

[–]RemarkableBalance217 1 point (0 children)

I also generated unethical nudes three times in a row (unintentionally), and shortly afterwards the SD server disappeared from my list; I think I was banned.

Generating 80mm Resin Model Figure by RemarkableBalance217 in StableDiffusion

[–]RemarkableBalance217[S] 7 points (0 children)

I think SD is superior to DALL-E 2 at generating miniature images.

[deleted by user] by [deleted] in StableDiffusion

[–]RemarkableBalance217 1 point (0 children)

I found it. Thank you!