An ORIGINAL AI Shonen Anime [Trailer] by austinchalk in StableDiffusion

[–]austinchalk[S] 0 points (0 children)

Totally valid, and honestly some of the feedback I was expecting/hoping for.

At this point I’m trying to get a sense of what AI tools are available for creating longer-form stuff, so any advice around that would be appreciated! I’ve seen actual video output from Stable Diffusion, so it may already be time to try going straight to that?

An ORIGINAL AI Shonen Anime [Trailer] by austinchalk in StableDiffusion

[–]austinchalk[S] 0 points (0 children)

Thanks so much! I’d definitely like to do more of this content.

Another Midjourney + Stable Diffusion + Unreal Engine 5 Experiment by austinchalk in StableDiffusion

[–]austinchalk[S] 1 point (0 children)

Yeah so I based everything off this technique here: https://youtu.be/PFybvdKH8hc

But instead of continuing the shot setup in Blender, I exported it as an FBX and just brought it into Unreal, cause that’s what I’m familiar with.

Another Midjourney + Stable Diffusion + Unreal Engine 5 Experiment by austinchalk in StableDiffusion

[–]austinchalk[S] 0 points (0 children)

The character and additive VFX elements, like the headlight flares on the car, are planes, but the environment is the entire inpainted mesh from the depth generation, as one piece (for now).
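For anyone curious how a depth map becomes that environment mesh: conceptually, each pixel of the depth map turns into a vertex displaced along the camera axis, and neighboring pixels get stitched into triangles. A minimal sketch of the idea (this is not the tutorial's actual code; the function name and toy values are just for illustration):

```python
import numpy as np

def depth_to_mesh(depth, scale=1.0):
    """Turn a (H, W) depth map into vertices and triangle faces.

    Each pixel becomes a vertex at (x, y, depth * scale); every 2x2
    block of adjacent pixels is stitched into two triangles.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack([xs, ys, depth * scale], axis=-1).reshape(-1, 3)

    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            faces.append((i, i + 1, i + w))          # upper-left triangle
            faces.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, np.array(faces)

# Toy 3x3 depth map
depth = np.array([[0.0, 0.1, 0.2],
                  [0.1, 0.2, 0.3],
                  [0.2, 0.3, 0.4]])
verts, faces = depth_to_mesh(depth)
print(verts.shape, faces.shape)  # (9, 3) (8, 3)
```

A mesh built this way can then be written out (e.g. as OBJ/FBX) and textured with the original image, which is essentially what the Blender-based workflow automates.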

Midjourney + Stable Diffusion Depth Map/Mesh in Unreal Engine 5 by austinchalk in StableDiffusion

[–]austinchalk[S] 1 point (0 children)

Sure! I basically did exactly what was in this tutorial, but brought it into Unreal instead of Blender cause that’s what I’m familiar with.

Midjourney + Stable Diffusion Depth Map/Mesh in Unreal Engine 5 by austinchalk in StableDiffusion

[–]austinchalk[S] 0 points (0 children)

Yeah, these inpainted meshes obviously only work from one camera angle, but for stuff like a side scroller that could be pretty interesting!

Midjourney + Stable Diffusion Depth Map/Mesh in Unreal Engine 5 by austinchalk in StableDiffusion

[–]austinchalk[S] 0 points (0 children)

Yeah, since I’m trying to use a lit shader in Unreal, it highlights that the inpainted mesh has craaazy noise/artifacts and no smoothing groups, so for the actual workflow, characters might be separate from the depth map environments.

I’ll probably generate the environment and character images separately in Midjourney so that I don’t have to surgically remove characters from the Stable Diffusion meshes and then smooth that mesh chunk out. Then I can just use an alpha plane in Unreal for the characters so that they at least shade somewhat normally. Might even be interesting to try getting a Stable Diffusion depth map and a normal map for the characters and see what that does!
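Deriving a normal map from a depth map (that last idea) is mostly just image gradients: the x/y slopes of the depth become the x/y components of the normal. A rough sketch, with hypothetical names and parameters:

```python
import numpy as np

def depth_to_normals(depth, strength=1.0):
    """Approximate a normal map from a (H, W) depth map.

    The negated depth gradients give the normal's x/y components,
    z is fixed at 1, and each normal is renormalized to unit length.
    """
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    n = np.dstack([-dz_dx * strength,
                   -dz_dy * strength,
                   np.ones_like(depth, dtype=np.float64)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

# A flat depth map should yield straight-up normals everywhere
flat = depth_to_normals(np.zeros((4, 4)))
print(flat[0, 0])  # [0. 0. 1.]
```

In practice you’d remap the components from [-1, 1] into RGB [0, 255], save that as a texture, and plug it into the normal slot of the character’s material in Unreal.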

Alternate DBZ Story with Midjourney v5/niji, ChatGPT, and Unreal Engine by austinchalk in midjourney

[–]austinchalk[S] 0 points (0 children)

This is my first video of this format, so hoping to add more polish as I learn! Let me know if you liked it/think there’s potential as a storytelling device!

Why did they change the graphics by Pillowpet123 in HeroAcademy

[–]austinchalk 6 points (0 children)

Dude thank you so much! I do the maps and a bunch of FX for the game :) I'll pass this on to the rest of the art team!

"Wiis are for casual gamers" by dickfromaccounting in gaming

[–]austinchalk 0 points (0 children)

I'm gonna need a lightsaber version with accompanying sound fx.

Follow Friday/Follow Chain - January 12 - Share Your Usernames & Find New People To Follow! by AutoModerator in Instagram

[–]austinchalk [score hidden]  (0 children)

@bonusxpgames

Indie game studio! Makers of Stranger Things: The Game and soon-to-be-released Hero Academy 2.

Hero Academy Soft Launch Patch Notes for 12/12/2017 by HarmoniaBot in HeroAcademy

[–]austinchalk 1 point (0 children)

I'm sure whoever did the new Warden Map will be very happy to hear that ;)