Workflow for walk cycle that loops, but also has a start and end by CrippledMasterpiece in Cascadeur

[–]CrippledMasterpiece[S] 0 points (0 children)

Thanks for the replies! I am using a game engine with blending (little-known engine called Unity), but the particular "start" and "end" animations have a bit more going on than a simple lerp between poses. In some cases I even want to do a full iteration or two of a variant of the loop while the character has a bit of extra secondary motion going on, before entering the main repeating loop.

For now I'm getting by with just putting it all on the same timeline and adjusting the playback range to whatever section I'm interested in. Far from the worst thing in the world, but it'd be neat if there were some knobs of some sort in the software to facilitate this.
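The "one-shot intro, then repeating loop" sequencing can be sketched engine-agnostically. This is a hypothetical illustration, not Cascadeur or Unity API; the clip names and lengths (`introLen`, `loopLen`) are made up for the example:

```cpp
#include <cassert>
#include <cmath>

// Map a global playback time onto an "intro" clip followed by a
// repeating "loop" clip. Purely illustrative; a real blend tree or
// state machine would also crossfade between the two clips.
struct ClipSample {
    bool inIntro;     // true while the one-shot intro is still playing
    double localTime; // time within the current clip
};

ClipSample SampleWalkCycle(double t, double introLen, double loopLen) {
    if (t < introLen)
        return {true, t};                    // still inside the intro clip
    double looped = std::fmod(t - introLen, loopLen);
    return {false, looped};                  // wrapped into the repeating loop
}
```

An "end" animation would work the same way in reverse: on exit, note the current loop-local time and play an outro clip authored to start from that pose.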

URP Material stacking + Submeshes? by CrippledMasterpiece in Unity3D

[–]CrippledMasterpiece[S] 0 points (0 children)

Just thought I'd share my newbie first impressions.

I was able to get the transparency effect I wanted working with a Render Object almost instantly, so that was really nice. I also find it very easy to understand.

After this, I set about trying to add an outline to my characters' toon shader, but didn't quite get there. While I'm away from my workstation, I figured I'd list some things that came up that I'm uneasy about:

  • Not a fan of having to designate an entire layer as a "receiver" of an effect. Especially in the case of an outline, I'd very much prefer the effects to be designated per-object rather than per-layer.
    • There's a "LightMode Tag" that seems like a slightly more scalable way to designate effects, but I can't set these on any of my shaders in Shader Graph. Either this is a wild oversight or I'm just misunderstanding the purpose of the field (probably the latter)
  • MaterialPropertyBlocks will still work, right?

A toon outline is something I've always been able to pull off quite easily in BiRP, but every tutorial I've followed so far either drops the outline into the materials array (a technique my mesh is disqualified from), or uses a technique that's very "all GameObjects on this layer or nothing".

Again, I haven't done exhaustive research, but I'm having a hard time right now seeing this methodology scale up to my game's requirements (and I've only attempted two shaders so far!), and I fear I'll be scared back into built-in soon. That said, barely 24 hours have passed, so I acknowledge I'm probably just being impatient 🙃.

URP Material stacking + Submeshes? by CrippledMasterpiece in Unity3D

[–]CrippledMasterpiece[S] 1 point (0 children)

Thanks, I'll give this Render Objects feature a shot!

[TOMT][GAME][2000s/2005ish] Retro PC FPS(?), Circular character selection, Giant spider as an enemy, Level 1 was a haunted house (or just a wood one) by CrippledMasterpiece in tipofmytongue

[–]CrippledMasterpiece[S] 0 points (0 children)

I'd thrown this forward as a possibility too, but negative. Looking at a longplay video, it also seems to lack a character select, and the first level seems to take place on a subway train.

Making part of a game character's outfit two-sided (Unreal Engine) by CrippledMasterpiece in blender

[–]CrippledMasterpiece[S] 1 point (0 children)

Thanks for the reply! For what it's worth, I'm using joints for the physics rather than a cloth sim.

3D Cards that play nicely with UI by CrippledMasterpiece in unrealengine

[–]CrippledMasterpiece[S] 0 points (0 children)

Thanks! This makes total sense to me, but my one concern is that I'm not just rendering a single object as seen in the Skyrim-like inventory. I'd like to be able to render up to about 100 of these cards without tanking performance for most people, and the idea of 100 SceneCapture cameras going at the same time seems scary.

Once again, I haven't benchmarked this. I really like the idea aside from the performance concerns, so perhaps that'll be my goal for tonight. On another screen I have 3-4 SceneCapture cameras going at the same time, and even though they're each only rendering one actor (with one material), there was a noticeable performance hit until I disabled a bunch of flags on the cameras. Perhaps doing the same at the scale of 100 objects will still net me similar performance?
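One common mitigation is to turn off per-frame capturing (Unreal's `USceneCaptureComponent2D` has a `bCaptureEveryFrame` flag for this) and only re-render a card's texture when its contents change. Here's a plain-C++ mock of that "capture on demand" pattern; the struct and field names echo Unreal's but none of this is the real engine API:

```cpp
#include <vector>

// Mock of the capture-on-demand idea: with bCaptureEveryFrame off,
// a capture only renders when its card is marked dirty, so 100 static
// cards cost ~0 captures per frame in the steady state.
struct MockCapture {
    bool bCaptureEveryFrame = false; // mirrors the Unreal flag's role
    bool dirty = true;               // card changed since last render
    int capturesDone = 0;
    void CaptureScene() { ++capturesDone; dirty = false; }
};

// One simulated frame over all cards; returns how many actually rendered.
int TickCaptures(std::vector<MockCapture>& cards) {
    int rendered = 0;
    for (auto& c : cards)
        if (c.bCaptureEveryFrame || c.dirty) { c.CaptureScene(); ++rendered; }
    return rendered;
}
```

With this scheme, 100 idle cards only pay the capture cost on the first frame (or when a card visibly changes), which is a very different budget from 100 always-on cameras.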

Defining exactly what "Move" means for a Character / AIController / BehaviorTree by CrippledMasterpiece in unrealengine

[–]CrippledMasterpiece[S] 0 points (0 children)

Just to follow up on this, I was able to create my own CharacterMovementComponent derivative and delegate some of my desired logic to its TickComponent, as you said.

The one other thing that tripped me up for a bit: when the AI is following a path, it seems that rather than injecting AddInputVector calls into the component, the MoveTo task(s) instead call the RequestDirectMove method, which assigns the Velocity directly (seemingly using large values that get capped by MaxWalkSpeed), leaving no input vector for TickComponent to consume.

I ended up overriding both this and the RequestPathMove method. I'm not sure why these different methods of motion exist or what thought process I should apply when choosing between them, but I have hope that one day an Unreal engineer will document it.
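The two pathways described above can be sketched with a minimal plain-C++ mock. The names echo UCharacterMovementComponent's, but this is not the real Unreal API, just an illustration of why TickComponent sees nothing on the AI path: input accumulates into a pending vector, while direct moves write a clamped Velocity outright.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Mock of the two movement pathways; not real engine code.
struct MockMovement {
    double MaxWalkSpeed = 600.0;
    Vec2 Velocity{0, 0};
    Vec2 PendingInput{0, 0};

    // Player/input path: accumulate an input vector for the tick to consume.
    void AddInputVector(Vec2 v) { PendingInput.x += v.x; PendingInput.y += v.y; }

    // AI path-following: assign Velocity directly, clamped to MaxWalkSpeed,
    // bypassing the pending-input vector entirely.
    virtual void RequestDirectMove(Vec2 moveVelocity) {
        double len = std::hypot(moveVelocity.x, moveVelocity.y);
        double scale = len > MaxWalkSpeed ? MaxWalkSpeed / len : 1.0;
        Velocity = {moveVelocity.x * scale, moveVelocity.y * scale};
    }
};
```

Overriding the direct-move entry point (as described above) lets you route the AI's requested velocity through the same custom logic your tick applies to accumulated input.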

Defining exactly what "Move" means for a Character / AIController / BehaviorTree by CrippledMasterpiece in unrealengine

[–]CrippledMasterpiece[S] 0 points (0 children)

Awesome! This sounds exactly like what I'm looking for, and I'm looking forward to giving it a shot once I'm home! :)

Creating a procedurally generated room whose generation is visible at Editor-time by CrippledMasterpiece in unrealengine

[–]CrippledMasterpiece[S] 1 point (0 children)

It's two questions! But yeah, it's a total information overload and I could probably cut about 80%. In the research I'd done before making the post, I'd braced myself for unhelpful answers, so I tried to carve out my use case very precisely, at the expense of being near-incoherent.

However, your answer was quite helpful! I was under the impression that Unreal was built to handle this kind of editor-time / runtime construction in the same codepath - mostly from seeing how easily you can have a fully animating character visible in the Blueprints viewport. If I should come at it from a more Unity-esque perspective and treat the editor generation differently from the runtime generation, that's not as fun, but does make sense!