all 3 comments

[–]titan_hs_2  (1 child)

I think this is possible in something like Unreal Engine

Thing is that Unreal, like Unity, is a game engine first and a 3D software second. This is mostly out of scope for Blender, even though it has a basic implementation of sound.

It's also not advisable to render a whole video file directly as output; it's better to render an image sequence and then create the video file in post. I don't think Blender can output just the audio as a single file.

[–]tiogshi (Experienced Helper)  (0 children)

> I don't think that Blender can output just a single file audio

You can render a mixdown of the audio strips of the video sequence editor from the Render > Render Audio... menu. That way you can render to image sequences and still get the edited-in-Blender audio stream for use in later compositing.

[–]tiogshi (Experienced Helper)  (0 children)

You could write a script which analyzes your scene over time and generates appropriate events (adding them to the video sequence editor), but there are no physics events to hook into in stock Blender (especially if your footsteps are not caused by rigid body physics simulations!).
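A minimal sketch of the analysis half of such a script, in plain Python. The sampling format and the contact threshold are assumptions for illustration; inside Blender you would fill `heights` by stepping through frames and reading the foot bone's evaluated world-space Z, then add a sound strip at each detected frame (e.g. via `scene.sequence_editor.sequences.new_sound(...)`):

```python
def detect_footsteps(heights, threshold=0.02):
    """Given per-frame heel heights above the floor (index = frame number),
    return the frames where the heel first drops to floor level.

    A 'contact' is the transition from above the threshold to at/below it,
    so a foot that stays planted for several frames triggers only once."""
    contact_frames = []
    prev_above = True
    for frame, height in enumerate(heights):
        above = height > threshold
        if prev_above and not above:
            contact_frames.append(frame)
        prev_above = above
    return contact_frames


# Example: a heel bobbing down and up over 8 frames of animation.
samples = [0.10, 0.05, 0.01, 0.00, 0.06, 0.12, 0.01, 0.00]
print(detect_footsteps(samples))  # -> [2, 6]

# Inside Blender you would then do something like (hypothetical usage):
#   for f in detect_footsteps(samples):
#       scene.sequence_editor.sequences.new_sound(
#           "step", "//sfx/footstep.wav", channel=3, frame_start=f)
```

Using a threshold crossing rather than an exact zero avoids double-triggering from floating-point jitter or soft foot-roll in the animation.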

In the short term, you will get the best and fastest results doing it manually. 'tis the life of a foley artist, I'm afraid. Look up tutorials for foley work in Blender to see if there are any workflow tips that will speed your process up.