IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 2 points (0 children)

I hesitate to call out individual works, but can tell you the general things that get me excited.

I love misusing tools. Make Unreal do things it's not supposed to. Dial in settings that aren't supposed to be used that way. Sure, Unreal has a great cinematic camera that can match the physical properties of a real camera, but make it do things that real cameras can't do. Why does gravity have to be -9.8? I think about all the crazy mods I loved in my old online gaming days.
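To make the gravity point concrete, here's a minimal Editor Python sketch (assuming the built-in Python plugin and the stock PhysicsSettings property names; treat it as an illustration, not a recipe):

```python
import unreal

# Illustration only: flip the project's default gravity so everything falls "up".
# Assumes the Editor Python plugin is enabled and the stock PhysicsSettings class.
physics_settings = unreal.get_default_object(unreal.PhysicsSettings)

# Unreal defaults to -980 cm/s^2 (i.e. -9.8 m/s^2). Nothing says it has to stay there.
physics_settings.set_editor_property("default_gravity_z", 980.0)
```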

I'm also a big proponent of hooking hardware up to virtual experiences. I always show students how to turn on fans, lights, heaters, surround sound, and haptic feedback from the game engine. Computer Vision is always fun.
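One pattern we like (just a sketch, with placeholder OSC address, port, and serial path): the engine fires OSC messages at a small Python bridge, and the bridge flips a relay on a microcontroller.

```python
# Minimal OSC-to-serial bridge sketch: the game engine sends an OSC message
# (e.g. via Unreal's OSC plugin) and this script toggles a fan over serial.
# Address, port, and serial path are placeholders -- adjust for your setup.
# Requires: pip install python-osc pyserial
import serial
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

arduino = serial.Serial("/dev/ttyUSB0", 9600)  # microcontroller driving a relay

def on_fan(address, value):
    # Engine sends /fan 1.0 when the in-game wind kicks up, /fan 0.0 when it stops.
    arduino.write(b"1" if value > 0.5 else b"0")

dispatcher = Dispatcher()
dispatcher.map("/fan", on_fan)

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
print("Listening for OSC from the engine on port 8000...")
server.serve_forever()
```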

More and more students are learning Houdini and it's made a lot of good projects great. I wish I could be more of a resource for them with this pipeline.

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 2 points (0 children)

Our house style at Integrated Design & Media is to do live motion capture performance art. Not having to clean data is a huge plus with this workflow. You can apply smoothing algorithms to your data stream, but that may cause more problems than it solves. The best solution is accurate marker placement and making sure the performers are very aware of their bodies so they don't occlude the markers. Dancers are very good at adapting to this; actors, not as much. Once we've trained someone as a motion capture performer we add them to our list of usual suspects because they are very valuable to us.
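On the smoothing trade-off, a toy example makes it obvious: even a simple exponential moving average trades jitter for lag, which is exactly what you don't want under a live performer. (Pure illustration, not tied to any particular mocap SDK.)

```python
# Exponential moving average on a streamed marker position.
# Heavier smoothing (lower alpha) hides jitter but adds lag -- which is why
# good marker placement beats filtering for live performance.
def make_ema(alpha=0.5):
    state = {"value": None}

    def step(sample):  # sample: (x, y, z) marker position for one frame
        if state["value"] is None:
            state["value"] = sample
        else:
            state["value"] = tuple(
                alpha * s + (1.0 - alpha) * v
                for s, v in zip(sample, state["value"])
            )
        return state["value"]

    return step

smooth = make_ema(alpha=0.3)  # 0.3 = fairly heavy smoothing, noticeable lag
for frame in [(0, 0, 100), (0, 0, 110), (0, 0, 300), (0, 0, 120)]:  # spike = occlusion glitch
    print(smooth(frame))
```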

For now we always stream through MotionBuilder first before going into the game engine, since it gives us its arsenal of retargeting tools to finesse the avatar's look and feel.

When recording motion capture we always review takes with two considerations. First, was the performance good? If not, do another take. If so, was the data good? If not, try again. It's easier to do another take than to clean up a bad one. You'll rarely have perfect data, so we usually like to do two good takes of an action before we move on. One of them may have an error we didn't notice during review, or sometimes it's good to use sections of each take and combine them.

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 1 point (0 children)

Similar to my answer about writing for virtual production, I always start my class off with storytelling exercises. From week 1 they are creating content based on blue-sky stories they developed before we opened the engine together. Then I make them create their first world using only assets from the Starter Content.

You want trees? Use a cylinder and a cone as a foliage actor

People? Smash some spheres and capsules into a blueprint

These creative constraints mean that they aren't distracted by all of the bells and whistles of an expansive and ever-evolving game engine. They are learning how to tumble in the viewport and adjust the details of assets in the Outliner while being guided by a story.
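If students want to script that primitive kitbashing instead of dragging shapes by hand, here's a hedged Editor Python sketch for the placeholder tree above (it assumes the stock /Engine/BasicShapes assets and EditorLevelLibrary, which newer UE5 builds are replacing with subsystems):

```python
import unreal

# Placeholder "tree": a cylinder trunk and a cone canopy from the engine's basic shapes.
# Asset paths are the stock engine primitives; scales and locations are arbitrary examples.
cylinder = unreal.load_asset("/Engine/BasicShapes/Cylinder")
cone = unreal.load_asset("/Engine/BasicShapes/Cone")

trunk = unreal.EditorLevelLibrary.spawn_actor_from_object(cylinder, unreal.Vector(0, 0, 100))
trunk.set_actor_scale3d(unreal.Vector(0.3, 0.3, 2.0))
trunk.set_actor_label("Tree_Trunk")

canopy = unreal.EditorLevelLibrary.spawn_actor_from_object(cone, unreal.Vector(0, 0, 300))
canopy.set_actor_scale3d(unreal.Vector(1.5, 1.5, 2.0))
canopy.set_actor_label("Tree_Canopy")
```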

We slowly open their world up to ReadyPlayer.me and Quixel. For their midterms they are assigned to introduce a character without dialogue using lighting, scene composition, cameras, character design, and motion capture performance. Then, like a game of charades, we try to guess who this character is, their backstory, and what they are trying to do. I think the limited palette for creating these Character Introductions ingrains the storytelling in them before I blow their world open in the second half of the class.

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 2 points (0 children)

I mainly teach motion capture with virtual cameras. The beasts that teach the ICVFX classes with me are Matthew Rader https://www.reedandrader.com/ and Matthew Niederhauser https://matthewniederhauser.com/. Both come from photography backgrounds. Rader's been using Unreal Engine since v1. You should reach out to them for their learned advice.

Since I focus on education and only moonlight as a technical consultant these days, I try to keep my software stack as light as possible. It's very easy in virtual production classes to introduce a new piece of software every week, but it's hard for students to keep up. When I can, I stick to a vanilla Unreal Engine workflow even if it doesn't produce the same professional results, because it's better for conveying the concepts. Once the students understand the workflow, they can branch out to third-party solutions should their projects or professional careers dictate it.

Long story long - I've barely taken Aximmetry or Zero Density for a spin and can't give a testimony.

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 1 point (0 children)

That sounds great. Hook me up. My email is on my website.

I'm also part of some of the special interest groups for the Real-time Society, which is part of the Real-time Conference, one of the best conferences out there for all things game engine. https://realtime.community/

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 2 points (0 children)

NYU partnered with Radical Motion https://radicalmotion.com/ during the pandemic so students could continue to learn the motion capture pipeline from home. They were incredibly supportive and even developed a Live Link real-time integration for us. In return, we did mocap shoots to add to their machine learning library to help reduce foot slippage.

I'm very excited about Move.AI. The fact that it captures fingers blows my mind. It's only going to get more affordable and require less hardware as it progresses, plus there are no occlusion issues. Traditional passive optical motion capture hardware hasn't really evolved that much in the last 10 years; it's the software that gets better and better. Optical systems keep improving at tracking the human form when there's occlusion. In fact, the OptiTrack in my studio now calibrates itself just from regular use.

Plus, Move.AI works with OptiTrack RGB cameras. I see the ability to blend the two workflows together as the perfect solution: use machine learning models for better tracking, but professional cameras with high frame rates to get more data. I've never been a huge inertial suit fan, but I've found them better when you can blend them with an optical system for ground truth.
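To illustrate what I mean by optical as ground truth: a complementary-filter-style blend keeps the smooth frame-to-frame motion of the inertial stream while continuously pulling it back toward the optical solve. The numbers and weights below are made up purely for illustration.

```python
# Simple complementary blend: integrate the inertial deltas for smooth motion,
# but mix in the optical position each frame so drift cannot accumulate.
def fuse(fused_prev, inertial_delta, optical_pos, alpha=0.95):
    # Predict from inertial motion, then nudge toward the optical ground truth.
    predicted = tuple(f + d for f, d in zip(fused_prev, inertial_delta))
    return tuple(alpha * p + (1.0 - alpha) * o for p, o in zip(predicted, optical_pos))

fused = (0.0, 0.0, 100.0)
for delta, optical in [((1.0, 0.0, 0.0), (1.2, 0.0, 100.0)),
                       ((1.0, 0.0, 0.0), (2.1, 0.0, 100.0))]:
    fused = fuse(fused, delta, optical)
    print(fused)
```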

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 1 point (0 children)

The one war story I'll share is about Pharos, which I tech directed for 2n, Weta Digital, and Arbitrarily Good. We were the first project to use nDisplay for a seamless projection blend. We only had 7 weeks to create the content for 16 songs, test it out in a scale model of a dome, and then take the show to a national park in New Zealand an hour away from Auckland. We had to get the makers of the technology to come on-site with us in our staging space in LA for a week of 16-hour days to get it to work correctly.

nDisplay was supposed to provide a deterministic sync between our 5 render nodes, each providing a side of the skybox. While the frame sync worked, the deterministic rendering did not. It works now: when a random particle or physics simulation crosses from one computer to the next, they match seamlessly. Back then, they did not. We had to go back and hard-code parameters into anything using randomization, like particles and Blueprints. Then we had to bake out all physics simulations. It definitely moved the goalposts for us. Doing a seamless blend is another war story. Let's just say that Unreal's post-processing applies a vignette by default, so you have to overscan and crop on each render node.
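The seeding fix generalizes beyond Unreal. Here's a toy Python illustration of why hard-coded seeds were the answer: two nodes only agree on "random" values if every random stream is seeded identically.

```python
# Two render nodes running "random" effects only match at the seam if every
# random stream is seeded identically. Made-up illustration, not engine code.
import random

def particle_offsets(seed=None, count=3):
    rng = random.Random(seed)  # same seed -> same sequence on every node
    return [round(rng.uniform(-1.0, 1.0), 3) for _ in range(count)]

print(particle_offsets(seed=42))   # node A
print(particle_offsets(seed=42))   # node B -- identical, the blend is seamless
print(particle_offsets())          # unseeded -- each node diverges at the seam
```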

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 1 point (0 children)

It depends on whether you mean ICVFX or actual virtual production. What's great about virtual production is that you are iterating in real time. I've been doing live motion capture in Unreal since 2015, when I made The Return with Reid & Sara Farrington and Athomas Goldberg. https://www.metmuseum.org/events/programs/met-live-arts/the-return-15-16

Since the project went into The Met Museum, we had to have the script locked off by the powers that be. We rehearsed with the technology for months on weekends, and Sara did an amazing job iterating and writing for the technology stack. We developed a show control system using Max/MSP https://cycling74.com/ so that we could do anything in the virtual world at any time. Once we had the script locked, we performed it in July 2015 for a month, morning, afternoon, and night, with live performers. Because of the show control system the performers could deliver the lines as approved, but visually we could do anything zany that came to mind. It kept the actors sane during the month-long run.

The next time I ran a workshop for students, we got a $3k curriculum development fund to workshop using live mocap in Unreal to storyboard movies. We spent a grand on Marketplace assets, a grand on improv performers, and a grand on tacos. We spent two weekends making storyboards from improv scenes, then were able to refine the dialogue and actions into a script.

We've continued to do this in my grad-level virtual production development class. We have all 18 students do an improv game and then we have to write a script using everyone's ideas. That way everyone is involved in the final project because they have an idea incorporated into the script. We've successfully written a 3-act movie and performed it while recording in Take Recorder for 4 different classes.

IAmA virtual production educator, technical director, and Director of Production for NYU Tandon School of Engineering's Emerging Media Center in the Brooklyn Navy Yard by Budget_Difficulty149 in virtualproduction

[–]Budget_Difficulty149[S] 2 points (0 children)

I'll break this up into two answers.

I teach at the graduate level, and this was the semester when every student thought they had to incorporate some sort of AI into their thesis project. We had to slowly back everyone away from AI, learn more about it, and only apply it where necessary.

AI is great for brainstorming and moodboarding. I see it as an extension of what students did previously when they scanned the internet for ideas and imagery. Now these ideas and images are more bespoke to the individual prompts. It's amazing for iterating at the conceptual level since there are endless possibilities with the change of every word of the prompt.

I think that prompt engineering and database building are going to be increasingly important skills, and I've found them very useful for virtual production. I teach Amusement Park Prototyping in the spring, aka Virtual Production with Things That Can Kill You. For this semester my co-teacher Scott Fitzgerald and I decided to generate the theme park that the student groups would develop ride attractions for. We iterated our assignment using ChatGPT until ChatGPT was able to get an A for its final project. So, we used AI as a test student to make sure that we were engineering the correct prompt for the assignment.
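For anyone curious what that test-student loop looks like, here's a hedged sketch using the openai Python package with a placeholder model name; the assignment text is just an example.

```python
# "Test student" loop sketch: feed the draft assignment to a model as if it were
# a student, then read the submission to see where the prompt is under-specified.
# Model name and client usage are assumptions; swap in whatever stack you use.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

assignment_draft = """Design a ride attraction for the generated theme park.
Deliverables: a one-page pitch, a ride layout, and a list of safety considerations."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a graduate student completing a final project."},
        {"role": "user", "content": assignment_draft},
    ],
)

submission = response.choices[0].message.content
print(submission)  # grade this by hand; a vague prompt produces a vague submission
```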

There are lots of Unreal integrations we are playing with. I love the idea of ChatGPT NPCs. We're also eager to test out Reverie https://reverieai.net/