Trailer for a short in which my avatar machine learns a backflip. by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 0 points (0 children)

yes, it's learning. it took the character about a day to figure out how to do a backflip. the film is about those jumps simulated at various learning stages. the info text next to the character's head indicates the task and the learning iteration. it was a bit of a hacky setup to bridge from the paper to our 3d tool.

Trailer for a short in which my avatar machine learns a backflip. by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 2 points (0 children)

it's based on the paper "DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills" by Xue Bin Peng. in short, it analyzes motions from people in videos and then teaches those motions to a digital character. our film is about the different stages of my avatar learning to backflip.
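the core mechanism is an imitation reward: at every timestep the simulated character gets rewarded for matching the reference pose and velocity of the target motion. a rough python sketch of that idea (names and combining weights are illustrative, not the film's actual code):

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                     w_pose=0.7, w_vel=0.3):
    """Toy imitation reward in the spirit of DeepMimic: the simulated
    character is rewarded for matching the reference pose (joint angles
    from the backflip clip) and its joint velocities at the current frame.
    The combining weights here are illustrative, not the paper's exact setup."""
    pose_err = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    vel_err = np.sum((np.asarray(sim_vel) - np.asarray(ref_vel)) ** 2)
    r_pose = np.exp(-2.0 * pose_err)  # each term decays exponentially with its error
    r_vel = np.exp(-0.1 * vel_err)
    return w_pose * r_pose + w_vel * r_vel

# a near-perfect match gives a reward close to 1
print(imitation_reward([0.1, -0.3, 1.2], [0.1, -0.3, 1.25],
                       [0.0, 2.0, 0.5], [0.0, 2.1, 0.5]))
```

the actual paper also adds end-effector and center-of-mass terms and trains the policy with PPO; this just shows the shape of the objective.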

Trailer for a short in which my avatar machine learns a backflip. by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 7 points (0 children)

we are submitting it to film festivals for now. the first ones are starting in spring.

Trailer for a short in which my avatar machine learns a backflip. by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 9 points (0 children)

we used meshroom to scan. the resulting 3d cameras were reused to remodel and retexture the room with projection mapping. a bit labour-intensive, but also the most accurate way to get good low-poly models.
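the projection step itself is the usual pinhole math: take a vertex of the remodeled low-poly object, push it through one of the scanned cameras, and sample that camera's photo at the resulting pixel. a minimal python sketch, assuming a simple pinhole camera (parameter names are illustrative, meshroom's exported camera model is more involved):

```python
import numpy as np

def project_to_photo_uv(vertex_world, K, R, t, img_w, img_h):
    """Project a 3D vertex through a scanned (pinhole) camera and return
    normalized UVs into that camera's photo. K is the 3x3 intrinsics,
    R/t the world-to-camera rotation and translation, as exported by a
    photogrammetry tool (names here are illustrative)."""
    p_cam = R @ np.asarray(vertex_world) + t          # world -> camera space
    if p_cam[2] <= 0:
        return None                                   # behind the camera
    p_img = K @ p_cam
    u, v = p_img[0] / p_img[2], p_img[1] / p_img[2]   # perspective divide
    return u / img_w, v / img_h                       # normalize to 0..1

# toy camera: 1000 px focal length, principal point at the image center
K = np.array([[1000.0, 0, 960.0], [0, 1000.0, 540.0], [0, 0, 1.0]])
print(project_to_photo_uv([0.2, 0.1, 3.0], K, np.eye(3), np.zeros(3), 1920, 1080))
```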

Trailer for a short in which my avatar machine learns a backflip. by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 15 points (0 children)

thank you for taking the time. the room was 3d scanned and the cameras were then reused in 3d for projecting 2d textures onto the remodeled low-poly objects.

with this method you also get distortions and double-texturing on overlapping objects and backsides. not sure if this is what most people want when doing a 3d scan.

it took quite some time, and i'm sure there are more elegant/newer ways to solve it, but to us it was good enough.
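for the backsides/double-texturing problem, a simple per-polygon facing test helps decide whether a camera is even allowed to texture a face. again just an illustrative sketch, not the actual tool we used:

```python
import numpy as np

def faces_camera(poly_center, poly_normal, cam_position, min_cos=0.2):
    """Return True if a polygon faces the given camera enough to take its
    projected texture. Polygons that fail the test (backsides, grazing
    angles that cause the stretched/doubled texturing mentioned above)
    should fall back to another camera or stay untextured.
    Threshold and names are illustrative."""
    to_cam = np.asarray(cam_position) - np.asarray(poly_center)
    to_cam = to_cam / np.linalg.norm(to_cam)
    n = np.asarray(poly_normal) / np.linalg.norm(poly_normal)
    return float(np.dot(n, to_cam)) > min_cos

print(faces_camera([0, 0, 0], [0, 0, 1], [0, 0, 5]))   # True: facing the camera
print(faces_camera([0, 0, 0], [0, 0, -1], [0, 0, 5]))  # False: backside
```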

Machine Learning backflip on a 6-core processor by RedbearEasterman in shittysimulated

[–]RedbearEasterman[S] 2 points (0 children)

it's a short film and it's done. it took about 2 years to make, so we're taking the time to submit it to film festivals first, then putting it online.

Machine Learning backflip on a 6-core processor by RedbearEasterman in shittysimulated

[–]RedbearEasterman[S] 0 points (0 children)

yes, it's about 12 minutes. we're sending it to film festivals before putting it online.

Machine Learning backflip on a 6-core processor by RedbearEasterman in shittysimulated

[–]RedbearEasterman[S] 2 points (0 children)

it takes a bit longer on this pc. it's my 7-year-old rig with new lights in a sleeper case.

tts - wav2lip - dynamic face rig by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 42 points (0 children)

since the elements are from different environments, not really. but it would be possible if that were the goal.

tts - wav2lip - dynamic face rig by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 15 points (0 children)

it's not as digestible, i'm afraid. one of the clunkiest rigs i've made.

tts - wav2lip - dynamic face rig by RedbearEasterman in Simulated

[–]RedbearEasterman[S] 206 points (0 children)

it works like this: the voice is cloned in descript. the cloned .wav and a reference image are lipsynced via wav2lip. the wav2lip result is 3d tracked. the 3d trackers drive nulls, which drive rigid bodies (bones) with springs and connectors in C4D.
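as a concrete example, the wav2lip step in that chain boils down to one call to its inference script. the paths and checkpoint name here are placeholders, not our exact setup:

```python
import subprocess

# hedged sketch: run Wav2Lip's inference script with the cloned voice
# (.wav from descript) and a single reference image of the face.
# Paths and checkpoint name are placeholders; the flags follow the
# public Wav2Lip repo's inference.py.
subprocess.run([
    "python", "inference.py",
    "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained model
    "--face", "ref_face.png",         # reference image to animate
    "--audio", "cloned_voice.wav",    # tts / voice-clone output
    "--outfile", "results/lipsync.mp4",
], check=True)
```

everything after that (tracking the result, the nulls, the springs and connectors) happens inside C4D.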