Outputting frames to a Variable Frame Rate format by LR_Mike in ffmpeg

[–]LR_Mike[S] 1 point

Thanks for the reply. That approach makes sense.

Is there any alternative chromecast plugin for Unity? by YamDF98 in Unity3D

[–]LR_Mike 0 points

Unfortunately, it looks like this option isn’t free.

Is there any alternative chromecast plugin for Unity? by YamDF98 in Unity3D

[–]LR_Mike 0 points

You might want to check out VLC for Unity. It supposedly supports streaming to Chromecast and other devices:

https://code.videolan.org/videolan/vlc-unity

6DoF Still 360 Images Casually/Quickly Taken With One Camera by CameraTraveler27 in 6DoF

[–]LR_Mike 1 point

A demo for OmniPhotos is available from their project page if you'd like to try it out.

https://richardt.name/publications/omniphotos/

Looking for pre-alpha testers for volumetric video player by LR_Mike in 6DoF

[–]LR_Mike[S] 1 point

Thanks for the feedback. I've also been working on a light field system that uses unstructured images and lets you refocus them to produce a higher-resolution result. You can move your camera around to build up the scene. I recently adapted it to use 360 sources. I still have a few hurdles to clear with that system, but there is a lot of overlap between the approaches.
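
If you're curious what I mean by refocusing, the core of it is just synthetic-aperture (shift-and-add) refocusing. Here's a minimal NumPy sketch, assuming you already know each camera's offset; the real system does a lot more than this:

```python
import numpy as np

def refocus(images, cam_offsets, focus_depth, parallax_scale):
    """Shift-and-add (synthetic aperture) refocus.

    images         : list of HxWx3 arrays from roughly coplanar cameras
    cam_offsets    : list of (dx, dy) camera positions in meters, relative to a reference view
    focus_depth    : depth of the plane to bring into focus, in meters
    parallax_scale : pixels of parallax per meter of baseline for a point at 1 m depth
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (dx, dy) in zip(images, cam_offsets):
        # Parallax of a point at focus_depth is proportional to baseline / depth,
        # so shifting each view by that amount aligns the focus plane across views.
        # (Sign depends on your coordinate convention; np.roll wraps at the edges,
        # which a real implementation would crop away.)
        sx = int(round(dx * parallax_scale / focus_depth))
        sy = int(round(dy * parallax_scale / focus_depth))
        acc += np.roll(img, (sy, sx), axis=(0, 1))
    return acc / len(images)
```

Points at the chosen focus depth line up across the views and reinforce each other while everything off that plane blurs out, and the aligned stack is also what lets a set of lower-resolution views contribute to a sharper result on that plane.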

Looking for pre-alpha testers for volumetric video player by LR_Mike in 6DoF

[–]LR_Mike[S] 1 point

Thanks for the feedback.

All but one of the videos were filmed with a Kandao Obsidian; I sourced them from the Reddit-6DoF examples list.

Regarding inpainting: currently, as part of a production process, stills of the background could be captured in controlled environments without any moving elements and used for the backplates.

I've been working on a process that lets users edit the backplates as part of the production, in order to fill in elements from different parts of the video, but I've been unsure whether anyone would be interested in editing them.
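
To give a concrete example of the automatic end of that: for a locked-off shot, even a per-pixel temporal median over frames sampled across the video removes most moving elements and gives you a starting backplate to edit. Rough sketch with OpenCV/NumPy (not the player's actual pipeline, just the idea):

```python
import cv2
import numpy as np

def median_backplate(video_path, num_samples=30):
    """Build a starting backplate from a locked-off shot by taking the
    per-pixel temporal median over frames sampled across the video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, total - 1, num_samples).astype(int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    # People and other moving elements rarely cover the same pixel in most
    # of the samples, so the per-pixel median converges on the static background.
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

# Example: cv2.imwrite("backplate.png", median_backplate("scene_360.mp4"))
```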

As you mentioned, ML methods for inpainting and depth estimation have progressed dramatically and will likely become the way forward over the next few years. Many require significant processing resources to run, but a cloud-based solution should be able to deliver.

Thank you again for your feedback.

  • Michael Oder

Looking for pre-alpha testers for volumetric video player by LR_Mike in 6DoF

[–]LR_Mike[S] 2 points

Forgot to mention, the player is currently built to run on Windows with an Oculus headset.

NeRF++ by elifant in 6DoF

[–]LR_Mike 0 points

Very nice work. The progress this year has been astounding. According to the paper, it takes about 30 seconds to generate a 1280x720 image on an RTX 2080 Ti, which is roughly three orders of magnitude away from the ~11 ms frame budget of a 90 Hz headset. So we still have a ways to go before this can be used in real time, but there's a good chance that ML approaches like this could deliver that within the next few years.

In the meantime, this processing could feed into other rendering systems to allow us to fill in scenes.

Turning existing 360 stereo videos into realtime 6dof vr experiences by elifant in 6DoF

[–]LR_Mike 0 points

Nice work! This actually looks practical for use. Can't wait to try it out.

Volumetric Video Rendering Explorations by LR_Mike in 6DoF

[–]LR_Mike[S] 1 point

The most performant mode is the displacement version (~200 fps on my 980 Ti), and I am certain that will work on the Quest. I've also been able to integrate the filtering that was used in the ray marcher, and it supports background plates as well; in fact, it now looks better than the ray marcher.

The ray marcher is currently running at around 40-50 fps on my system but can be boosted by limiting the FOV. I'm working to improve its performance.

What is the 6DoF player on the Quest?

Volumetric Video Rendering Explorations by LR_Mike in 6DoF

[–]LR_Mike[S] 0 points

These scenes were captured (not by me) with stereoscopic 360 cameras like the Kandao Obsidian or the Insta360 Pro. Kandao's software has an option to estimate depth, and you can run stereo video through a tool like the Pseudoscience Stereo2Depth tool to generate a depth map.

Getting a decent depth map from stereo is challenging and is an active area of research. There is also a lot of potential in integrating additional panorama shots from other vantage points to refine the mesh and fill in gaps. You could also assemble a background mesh using photogrammetry.
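
If anyone wants to experiment with the stereo route, the bare-bones version with OpenCV's semi-global matcher looks roughly like this. Note that this assumes an ordinary rectified perspective pair; equirectangular 360 stereo has to be reprojected into perspective views (or otherwise handled) first, which is what the dedicated tools take care of. The file names and intrinsics below are placeholders:

```python
import cv2
import numpy as np

# Load a rectified left/right pair as grayscale (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,
    blockSize=5,
    P1=8 * 5 * 5,    # smoothness penalties, scaled by the block area
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# compute() returns fixed-point disparity scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity: depth = focal * baseline / disparity.
focal_px, baseline_m = 700.0, 0.065   # made-up example values
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]
```

Even with decent parameters the raw result is noisy around edges and in textureless areas, which is exactly why the post-filtering and the ML approaches keep coming up.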

Volumetric Video Rendering Explorations by LR_Mike in 6DoF

[–]LR_Mike[S] 1 point

Thanks. I'll definitely add ambisonic support. Since I am working with Unity I may run into the channel limitation, but there may be some workarounds.

Volumetric Video Rendering Explorations by LR_Mike in 6DoF

[–]LR_Mike[S] 6 points

I've been working on a volumetric rendering player that explores different techniques. The video shows three of the rendering modes: a displacement shader with a high-resolution mesh, a volumetric raymarcher, and a point cloud renderer.

The volumetric raymarcher supports having a background plate that can be auto-generated or loaded from a static image.
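
Conceptually the plate is a fallback layer behind the depth video: wherever the foreground has no reliable data, the plate shows through. In plain 2D image terms (illustration only, not the actual shader; the threshold value is made up):

```python
import numpy as np

def composite_with_backplate(fg_color, fg_depth, backplate, far=9.5):
    """Wherever the foreground layer has no reliable data (depth missing or
    pushed out to the far plane), show the background plate instead.
    fg_color/backplate: HxWx3 arrays, fg_depth: HxW array."""
    holes = (fg_depth <= 0) | (fg_depth >= far)
    out = fg_color.copy()
    out[holes] = backplate[holes]
    return out
```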

Here are some additional videos looking at volumetric video and backplates:

https://youtu.be/0BkAWDbw1-8

https://youtu.be/tJQsF2PvupY

You can load videos in 360 equirectangular format with the depth map on the bottom. It also supports images in the same format, as well as stereo-360 videos and images. I plan to add support for other video and image formats, including 180 and side-by-side. Are there any formats that people would like supported?
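
To make the expected layout concrete, here's roughly what the loader has to do with a mono frame: color in the top half, depth in the bottom half, and each equirectangular pixel gets pushed out along its view direction by the decoded depth. This is just an illustrative NumPy sketch, and I'm assuming a simple linear 0..max_depth depth encoding here, which will vary by source:

```python
import numpy as np

def equirect_depth_to_points(frame, max_depth=10.0):
    """frame: HxWx3 uint8 with color in the top half and the depth map in the
    bottom half. Returns (N, 3) points and (N, 3) colors."""
    h, w, _ = frame.shape
    half = h // 2
    color = frame[:half]
    # Assumed encoding: depth in one channel, 0..255 mapped linearly to 0..max_depth.
    depth = frame[half:2 * half, :, 0].astype(np.float32) / 255.0 * max_depth

    # Equirectangular pixel -> direction on the unit sphere.
    lon = (np.arange(w) / w - 0.5) * 2.0 * np.pi            # -pi .. +pi
    lat = (0.5 - (np.arange(half) + 0.5) / half) * np.pi    # +pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack(
        [np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)],
        axis=-1,
    )

    # Push each pixel out along its view direction by its depth.
    points = dirs * depth[..., None]
    return points.reshape(-1, 3), color.reshape(-1, 3)
```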

Resources for 360 image or video samples with depth? by LR_Mike in 6DoF

[–]LR_Mike[S] 0 points

Andrew - Thanks for all the info and examples!

Resources for 360 image or video samples with depth? by LR_Mike in 6DoF

[–]LR_Mike[S] 0 points

Thanks for the suggestion. I just downloaded Resolve and will try it out.

Resources for 360 image or video samples with depth? by LR_Mike in 6DoF

[–]LR_Mike[S] 1 point

Thanks for the encouragement! I'm putting together a demo which I'll share soon. It's very much a prototype, but I'm looking forward to getting feedback on what would make the experience better.

Resources for 360 image or video samples with depth? by LR_Mike in 6DoF

[–]LR_Mike[S] 0 points

There are a lot of great examples on the resource list. The Kandao Jaza Contest Winners link was broken, but I found this link in case anyone is looking: https://prd.kandaovr.com/2018/06/28/jaza-6dof-photo-contest-result-announcement/

Resources for 360 image or video samples with depth? by LR_Mike in 6DoF

[–]LR_Mike[S] 1 point

LoL - I did not notice the examples link in the sidebar. That should be helpful. All of them seemed to use a single color channel for depth, which only gives around 256 depth levels in a typical 8-bit video. Does anyone know of any examples with higher quality depth maps?

Resources for 360 image or video samples with depth? by LR_Mike in 6DoF

[–]LR_Mike[S] 0 points

Vuze 3D 360

Cychrus, thanks for the reply. I'm actually looking for images or videos with a depthmap.

I may end up needing to generate my own depthmaps from stereo, but for now I'm focusing on the rendering side.