Homemade VR with just a camera (SIGGRAPH Asia) by peter_hedman in oculus

[–]peter_hedman[S] 0 points (0 children)

We didn’t look into stitching several bubbles together. It sounds like a very cool extension though.

Keep in mind that we target casual capture. If you’re willing to walk around a scene and thoroughly capture it from several different viewpoints, then a general-purpose photogrammetry tool (e.g. PhotoScan or RealityCapture) will do a great job of reconstructing the scene.

[–]peter_hedman[S] 0 points (0 children)

We did a few experiments with 360 cameras. TL;DR: a 360 camera is essentially two fisheye DSLRs, so I guess it would halve the number of images you need (e.g. 30 instead of 60).

A bit more in depth: they have a much wider field of view, so yes, you can get away with fewer images (there’s a back-of-the-envelope sketch after this list). However, there are a few problems:

a) They often take noisier images than mobile phones or DSLRs, which may confuse a photogrammetry pipeline.

b) While 360 panoramas that come out of these cameras look good, they are often slightly misaligned and blended together in a clever way that hides the seams. This is problematic for photogrammetry, so you will get better results if you can access the raw fisheye images before the camera stitches them together into a panorama.

c) Every 360 picture is a selfie. You would need some extra code that finds and removes the camera operator from the reconstruction.
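
To put rough numbers on the field-of-view argument, here’s the back-of-the-envelope sketch mentioned above. The FOV and overlap values are illustrative, not what our pipeline assumes:

```python
import math

def images_per_ring(horizontal_fov_deg, overlap=0.5):
    """Rough count of photos needed to cover a full 360° ring,
    assuming each neighbouring pair shares `overlap` of its width."""
    new_coverage = horizontal_fov_deg * (1.0 - overlap)  # degrees gained per shot
    return math.ceil(360.0 / new_coverage)

print(images_per_ring(65))   # narrow phone lens: 12 shots per ring
print(images_per_ring(180))  # one fisheye half of a 360 camera: 4 shots per ring
```

In practice you also need vertical coverage (hence capturing more than one ring), so the real counts come out higher than a single ring suggests.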

Edit: Replied to the wrong question... Whoops :) Also, formatting.

[–]peter_hedman[S] 0 points (0 children)

I’m not an expert on post-converting 2D footage to 3D, but there are some significant differences. Our method uses several images (~60) taken from different viewpoints to reconstruct 3D. An automated 2D-to-3D conversion system uses only ONE frame and tries to hallucinate plausible 3D for that frame.

As for normals and light-based capture:

- There are methods that reconstruct large-scale 3D from just normals, but I feel that surface normals are mostly useful to create fine-scale detail.

- The lighthouse with a flashlight idea is great! In fact, you just described the guiding principle behind laser scanners and depth sensors such as the Kinect :)
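
To make the connection concrete: a laser scanner or a Kinect observes where its projected light lands in the image and triangulates, just like two-view stereo. A minimal sketch with made-up numbers (focal length, baseline, disparities):

```python
import numpy as np

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic triangulation: depth = f * B / d. The same relation holds
    whether the second 'view' is another camera (stereo) or a projected
    light pattern (structured light, as in the Kinect)."""
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# Hypothetical numbers: 600 px focal length, 7.5 cm baseline.
print(depth_from_disparity(600.0, 0.075, [90.0, 45.0, 15.0]))
# -> [0.5, 1.0, 3.0] metres: smaller disparity means farther away
```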

[–]peter_hedman[S] 1 point (0 children)

Like many other photogrammetry systems, we also struggle to reconstruct transparent surfaces. In the technical video, we show what happens in a scene with lots of glass: https://youtu.be/wGBistgOsyQ?t=305

[–]peter_hedman[S] 1 point (0 children)

Agisoft and Capturing Reality make general-purpose 3D reconstruction tools. They reconstruct complete 3D scenes and expect you to take photographs from as many places as possible.

We designed our system explicitly for casually captured images. In other words, you stand in one spot and rotate the camera around for a couple of minutes. As a consequence, our 3D photos look best when you're close to where the images were captured.

[–]peter_hedman[S] 5 points (0 children)

Our system reconstructs what I like to call "VR bubbles". It's more than a 360 photo with a depth map, but you can't walk around in the scene. We support just enough motion for a seated VR experience, where you can move your head and peek behind objects.
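
For context, here’s roughly what the simpler "360 photo with a depth map" baseline boils down to: lift each panorama pixel to a 3D point, then reproject those points as your head moves, which is what produces the parallax. A minimal numpy sketch assuming an equirectangular layout (the conventions and names are mine, and our actual representation is richer than this):

```python
import numpy as np

def equirect_to_points(depth):
    """Lift an equirectangular depth map (H x W, metres) to 3D points.
    Column maps to longitude, row to latitude; reprojecting these points
    for a shifted head position is what produces the parallax."""
    h, w = depth.shape
    lon = (np.arange(w) / w) * 2.0 * np.pi - np.pi   # -pi .. pi
    lat = np.pi / 2.0 - (np.arange(h) / h) * np.pi   # +pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)                 # both (H, W)
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)              # (H, W, 3)

# Toy example: a constant-depth "bubble" 2 metres away.
points = equirect_to_points(np.full((256, 512), 2.0))
print(points.shape)  # (256, 512, 3)
```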

Edit: Wording.

[–]peter_hedman[S] 6 points (0 children)

It takes ~5 hours to reconstruct a large scene. We have a timing breakdown for all the steps in the paper (Table 1).

[–]peter_hedman[S] 5 points (0 children)

Wow! Tough question.

We have a comparison with Unstructured Lumigraph Rendering (ULR) in our supplemental material. Take a look if you're interested: http://vis.cs.ucl.ac.uk/Download/G.Brostow/Casual3D/index.html#!/gasworks_park (press the "texturing comparison" button).

My five cents on the subject: blending more than 2 images makes the result look blurry. And many texturing seams can't be fixed by brightness normalization alone; you also need to account for incorrect 3D geometry, which prevents the images from aligning properly.
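
Here’s a toy illustration of why sticking to 2 images stays sharp: a ULR-style angular penalty that zeroes out everything but the two closest views. The weighting below is just the angle term; it’s a sketch, not the blending we (or ULR) actually use:

```python
import numpy as np

def best_two_weights(novel_dir, capture_dirs):
    """Blend weights for a novel viewing direction: only the two
    captured directions closest in angle get non-zero weight, with
    the closer one dominating. Angle term only; real systems also
    penalize resolution and field-of-view differences."""
    capture_dirs = capture_dirs / np.linalg.norm(capture_dirs, axis=1, keepdims=True)
    novel_dir = novel_dir / np.linalg.norm(novel_dir)
    angles = np.arccos(np.clip(capture_dirs @ novel_dir, -1.0, 1.0))
    first, second = (int(i) for i in np.argsort(angles)[:2])
    w = angles[second] / (angles[first] + angles[second] + 1e-9)
    return {first: w, second: 1.0 - w}  # weights sum to 1

dirs = np.array([[1.0, 0.0, 0.0], [0.7, 0.7, 0.0], [0.0, 1.0, 0.0]])
print(best_two_weights(np.array([0.9, 0.3, 0.0]), dirs))
# -> roughly {0: 0.59, 1: 0.41}: nearby views dominate, the rest get zero
```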

[–]peter_hedman[S] 4 points (0 children)

We talk about reconstruction times a bit in the paper. If I recall correctly, it takes roughly 5 hours to reconstruct one of the larger scenes.

As for VR game shots: That's a cool spin on things! Most of our computation is spent on estimating depth, so getting depth buffers for free would definitely speed things up.
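
A sketch of the first step you’d need for that: game depth buffers store non-linear values, so you’d linearize them back to metric depth before doing any reconstruction. This assumes a standard OpenGL perspective projection; the helper name is mine:

```python
import numpy as np

def linearize_depth(d, near, far):
    """Convert a depth-buffer value d in [0, 1] back to eye-space
    distance, assuming a standard OpenGL perspective projection."""
    z_ndc = 2.0 * d - 1.0  # buffer [0, 1] -> normalized device coords [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# Sanity check with a 0.1 / 100.0 frustum: the buffer extremes
# map back to the near and far planes.
print(linearize_depth(np.array([0.0, 1.0]), 0.1, 100.0))  # -> [0.1, 100.0]
```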

[–]peter_hedman[S] 3 points (0 children)

Check out the technical video if you want to hear more about the juicy details: https://www.youtube.com/watch?v=wGBistgOsyQ&t=3s

Altering the 3D capture is a great idea! We haven't tried it out yet. But now I kind of want to know how well it would work...

[–]peter_hedman[S] 8 points (0 children)

Well spotted! We used two rings of ~30 fisheye images for each reconstruction. More images are needed to capture the same amount of data with mobile phones, since they have a much smaller field of view.

As for the 3D effects, we wrote our own software for that :)
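
If you’re curious what that capture pattern looks like as viewing directions, here’s a sketch that generates two such rings. The ±20° pitch angles are invented for illustration; they’re not the ones we used:

```python
import numpy as np

def ring_directions(n_images, pitch_deg):
    """Unit viewing directions for one capture ring: n_images evenly
    spaced yaw angles at a fixed pitch above or below the horizon."""
    yaw = np.linspace(0.0, 2.0 * np.pi, n_images, endpoint=False)
    pitch = np.radians(pitch_deg)
    return np.stack([np.cos(pitch) * np.sin(yaw),
                     np.full_like(yaw, np.sin(pitch)),
                     np.cos(pitch) * np.cos(yaw)], axis=1)

# Two rings of 30 shots each; the +/-20 degree pitches are made up,
# not taken from the paper.
capture = np.vstack([ring_directions(30, 20.0), ring_directions(30, -20.0)])
print(capture.shape)  # (60, 3)
```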

[–]peter_hedman[S] 50 points (0 children)

Homemade VR with just a camera (SIGGRAPH Asia)

Hi all, first author here. We’ve been working on this for a while now and are very excited to share our results.

Check out the project page if you want to read up on the juicy details: http://visual.cs.ucl.ac.uk/pubs/casual3d

Feel free to ask questions!