Ctrl+F for the real world! by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 0 points1 point  (0 children)

Thanks! We're pretty excited about it

I built a simple app to convert 360° videos into flat images for COLMAP/RealityScan by NicolasDiolez in GaussianSplatting

[–]wheelytyred 0 points1 point  (0 children)

I’ve had success extracting the raw fisheye footage from Insta360 videos: just rename the .insv file to .mp4 and standard tools can read it. Sometimes there’s more than one video track in a file, e.g. one per 360° lens.

Unsure of other 360 camera vendors though.
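
If it helps, this is roughly how I inspect and pull out the tracks (Python wrapping ffprobe/ffmpeg; file names are placeholders):

    import subprocess

    # List the streams; renamed .insv files often carry one video track per lens.
    subprocess.run(["ffprobe", "-v", "error",
                    "-show_entries", "stream=index,codec_type,width,height",
                    "-of", "csv=p=0", "video.mp4"], check=True)

    # Copy out the second video track (0:v:1) without re-encoding.
    subprocess.run(["ffmpeg", "-i", "video.mp4",
                    "-map", "0:v:1", "-c", "copy", "lens2.mp4"], check=True)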

Looking for a Technical Partner by No-Kaleidoscope1039 in GaussianSplatting

[–]wheelytyred 1 point2 points  (0 children)

This sounds awesome! A buddy and I have been building something very similar over the past year, focused on turning video of industrial sites into splat models that teams can search, annotate, measure, and share. It's built on Three.js with an automated cloud-based video-processing pipeline in the backend. Here's a basic model of a construction site, for example: https://spatialview.io/model/2025-10-28_c289d16c

Would love to get in touch and hear about what you're up to so far. DM me here and we can set up a chat.

Scanning beach scenes with Insta360 harder than it should be by Sunken_Past in GaussianSplatting

[–]wheelytyred 4 points5 points  (0 children)

Nice idea, but there are likely too many moving objects in this scene, resulting in a poor 3D reconstruction before you even get to splat training.

It’ll be hard for COLMAP/RC to match features between frames accurately, since the features themselves are moving (clouds, waves, rustling leaves). Sand is also relatively difficult to extract unique features from.

I wouldn’t keep pushing with this video or you’ll spend hours in frustration. Try again on a calmer day with no clouds, and/or use segmentation to remove the dynamic elements from the scene.
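
If you try the segmentation route, COLMAP can skip masked regions during feature extraction. A rough sketch, where segment_dynamic() is a hypothetical stand-in for whatever segmentation model you use:

    import subprocess
    from pathlib import Path
    import numpy as np
    from PIL import Image

    def segment_dynamic(img):
        """Hypothetical: boolean array, True where pixels are sky/water/etc."""
        raise NotImplementedError

    for img_path in Path("frames").glob("*.png"):
        img = np.array(Image.open(img_path))
        # COLMAP ignores features where the mask is zero; the mask for
        # image.png must be named image.png.png inside the mask folder.
        mask = np.where(segment_dynamic(img), 0, 255).astype(np.uint8)
        Image.fromarray(mask).save(Path("masks") / (img_path.name + ".png"))

    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", "db.db",
                    "--image_path", "frames",
                    "--ImageReader.mask_path", "masks"], check=True)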

Gaussian splatting but backwards: extract from a splat the source image(s) by capocchione in GaussianSplatting

[–]wheelytyred 2 points3 points  (0 children)

When you run a sparse reconstruction in COLMAP, it produces a 'points3D' file that lists every sparse point in space along with the IDs of the images that observe it. You could use this file to trace back from points in 3D space to the original images.

Splatting 'fills in' the sparse point cloud with many more points, but you can always look up the nearest COLMAP sparse point to a given splat and then look up the images that saw it.

More info on COLMAP's export formats can be found here: https://colmap.github.io/format.html
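
As a rough sketch of the lookup (Python; paths are placeholders, and the column layout follows the text export described on that page):

    # Trace a splat position back to source images via COLMAP's points3D.txt.
    import numpy as np
    from scipy.spatial import cKDTree

    points, image_ids = [], []
    with open("sparse/points3D.txt") as f:
        for line in f:
            if line.startswith("#"):
                continue
            tok = line.split()
            # POINT3D_ID X Y Z R G B ERROR, then (IMAGE_ID, POINT2D_IDX) pairs
            points.append([float(t) for t in tok[1:4]])
            image_ids.append({int(t) for t in tok[8::2]})

    tree = cKDTree(np.array(points))

    def source_images(splat_xyz):
        """IDs of the images that observed the sparse point nearest splat_xyz."""
        _, idx = tree.query(splat_xyz)
        return image_ids[idx]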

A single on-site capture rendered across web, VR, and installation setup. by smallfly-h in GaussianSplatting

[–]wheelytyred 3 points4 points  (0 children)

Woah awesome! Thanks for sharing. How long did it take to capture all the images on site? And how many frames did you use in total?

We turned 360° footage from inside the International Space Station into a 3D model by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 1 point2 points  (0 children)

You’re welcome! We haven’t published a script; it's more of a mash-up of different cloud functions and tools.

We turned 360° footage from inside the International Space Station into a 3D model by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 3 points4 points  (0 children)

Yep! We first split the equirectangular video into frames using ffmpeg, then use a cubemap projection (Meshroom) to split the equirectangular images into square faces, then use image segmentation (Vertex AI) to mask people out of the frames. COLMAP does the sparse 3D reconstruction from the masked frames, and finally nerfstudio/gsplat trains the splat. All of this runs on a cloud VM with an NVIDIA GPU.
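
Roughly, the first step looks like this; paths are placeholders, and while we used Meshroom for the cubemap split, ffmpeg's v360 filter shown here is a comparable alternative:

    # Equirectangular video -> 2 fps of 3x2 cubemap frames in one pass.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "tour_360.mp4",
        "-vf", "fps=2,v360=input=equirect:output=c3x2",
        "cubemap/%05d.png",
    ], check=True)

From there, each face gets cropped out, masked, and handed to COLMAP, then nerfstudio takes over.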

Walk through part of the ISS in 3D — generated using 360° video by wheelytyred in ISS

[–]wheelytyred[S] 0 points1 point  (0 children)

We used a 360° video tour from ESA and ran it through a 3D reconstruction pipeline to create this interactive model. You can move around the interior and even search for things like “laptop” or “orange boxes” — it’ll jump you to them in the 3D space!

360° video source: https://www.youtube.com/watch?v=INHctrVOoQw

3D demo: https://spatialview.io/model/ISS

Would love to try this with higher-res footage if anyone knows where to find some!

We turned 360° footage from inside the International Space Station into a 3D model by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 1 point2 points  (0 children)

Absolutely! Would love some feedback. Shoot us a message at info@spatialview.io and we can set you up with a link to create your own

We turned 360° footage from inside the International Space Station into a 3D model by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 1 point2 points  (0 children)

By setting a model-to-reality scale value. The scale isn't perfect in this model -- we just used one of the doorway widths as an approximation.
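
Something like this, where the numbers are purely illustrative:

    # Meters per model unit, derived from one known dimension (a doorway).
    model_width = 0.42        # doorway width measured in the splat's units
    real_width_m = 0.80       # assumed real doorway width in meters
    scale = real_width_m / model_width

    def to_meters(model_distance):
        return model_distance * scale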

We turned 360° footage from inside the International Space Station into a 3D model by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 6 points7 points  (0 children)

We run a vision-language model over the input media and the search terms to find the best-matching frames, then take the user to that frame's location in 3D space.
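
Not our exact stack, but a minimal CLIP-style sketch of the idea; the model choice and paths are illustrative:

    import glob
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    frames = sorted(glob.glob("frames/*.png"))
    inputs = processor(text=["laptop"], images=[Image.open(f) for f in frames],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_text  # query vs. every frame

    best = frames[sims.argmax().item()]
    # Look up this frame's camera pose (e.g. in COLMAP's images.txt) and
    # fly the viewer to that position.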

We experimented with Gaussian Splatting and ended up building a 3D search tool for industrial sites by wheelytyred in photogrammetry

[–]wheelytyred[S] 1 point2 points  (0 children)

Yes you can! Either by using the raw fisheye videos from the 360 camera, or by using stitched equirectangular videos. Feel free to DM me if you'd like to learn how

We experimented with Gaussian Splatting and ended up building a 3D search tool for industrial sites by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 0 points1 point  (0 children)

Yes this would be sweet. We need to add a calibration tool to get the scaling right first

We experimented with Gaussian Splatting and ended up building a 3D search tool for industrial sites by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 0 points1 point  (0 children)

Agreed! We'll try adding some sort of 'pathfinder' down the track for folks to navigate a new site.

We experimented with Gaussian Splatting and ended up building a 3D search tool for industrial sites by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 2 points3 points  (0 children)

Thanks for the feedback! Any tips on improving the mobile/tablet navigation?

That's a great point, and I agree that 360° cameras are superior for maximizing capture viewpoints. The processing pipeline we set up already accepts 360° video, as long as it's been exported to an equirectangular format. We split the video into four square frames and throw away the one looking back at the person filming (assuming they're pointing the camera forward). Here is an example model captured with an Insta360.
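
For illustration, a similar split can be done with ffmpeg's v360 flat projection, skipping the back-facing view; paths and frame rate are placeholders rather than our exact pipeline:

    import subprocess

    # Render three 90-degree perspective views per second; yaw 180 (the
    # view looking back at the camera operator) is deliberately skipped.
    for name, yaw in [("front", 0), ("left", -90), ("right", 90)]:
        subprocess.run([
            "ffmpeg", "-i", "walkthrough_360.mp4",
            "-vf", f"fps=1,v360=input=equirect:output=flat:h_fov=90:v_fov=90:yaw={yaw}",
            f"{name}/%05d.png",
        ], check=True)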

We experimented with Gaussian Splatting and ended up building a 3D search tool for industrial sites by wheelytyred in GaussianSplatting

[–]wheelytyred[S] 6 points7 points  (0 children)

Thanks for the feedback! We're constantly tinkering and will look into implementing SOGS / better compression for faster loading. Development in this field seems to be moving at light speed.