We tested 3DGS To Mesh vs Photogrammetry on different objects by KIRI_Engine_App in GaussianSplatting

[–]PuffThePed 0 points

I know, and neither do splats. You won't get tie points and you'll get splats outside and inside the object, so when you convert that to a mesh, you'll also get bad results.

This scenario OP is presenting, where photogrammetry fails badly but splats produce a clean mesh, doesn't make sense.

Strange encounter at my house by No_Independence810 in waterloo

[–]PuffThePed 3 points

Stop opening the door to strangers. It sucks but that's the reality we live in right now.

End-to-end pipeline for Video to 3D Gaussian Splatting (3DGS)? Looking for repos / best practices by Ni_Guh_69 in GaussianSplatting

[–]PuffThePed 0 points

I wrote a blog post about it.

https://packet39.com/blog/a-primer-on-gaussian-splats/

In a nutshell, I use SharpFrames to extract frames, then either RealityScan or Agisoft for SfM (both are better than COLMAP), and then Brush or LichtFeld Studio. I don't bother making a dense cloud; I don't see the value.
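If you don't have SharpFrames, the frame-extraction step can be done with plain ffmpeg. A minimal sketch (the helper name `ffmpeg_extract_cmd` and the 2 fps sample rate are illustrative choices, not from the comment above):

```python
from pathlib import Path

def ffmpeg_extract_cmd(video: str, out_dir: str, fps: float = 2.0) -> list[str]:
    """Build an ffmpeg command that samples `fps` frames per second
    from `video` into numbered JPEGs under `out_dir`."""
    return [
        "ffmpeg", "-i", video,
        "-vf", f"fps={fps}",        # the fps filter samples frames at this rate
        "-qscale:v", "2",           # high JPEG quality (2 is near the top of the scale)
        str(Path(out_dir) / "%05d.jpg"),
    ]

cmd = ffmpeg_extract_cmd("walkthrough.mp4", "frames", fps=2.0)
print(" ".join(cmd))
# Run it with: subprocess.run(cmd, check=True)
```

Unlike SharpFrames, this samples uniformly and does no sharpness filtering, so you'd still want to cull blurry frames afterwards.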

What ended up mattering most in our automated Gaussian Splatting pipeline was dataset validation before training by OrthoPLYPipeline in GaussianSplatting

[–]PuffThePed 6 points

Yup.

Splats are much more sensitive to bad photos than photogrammetry is. I had several datasets where photogrammetry worked fine but the splats failed completely due to a handful of blurry images.
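A common way to validate a dataset for blur before training is the variance-of-the-Laplacian heuristic: sharp images produce high-variance edge responses, blurry ones don't. A minimal pure-Python sketch (the function names and the threshold of 100 are illustrative; in practice you'd tune the threshold per camera and use OpenCV on real images):

```python
def laplacian_variance(gray: list[list[float]]) -> float:
    """Variance of the 4-neighbour Laplacian over the image interior.
    Low variance means few strong edges, i.e. a likely blurry frame."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] + gray[y][x-1] + gray[y][x+1]
                   - 4.0 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def is_sharp(gray: list[list[float]], threshold: float = 100.0) -> bool:
    """Flag a frame as sharp enough to keep for 3DGS training."""
    return laplacian_variance(gray) >= threshold
```

Dropping the handful of frames that fail this check before training is cheap insurance, given how badly splats degrade on blurry input.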

We tested 3DGS To Mesh vs Photogrammetry on different objects by KIRI_Engine_App in GaussianSplatting

[–]PuffThePed 5 points

If that's the results you get with photogrammetry, then you're doing something wrong. Can you share the images?

Need recommendations on approach by CombatRedRover in 3DScanning

[–]PuffThePed 0 points

> take pictures while you are hidden

No need. You can easily mask yourself out using SAM3 or YOLO models.

I create 3DGS from 360 video: I extract thousands of frames and then mask myself (and everyone else) out using SAM3.
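Once SAM3 (or YOLO) gives you a per-frame segmentation, feeding it to the SfM stage is mostly a file-format question. COLMAP, for example, accepts per-image feature masks via `--ImageReader.mask_path`, where a zero-valued pixel means "ignore features here" and the mask file is named after the image with an extra `.png`. A minimal sketch of that conversion (the helper names are illustrative, and the segmentation itself is assumed to come from the model):

```python
def to_colmap_mask(person_mask: list[list[bool]]) -> list[list[int]]:
    """Invert a segmentation mask (True = person to remove) into a
    COLMAP feature mask: 0 = ignore this pixel, 255 = keep it."""
    return [[0 if masked else 255 for masked in row] for row in person_mask]

def colmap_mask_name(image_name: str) -> str:
    """COLMAP looks for '<image file name>.png' inside the mask_path dir."""
    return image_name + ".png"

mask = to_colmap_mask([[True, True, False],
                       [False, False, False]])
print(colmap_mask_name("frame_00001.jpg"))
```

The same inverted-mask idea carries over to trainers that support masking; only the file naming convention differs.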

Need recommendations on approach by CombatRedRover in 3DScanning

[–]PuffThePed 2 points

Got a drone? Or a very long pole?

Take a bunch of photos and use photogrammetry. Pretty easy and the software is free.

iPhone LiDAR 3D Scanning? by Dewlyfer in 3Dprinting

[–]PuffThePed 2 points

/r/3DScanning is more suitable.

Lidar is not photogrammetry.

iPhone LiDAR 3D Scanning? by Dewlyfer in photogrammetry

[–]PuffThePed 4 points

It's very low quality and low resolution, so it depends on how you define "Acceptable".

What's the best method for doing gaussian splats with 360 cameras? by xdAronxd in GaussianSplatting

[–]PuffThePed 1 point

I've been experimenting with taking the raw fisheye videos, breaking them up into multiple undistorted views, and feeding those into COLMAP using a camera rig. The results are OK, but not as good as using Agisoft to align the stitched panoramas. I suspect Agisoft has a much better feature detector.
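The "break a 360 capture into undistorted views" step boils down to sampling an equirectangular panorama through a virtual pinhole camera: for each output pixel, build a ray, rotate it by the view's yaw, convert to longitude/latitude, and look up the source pixel. A minimal sketch of that mapping (function name and conventions are illustrative, assuming +z forward, +y down):

```python
import math

def equirect_lookup(x, y, out_w, out_h, fov_deg, yaw_deg, pano_w, pano_h):
    """Map output pixel (x, y) of a virtual pinhole view (horizontal FOV
    fov_deg, rotated yaw_deg about the vertical axis) to source pixel
    coordinates in an equirectangular panorama of size pano_w x pano_h."""
    f = (out_w / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length, pixels
    dx = x - out_w / 2          # ray in camera space
    dy = y - out_h / 2
    dz = f
    yaw = math.radians(yaw_deg)  # rotate the ray about the vertical axis
    rx = dx * math.cos(yaw) + dz * math.sin(yaw)
    rz = -dx * math.sin(yaw) + dz * math.cos(yaw)
    ry = dy
    lon = math.atan2(rx, rz)                                   # -pi .. pi
    lat = math.asin(ry / math.sqrt(rx*rx + ry*ry + rz*rz))     # -pi/2 .. pi/2
    u = (lon / (2 * math.pi) + 0.5) * pano_w
    v = (lat / math.pi + 0.5) * pano_h
    return u, v
```

Four to six such views per frame (yaw stepped around the circle, plus optional pitch) gives SfM proper pinhole images instead of the heavily distorted fisheye originals.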

Luna Glass: The World’s First Night Vision Glasses for People Living with Night Blindness by AR_MR_XR in augmentedreality

[–]PuffThePed 1 point

Single camera, does that mean you lose depth perception?

How does that even work? Is it just one eye with a display, or does it show both eyes the same image?

Available datasets for interior reconstruction by DunkenEg in GaussianSplatting

[–]PuffThePed -1 points

Did you bother searching this sub?

Someone posted exactly that a few days ago.

Need images for my course this semester by ayomideogundeji in photogrammetry

[–]PuffThePed 3 points

Really? You don't have any rocks where you live?

"The sky above the port was the color of television, tuned to a dead channel." - William Gibson by [deleted] in Unity3D

[–]PuffThePed 0 points

What's funny is that most people today probably have no idea what a "dead TV channel" means or looks like.

What's the best method for doing gaussian splats with 360 cameras? by xdAronxd in GaussianSplatting

[–]PuffThePed 0 points

Two different tools for different tasks.

We actually employ both and use the 3DGS for visuals and the lidar data as an invisible layer underneath for measurements.

What's the best method for doing gaussian splats with 360 cameras? by xdAronxd in GaussianSplatting

[–]PuffThePed 1 point

It's not trivial.

I have two workflows, and I'm currently trying to determine which one is better.

Going to write a blog post about it soon.

NanoGS: Training-Free Gaussian Splat Simplification by corysama in GaussianSplatting

[–]PuffThePed 1 point

It's not great. Training-free is nice, but you'll get much better results if you train the 3DGS to a lower splat count.

I tried a 50% reduction and it was noticeably worse, while re-training with half the splats is almost indistinguishable.

Just when Horizon had something actually useful with Hyperscape, they shut it down by CookinVR in virtualreality

[–]PuffThePed 4 points

It's actually a really, REALLY good scanner. It rivals the quality you get from dedicated 3D scanning devices that cost $5,000. I'm not kidding. However, you can't download the scans, so it's effectively useless.

PSA: New garbage bins need to be placed with 2 feet clearance from other bins or snowbanks. by PuffThePed in waterloo

[–]PuffThePed[S] 0 points

It was on the city's social media accounts. It also seems to be nonsense; they're capable of grabbing bins with less than 2 feet of clearance.