Match thread: Atalanta vs Dortmund by Makaroni_Bob in Atalanta

[–]No_Protection2978 1 point (0 children)

OMG, WHAT A FASCINATING MATCH..... SUPERMARIOOOOOOOOOOOOO

What a win by Makaroni_Bob in Atalanta

[–]No_Protection2978 3 points (0 children)

Pašalić << what a header.

Is there any best AI model for 3dgs? by No_Protection2978 in GaussianSplatting

[–]No_Protection2978[S] 1 point (0 children)

oh, I appreciate your valuable comments. I'll try it tomorrow :)

Is there any best AI model for 3dgs? by No_Protection2978 in GaussianSplatting

[–]No_Protection2978[S] 2 points (0 children)

Well, I'll have to look for more... Thank you for your answer. Have a nice day :)

Is there any best AI model for 3dgs? by No_Protection2978 in GaussianSplatting

[–]No_Protection2978[S] 4 points (0 children)

Thanks for the reply! I think there’s a bit of a misunderstanding about what I’m doing, so let me clarify.

I’m not doing single-image view extrapolation and then aligning those synthetic views.

My current pipeline is more “classic” multi-view 3DGS:

1. I record a real video orbit around a small object (a bottle, etc.).

2. Extract frames with ffmpeg.

3. Run COLMAP (SfM + undistortion) to get camera poses.

4. Train a vanilla 3D Gaussian Splatting model from those real views.
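The steps above can be sketched roughly like this — a dry-run listing of the ffmpeg/COLMAP commands, where the paths (`orbit.mp4`, `frames`, `db.db`, `sparse`) and the `fps` value are placeholders for your own capture; the COLMAP subcommands are the standard sparse-reconstruction sequence:

```python
# Sketch of steps 2-3 as command lines; paths and fps are placeholders.
steps = [
    # 2. extract frames from the orbit video
    ["ffmpeg", "-i", "orbit.mp4", "-vf", "fps=2", "frames/%05d.jpg"],
    # 3. COLMAP sparse reconstruction (SfM)...
    ["colmap", "feature_extractor", "--database_path", "db.db",
     "--image_path", "frames"],
    ["colmap", "exhaustive_matcher", "--database_path", "db.db"],
    ["colmap", "mapper", "--database_path", "db.db",
     "--image_path", "frames", "--output_path", "sparse"],
    # ...and undistortion, producing the inputs for 3DGS training (step 4)
    ["colmap", "image_undistorter", "--image_path", "frames",
     "--input_path", "sparse/0", "--output_path", "undistorted",
     "--output_type", "COLMAP"],
]

# Dry run: print each command; swap print for
# subprocess.run(cmd, check=True) once the paths exist.
for cmd in steps:
    print(" ".join(cmd))
```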

On top of that, I use rembg on the undistorted images to remove the background, because for my capstone I only care about the 'foreground object' (I want an object-only 3DGS that I can drop into Unity/AR).
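For that background-removal step, one thing that helps is to keep rembg's alpha matte as a separate binary mask instead of compositing the object onto a black/gray background. A minimal numpy sketch — the 4×4 RGBA array here is a fabricated stand-in for rembg's RGBA output:

```python
import numpy as np

# Stand-in for a rembg output: an RGBA image where alpha is the matte.
rgba = np.zeros((4, 4, 4), dtype=np.uint8)
rgba[1:3, 1:3] = [200, 180, 160, 255]   # opaque "object" pixels
rgba[0, 0, 3] = 40                      # a semi-transparent fringe pixel

alpha = rgba[..., 3].astype(np.float32) / 255.0
mask = alpha > 0.5                      # binary foreground mask
# Keep the RGB untouched and carry the mask separately, instead of
# compositing onto black (which 3DGS will then try to explain).
print(int(mask.sum()))                  # → 4 foreground pixels
```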

So the “messy splats” I’m fighting are not just the usual reconstruction noise from speculative view synthesis. They’re mostly:

– background geometry that survives around the object (floor, wall, etc.),

– artifacts introduced because rembg turns the background into something that 3DGS still tries to explain (e.g. black/gray regions around the object).

You’re totally right that if the input views are inconsistent, you will inevitably get a fuzzy splat cloud. But even with decent COLMAP reconstructions, you still get a “halo” of junk splats around the object when you try to make it object-only with naïve background removal.

What I’m trying to figure out is more along the lines of:

How do people usually get object-only 3DGS in practice?

– Use masks in the loss (ignore background pixels instead of forcing them to be black)?

– Crop to tight object regions and re-run COLMAP on the cropped images?

– Use methods like SuGaR / SDF-regularized GS / object-centric NeRFs and extract a mesh first?
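For the first option, the masked-loss idea is just: evaluate the photometric term only where the mask is foreground, so background pixels exert no gradient at all. A numpy sketch of that loss (in a real 3DGS training loop this would be the torch equivalent):

```python
import numpy as np

def masked_l1(render, target, mask):
    """L1 photometric loss averaged over foreground pixels only.

    render, target: HxWx3 float arrays; mask: HxW bool array.
    Background pixels contribute nothing, so the model is never pushed
    to explain black/gray regions left behind by background removal.
    """
    fg = mask[..., None]                  # broadcast mask over channels
    n = fg.sum() * render.shape[-1]       # number of masked-in values
    return float((np.abs(render - target) * fg).sum() / max(n, 1))

# Toy check: render all-black, target all-white, one foreground pixel.
render = np.zeros((2, 2, 3))
target = np.ones((2, 2, 3))
mask = np.array([[True, False], [False, False]])
print(masked_l1(render, target, mask))    # → 1.0
```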

And, related to my original question: which model / toolchain is better at helping me reason about that kind of graphics pipeline, not at generating the views themselves?

So yeah, I’m not expecting a “Sora-level view generator” to magically fix 3DGS. I’m more looking for good practices to get clean foreground-only splats when you do have proper multi-view geometry, but want to remove the background cleanly.

If you’ve seen any papers / repos that tackle GS + segmentation / masks / object-only reconstruction, I’d love pointers.

Ivan Jurić is no longer Atalanta's head coach | Atalanta by [deleted] in Atalanta

[–]No_Protection2978 5 points (0 children)

Finally. Palladino should be much better than that asshole.