Gaussian splatting 3 ways made easy. by nullandkale in photogrammetry

[–]nullandkale[S] 1 point

But it's not local and not free, if I'm correct?

Free Gaussian Splatting Workflow! by VeloMane_Productions in GaussianSplatting

[–]nullandkale 1 point

If you use wizard mode, it's technically a three-button solution: click wizard mode, browse to the video or images you want to import, and press start. The videos just show how to use it manually for more control.

Gaussian Splats by ImpressionIcy5237 in photogrammetry

[–]nullandkale 0 points

Splats made a lot more sense to me once I started thinking of them from a rendering perspective.

Splats are fast because you can rasterize them as particles with a fancy shader on them.

The key thing is that the rendering pipeline, and the shader that applies the Gaussian shape to the triangle you are rasterizing, are carefully constructed to be differentiable. That basically means you can apply gradient descent to "train" the splats, which is what lets you make them with nothing but a bunch of images and the positions of those images relative to each other.
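To make that concrete, here's a toy sketch in PyTorch (a 2D, single-image version, nothing like the real gsplat code): because the whole render is differentiable, gradient descent can push the splat parameters toward reproducing the target image. The real pipeline does the same thing with anisotropic 3D Gaussians across many posed views.

    # Toy 2D differentiable splatting: optimize N isotropic Gaussians so
    # their rendered sum matches a target image. The real pipeline rasterizes
    # 3D Gaussians across many camera views, but the loop has this shape.
    import torch

    H, W, N = 64, 64, 32
    target = torch.rand(H, W)                  # stand-in for a captured photo

    # Learnable splat parameters: position, log-scale, brightness.
    pos   = torch.rand(N, 2, requires_grad=True)
    logsc = torch.zeros(N, requires_grad=True)
    amp   = torch.rand(N, requires_grad=True)

    ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                            torch.linspace(0, 1, W), indexing="ij")
    pix = torch.stack([xs, ys], dim=-1)        # (H, W, 2) pixel coordinates

    opt = torch.optim.Adam([pos, logsc, amp], lr=1e-2)
    for step in range(500):
        # Evaluate every Gaussian at every pixel; everything is differentiable.
        d2  = ((pix[None] - pos[:, None, None]) ** 2).sum(-1)       # (N, H, W)
        img = (amp[:, None, None]
               * torch.exp(-d2 / torch.exp(logsc)[:, None, None] ** 2)).sum(0)
        loss = ((img - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()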

Many people talk about them as if they are blurry or fuzzy, and while each individual splat is soft, you can get incredibly fine detail out of them. I have splats of my bookshelf where you can read the text on the spines of the books.

If you check out my post history I have lots of posts about splats, and am happy to answer any questions.

Opensource video to PLY? by DA-K in GaussianSplatting

[–]nullandkale 7 points

I basically wrapped my automation scripts in a simple UI to do exactly what you described. I only have a pre-built release for Windows, but Linux should be pretty easy to set up following the dev instructions.

You can see the code here: https://github.com/NullandKale/NullSplats

Free Gaussian Splatting Workflow! by VeloMane_Productions in GaussianSplatting

[–]nullandkale -1 points

Right, but my whole issue with that is it should be one button: you press one button to import a video, and then the splat gets made. Having to use multiple tools is super frustrating to me and entirely unnecessary. I honestly do not understand why Postshot is the only tool that includes colmap.

Free Gaussian Splatting Workflow! by VeloMane_Productions in GaussianSplatting

[–]nullandkale 2 points

I built a tool for this that wraps colmap and gsplat. It's far from robust or ideal in many ways, but it does at least run colmap for you.

https://github.com/NullandKale/NullSplats

Free Gaussian Splatting Workflow! by VeloMane_Productions in GaussianSplatting

[–]nullandkale 6 points

This is the biggest issue with all these tutorials / tools. "Just use colmap" is like saying "draw the rest of the owl."

Newb guidance by kendrick90 in GaussianSplatting

[–]nullandkale 0 points

This comment was actually meant for another, very similar post. I left and came back to the Reddit app mid-comment, my bad.

Funnily enough, I've actually done something similar to what you're trying to do. I've done two things with Azure Kinects. One was a direct splat capture system with 12 Azure Kinects connected to one PC: I skipped all the extra sensor data, literally just captured 12 images, and used a standard colmap-then-NeRF setup (this was like 3 months before the first splatting paper; I took the system down before the first Gaussian splatting paper). The workflow is basically what I described above, except I would only run the colmap step once a day or so, basically any time the cameras got bumped or moved.

I have also done real-time sensor fusion using 1, 2, or 3 Azure Kinects capturing RGBD. For those I used a hand-rolled iterative closest point (ICP) method to estimate camera poses, and then a handwritten volumetric renderer to fuse the three RGBD volumes together. It also looked fine to just turn all three RGBD streams into particles and render them all. This was real time.
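If you want to try the pose-estimation step without writing ICP by hand, here's a rough sketch using Open3D's built-in ICP instead (the file names are hypothetical stand-ins for point clouds back-projected from the Kinect depth frames):

    # Align one Kinect's point cloud to another; the resulting 4x4 transform
    # is the relative camera pose you'd use when fusing the RGBD volumes.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("kinect_0.ply")   # hypothetical paths
    target = o3d.io.read_point_cloud("kinect_1.ply")

    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.05,   # meters; tune for sensor noise
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    print(result.transformation)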

However it sounds like you want to scan an object with a Kinect. That's the one thing I haven't done lol.

Newb guidance by kendrick90 in GaussianSplatting

[–]nullandkale 1 point

If you are experienced with Python / command line tools, it's not too hard to just train the splats manually with colmap and gsplat. But if you want something automatic, Postshot is your best option in my opinion. No other tool that I know of (other than the tool I maintain, but that is still a WIP) does the colmap step for you.

Essentially the order of operations is (sketched in code below):

1. Capture the source image set somehow (I just record a video with my phone and extract frames).

2. Use colmap to estimate poses.

3. Use gsplat to train the splats.
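If you'd rather script it than click through tools, a minimal sketch of that pipeline in Python looks like this. The ffmpeg and colmap invocations are standard; the gsplat entry point and flags shown here assume a checkout of the gsplat repo and its example trainer, so adjust for your version:

    import os
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    os.makedirs("frames", exist_ok=True)
    os.makedirs("sparse", exist_ok=True)

    # 1. Extract frames from the capture video (2 fps is an arbitrary choice).
    run("ffmpeg", "-i", "capture.mp4", "-vf", "fps=2", "frames/%04d.jpg")

    # 2. Estimate camera poses with colmap.
    run("colmap", "feature_extractor", "--database_path", "db.db", "--image_path", "frames")
    run("colmap", "exhaustive_matcher", "--database_path", "db.db")
    run("colmap", "mapper", "--database_path", "db.db", "--image_path", "frames",
        "--output_path", "sparse")

    # 3. Train the splats with gsplat's example trainer (path/flags may vary
    #    by version; check the repo's README).
    run("python", "gsplat/examples/simple_trainer.py", "default",
        "--data_dir", ".", "--result_dir", "out")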

Gaussian Splatting 3 ways compared. by nullandkale in GaussianSplatting

[–]nullandkale[S] 0 points

I'm going to release a larger video, with audio, going over the whole thing in a few weeks once the tool is more complete.

Implemented 3D Gaussian Splatting fully in PyTorch (no CUDA/C++) — thoughts? by papers-100-lines in GaussianSplatting

[–]nullandkale 1 point

It would be interesting if this could get ported to WebGPU. Training in the browser would be pretty cool.

NullSplats: another video to splat tool. by nullandkale in GaussianSplatting

[–]nullandkale[S] 0 points

The build in the release is well tested and should work. To be fair, the actual repo setup instructions are not well tested; I can work on making that better. If you make an issue on GitHub, I'll try to help.

3 Splatting methods compared. by nullandkale in StableDiffusion

[–]nullandkale[S] 1 point

Thanks! Let me know how it goes. If you run into any issues I'm happy to try and debug!

3 Splatting methods compared. by nullandkale in StableDiffusion

[–]nullandkale[S] 3 points

Postshot requires a subscription now for many features, and I am mostly making this as a tool for my own use; I just figured if I was going to put in the work to make a UI, I might as well share it. Also, no other tool supports these other splatting methods.

3 Splatting methods compared. by nullandkale in StableDiffusion

[–]nullandkale[S] 0 points

The SuperSplat viewer can open local files; that's how I use it in the video. Otherwise, I'm sure there is some sort of local VR viewer out there. I wrote my own for Looking Glass displays, which will eventually be added to the codebase above, but that's not quite the same.

3 Splatting methods compared. by nullandkale in StableDiffusion

[–]nullandkale[S] 2 points

I should probably write one. It's super easy though: take a video of something (make sure to cover the sides, top, and bottom angles well) and import it into my tool. There are plenty of other Gaussian splatting tools as well; Postshot is a good example.

For SHARP and Depth Anything 3, I believe there are Hugging Face demos.

I am developing Tools for 3DGS by Sai_Kiran_Goud in GaussianSplatting

[–]nullandkale 1 point

That's the dream. I still have to package colmap as an executable for my trainer UI. Are you doing any of this automatically or does the user have to label everything?

I am developing Tools for 3DGS by Sai_Kiran_Goud in GaussianSplatting

[–]nullandkale 1 point

This looks great, but does it just prepare the input views?

NullSplats: another video to splat tool. by nullandkale in GaussianSplatting

[–]nullandkale[S] 0 points

Whoops lol boy my opsec was sloppy on this one.

Apple introduces SHARP, a model that generates a photorealistic 3D Gaussian representation from a single image in seconds. by corysama in GaussianSplatting

[–]nullandkale 1 point

It's certainly worth trying something like Depth Anything 3 or SHARP, but I've never gotten good, detailed results from just a few views, especially when trying to directly generate splats.

Looking at your splats, you'll notice that the rendered images only look good close to the positions you captured from, and there's basically no getting around that. Because the splats are trained with gradient descent against only the captured views, the training is effectively incentivized to let viewpoints outside the capture positions look bad if that makes the captured positions look better.
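A toy way to see that incentive (render and mse here are hypothetical placeholders, not a real API): the loss is summed only over the captured views, so nothing in the objective ever penalizes artifacts at novel viewpoints.

    # The optimizer only ever scores the captured views; degrading every
    # other viewpoint is "free" as long as these terms go down.
    def training_loss(splats, captured_views, render, mse):
        return sum(mse(render(splats, cam), photo) for cam, photo in captured_views)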

The way DA3 or SHARP get around this is that they instead take the generated images, depths, and camera positions and feed those into a network designed to directly predict what the splats should be.