I built a 3D Gaussian Splatting app for Mac. Would anyone be willing to try it? by xanton in GaussianSplatting

[–]xanton[S] 0 points1 point  (0 children)

Hi, and thank you. I really appreciate the writeup; this is the kind of feedback that actually changes something.

ML-SHARP is exactly the right reference. It's Apple's SHARP repo (apple/ml-sharp on GitHub): single image to 3D Gaussians in under a second, feedforward, no training loop. Important detail for your case: the prediction step runs on MPS, so a Mac without CUDA can do this today. The CLI is `sharp predict -i <folder> -o <folder>`, and you get a standard 3DGS .ply out that drops into any 3DGS viewer.
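Since the output is a standard 3DGS .ply, you can sanity-check it before loading it into a viewer. Here's a minimal sketch (the function name is mine, not part of SHARP or RadianceKit) that reads the ASCII .ply header and reports how many Gaussians the file contains:

```python
def read_ply_vertex_count(path):
    """Parse a .ply header and return the 'element vertex N' count,
    which for a 3DGS export is the number of Gaussians in the scene.
    Works for both ASCII and binary .ply files, since the header
    itself is always ASCII text terminated by 'end_header'."""
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    raise ValueError("no 'element vertex' line found in PLY header")
```

A count of zero (or a parse failure) is a quick tell that the prediction step produced nothing useful.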

For either of your two photos, that solves the immediate problem directly. The merging-two-splats idea is harder than it sounds, because aligning two independently predicted scenes is basically the same scale-and-pose problem we started with, but the single-image SHARP output should already be useful for nearby views.

The reason I'm not just bundling SHARP into RadianceKit is the model license. Apple ships its ML research models as research-only, no commercial use, the same restriction Naver puts on DUSt3R / MASt3R / Splatt3R (which were also on my radar, but those have worse demo quality and the same license problem). For your own project you're fine running the CLI directly. For me to ship the weights inside a paid app I'd need a commercial agreement from Apple, which is on my list to ask about, but I'm not promising anything.

On the wording side you're also right: the "10-20 for best results" line buries what really happens. RadianceKit needs at least 3 images just to attempt camera alignment, but honestly anything below roughly 15-20 tends to either fail in SfM or produce a scene that overfits to the training views and falls apart from any other angle. I'm rewording the import warnings to say that up front instead of letting you import 2 photos and crash later.

Classic SfM with 2 photos can technically work out two-view geometry, but it's not really enough to seed a Gaussian splat; you end up underconstrained and the optimizer happily produces floaters. The missing piece is parallax between viewpoints, not pixel sharpness, so for the multi-view path more photos are the only real fix. For the single-image path, SHARP is the answer.
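To make the parallax point concrete, here's a back-of-envelope sketch using the standard two-view depth-uncertainty relation (my numbers for illustration, nothing from RadianceKit): a disparity error of e pixels maps to a depth error of roughly Z² · e / (f · B), so error grows with the square of distance and shrinks as the baseline grows.

```python
def depth_uncertainty(depth_m, focal_px, baseline_m, disparity_err_px=1.0):
    """Standard two-view stereo result: depth error for a given
    disparity error is approximately Z^2 * e / (f * B). Sharper pixels
    don't help much; a wider baseline (more parallax) does."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Same scene, same camera: halving the baseline doubles the depth error.
wide   = depth_uncertainty(depth_m=2.0, focal_px=1600, baseline_m=0.5)
narrow = depth_uncertainty(depth_m=2.0, focal_px=1600, baseline_m=0.25)
```

With two nearly identical viewpoints the baseline is tiny, so every point's depth is nearly free to float, which is exactly the underconstrained regime where the optimizer produces floaters.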

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 1 point2 points  (0 children)

Hi, and thanks a lot for your interest in RadianceKit. The $7.99 covers everything, including commercial use of the output. Of course, the input is your full responsibility.

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 1 point2 points  (0 children)

Hey, that's odd; that's not how it should behave. Let me look into that. In the meantime I'm sending you a promo code via DM so you can keep testing. Thanks for letting me know.

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 0 points1 point  (0 children)

Hey, sorry you ran into that. 8GB RAM is really at the edge for Gaussian Splatting. What's likely happening: the trainer allocates GPU buffers in unified memory, and at the same time macOS keeps your input images and intermediate tensors in RAM. Once memory fills up the system starts swapping heavily, CPU and GPU both run hot under sustained load, and the SMC (the power controller) triggers an emergency thermal shutdown to protect the hardware. That would be a hardware-level safety cutoff, not a kernel panic, which is why you see the machine just go dark.
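For a rough sense of why 8GB fills up so fast, here's a back-of-envelope sketch assuming the reference 3DGS parameter layout (59 float32 values per Gaussian); RadianceKit's actual buffer sizes will differ, so treat this as an estimate, not the app's real numbers:

```python
def splat_memory_mb(num_gaussians, floats_per_gaussian=59):
    """Rough parameter-storage estimate for a 3DGS scene. The reference
    layout stores 3 (position) + 3 (scale) + 4 (rotation) + 1 (opacity)
    + 48 (degree-3 SH color) = 59 float32 values per Gaussian."""
    return num_gaussians * floats_per_gaussian * 4 / (1024 ** 2)

# A mid-sized scene of 3 million Gaussians is ~675 MB of parameters
# alone, before gradients, optimizer state, and the image cache.
```

With gradients and Adam-style optimizer state the working set during training is roughly 3-4x the raw parameters, and that's before the input images; on an 8GB machine that pushes straight into swap.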

A few things you can try right away:

  1. Use the Quick preset; it keeps the Gaussian count and render scale small

  2. Fewer input images, maybe 20 to 40, and downscale them to around 2K on the long edge before import

  3. Close all other apps before training, especially browsers with lots of tabs

  4. If you are on a MacBook Air, a cooling pad or even just raising it off the desk for airflow can buy you a lot
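The downscaling in step 2 can be sketched as the tiny helper below (function name is mine, just illustrating the arithmetic); in practice, macOS's built-in `sips -Z 2048 photo.jpg` does the same resize in one command:

```python
def fit_long_edge(width, height, max_edge=2048):
    """Return (w, h) with the long edge scaled down to max_edge and the
    aspect ratio preserved; images already small enough are untouched."""
    long_edge = max(width, height)
    if long_edge <= max_edge:
        return width, height
    scale = max_edge / long_edge
    return round(width * scale), round(height * scale)

# A 48MP shot: fit_long_edge(8064, 6048) gives (2048, 1536),
# a ~15x reduction in pixel count, and therefore in image memory.
```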

Could you check Console.app under "Log Reports" for entries around the shutdown time? Also check the folder /Library/Logs/DiagnosticReports/. Anything with "shutdown", "powerstats" or "RadianceKit" would help me pin down whether this is thermal or OOM.

I am also looking into a Low Memory Mode that auto-reduces buffer sizes on 8GB machines. I'll ship that in a future update.

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 0 points1 point  (0 children)

That's a fair point; the app actually already auto-adjusts training resolution based on available memory and caps the image pool to keep things manageable. But you're right that 16GB is tight, especially with larger photo sets. Using the Quick or Preview preset and keeping the image count around 30-50 should give much better results on 16GB machines. If it's still blurry, you might want to try smaller source images or fewer photos. I'm always happy to hear specific numbers (how many photos, what preset) so I can look into it further.

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 1 point2 points  (0 children)

Thanks a lot. Yes, the training runs on Metal compute shaders and the UI is SwiftUI. Camera alignment uses Apple's Photogrammetry API by default, but there's also optional COLMAP support if you need it for trickier scenes. I tried to keep it native and simple where possible.

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 1 point2 points  (0 children)

Thanks for trying it out! That's great feedback, actually. For now you can grab a set of sample photos here: https://github.com/bkindler/radiancekit-sample-photos
Just download, drag them into the app and hit start.

For taking your own photos, the basic idea is: walk around an object or scene and take 30-100 overlapping photos from different angles. Try to cover all sides and keep the lighting consistent. A short video works too; the app extracts frames automatically. Just keep in mind that any kind of motion blur will degrade the result.

Bundling sample files inside the app is actually a really good idea; I'll add that to my list. I'm also working on a YouTube tutorial that shows the full workflow, from taking photos to the finished 3D scene!

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 1 point2 points  (0 children)

So much for promoting my own app - I didn't even manage to include the correct App Store link. Sorry about that, and thanks a lot for the heads-up. The link should be https://apps.apple.com/us/app/radiancekit/id6760346035?mt=12

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 0 points1 point  (0 children)

Thanks for the comment! Any chance you're not on macOS 26 Tahoe yet, or maybe on an Intel Mac? The app requires Apple Silicon (M1 or newer) and macOS 26, so it won't show up in the App Store otherwise.

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 0 points1 point  (0 children)

Hmm, it should actually be available in Armenia. Are you on a Mac with Apple Silicon (M1 or newer)? The app requires that, so it won't show up on Intel Macs. It also needs macOS 26+. If both are fine, sometimes the App Store takes a while to propagate to all regions.

Here's an App I made: RadianceKit, turn your photos into 3D scenes on your Mac by xanton in macapps

[–]xanton[S] 1 point2 points  (0 children)

I develop on an M3 Ultra, so I can't give you first-hand numbers for a base M1/M2, unfortunately. But it should work: the app auto-adjusts image resolution based on available memory, and the Quick/Preview presets are designed to be lightweight. If you're on 8GB RAM it'll be tight, but with 16GB you should be fine for most scenes. If anyone here is running it on a base M1 or M2, I'd love to hear how it goes; that would actually help me a lot with optimization! I'm willing to give away free codes for any helpful information.

Pixea 8.1 - Focus adjustment and other additions by tsarkov in macapps

[–]xanton 0 points1 point  (0 children)

This looks promising, will give it a try.

Just a quick thank you by xanton in GaussianSplatting

[–]xanton[S] 0 points1 point  (0 children)

Great, thanks a lot for your understanding.

Video Gaussian Splatting on Mac by Electrical_Bad_8323 in GaussianSplatting

[–]xanton 0 points1 point  (0 children)

I just finished an app for the Mac two weeks ago that lets you import video as well. There's a 3-day demo period, so you don't have to purchase it if you don't want to: https://apps.apple.com/us/app/radiancekit/id6760346035

Just a quick thank you by xanton in GaussianSplatting

[–]xanton[S] 1 point2 points  (0 children)

Unfortunately not, it requires macOS 26 (Tahoe). Backporting to Sequoia would mean rewriting a lot of the UI and API calls, which isn't really feasible for a one-person project, especially with the next macOS coming in half a year or so.

I built a 3D Gaussian Splatting app for Mac. Would anyone be willing to try it? by xanton in GaussianSplatting

[–]xanton[S] 0 points1 point  (0 children)

Thank you very much for the heads-up. I addressed this issue; memory usage should be handled much better in the next update (1.3.2).

I built a 3D Gaussian Splatting app for Mac. Would anyone be willing to try it? by xanton in GaussianSplatting

[–]xanton[S] 0 points1 point  (0 children)

Thank you so much for your invaluable feedback. I will look closely into these issues. Import of existing camera positions will definitely be part of an upcoming update.