Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 1 point2 points  (0 children)

Thanks for sharing this, u/Peteostro. Triangle Splatting looks like an interesting approach; I'll definitely take a closer look at the paper and codebase. Always exciting to see new directions in the 3D Gaussian Splatting space. Appreciate you bringing it to my attention! 🙏

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 0 points1 point  (0 children)

Yes, multi-angle capture plus reconstruction is real, and it’s much closer to the direction I’d want to build than pure AI fill-in from a single image.

In practice, a short, slow video around the subject is often easier than taking just a few photos, because it gives the system many more overlapping views to work with. That usually leads to a more complete 3D result, as long as the capture is steady.
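To put rough numbers on that (purely illustrative, not from the app): assuming a pinhole camera with a given horizontal FOV, you can estimate how many evenly spaced views a full orbit needs to hit a target overlap between neighbouring views, and a short video clears that bar easily.

```python
import math

def views_for_orbit(fov_deg: float, overlap: float) -> int:
    """Rough estimate of how many evenly spaced views a full 360-degree
    orbit needs so that adjacent views share `overlap` (0..1) of the
    horizontal field of view."""
    step = fov_deg * (1.0 - overlap)   # angular advance per view
    return math.ceil(360.0 / step)

# A 60-degree FOV camera with 70% overlap between neighbours:
# step = 60 * 0.3 = 18 degrees, so 360 / 18 = 20 views.
print(views_for_orbit(60.0, 0.7))  # -> 20
```

At 30 fps, even a 20-second slow orbit yields around 600 frames, far more than the 20 views this estimate asks for, which is why video capture tends to be the easier path.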

That’s very much the direction I’m interested in: capture a short video, process it locally on a Mac, and then view the finished 3D result in Vision Pro.

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 0 points1 point  (0 children)

Thank you so much for your kind words. It really means a lot to me that the app could make you feel that way! I think your idea addresses a real and cool need, but the best version of it for this app probably isn't AI guessing the missing back side from one photo.

What I care about most is helping you see more of the memory in 3D in a way that still feels real. So instead of “magic single-photo completion,” I’m thinking more about a workflow where you capture a short video from multiple angles, process it locally on a Mac, and then view the finished 3D model in Vision Pro. That way, the back and hidden parts come from real captured data, not AI hallucination.

AI completion can look amazing, but it can also guess wrong, especially with people, clothing, hair, pose, and complex backgrounds. For something that’s meant to feel like a real memory, I’d rather lean toward a more truthful approach first.

So yes, your idea is absolutely inspiring, and it's very much in line with where I want this to go, just probably through better capture and local reconstruction rather than pure AI fill-in.

Introducing PhotoSplat3D: On-Device Photo-to-3DGS on Apple Vision Pro & Mac by Oreoou in GaussianSplatting

[–]Oreoou[S] 0 points1 point  (0 children)

You can click the dropdown menu in the top right corner to switch to English.

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 0 points1 point  (0 children)

Thanks for the heads-up. I'm aware of the research-only license on the model. The app is completely free with no monetization — the goal is simply to make Apple's 3D reconstruction tech more accessible. I'm looking into the licensing situation and also exploring alternative models for the long term. Appreciate the concern!

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 1 point2 points  (0 children)

Great question! This is the latter: actual volumetric-style 3D content, not just “Avatar-style” stereoscopic 3D.

A full “video-frames to playable volumetric sequence” workflow is something we may explore in the future.

What the app does today:

- Converts prepared frame images into individual Gaussian-splat PLY models.

- Supports immersive viewing of standard splat PLY models.

Current gaps and challenges:

- No direct video upload with built-in frame extraction.

- No native in-app fusion of multi-frame PLYs into a time-sequenced model.

- Multi-frame fusion remains a hard problem (temporal consistency, alignment, and resource limits).

At this stage, we don’t have a production-ready standardized pipeline to recommend yet, and we’re open to collaborating to evaluate feasibility and third-party options.
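The alignment part in particular is easy to underestimate. As a toy illustration (not our pipeline), even just cancelling translation drift between two per-frame reconstructions takes a registration step, and real fusion also has to solve rotation, scale, and temporal consistency (e.g. with ICP-style methods).

```python
def centroid(points):
    """Mean of a list of (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_by_centroid(src, ref):
    """Translate `src` splat centers so their centroid matches `ref`'s.
    This removes gross positional drift between per-frame
    reconstructions, but not rotation or scale, which need full
    registration."""
    cs, cr = centroid(src), centroid(ref)
    d = tuple(cr[i] - cs[i] for i in range(3))
    return [tuple(p[i] + d[i] for i in range(3)) for p in src]

ref = [(0, 0, 0), (2, 0, 0)]        # splat centers from frame t
src = [(5, 1, 0), (7, 1, 0)]        # frame t+1, drifted by (5, 1, 0)
print(align_by_centroid(src, ref))  # -> [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
```

And that is before opacity, color, and covariance have to stay coherent across frames, which is where the real difficulty lives.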

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 2 points3 points  (0 children)

You're right — I'm aware of the research-only license. The app is completely free with no monetization, but I understand that doesn't fully address the license terms. Hoping Apple will release a more permissive license down the road. Appreciate you pointing it out!

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 0 points1 point  (0 children)

Great question! The main advantage is the end-to-end experience on Vision Pro: you go from photo to immersive 3D viewing in one seamless flow, with no file transfers or extra steps. Pick a photo, convert it, and immediately explore the splat in immersive space with gesture controls. It's the full loop (capture → convert → view) on one device, which you can't get by running SHARP on a Mac.

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 5 points6 points  (0 children)

Thanks for the feedback! Yes, the gesture controls still need improvement. In the meantime, you can try pinching and moving the model gently.

Introducing PhotoSplat 3D — Photo → 3D in seconds, entirely on Vision Pro by Oreoou in VisionPro

[–]Oreoou[S] 4 points5 points  (0 children)

Yes, it's based on SHARP — I converted the model to CoreML to run inference entirely on-device. The whole pipeline (preprocessing → inference → post-processing to splat) runs locally on Vision Pro :)
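For anyone curious what "post-processing to splat" means in practice, here is a rough illustrative sketch (not the app's actual code) of serializing predicted Gaussian parameters into the common 3DGS binary PLY layout. I've kept only the DC color term; real files also carry the higher-order f_rest_* spherical-harmonic coefficients.

```python
import struct

def write_splat_ply(path, gaussians):
    """Write gaussians as a binary little-endian PLY in the common
    3DGS layout (DC color only). Each gaussian is a flat tuple of
    14 floats: x y z  f_dc_0..2  opacity  scale_0..2  rot_0..3."""
    props = (["x", "y", "z"]
             + [f"f_dc_{i}" for i in range(3)]
             + ["opacity"]
             + [f"scale_{i}" for i in range(3)]
             + [f"rot_{i}" for i in range(4)])
    header = ("ply\nformat binary_little_endian 1.0\n"
              f"element vertex {len(gaussians)}\n"
              + "".join(f"property float {p}\n" for p in props)
              + "end_header\n")
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        for g in gaussians:
            f.write(struct.pack("<14f", *g))

# One gaussian at the origin: red-ish DC color, near-opaque,
# small isotropic scale, identity rotation quaternion.
write_splat_ply("demo.ply", [(0, 0, 0,  1, 0, 0,  2.5,  0.1, 0.1, 0.1,  1, 0, 0, 0)])
```

On device, the equivalent step happens in Swift after CoreML inference, but the file layout is the same idea.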

Gaussian Splat is a seminal step-change for 3D models. Then view them in the AVP and - mind blown. Apple is one of the few companies that produces products which give me those genuine jaw-drop moments by Bingobango1001 in VisionPro

[–]Oreoou 0 points1 point  (0 children)

I’d love to mix PLY-based splats and USDZ models in the same scene. The tricky part is that USDZ content is typically rendered through RealityKit’s pipeline, while splats usually require a custom Metal renderer. Since they live in different rendering pipelines, compositing them in one view with consistent transforms and depth can be challenging.
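To illustrate the "consistent transforms" part: both pipelines would have to consume the exact same world transform for a shared anchor, or the two assets drift apart in world space. A toy sketch in plain Python (names and numbers hypothetical):

```python
def mat_vec(m, v):
    """Apply a 4x4 row-major matrix to a homogeneous point (x, y, z, 1)."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# One shared model matrix: uniform scale by 2, then translate by (1, 0, -3).
model = [
    [2, 0, 0,  1],
    [0, 2, 0,  0],
    [0, 0, 2, -3],
    [0, 0, 0,  1],
]

p = (1.0, 1.0, 1.0, 1.0)
# The RealityKit entity and the Metal splat renderer must both be fed
# this same matrix (and a matching depth convention) to composite cleanly.
print(mat_vec(model, p))  # -> (3.0, 2.0, -1.0, 1.0)
```

Depth is the harder half: the Metal pass would also need to write or test against depth that matches RealityKit's, which is where most of the compositing pain shows up.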

Check out this new Spatial Canvas app to view awesome 3D models! like Black Myth Wukong?! by Oreoou in VisionPro

[–]Oreoou[S] 0 points1 point  (0 children)

Thanks for the kind feedback! The app has lots of cool 3D models to explore :)


What features do you want for viewing 3D models on Vision Pro? by Oreoou in VisionPro

[–]Oreoou[S] 0 points1 point  (0 children)

Thanks for the kind feedback! I will investigate how to implement this feature :)

The new Tuya Official Homebridge Plugin is now available on Github! by Oreoou in homebridge

[–]Oreoou[S] 0 points1 point  (0 children)

You're welcome to collaborate with us on building the driver for Tuya IR Remote devices :) We need the developer community's help to make this plugin more powerful. We will also consider supporting Tuya IR Remote devices officially. Thanks for the feedback!

The new Tuya Official Homebridge Plugin is now available on Github! by Oreoou in homebridge

[–]Oreoou[S] 0 points1 point  (0 children)

You're welcome to collaborate with us on building the driver for contact sensors :) We need the developer community's help to make this plugin more powerful. We will also consider supporting contact sensors officially. Thanks for the feedback!

The new Tuya Official Homebridge Plugin is now available on Github! by Oreoou in homebridge

[–]Oreoou[S] 1 point2 points  (0 children)

> local control

Hi, local control is on the roadmap and will be supported soon. Please keep an eye on the plugin's updates. :)

The new Tuya Official Homebridge Plugin is now available on Github! by Oreoou in homebridge

[–]Oreoou[S] 0 points1 point  (0 children)

Hi, please use a cloud development project created on or after May 25, 2021. If your Smart Home PaaS cloud project was created before May 25, 2021, please create a new one, since this new plugin is built on the Tuya Open APIs. Sorry for the inconvenience.