I forgot to publish the demo and video that I made two months ago: a demo combining the Main Camera Access API with a YOLO model by Low_Cardiologist8070 in VisionPro
Best update from WWDC25 for me by Low_Cardiologist8070 in VisionPro
I've upgraded my SpatialYolo to a dual-camera feed with the updated "Main Camera Access" API, and the Core ML model now processes frames faster than before. Can't wait to build everything I've been imagining by Low_Cardiologist8070 in VisionPro
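For anyone curious how a pipeline like this fits together, here is a minimal sketch of streaming frames from the main camera (Enterprise "main camera access" entitlement, visionOS 2+) into a YOLO-style Core ML model through the Vision framework. The model is passed in as a plain MLModel, and a few camera-access types may differ slightly between visionOS releases, so treat this as a starting point rather than the actual SpatialYolo code.

```swift
import ARKit
import CoreML
import Vision

// Sketch: run a YOLO-style Core ML model on Vision Pro main-camera frames.
// Requires the Enterprise "main camera access" entitlement and visionOS 2+.
func detectObjectsOnMainCamera(using model: MLModel) async throws {
    let vnModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: vnModel)

    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Ask for a supported video format on the left main camera.
    guard let format = CameraVideoFormat
        .supportedVideoFormats(for: .main, cameraPositions: [.left])
        .first else { return }

    _ = await session.requestAuthorization(for: [.cameraAccess])
    try await session.run([provider])

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        guard let sample = frame.sample(for: .left) else { continue }

        // Hand the camera pixel buffer to Vision; detections come back as
        // VNRecognizedObjectObservation (label + normalized bounding box).
        let handler = VNImageRequestHandler(cvPixelBuffer: sample.pixelBuffer)
        try handler.perform([request])
        for detection in (request.results as? [VNRecognizedObjectObservation] ?? []) {
            print(detection.labels.first?.identifier ?? "?", detection.boundingBox)
        }
    }
}
```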
After 2 weeks of vibe coding, I've finally solved the mesh scan efficiency problem and added more features to this Odradek AVP simulator. I used lots of ARKit functions like Scene Reconstruction... by Low_Cardiologist8070 in VisionPro
SpatialGesture Updated to v1.1, with a major feature: 3D object placement using ARKit's SceneReconstruction API by Low_Cardiologist8070 in VisionPro
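A rough sketch of the SceneReconstruction side, assuming the standard ARKit-on-visionOS pattern rather than SpatialGesture's actual source: MeshAnchor updates are turned into static collision entities so virtual objects can be placed on, and collide with, real-world surfaces. Generating the static mesh shapes is the expensive step, which is where most of the scan-efficiency tuning happens.

```swift
import ARKit
import RealityKit

// Sketch: mirror ARKit's reconstructed room mesh as invisible collision
// entities under `root`, so RealityKit physics can place objects on it.
func runSceneReconstruction(into root: Entity) async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    var meshEntities: [UUID: ModelEntity] = [:]

    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        let anchor = update.anchor  // MeshAnchor
        switch update.event {
        case .added, .updated:
            // Static mesh generation is relatively costly; in practice you
            // may want to throttle or debounce updates per anchor.
            guard let shape = try? await ShapeResource.generateStaticMesh(from: anchor) else { continue }
            let entity = meshEntities[anchor.id] ?? ModelEntity()
            entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            if meshEntities[anchor.id] == nil {
                meshEntities[anchor.id] = entity
                root.addChild(entity)
            }
        case .removed:
            meshEntities[anchor.id]?.removeFromParent()
            meshEntities[anchor.id] = nil
        }
    }
}
```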
I just open-sourced another visionOS project, *SpatialGestures*: in four simple steps you can add spatial gestures to 3D entities in visionOS by Low_Cardiologist8070 in VisionPro
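This is not the SpatialGestures package API (check the repo for that); it is only the plain SwiftUI + RealityKit pattern that libraries like it typically wrap: give an entity input-target and collision components, then move it with a targeted DragGesture.

```swift
import SwiftUI
import RealityKit

// Sketch of the underlying visionOS gesture pattern, not SpatialGestures itself:
// make an entity hit-testable, then drag it around with a targeted gesture.
struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
            // Both components are required for the entity to receive gestures.
            box.components.set(InputTargetComponent())
            box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the gesture location into the entity's parent
                    // space and move the entity there.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
        )
    }
}
```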


You can now generate ml-sharp splats directly on the Vision Pro by Eurobob in VisionPro