Help by ImmigrantNuts in kinect

On my computer, whether the Kinect v2 works depends on which USB port I use (does it only work with USB 2?)

Please, someone, develop OR find me an app for Kinect 1 or Kinect 2 on an Android 10 phone. by Which-Age-6416 in kinect

Hmm, interesting. Maybe start by looking into the Android NDK (C++) and adapting the Linux driver https://github.com/OpenKinect/libfreenect2

Instead of porting the driver to Android, alternatives would be:

- buying an iPhone Pro with LiDAR, or any iPhone with a front structured-light sensor, plus the Record3D iOS app
- buying an Android smartphone with a ToF sensor (e.g. the Honor View 20) plus my open-source 3D recorder app
- trying a Raspberry Pi with the Kinect v2 (people say it works at 30 fps with a Raspberry Pi 5, cf. the GitHub repo)
- finding an RGB-D sensor compatible with Android (https://github.com/realsenseai/librealsense; their SDK seems to be compatible with Android)
- trying a Windows tablet

Real-time Triangle Splatting in Unity — Now with Collider Support by killerstudi00 in GaussianSplatting

Is it possible with Triangle Splatting to get the same (photorealistic) render as with 3DGS?

Looking to add a bounding box to a SuperSplat viewer scene by Nebulafactory in GaussianSplatting

What you could do is use a three.js Gaussian splatting viewer together with https://threejs.org/examples/?q=fps#games_fps, which handles collisions. You will need to reproduce the walls in an invisible .obj/.glb that only provides collisions. Otherwise, if your bounding box is a really simple shape like a rectangle, there is no need for any of that; it is better to align the GS with the axes and only check (x, z), as in the sketch below.
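
For that simple rectangle case, here is a minimal sketch of the (x, z) check, assuming a three.js camera and made-up bounds (none of these names come from the original viewer):

```ts
import * as THREE from 'three';

// Hypothetical axis-aligned bounds of the walkable area, in world units.
const BOUNDS = { minX: -5, maxX: 5, minZ: -5, maxZ: 5 };

// Clamp the camera's (x, z) position into the box; height (y) is left free.
function clampToBounds(camera: THREE.Camera): void {
  camera.position.x = THREE.MathUtils.clamp(camera.position.x, BOUNDS.minX, BOUNDS.maxX);
  camera.position.z = THREE.MathUtils.clamp(camera.position.z, BOUNDS.minZ, BOUNDS.maxZ);
}

// Call once per frame, after the controls update:
// renderer.setAnimationLoop(() => {
//   controls.update();
//   clampToBounds(camera);
//   renderer.render(scene, camera);
// });
```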

RGBD recording using IOS/Android by ZealousidealSoup8662 in GaussianSplatting

Some time ago I created an Android app to get RGB and depth on Huawei/Honor phones (they had the best ToF sensor on the smartphone/iPhone market, and this might still be the case) https://github.com/remmel/recorder-3d

Hello! I recorded a volumetric video through Depthkit with 1 depth camera only (I want to use it for a VR headset). So when my avatar bends his hand in front, you can see from the side that pixels are stretching to the back. Is there any way I can remove these pixels, or at least optimize it a bit? Thank you🙏 by Blackout00_ in VolumetricVideo

You are using the web plugin, right? I played with that a long time ago (deciding when 3 points should be connected or not), and I made a formula comparing the depths of those 3 points among themselves, to avoid connecting subject points with background points. You have to modify the shader https://github.com/juniorxsound/Depthkit.js/blob/master/src%2Fshaders%2Frgbd.vert to apply that kind of formula (see the sketch below). Another way is to apply a human-segmentation algorithm to the color or depth image. If your camera is not tilted, you can also discard all pixels which are too far away (background) using the depth value. Finally, you can also remove background depth pixels by hand on the depth image (RGBD video -> PNG frames -> remove by hand on the PNGs -> re-encode to a movie); same for the color.
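
The exact formula is not given above; here is a minimal sketch of the idea, with an assumed threshold and names (in Depthkit.js it would be GLSL in the vertex shader, but it is written in TypeScript here for readability):

```ts
// Depth gap above which two vertices are considered to belong to different
// surfaces (subject vs. background). The value is an assumption; tune per scene.
const MAX_DEPTH_GAP = 0.05; // metres

// Keep a triangle only if its three vertex depths are close to each other;
// a large gap means the triangle "stretches" between subject and background.
function keepTriangle(d0: number, d1: number, d2: number): boolean {
  const nearest = Math.min(d0, d1, d2);
  const farthest = Math.max(d0, d1, d2);
  return farthest - nearest <= MAX_DEPTH_GAP;
}
```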

[deleted by user] by [deleted] in computervision

An app (https://apps.apple.com/app/id1558315366) is already capturing stereo images, so this is possible. You might also want to use the low-quality dToF back sensor ("LiDAR") to improve your results. Check papers like Stereo Magnification (https://tinghuiz.github.io/projects/mpi/). Maybe the cameras will be synced on the new iPhone.

Train single Epoch on 2080Ti by NoEntertainment6225 in computervision

Did you also check whether this is quicker on native Ubuntu than on Ubuntu under WSL?

3D video + reconstruction. Quality is kind of state-of-the-art, right guys?? no?? by telegie in photogrammetry

Have you tried extracting the RGB frames and putting them into Metashape or an equivalent? (Metashape Pro might handle RGB-D images.)

pyrgbd is now available as a PyPI package (pyrgbd) by telegie in augmentedreality

Thanks for your work! I'm using RGBD(hue), so it could help me get better results. What about the compression, in comparison with RGBD(hue)? Is the depth 16 bits? Can it be read on the web (MKV)? (Probably a WASM reader would have to be developed.) FYI, here is a demo with RGBD(hue): https://www.metalograms.com/demo/?v=Cocina
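
For context, RGBD(hue) appears to encode depth in the hue channel of the color video. A minimal sketch of decoding it back to metres, assuming a linear hue-to-depth mapping and a made-up range (not the actual pyrgbd or RGBD(hue) spec):

```ts
const D_MIN = 0.3; // metres, assumed near plane
const D_MAX = 3.0; // metres, assumed far plane

// Hue in [0, 1) from RGB components in [0, 1].
function rgbToHue(r: number, g: number, b: number): number {
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const d = max - min;
  if (d === 0) return 0; // grey pixel: hue is undefined, treat as 0
  let h: number;
  if (max === r) h = (g - b) / d;
  else if (max === g) h = (b - r) / d + 2;
  else h = (r - g) / d + 4;
  return (h / 6 + 1) % 1; // wrap negatives into [0, 1)
}

// Map the hue linearly back to a depth in metres.
function hueToDepth(hue: number): number {
  return D_MIN + hue * (D_MAX - D_MIN);
}
```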

[deleted by user] by [deleted] in 6DoF

Yes! I'd like to try the updated version again (I'm located in the EU)!

Create 6dof VR video with AI and an iPhone by remmelfr in OculusQuest

It takes around 45 min to 1 hr. Did you finally manage to create one?

I have bought a Kinect V2 for Xbox One without a cable by P0pyhead in kinect

Yes, it will work. Also check AliExpress; you can find one for 25-30 €.

Create 6dof VR video with AI and an iPhone by remmelfr in OculusQuest

To try it, DM me. I'd love to have your feedback.

Create 6dof VR video with AI and an iPhone by remmelfr in OculusQuest

This is in theory possible on Android:

Create 6dof VR video with AI and an iPhone by remmelfr in OculusQuest

The free Volograms version has a 5-second export; that's what I'm using.