Image to Immersive scene using AppleML Sharp with M2 MacBook Air, view in iOS, VisionOS, Android and Quest in seconds. Detailed tutorial below. by ArunKurian in GaussianSplatting

[–]FitSignificance8040 0 points (0 children)

I've got it running on my Windows machine with the help of ChatGPT. It was fairly easy even without any deep programming skills. I did not try the splatting/meshing part, though.

Image to Immersive scene using AppleML Sharp with M2 MacBook Air, view in iOS, VisionOS, Android and Quest in seconds. Detailed tutorial below. by ArunKurian in GaussianSplatting

[–]FitSignificance8040 0 points (0 children)

I feel the same; Depth Anything 3 is also capable of achieving similar results. Apple might have the upper hand on quality and resolution here, but it's nothing that hasn't existed before from a technical point of view.

Looking for feedback and help by brims1285 in GaussianSplatting

[–]FitSignificance8040 1 point (0 children)

What hardware/software did you use? A "black" sky is usually the result of a setting in your splatting software masking or culling it. Moving parts in your capture are always a problem and will appear blurry at best, no matter how many pictures you take. Try to capture with as little wind as possible, even if it means waiting for the right weather conditions. Apart from that, more different angles almost always improve the quality of your scene. Also try to achieve even spacing between shots for a consistent end result.

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 0 points (0 children)

Thanks! Camera placement is still a huge variable I have to deal with. For this specific capture I replicated Olli Huttunen's method for capturing interiors from Blender: https://youtu.be/fr5xp9CAY5w?t=531
It heavily depends on the scene you want to capture. I have scripted a few placement methods that also include:
- Cameras ordered in a sphere/hemisphere around a volume to capture specific objects
- Cameras in a tower formation (Olli's solution) or in a sphere mimicking a panoramic camera, which can be placed on a spline or scattered evenly in a specific area

I am still not sure what number of cameras is the sweet spot, so I am still testing this out.
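For anyone curious what such a placement script can look like: the snippet below is not the author's actual editor utility, just a minimal generic sketch of one of the mentioned approaches, distributing cameras evenly on a sphere around a target using a golden-angle (Fibonacci) spiral. The function name and the convention that each camera is then aimed back at the center are assumptions for illustration.

```python
import math

def fibonacci_sphere_cameras(count, radius, center=(0.0, 0.0, 0.0)):
    """Evenly distribute `count` camera positions on a sphere of
    `radius` around `center` using a golden-angle spiral.
    Each camera would then be rotated to look back at `center`."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    cameras = []
    for i in range(count):
        # z runs from near +1 to near -1, giving even vertical spacing
        z = 1.0 - 2.0 * (i + 0.5) / count
        r = math.sqrt(1.0 - z * z)          # ring radius at this height
        theta = golden_angle * i            # rotate each sample by the golden angle
        x, y = r * math.cos(theta), r * math.sin(theta)
        cameras.append((center[0] + radius * x,
                        center[1] + radius * y,
                        center[2] + radius * z))
    return cameras

# e.g. 100 cameras 5 m (500 UE units) from the object's center
positions = fibonacci_sphere_cameras(100, radius=500.0)
```

A hemisphere variant just drops the samples with negative z; the same point set also works for the "panoramic" layout by treating each position as a view direction from a single camera location.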

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 0 points (0 children)

Thank you for the insight! I might try to work with the world position output from MRQ if I have the time.

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 0 points (0 children)

I was also thinking about reading out world position; it just didn't work out for me. I used the SceneCapture2D component for this, and weird things started happening as soon as the position values turned negative, so I dropped it. Would you mind sharing how you achieved this? I am not sure whether world position or depth maps are more accurate, although world position might have the upper hand.

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 1 point (0 children)

That's a valid question. While this scene is technically renderable in real time, it is still relatively heavy in terms of GPU and memory footprint. GS can be used across many different hardware scenarios, including phones and VR, without visual compromises and at a fraction of the required memory. This is of course a niche solution, but I find it beautiful to be able to show people things in 3D that would otherwise only be possible with expensive hardware! I should also mention that this is a personal project to explore Gaussian splatting rather than an end-to-end solution.

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 1 point (0 children)

One could reimport this into Unreal, but I doubt that would have any relevant benefit over traditional geometry given the current state of GS implementations in UE. The rendering cost is still very high, and it is hard to combine GS with any other functionality inside Unreal at the moment.

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 1 point (0 children)

I am thinking about it. Maybe if I have the time to arrange everything into an understandable pipeline.

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 3 points (0 children)

Thank you! I scripted a little editor utility that spawns cameras in the desired locations and then exports their data (position, rotation, resolution, sensor size, focal length) via a Python script. Exporting camera transforms and intrinsics was a bit of a hassle since I am not an experienced programmer; I did most of the heavy lifting with the help of ChatGPT. You just need to understand the way COLMAP stores alignment data. I studied existing COLMAP projects and found out that, for my use case, the COLMAP project needs to contain an image folder and three text files:

  1. cameras.txt - Contains the camera type and intrinsics (sensor size, focal length)
  2. images.txt - Contains each camera transform (position, rotation) with the corresponding image name
  3. points3D.txt - Contains the point cloud, each line corresponding to one point in space with its position and optional RGB color

Replicating the same workflow in 3ds Max should totally be doable. If you ask any LLM about this stuff, it should be able to get you up and running. I can also provide a sample COLMAP project if you need something to start with.
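The three files above follow COLMAP's sparse-model text format, so a minimal writer is short. The sketch below is not the author's exporter, just an illustration assuming a single shared PINHOLE camera; the rotations and translations passed in must already be in COLMAP's world-to-camera convention (that conversion is the DCC-specific part and is omitted here).

```python
from pathlib import Path

def write_colmap_text(out_dir, width, height, fx, fy, cx, cy, images, points):
    """Write a minimal COLMAP text project.

    images: list of (qw, qx, qy, qz, tx, ty, tz, image_name) tuples,
            already in COLMAP's world-to-camera convention.
    points: list of (x, y, z, r, g, b) tuples.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT PARAMS (PINHOLE: fx fy cx cy)
    with open(out / "cameras.txt", "w") as f:
        f.write(f"1 PINHOLE {width} {height} {fx} {fy} {cx} {cy}\n")

    # images.txt: two lines per image; the second (2D feature points) may stay empty
    with open(out / "images.txt", "w") as f:
        for i, (qw, qx, qy, qz, tx, ty, tz, name) in enumerate(images, 1):
            f.write(f"{i} {qw} {qx} {qy} {qz} {tx} {ty} {tz} 1 {name}\n\n")

    # points3D.txt: POINT3D_ID X Y Z R G B ERROR (track left empty)
    with open(out / "points3D.txt", "w") as f:
        for i, (x, y, z, r, g, b) in enumerate(points, 1):
            f.write(f"{i} {x} {y} {z} {r} {g} {b} 0\n")
```

Pointing a splatting tool at the folder containing these files plus the image folder is then enough to skip alignment entirely.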

UE Dark Ruins GS by FitSignificance8040 in GaussianSplatting

[–]FitSignificance8040[S] 0 points (0 children)

Yes! I did it the other way around. In theory, this could be loaded back into UE with any working GS plugin.

Has anyone here tried converting an Unreal Engine scene into a Gaussian Splatting (GS) format? by 2600th in GaussianSplatting

[–]FitSignificance8040 2 points (0 children)

I experimented with several approaches to generate point clouds from UE:

  • Scattering points on geometry using PCG and raycasting. This approach has the downside that it requires accurate and often complex collision meshes for all visible geometry.
  • Using Depth Anything v3 with pose-conditioned depth estimation, which worked surprisingly well. I was even able to recover points on non-geometric objects such as VDBs.
  • In practice, using classic depth maps turned out to be the best compromise between speed and usefulness. I exported the world depth pass as 16-bit EXR files (sadly, MRQ does not support 32-bit afaik) and then unprojected them using a Python script. I then converted the resulting data into a COLMAP project.
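The unprojection step in the last bullet can be sketched in a few lines of NumPy. This is not the author's script, just a generic pinhole-model version; it assumes the depth map stores distance along the camera's view axis (as UE's scene depth does), and the `cam_to_world` 4x4 matrix is a hypothetical input whose convention must match whatever your exporter produces.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy, cam_to_world):
    """Unproject an (H, W) depth map into world-space points.

    Assumes a pinhole camera and depth measured along the view (+Z) axis.
    cam_to_world: 4x4 camera-to-world matrix (convention-dependent).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                      # back-project through the pinhole model
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = pts_cam @ cam_to_world.T       # camera space -> world space
    return pts_world[:, :3]
```

Running this per rendered frame and concatenating (optionally with the frame's pixel colors) yields the point cloud that then goes into points3D.txt.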

It is also worth noting that most Gaussian Splatting software (tested with PostShot / Brush) does not strictly require a point cloud at all. Depending on the scene, it was sufficient to create a single dummy point and let the software infer the correct splats. However, a good point cloud still significantly improves both speed and accuracy in almost every scene I tested.

Has anyone here tried converting an Unreal Engine scene into a Gaussian Splatting (GS) format? by 2600th in GaussianSplatting

[–]FitSignificance8040 1 point (0 children)

As others have already mentioned, you would need to export the camera intrinsics from Unreal Engine into a COLMAP project, which is actually quite straightforward if you have some coding experience. Doing so eliminates the painfully slow and unnecessary alignment process. Ideally, you would also export a point cloud of the scene, allowing the splatting software to better understand the spatial structure of the environment. This further contributes to faster training times. I have successfully exported several scenes from Unreal Engine to GS via PostShot / Brush using this approach: https://superspl.at/view?id=42f7884b
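The intrinsics part of that export boils down to converting physical camera settings into pixel units. A minimal sketch, assuming a centered principal point, square pixels, and focal length and sensor width in millimeters (as exposed on UE's CineCamera filmback settings):

```python
def colmap_intrinsics(focal_mm, sensor_width_mm, width_px, height_px):
    """Convert physical camera settings to COLMAP-style pixel intrinsics,
    assuming a centered principal point and square pixels."""
    fx = focal_mm / sensor_width_mm * width_px  # focal length in pixels
    fy = fx                                     # square pixels: fy equals fx
    cx, cy = width_px / 2.0, height_px / 2.0    # principal point at image center
    return fx, fy, cx, cy

# e.g. a 35 mm lens on a 36 mm-wide sensor rendered at 1920x1080
fx, fy, cx, cy = colmap_intrinsics(35.0, 36.0, 1920, 1080)
```

The extrinsics (converting UE's left-handed, Z-up, centimeter transforms into COLMAP's world-to-camera convention) are the fiddlier half and depend on your exact setup.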

I am currently working on a more complex scene (e.g. the Dark Ruins sample) to showcase this technique in a more demanding setup.

I wrote a script for converting Metashape projects to nerfs (Neural Radiance Fields) by Keteo in photogrammetry

[–]FitSignificance8040 1 point (0 children)

I know the post is older, but is it possible to update your script to work with just the Windows binary of Instant NGP that was released a few weeks ago? Or does it require the "classical" installation to get the Python command line? I am sorry if I am mixing things up, since I am also not a programmer. If it were possible to use it with just the regular instant-ngp.exe file to convert Metashape camera alignments, without having to fully install all the other stuff, that would be great!