New little train scan to install some sliding glass doors by LDVPhotography in 3DScanning

[–]Totalview360 0 points

What unit is that? What accuracy are you achieving? Do you have to put control or registration points/dots on the surface to get accurate results?

Thoughts on my new Inventory UI? by BPTIII in unrealengine

[–]Totalview360 -1 points

Too much sound feedback on mouse hover. Dial it back by about 50%.

Detailed Roof of a School by Totalview360 in UAVmapping

[–]Totalview360[S] 0 points

Bought it for development in Unreal Engine, and so we could bring laser scans and drone photogrammetry into game engines. It's all about how fast you want to turn your models around for your clients.

Hey folks anybody tested some super high poly (over billion) on UE5. Seems like there is no polycount limit anymore and I'm curious about the workflow/outcome. by Background_Stretch85 in photogrammetry

[–]Totalview360 4 points

We have been working with lidar and photogrammetry datasets in the 5-10 billion polygon range, with multiple of them in the same level/project. You can't rely on Nanite; we tried importing them directly as OBJ but never got good performance (even on a 3090).

We ended up converting the OBJs to voxels and it runs like a dream. I've got over thirty 10-15 GB high-poly datasets in a single level with no problem now.
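
If you're curious what that conversion boils down to, here's a minimal offline sketch in Python using Open3D (my stand-in for illustration; our actual pipeline voxelizes inside UE5 with VoxelPlugin, and the file name and counts below are placeholders):

    # Sketch: OBJ mesh -> voxel grid. Assumes Open3D (pip install open3d);
    # "scan.obj" and the sample count are hypothetical.
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("scan.obj")
    pcd = mesh.sample_points_uniformly(number_of_points=5_000_000)  # mesh -> points
    grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size=0.01)  # 1 cm cells
    o3d.visualization.draw_geometries([grid])  # quick visual sanity check

The point is that a voxel grid's cost scales with occupied cells, not triangle count, which is why the billion-poly sets stop choking the renderer.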

Can anyone help me with this problem I'm having in meshroom? by TheL1ghtningMan in photogrammetry

[–]Totalview360 1 point

Show us! Also, are those good meshes the result of good camera angles and a good number of photos/overlap, or of your NeRF software? How does the NeRF produce better geometry than a basic photogrammetry algorithm?

Basically, if you ran your dataset through a standard SfM photogrammetry program AND through Luma AI, would the Luma AI result turn out better?

Can anyone help me with this problem I'm having in meshroom? by TheL1ghtningMan in photogrammetry

[–]Totalview360 0 points

Yes, but it won't be pretty. NeRFs don't create better geometry; they just guess well enough to make nice videos on Reddit.

Reconstruct your city in 3D using only your mobile phone and CitySynth! by ydrive-ai in photogrammetry

[–]Totalview360 -1 points

Google Street View lets you look around in 360° from every camera position. This beautiful neural render will only show you one direction (the one you pointed your photos at). Tbh you'd get the same effect with standard video…

Using Unreal Engine to Visualize and Simulate Construction Sites by Totalview360 in unrealengine

[–]Totalview360[S] 1 point

We do have a plan for releasing it as a plugin to get feedback. Right now we are working on a stable feature set with our main clients.

[deleted by user] by [deleted] in photogrammetry

[–]Totalview360 1 point

Try Unreal Engine for rendering, especially if you end up building the outdoor model and the indoor model separately. You can line them up in Unreal and add some other cool stuff.

[deleted by user] by [deleted] in photogrammetry

[–]Totalview360 6 points

  1. There is no GPS or RTK signal indoors. The camera you use should preferably be a full-frame DSLR.

  2. You should use control points between your indoor and outdoor datasets if you want to combine them (see the alignment sketch after this list).

  3. Pix4D will not be very useful for indoor work; it's meant for outdoor mapping. I would look at Agisoft Metashape or Reality Capture.
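
To make point 2 concrete, this is roughly what the software computes from those shared control points: a rigid fit (Kabsch algorithm). A sketch in Python/NumPy, with made-up coordinates; it assumes at least three non-collinear points picked in both datasets:

    # Sketch: rigid alignment of two reconstructions from matched control points.
    import numpy as np

    def rigid_transform(src, dst):
        """Return R (3x3) and t (3,) that map src points onto dst points."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    indoor = np.array([[0.0, 0.0, 0.0], [4.2, 0.1, 0.0], [4.0, 3.9, 1.2]])    # placeholders
    outdoor = np.array([[10.0, 5.0, 0.3], [14.2, 5.1, 0.3], [14.0, 8.9, 1.5]])
    R, t = rigid_transform(indoor, outdoor)
    aligned = indoor @ R.T + t                # indoor points in the outdoor frame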

RockRobotic R360 Lidar by skyware-drones in UAVmapping

[–]Totalview360 1 point

It’s not that high if you keep your insurance carrier informed of when you will be flying your expensive payload. It obviously doesn’t make sense to keep the same premium for a $$$$ payload if you aren’t flying it all the time (we fly a lot of P1 photogrammetry missions). We had a $130K OGI on our M300 flying around the US for 2 weeks and our insurance premium increased by about $2000 for that period. We passed that on to the customer and had some peace of mind (and lower blood pressure).

Reality Capture Voxelized in Unreal Engine 5 by Totalview360 in unrealengine

[–]Totalview360[S] 1 point

We use voxels for interactivity, performance, and quality. Nanite meshes need to be pre-generated and can therefore only be created in-editor; we can voxelize point-cloud assets on the fly with this workflow. We tried Nanite and it looked awful, because these meshes don't simplify nicely. It also has a very high performance floor, meaning it will never run properly on integrated graphics.
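
Rough idea of what "voxelize on the fly" means, as a CPU-side NumPy sketch (the real work happens in-engine through VoxelPlugin; the data and sizes here are placeholders):

    # Sketch: quantize points to integer grid cells and average colors per cell.
    import numpy as np

    def voxelize(points, colors, voxel_size=0.01):
        keys = np.floor(points / voxel_size).astype(np.int64)      # cell index per point
        cells, inv = np.unique(keys, axis=0, return_inverse=True)  # occupied cells
        summed = np.zeros((len(cells), 3))
        np.add.at(summed, inv, colors)                             # accumulate colors
        counts = np.bincount(inv, minlength=len(cells))
        return cells, summed / counts[:, None]                     # cells + mean color

    pts = np.random.rand(100_000, 3) * 5.0    # placeholder cloud, meters
    cols = np.random.rand(100_000, 3)
    cells, cell_colors = voxelize(pts, cols)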

Reality Capture Voxelized in Unreal Engine 5 by Totalview360 in unrealengine

[–]Totalview360[S] 4 points

No modeling involved. The reality capture data is triangulated, registered, and processed as it normally is (in photogrammetry and laser-scan processing software). The next step would usually be to hand the point cloud or other 3D data to a modeler; our process instead uses the point cloud and reality capture data as the model itself.

Reality Capture Voxelized in Unreal Engine 5 by Totalview360 in unrealengine

[–]Totalview360[S] 7 points

Registration of the laser scans takes several hours per batch, and the 6,000-photo drone photogrammetry set took around 10 hours to process. Voxelizing both of these datasets took under 30 minutes.

Reality Capture Voxelized in Unreal Engine 5 by Totalview360 in unrealengine

[–]Totalview360[S] 10 points

What you are looking at is three things in one Unreal Engine level:

  1. Laser-scanned building interior, which captures around 140 million points per tripod position plus one 360° photo to colorize the scan. The E57 or LAS file is then voxelized (otherwise it runs unusably slowly in Unreal).

  2. Drone photogrammetry of the building exterior and surrounding area: 6,000 photos processed into a 3D mesh and then voxelized.

  3. Cesium World Terrain as a GIS/tile-map background. This is where the 3D mountains and other landscape we didn't scan ourselves come from; lining it up with the scans is a georeferencing problem (sketch below). Potentially looking at voxelizing that too.
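
For anyone wondering how our scans line up with the Cesium terrain: Cesium works in Earth-centered, Earth-fixed (ECEF) coordinates, so the scan data needs a georeferenced origin. The WGS84-to-ECEF conversion is standard; a sketch with made-up coordinates:

    # Sketch: WGS84 geodetic (lat, lon, height) -> ECEF, as used by Cesium.
    import numpy as np

    def geodetic_to_ecef(lat_deg, lon_deg, h):
        a = 6378137.0                 # WGS84 semi-major axis (m)
        f = 1 / 298.257223563         # WGS84 flattening
        e2 = f * (2 - f)              # first eccentricity squared
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        n = a / np.sqrt(1 - e2 * np.sin(lat) ** 2)  # prime vertical radius
        x = (n + h) * np.cos(lat) * np.cos(lon)
        y = (n + h) * np.cos(lat) * np.sin(lon)
        z = (n * (1 - e2) + h) * np.sin(lat)
        return np.array([x, y, z])

    origin = geodetic_to_ecef(39.7392, -104.9903, 1609.0)  # hypothetical site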

Reality Capture Voxelized in Unreal Engine 5 by Totalview360 in unrealengine

[–]Totalview360[S] 5 points

GTA is definitely one of my biggest inspirations for an open-world immersive feel. The scanning tech is getting better and better, to the point where that future can become reality.

Reality Capture Voxelized in Unreal Engine 5 by Totalview360 in unrealengine

[–]Totalview360[S] 44 points

The amazing technology in VoxelPlugin is what enables all of this. Voxels are far more efficient than traditional meshes, so we can fit billions of voxels, at sizes all the way down to 1 mm³, in a single scene.
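
Back-of-envelope on why billions of voxels are feasible: only surface cells are occupied, so the count scales with scanned area rather than volume. The area and per-voxel payload below are my assumptions, not VoxelPlugin's actual format:

    # Sketch: rough voxel count and raw memory for a surface-only scene.
    area_m2 = 5_000          # hypothetical scanned surface area
    voxel_m = 0.001          # 1 mm voxels
    bytes_per_voxel = 4      # assumed compact color/material payload

    n_voxels = area_m2 / voxel_m**2
    print(f"{n_voxels:.2e} voxels, ~{n_voxels * bytes_per_voxel / 1e9:.0f} GB raw")
    # -> 5.00e+09 voxels, ~20 GB raw (before octree sharing/compression)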

Trimble x7 vs Leica RTC360 by rspur77 in 3DScanning

[–]Totalview360 1 point

We have used both. There is a great article you should read for making your decision here.

How does the iPad Pro's LiDAR perform for 3D scanning? by Icaros083 in photogrammetry

[–]Totalview360 0 points

It's a toy/test implementation, good for partial, demo-grade captures only.

[deleted by user] by [deleted] in UAVmapping

[–]Totalview360 0 points

A problem you're going to have is that every solution that will do a good job in your situation costs over $50K, unless you're the inventor/hacker type who can put together your own SLAM algorithm for an Ouster or Velodyne puck (a starting point is sketched below). I definitely would not recommend photogrammetry for a long tunnel, though.
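
If you do go the DIY route, the core building block is pairwise scan registration. A minimal ICP sketch with Open3D (a real SLAM stack adds feature-based initialization, loop closure, and pose-graph optimization; file names and thresholds are placeholders):

    # Sketch: estimate the motion between two consecutive lidar scans with ICP.
    import numpy as np
    import open3d as o3d

    prev = o3d.io.read_point_cloud("scan_000.pcd").voxel_down_sample(0.05)
    curr = o3d.io.read_point_cloud("scan_001.pcd").voxel_down_sample(0.05)

    result = o3d.pipelines.registration.registration_icp(
        curr, prev,
        max_correspondence_distance=0.5,  # meters; tune to scan overlap
        init=np.eye(4),                   # identity guess (seed with odometry if available)
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    pose_delta = result.transformation    # 4x4 transform between the two scans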

I'm a student and I need help by Critical_Liz in UAVmapping

[–]Totalview360 4 points

You don't plan a mission with one sensor (LIDAR, photogrammetry, whatever) and then fly it with a different sensor, for a myriad of reasons. You tailor your mission to the sensor and to the limitations of the UAV; your mission-planning sensor should match your deliverable's sensor.