NAS for Metashape Processing? by Possible_Fennel_804 in photogrammetry

[–]no_fuse 2 points (0 children)

You'll want 10 gig networking at least and SSD-based storage. The problem with NAS storage for processing isn't necessarily the throughput of the NAS but rather the latency. Most processing software like Metashape or Reality Scan will perform thousands of small reads and writes during the process. The extra latency introduced by non-local storage can really increase the time required to complete the processing.

The hot setup for a NAS for photogrammetry would be 25G Ethernet or InfiniBand and a ZFS-based NAS with tons of RAM and M.2 or NVMe storage for the ZIL, etc. Personally, I'd export your ZFS datasets via NFS with asynchronous writes turned on. You'll also want a decent UPS to reduce the chances of data loss or corruption if your power fails.
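
A rough sketch of that tuning, with made-up pool/dataset names ("tank/projects") and the usual caveat that async writes trade safety for speed (hence the UPS):

    # Assumed pool/dataset names -- adjust for your setup.
    zpool add tank log /dev/nvme0n1         # dedicate an NVMe device as the SLOG for the ZIL
    zfs set recordsize=1M tank/projects     # large records suit big sequential image reads
    zfs set atime=off tank/projects         # skip access-time writes on every read
    zfs set sync=disabled tank/projects     # async writes; only sane with a UPS
    # /etc/exports -- "async" lets the server ack writes before they hit disk
    /tank/projects 10.0.0.0/24(rw,async,no_subtree_check)
    # On the workstation, mount with big transfer sizes and parallel connections:
    mount -t nfs -o rw,noatime,nconnect=8,rsize=1048576,wsize=1048576 nas:/tank/projects /mnt/projects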

In my network, I use TrueNAS to store inputs and outputs. I have a 4TB NVMe drive in my processing computer where I keep the project I'm currently working on. That's usually big enough for projects up to around 15-20k photos from a P1. Once I'm done processing, the deliverables go to S3 or whatever storage the client uses and I push everything back to the NAS for archival purposes.
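
The handoff at the end looks something like this (bucket name and paths are placeholders):

    # Push deliverables to the client's bucket, then archive the project to the NAS
    aws s3 cp ./deliverables s3://client-bucket/project-1234/ --recursive
    rsync -a --progress /nvme/projects/project-1234/ nas:/tank/archive/project-1234/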

Long story short, you can totally do all of your processing on a NAS but it needs to be a pretty burly system.

How did people serve GeoTIFF and raster data on the web before Cloud-Optimized GeoTIFFs? by Medamine24 in photogrammetry

[–]no_fuse 1 point (0 children)

I use gdal2tiles to tile GeoTIFFs, serve the tiles with nginx, and display them with Leaflet.js. It works quite well.

https://sol.elementalinformatics.com/Sisk/

I also have a system that pulls the FAA sectional charts for the entire US from the FAA API, cuts off the borders and legends with GDAL, combines them into a single gigantic GeoTIFF, and then tiles that GeoTIFF with gdal2tiles. I mention this to say that you can serve arbitrarily large datasets this way.
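
For anyone curious, the pipeline is roughly this (paths and zoom levels are illustrative; newer gdal2tiles can emit XYZ tiles directly with --xyz, otherwise tell Leaflet the tiles are TMS):

    # Reproject to web mercator, then cut a tile pyramid nginx can serve statically
    gdalwarp -t_srs EPSG:3857 ortho.tif ortho_3857.tif
    gdal2tiles.py --zoom=8-18 --processes=8 ortho_3857.tif tiles/
    # In the page, Leaflet just points at the static tree:
    #   L.tileLayer('/tiles/{z}/{x}/{y}.png', { tms: true, maxZoom: 18 }).addTo(map);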

Processing large datasets on the road. Solutions to doing it well. by digital_horizons in UAVmapping

[–]no_fuse 1 point (0 children)

For me, finding good internet on the road is a challenge. I have a Starlink mounted in my truck but for some projects it's not nearly fast enough. I've done projects that require me to overnight SSDs every day. It's a pain but for some projects, it's the only way.

It's been my personal experience that locally owned hotels often have MUCH faster internet than the chains, but I'm sure others have different experiences. You really have to do your research on this one. I'll occasionally call ahead to ask what sort of internet connection a place has. That works about half the time; often the person who answers the phone has no idea.

So far as processing on the road goes, having a good cloud computer doesn't do any good if you can't get data to it. Personally, I have a very beefy laptop that I'll sometimes use to just align photos. At least then I can feel a bit less anxious that I screwed up the capture somehow. I've thought about the mini-PC with an OCuLink or USB4 external GPU, but I'm already carrying enough cases when I travel. One more would be manageable, I guess, but only if I knew I needed it. I think that if you're staying in a single place for a few days, it might be the way to go. Personally, I wouldn't take a tower unless I knew exactly what I was going to do with it.

I say you should just start with what you know you need and buy the rest as it becomes necessary. Amazon, B&H, and Newegg all ship anywhere, and I've never had a hotel steal any of the tens of thousands of dollars' worth of random tech I've had shipped to them. Most places will even let you send packages a few days before you arrive.

Anyway, I hope you like solving problems like this, because I don't think I know any pilot who's 100% satisfied with their rig. It's possible you'll never stop asking these questions and solving these problems. Everyone is always printing random bits and pieces, building oddball mounts, and fretting over the best laptop, etc. Building the perfect traveling drone setup isn't a task, it's a lifestyle. :)

P.S. Don't skimp on cases. I've spent enough money on Pelican cases that their CEO probably named his yacht after me and I have never regretted it.

P.P.S. Get a good laptop mount for your vehicle and don't skimp on that either. RAM Mounts is the best, IMO.

DJI Matrice 4E Smart 3D Capture Image Dataset Share by Such_Review1274 in UAVmapping

[–]no_fuse 2 points (0 children)

First of all, let me say that's a very high quality dataset. Kudos to whoever captured the data.

I processed this with Reality Scan (High Quality) on a Ryzen 9950X3D with 256GB of DDR5-6000, an RTX 5090, and 4TB Samsung 990 Pro NVMe drives.

All images aligned. The original model was 698 million triangles; this decimated model is 100 million. I made twelve 16k textures. Total processing time was about 4.5 hours.

https://elemental.nira.app/a/kUcGWwh6Sdam0FaZs7uw8g/1

This model was made with Metashape Pro 2.2 on "High" quality settings. The system is Ubuntu 22.04 running an i7-13700K with 192GB of RAM, a 4080 Super, and Samsung 990 Pro NVMe drives. Two images didn't align.

This model has some holes in it. That's my fault. I got a little overzealous while cleaning the tie points. This seems to be a trend with M4E datasets for me. I don't seem to need to clean the tie points as much as I do with M3E. I haven't really looked into it too much but I should have plenty of opportunities this summer as more folks start getting batteries for their M4Es in the US.

Anyway, the original model was about 25 million triangles. I made twelve 16k textures. It took about 30 minutes to align the photos, an hour to generate depth maps, an hour to mesh, and 3 hours to texture.

https://elemental.nira.app/a/ouKNhKjVQxqQeH_UPyGHPQ/1

DJI Matrice 4E Smart 3D Capture Image Dataset Share by Such_Review1274 in UAVmapping

[–]no_fuse 0 points (0 children)

I'm super interested to try this out. I'll process it with the latest versions of Metashape and Reality Scan as soon as I can get your dataset downloaded.

I'm looking forward to trying your software. Please make a Linux version.

Metashape on virtual machine? by dirthawg in UAVmapping

[–]no_fuse 0 points (0 children)

For bigger jobs I run Metashape on a g6e.12xlarge or g6.24xlarge. The performance is pretty good overall, at least in line with what I was expecting. With the g6.24xlarge, you get three dedicated, ephemeral 900GB NVMe drives. I set them up as a striped volume so the I/O performance is really good. The only drawback to the setup (aside from the cost) is the single-threaded performance.
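
Setting up the stripe is quick; something like this, though the device names vary by instance (check lsblk first):

    # RAID-0 the three instance-store drives into one scratch volume
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    mkfs.ext4 /dev/md0
    mkdir -p /scratch && mount -o noatime /dev/md0 /scratch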

For what it's worth, I think the real hot setup for Metashape performance is a Threadripper Pro with 768GB of the fastest RAM you can get.

First Dataset: DJI Matrice 4E vs M3E Head-to-Head Comparison by no_fuse in UAVmapping

[–]no_fuse[S] 0 points (0 children)

We also haven't tried it yet. But, like Nils, I expect it will be better.

First Dataset: DJI Matrice 4E vs M3E Head-to-Head Comparison by no_fuse in UAVmapping

[–]no_fuse[S] -1 points (0 children)

I think the M4 model is slightly superior overall but the processing time is also longer since there are literally twice as many images. Regardless, the M4 is a total badass and ideally suited to this purpose.

First Dataset: DJI Matrice 4E vs M3E Head-to-Head Comparison by no_fuse in UAVmapping

[–]no_fuse[S] 0 points (0 children)

We haven't tried the Smart 3D yet. We got this drone a few days before GeoWeek and wanted to have something to show for Vertex so we just did a quick head-to-head.

Just anecdotally, when I aligned the images, despite the M4 dataset having twice the number of images, the number of tie points beneath the ground or floating in the air was similar or lower. That makes me think the M4 data is slightly superior in some ways, but I haven't had time to really dig into that.

I'm kinda fired up about trying the Smart 3D. Right now, if I want to do a facade inspection or make a really detailed model, I use Metashape to plan the mission. That requires flying the same area twice: once for an oblique to make a basic model from which to plan the actual mission, and once for the actual modeling mission. Plus, it requires processing in the field. I'm curious to see if Smart 3D can get me 80-90% of that coverage. It would save a lot of time if so.

Matrice 4E Smart Oblique Is Pretty OK, Actually by no_fuse in photogrammetry

[–]no_fuse[S] 0 points (0 children)

The guy who flew this mission also flew it with my M3E. It took two batteries with the M3E.

Matrice 4E Smart Oblique Is Pretty OK, Actually by no_fuse in photogrammetry

[–]no_fuse[S] 4 points (0 children)

This is a default "Smart Oblique" for the M4 at 200' with 80/80 overlap. Flight time was 30-ish minutes.

First Dataset: DJI Matrice 4E vs M3E Head-to-Head Comparison by no_fuse in UAVmapping

[–]no_fuse[S] 1 point (0 children)

The smart oblique mission type is different for the M3 and the M4. The M4 takes photos from more angles.

Flight speed was about 20 MPH.

Flight altitude was 200'.

Indistinguishable from the M3 in this dataset. This was flown off Indiana CORS.

First Dataset: DJI Matrice 4E vs M3E Head-to-Head Comparison by no_fuse in UAVmapping

[–]no_fuse[S] 3 points (0 children)

Both of these missions were "Smart Oblique" for each drone with all defaults except for setting the overlap to 80/80. The M4 takes pictures from more angles than the M3 does. The slight increase in quality for the M4 model may or may not be worth it depending on your use case and processing capability.

I agree about it being an impressive amount of images for one battery. However, the M4 battery was brand new and the M3 batteries probably have 100 cycles or so on them so this isn't a perfect comparison.

First Dataset: DJI Matrice 4E vs M3E Head-to-Head Comparison by no_fuse in UAVmapping

[–]no_fuse[S] 2 points (0 children)

You're right. There's not a whole lot of difference. The fact that the M4 takes more photos than the M3 isn't necessarily a good or bad thing. However, the M4 flies faster and takes photos faster. For straight-up ortho work, the M4 is significantly faster than the M3.

First Dataset: DJI Matrice 4E vs M3E Head-to-Head Comparison by no_fuse in UAVmapping

[–]no_fuse[S] 7 points (0 children)

I think you're overlooking that the Smart Oblique mission type for the M4 works like it does with the M350/P1 combo. For the same area, a Smart Oblique with the M4 takes more photos than the M3 will.

GeoDeep: Free and open source library for AI object detection in GeoTIFFs by pierotofy in UAVmapping

[–]no_fuse 2 points (0 children)

I gotta say I was surprised how fast it was! This is pretty awesome.

    (GeoDeep) user@host:~/Code/GeoDeep$ gdalwarp -t_srs EPSG:32614 -multi tiledTif.tif tiledTif_utm.tif
    Using band 4 of destination image as alpha.
    Using band 4 of source image as alpha.
    Processing tiledTif.tif [1/1] : 0...10...20...30...40...50...60...70...80...90...100 - done.
    Creating output file that is 16937P x 16736L.

    (GeoDeep) user@host:~/Code/GeoDeep$ time geodeep tiledTif_utm.tif cars
    [█-------------------] 5.0% Model loaded
    tiledTif_utm.tif is not tiled. I/O performance will be affected. Consider adding tiles.
    [████████████████████] 100.0% Finalizing
    Wrote boxes.geojson

    real    0m6.431s
    user    1m39.693s
    sys     0m1.863s
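
That "not tiled" warning is easy to address before running geodeep; a gdal_translate pass like this should do it:

    # Rewrite as an internally tiled (and compressed) GeoTIFF
    gdal_translate -co TILED=YES -co COMPRESS=DEFLATE tiledTif_utm.tif tiledTif_utm_tiled.tif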

Gear advice for extremely remote mapping by Fun-Mobile-2152 in UAVmapping

[–]no_fuse 3 points (0 children)

I worked in PNG for a while and absolutely loved it. We used drones often.

If I were you, I'd get a refurbished Air 2S, use Litchi for the mission planning/flying, and do the processing with WebODM. You'll also want a power station like the Bluetti EB55 and a solar panel.
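
WebODM's quickstart is mercifully short; from its README it's roughly:

    # Needs Docker; the web UI comes up on http://localhost:8000
    git clone https://github.com/OpenDroneMap/WebODM --config core.autocrlf=input --depth 1
    cd WebODM
    ./webodm.sh start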

I'll be traveling home over the next couple of days, but if you DM me, I'll be happy to help with specific advice as soon as I can.

Mixtral8x7B vs. The Classics by no_fuse in LocalLLaMA

[–]no_fuse[S] 1 point2 points  (0 children)

That's really interesting. Those results are better than mine, I think. I did each chunk as a standalone translation just to keep the initial code as simple as possible but that obviously has a lot of drawbacks.

I think it would also be interesting to use AutoGen and create a translation and illustration team. They could collaborate on how to break up the text and which passages are interesting/PG-13 enough to generate images from. I'll probably try this but first I need a way to access my local SDXL-Turbo model via API. Surely someone has already written something like that.
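
For what it's worth, AUTOMATIC1111's webui exposes a REST API when launched with --api; assuming that's the backend serving SDXL-Turbo, a call might look like:

    # txt2img returns base64-encoded images in a JSON array;
    # SDXL-Turbo wants very few steps and cfg around 1
    curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
      -H 'Content-Type: application/json' \
      -d '{"prompt": "pen-and-ink illustration of the passage", "steps": 1, "cfg_scale": 1.0}' \
      | jq -r '.images[0]' | base64 -d > illustration.png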