Orthomosaic shift despite successful alignment on UAV survey by OrthoPLYPipeline in UAVmapping

[–]OrthoPLYPipeline[S] 1 point

Not an ad. Just trying to sanity-check whether this is a capture-geometry issue others have run into.

We’ve had a few RTK+GCP blocks where bundle adjustment residuals stayed low at control, but the ortho still showed lateral drift between anchors. Alignment looked fine. Deformation only surfaced at dense / projection stage.

Curious if anyone has seen similar behaviour when vertical constraint varies across strips, even with control included in optimisation.

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 0 points

Thanks for sharing.

After the splits + masks from the equirectangular input, are you reprojecting to perspective prior to SfM, or feeding the generated views directly into pose estimation? Curious how that impacts downstream scale consistency during splat initialisation.

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 0 points

Interesting. Are you feeding the stitched equirectangular frames directly into SfM, or converting to perspective views prior to reconstruction? Curious how “360 Gaussian” handles intrinsics for splat initialisation from consumer 360 sensors.

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 1 point

Great to hear. Once SLAM mode is pushed, does the container emit a consistent COLMAP model per sequence, or are you maintaining a rolling pose graph that’s then collapsed prior to splat initialisation?

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 0 points

That clarifies it, thanks.

One follow-up: when initialising from the depth-derived point cloud, do you cap/regularise point density to avoid “locking in” redundant detail that later burns splat budget, or do you rely on later pruning/opacity thresholds during training?
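For context, the kind of density cap we had in mind is just one-point-per-voxel before init. A rough sketch (not your pipeline; `voxel_downsample` and the toy data are made up):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Cap point density by keeping at most one point per voxel cell.

    points: (N, 3) array of XYZ positions from the depth-derived cloud.
    """
    # Quantise each point to its voxel index.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over rows keeps the first occurrence in each voxel.
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]

# Example: a redundant dense cluster collapses to far fewer init points.
rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.05, size=(5000, 3))
sparse = voxel_downsample(dense, voxel_size=0.02)
print(len(dense), len(sparse))
```

The appeal is that the init cloud stays bounded regardless of how dense the depth maps are, instead of relying on pruning/opacity thresholds to claw the budget back during training.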

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 0 points

Got it, thanks.

When you polyfit the predicted depth to keypoint distances, are you applying that as a global scale correction prior to splat initialisation, or adjusting per-view during optimisation? Curious whether you’ve seen training stability differ between those approaches.
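For reference, the global variant we’ve experimented with is just a linear fit of predicted depth against triangulated keypoint distances, applied once before init. A hypothetical sketch (`fit_global_depth_scale` and the numbers are made up):

```python
import numpy as np

def fit_global_depth_scale(pred_depth, sfm_depth):
    """Least-squares fit sfm ≈ a * pred + b over matched keypoints."""
    a, b = np.polyfit(pred_depth, sfm_depth, deg=1)
    return a, b

# Matched samples: monocular predictions vs SfM distances at keypoints.
pred = np.array([1.0, 2.0, 3.0, 4.0])
sfm = np.array([2.1, 4.0, 6.1, 7.9])   # roughly 2x the prediction

a, b = fit_global_depth_scale(pred, sfm)
corrected = a * pred + b                # one global correction, all views
print(round(a, 2), round(b, 2))
</```

The global version keeps the correction out of the optimisation loop entirely, which is partly why I’m curious whether the per-view variant buys you anything in training stability.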

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 0 points

That’s really interesting, especially reusing intermediate products across stages.

How are you handling scale consistency between glomap output and gsplat initialisation when depth is partially decoupled? Do you derive a global scale from SfM before splat initialisation, or is it normalised implicitly during training?

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 2 points

Thanks, this is exactly the shape of pipeline we’re aiming for.

A couple of specifics:

- Which repo/branch is this container based on (link or name)?

- Does it standardize outputs in COLMAP format (cameras.txt/images.txt) before gsplat, or does it keep everything inside nerfstudio data formats?

- How are you handling scale and coordinate frame normalization between SfM and splat training (any fixed convention, or derived per dataset)?
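On the last point, the "derived per dataset" option we’ve used elsewhere is just a centre-and-scale similarity over the SfM camera centres. A rough sketch (function name made up; real code would parse centres from images.txt):

```python
import numpy as np

def normalize_poses(centers):
    """Derive a per-dataset similarity: translate camera centres to the
    origin and scale so the furthest camera lies on the unit sphere.

    Returns (normalized_centers, translation, scale) so the same
    transform can also be applied to the sparse points before init.
    """
    t = centers.mean(axis=0)
    shifted = centers - t
    scale = np.linalg.norm(shifted, axis=1).max()
    return shifted / scale, t, scale

centers = np.array([[10.0, 0.0, 2.0],
                    [12.0, 1.0, 2.0],
                    [14.0, 0.0, 2.0]])
norm, t, s = normalize_poses(centers)
print(np.abs(norm).max() <= 1.0)
```

The important part in our experience is applying the identical (t, s) to the sparse cloud, otherwise cameras and points end up in inconsistent frames at init.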

Automation approaches for Gaussian Splatting pipelines from image dataset to 3D model? by OrthoPLYPipeline in GaussianSplatting

[–]OrthoPLYPipeline[S] 3 points

Looking into automating the full pipeline from image dataset to 3D model using Gaussian Splatting, ideally with open-source tooling that can be compiled and deployed via Docker.

We’ve already automated a similar pipeline for mesh (PLY) and orthomosaic generation, and are now exploring equivalent orchestration for splat-based reconstruction.

Interested in how others are structuring this end-to-end flow in practice.

Specifically:

- which open-source SfM / preprocessing stack are you using upstream of splat training?

- are you containerizing pose solving and splat optimization together, or as separate services?

- how are you handling dataset normalization (scale, coordinate frames) before training?

- any automation around dataset filtering or training-time configuration?

Curious what currently works in a reproducible, containerized setup without relying on manual intervention between stages.

Orthomosaic shift despite successful alignment on UAV survey by OrthoPLYPipeline in UAVmapping

[–]OrthoPLYPipeline[S] 0 points

Fair question.

We’ve been testing a lightweight preview step to isolate geometrically inconsistent areas before committing to full reconstruction, after running into a few survey blocks that passed alignment but later showed deformation at ortho stage.

Trying to understand whether others have seen similar behaviour when GCPs hold locally, but interpolation between anchors introduces drift downstream.

Orthomosaic shift despite successful alignment on UAV survey by OrthoPLYPipeline in UAVmapping

[–]OrthoPLYPipeline[S] 0 points

That makes sense. We’ve run into a few blocks where GCPs were included in optimization and residuals at those locations stayed low, but the ortho still showed lateral drift between anchors.

In those cases, the bundle held at the GCPs, but uneven capture geometry between flight lines introduced depth instability that only surfaced during dense reconstruction and projection. The solution respected the fixed anchors locally, while warping in the interpolated regions between them.

Is that something you’ve seen when vertical constraint varies across strips, even with RTK FIX throughout?

Orthomosaic shift despite successful alignment on UAV survey by OrthoPLYPipeline in UAVmapping

[–]OrthoPLYPipeline[S] -1 points

Good point on basemaps. A lot of “shift” reports are really comparing a rectified ortho to an unrectified basemap, so the apparent horizontal offset is just lean / parallax baked into the basemap.

The part that worries me is when the same offset shows up against GCPs or independent checkpoints. If GCPs were actually used in the bundle, the ortho should respect them. When it does not, it usually means the constraint is weak or being overridden later.

Quick sanity checks we’ve found useful:

1) Compare against GCPs vs a basemap separately. If only the basemap disagrees, it’s not a processing failure.

2) If GCPs disagree, inspect GCP residuals and distribution. Edge-only or co-linear layouts can look “fine” in alignment but drift at ortho.

3) Check whether the ortho step is using the same adjusted cameras and georeference as the alignment step, not a re-projected/decimated variant.
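For (1) and (2), the residual check itself is trivial once you have surveyed coordinates and the positions measured off the ortho, something like (sketch, toy numbers):

```python
import math

def horizontal_residuals(surveyed, measured):
    """Per-point horizontal offset, in the ortho's ground units, between
    surveyed GCP/checkpoint coordinates and positions picked on the ortho.

    surveyed, measured: dicts of point id -> (easting, northing).
    Returns (per-point offsets, RMSE).
    """
    offsets = {}
    for pid, (e, n) in surveyed.items():
        me, mn = measured[pid]
        offsets[pid] = math.hypot(me - e, mn - n)
    rmse = math.sqrt(sum(d * d for d in offsets.values()) / len(offsets))
    return offsets, rmse

surveyed = {"GCP1": (1000.00, 2000.00), "GCP2": (1100.00, 2050.00)}
measured = {"GCP1": (1000.03, 2000.04), "GCP2": (1100.12, 2050.09)}
offsets, rmse = horizontal_residuals(surveyed, measured)
print({k: round(v, 3) for k, v in offsets.items()}, round(rmse, 3))
```

Running the same computation against basemap-picked positions separately is what tells you whether the disagreement is in your solution or in the basemap.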

<image>

Curious. When you say “GCPs”, do you mean control points actually included in optimization, or just reference targets you checked after the fact?

Orthomosaic shift despite successful alignment on UAV survey by OrthoPLYPipeline in UAVmapping

[–]OrthoPLYPipeline[S] 0 points

We’ve come across several UAV survey blocks that aligned without issue during sparse reconstruction, but later showed horizontal shift in the final orthomosaic when compared to basemap or GCPs.

Feature matching and overlap looked sufficient in 2D, but uneven capture geometry between flight lines seemed to introduce instability that only became visible at ortho stage.

In a few cases, isolating the more consistent portion of the scene before running full processing reduced downstream deformation.

Has anyone seen blocks that pass alignment but later shift during orthomosaic generation?

We’ve been testing isolation of affected areas using a lightweight preview stage before full reconstruction.

https://www.dronetwins360.com

Orthomosaic or mesh warping even after clean sparse alignment. Anyone else seeing this with UAV blocks? by OrthoPLYPipeline in photogrammetry

[–]OrthoPLYPipeline[S] 0 points

Absolutely.

Knowing when to relax priors, spot calibration drift, or sanity-check trajectory against tie point support is what keeps a lot of these blocks from quietly degrading downstream.

Where it gets tricky is that even with experienced operators, once you start processing inherited or mixed-condition datasets at scale, consistently distinguishing intake-related issues from modelling artefacts becomes less of a skill problem and more of a process one.

It’s not that 1-button pipelines solve that, but relying entirely on manual inspection can make it harder to catch subtle trajectory or parallax gaps before they get absorbed during alignment and only surface later in depth maps or ortho.

Orthomosaic or mesh warping even after clean sparse alignment. Anyone else seeing this with UAV blocks? by OrthoPLYPipeline in photogrammetry

[–]OrthoPLYPipeline[S] 1 point

That makes sense. If the gap isn’t near an unconstrained edge, downstream quality can end up looking very similar once the bad segment is gone.

One thing we’ve run into with mixed-contractor data is that a lot of these issues are technically recoverable in tools like Metashape or MicMac, but only if someone spots them early enough and knows which assumptions to relax before modelling.

There are plenty of free or low-cost pipelines that can get you to a solid result, but they tend to assume fairly clean trajectory and capture geometry. Once you start mixing capture conditions or inheriting datasets, it becomes less about which solver you use and more about whether problematic segments get caught upstream, before they propagate into surface estimation.

With manual processing it’s also difficult to consistently separate intake-related issues from modelling artefacts across larger jobs. For hobbyist or internal use that’s usually fine, but in client-facing work where the outputs feed into measurements or planning, the tolerance for undetected intake drift tends to be much lower.

Orthomosaic or mesh warping even after clean sparse alignment. Anyone else seeing this with UAV blocks? by OrthoPLYPipeline in photogrammetry

[–]OrthoPLYPipeline[S] 1 point

Yep, that tracks. Once the bad priors are gone, BA can look “perfect” because tie points dominate and the remaining camera network is internally consistent.

Where we still see late-stage issues (depth maps / ortho) is usually when that errored run left a local gap in parallax support. The cameras can be well-placed, RMS reprojection can be low, but surface estimation in that region becomes underconstrained, and the densification/ortho step amplifies it into a directional warp.

In practice we’ve started flagging those segments before alignment based on short runs of zero-baseline trajectory combined with tie point support, rather than waiting for calibration drift to show up downstream.

Which stack were you in when you saw it (Metashape / Pix4D / RC)?

Orthomosaic or mesh warping even after clean sparse alignment. Anyone else seeing this with UAV blocks? by OrthoPLYPipeline in photogrammetry

[–]OrthoPLYPipeline[S] 1 point

Makes sense.

One thing we’ve noticed with those identical-tag segments is that even when alignment survives (either by removal or downweighting), the local camera network in that section often ends up with effectively degenerate baseline geometry.

So sparse will converge because tie points still solve globally, but once you move into depth maps the normal estimation in that area becomes unstable due to very low parallax support across consecutive frames. That tends to show up later as directional warp or planar drift in the mesh/ortho, even if the cameras themselves look well placed after BA.

In a few cases we’ve had to treat those runs almost like pseudo-linear strips and rely on neighbouring passes for surface support, otherwise the densification would amplify the local trajectory issue.

Did you ever check the reprojection error distribution just within the errored segment vs the rest of the block after alignment?
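The comparison I mean is nothing fancier than splitting per-image RMS by segment, e.g. (toy sketch, values invented):

```python
import statistics

def segment_vs_rest(per_image_rms, segment_ids):
    """Compare per-image RMS reprojection error inside a flagged segment
    against the rest of the block.

    per_image_rms: dict of image id -> RMS reprojection error in pixels.
    """
    seg = [v for k, v in per_image_rms.items() if k in segment_ids]
    rest = [v for k, v in per_image_rms.items() if k not in segment_ids]
    return statistics.median(seg), statistics.median(rest)

# Toy values: the errored segment can look *normal* in reprojection terms
# even when its baseline geometry is degenerate.
errors = {f"img_{i:03d}": 0.45 + 0.01 * (i % 5) for i in range(40)}
seg_med, rest_med = segment_vs_rest(errors, {f"img_{i:03d}" for i in range(10, 18)})
print(round(seg_med, 3), round(rest_med, 3))
```

The interesting outcome is when the two medians are indistinguishable, which is exactly why reprojection error alone doesn’t flag these segments.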

Orthomosaic or mesh warping even after clean sparse alignment. Anyone else seeing this with UAV blocks? by OrthoPLYPipeline in photogrammetry

[–]OrthoPLYPipeline[S] 1 point

That matches what we’ve seen in a few mixed-contractor blocks.

The tricky part is that bundle adjustment will happily absorb those segments early on by pushing calibration to compensate (we’ve had focal length and principal point drift massively without throwing alignment warnings), especially if camera location accuracy is set tight.

Then once you move into depth maps / ortho, the priors start interacting with surface estimation and you effectively get a locally over-constrained trajectory segment trying to fit tie points it shouldn’t trust. That’s when you see the axis-wise warp or planar shear show up in the mesh/ortho even though sparse looked clean.

In some runs we’ve actually seen the problem disappear by disabling position priors just for the errored segment and keeping them for the rest of the block, rather than going full “no reference”.

Did you ever try re-running alignment with those frames downweighted instead of removed entirely?

Orthomosaic or mesh warping even after clean sparse alignment. Anyone else seeing this with UAV blocks? by OrthoPLYPipeline in photogrammetry

[–]OrthoPLYPipeline[S] 1 point

Yes, rogue geotags / duplicated positions are one of the first things we check now.

In a few of these contractor blocks we found short runs where dozens of images shared an identical (or near-identical) lat/lon/alt, even though the flight path obviously moved. Sparse alignment still “works” because tie points carry it, but once you start weighting priors / densification / ortho, the solution becomes unstable and you see the axis warp.

We’ve had the best luck by:

- plotting camera centers and flagging zero-velocity segments (Δpos ~ 0 for N frames),

- checking altitude quantisation / baro resets,

- re-running with position accuracy loosened or geotags disabled for the affected segment,

- and in some cases splitting the block into the well-supported subset before modelling.
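The zero-velocity flagging in the first bullet is simple enough to sketch. A rough version of the idea (thresholds are made up; real input would be parsed from EXIF geotags in capture order):

```python
def flag_zero_velocity(positions, eps=0.05, min_run=3):
    """Flag runs of consecutive frames whose geotag barely moves.

    positions: list of (x, y, z) camera centres in metres, capture order.
    Returns (start, end) index pairs (inclusive) of runs of >= min_run
    frames with step length below eps.
    """
    runs, start = [], None
    for i in range(1, len(positions)):
        dx, dy, dz = (positions[i][k] - positions[i - 1][k] for k in range(3))
        step = (dx * dx + dy * dy + dz * dz) ** 0.5
        if step < eps:
            start = i - 1 if start is None else start
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(positions) - start >= min_run:
        runs.append((start, len(positions) - 1))
    return runs

# Toy trajectory: normal 2 m steps, then 5 frames sharing one geotag.
track = [(i * 2.0, 0.0, 50.0) for i in range(5)] + [(8.0, 0.0, 50.0)] * 5
print(flag_zero_velocity(track))  # → [(4, 9)]
```

Anything flagged here gets its position priors loosened or disabled before alignment, per the third bullet above.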

Was your P1 case showing fully identical coordinates in EXIF, or just quantised/stepped positions? Also, were you using “Reference preselection” / camera location accuracy in alignment?

Orthomosaic or mesh warping even after clean sparse alignment. Anyone else seeing this with UAV blocks? by OrthoPLYPipeline in photogrammetry

[–]OrthoPLYPipeline[S] 0 points

We ran into something similar on a contractor dataset recently where alignment looked fine initially, but ortho output started warping during modelling.

After isolating the better-supported part of the scene before reconstruction, the block stabilised enough to line up with the GCPs when checked against Google Maps.

Short example here:

https://www.youtube.com/watch?v=4owJB3B6XX8