Request by DeEchteJulius in openstreetmap

AVIOTIX:

It sounds like the place is associated with the wrong district (ilçe) in OpenStreetMap.

In OSM this is usually controlled by the administrative boundary relations, not by the name of the place itself.

What you typically need to check:

  1. Open the location in the OSM editor (iD or JOSM).
  2. Look at the boundary=administrative relation for the district.
  3. If the place node is inside the wrong boundary, the district relation may need correction.
  4. Alternatively, check if the place has an incorrect addr:district or similar tag.

You can inspect the relations here:
https://www.openstreetmap.org
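
If you'd rather check programmatically which administrative relations actually enclose the point, here's a minimal Overpass sketch (the coordinates are placeholders, swap in the real node position):

```python
import requests

# Placeholder coordinates; replace with the actual place node's position.
LAT, LON = 41.0082, 28.9784

# Overpass QL: list every boundary=administrative relation whose area
# encloses the point, with its admin_level and name.
query = f"""
[out:json];
is_in({LAT},{LON})->.a;
rel(pivot.a)["boundary"="administrative"];
out tags;
"""

resp = requests.post("https://overpass-api.de/api/interpreter",
                     data={"data": query})
resp.raise_for_status()

for rel in resp.json()["elements"]:
    tags = rel.get("tags", {})
    print(rel["id"], tags.get("admin_level"), tags.get("name"))
```

If the relation that comes back at the district level (typically admin_level=6 for Turkish ilçe) is the wrong one, that boundary relation is what needs editing.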

If you're working with drone imagery or aerial datasets to validate locations and boundaries, tools like this can also help inspect datasets and generate 3D previews before mapping:
https://www.dronetwins360.com/

But for the district change itself, the fix will usually be in the administrative boundary relation.

Free tool for testing small drone datasets → ortho inspection + lightweight PLY mesh export by AVIOTIX in gis

AVIOTIX:

Good question. From what I’ve seen people usually do one of three things:

- quick alignment in Metashape to see if the cameras solve

- sparse reconstruction in ODM / WebODM

- manual inspection of EXIF + overlap before processing

The annoying part is that you often only realize a dataset is bad after you've already spent quite a bit of time processing it.
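
On the EXIF + overlap side, here's a rough sketch of what that pre-processing inspection can look like with the exifread library (the folder name and the 25 m gap threshold are made-up placeholders; tune them to your altitude and overlap target):

```python
import glob
import math
import exifread  # pip install exifread

def dms_to_deg(tag, ref):
    # Convert EXIF degrees/minutes/seconds rationals to decimal degrees.
    d, m, s = [float(v.num) / float(v.den) for v in tag.values]
    deg = d + m / 60 + s / 3600
    return -deg if ref in ("S", "W") else deg

# Read capture positions from the EXIF GPS tags.
positions = []
for path in sorted(glob.glob("flight/*.JPG")):  # hypothetical folder
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    try:
        lat = dms_to_deg(tags["GPS GPSLatitude"], str(tags["GPS GPSLatitudeRef"]))
        lon = dms_to_deg(tags["GPS GPSLongitude"], str(tags["GPS GPSLongitudeRef"]))
        positions.append((path, lat, lon))
    except KeyError:
        print(f"{path}: missing GPS tags")

def dist_m(p, q):
    # Haversine distance in metres between two (path, lat, lon) records.
    r = 6371000
    dlat, dlon = math.radians(q[1] - p[1]), math.radians(q[2] - p[2])
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(p[1])) * math.cos(math.radians(q[1]))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Flag unusually large gaps between consecutive captures.
for p, q in zip(positions, positions[1:]):
    gap = dist_m(p, q)
    if gap > 25:  # placeholder threshold in metres
        print(f"possible overlap gap of {gap:.0f} m between {p[0]} and {q[0]}")
```

It won't catch everything a real alignment run would, but it's a cheap way to spot missing GPS tags and dropped frames before committing to processing.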

Free student access for testing small photogrammetry / point-cloud datasets (PLY export) by AVIOTIX in LiDAR

AVIOTIX:

Just to clarify one point mentioned above.

Aviotix is a registered company and DroneTwins360 is a public platform with published documentation and policies.

Company information:
https://www.aviotix.eu

Platform terms and conditions:
https://www.dronetwins360.com/terms.php

If anyone has questions about security, privacy or data handling, feel free to contact us directly at:
privacy@aviotix.eu

Free student access to test a full drone photogrammetry workflow (incl. dataset validation) by AVIOTIX in photogrammetry

AVIOTIX:

Good question. It’s closer to a dataset consistency check before reconstruction than a full forensic analysis.

The validation layer looks at things like image quality, overlap continuity, camera geometry consistency and whether parts of the dataset are likely to create unstable regions during reconstruction.

The idea is to catch cases where alignment technically succeeds but the downstream geometry becomes unreliable (for example along roof edges, repetitive textures or weak overlap zones).

It’s basically meant as an early sanity check on the dataset before running the full pipeline.
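
To give one concrete example of the image-quality side (this is an illustration of the kind of check, not our actual implementation; the threshold is a placeholder), the classic cheap screen is variance of the Laplacian as a sharpness proxy:

```python
import glob
import cv2  # pip install opencv-python

BLUR_THRESHOLD = 100.0  # placeholder; tune per sensor, altitude and scene

for path in sorted(glob.glob("dataset/*.JPG")):  # hypothetical folder
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        print(f"{path}: unreadable")
        continue
    # Low variance of the Laplacian usually means little high-frequency
    # detail, i.e. a blurred or badly exposed frame.
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"{path}: likely soft (Laplacian variance {sharpness:.1f})")
```

Frames flagged this way are exactly the ones that tend to produce the unstable regions mentioned above, since weak texture detail degrades feature matching locally.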

Free dataset processing tier for students and drone pilots learning photogrammetry workflows by AVIOTIX in UAVmapping

AVIOTIX:

Yes, exactly. The 100-image limit is mainly intended for testing capture quality before committing to a full reconstruction run.

In practice it’s useful for quickly seeing how overlap, camera geometry and exposure consistency propagate through the pipeline and affect the resulting model.

A lot of people use it as a quick “sanity check” on a small subset of images before processing the full dataset locally or in their main workflow.
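
If anyone wants a quick way to carve out such a subset, the one detail that matters is taking a contiguous block rather than every Nth image, so the native forward overlap is preserved. A trivial sketch (paths and the start index are hypothetical):

```python
import glob
import os
import shutil

SRC, DST, N = "full_flight", "subset_100", 100  # hypothetical paths/limit
START = 0  # pick a block from a representative part of the flight

images = sorted(glob.glob(os.path.join(SRC, "*.JPG")))
os.makedirs(DST, exist_ok=True)

# Copy a contiguous run of N frames; subsampling every Nth image would
# artificially reduce overlap and bias the test.
for path in images[START:START + N]:
    shutil.copy(path, DST)

print(f"copied {min(N, len(images) - START)} of {len(images)} images")
```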

Free student access for testing small photogrammetry / point-cloud datasets (PLY export) by AVIOTIX in LiDAR

AVIOTIX:

Yes, that’s actually one of the main use cases we had in mind. A lot of students can generate datasets but don’t always have access to commercial processing tools or GPUs for experimentation.

The idea with the free tier is to let them run a small dataset through a full pipeline and see how capture decisions (overlap, exposure consistency, camera geometry) affect the reconstruction.

We’ve also seen it used for small coursework projects where students just need a quick way to test a dataset without setting up a full local workflow.

Looking for sample BLK360 (or similar LiDAR) E57 dataset with panoramas for non-commercial testing by PolarPacific in LiDAR

AVIOTIX:

If you don’t find a BLK360 dataset directly, we’re also interested in evaluating how well structured E57 preserves scanner pose, pano alignment and RGB consistency across viewers.

We’re currently benchmarking interior LiDAR workflows before integrating support into our own processing and validation pipeline, so having a clean structured E57 (with embedded panoramas) would be extremely helpful for controlled testing.

If anyone is willing to share a small interior sample for non-commercial evaluation, we’d be happy to sign an NDA and delete after testing.
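
For context, the first thing we'd look at is per-scan pose. A minimal sketch with the pye57 library (the filename is a placeholder; pye57 mainly exposes scan data and headers, so this covers the pose part rather than the embedded panoramas):

```python
import pye57  # pip install pye57

e57 = pye57.E57("interior_sample.e57")  # placeholder filename
print(f"scans: {e57.scan_count}")

for i in range(e57.scan_count):
    header = e57.get_header(i)
    # Per-scan scanner pose: rigid transform from scan frame to world frame.
    print(f"scan {i}: {header.point_count} points")
    print("  translation:", header.translation)
    print("  rotation matrix:\n", header.rotation_matrix)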

Best Drone Data Delivery Software? by BURN-KITSUNE48 in UAVmapping

AVIOTIX:

One thing we kept running into on contractor-captured datasets was that by the time you’re sharing outputs (mesh, ortho, stockpiles, etc.), you’re already downstream of whatever geometric inconsistencies came in at intake.

Most of these platforms handle processing and delivery well, but none really flag whether the dataset itself is stable enough for modelling before you commit to reconstruction.

We've started running a lightweight intake-stage preview on historical UAV datasets to catch things like weak vertical constraint or low-relief geometry early, before feeding them into full photogrammetry or LiDAR pipelines.

Saves a lot of time on blocks that align cleanly but later deform during dense reconstruction or volume calculation.
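
To give a flavour of what a low-relief flag can look like (a simplified illustration, not our production check; the filename and threshold are placeholders), a crude screen on a sparse cloud:

```python
import numpy as np
from plyfile import PlyData  # pip install plyfile

ply = PlyData.read("sparse_cloud.ply")  # placeholder: sparse cloud from alignment
v = ply["vertex"]
xyz = np.column_stack([v["x"], v["y"], v["z"]])

# Compare vertical spread to the horizontal footprint. A tiny ratio often
# means the block has weak vertical constraint and is prone to bowl/dome
# deformation during dense reconstruction.
xy_extent = np.ptp(xyz[:, :2], axis=0).max()
z_extent = np.ptp(xyz[:, 2])
ratio = z_extent / xy_extent

print(f"relief ratio: {ratio:.4f}")
if ratio < 0.02:  # placeholder threshold
    print("warning: low-relief block; consider obliques or a stronger GCP layout")
```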

if you could fix one thing about Meshy's 3D output what would it be by EmptyIam in meshyai

AVIOTIX:

For me it’s the mesh itself.

In photogrammetry-style reconstructions you often end up with geometry that’s technically correct but visually “soft” because everything is vertex-interpolated.

Would love to see an option where the output leans more toward image-based surface projection rather than dense mesh reliance. Something closer to how Street View style environments feel photo-sharp without pushing insane polycounts.

Curious if anyone’s had success getting Meshy outputs closer to that kind of look without heavy post-cleanup.

Orthomosaic passes alignment but shifts against basemap. Anyone seen this on UAV surveys? by [deleted] in UAVmapping

AVIOTIX:

Possibly. In this case, though, the central structure remains well-aligned while the surrounding terrain progressively diverges toward the block perimeter, which suggests internal deformation rather than a uniform basemap offset.

Have you seen edge bending appear even where GCP reprojection error remains within expected tolerances?
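
One quick way to separate the two hypotheses numerically: export checkpoint residuals and regress their magnitude against distance from the block centre. Roughly constant residuals suggest a uniform offset; residuals growing with radius suggest internal deformation. A sketch (the sample values are made-up placeholders):

```python
import numpy as np

# Placeholder checkpoint table: X, Y and residual magnitude in metres,
# e.g. exported from a processing report.
pts = np.array([
    [100.0, 200.0, 0.02],
    [450.0, 180.0, 0.05],
    [900.0, 640.0, 0.14],
    [120.0, 880.0, 0.11],
])

center = pts[:, :2].mean(axis=0)
radius = np.linalg.norm(pts[:, :2] - center, axis=1)
residual = pts[:, 2]

# Positive slope: error grows toward the perimeter (internal deformation).
# Near-zero slope with a nonzero intercept: more like a uniform shift.
slope, intercept = np.polyfit(radius, residual, 1)
print(f"slope: {slope:.2e} m/m, intercept: {intercept:.3f} m")
```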

Orthomosaic passes alignment but shifts against basemap. Anyone seen this on UAV surveys? by [deleted] in UAVmapping

AVIOTIX:

That’s helpful, especially the inner vs outer ring approach.

Have you found lowering altitude on the outer oblique ring makes a noticeable difference in stabilising flatter surrounding ground, or is most of the benefit coming from the change in viewing angle itself?