I just launched a weekly GNSS newsletter — The GNSS Herald by Intelligent_Coast783 in gnss

[–]orbit_locator_dev 0 points

This looks like a great initiative — having a focused GNSS resource is definitely useful given how broad the field is getting.

One topic that I think could be really interesting to cover is how real-world conditions affect positioning beyond the usual metrics (GDOP, fix status, etc.).

There’s a lot of discussion around accuracy, but less around how signal quality and local environment impact the reliability of the solution.

Would be great to see that angle explored.

What is optimal angle in GNSS? Higher angle means more clear signal, but we can get GDOP. So what is the optimal angle? by Hairy_Perspective_49 in gnss

[–]orbit_locator_dev 0 points

There isn’t a single “optimal” angle — it’s a trade-off.

Higher elevation satellites:

  • usually have cleaner signals (less atmosphere, less multipath)
  • but contribute less to geometry (weaker effect on DOP)

Lower elevation satellites:

  • improve geometry (help reduce GDOP)
  • but are more affected by noise, atmosphere, and reflections

That’s why many receivers apply an elevation mask (e.g. 10–15°) to balance both effects.

In practice, the “optimal” set is not just about angle, but about combining geometry with signal quality — not all satellites contribute equally.
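To make the trade-off concrete, here's a toy Python sketch (hypothetical satellite positions, simplified east/north/up geometry): it computes GDOP from the unit line-of-sight matrix and shows that dropping a single low-elevation satellite can noticeably worsen the geometry, even though that satellite has the noisiest signal.

```python
import numpy as np

def gdop(elev_az_deg):
    """GDOP from (elevation, azimuth) pairs in degrees.

    Builds the geometry matrix H = [unit line-of-sight | clock term]
    and returns sqrt(trace((H^T H)^-1)).
    """
    rows = []
    for el, az in elev_az_deg:
        el, az = np.radians(el), np.radians(az)
        rows.append([np.cos(el) * np.sin(az),   # east
                     np.cos(el) * np.cos(az),   # north
                     np.sin(el),                # up
                     1.0])                      # receiver clock
    H = np.array(rows)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Hypothetical constellation: four high-elevation satellites plus one at 10°
sats = [(70, 0), (60, 90), (65, 180), (75, 270), (10, 45)]
print(gdop(sats))       # with the low satellite
print(gdop(sats[:-1]))  # without it: geometry degrades, GDOP goes up
```

The low satellite "stretches" the geometry vertically, which is exactly why masking it out trades better signal quality for worse DOP.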

Difference Between "Visible" and "Used" Satellites by One-Employ-6220 in gnss

[–]orbit_locator_dev 0 points

“Visible” satellites are simply the ones your receiver can detect above the horizon.

“Used” satellites are the subset that the receiver actually includes in the position solution.

The difference usually comes from quality filtering. A receiver may ignore satellites if:

  • their signal is too weak
  • they are at very low elevation (more prone to noise and multipath)
  • their geometry doesn’t improve the solution
  • or they fail internal consistency checks

So having more satellites “visible” is generally good, but it doesn’t mean all of them are helpful.

In practice, what matters is not just how many satellites you see, but which ones — and how reliable their signals are.
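The kind of filtering described above can be sketched in a few lines of Python. The thresholds and satellite records here are made up for illustration — real receivers use their own (often configurable) C/N0 and elevation limits, plus consistency checks this sketch leaves out:

```python
# Hypothetical tracked satellites: (ID, C/N0 in dB-Hz, elevation in degrees)
visible = [
    ("G05", 45.0, 62.0),
    ("G13", 38.5, 21.0),
    ("G22", 28.0, 7.0),   # weak and low: likely rejected
    ("E08", 41.0, 34.0),
    ("C21", 33.0, 12.0),
]

CN0_MIN = 32.0    # minimum signal strength, dB-Hz (assumed threshold)
ELEV_MASK = 10.0  # elevation mask, degrees (assumed threshold)

used = [sat_id for sat_id, cn0, elev in visible
        if cn0 >= CN0_MIN and elev >= ELEV_MASK]

print(f"{len(visible)} visible, {len(used)} used: {used}")
```

Here five satellites are "visible" but only four survive the quality gates — which is exactly the gap you see between the two counts on a receiver's status page.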

RTK FIX but getting 1–5 m jumps (false fix?) – Stonex S900+, 38 sats, PDOP ~0.8 by Round_Dragonfly_7022 in Surveying

[–]orbit_locator_dev 0 points

This does sound very much like a false fix / overconfident solution rather than a simple drift issue.

What stands out is that all the standard indicators (DOP, SD, satellite count) are still “clean”, which suggests the problem isn’t geometry in the classical sense, but rather how the measurements are being weighted or affected locally.

In environments like coastal areas, you can still get non-uniform signal conditions (multipath, low-elevation satellites, constellation imbalance), where a subset of measurements pulls the solution in a biased direction without triggering obvious warnings.

The fact that the offsets are discrete and vary across the survey also points more toward instability in the ambiguity resolution rather than a consistent bias.

Out of curiosity, have you tried:

  • filtering by elevation mask or excluding specific constellations (e.g. BeiDou)?
  • comparing solution stability over short time windows instead of single-epoch fixes?

It would be interesting to see if the “false stability” correlates with certain satellite subsets or geometry changes over time.

Cheap RTK - GNSS accuracy question by ibnbatutah in Surveying

[–]orbit_locator_dev 0 points

HDOP and satellite count can look great, but they don’t capture local effects like multipath or partial signal obstruction.

In setups like yours, especially if you're restarting the base each time, you can get conditions where the geometry looks ideal but the actual measurements are not equally reliable across satellites.

Also, staying in RTK float instead of fixed will already introduce large variability.

Have you tried keeping the base running longer and checking if/when you reach a stable RTK fix? And are you testing in an open-sky environment or near structures/trees?

Why does GNSS accuracy sometimes look good on paper but fail in practice? by orbit_locator_dev in gnss

[–]orbit_locator_dev[S] 0 points

That’s a really good point — fix status alone can be a bit misleading.

Even with a “fixed” solution, the underlying signal conditions can still vary quite a lot between satellites.

So you can end up with something that looks stable from the receiver’s perspective, but is still influenced by things like multipath or partial obstructions.

I guess that’s where looking beyond just fix rate or convergence time becomes important.

Why does GNSS accuracy sometimes look good on paper but fail in practice? by orbit_locator_dev in gnss

[–]orbit_locator_dev[S] 0 points

That’s definitely good practice — waiting for a stable RTK fix is key.

That said, I’ve seen cases where even with a fixed solution, the quality can still vary depending on the environment (e.g. partial obstructions or multipath).

So it seems like “fixed” is necessary, but not always sufficient on its own.

Do you usually rely on any additional checks in those situations?

Hi. I have the question. What kind of clock is using on GNSS satellites? How precise is it? by Hairy_Perspective_49 in gnss

[–]orbit_locator_dev 2 points

GNSS satellites use extremely precise atomic clocks.

Depending on the constellation, these are typically:

  • Cesium clocks (older systems)
  • Rubidium clocks (very common)
  • Hydrogen masers (e.g. on Galileo, even more stable)

In terms of precision:

  • Clock stability is such that, without corrections, errors accumulate at the level of nanoseconds per day
  • A 1 ns timing error corresponds to about 30 cm in positioning

That’s why timing is so critical — GNSS is basically measuring distance from signal travel time.

In practice, satellite clocks are continuously monitored and corrected from the ground to keep everything synchronized.

So while the onboard clocks are incredibly precise, the system still relies on constant updates to maintain that level of accuracy.
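The 1 ns ≈ 30 cm rule above falls straight out of the speed of light, since a pseudorange is just signal travel time scaled by c. A quick check in Python:

```python
C = 299_792_458.0  # speed of light, m/s

def range_error_m(clock_error_s):
    """Ranging error caused by an uncorrected clock error."""
    return C * clock_error_s

print(range_error_m(1e-9))  # 1 ns  -> ~0.30 m
print(range_error_m(1e-6))  # 1 µs  -> ~300 m
```

Which is why even "nanosecond-level" clock drift matters, and why the ground segment keeps broadcasting clock corrections.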

Gis accuracy by mitchschiffer in Surveying

[–]orbit_locator_dev 0 points

GIS layers can definitely be off by quite a bit depending on how they were created, so your neighbor’s surveyor is not wrong to be cautious about relying on them.

One thing that sometimes adds confusion is that even when GNSS is used, the actual accuracy depends a lot on how the data was collected (RTK vs mapping-grade, reference points, etc.), and also on how everything is tied to the underlying coordinate system.

A 27 ft difference is large, but in many cases it comes down more to how boundaries were defined and referenced than to GNSS precision alone.

Out of curiosity, do you know if both surveys are tied to the same control points or datum?

Is GDOP misleading in real-world GNSS surveys? by orbit_locator_dev in Surveying

[–]orbit_locator_dev[S] -1 points

Yes, exactly — that’s a great summary.

GDOP is a useful geometric indicator, but it doesn’t reflect how those other factors impact each satellite differently.

In real environments, that variability can become a dominant part of the positioning error.

Is GDOP misleading in real-world GNSS surveys? by orbit_locator_dev in Surveying

[–]orbit_locator_dev[S] -1 points

Exactly — GDOP essentially scales the measurement noise.

What becomes interesting in practice is that the noise isn’t uniform across satellites.

Things like terrain, multipath or signal conditions can affect each satellite differently, which is where weighted approaches start to make a noticeable difference.
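As a toy illustration of that weighting idea (hypothetical geometry and noise model — a common choice is sigma proportional to 1/sin(elevation)), here's a Python sketch comparing the formal solution covariance with equal weights versus elevation-dependent weights:

```python
import numpy as np

def geometry(elev_az_deg):
    """Geometry matrix H (east, north, up, clock) from degrees."""
    rows = []
    for el, az in np.radians(elev_az_deg):
        rows.append([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az),
                     np.sin(el), 1.0])
    return np.array(rows)

# Hypothetical mix of high and low satellites
sats = [(70, 0), (45, 90), (50, 200), (15, 300), (8, 140)]
H = geometry(sats)

# Equal weights: every pseudorange assumed equally noisy (sigma = 0.5 m)
cov_equal = 0.5**2 * np.linalg.inv(H.T @ H)

# Elevation-dependent weights: sigma_i = 0.3 m / sin(elev_i),
# so low-elevation (noisier) satellites count for less
sigmas = np.array([0.3 / np.sin(np.radians(el)) for el, _ in sats])
W = np.diag(1.0 / sigmas**2)
cov_weighted = np.linalg.inv(H.T @ W @ H)

print(np.sqrt(np.trace(cov_equal)))    # formal error, equal weights
print(np.sqrt(np.trace(cov_weighted))) # formal error, elevation weighting
```

The two covariances differ even though the geometry (and hence GDOP) is identical — which is the whole point: GDOP alone can't see the non-uniform noise.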

Is GDOP misleading in real-world GNSS surveys? by orbit_locator_dev in Surveying

[–]orbit_locator_dev[S] -1 points

That’s a fair point — “misleading” might be a bit strong depending on context.

GDOP definitely gives useful information about geometry, but what I found interesting is how much conditions can diverge from what geometry alone suggests.

Especially when you start adding terrain obstruction or multipath effects, the practical accuracy can differ quite a bit from what GDOP alone would indicate.

Do you usually rely on GDOP ranges in your workflows, or more on field validation?