11 Common evaluation pitfalls
• Parameters may have the same name but a different definition. GNSS receivers may have a similar size, price and power consumption but can still have different functionalities (e.g. no support for passive antennas, different temperature range). Also, the definitions of Hot, Warm, and Cold Start times may differ between suppliers.
• Verify design-critical parameters. Try to use identical or at least similar settings when comparing the GNSS performance of different receivers. Data that has not been recorded at the same time and in the same place should not be compared: the satellite constellation, the number of visible satellites, and the sky view might have been different.
• Do not compare momentary measurements. GNSS is a non-deterministic system: the satellite constellation changes constantly, atmospheric effects (e.g. at dawn and dusk) influence the signal travel time, and the position of the GNSS receiver is typically not the same between two tests. Comparative tests should therefore be conducted in parallel using one antenna and a signal splitter, and statistical tests should be run for 24 hours (a post-processing sketch is given after this list).
• Monitor the carrier-to-noise ratio (C/N0). The average C/N0 of the high-elevation satellites should be between 40 dBHz and about 50 dBHz. A low C/N0 results in a prolonged TTFF and more position drift (see the C/N0 averaging sketch after this list).
• Try to feed the same signal to all receivers in parallel (e.g. through a splitter) with identical cable lengths; otherwise the receivers will not have the same sky view. Even small differences can have an impact on speed, accuracy, and power consumption. One additional satellite can lead to a lower dilution of precision (DOP), less position drift, and lower power consumption.
• When doing reacquisition tests, cover the antenna in order to block the sky view.
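As a complement to the 24-hour comparison advice above, the following is a minimal post-processing sketch, assuming the recorded fixes of one receiver are available as a CSV file with latitude/longitude columns in decimal degrees. The file name, column names, and reference coordinates are hypothetical, not taken from this guide. The sketch reduces a 24-hour log to simple horizontal-error statistics (CEP50 and R95) that can be compared between receivers.

# Sketch: summarize 24 h of recorded fixes into drift statistics, assuming a
# CSV log with "lat" and "lon" columns in decimal degrees (hypothetical format).
import csv
import math

REF_LAT, REF_LON = 47.2850, 8.5650     # surveyed reference position (example values)
EARTH_RADIUS_M = 6_371_000.0

def horizontal_error_m(lat, lon, ref_lat=REF_LAT, ref_lon=REF_LON):
    """Equirectangular approximation of the horizontal error in meters."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon) * math.cos(math.radians(ref_lat))
    return EARTH_RADIUS_M * math.hypot(d_lat, d_lon)

errors = []
with open("fixes_24h.csv", newline="") as f:      # hypothetical log file name
    for row in csv.DictReader(f):
        errors.append(horizontal_error_m(float(row["lat"]), float(row["lon"])))

errors.sort()
def percentile(p):
    return errors[min(len(errors) - 1, int(p / 100 * len(errors)))]

print(f"fixes: {len(errors)}")
print(f"CEP50: {percentile(50):.2f} m")   # 50 % of fixes within this radius
print(f"R95:   {percentile(95):.2f} m")   # 95 % of fixes within this radius

Figures produced this way are only comparable when the logs were recorded simultaneously through a signal splitter, as noted above; statistics from logs recorded at different times or places should not be compared.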
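To illustrate the C/N0 check above, here is a small sketch that averages the reported C/N0 of high-elevation satellites from a plain-text NMEA log containing GSV sentences. The log file name and the 50° elevation threshold are assumptions for illustration, not values specified in this guide.

# Sketch: average C/N0 of high-elevation satellites from NMEA GSV sentences,
# assuming a plain-text NMEA log ("nmea_log.txt" is a hypothetical file name).
ELEVATION_MASK_DEG = 50        # "high elevation" threshold used here (assumption)

def gsv_cn0_values(line, min_elev=ELEVATION_MASK_DEG):
    """Yield C/N0 values [dBHz] of satellites above min_elev from one GSV sentence."""
    if "GSV" not in line[:6]:
        return
    fields = line.split("*")[0].split(",")     # drop the checksum
    # satellite blocks of 4 fields (SV id, elevation, azimuth, C/N0) start at index 4
    for i in range(4, len(fields) - 3, 4):
        _sv, elev, _azim, cn0 = fields[i:i + 4]
        if elev and cn0 and int(elev) >= min_elev:
            yield int(cn0)

cn0 = []
with open("nmea_log.txt") as f:
    for line in f:
        cn0.extend(gsv_cn0_values(line.strip()))

if cn0:
    print(f"average C/N0 above {ELEVATION_MASK_DEG} deg elevation: "
          f"{sum(cn0) / len(cn0):.1f} dBHz over {len(cn0)} measurements")

If the resulting average for high-elevation satellites falls clearly below the 40 to 50 dBHz range quoted above, check the antenna, cabling, and splitter setup before drawing conclusions about the receivers under test.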