Figure 86: Different types of occlusion (1: Camera occlusion, 2: Laser occlusion)
16.1.2 Width resolution and resolution in the motion direction
In a laser triangulation system, the camera placement and optics determine the width of the field-of-view (FOV). The resolution across the object (ΔX) is the FOV width divided by the number of pixels.
The resolution along the motion direction (ΔY) is a direct function of the measurement frequency and the object speed, as sketched below.
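As an illustration, and using symbols introduced here only for clarity (W_FOV for the FOV width, N for the number of pixel columns, v for the object speed, and f for the measurement frequency), the two resolutions can typically be written as

\[
\Delta X = \frac{W_{\mathrm{FOV}}}{N}, \qquad \Delta Y = \frac{v}{f}
\]

For example, a 100 mm wide FOV imaged by 2560 pixel columns gives ΔX ≈ 0.04 mm, and an object moving at 500 mm/s measured at 1000 profiles per second gives ΔY = 0.5 mm. All numbers here are assumed example values, not Ruler3000 specifications.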
16.1.3 Sensor coordinate system
This section is only relevant for 2D images and uncalibrated data. Normally a
Ruler3000 uses the built-in calibration and receives 3D data as rectified Z data with Z
pointing towards the device, i.e. inverse to the sensor v axis.
The camera views the object and the laser line from above, with a certain angle between the camera and the laser, as described in this document. The 2D image has its origin in the top left corner when you view the image on the screen, see the figure below. This means that the v-coordinate of a point that is close to the bottom of the screen (v1) is greater than the v-coordinate of a point that is higher up on the screen (v2).
Figure 87: Sensor image and coordinate system (origin (0,0) in the top left corner, u axis pointing right, v axis pointing down; example points (u1,v1) and (u2,v2))
When the coordinates from the sensor image are used as 3D data, a high value of the
v-coordinate will give a high range value.
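As a minimal sketch of this relation (the function and parameter names below are hypothetical and not part of the Ruler3000 API), uncalibrated row coordinates can be mirrored within the sensor height to obtain a value that, like the calibrated Z, grows towards the device rather than along the sensor v axis:

import numpy as np

def v_to_device_oriented_range(v_rows, num_sensor_rows):
    """Invert the sensor v axis of an uncalibrated profile.

    In the sensor image the origin is in the top left corner and v grows
    downwards, so a larger v gives a larger range value. The calibrated
    Z axis points towards the device, i.e. in the opposite direction, so
    a device-oriented value can be obtained by mirroring v within the
    sensor height. Illustrative only; num_sensor_rows is an assumed
    parameter, not a documented Ruler3000 constant.
    """
    return (num_sensor_rows - 1) - np.asarray(v_rows, dtype=float)

# Example: on a hypothetical 832-row sensor, a point near the bottom of
# the image (v = 800) maps to a smaller device-oriented value than a
# point near the top (v = 100).
print(v_to_device_oriented_range(np.array([800, 100]), 832))  # [ 31. 731.]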