ExRobotics B.V.
ExR-2 Robot
Operating Guide
Document No.:
20220412IP1
Version No.: 2
Owner:
Ian Peerless
Date:
2022-04-30
Page 17 of 39
This document is considered an uncontrolled copy when printed. Always ensure that you print and use a current version.
Copyright 2022 ExRobotics B.V.
▪ Other information that has been gathered at Waypoints is displayed in the right-hand part of the screen.
▪ Snapshots and gas readings are uploaded immediately. Other recordings are uploaded when the robot returns to its docking station. Video and sound recordings are limited to 2 minutes for each action.
To study a chart in more detail:
▪ Hover the cursor over a point on the chart to get a digital reading.
▪ Zoom in with the mouse wheel or by left-clicking and dragging.
▪ Pan by holding the Shift key while left-clicking and dragging.
6.5. Engineer Screen
This screen is used by ExRobotics and Energy Robotics and will not usually be used by customers.
7. Autonomous Missions
7.1. Overview
A robot mission is typically a circuit that starts and finishes at a docking station. During the circuit the robot performs actions on points of interest while it is located at a waypoint:
▪ A typical action is to record a video, snapshot, sound, or sensor reading.
▪ Actions are targeted at points of interest (POIs). A POI is a 3D location at which the appropriate camera or sensor is aimed. Examples of POIs are valves, flanges and pumps. To target a POI the robot will usually need to change its azimuth (rotate) and in some cases will need to raise a camera's field of view (elevate). There can be more than one action at a POI.
▪ Waypoints are 2D locations from which POIs are observed. There can be multiple POIs at a waypoint. In line-following navigation, waypoints are defined by an array of chili-tags.
▪ There can be multiple waypoints on a mission.
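The mission hierarchy described above (a mission contains waypoints, a waypoint exposes POIs, and a POI carries one or more actions) can be sketched in code. This is an illustrative model only, not the ExRobotics mission format or API; all class and field names are assumptions made for the example.

```python
# Illustrative sketch of the mission hierarchy (not the ExRobotics API).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    """A recording performed at a POI: video, snapshot, sound, or sensor reading."""
    kind: str            # e.g. "video", "snapshot", "sound", "sensor"
    duration_s: int = 0  # video and sound recordings are limited to 2 minutes

@dataclass
class PointOfInterest:
    """A 3D target (e.g. a valve, flange, or pump) observed from a waypoint."""
    name: str
    azimuth_deg: float    # robot rotation needed to face the POI
    elevation_deg: float  # camera tilt needed to frame the POI
    actions: List[Action] = field(default_factory=list)  # one or more actions per POI

@dataclass
class Waypoint:
    """A 2D location on the circuit from which one or more POIs are observed."""
    name: str
    pois: List[PointOfInterest] = field(default_factory=list)

@dataclass
class Mission:
    """A circuit of waypoints that starts and ends at the docking station."""
    waypoints: List[Waypoint] = field(default_factory=list)

# A one-waypoint mission with a single POI and two actions:
mission = Mission(waypoints=[
    Waypoint("WP-01", pois=[
        PointOfInterest("pump-3 flange", azimuth_deg=90.0, elevation_deg=15.0,
                        actions=[Action("snapshot"),
                                 Action("video", duration_s=120)])
    ])
])
```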
When in autonomous mode, the robot will by default stop if the connection to the driver's control station is broken for more than 5 seconds. This means that an active control station is required for the robot to operate in autonomous mode. The robot will also stop if it loses sight of the orange line. In either situation an audio/visual alarm is triggered on the robot control screen.
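The stop-on-disconnect behaviour above can be sketched as a simple watchdog timer that latches a stop when no message from the control station has arrived within the 5-second limit. This is a minimal illustrative sketch, not the robot's actual firmware; the class name, heartbeat interface, and latching behaviour are assumptions.

```python
# Hedged sketch of a connection watchdog (not the actual ExR-2 firmware).
import time

CONNECTION_TIMEOUT_S = 5.0  # stop threshold stated in the manual

class ConnectionWatchdog:
    def __init__(self, timeout_s: float = CONNECTION_TIMEOUT_S):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.stopped = False

    def heartbeat(self) -> None:
        """Call whenever a message from the control station arrives."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> bool:
        """Latch a stop if the link has been silent longer than the timeout."""
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.stopped = True  # here the robot would halt and raise the alarm
        return self.stopped
```

In this sketch, `check()` would be polled by the robot's control loop, while `heartbeat()` would be driven by incoming control-station traffic.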
If an autonomous mission is interrupted (by a person or by the software), human intervention is required to restart it. For line-following navigation the robot will need to be repositioned over the line; for other forms of autonomy it will need to be positioned close to the autonomous route. In both cases a human operator must then re-click the start-mission icon.
Robots perform their missions using orange lines on the ground (line-following navigation), chili-tags (tag-based inspections) and/or a virtual model created by the robot's LiDAR module ("teach & repeat" or "click and inspect" navigation). The deployment guide describes how to establish the required infrastructure. This manual describes how to use that infrastructure via the mission editor screen.