Obstacle detection is a key component of our robots’ software. Whether driving on the road (as in the case of Prospect 12) or maneuvering through a field (as in the case of the IGVC), an autonomous system needs a robust, reliable method for detecting obstacles (like cones, other cars, and trees) to aid path planning. This is relatively easy for a human because we have an extensive library of things we’ve already seen, along with excellent pattern matching and object identification. Replicating this capability on a robot involves constructing a 3D representation of the environment, using 2D image data (color, edges in the picture) together with 3D data (say, a cloud of cone-shaped points above the ground) to identify each obstacle’s size, shape, and movement, and tracking obstacles over time.
The visual sensor of choice for PAVE is a Videre color stereo camera. This device has onboard software that compares two slightly offset images captured at the same time and builds a 3D point cloud representing the environment. We have experimented with a couple of algorithms that use this point cloud to detect obstacles and their positions. One is an algorithm published by Manduchi et al. that looks at differences in height between adjacent points. We also tried a different method at the IGVC: this algorithm assumes that most of the visual field shows the ground and uses RANSAC robust plane fitting to locate the ground plane. That assumption is generally valid when the camera is angled downward, and the algorithm checks for degenerate cases, such as when the robot is facing a wall. It then classifies points as obstacles based on their height above the ground plane. We are currently exploring ways to combine 3D point-based obstacle detection with 2D image data, because identifying objects in the image and locating them with 3D point data could provide more robust, lower-noise obstacle detection.
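To make the RANSAC-based approach concrete, here is a minimal sketch of the two steps described above: fitting a ground plane to a point cloud with RANSAC, then flagging points that sit too far above that plane as obstacles. This is an illustrative reconstruction, not PAVE's actual code; the function names and the thresholds (`inlier_thresh`, `height_thresh`) are assumptions chosen for the example.

```python
import numpy as np

def fit_ground_plane_ransac(points, n_iters=200, inlier_thresh=0.05, seed=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with RANSAC.

    Assumes most points lie on the ground, so the plane with the most
    inliers is taken to be the ground plane. Returns (normal, d).
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = -1, None
    n = len(points)
    for _ in range(n_iters):
        # Sample 3 distinct points and compute the plane through them.
        sample = points[rng.choice(n, size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ sample[0]
        # Count points within inlier_thresh (metres) of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = int((dist < inlier_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane

def classify_obstacles(points, normal, d, height_thresh=0.15):
    """Mark points more than height_thresh above the fitted ground plane."""
    # Orient the normal so that "up" gives positive signed height.
    if normal[2] < 0:
        normal, d = -normal, -d
    height = points @ normal + d
    return height > height_thresh
```

A production version would also check the degenerate cases mentioned above (for example, rejecting a fitted plane whose normal is nearly horizontal, as when the camera is facing a wall) before trusting the height classification.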