Vision

Kratos uses a single sensor, a Point Grey Bumblebee2 stereo camera, for both obstacle detection and lane detection. Both processes are discussed below.

Obstacle Detection

Obstacle detection begins by capturing simultaneous images from each of the Bumblebee2 cameras.

input image

We use Point Grey’s stereo vision library to compute a depth map by matching similar pixels between the two simultaneously captured stereo images.

point cloud
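
To make the matching step concrete, here is a minimal sketch using OpenCV's block matcher in place of Point Grey's proprietary library; the focal length and matcher settings are illustrative assumptions, while the 12 cm baseline is the Bumblebee2's nominal spec.

    # Sketch of stereo depth estimation, standing in for Point Grey's library.
    # Assumes rectified grayscale images "left.png" and "right.png".
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching: for each left-image pixel, slide a small window along
    # the epipolar line in the right image and keep the best match.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

    # Depth follows from similar triangles: Z = f * B / d.
    FOCAL_PX = 800.0     # assumed focal length in pixels
    BASELINE_M = 0.12    # nominal Bumblebee2 baseline
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]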

Finally, we apply a simplified version of the obstacle detection algorithm proposed by Manduchi et al., which searches for pairs of points whose connecting segment is roughly vertical. We are able to detect obstacles to an effective range of 18 meters.

obstacle detection output
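
The point-pair test itself is simple enough to sketch. In the version below, two 3-D points are flagged as obstacle points when the segment joining them is of plausible height and close to vertical; the thresholds and the brute-force pairwise scan are illustrative simplifications, not the tuned values or search structure used on Kratos.

    import numpy as np

    H_MIN, H_MAX = 0.10, 2.0         # plausible obstacle heights (m); assumed values
    MAX_TILT = np.radians(20.0)      # allowed lean from vertical; assumed value

    def compatible(p, q):
        """True if points p, q (x, y, z with z up) form a near-vertical pair."""
        dz = abs(p[2] - q[2])
        if not (H_MIN < dz < H_MAX):
            return False
        horiz = np.hypot(p[0] - q[0], p[1] - q[1])
        return np.arctan2(horiz, dz) < MAX_TILT  # angle from the vertical axis

    def detect_obstacles(points):
        """Naive O(n^2) scan; a real-time version would restrict each point's
        search to a small neighborhood."""
        flagged = set()
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                if compatible(points[i], points[j]):
                    flagged.update((i, j))
        return flagged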

Lane Detection

Our lane detection is an extended version of the system used by Prospect Twelve in the Urban Challenge.

A series of filters is applied to the original image (top-left) to determine whether each pixel might fall on a lane line. Color filters respond to pixels that match white lane markings (top-middle), while a rectangular pulse-width filter responds to edges of the correct width for a lane marking, adjusted by vertical location in the image (top-right). Finally, an obstacle filter excludes points that fall on an obstacle (bottom-left). The obstacle filter significantly reduces false positives on reflective obstacles, and is made possible by using the same camera for both obstacle and lane detection, which gives us a common image space for processing.
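
As an illustration of what a pulse-width filter computes, the sketch below convolves each image row with a zero-mean rectangular kernel that rewards a bright stripe of the expected marking width and penalizes its flanks, with the width growing toward the bottom of the image to account for perspective. The kernel shape and width schedule are assumptions for illustration, not Kratos's calibration.

    import numpy as np

    def pulse_kernel(width):
        """+2 over the stripe, -1 on each flank; zero-mean, so flat regions score 0."""
        k = np.concatenate([-np.ones(width), 2.0 * np.ones(width), -np.ones(width)])
        return k / k.size

    def lane_response(gray):
        """Row-by-row matched filtering with a row-dependent expected width."""
        h, w = gray.shape
        out = np.zeros((h, w), dtype=np.float32)
        for row in range(h):
            # Markings look wider near the bottom of the image (closer to the camera).
            width = max(2, int(2 + 10 * row / h))
            out[row] = np.convolve(gray[row].astype(np.float32),
                                   pulse_kernel(width), mode="same")
        return out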

The results of all three filters are fused into a heat map indicating the likelihood that each pixel falls on a lane marking (bottom-middle). We then apply the RANSAC algorithm to find the parabola that passes near the most lane pixels, and extend that fit with a pixel-by-pixel greedy search; a minimal sketch of the RANSAC step follows the figure below.

Stages of Lane Detection
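
A minimal version of the RANSAC step might look like the following: repeatedly sample three candidate lane pixels, fit the parabola x = a*y^2 + b*y + c through them, and keep the model with the most candidates nearby. The iteration count and inlier tolerance are illustrative, and the greedy extension pass is omitted.

    import numpy as np

    def ransac_parabola(xs, ys, iters=500, tol=3.0, rng=None):
        """xs, ys: arrays of pixel coordinates of likely lane-marking pixels."""
        if rng is None:
            rng = np.random.default_rng()
        best_coeffs, best_inliers = None, 0
        for _ in range(iters):
            idx = rng.choice(len(xs), size=3, replace=False)
            # Fit an exact parabola through the three sampled points.
            coeffs = np.polyfit(ys[idx], xs[idx], deg=2)
            # Count candidate pixels within tol pixels of the curve.
            inliers = np.sum(np.abs(np.polyval(coeffs, ys) - xs) < tol)
            if inliers > best_inliers:
                best_coeffs, best_inliers = coeffs, inliers
        return best_coeffs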

Princeton Autonomous Vehicle Engineering