Lane Detection

The lane detection method currently employed functions in three phases: first, the system looks for a set of features on a pixel-by-pixel level that indicate lanes; next, it fuses these features into a single score for each pixel; and finally, it estimates lanes from this fused image. Below is an image that represents the various stages of lane detection:

[Image: lanestages]

[Image: original]
First, the raw color image is taken in from the stereo camera. This particular photo came from the Videre 15cm STOC color camera used in Argos, PAVE's entry into the 2009 Intelligent Ground Vehicle Competition.

[Image: whitefilter]
Next, a white filter is applied. Depending on the color of the lane the robot is attempting to detect, this filter can either look for regions that may comprise yellow lane markings or bring out an actual white lane, as is the case in the image to the right. Prospect 12's algorithm applies an additional yellow color filter.

[Image: widthfilter]
A modified rectangular pulse width filter is then used. It responds to edges of the correct width for a lane marking, adjusted by vertical location in the image.
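
As a rough illustration, this kind of width filter can be sketched with NumPy. The kernel shape, the weighting, and the `width_at_row` mapping below are assumptions for demonstration, not PAVE's actual implementation:

```python
import numpy as np

def pulse_width_filter_row(row, width):
    """Respond to bright pulses of approximately `width` pixels.

    Kernel: positive over the expected lane width, negative over
    equal-width flanks, so uniform regions score zero and a bright
    bar of the right width scores highest.
    """
    kernel = np.concatenate([
        -np.ones(width),       # left flank
        2.0 * np.ones(width),  # center pulse (weighted to balance the flanks)
        -np.ones(width),       # right flank
    ])
    return np.convolve(row.astype(float), kernel, mode="same")

def pulse_width_filter(img, width_at_row):
    """Apply the row filter with a per-row width to account for perspective."""
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        out[r] = pulse_width_filter_row(img[r], width_at_row(r))
    return out
```

A bright bar of exactly the expected width yields the strongest response, while uniform regions cancel to zero, which is why the expected width must shrink toward the horizon.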

[Image: obstacleheatmap]
Finally, an obstacle filter excludes points that fall on an obstacle. The obstacle filter significantly reduces false positives on reflective obstacles, and is made possible by our use of the same camera for both obstacle and lane detection, providing a common image space for processing.

[Image: widthfilter]
The results of all three filters are then fused into a heat map indicating the likelihood that each pixel falls on a lane marking. The algorithm essentially requires that a pixel be either white or yellow and have responded to the width filter.
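
A minimal sketch of this kind of hard-logic fusion, assuming boolean color and obstacle masks and a hypothetical width-response threshold (PAVE's actual fusion step differs in its details):

```python
import numpy as np

def fuse_features(white, yellow, width_resp, obstacle, width_thresh=0.5):
    """Fuse per-pixel filter outputs into a lane-likelihood heat map.

    A pixel scores only if it is white or yellow, responded strongly
    to the width filter, and does not fall on a detected obstacle.
    """
    color = white.astype(bool) | yellow.astype(bool)
    lane = color & (width_resp > width_thresh) & ~obstacle.astype(bool)
    return lane.astype(float)
```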

[Image: ransac]
A parabola is fit to each individual lane marking via the RANSAC (RANdom SAmple Consensus) algorithm, which searches for the second-order polynomial fit that passes closest to the greatest number of lane pixels and is further extended by a pixel-by-pixel greedy search.
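
A RANSAC parabola fit of this kind can be sketched with NumPy; the iteration count, inlier tolerance, and the x = f(y) parameterization below are illustrative assumptions rather than PAVE's exact settings:

```python
import numpy as np

def ransac_parabola(xs, ys, iters=200, tol=2.0, rng=None):
    """Fit x = a*y^2 + b*y + c to lane-pixel coordinates with RANSAC.

    Repeatedly fits a parabola through 3 random points and keeps the
    model with the most inliers (points within `tol` pixels of the
    curve horizontally), so isolated false detections are rejected.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pts = np.column_stack([np.asarray(ys, float), np.asarray(xs, float)])
    best_model, best_inliers = None, 0
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), size=3, replace=False)]
        coeffs = np.polyfit(sample[:, 0], sample[:, 1], 2)
        residuals = np.abs(np.polyval(coeffs, pts[:, 0]) - pts[:, 1])
        inliers = int((residuals < tol).sum())
        if inliers > best_inliers:
            best_model, best_inliers = coeffs, inliers
    return best_model, best_inliers
```

Parameterizing as x = f(y) (rather than y = f(x)) suits lanes, which are roughly vertical in the image and may have multiple x values per column.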

Using a specialized white filter that applies local brightness thresholds based on local saturation, the lane detection algorithm ensures that lane markings within shadows are not ignored. RANSAC then fits a smooth curve to the lanes, and because it tolerates gaps in the heat map, dashed lanes can be detected as well.

2007 Urban Challenge

PAVE's first use of lane detection was in the 2007 DARPA Urban Challenge. The algorithm functioned in three phases: first, the system looked for a set of features on a pixel-by-pixel level that indicate lanes; next, it fused these features into a single score for each pixel; and finally, it estimated lanes from this fused image.

The process began with a raw color image from the Point Grey Flea2 camera used during the Urban Challenge:

[Image: lanes1-orig]
First, a white filter was applied, looking for regions that may comprise white lane markings:

[Image: lanes2-white]
Next, a yellow filter was applied to look for regions within the range of typical yellow lane markings:

[Image: lanes3-yellow]

A modified edge filter was then applied, responding to edges of the correct width.

[Image: lanes4-width]
The algorithm then fused these features into a single heat-map representation, essentially requiring that a pixel be either white or yellow and have responded to the width filter:

[Image: lanes5-fused]

Finally, the software looked for lanes in the heat map. The RANSAC (Random Sample Consensus) algorithm was run through several hundred iterations, attempting to fit a parabola that passed through the greatest number of lane pixels while rejecting spuriously detected outliers. However, since a parabola does not always perfectly describe the geometry of a lane, Prospect 12 performed a greedy search over pixels, starting at the parabola's estimate of position.
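
The greedy extension step might look roughly like the following sketch, which walks a heat map row by row from a RANSAC seed point; the evidence threshold and per-row step limit are illustrative assumptions:

```python
import numpy as np

def greedy_extend(heat, start_row, start_col, thresh=0.5, max_step=2):
    """Extend a lane upward from a RANSAC seed by greedy pixel search.

    From the seed, step one row at a time toward the horizon and pick
    the strongest nearby heat-map pixel, stopping when the lane
    evidence fades below `thresh`.
    """
    path = [(start_row, start_col)]
    col = start_col
    for row in range(start_row - 1, -1, -1):
        lo = max(0, col - max_step)
        hi = min(heat.shape[1], col + max_step + 1)
        window = heat[row, lo:hi]
        best = int(window.argmax())
        if window[best] < thresh:
            break  # lane evidence has faded out
        col = lo + best
        path.append((row, col))
    return path
```

Unlike the parabola, this path can follow arbitrary lane geometry, at the cost of being sensitive to noise, which is why it starts from the robust RANSAC estimate.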

The detected lanes and stop lines were then sent along to Lane Fusion for processing.

2008 IGVC

For the first Intelligent Ground Vehicle Competition (IGVC), the system used by Prospect 12 in the Urban Challenge was extended and enhanced.

[Image: lanestages]
A series of filters is applied to the original image (top-left) to determine whether each pixel might fall on a lane line. Color filters respond to pixels that correspond to white lane markings (top-middle), while a rectangular pulse width filter responds to edges of the correct width for a lane marking, adjusted by vertical location in the image (top-right). Finally, an obstacle filter excludes points that fall on an obstacle (bottom-left). The obstacle filter significantly reduces false positives on reflective obstacles, and is made possible by our use of the same camera for both obstacle and lane detection, giving us a common image space for processing.

The results of all three filters are fused into a heat map indicating the likelihood that each pixel falls on a lane marking (bottom-middle). The system then applies the RANSAC algorithm to find the parabolic fit that passes near the most lane pixels, further extended by a pixel-by-pixel greedy search.

2009 IGVC

For the next year's IGVC competition, the filter and fusion steps were enhanced. New white and yellow filters were developed that are more resilient under a wide variety of lighting conditions. Additionally, a neural network replaced the earlier fusion technique, which relied on a binary AND operator. After training for 10,000 iterations, it produced over a 30% improvement in lane detection under low-light conditions.
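
A learned fusion of this sort can be sketched as a tiny hand-rolled network in NumPy. The architecture, learning rate, and training data below are illustrative stand-ins, not the network or data PAVE actually used:

```python
import numpy as np

def train_fusion_net(features, labels, hidden=4, iters=10_000, lr=0.5, seed=0):
    """Train a one-hidden-layer net to fuse per-pixel filter scores.

    Inputs are per-pixel feature scores (e.g. white, yellow, width
    response); the target is 1 for lane pixels. A learned fusion can
    weight weak evidence instead of demanding a hard binary AND.
    Returns a function mapping feature rows to lane probabilities.
    """
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0, 1, (features.shape[1], hidden))
    b1 = np.zeros(hidden)
    w2 = rng.normal(0, 1, (hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    y = labels.reshape(-1, 1).astype(float)
    for _ in range(iters):
        h = sig(features @ w1 + b1)        # hidden activations
        p = sig(h @ w2 + b2)               # predicted lane probability
        dp = p - y                         # cross-entropy output gradient
        dh = dp @ w2.T * h * (1 - h)       # backprop through hidden layer
        w2 -= lr * (h.T @ dp) / len(y)
        b2 -= lr * dp.mean(0)
        w1 -= lr * (features.T @ dh) / len(y)
        b1 -= lr * dh.mean(0)
    return lambda f: sig(sig(f @ w1 + b1) @ w2 + b2).ravel()
```

Trained on toy data where the label is the AND of two binary features, the net recovers the hard-logic behavior while remaining free to learn softer combinations.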

2010 IGVC

The 2010 IGVC lane detection algorithm continues to improve upon the model PAVE had established in previous years. The basic algorithm, developed for the DARPA Urban Challenge in 2007, applies filters for yellow content, white content, pulse width, and obstacles detected in the same frame, then fuses the results into a heat map which is searched for lines. Using a specialized white filter which utilizes local brightness thresholds based on local saturation, the algorithm ensures that lane markings within shadows are not ignored. The RANSAC algorithm is run on each detected lane individually and fits a parabola to it, generating a smooth, continuous path to define the boundary. Since the algorithm can tolerate gaps within the heat map, it allows Phobetor to recognize dashed lanes as well.

The majority of the lane detection development time for the 2010 IGVC was focused on developing robust methods of fusing and filtering lanes over time, which allows us to build a continuous piecewise model of the lane and to reject spuriously detected lanes. Because turns on the course are guaranteed to have a minimum radius, a road model was developed that dismisses candidate lanes that disagree with an extrapolation of the history of detected lane markings.
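
The history-based rejection idea can be sketched as a small filter class; `max_dev`, the history length, and the reference row are hypothetical parameters, not the actual road model:

```python
import numpy as np

class LaneHistoryFilter:
    """Reject lane detections that disagree with recent history.

    Predicts the lane's lateral position from an average of past
    parabola fits and drops candidates that deviate more than
    `max_dev` pixels at a reference image row.
    """
    def __init__(self, max_dev=30.0, history=5):
        self.fits = []
        self.max_dev = max_dev
        self.history = history

    def accept(self, coeffs, ref_y=240):
        """Return True and record the fit if it agrees with history."""
        coeffs = np.asarray(coeffs, dtype=float)
        if self.fits:
            predicted = np.mean([np.polyval(f, ref_y) for f in self.fits])
            if abs(np.polyval(coeffs, ref_y) - predicted) > self.max_dev:
                return False  # spurious: disagrees with extrapolated history
        self.fits.append(coeffs)
        self.fits = self.fits[-self.history:]
        return True
```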

Princeton Autonomous Vehicle Engineering