Our lane detection algorithm operates in three phases: first, we compute a set of per-pixel features that indicate lane markings; next, we fuse these features into a single score for each pixel; and finally, we estimate lanes on this fused image.
We begin with a raw color image from our Point Grey Flea2 camera:

First, a white filter is applied, looking for regions that may correspond to white lane markings:

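As a concrete illustration, white markings can be isolated by thresholding for bright, low-saturation pixels. The following is a minimal OpenCV sketch, not our actual filter; the HSV bounds are illustrative assumptions that would need tuning for real imagery.

```python
import cv2
import numpy as np

def white_filter(bgr):
    """Sketch of a white-marking filter: bright, low-saturation pixels."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # White paint shows up as high value (brightness) and low
    # saturation at any hue; these bounds are illustrative guesses.
    lower = np.array([0, 0, 200], dtype=np.uint8)
    upper = np.array([180, 60, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)  # 255 where white-like, else 0
```
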
Next, a yellow filter is applied, looking for regions within the color range of typical yellow lane markings:

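The yellow filter can be sketched the same way, keying on a hue band around yellow with enough saturation to exclude gray pavement; again, the bounds here are assumed values, not our tuned ones.

```python
import cv2
import numpy as np

def yellow_filter(bgr):
    """Sketch of a yellow-marking filter: a hue band around yellow."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Yellow paint: hue roughly 15-35 on OpenCV's 0-179 scale, with
    # enough saturation and brightness to reject unpainted asphalt.
    lower = np.array([15, 80, 120], dtype=np.uint8)
    upper = np.array([35, 255, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)
```
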
A modified edge filter is then applied, which responds only to features of the expected lane-marking width:

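One common way to build such a width-selective filter is a zero-mean matched kernel: positive over a stripe of the expected width, negative on the flanks. The sketch below assumes a fixed width in pixels; a real implementation would vary the width with image row, since perspective makes distant markings narrower.

```python
import cv2
import numpy as np

def width_filter(bgr, marking_width_px=8):
    """Sketch of a width-tuned filter: responds to bright stripes of
    roughly the expected marking width flanked by darker pavement."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    w = marking_width_px
    # Zero-mean horizontal kernel: positive over the stripe, negative
    # on both flanks, so uniform pavement produces no response.
    kernel = np.concatenate([-np.ones(w), 2.0 * np.ones(w), -np.ones(w)])
    kernel = (kernel / np.abs(kernel).sum()).reshape(1, -1)
    response = cv2.filter2D(gray, -1, kernel)
    return np.clip(response, 0.0, None)  # keep bright-stripe responses
```
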
We fuse these features into a single heat-map representation, essentially requiring that a pixel be either white or yellow and that it respond to the width filter:

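A minimal sketch of the fusion step, treating the two color masks as an OR and combining with the width response as an AND via multiplication; the exact combination rule is an assumption on our part.

```python
import numpy as np

def fuse_features(white_mask, yellow_mask, width_response):
    """Sketch of fusing per-pixel features into a single heat map."""
    # OR of the two binary color masks, scaled to [0, 1].
    color = np.maximum(white_mask, yellow_mask).astype(np.float32) / 255.0
    # Normalize the width-filter response to [0, 1].
    width = width_response / (width_response.max() + 1e-6)
    # AND: a pixel scores highly only if both conditions hold.
    return color * width
```
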
Finally, we look for lanes in the heat map. RANSAC is run for several hundred iterations, searching for a parabola that passes through many lane pixels. However, since a parabola does not always perfectly describe the geometry of a lane, we then perform a greedy search over pixels, starting from the parabola’s estimated position.

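The parabola stage can be sketched as standard RANSAC: sample three lane pixels, fit x = ay^2 + by + c exactly, and keep the model that passes near the most pixels. The iteration count and inlier tolerance below are illustrative assumptions, and the greedy row-by-row refinement that follows is omitted.

```python
import numpy as np

def ransac_parabola(ys, xs, iters=300, tol=3.0, seed=None):
    """Sketch of RANSAC for a lane parabola x = a*y^2 + b*y + c.
    ys, xs would come from, e.g., np.nonzero(heat_map > threshold)."""
    rng = np.random.default_rng(seed)
    best_coeffs, best_inliers = None, 0
    for _ in range(iters):
        # Fit an exact parabola through three randomly sampled pixels.
        idx = rng.choice(len(ys), size=3, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], deg=2)
        # Count pixels lying within tol pixels of the curve.
        inliers = int(np.sum(np.abs(np.polyval(coeffs, ys) - xs) < tol))
        if inliers > best_inliers:
            best_coeffs, best_inliers = coeffs, inliers
    return best_coeffs
```
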
The detected lanes and stop lines are sent along to Lane Fusion for processing.