# Advanced Lane Lines

- Files
- Challenges
- Shortcomings & Future Enhancements
- Acknowledgements & References
This video contains results and illustrations of challenges encountered during this project:

- Camera Calibration (see the calibration sketch after this list)
  - RGB2Gray using `cv2.cvtColor`
  - Finding and drawing corners using `cv2.findChessboardCorners` and `cv2.drawChessboardCorners`
  - Identifying the camera matrix and distortion coefficients using `cv2.calibrateCamera`
- Undistort
  - Cropped using `cv2.undistort`
  - Uncropped, additionally using `cv2.getOptimalNewCameraMatrix`
- Perspective Transform in `corners_unwarp`
  - RGB2Gray using `cv2.cvtColor`
- Filters using `filtering_pipeline` (see the thresholding sketch after this list)
  - RGB to HSL
  - H & L Color Threshold Filters
  - Gradient, Magnitude and Direction Filters
  - Careful Combination of the above
  - Gaussian Blur with kernel `K=31` to eliminate noise
- Lane Detection `pipeline`
  - `undistort`
  - `perspective_transform`
  - `crop_to_region_of_interest`
  - `filtering_pipeline`
  - `fit_lane_lines`: identifies `left_fitx` and `right_fitx` using `histogram[:midpoint]` and a sliding window to capture the points forming each lane line, with 2nd-order polynomial curve fitting (sketched below)
  - `overlay_and_unwarp`
    - Car's Trajectory `car_fitx`
    - Lane Center `mid_fitx`
    - `fill_lane_polys`
  - `calculate_curvature`: computes `left_curve_radius`, `right_curve_radius` and `off_centre_m`
  - `put_metrics_on_image`
  - finally returning an undistorted image
- Save as Video.mp4
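For reference, here is a minimal sketch of the calibration and undistortion steps above, assuming 9x6 chessboard images under `camera_cal/` (the glob pattern, test image path, and function names are illustrative, not the actual ones in `camera.py`):

```python
import glob
import cv2
import numpy as np

def calibrate_from_chessboards(pattern='camera_cal/calibration*.jpg', nx=9, ny=6):
    # 3D object points for an idealized flat chessboard: (0,0,0), (1,0,0), ...
    objp = np.zeros((nx * ny, 3), np.float32)
    objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

    objpoints, imgpoints, shape = [], [], None
    for fname in glob.glob(pattern):
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        shape = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)

    # Camera matrix and distortion coefficients from all detected boards
    _, mtx, dist, _, _ = cv2.calibrateCamera(objpoints, imgpoints, shape, None, None)
    return mtx, dist

mtx, dist = calibrate_from_chessboards()
img = cv2.imread('test_images/test1.jpg')  # hypothetical test image
undistorted = cv2.undistort(img, mtx, dist, None, mtx)  # cropped variant

# Uncropped variant: refine the matrix with alpha=1 to retain all source pixels
h, w = img.shape[:2]
newmtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
uncropped = cv2.undistort(img, mtx, dist, None, newmtx)
```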
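And a condensed sketch of the kind of thresholding `filtering_pipeline` performs, per the list above: an HLS conversion with H & L channel thresholds, combined with a Sobel-gradient threshold and a final Gaussian blur. The threshold values here are illustrative placeholders, not the tuned values in `vision_filters.py`:

```python
import cv2
import numpy as np

def filtering_pipeline_sketch(rgb):
    # RGB -> HLS; threshold the H and L channels to pick out lane colors
    hls = cv2.cvtColor(rgb, cv2.COLOR_RGB2HLS)
    h, l = hls[:, :, 0], hls[:, :, 1]
    color_mask = (h < 100) & (l > 120)          # illustrative thresholds

    # Gradient in x via a Sobel filter on grayscale, scaled to 0..255
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
    scaled = np.uint8(255 * sobelx / np.max(sobelx))
    grad_mask = (scaled > 20) & (scaled < 100)  # illustrative thresholds

    # Careful combination: keep pixels passing either filter family
    binary = np.zeros_like(gray)
    binary[color_mask | grad_mask] = 255

    # Gaussian blur with a large kernel (K=31) to suppress speckle noise
    return cv2.GaussianBlur(binary, (31, 31), 0)
```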
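To make the `fit_lane_lines` step concrete, here is a heavily condensed sliding-window sketch: a column histogram of the bottom half of the warped binary image seeds the left (`histogram[:midpoint]`) and right lane searches, windows track the hot pixels upward, and a 2nd-order polynomial is fit to each side. This is a simplified illustration, not the code in `lanes.py`:

```python
import numpy as np

def fit_lane_lines_sketch(binary_warped, nwindows=9, margin=100, minpix=50):
    h, w = binary_warped.shape
    # Histogram of the bottom half: column sums peak at the lane bases
    histogram = np.sum(binary_warped[h // 2:, :], axis=0)
    midpoint = w // 2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = h // nwindows
    lane_inds = {'left': [], 'right': []}
    current = {'left': leftx_base, 'right': rightx_base}

    for window in range(nwindows):
        y_lo = h - (window + 1) * window_height
        y_hi = h - window * window_height
        for side in ('left', 'right'):
            x_lo, x_hi = current[side] - margin, current[side] + margin
            good = ((nonzeroy >= y_lo) & (nonzeroy < y_hi) &
                    (nonzerox >= x_lo) & (nonzerox < x_hi)).nonzero()[0]
            lane_inds[side].append(good)
            if len(good) > minpix:  # re-center the next window on the mean x
                current[side] = int(np.mean(nonzerox[good]))

    fits = {}
    ploty = np.linspace(0, h - 1, h)
    for side in ('left', 'right'):
        inds = np.concatenate(lane_inds[side])
        # 2nd-order polynomial x = A*y^2 + B*y + C through the captured points
        fit = np.polyfit(nonzeroy[inds], nonzerox[inds], 2)
        fits[side] = fit[0] * ploty ** 2 + fit[1] * ploty + fit[2]
    return fits['left'], fits['right'], ploty
```

The `calculate_curvature` metrics then follow from the fitted coefficients: for x = A*y^2 + B*y + C, the radius of curvature is R = (1 + (2*A*y + B)^2)^(3/2) / |2*A|, evaluated at the bottom of the frame after scaling pixels to meters.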
Minor: note that the position from center is represented as a positive 0.2 m. Compare with the images below.
## Files

The project was designed to be modular and reusable. Each significant independent domain gets its own class and an individual file:

- `camera.py` - Camera Calibration
- `lanes.py` - Lane Detection
- `main.py` - Main test runner with `test_road_unwarp`, `test_calibrate_and_transform` and `test_filters`
- `utils.py` - Handy utils like `imcompare`, `warper`, `debug` shared across modules
- `settings.py` - Settings shared across modules
- `vision_filters.py` - Gradient, magnitude, direction, Sobel filters and related
- `README.md` - Description of the development process (this file)

All files contain detailed comments to explain how the code works. Refer to the Udacity repository CarND-Advanced-Lane-Lines for calibration images, test images and test videos.
The repository includes all required files and can be used to rerun advanced lane line detection on a given video. Set configuration values in `settings.py` and run the `main.py` Python script:
```sh
$ grep 'INPUT\|OUTPUT' -Hn settings.py
settings.py:9: INPUT_VIDEOFILE = 'project_video.mp4'
settings.py:11: OUTPUT_DIR = 'output_images/'

$ python main.py
[load_or_calibrate_camera:77] File found: camera_cal/camera_calib.p
[load_or_calibrate_camera:78] Loading: camera calib params
[test_road_unwarp:112] Processing Video: project_video.mp4
[MoviePy] >>>> Building video output_images/project_video_output.mp4
[MoviePy] Writing video output_images/project_video_output.mp4
100%|██████████████████████████████████████████████████████████████▉| 1260/1261 [11:28<00:00, 1.99it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: output_images/project_video_output.mp4

$ open output_images/project_video_output.mp4
```
## Challenges

There was no visual indication in the video to help develop an intuition for the car's lane-center offset. For example, in this image the car is quite far off center, towards the left side of the lane, but it doesn't show:
Figure: Frame with No Intuitive Indication of Off-Center Distance
To get an intuitive feel, I decided to identify, approximate and visualize the position of the car in the lane with respect to the lane center. To achieve this, I identified four different lane lines:
- Left Lane Marker Line
- Approximate Car's Trajectory Line
- Lane Center Line, and
- Right Lane Marker Line
Figure: Frame with Intuitive Off-Center Highlights
Figure: Frame with Shadows (Sidenote: Smaller Off-Center Position)
A simple and elegant way to approximate the trajectory of the car was to use the existing lane end markers and interpolate between them. Using the center of the image as the car's position and identifying its relative position between the lane ends, I came up with an approximate car trajectory, as follows:
```python
# Identify/highlight the car's offset position in the lane.
# ratio: the car's relative position between the lane ends (0 = on the left
# marker, 0.5 = dead center, 1 = on the right marker), taking the horizontal
# center of the image as the car's position.
ratio = (float(image.shape[1])/2 - left_fitx[-1]) / (right_fitx[-1] - left_fitx[-1])
mid_fitx = (left_fitx + right_fitx) * 0.5                 # lane-center line
car_fitx = left_fitx + (right_fitx - left_fitx) * ratio   # car-trajectory line
```
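For example, with lane-marker bases at `left_fitx[-1] = 300` and `right_fitx[-1] = 1000` in a 1280-pixel-wide frame, `ratio = (640 - 300) / 700 ≈ 0.49`, placing the car just left of the lane center.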
Then it was just a matter of coloring them with the generic `fill_lane_polys()` function (`lanes.py#L222-L236`), which draws polygons on the lane given a lane-fit line. The off-center distance was thus displayed on the lane itself in RED:

```python
self.fill_lane_poly(image, car_fitx, ploty, mid_fitx, mid_color)
```
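A generic polygon fill of this kind can be sketched with `cv2.fillPoly`: walk down one fit line, back up the other, and paint the closed polygon between them. This illustrates the idea only; it is not the actual `fill_lane_polys` implementation:

```python
import cv2
import numpy as np

def fill_lane_poly_sketch(image, fitx_a, ploty, fitx_b, color):
    # Stack (x, y) points down line A, then back up line B, forming a
    # closed polygon between the two fit lines
    pts_a = np.transpose(np.vstack([fitx_a, ploty]))
    pts_b = np.flipud(np.transpose(np.vstack([fitx_b, ploty])))
    polygon = np.vstack([pts_a, pts_b]).astype(np.int32)
    cv2.fillPoly(image, [polygon], color)
    return image
```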
Figure: Example of a frame where the current implementation falls apart
## Shortcomings & Future Enhancements

- Use ∞ for straight lanes beyond, say, ~10 km radius
- Add Intermediate Processing Frames to Video
- Smoothen Radius Metric using Moving Average (low-pass filter), as sketched after this list
- Use a `Line` class to keep track of left and right lane lines
- Consider Weighted Averaging (on line-length, for example)
- Reuse lane markers to eliminate full-frame search for subsequent frames
- Use sliders to tune thresholds (original idea courtesy Sagar Bhokre)
- Work on harder challenge videos
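For instance, the radius smoothing could be as simple as a fixed-window moving average over recent frames. This is a sketch of the proposed enhancement, not existing code; the class name and window size are made up:

```python
from collections import deque

class MovingAverage:
    """Low-pass filter: average of the last `window` values."""
    def __init__(self, window=15):
        self.values = deque(maxlen=window)

    def update(self, value):
        self.values.append(value)
        return sum(self.values) / len(self.values)

# e.g. smooth_radius = MovingAverage(); shown = smooth_radius.update(left_curve_radius)
```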
## Acknowledgements & References

- Sagar Bhokre - for project skeleton & constant support
- Caleb Kirksey - for motivation and the idea of using camera bias
- CarND-Advanced-Lane-Lines - Udacity repository containing calibration images, test images and test videos