Self-driving cars promise to transform our roadways: fewer traffic accidents and greater mobility for people who can’t operate a vehicle. They could fundamentally change the way we think about getting around. To build a self-driving car, we need to be able to sense the surroundings and detect unexpected encounters.
The proposed solution uses deep learning, which means we teach the self-driving car somewhat like you’d teach a human: provide a host of examples of situations, objects, and scenarios, then let the system extrapolate how the rules it learns might apply to novel or unexpected experiences. That means logging a huge number of driving hours to provide the system with basic information.
This project shows the first steps toward a real self-driving car. The result demonstrates how to drive a car in a simulator using a 100% deep learning model, without any human interaction. My goal is to demo the power of the IBM Watson Data Platform with one of the most challenging use cases in the industry.
- Lane Line detection: lines on the road give us a constant reference for where to steer the vehicle. In this notebook we detect lane lines in images and videos using Python and OpenCV (Open Source Computer Vision). We apply distortion correction to raw images, along with color transforms and gradients.
- Advanced Lane Line detection: this second notebook enhances lane line detection by applying a perspective transform ("bird's-eye view"), detecting lane pixels, and fitting a curve to find the lane boundary. It also computes the curvature of the lane and the vehicle position with respect to center.
- Traffic Sign Classifier: classify traffic signs in natural images, then test the model with new images of signs from the web.
- Vehicle Detection: detect vehicles in a dash-camera video. This is an object detection problem, so we pre-process the images and reuse a popular pre-trained object recognition model called YOLO (9 convolutional layers and 3 fully connected layers), a technique known as transfer learning.
- Behavioral cloning: train, validate, and test a Keras model that clones driving behavior. The model outputs a steering angle for an autonomous vehicle.
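To make the lane-curvature step concrete, here is a minimal NumPy sketch of the standard approach used in advanced lane finding: fit a second-order polynomial `x = A*y**2 + B*y + C` to detected lane pixels, then evaluate the radius of curvature at the bottom of the frame. The function name and the synthetic pixel data are illustrative, not from the project's notebooks.

```python
import numpy as np

def lane_curvature(coeffs, y_eval):
    """Radius of curvature of x = A*y^2 + B*y + C at y = y_eval (pixels)."""
    A, B, _ = coeffs
    return (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)

# Synthetic "lane pixel" coordinates for a 720-row frame.
ys = np.linspace(0, 719, 720)            # image rows, y pointing down
xs = 0.0002 * ys**2 + 0.1 * ys + 300     # a gently curving lane shape

# Fit the second-order polynomial, then measure curvature at the car.
coeffs = np.polyfit(ys, xs, 2)
radius = lane_curvature(coeffs, y_eval=719)   # about 3084 pixels
```

In practice the pixel-space coefficients are rescaled to meters per pixel before this formula is applied, so the reported radius is in real-world units.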
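For behavioral cloning, a common trick when logging driving hours is to augment the recorded data so the model sees balanced left and right turns. The sketch below shows one such augmentation, assuming frames are NumPy arrays of shape (height, width, channels): mirror the camera frame horizontally and negate the steering angle. The function name and toy frame are illustrative.

```python
import numpy as np

def flip_sample(image, steering_angle):
    """Return a horizontally mirrored frame and the negated steering angle."""
    return image[:, ::-1, :], -steering_angle

# Toy 2x3 RGB "frame" and a left-turn steering angle.
frame = np.arange(18, dtype=np.uint8).reshape(2, 3, 3)
flipped, angle = flip_sample(frame, steering_angle=-0.25)
# The leftmost pixel of the flipped frame is the rightmost of the original,
# and the left turn (-0.25) becomes an equivalent right turn (0.25).
```

Doubling the dataset this way costs nothing at training time and helps prevent the model from developing a steering bias toward the direction that dominates the recorded laps.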