Reproduces a human-performed driving pattern using the supervised-learning-based behavioral cloning approach.

Table of Contents

  • Abstract
  • Data Collection and Balancing the Dataset
  • Data Augmentation and Preprocessing
  • Neural Network Architecture and Training Process
  • Putting Them All Together: Autonomous Driving
  • File Descriptions and Usage
  • Demonstration Videos
  • Reference Paper

Abstract

In this project, a self-driving car simulation is carried out by reproducing a human-performed driving pattern using the supervised-learning-based behavioral cloning approach. The required data are collected by driving a couple of laps around Track #1 of the Udacity self-driving car simulator, and various augmentation techniques are used to make the trained model generalize well enough that the car drives autonomously both on Track #1 and on an unseen track, Track #2. A convolutional neural network (CNN) based on the NVIDIA architecture learns the steering angles corresponding to the positions of the car on the track. A real-time web app makes the model and the simulator communicate continuously, so that the information essential for autonomous driving, such as the current position of the car, the predicted steering angle, and the throttle to be applied, is passed between them.

Data Collection and Balancing the Dataset

I drove around Track #1 for approximately 15 minutes to collect images from the three cameras mounted on the left, center, and right of the car. Figure 1 shows a set of sample images.

Figure-1
Figure 1: Sample images taken from the left, center and right side cameras

Since Track #1 consists largely of straight segments, the dataset is strongly biased toward a zero-degree steering angle, as the following histogram shows.

Figure-2
Figure 2: Distribution of the steering angles of the original dataset

The neural network would not be able to learn the sharp turns if this bias were not corrected. I addressed it by capping the number of samples at 800 per steering angle, which yields the distribution shown in Figure 3.

Figure-3
Figure 3: Distribution of the steering angles of the balanced dataset

After balancing the dataset, the images are split into training and validation sets with an 80%/20% ratio; a minimal sketch of the balancing and splitting steps is shown below.
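
utils.py and model.py implement these steps in the repository; the following is only a rough sketch of the balancing and splitting logic, assuming the simulator's driving_log.csv column layout, a hypothetical choice of 25 histogram bins, and using only the center-camera paths.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Column layout of the simulator's driving_log.csv (assumed here)
columns = ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed']
data = pd.read_csv('driving_log.csv', names=columns)

num_bins = 25            # hypothetical number of histogram bins
samples_per_bin = 800    # cap used in this project
_, bin_edges = np.histogram(data['steering'], num_bins)

# Drop the excess samples from every over-represented steering-angle bin
rows_to_drop = []
for i in range(num_bins):
    in_bin = data[(data['steering'] >= bin_edges[i]) &
                  (data['steering'] <= bin_edges[i + 1])].index.tolist()
    np.random.shuffle(in_bin)
    rows_to_drop.extend(in_bin[samples_per_bin:])
data = data.drop(rows_to_drop)

# 80%/20% split between training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(
    data['center'].values, data['steering'].values,
    test_size=0.2, random_state=0)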

Data Augmentation and Preprocessing

In order to prevent overfitting and to make the model operate on tracks with diverse characteristics (shape, surrounding environment, lighting conditions, etc.), the images in the training set are augmented. The following figure illustrates images augmented with the different techniques.

augmented-images
Figure 4: (From top left to bottom right) zoomed-in, translated, darkened and horizontally flipped images
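
As a rough illustration of these four augmentations, the sketch below uses imgaug (one of the listed dependencies) and OpenCV; the parameter ranges are assumptions rather than the exact values used in utils.py. Note that flipping an image horizontally must also negate its steering angle.

import cv2
import numpy as np
from imgaug import augmenters as iaa

def zoom(image):
    # Zoom in by up to 30% (assumed range)
    return iaa.Affine(scale=(1.0, 1.3)).augment_image(image)

def pan(image):
    # Translate by up to 10% of the width/height in either direction
    return iaa.Affine(translate_percent={'x': (-0.1, 0.1), 'y': (-0.1, 0.1)}).augment_image(image)

def darken(image):
    # Multiply pixel intensities by a factor below 1.0 to darken the image
    return iaa.Multiply((0.3, 1.0)).augment_image(image)

def flip(image, steering_angle):
    # A horizontal flip mirrors the road, so the steering angle changes sign
    return cv2.flip(image, 1), -steering_angle

def random_augment(image, steering_angle):
    # Apply each augmentation independently with 50% probability
    if np.random.rand() < 0.5:
        image = zoom(image)
    if np.random.rand() < 0.5:
        image = pan(image)
    if np.random.rand() < 0.5:
        image = darken(image)
    if np.random.rand() < 0.5:
        image, steering_angle = flip(image, steering_angle)
    return image, steering_angle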

Before training the model, one last step, preprocessing, is required to improve efficiency and speed up the training process.
First, the image is cropped so that only the region of interest remains; then, as proposed in the NVIDIA paper, the color space is converted to YUV and the image is resized to 200×66 pixels (width × height); after that, a Gaussian blur is applied to reduce noise; finally, the pixel values are normalized to the range 0 to 1.
Figure 5 shows an image after these preprocessing steps have been applied.

preprocessed-image
Figure 5: Preprocessed image
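
A minimal version of this preprocessing pipeline could look as follows; the crop rows are an assumption of this sketch, while the YUV conversion, the 200x66 resolution, the Gaussian blur, and the normalization follow the description above.

import cv2

def preprocess(image):
    # Crop away the sky and the car's hood, keeping the road region
    # (the crop rows below are assumed, not taken from the repository)
    image = image[60:135, :, :]
    # Convert to the YUV color space, as proposed in the NVIDIA paper
    image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    # Resize to the 200x66 (width x height) input resolution of the network
    image = cv2.resize(image, (200, 66))
    # Apply a Gaussian blur to reduce noise
    image = cv2.GaussianBlur(image, (3, 3), 0)
    # Normalize pixel values to the range [0, 1]
    return image / 255.0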

Neural Network Architecture and Training Process

I used a slightly modified version of the convolutional neural network architecture proposed in the NVIDIA paper. Figure 6 shows the layers, output shapes, and number of parameters of the model. Note that I preferred the ELU activation function because, unlike ReLU, it can produce negative outputs. The model is compiled with the Adam optimizer at a learning rate of 1e-4, and the mean squared error loss function is used since this is a regression task.

Figure-6
Figure 6: CNN model summary
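
For reference, a Keras model along these lines might be built as follows; the layer sizes follow the NVIDIA architecture, while the dropout layer is an assumption of this sketch rather than a detail confirmed from model.py.

from keras.models import Sequential
from keras.layers import Conv2D, Dense, Dropout, Flatten
from keras.optimizers import Adam

def build_model():
    model = Sequential()
    # Five convolutional layers as in the NVIDIA architecture, with ELU activations
    model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='elu', input_shape=(66, 200, 3)))
    model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='elu'))
    model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='elu'))
    model.add(Conv2D(64, (3, 3), activation='elu'))
    model.add(Conv2D(64, (3, 3), activation='elu'))
    model.add(Dropout(0.5))        # dropout placement is assumed
    model.add(Flatten())
    # Fully connected layers ending in a single steering-angle output
    model.add(Dense(100, activation='elu'))
    model.add(Dense(50, activation='elu'))
    model.add(Dense(10, activation='elu'))
    model.add(Dense(1))
    # Mean squared error loss for the regression task, Adam with a 1e-4 learning rate
    model.compile(loss='mse', optimizer=Adam(lr=1e-4))
    return model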

During training, the images fed to the CNN are produced on the fly by a batch generator, so they are never all held in memory at once. Note that 384,000 images are generated for training and 256,000 for validation; this would be far too many images to fit in memory without this method. Another advantage of a batch generator is that the number of generated images can greatly exceed the size of the training and validation sets. The best model is chosen according to the validation loss; the validation loss after each epoch can be seen in Figure 7, and a sketch of such a generator is shown after it.

Figure-7
Figure 7: Training process
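
Such a generator can be sketched as follows, reusing the random_augment and preprocess helpers from the earlier sketches; the augmentation policy and batch composition details are assumptions.

import numpy as np
from PIL import Image

def batch_generator(image_paths, steering_angles, batch_size, is_training):
    # Yields batches indefinitely; Keras pulls as many batches per epoch
    # as steps_per_epoch specifies
    while True:
        batch_images = []
        batch_steerings = []
        for _ in range(batch_size):
            index = np.random.randint(0, len(image_paths))
            image = np.asarray(Image.open(image_paths[index]))
            steering = steering_angles[index]
            # Augment only the training data
            if is_training:
                image, steering = random_augment(image, steering)
            batch_images.append(preprocess(image))
            batch_steerings.append(steering)
        yield np.asarray(batch_images), np.asarray(batch_steerings)

With Keras' fit_generator, the total number of generated images is steps_per_epoch × batch_size × epochs, which is how totals such as 384,000 training and 256,000 validation images arise without the images ever being stored in memory at once (for instance, 300 training steps and 200 validation steps per epoch with a batch size of 128 over 10 epochs would give exactly those totals, though the actual hyperparameters may differ).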

Putting Them All Together: Autonomous Driving

In order to make the car drive autonomously, bidirectional communication between the model and the simulator needs to be established. To achieve this, a real-time web application is built using Socket.IO and Flask. Through this web app, the simulator continuously sends the current frame recorded by the center camera, which captures the car's current position on the track; this frame is fed to the model, and the model predicts a steering angle. A throttle value is calculated from the current and desired speed using a PI controller. The predicted steering angle and the calculated throttle value are sent back to the simulator, and the car moves along the track accordingly.
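
drive.py implements this loop; the sketch below only illustrates the idea. The 'telemetry' and 'steer' event names, the payload keys, and port 4567 follow the Udacity simulator's protocol, and the throttle calculation here is a simple proportional stand-in for the PI controller described above, with an assumed speed set-point.

import base64
from io import BytesIO

import eventlet
import numpy as np
import socketio
from flask import Flask
from keras.models import load_model
from PIL import Image

sio = socketio.Server()
app = Flask(__name__)
model = load_model('model.h5')
MAX_SPEED = 25  # assumed speed set-point

@sio.on('telemetry')
def telemetry(sid, data):
    # The simulator sends the current center-camera frame and the current speed
    speed = float(data['speed'])
    frame = np.asarray(Image.open(BytesIO(base64.b64decode(data['image']))))
    frame = preprocess(frame)  # same preprocessing as during training
    steering_angle = float(model.predict(np.array([frame]))[0][0])
    # Proportional throttle: back off as the car approaches the speed set-point
    throttle = 1.0 - speed / MAX_SPEED
    send_control(steering_angle, throttle)

def send_control(steering_angle, throttle):
    sio.emit('steer', data={'steering_angle': str(steering_angle),
                            'throttle': str(throttle)})

if __name__ == '__main__':
    # Wrap the Flask app with the Socket.IO WSGI middleware and listen on port 4567
    app = socketio.WSGIApp(sio, app)
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)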

File Descriptions and Usage

  • drive.py contains the class, function and event handlers to drive the car autonomously
  • utils.py provides the data manipulation functions and the batch generator
  • model.py is used to create and train the model
  • model.h5 contains the weights and model configuration

Follow the steps below to use these files:

conda create --name <ENVIRONMENT_NAME>
For Windows: activate <ENVIRONMENT_NAME> / For Linux and macOS: source activate <ENVIRONMENT_NAME>   
conda install -c anaconda flask
conda install -c conda-forge python-socketio
conda install -c conda-forge eventlet
conda install -c conda-forge python-engineio=3.0.0
conda install -c conda-forge tensorflow
conda install -c conda-forge keras 
conda install -c anaconda pillow
conda install -c anaconda numpy
conda install -c conda-forge opencv
conda install -c anaconda pandas
conda install -c anaconda scikit-learn
conda install -c conda-forge imgaug

If you want to train your own model:

  • You can either use my dataset or collect your own (launch the Udacity self-driving car simulator, click Play!, choose a track, select TRAINING MODE and click the RECORD button)
  • Make the changes you want on model.py and utils.py
  • Run model.py
python model.py

To use my model, or your own model.h5 produced by running model.py:

  • Run drive.py
python drive.py
  • Launch the Udacity self-driving car simulator, click Play!, choose a track and select AUTONOMOUS MODE

Demonstration Videos

Reference Paper

Bojarski et al., "End to End Learning for Self-Driving Cars", NVIDIA, 2016: https://arxiv.org/pdf/1604.07316v1.pdf
