
Intel-OneAPI

Team Name - Momentum

Team Members - GVS Mothish, Akash Agrawal, Sudarshan Bandyopadhyay, Swapnil Mane

Challenge Name - Object Detection For Autonomous Vehicles


📜 Prototype Brief:

Description: This project was built by Team Momentum for the Intel oneAPI Hackathon 2023 under the "Object Detection and Segmentation for Autonomous Vehicles" theme. We set out to develop a robust object detection and segmentation model capable of handling complex scenarios such as bad weather and low-light conditions. For this task we implemented HybridNets, an end-to-end multi-task perception network, focusing on traffic object detection, drivable-area segmentation, and lane detection. HybridNets can run in real time on embedded systems and achieves state-of-the-art object detection and lane detection results on the BDD100K dataset. The model is trained on image input that is temporal in nature, using the HybridNets architecture together with Intel's oneDNN-backed PyTorch optimizations for faster inference. Real-time inference is then performed with these Intel oneDNN libraries, producing three main outputs: object bounding boxes, object classes, and lane detections.

(Screenshots: sample inference results showing detection and segmentation.)
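The snippet below is a minimal, illustrative sketch of how the Intel Extension for PyTorch (IPEX, backed by oneDNN) is typically applied to a model for CPU inference. It is not the project's exact code; the torch.hub entry point and input resolution shown are assumptions based on the upstream HybridNets repository.

import torch
import intel_extension_for_pytorch as ipex  # oneDNN-backed CPU optimizations

# Hypothetical load via the repo's hubconf.py PyTorch Hub entrypoint
model = torch.hub.load('datvuthanh/hybridnets', 'hybridnets', pretrained=True)
model.eval()

# Apply IPEX operator/graph optimizations for CPU inference
model = ipex.optimize(model)

# Dummy RGB input at an illustrative resolution (N, C, H, W)
img = torch.randn(1, 3, 384, 640)
with torch.no_grad():
    outputs = model(img)  # detection (boxes/classes) and segmentation outputs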

🚩 Medium Article

Article Link

🚩 PPT and Demonstration

PPT and Demonstration Link

๐Ÿž Tech Stack:

Technologies used to build the prototype: Intel® AI Analytics Toolkit and its libraries.

๐Ÿž Project Structure

HybridNets
│   backbone.py                     # Model configuration
│   export.py                       # UPDATED 10/2022: ONNX weights with accompanying .npy anchors
│   hubconf.py                      # PyTorch Hub entrypoint
│   hybridnets_test_images_old.py   # Image inference
│   hybridnets_test_images.py       # Modified image inference that records inference time for a number of images
│   hybridnets_test_videos_old.py   # Video inference
│   hybridnets_test_videos.py       # Modified video inference that records inference time for videos of different lengths
│   speedup_test.py                 # Speedup test comparing runs with and without Intel optimization
│   train.py                        # Training script
│   train_ddp.py                    # DistributedDataParallel training (multiple GPUs)
│   val.py                          # Validation script
│   val_ddp.py                      # DistributedDataParallel validation (multiple GPUs)
│   frameCount_vs_time_plot.py      # Plot frame count vs. time taken to infer
│   imageCount_vs_time_plot.py      # Plot image count vs. time taken to infer
│
├───demo                            # Images and videos for testing inference
│
├───demo_results                    # Post-processing results on test images and videos to validate the inference
│
├───data                            # Recorded inference times under different conditions
│
├───plots                           # Comparison plots between inferences under different conditions
│
├───encoders                        # https://github.com/qubvel/segmentation_models.pytorch/tree/master/segmentation_models_pytorch/encoders
│       ...
│
├───hybridnets
│       autoanchor.py               # Generate new anchors by k-means
│       dataset.py                  # BDD100K dataset
│       loss.py                     # Focal, Tversky (Dice) losses
│       model.py                    # Model blocks
│
├───projects
│       bdd100k.yml                 # Project configuration
│
└───utils
        constants.py
        plot.py                     # Draw bounding boxes
        smp_metrics.py              # https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/metrics/functional.py
        utils.py                    # Various helper functions (preprocess, postprocess, eval, ...)

๐Ÿž Installation

The project was developed with Python >= 3.7 and PyTorch >= 1.10.

# Create an Anaconda virtual environment inside the project folder
conda create -p venv python==3.7.2 -y

# Activate the created virtual environment
conda activate venv/

# Install dependencies
pip install -r requirements.txt

# Install PyTorch for CPU
pip install torch==1.13.1+cpu torchvision==0.14.1+cpu -f https://download.pytorch.org/whl/torch_stable.html

# Install the Intel Extension for PyTorch (IPEX) optimization dependency
pip install intel_extension_for_pytorch==1.13.100 -f https://developer.intel.com/ipex-whl-stable-cpu
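As an optional sanity check (not part of the original instructions), the following one-liner should print the installed PyTorch and IPEX versions if the environment was set up correctly:

# Verify that the CPU build of PyTorch and the Intel Extension for PyTorch import correctly
python3 -c "import torch, intel_extension_for_pytorch as ipex; print(torch.__version__, ipex.__version__)"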

🚩 Project Demo - Step-by-Step Code Execution Instructions:

# Download end-to-end weights
curl --create-dirs -L -o weights/hybridnets.pth https://github.com/datvuthanh/HybridNets/releases/download/v1.0/hybridnets.pth

# Image inference with Intel Optimisation
python3 hybridnets_test_images.py --source demo/images --output demo_result/images --use_optimization True --enable_postprocessing True

# Image inference without Intel Optimisation
python3 hybridnets_test_images.py --source demo/images --output demo_result/images --use_optimization False --enable_postprocessing True

# Video inference with Intel Optimisation
python3 hybridnets_test_videos.py --source demo/video --output demo_result/video --use_optimization True --enable_postprocessing True

# Video inference without Intel Optimisation
python3 hybridnets_test_videos.py --source demo/video --output demo_result/video --use_optimization False --enable_postprocessing True

# Results are saved in a new folder called demo_result

🚩 Usage

Dataset Structure:

HybridNets
└───datasets
    ├───imgs
    │   ├───train
    │   └───val
    ├───det_annot
    │   ├───train
    │   └───val
    ├───da_seg_annot
    │   ├───train
    │   └───val
    └───ll_seg_annot
        ├───train
        └───val

Dataset used: BDD100K.
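The small helper below is a sketch (not part of the repository) for checking that the dataset folders match the layout above before training; the folder names come from the structure shown, and the "datasets" root path is an assumption.

import os

# Expected dataset layout, taken from the structure above; the "datasets" root is assumed
REQUIRED_DIRS = ["imgs", "det_annot", "da_seg_annot", "ll_seg_annot"]
SPLITS = ["train", "val"]

def check_dataset(root="datasets"):
    # Print the status of every expected split directory
    for name in REQUIRED_DIRS:
        for split in SPLITS:
            path = os.path.join(root, name, split)
            print(f"{path}: {'ok' if os.path.isdir(path) else 'MISSING'}")

check_dataset()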

🚩 Benchmarking Results

(Screenshots: benchmarking results.) *Inference time is measured in seconds.
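For context, a comparison like the one in speedup_test.py can be approximated with the sketch below. It is an assumption about the methodology (warm-up runs followed by averaged timed runs), not the project's exact code, and the torch.hub load and input size are illustrative.

import time
import torch
import intel_extension_for_pytorch as ipex

def mean_inference_time(model, x, n_runs=50, warmup=5):
    # Warm-up iterations exclude one-time initialization costs from the measurement
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
    return (time.perf_counter() - start) / n_runs

x = torch.randn(1, 3, 384, 640)  # illustrative input size
model = torch.hub.load('datvuthanh/hybridnets', 'hybridnets', pretrained=True).eval()

baseline = mean_inference_time(model, x)                   # plain PyTorch CPU inference
optimized = mean_inference_time(ipex.optimize(model), x)   # IPEX/oneDNN-optimized inference
print(f"baseline {baseline:.3f}s, optimized {optimized:.3f}s, speedup {baseline/optimized:.2f}x")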

📜 References

HybridNets: End-to-End Perception Network Paper Link

📜 What We Learned:

  1. Expanded domain knowledge in deep-learning-based computer vision techniques such as object detection and segmentation.
  2. Used the end-to-end perception network HybridNets for image and video inference with simultaneous object detection and segmentation.
  3. Importance of robustness in autonomous driving: developed an algorithm robust to all weather conditions and night-time conditions, as evident from the results.
  4. Incorporated Intel oneAPI libraries, specifically oneDNN.
  5. Learned about optimization techniques for faster inference, specifically the libraries developed by Intel.
