
Estimated the motion of a robot using classical computer vision techniques such as Perspective-n-Point (PnP), Local Outlier Factor (LOF), LMedS, and stereo vision on the KITTI dataset.


Badri-R-S/VisualOdom


VISUAL ODOMETRY

Dependencies necessary to run the code:

pip3 install scikit-learn
pip3 install pandas
pip3 install numpy
pip3 install opencv-python

Instructions to run the code:

  • git clone the repository.
  • Run "python3 VODOM.py" to run the code.
  • Default methods: FAST feature detection and PnP with RANSAC (Method 1).
  • Method 3 will not converge and has high error due to its instability; Method 1 is the most stable method.

To change the algorithm:

  • Provide "ORB" as the first parameter on line 99 of VODOM.py.
  • Give "2" as the second parameter to run camera pose estimation using the Essential matrix.
  • Give "3" as the second parameter to run camera pose estimation using the Essential matrix + Local Outlier Factor (LOF). Use a dist_threshold of 0.5 or higher with FAST.

Depth Estimation

Using solvePnP as the central trajectory solver, we first computed depth and matched features. For depth and disparity we used StereoSGBM (semi-global block matching), which gave better results than StereoBM. The image below shows the disparity map produced with StereoSGBM.

DisparitySBGM

From the disparity map we then recovered per-pixel depth, which gave us usable depth information for the scene.

Depth
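For a rectified stereo pair, depth follows directly from disparity as Z = fx · B / d. The focal length and baseline below are approximate KITTI grayscale-camera values used as placeholders, not values taken from the repo's code:

```python
import numpy as np

fx = 718.856        # focal length in pixels (approximate KITTI value)
baseline = 0.537    # stereo baseline in metres (approximate KITTI value)

def depth_from_disparity(disparity):
    """Convert a disparity map (pixels) to a depth map (metres)."""
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> infinite depth
    valid = disparity > 0
    depth[valid] = fx * baseline / disparity[valid]
    return depth
```

With these numbers, a 10-pixel disparity corresponds to roughly 38.6 m.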

Feature Extraction

We took two approaches. The first used ORB, a quick method for extracting keypoints and computing matches. We applied Lowe's ratio test with a threshold of 0.8 to keep the best matches, which yielded a decent number of matches for the given dataset sequence.

FilteredORB

For the second feature extraction method, we combined FAST + SIFT, which produced more robust results than ORB at a slightly higher computation time.

ORBAST

Motion Estimation

We deployed three approaches:

  • Approach 1 : Perspective-n-Point method. Use depth information to back-project 2D image points to 3D homogeneous points, then estimate the translation vector and rotation matrix using solvePnP. This is the most efficient method.

FAST+PNP

  • Approach 2 : Calculate the camera pose from the Essential matrix and recover the pose using LMedS. Unstable.

FAST-Method2

  • Approach 3 : Apply Local Outlier Factor with 20 neighbors to filter matches, then compute the essential matrix and recover the rotation and translation matrices. Highly unstable.

output-2
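The LOF filtering step of Approach 3 can be sketched with scikit-learn. The match coordinates below are synthetic (a tight cluster plus a few gross mismatches); only the 20-neighbor setting comes from the text above:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(3)
good = rng.normal(loc=[300.0, 150.0], scale=5.0, size=(60, 2))  # clustered matches
bad = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 360.0]])       # gross mismatches
points = np.vstack([good, bad])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(points)   # +1 = inlier, -1 = outlier
inliers = points[labels == 1]      # keep only inlier matches for the essential matrix
```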

CONTRIBUTORS

Badrinarayanan RS - [@Badri-R-S]

Smit Dumore - [@smitdumore]

Achuthan - [@Achuthankrishna]

WARNING

Please be patient: Methods 2 and 3 take at least 20 minutes, and Method 1 converges in about 15 minutes on average. This is due to the dataset size.
