Final project of Cognitive Robotics by:
- Sohyung Kim (S3475743)
- Thijs Eker (S2576597)
- Dhawal Salvi (S4107624)
- Ewout Bergsma (S3441423)
Setup (example commands below):
- Compile the C++ code in the CPP/ folder
- Install the Python packages from requirements.txt
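For example (a typical out-of-source CMake build; the exact commands may differ on your system):

    cd CPP && mkdir build && cd build && cmake .. && make
    cd ../.. && pip install -r requirements.txt
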
Build the dataset as follows (this might take a night; we also have a zip with the data, which is only 500 MB):
1. Download the evaluation set (containing the images) and the point clouds from the University of Washington RGB-D dataset site
2. Point the EVAL_DATASET_PATH, PC_DATASET_PATH and OUTPUT_DATASET_PATH variables to the downloaded folders (a hypothetical example follows this list)
3. Run build_dataset.py, and after that build_additional_dataset.py
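The three path variables are plain Python assignments; the paths below are purely illustrative, and we assume the variables sit near the top of build_dataset.py:

    EVAL_DATASET_PATH = "/data/rgbd-dataset_eval"   # evaluation set (images)
    PC_DATASET_PATH = "/data/rgbd-dataset_pcd"      # point clouds
    OUTPUT_DATASET_PATH = "/data/built_dataset"     # where the built dataset is written
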
To generate the results, run either:
- cross_validation_for_al_mf.py shape_descriptor confidence_threshold (where shape_descriptor is 0 (VFH), 1 (GOOD5) or 2 (GOOD15)), or
- cross_validation_for_non_al_mf.py
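For example (0.75 is just an illustrative confidence threshold, not a recommended value):

    python cross_validation_for_al_mf.py 1 0.75

runs the active-learning cross-validation on the GOOD5 descriptors.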
- CPP/include/good.h: GOOD descriptor header file from https://github.com/SeyedHamidreza/GOOD_descriptor
- CPP/CMakeLists.txt: CMake file for building the C++ code
- CPP/good.cpp: GOOD descriptor implementation from https://github.com/SeyedHamidreza/GOOD_descriptor
- CPP/main.cpp: the main file, called from Python, for building the feature histograms for VFH, GOOD5 and GOOD15
- build_additional_dataset.py: additional script that also computes the GOOD5 and GOOD15 descriptors
- build_dataset.py: the main script for building the dataset; it reads the PNGs, scales them to 224x224, and reads in the point clouds to compute the VFH descriptors
- cross_validation_for_al_mf.py: the script used to compute the final active-learning results in the paper
- cross_validation_for_non_al_mf.py: the script used to compute the final offline-learning results in the paper
- load_dataset.py: functionality for loading the different datasets used in our research (VFH, GOOD5, GOOD15), plus the implementation of the cross-validation
- mondrian_forest_classifier_with_al_strategy.py: implementation of a fit method using the described querying strategy; the active learning is implemented here (see the sketch below this list)
- requirements.txt: The required pip packages for running the code
- run_exec.py: Python file that calls the compiled C++ code from the CPP/ folder
- utils.py: file with some definitions (like the category names)
- create_image_features.py: used to compute 4096-dimensional features from the scaled images using VGG (see the sketch below this list)
- final_general_functions.py: this script was used for the RGB results
- final_mf_all_image_features.py: this script was used for the RGB results
- final_mf_vfh_and_all_image_features.py: this script was used for the RGB results
- mrmr_feature_selection.py: mRMR feature selection using the skfeature-chappers package
- mrmr_feature_selection_2.py: mRMR feature selection using the pymrmr package
- mrmr_feature_selection_3.py: multithreaded mRMR feature selection using the mifs package
- rf_hyperparam_search.py: hyperparameter search for random forest
- train_svm.py: file for testing SVM on VFH data
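
As referenced in the file list, the querying strategy lives in mondrian_forest_classifier_with_al_strategy.py. The following is a minimal sketch of the idea, not the actual implementation: it assumes scikit-garden's MondrianForestClassifier, and the class name, method name and default threshold here are illustrative.

    import numpy as np
    from skgarden import MondrianForestClassifier

    class MondrianForestWithAL(MondrianForestClassifier):
        """Illustrative sketch: only train on samples the forest is unsure about."""

        def fit_with_al(self, X, y, classes, confidence_threshold=0.75):
            # Seed the forest with the first sample so predict_proba is defined;
            # `classes` must list all class labels up front for partial_fit.
            self.partial_fit(X[:1], y[:1], classes=classes)
            n_queried = 1
            for i in range(1, len(X)):
                proba = self.predict_proba(X[i:i + 1])
                # "Query" the label (i.e. train on the sample) only when the
                # forest's confidence in its top prediction is below threshold.
                if np.max(proba) < confidence_threshold:
                    self.partial_fit(X[i:i + 1], y[i:i + 1])
                    n_queried += 1
            return n_queried

Samples the forest already classifies confidently are skipped, so the number of queried labels (and with it the labeling effort) stays low.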
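
The 4096 image features produced by create_image_features.py match the width of VGG16's fully connected layers, so the extraction presumably looks something like the sketch below. This assumes Keras' VGG16 with ImageNet weights and its 'fc1' layer; the actual script may differ.

    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
    from tensorflow.keras.models import Model
    from tensorflow.keras.preprocessing import image

    # Truncate VGG16 after its first fully connected layer ('fc1', 4096 units).
    base = VGG16(weights="imagenet")
    extractor = Model(inputs=base.input, outputs=base.get_layer("fc1").output)

    def image_to_features(png_path):
        # The dataset images are already scaled to 224x224, VGG16's input size.
        img = image.load_img(png_path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return extractor.predict(x)[0]  # a 4096-dimensional feature vector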