diff --git a/README.md b/README.md
index 002c7dd..af2f32e 100644
--- a/README.md
+++ b/README.md
@@ -1,22 +1,62 @@
-# Real-Time-Anomaly-Segmentation [[Course Project](https://docs.google.com/document/d/1ElljsAprT2qX8RpePSQ3E00y_3oXrtN_CKYC6wqxyFQ/edit?usp=sharing)]
-This repository provides a starter-code setup for the Real-Time Anomaly Segmentation project of the Machine Learning Course. It consists of the code base for training ERFNet on the Cityscapes dataset and perform anomaly segmentation.
+# Real-Time Anomaly Segmentation for Road Scenes
+This repository contains the code for the __Real-Time Anomaly Segmentation for Road Scenes__ project of the __Advanced Machine Learning__ course (A.Y. 23/24) at Politecnico di Torino.
+
+### Sample Results
+
+#### First Example
+
+* Original Image<br>
+![Tractor](eval/saved_anomalies/tractor.png)
+
+* Ground Truth Anomaly<br>
+![Tractor Ground Truth Anomaly](eval/saved_anomalies/tractor_label.png)
+
+* Anomaly Scores<br>
+![Tractor Anomaly Scores](eval/saved_anomalies/tractor_anomaly_scores.png)
+
+#### Second Example
+
+* Original Image<br>
+![Phone Box](eval/saved_anomalies/phone_box.png)
+
+* Ground Truth Anomaly<br>
+![Phone Box Ground Truth Anomaly](eval/saved_anomalies/phone_box_label.png)
+
+* Anomaly Scores<br>
+![Phone Box Anomaly Scores](eval/saved_anomalies/phone_box_anomaly_scores.png)
+
 ## Packages
-For instructions, please refer to the README in each folder:
+For instructions, please refer to the __README__ in each folder:
 
-* [train](train) contains tools for training the network for semantic segmentation.
-* [eval](eval) contains tools for evaluating/visualizing the network's output and performing anomaly segmentation.
-* [imagenet](imagenet) Contains script and model for pretraining ERFNet's encoder in Imagenet.
-* [trained_models](trained_models) Contains the trained models used in the papers.
+* [train](train) contains tools for training the networks for semantic segmentation.
+* [eval](eval) contains tools for evaluating/visualizing the networks' output and performing anomaly segmentation.
+* [imagenet](imagenet) contains scripts and models for pretraining ERFNet's encoder on ImageNet.
+* [trained_models](trained_models) contains the trained models used in the papers (some networks are in the Releases section of the repo).
 
-## Requirements:
+## Datasets
 * [**The Cityscapes dataset**](https://www.cityscapes-dataset.com/): Download the "leftImg8bit" for the RGB images and the "gtFine" for the labels. **Please note that for training you should use the "_labelTrainIds" and not the "_labelIds", you can download the [cityscapes scripts](https://github.com/mcordts/cityscapesScripts) and use the [conversor](https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/createTrainIdLabelImgs.py) to generate trainIds from labelIds**
-* [**Python 3.6**](https://www.python.org/): If you don't have Python3.6 in your system, I recommend installing it with [Anaconda](https://www.anaconda.com/download/#linux)
-* [**PyTorch**](http://pytorch.org/): Make sure to install the Pytorch version for Python 3.6 with CUDA support (code only tested for CUDA 8.0).
-* **Additional Python packages**: numpy, matplotlib, Pillow, torchvision and visdom (optional for --visualize flag)
-* **For testing the anomaly segmentation model**: Road Anomaly, Road Obstacle, and Fishyscapes dataset. All testing images are provided here [Link](https://drive.google.com/file/d/1r2eFANvSlcUjxcerjC8l6dRa0slowMpx/view).
-
-## Anomaly Inference:
-* The repo provides a pre-trained ERFNet on the cityscapes dataset that can be used to perform anomaly segmentation on test anomaly datasets.
-* Anomaly Inference Command:```python evalAnomaly.py --input='/home/shyam/ViT-Adapter/segmentation/unk-dataset/RoadAnomaly21/images/*.png```. Change the dataset path ```'/home/shyam/ViT-Adapter/segmentation/unk-dataset/RoadAnomaly21/images/*.png```accordingly.
+* **For testing the anomaly segmentation models**: All testing images are provided [here](https://drive.google.com/file/d/1r2eFANvSlcUjxcerjC8l6dRa0slowMpx/view).
+
+## Networks
+The repo provides the following pre-trained networks that can be used to perform anomaly segmentation:
+* __ERFNet__ trained on 19 classes of the Cityscapes dataset with __Cross-Entropy__, __Logit Norm + Cross-Entropy__, __Logit Norm + Focal Loss__, __IsoMax+ + Cross-Entropy__ and __IsoMax+ + Focal Loss__ (a minimal sketch of these losses is given below)
+* __BiSeNetV1__ trained on 20 classes (19 + void class) of the Cityscapes dataset
+* __ENet__ trained on 20 classes (19 + void class) of the Cityscapes dataset
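+
+For reference, the snippet below is a minimal, illustrative sketch of how Logit Norm and a focal term can be combined with the standard cross-entropy for dense prediction. The function names and the `tau`/`gamma` defaults are illustrative values from the original papers, not necessarily the ones used in this project; see the training code in this repository for the actual implementations.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def logit_norm_cross_entropy(logits, target, tau=0.04, ignore_index=255):
+    # LogitNorm: L2-normalize the per-pixel logits and scale by a temperature tau,
+    # then apply the usual cross-entropy on the normalized logits.
+    norms = torch.norm(logits, p=2, dim=1, keepdim=True) + 1e-7
+    return F.cross_entropy(logits / (norms * tau), target, ignore_index=ignore_index)
+
+def focal_loss(logits, target, gamma=2.0, ignore_index=255):
+    # Focal loss as a modulated cross-entropy: (1 - p_t)^gamma down-weights easy pixels.
+    ce = F.cross_entropy(logits, target, reduction='none', ignore_index=ignore_index)
+    pt = torch.exp(-ce)  # probability assigned to the true class (1 for ignored pixels)
+    return ((1.0 - pt) ** gamma * ce).mean()
+
+# Toy usage with ERFNet-shaped outputs: batch of 2, 19 classes, 64x128 logit maps.
+logits = torch.randn(2, 19, 64, 128, requires_grad=True)
+target = torch.randint(0, 19, (2, 64, 128))
+(logit_norm_cross_entropy(logits, target) + focal_loss(logits, target)).backward()
+```
+
+IsoMax+ additionally replaces the final linear classifier with distance-based logits to learnable class prototypes, so it is omitted from this sketch.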
+
+## Anomaly Inference
+To run anomaly inference, use the following command:
+* Anomaly Inference Command: ```python evalAnomaly.py --input='/content/validation_dataset/RoadAnomaly21/images/*.png'```.
+  Change the dataset path ```'/content/validation_dataset/RoadAnomaly21/images/*.png'``` accordingly.
+
+## Notebook
+The `AML_Project.ipynb` notebook can be opened on Colab to run all the evaluation commands.
+
+## Authors
+
+- [Davide Sferrazza s326619](https://github.com/FarInHeight/)
+- [Davide Vitabile s330509](https://github.com/Vitabile/)
+- [Yonghu Liu s313442](https://github.com/Liu-Yonghu)
+
+## License
+[MIT License](LICENSE)
\ No newline at end of file
diff --git a/eval/README.md b/eval/README.md
index 1dcdb99..109e869 100644
--- a/eval/README.md
+++ b/eval/README.md
@@ -1,6 +1,8 @@
 # Functions for evaluating/visualizing the network's output
-Currently there are 4 usable functions to evaluate stuff:
+Currently there are 6 usable functions to evaluate stuff:
+- evalAnomaly
+- colorized_anomaly
 - eval_cityscapes_color
 - eval_cityscapes_server
 - eval_iou
@@ -12,12 +14,21 @@ This code can be used to produce anomaly segmentation results on various anomaly
 **Examples:**
 ```
-python evalAnomaly.py --input='/home/shyam/ViT-Adapter/segmentation/unk-dataset/RoadAnomaly21/images/*.png'
+python evalAnomaly.py --input='/content/validation_dataset/RoadAnomaly21/images/*.png'
 ```
 
 For the _MSP_ method, you can also optionally specify the temperature scaling value as:
 ```
-python evalAnomaly.py --method='msp' --temperature=2 --input='/home/shyam/ViT-Adapter/segmentation/unk-dataset/RoadAnomaly21/images/*.png'
+python evalAnomaly.py --method='msp' --temperature=2 --input='/content/validation_dataset/RoadAnomaly21/images/*.png'
+```
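+
+As a point of reference, the MSP score with temperature scaling amounts to the sketch below. This is not the script's actual code: the function name is hypothetical, `logits` is assumed to be the raw `C x H x W` output of the network for one image, and taking 1 minus the maximum softmax probability as the anomaly score is a common convention rather than a detail stated in this README.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def msp_anomaly_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
+    # Per-pixel maximum softmax probability (MSP) with temperature scaling;
+    # temperature > 1 softens the per-pixel class distribution.
+    probs = F.softmax(logits / temperature, dim=0)   # (C, H, W) class probabilities
+    return 1.0 - probs.max(dim=0).values             # higher value = more anomalous pixel
+
+# Toy usage with random 19-class logits at 512x1024 resolution.
+scores = msp_anomaly_score(torch.randn(19, 512, 1024), temperature=2.0)
+print(scores.shape)  # torch.Size([512, 1024])
+```
+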
+## colorized_anomaly.py
+
+This code can be used to produce visual anomaly segmentation results with the various methods, saving an image of the ground-truth anomaly segmentation, the resulting anomaly segmentation, and a heatmap of the anomaly scores.
+
+
+**Examples:**
+```
+python colorized_anomaly.py --input='/content/validation_dataset/RoadAnomaly21/images/*.png'
 ```
 
 ## eval_cityscapes_color.py
 
@@ -28,7 +39,7 @@ This code can be used to produce segmentation of the Cityscapes images in color
 
 **Examples:**
 ```
-python eval_cityscapes_color.py --datadir /home/datasets/cityscapes/ --subset val
+python eval_cityscapes_color.py --datadir /content/cityscapes/ --subset val
 ```
 
 ## eval_cityscapes_server.py
 
@@ -39,7 +50,7 @@ This code can be used to produce segmentation of the Cityscapes images and conve
 
 **Examples:**
 ```
-python eval_cityscapes_server.py --datadir /home/datasets/cityscapes/ --subset val
+python eval_cityscapes_server.py --datadir /content/cityscapes/ --subset val
 ```
 
 ## eval_iou.py
 
@@ -50,7 +61,7 @@ This code can be used to calculate the IoU (mean and per-class) in a subset of i
 
 **Examples:**
 ```
-python eval_iou.py --datadir /home/datasets/cityscapes/ --subset val
+python eval_iou.py --datadir /content/cityscapes/ --subset val
 ```
 
 ## eval_forwardTime.py
diff --git a/eval/saved_anomalies/phone_box.png b/eval/saved_anomalies/phone_box.png
new file mode 100644
index 0000000..91bffc6
Binary files /dev/null and b/eval/saved_anomalies/phone_box.png differ
diff --git a/eval/saved_anomalies/phone_box_anomaly_scores.png b/eval/saved_anomalies/phone_box_anomaly_scores.png
new file mode 100644
index 0000000..c2339dd
Binary files /dev/null and b/eval/saved_anomalies/phone_box_anomaly_scores.png differ
diff --git a/eval/saved_anomalies/phone_box_label.png b/eval/saved_anomalies/phone_box_label.png
new file mode 100644
index 0000000..cb33e8f
Binary files /dev/null and b/eval/saved_anomalies/phone_box_label.png differ
diff --git a/eval/saved_anomalies/tractor.png b/eval/saved_anomalies/tractor.png
new file mode 100644
index 0000000..99b941a
Binary files /dev/null and b/eval/saved_anomalies/tractor.png differ
diff --git a/eval/saved_anomalies/tractor_anomaly_scores.png b/eval/saved_anomalies/tractor_anomaly_scores.png
new file mode 100644
index 0000000..92f4d41
Binary files /dev/null and b/eval/saved_anomalies/tractor_anomaly_scores.png differ
diff --git a/eval/saved_anomalies/tractor_label.png b/eval/saved_anomalies/tractor_label.png
new file mode 100644
index 0000000..fc3f382
Binary files /dev/null and b/eval/saved_anomalies/tractor_label.png differ