# Tiny_CD

This is the implementation of *TinyCD: A (Not So) Deep Learning Model For Change Detection*.

🔥 🔥 🔥 TinyCD has been accepted for publication in Neural Computing and Applications 🔥 🔥 🔥

You can find the pre-print version here: arXiv

*(Figure: overall architecture of TinyCD)*

## Results

*(Video: qualitative samples, samples_reduced.mp4)*

In the following table we report a quantitative comparison between our model and other state-of-the-art models. F1 scores are reported as percentages, parameters in millions, and FLOPs in gigaflops.
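For reference, the pixel-wise F1 score used in such comparisons can be computed from true positives, false positives, and false negatives. This is a minimal illustrative sketch, not the repository's own metric code:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2PR / (P + R), which simplifies to 2*tp / (2*tp + fp + fn)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Example: 90 true positive pixels, 10 false positives, 10 false negatives
print(f1_score(90, 10, 10) * 100)  # F1 as a percentage: 90.0
```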

Here we report a visual comparison between the masks generated by TinyCD and BIT. We selected negative samples fairly, in order to show both the similarities and the differences between the obtained results. *(Figure: BIT vs. ours)*

Finally, we show a complete sequence of intermediate masks and the final binary classification mask, to demonstrate the ability of our tiny model to detect detailed objects. *(Figure: intermediate masks)*

## Installation and Requirements

The easiest way to reproduce our results is to create a folder named "TinyCD" on your device and then clone this repository in "TinyCD":

```bash
git clone https://github.com/AndreaCodegoni/Tiny_model_4_CD.git
```

Then, you can create a conda virtual environment named TinyCD with the following commands:

```bash
conda create --name TinyCD --file requirements.txt
conda activate TinyCD
```

## Dataset

You can find the original datasets at these two links:

- LEVIR-CD: https://justchenhao.github.io/LEVIR/
- WHU-CD: https://study.rsgis.whu.edu.cn/pages/download/building_dataset.html

Then, for each dataset, you have to organise the data in the following way:

- `A`: images of the t1 phase;
- `B`: images of the t2 phase;
- `label`: label maps;
- `list`: contains `train.txt`, `val.txt` and `test.txt`; each file records the image names (`XXX.png`) in the change detection dataset.
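A small sanity-check script can verify that a dataset folder follows the layout above before training. This is a hypothetical helper, not part of the repository; `check_dataset` and its argument name are assumptions:

```python
from pathlib import Path

def check_dataset(datapath: str) -> None:
    """Verify the A / B / label / list layout described in the README."""
    root = Path(datapath)
    # The four expected sub-folders
    for sub in ("A", "B", "label", "list"):
        assert (root / sub).is_dir(), f"missing folder: {sub}"
    # Every image name listed in a split file must exist in A, B and label
    for split in ("train.txt", "val.txt", "test.txt"):
        for name in (root / "list" / split).read_text().split():
            for sub in ("A", "B", "label"):
                assert (root / sub / name).is_file(), f"{sub}/{name} not found"
```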

If you prefer, you can download the pre-processed dataset from this link.

If you have any trouble with the datasets, feel free to contact us.

## Evaluate pretrained models

If you want to evaluate your trained model, or reproduce the paper results with the pretrained models in the `pretrained_models` folder, you can run:

```bash
python test_ondata.py --datapath "Your_data_path" --modelpath "Your_path_to_pretrained_model"
```

## Train your model

You can re-train our model as-is, or play with its parameters first, using:

```bash
python training.py --datapath "Your_data_path" --log-path "Path_to_save_logs_and_models_checkpoints"
```

## External repositories using TinyCD

Here I report some repositories that use TinyCD in their research. If you use TinyCD and enjoy it, please let me know about your work so I can add it here.

## References

We want to mention the following repositories that greatly helped us in our work:

## License

Code is released for non-commercial and research purposes only. For commercial purposes, please contact the authors.