The figure above illustrates our ShapeFormer architecture. The main implementation of this network can be found here.
This work is built on aistron. Please follow the installation instructions here.
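As a minimal sketch, installation might look like the following. This assumes aistron follows the usual detectron2-style editable install (the clone URL and the `pip install -e .` step are assumptions), so defer to the linked instructions if they differ:

```bash
# aistron builds on detectron2, so install detectron2 first.
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'

# Clone and install aistron in editable mode (URL and install step are
# assumptions based on common detectron2-ecosystem practice).
git clone https://github.com/trqminh/aistron.git
cd aistron
pip install -e .
```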
Please download the datasets below; a sketch of one possible directory layout follows the list. More preparation instructions can be found here.
Download the images from the KITTI dataset.
The amodal annotations can be found at the KINS dataset.
The D2S Amodal dataset can be found at mvtec-d2sa.
The COCOA dataset annotations can be downloaded here (referenced from github.com/YihongSun/Bayesian-Amodal). The images for COCOA are the train2014 and val2014 splits of the COCO dataset.
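Once downloaded, the data needs to be arranged where aistron can find it. The layout below is only an illustrative assumption (every directory name here is hypothetical); the linked preparation instructions are authoritative:

```bash
# Hypothetical dataset layout -- directory names are assumptions, not the
# repo's verified structure; follow the linked preparation guide instead.
mkdir -p datasets/KINS/training/image_2                    # KITTI images + KINS annotations
mkdir -p datasets/D2SA                                     # D2S Amodal images + annotations
mkdir -p datasets/COCOA/train2014 datasets/COCOA/val2014   # COCO images + COCOA annotations
```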
Configuration files for training ShapeFormer on each dataset are available here.
To train, test, and run the demo, see the example scripts in scripts/; a rough sketch of a typical invocation is shown below.
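For instance, a training or evaluation run could look roughly like the following. The script name and config path are illustrative assumptions in the detectron2 style, not the repo's confirmed entry points:

```bash
# Hypothetical training command (script and config names are assumptions;
# see scripts/ for the actual working examples).
python tools/train_net.py \
    --config-file configs/KINS/shapeformer_R50_FPN.yaml \
    --num-gpus 2

# Evaluation-only run on trained weights (also an assumption).
python tools/train_net.py \
    --config-file configs/KINS/shapeformer_R50_FPN.yaml \
    --eval-only MODEL.WEIGHTS output/model_final.pth
```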
If you find our work useful, please consider citing:
```bibtex
@article{tran2024shapeformer,
  title={ShapeFormer: Shape Prior Visible-to-Amodal Transformer-based Amodal Instance Segmentation},
  author={Tran, Minh and Bounsavy, Winston and Vo, Khoa and Nguyen, Anh and Nguyen, Tri and Le, Ngan},
  journal={arXiv preprint arXiv:2403.11376},
  year={2024}
}
```