A Two-Stream Conditional Generative Adversarial Network (TScGAN) for Improving Semantic Predictions in Urban Driving Scenes
The proposed scheme is a Two-Stream Conditional Generative Adversarial Network (TScGAN). One stream takes initial semantic segmentation masks predicted by an existing CNN, while the other processes the corresponding scene images through a supervised residual network structure to retain high-level scene information. In addition, TScGAN incorporates a novel dynamic weighting mechanism, which leads to significant and consistent gains in segmentation performance.
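To make the fusion idea concrete, here is a minimal, self-contained sketch of how two streams' per-pixel class scores might be combined with dynamic weights. The confidence-based weighting rule, the function names, and all parameters are illustrative assumptions for exposition only, not the exact mechanism from the paper.

```python
# Hypothetical sketch: dynamically weighted fusion of two streams'
# per-pixel class scores. The confidence-based weighting rule below is
# an assumption for illustration, not the paper's actual mechanism.
import math

def softmax(scores):
    """Numerically stable softmax over a list of class scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_pixel(mask_stream_scores, image_stream_scores):
    """Fuse two per-pixel class-score vectors into one distribution.

    The stream whose prediction is more confident (higher max class
    probability) receives a proportionally larger weight.
    """
    p_mask = softmax(mask_stream_scores)
    p_img = softmax(image_stream_scores)
    c_mask, c_img = max(p_mask), max(p_img)   # per-stream confidence
    w_mask = c_mask / (c_mask + c_img)        # weights sum to 1
    w_img = 1.0 - w_mask
    return [w_mask * a + w_img * b for a, b in zip(p_mask, p_img)]
```

In this toy version a confident stream dominates the fused prediction, while two equally uncertain streams contribute roughly equally; the real TScGAN learns its weighting rather than using a fixed rule.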
Please refer to our paper for more details.
Click here to download the code. The pretrained models can be found in TScGAN.
Several comparative experiments on public benchmark driving datasets, including Cityscapes, Mapillary, and Berkeley DeepDrive100K, demonstrate the effectiveness of the proposed method when combined with state-of-the-art CNN-based semantic segmentation models. Download the high-quality image results by clicking here.
If you find this framework useful in your work, please cite the paper:
If you find any issue running the code, you can report it in the issues section.
University of Technology of Belfort-Montbéliard (UTBM), France, Connaissance et Intelligence Artificielle Distribuées (CIAD) laboratory
We hope that this will benefit the community and researchers working in the field of autonomous driving.