[Model] Add SVTR framework and configs (#1621)

* [Model] Add SVTR framework and configs
* update
* update transform names
* update base config
* fix cfg
* update cfgs
* fix
* update cfg
* update decoder
* fix encoder
* fix encoder
* fix
* update cfg
* update name

Commit 0aa5d7b (parent b0557c2): 15 changed files with 437 additions and 14 deletions.
# SVTR

> [SVTR: Scene Text Recognition with a Single Visual Model](https://arxiv.org/abs/2205.00159)

<!-- [ALGORITHM] -->

## Abstract

Dominant scene text recognition models commonly contain two building blocks, a visual model for feature extraction and a sequence model for text transcription. This hybrid architecture, although accurate, is complex and less efficient. In this study, we propose a Single Visual model for Scene Text recognition within the patch-wise image tokenization framework, which dispenses with the sequential modeling entirely. The method, termed SVTR, firstly decomposes an image text into small patches named character components. Afterward, hierarchical stages are recurrently carried out by component-level mixing, merging and/or combining. Global and local mixing blocks are devised to perceive the inter-character and intra-character patterns, leading to a multi-grained character component perception. Thus, characters are recognized by a simple linear prediction. Experimental results on both English and Chinese scene text recognition tasks demonstrate the effectiveness of SVTR. SVTR-L (Large) achieves highly competitive accuracy in English and outperforms existing methods by a large margin in Chinese, while running faster. In addition, SVTR-T (Tiny) is an effective and much smaller model, which shows appealing speed at inference.

<div align=center>
<img src="https://user-images.githubusercontent.com/22607038/210541576-025df5d5-f4d2-4037-82e0-246cf8cd3c25.png"/>
</div>
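To make the patch-wise tokenization concrete, here is a small back-of-the-envelope sketch of the character-component grid and of what a local mixing block can attend to. It assumes the patch embedding downsamples height and width by a factor of 4 (two stride-2 convolutions), as described in the paper; this is illustrative arithmetic, not code from this repository.

```python
# Illustrative token-grid arithmetic for SVTR mixing blocks (assumption:
# patch embedding downsamples H and W by 4, per the paper).

def token_grid(img_h, img_w, stride=4):
    """Character-component grid after patch embedding."""
    return img_h // stride, img_w // stride

def local_window_tokens(h, w, win_h, win_w, row, col):
    """Number of tokens a local mixing block at (row, col) can attend to,
    given a win_h x win_w window clipped at the grid borders."""
    half_h, half_w = win_h // 2, win_w // 2
    rows = range(max(0, row - half_h), min(h, row + half_h + 1))
    cols = range(max(0, col - half_w), min(w, col + half_w + 1))
    return len(rows) * len(cols)

h, w = token_grid(32, 100)   # a 32x100 input yields an 8x25 grid, 200 tokens
center = local_window_tokens(h, w, 7, 11, h // 2, w // 2)
print(h, w, h * w, center)
```

A global mixing block, by contrast, attends across all `h * w` tokens, which is how inter-character patterns are captured.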
## Dataset

### Train Dataset

| trainset  | instance_num | repeat_num | source |
| :-------: | :----------: | :--------: | :----: |
| SynthText | 7266686      | 1          | synth  |
| Syn90k    | 8919273      | 1          | synth  |

### Test Dataset

| testset | instance_num | type      |
| :-----: | :----------: | :-------: |
| IIIT5K  | 3000         | regular   |
| SVT     | 647          | regular   |
| IC13    | 1015         | regular   |
| IC15    | 2077         | irregular |
| SVTP    | 645          | irregular |
| CT80    | 288          | irregular |
## Results and Models

| Methods | | Regular Text | | | | Irregular Text | | download |
| :-----------------------------------------------------------: | :----: | :----------: | :-------: | :-: | :-------: | :------------: | :----: | :------------------------------------------------------------------------------: |
| | IIIT5K | SVT | IC13-1015 | | IC15-2077 | SVTP | CT80 | |
| [SVTR-tiny](/configs/textrecog/svtr/svtr-tiny_20e_st_mj.py) | - | - | - | | - | - | - | [model](<>) \| [log](<>) |
| [SVTR-small](/configs/textrecog/svtr/svtr-small_20e_st_mj.py) | 0.8553 | 0.9026 | 0.9448 | | 0.7496 | 0.8496 | 0.8854 | [model](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-small_20e_st_mj/svtr-small_20e_st_mj-35d800d6.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-small_20e_st_mj/20230105_184454.log) |
| [SVTR-base](/configs/textrecog/svtr/svtr-base_20e_st_mj.py) | 0.8570 | 0.9181 | 0.9438 | | 0.7448 | 0.8388 | 0.9028 | [model](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-base_20e_st_mj/svtr-base_20e_st_mj-ea500101.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-base_20e_st_mj/20221227_175415.log) |
| [SVTR-large](/configs/textrecog/svtr/svtr-large_20e_st_mj.py) | - | - | - | | - | - | - | [model](<>) \| [log](<>) |

```{note}
The implementation and configuration follow the original code and paper, but there is still a gap between the reproduced results and the official ones. We appreciate any suggestions to improve its performance.
```
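For reference, the `word_acc` figures above are word-level accuracies: the fraction of test images whose predicted transcription exactly matches the ground truth. A simplified sketch follows; MMOCR's actual metric supports several normalization modes, which are reduced here to a plain lowercase comparison.

```python
# Simplified word-accuracy metric: exact match after lowercasing.
# (MMOCR's real WordMetric offers additional ignore-case/ignore-symbol modes.)

def word_acc(preds, gts):
    assert len(preds) == len(gts) and gts, 'need equal, non-empty lists'
    correct = sum(p.lower() == g.lower() for p, g in zip(preds, gts))
    return correct / len(gts)

print(word_acc(['Hello', 'wor1d'], ['hello', 'world']))  # 0.5
```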
## Citation

```bibtex
@inproceedings{ijcai2022p124,
  title     = {SVTR: Scene Text Recognition with a Single Visual Model},
  author    = {Du, Yongkun and Chen, Zhineng and Jia, Caiyan and Yin, Xiaoting and Zheng, Tianlun and Li, Chenxia and Du, Yuning and Jiang, Yu-Gang},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  editor    = {Luc De Raedt},
  pages     = {884--890},
  year      = {2022},
  month     = {7},
  note      = {Main Track},
  doi       = {10.24963/ijcai.2022/124},
  url       = {https://doi.org/10.24963/ijcai.2022/124},
}
```
dictionary = dict(
    type='Dictionary',
    dict_file='{{ fileDirname }}/../../../dicts/lower_english_digits.txt',
    with_padding=True,
    with_unknown=True,
)
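For context, here is a sketch of what this dictionary likely expands to, assuming `lower_english_digits.txt` lists the 10 digits and 26 lowercase letters one character per line (the file itself is not shown in this diff, so the exact contents and ordering are an assumption).

```python
import string

# Assumed contents of lower_english_digits.txt: digits 0-9 plus a-z.
# with_padding and with_unknown each add one extra token to the class count.
chars = list(string.digits + string.ascii_lowercase)
num_classes = len(chars) + 2  # +1 <PAD> (CTC blank), +1 <UKN>
print(len(chars), num_classes)  # 36 38
```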

model = dict(
    type='SVTR',
    preprocessor=dict(
        type='STN',  # spatial transformer network that rectifies the input text
        in_channels=3,
        resized_image_size=(32, 64),
        output_image_size=(32, 100),
        num_control_points=20,
        margins=[0.05, 0.05]),
    encoder=dict(
        type='SVTREncoder',
        img_size=[32, 100],
        in_channels=3,
        out_channels=192,
        embed_dims=[64, 128, 256],
        depth=[3, 6, 3],  # mixing blocks per stage
        num_heads=[2, 4, 8],
        mixer_types=['Local'] * 6 + ['Global'] * 6,
        window_size=[[7, 11], [7, 11], [7, 11]],
        merging_types='Conv',
        prenorm=False,
        max_seq_len=25),
    decoder=dict(
        type='SVTRDecoder',
        in_channels=192,
        module_loss=dict(
            type='CTCModuleLoss', letter_case='lower', zero_infinity=True),
        postprocessor=dict(type='CTCPostProcessor'),
        dictionary=dictionary),
    data_preprocessor=dict(
        type='TextRecogDataPreprocessor', mean=[127.5], std=[127.5]))
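The decoder above pairs a CTC loss with `CTCPostProcessor`. As a rough illustration of what greedy CTC decoding does at inference time (this is a simplified sketch, not MMOCR's implementation): take the argmax label per time step, collapse consecutive repeats, and drop blanks.

```python
# Greedy CTC decoding sketch. Index 0 plays the blank here; the real
# postprocessor takes the blank/padding index from the dictionary.

def ctc_greedy_decode(frame_indices, blank=0):
    out, prev = [], None
    for idx in frame_indices:
        if idx != blank and idx != prev:  # drop blanks, collapse repeats
            out.append(idx)
        prev = idx
    return out

# A blank between two identical labels preserves the genuine duplicate:
print(ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0]))  # [1, 1, 2]
```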
Collections:
- Name: SVTR
  Metadata:
    Training Data: OCRDataset
    Training Techniques:
      - AdamW
    Training Resources: 4x Tesla A100
    Epochs: 20
    Batch Size: 2048
    Architecture:
      - STN
      - SVTREncoder
      - SVTRDecoder
  Paper:
    URL: https://arxiv.org/pdf/2205.00159.pdf
    Title: 'SVTR: Scene Text Recognition with a Single Visual Model'
  README: configs/textrecog/svtr/README.md

Models:
  - Name: svtr-small_20e_st_mj
    Alias: svtr-small
    In Collection: SVTR
    Config: configs/textrecog/svtr/svtr-small_20e_st_mj.py
    Metadata:
      Training Data:
        - SynthText
        - Syn90k
    Results:
      - Task: Text Recognition
        Dataset: IIIT5K
        Metrics:
          word_acc: 0.8553
      - Task: Text Recognition
        Dataset: SVT
        Metrics:
          word_acc: 0.9026
      - Task: Text Recognition
        Dataset: ICDAR2013
        Metrics:
          word_acc: 0.9448
      - Task: Text Recognition
        Dataset: ICDAR2015
        Metrics:
          word_acc: 0.7496
      - Task: Text Recognition
        Dataset: SVTP
        Metrics:
          word_acc: 0.8496
      - Task: Text Recognition
        Dataset: CT80
        Metrics:
          word_acc: 0.8854
    Weights: https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-small_20e_st_mj/svtr-small_20e_st_mj-35d800d6.pth

  - Name: svtr-base_20e_st_mj
    Alias: svtr-base
    Batch Size: 1024
    In Collection: SVTR
    Config: configs/textrecog/svtr/svtr-base_20e_st_mj.py
    Metadata:
      Training Data:
        - SynthText
        - Syn90k
    Results:
      - Task: Text Recognition
        Dataset: IIIT5K
        Metrics:
          word_acc: 0.8570
      - Task: Text Recognition
        Dataset: SVT
        Metrics:
          word_acc: 0.9181
      - Task: Text Recognition
        Dataset: ICDAR2013
        Metrics:
          word_acc: 0.9438
      - Task: Text Recognition
        Dataset: ICDAR2015
        Metrics:
          word_acc: 0.7448
      - Task: Text Recognition
        Dataset: SVTP
        Metrics:
          word_acc: 0.8388
      - Task: Text Recognition
        Dataset: CT80
        Metrics:
          word_acc: 0.9028
    Weights: https://download.openmmlab.com/mmocr/textrecog/svtr/svtr-base_20e_st_mj/svtr-base_20e_st_mj-ea500101.pth
_base_ = [
    'svtr-tiny_20e_st_mj.py',
]

model = dict(
    preprocessor=dict(output_image_size=(48, 160)),
    encoder=dict(
        img_size=[48, 160],
        max_seq_len=40,
        out_channels=256,
        embed_dims=[128, 256, 384],
        depth=[3, 6, 9],
        num_heads=[4, 8, 12],
        mixer_types=['Local'] * 8 + ['Global'] * 10),
    decoder=dict(in_channels=256))

train_dataloader = dict(batch_size=256)
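The `_base_` mechanism above inherits the tiny config and recursively overrides nested keys, so unlisted fields such as `prenorm` keep their base values. A simplified, illustrative merge (not mmengine's actual implementation) behaves like this:

```python
# Illustrative recursive config merge: nested dicts merge key-by-key,
# and the child's leaf values win over the base's.

def merge_cfg(base, override):
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], val)
        else:
            merged[key] = val
    return merged

base = {'encoder': {'embed_dims': [64, 128, 256], 'prenorm': False}}
child = {'encoder': {'embed_dims': [128, 256, 384]}}
print(merge_cfg(base, child))
# {'encoder': {'embed_dims': [128, 256, 384], 'prenorm': False}}
```

Note that lists such as `embed_dims` are replaced wholesale, not merged element-wise, which is why the configs above restate each list in full.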
_base_ = [
    'svtr-tiny_20e_st_mj.py',
]

model = dict(
    preprocessor=dict(output_image_size=(48, 160)),
    encoder=dict(
        img_size=[48, 160],
        max_seq_len=40,
        out_channels=384,
        embed_dims=[192, 256, 512],
        depth=[3, 9, 9],
        num_heads=[6, 8, 16],
        mixer_types=['Local'] * 10 + ['Global'] * 11),
    decoder=dict(in_channels=384))

train_dataloader = dict(batch_size=128)

optim_wrapper = dict(optimizer=dict(lr=2.5e-4))
_base_ = [
    'svtr-tiny_20e_st_mj.py',
]

model = dict(
    encoder=dict(
        embed_dims=[96, 192, 256],
        depth=[3, 6, 6],
        num_heads=[3, 6, 8],
        mixer_types=['Local'] * 8 + ['Global'] * 7))