
MeMOTR

The official implementation of MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking, ICCV 2023.

Authors: Ruopeng Gao, Limin Wang.


MeMOTR is a fully end-to-end, memory-augmented multi-object tracker built on Transformer. It injects long-term memory through a customized memory-attention layer, which significantly improves association performance.

Dance Demo

News 🔥

  • 2024.05.09: We release MOTIP, which takes a new perspective and regards multi-object tracking as an ID prediction problem 🔭.

  • 2024.02.21: We add the results on SportsMOT in our arXiv version (supplementary part). We would appreciate it if you could cite our trackers in the SportsMOT comparison 📈.

  • 2023.12.24: We release the code, scripts and checkpoints on BDD100K 🚗.

  • 2023.12.13: We provide a Jupyter notebook for running our model on your own video 🎥.

  • 2023.11.07: We release the scripts and checkpoints on SportsMOT 🏀.

  • 2023.08.24: We release the scripts and checkpoints on DanceTrack 💃.

  • 2023.08.09: We release the main code. More configurations, scripts and checkpoints will be released soon 🔜.

Installation

conda create -n MeMOTR python=3.10  # create a virtual env
# We may rely on some features introduced in Python 3.10, so earlier Python versions are not guaranteed to work.
conda activate MeMOTR               # activate the env
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
# Our code primarily runs on PyTorch 1.13.1,
# but it should also be compatible with slightly earlier versions (e.g., 1.12.1).
# However, much older PyTorch versions may cause issues that need to be fixed, since we use some recently introduced PyTorch features (e.g., ResNet50_Weights).
conda install matplotlib pyyaml scipy tqdm tensorboard
pip install opencv-python

You also need to compile the Deformable Attention CUDA ops:

# From https://github.com/fundamentalvision/Deformable-DETR
cd ./models/ops/
sh make.sh
# You can test the ops if needed:
python test.py

Data

Put the unzipped MOT17 and CrowdHuman datasets into DATADIR/MOT17/images/ and DATADIR/CrowdHuman/images/, respectively. Then generate the ground-truth files by running the corresponding scripts, ./data/gen_mot17_gts.py and ./data/gen_crowdhuman_gts.py, as shown below.
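
For example, assuming the scripts pick up the data root on their own (check each script for how it expects DATADIR to be configured, since the exact arguments may differ):

python ./data/gen_mot17_gts.py        # writes DATADIR/MOT17/gts/train/
python ./data/gen_crowdhuman_gts.py   # writes DATADIR/CrowdHuman/gts/train/ and DATADIR/CrowdHuman/gts/val/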

Finally, you should get the following dataset structure:

DATADIR/
  ├── DanceTrack/
  │ ├── train/
  │ ├── val/
  │ ├── test/
  │ ├── train_seqmap.txt
  │ ├── val_seqmap.txt
  │ └── test_seqmap.txt
  ├── SportsMOT/
  │ ├── train/
  │ ├── val/
  │ ├── test/
  │ ├── train_seqmap.txt
  │ ├── val_seqmap.txt
  │ └── test_seqmap.txt
  ├── MOT17/
  │ ├── images/
  │ │ ├── train/     # unzipped from MOT17
  │ │ └── test/      # unzipped from MOT17
  │ └── gts/
  │   └── train/     # generated by ./data/gen_mot17_gts.py
  └── CrowdHuman/
    ├── images/
    │ ├── train/     # unzipped from CrowdHuman
    │ └── val/       # unzipped from CrowdHuman
    └── gts/
      ├── train/     # generated by ./data/gen_crowdhuman_gts.py
      └── val/       # generated by ./data/gen_crowdhuman_gts.py

Pretrain

We initialize our model with the official DAB-Deformable-DETR (with an R50 backbone) weights pretrained on the COCO dataset; you can also download the checkpoint we used here. Then put the checkpoint at the root of this project directory, for example:
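
A minimal sketch of this step; the filename below is only a placeholder, so keep whatever name the downloaded file has (and, if your training config references a specific pretrained-weights filename, match that instead):

# copy the COCO-pretrained DAB-Deformable-DETR (R50) weights to the project root
cp /path/to/dab_deformable_detr_r50_coco.pth ./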

Scripts on DanceTrack

Training

Train MeMOTR with 8 GPUs on DanceTrack (GPUs with >= 32 GB of memory, such as V100-32GB, are recommended):

python -m torch.distributed.run --nproc_per_node=8 main.py --use-distributed --config-path ./configs/train_dancetrack.yaml --outputs-dir ./outputs/memotr_dancetrack/ --batch-size 1 --data-root <your data dir path>

If your GPUs have less than 32 GB of memory, we also provide a memory-optimized version (enabled with the --use-checkpoint option). As discussed in the paper, we use gradient checkpointing to reduce the allocated GPU memory. The following training script takes only about 10 GB of GPU memory:

python -m torch.distributed.run --nproc_per_node=8 main.py --use-distributed --config-path ./configs/train_dancetrack.yaml --outputs-dir ./outputs/memotr_dancetrack/ --batch-size 1 --data-root <your data dir path> --use-checkpoint

Submit and Evaluation

You can use this script to evaluate the trained model on the DanceTrack val set:

python main.py --mode eval --data-root <your data dir path> --eval-mode specific --eval-model <filename of the checkpoint> --eval-dir ./outputs/memotr_dancetrack/ --eval-threads <your gpus num>

For submitting, you can use the following script:

python -m torch.distributed.run --nproc_per_node=8 main.py --mode submit --submit-dir ./outputs/memotr_dancetrack/ --submit-model <filename of the checkpoint> --use-distributed --data-root <your data dir path>

Besides, if you just want to directly evaluate or submit with our trained checkpoint, you can get the checkpoint we used in the paper here. Then put this checkpoint into ./outputs/memotr_dancetrack/ and run the above scripts, for instance:
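
A concrete evaluation example (the checkpoint filename here is hypothetical; replace it with the actual name of the downloaded file, and set --eval-threads to your GPU count):

python main.py --mode eval --data-root <your data dir path> --eval-mode specific --eval-model memotr_dancetrack.pth --eval-dir ./outputs/memotr_dancetrack/ --eval-threads 8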

Scripts on MOT17

Submit

For submitting, you can use the following script:

python -m torch.distributed.run --nproc_per_node=8 main.py --mode submit --config-path ./outputs/memotr_mot17/train/config.yaml --submit-dir ./outputs/memotr_mot17/ --submit-model <filename of the checkpoint> --use-distributed --data-root <your data dir path>

Alternatively, you can directly download our trained checkpoint here. Then put it into ./outputs/memotr_mot17/ and run the above script to generate submission files for the MOT17 test set.

Scripts on SportsMOT and other datasets

You can reuse the DanceTrack scripts and simply replace --config-path, e.g., swap ./configs/train_dancetrack.yaml for ./configs/train_sportsmot.yaml to train on SportsMOT, as in the example below.
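
For instance, a SportsMOT training command would look like this (the output directory name is only a suggestion):

python -m torch.distributed.run --nproc_per_node=8 main.py --use-distributed --config-path ./configs/train_sportsmot.yaml --outputs-dir ./outputs/memotr_sportsmot/ --batch-size 1 --data-root <your data dir path>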

Results

Multi-Object Tracking on the DanceTrack test set

Methods                    HOTA  DetA  AssA  Checkpoint
MeMOTR                     68.5  80.5  58.4  Google Drive
MeMOTR (Deformable DETR)   63.4  77.0  52.3  Google Drive

Multi-Object Tracking on the SportsMOT test set

For all experiments, we do not use extra data (like CrowdHuman) for training.

Methods                    HOTA  DetA  AssA  Checkpoint
MeMOTR                     70.0  83.1  59.1  Google Drive
MeMOTR (Deformable DETR)   68.8  82.0  57.8  Google Drive

Multi-Object Tracking on the MOT17 test set

Methods   HOTA  DetA  AssA  Checkpoint
MeMOTR    58.8  59.6  58.4  Google Drive

Multi-Category Multi-Object Tracking on the BDD100K val set

Methods   mTETA  mLocA  mAssocA  Checkpoint
MeMOTR    53.6   38.1   56.7     Google Drive

Contact

Citation

@InProceedings{MeMOTR,
    author    = {Gao, Ruopeng and Wang, Limin},
    title     = {{MeMOTR}: Long-Term Memory-Augmented Transformer for Multi-Object Tracking},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {9901-9910}
}

Stars

Star History Chart

Acknowledgement
