- [2022/05/26] Our paper is published in MDPI Sensors: "Improved Feature-Based Gaze Estimation Using Self-Attention Module and Synthetic Eye Images"
We proposed a feature-based gaze estimation method using 50 eye landmarks. We used HRNet as the backbone network and revised it by applying the CBAM attention module; a gaze vector is then regressed from the resulting accurate eye-region features. We trained the model on the synthetic UnityEyes dataset and the in-the-wild MPIIGaze dataset, and evaluated our method on MPIIGaze.
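The released code defines the exact architecture; purely as an illustration of the CBAM attention module mentioned above (channel attention followed by spatial attention), a minimal PyTorch sketch looks like this. Module names and hyperparameters here are generic, not taken from this repository:

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # shared MLP applied to both avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pool -> MLP
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pool -> MLP
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        # 2 input maps (channel-wise avg and max) -> 1 attention map
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Channel attention then spatial attention, applied sequentially."""

    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))
```

The block is shape-preserving, so it can be inserted after any convolutional stage of a backbone such as HRNet without changing the rest of the network.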
We evaluated gaze estimation and landmark detection performance on UnityEyes and MPIIGaze.
- Landmark detection performance
- Gaze estimation performance
- Outputs on UnityEyes and MPIIGaze (green: ground truth, yellow: predicted)
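Gaze estimation performance is conventionally reported as the mean angular error between predicted and ground-truth 3-D gaze vectors. A minimal NumPy sketch of that metric (the function name is illustrative, not from this repo):

```python
import numpy as np


def angular_error_deg(pred, gt):
    """Mean angle in degrees between rows of two (N, 3) gaze-vector arrays."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    # clip guards against arccos domain errors from floating-point round-off
    cos = np.clip(np.sum(pred * gt, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```

For example, two identical vectors give 0 degrees, and orthogonal vectors give 90 degrees.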
This code is developed with Python 3.6.9 and PyTorch 1.7.0 on Ubuntu 20.04 with NVIDIA RTX 3090 GPUs. Training and testing are performed on 2 NVIDIA RTX 3090 GPUs with CUDA 11.0 and cuDNN 8.1.0.
It also works well on Windows 10 with:
- Python 3.6.9
- PyTorch 1.6.0
- NVIDIA GTX 1080, CUDA 10.1, cuDNN 7.0
Install prerequisites with:
conda create -n forgaze python=3.6
conda activate forgaze
# install PyTorch (pick the build matching your CUDA version at pytorch.org), then:
pip install -r requirements.txt
Red, blue, and yellow denote the eye region, iris region, and gaze vector, respectively.
python tools/demo.py
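The demo's gaze-vector overlay can be reproduced with a small projection from angles to an image-plane arrow. This is an illustrative sketch, not the repo's demo code, and it assumes the common MPIIGaze pitch/yaw convention, where the 3-D gaze direction is (-cos(pitch)*sin(yaw), -sin(pitch), -cos(pitch)*cos(yaw)):

```python
import numpy as np


def gaze_to_2d(pitch, yaw, length=60.0):
    """Project a (pitch, yaw) gaze angle in radians to a 2-D pixel offset.

    Keeps the x/y components of the assumed 3-D gaze direction; y is negated
    for image coordinates (y grows downward), so looking up draws an arrow
    pointing up.
    """
    dx = -length * np.cos(pitch) * np.sin(yaw)
    dy = -length * np.sin(pitch)
    return dx, dy


# hypothetical OpenCV usage, with (cx, cy) the eye center in pixels:
# cv2.arrowedLine(img, (cx, cy), (int(cx + dx), int(cy + dy)), (0, 255, 255), 2)
```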
Please specify a configuration file from experiments (the learning rate should be adjusted when the number of GPUs is changed).
python tools/train.py --cfg <CONFIG-FILE>
# example:
python tools/train.py --cfg experiments/unityeyes/eye_alignment_unityeyes_hrnet_w18.yaml
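The README does not state how to rescale the learning rate for a different GPU count. A common convention, assumed here rather than taken from this repository's configs, is the linear scaling rule: scale the base learning rate in proportion to the number of GPUs (and hence the effective batch size):

```python
def scaled_lr(base_lr, base_gpus, gpus):
    """Linear scaling rule: lr grows proportionally with GPU count.

    This is an assumed convention, not a value from the repository's
    YAML configs; warm-up is often added when scaling up aggressively.
    """
    return base_lr * gpus / base_gpus


# e.g. a config tuned for 2 GPUs, run on 4 GPUs:
# scaled_lr(0.001, base_gpus=2, gpus=4) -> 0.002
```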