Gaze Estimation Model Based on Eye Region in PyTorch

News

- [2022/05/26] Our paper was published in MDPI Sensors: Improved Feature-Based Gaze Estimation Using Self-Attention Module and Synthetic Eye Images

Introduction

We propose a feature-based gaze estimation method that uses 50 eye-region landmarks. The backbone is HRNet, revised by adding the convolutional block attention module (CBAM); a gaze vector is then regressed from the resulting eye-region features. We trained the model on the synthetic dataset UnityEyes and the real-world, in-the-wild dataset MPIIGaze, and evaluated it on MPIIGaze.
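
For reference, the sketch below is a minimal PyTorch implementation of CBAM as described in the original CBAM paper (channel attention followed by spatial attention). It is an illustrative sketch, not this repo's code; the class names are ours.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze H and W with average- and max-pooling, pass both results
    # through a shared MLP, and gate the channels with the summed,
    # sigmoid-activated output.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        gate = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        return x * gate.view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    # Squeeze the channel axis with average- and max-pooling, then produce
    # a per-pixel gate with a 7x7 convolution.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class CBAM(nn.Module):
    # Channel attention first, spatial attention second, as in the paper.
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial(self.channel(x))

A block like this can be dropped after any convolutional stage of a backbone such as HRNet without changing the feature map's shape.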

Performance

We evaluated landmark detection and gaze estimation performance on UnityEyes and MPIIGaze.

  • Landmark detection performance (figure).

  • Gaze estimation performance (figure).

  • Example outputs on UnityEyes and MPIIGaze (figure; green: ground truth, yellow: predicted).
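
Gaze accuracy on MPIIGaze is conventionally reported as the mean angular error between predicted and ground-truth 3D gaze vectors. Below is a small sketch of that metric; the function name is ours, and (N, 3) direction vectors are an assumption.

import math
import torch
import torch.nn.functional as F

def mean_angular_error_deg(pred, gt):
    # pred, gt: (N, 3) gaze direction vectors; they need not be unit length.
    cos = (F.normalize(pred, dim=1) * F.normalize(gt, dim=1)).sum(dim=1)
    # Clamp guards acos against floating-point values just outside [-1, 1].
    return (torch.acos(cos.clamp(-1.0, 1.0)) * 180.0 / math.pi).mean()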

Quick Start

Environment

This code was developed with Python 3.6.9 and PyTorch 1.7.0 on Ubuntu 20.04. Training and testing were performed on 2 NVIDIA RTX 3090 GPUs with CUDA 11.0 and cuDNN 8.1.0.
It also works well on Windows 10 with:

  • Python 3.6.9
  • PyTorch 1.6.0
  • NVIDIA GTX 1080, CUDA 10.1, cuDNN 7.0

Install

Install prerequisites with:

# create and activate a conda environment
conda create -n forgaze python=3.6.9
conda activate forgaze
# install PyTorch (pick the build matching your CUDA version), then the remaining packages
pip install -r requirements.txt

Demo

In the demo output (figure), red, blue, and yellow mark the eye region, the iris region, and the gaze vector, respectively.

python tools/demo.py
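
As a rough illustration of such an overlay with OpenCV, the sketch below draws the three kinds of outputs; the function name, point layouts, and argument conventions are assumptions based on the description above, not this repo's actual interface.

import cv2
import numpy as np

def draw_outputs(img, eye_pts, iris_pts, gaze, eye_center, length=60):
    # img: BGR image; eye_pts/iris_pts: (N, 2) landmark arrays;
    # gaze: 2D direction of the gaze vector projected into the image plane.
    for x, y in np.asarray(eye_pts, dtype=int):
        cv2.circle(img, (x, y), 1, (0, 0, 255), -1)        # red: eye region
    for x, y in np.asarray(iris_pts, dtype=int):
        cv2.circle(img, (x, y), 1, (255, 0, 0), -1)        # blue: iris
    cx, cy = (int(v) for v in eye_center)
    tip = (int(cx + length * gaze[0]), int(cy + length * gaze[1]))
    cv2.arrowedLine(img, (cx, cy), tip, (0, 255, 255), 2)  # yellow: gaze
    return img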

Train

Please specify the configuration file in experiments (the learning rate should be adjusted when the number of GPUs changes).

python tools/train.py --cfg <CONFIG-FILE>
# example:
python tools/train.py --cfg experiments/unityeyes/eye_alignment_unityeyes_hrnet_w18.yaml
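
The README does not specify how to rescale the learning rate; a common heuristic is the linear scaling rule, sketched below with illustrative numbers (the base values are assumptions, not taken from this repo's configs).

# Linear scaling rule (a common heuristic, not necessarily this repo's policy):
# with a fixed per-GPU batch size, scale the learning rate with the GPU count.
base_lr = 1e-3     # illustrative; read the real value from the .yaml config
base_gpus = 2      # reference setup from the Environment section
gpus = 1           # your setup
scaled_lr = base_lr * gpus / base_gpus   # -> 5e-4 on a single GPU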