
Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow

Links: arXiv · YouTube

This repository contains the code for the experiments presented in the paper Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow (MEow).
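For background, MEow works within the standard maximum-entropy RL framework, in which the policy maximizes expected return plus an entropy bonus weighted by a temperature α. This is the textbook objective (as used in, e.g., Soft Actor-Critic), not a formula taken from this README:

```math
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\left[\, r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \,\right]
```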


Directory Structure

  • Use the code in meow/toy to reproduce the experimental results presented in Section 4.1 of our paper.
  • Use the code in meow/cleanrl to reproduce the experimental results presented in Section 4.2 of our paper.
  • Use the code in meow/skrl to reproduce the experimental results presented in Section 4.3 of our paper.
  • Use the code in meow/plot to reproduce the figures presented in our paper.
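As a rough illustration of the kind of model these directories implement, here is a minimal sketch of a state-conditioned normalizing-flow policy with an exact log-density. This is not the authors' implementation: all class and variable names (`AffineCoupling`, `FlowPolicy`, layer sizes) are illustrative assumptions, and it only shows the generic flow-policy mechanics; in MEow, a single energy-based flow additionally serves as the value/energy model.

```python
# Minimal sketch (PyTorch) of a normalizing-flow policy pi(a|s) with exact
# log-probability. NOT the authors' code; names and sizes are illustrative.
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """One affine coupling layer conditioned on the state s."""

    def __init__(self, act_dim, state_dim, hidden=64):
        super().__init__()
        self.half = act_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (act_dim - self.half)),
        )

    def forward(self, z, s):
        # Keep the first half fixed; scale/shift the second half based on it.
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_scale, shift = self.net(torch.cat([z1, s], dim=-1)).chunk(2, dim=-1)
        y2 = z2 * torch.exp(log_scale) + shift
        logdet = log_scale.sum(dim=-1)  # log |det Jacobian| of this layer
        return torch.cat([z1, y2], dim=-1), logdet


class FlowPolicy(nn.Module):
    """pi(a|s): a Gaussian base pushed through coupling layers. Because the
    log-density is exact, it can double as a (negative) energy function."""

    def __init__(self, act_dim, state_dim, n_layers=4):
        super().__init__()
        self.act_dim = act_dim
        self.layers = nn.ModuleList(
            AffineCoupling(act_dim, state_dim) for _ in range(n_layers)
        )

    def sample(self, s):
        z = torch.randn(s.shape[0], self.act_dim)
        base = torch.distributions.Normal(0.0, 1.0)
        logp = base.log_prob(z).sum(dim=-1)
        for layer in self.layers:
            z, logdet = layer(z, s)
            logp = logp - logdet   # change-of-variables formula
            z = z.flip(-1)         # permute halves (unit Jacobian determinant)
        return z, logp             # action and exact log pi(a|s)


policy = FlowPolicy(act_dim=2, state_dim=4)
actions, log_probs = policy.sample(torch.randn(8, 4))  # shapes: (8, 2), (8,)
```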

License

To maintain reproducibility, we froze the released versions of the following repositories and list their licenses below:

Further changes based on the repositories above are licensed under the MIT License.


Cite this Repository

If you find this repository useful, please consider citing our paper:

@inproceedings{chao2024maximum,
    title={Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow},
    author={Chao, Chen-Hao and Feng, Chien and Sun, Wei-Fang and Lee, Cheng-Kuang and See, Simon and Lee, Chun-Yi},
    booktitle={Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS)},
    year={2024}
}

Contributors of the Code Implementation

(In the original README, contributor avatar images appear here; each links to that contributor's GitHub page.)