When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning
Offline ML-IRL is an algorithm for offline inverse reinforcement learning, described in the accompanying arXiv article: arxiv link
Here is the link to our online version.
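At its core, ML-IRL fits reward parameters by maximizing the likelihood of expert demonstrations under the soft-optimal policy induced by a (learned) world model. The following is a minimal toy sketch of that idea in a tabular MDP; the transition matrix `P`, the demonstrations, and the finite-difference updates are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def soft_q(reward, P, gamma=0.9, iters=200):
    """Soft value iteration under a world model P[s, a, s']."""
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        m = Q.max(axis=1)
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))  # stable logsumexp
        Q = reward[:, None] + gamma * (P @ V)
    return Q

def log_likelihood(reward, demos, P):
    """Log-likelihood of expert (state, action) pairs under the soft-optimal policy."""
    Q = soft_q(reward, P)
    m = Q.max(axis=1, keepdims=True)
    logpi = Q - (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True)))
    return sum(logpi[s, a] for s, a in demos)

# Toy world model (assumption): 2 states, 2 actions; action 1 tends to reach state 1.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.9, 0.1], [0.1, 0.9]]])
demos = [(0, 1), (1, 1), (0, 1), (1, 0)]  # expert mostly takes action 1

theta = np.zeros(2)  # one reward parameter per state
for _ in range(200):
    grad = np.zeros(2)
    for i in range(2):  # finite-difference gradient of the demo likelihood
        e = np.zeros(2); e[i] = 1e-4
        grad[i] = (log_likelihood(theta + e, demos, P)
                   - log_likelihood(theta - e, demos, P)) / 2e-4
    theta += 0.1 * grad  # gradient ascent on the likelihood
```

After these updates the recovered reward favors state 1, the state the expert's preferred action tends to reach. The actual repository replaces the tabular pieces with neural reward networks, a learned MuJoCo world model, and a soft actor-critic policy step.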
Requirements:
- PyTorch 1.13.1
- MuJoCo 2.1.0
- `pip install -r requirements.txt`
- Experiment results: `data/`
- Configurations: `args_yml/`
- Expert demonstrations: `expert_data/`
- All experiments should be run from the root folder.
- After running, the training logs appear in the `data/` folder.
All the commands below are also provided in `run.sh`.
Before running the experiments, you can download our expert demonstrations and our trained world model here.
```
python train.py --yaml_file args_yml/model_base_IRL/halfcheetah_v2_medium.yml --seed 0 --uuid halfcheetah_result
```
Alternatively, you can use:
```
./run.sh
```
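The flags in the training command above (`--yaml_file`, `--seed`, `--uuid`) suggest a command-line interface along the following lines. This is a hypothetical sketch of how such flags could be parsed, not the actual `train.py`:

```python
import argparse

# Hypothetical CLI sketch; the repository's real argument handling may differ.
parser = argparse.ArgumentParser(description="Offline ML-IRL training")
parser.add_argument("--yaml_file", type=str, required=True,
                    help="experiment config, e.g. args_yml/model_base_IRL/halfcheetah_v2_medium.yml")
parser.add_argument("--seed", type=int, default=0, help="random seed")
parser.add_argument("--uuid", type=str, default="default",
                    help="run name; logs would land under data/<uuid>")

# Parse the exact flags used in the example command.
args = parser.parse_args(["--yaml_file",
                          "args_yml/model_base_IRL/halfcheetah_v2_medium.yml",
                          "--seed", "0", "--uuid", "halfcheetah_result"])
```

The YAML file would then select the environment, dataset, and world-model settings for the run, while `--seed` and `--uuid` distinguish repeated runs of the same configuration.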