
Cloud0723/Offline-MLIRL


When Demonstrations meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning

Offline ML-IRL is an algorithm for offline inverse reinforcement learning; it is described in the accompanying article (arXiv link).

A link to our online version is available here.

Installation

  • PyTorch 1.13.1
  • MuJoCo 2.1.0
  • pip install -r requirements.txt
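After installing, a quick sanity check can confirm the pinned dependencies import correctly. This is a convenience sketch, not part of the repository; in particular, `mujoco_py` is an assumption about which MuJoCo Python bindings the code uses.

```python
# Environment sanity check (convenience sketch, not part of the repo).
# Verifies the pinned dependencies listed above can be imported.
import importlib

for module, pinned in [("torch", "1.13.1"), ("mujoco_py", "2.1")]:
    try:
        mod = importlib.import_module(module)
        version = getattr(mod, "__version__", "unknown")
        print(f"{module} {version} found (expected ~{pinned})")
    except ImportError:
        print(f"{module} not found - install it before running experiments")
```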

File Structure

  • Experiment results: data/
  • Configurations: args_yml/
  • Expert demonstrations: expert_data/

Instructions

  • Run all experiments from the repository root folder.
  • After a run finishes, the training logs appear in the data/ folder.

Experiments

All the commands below are also provided in run.sh.

Offline-IRL benchmark (MuJoCo)

Before running an experiment, you can download our expert demonstrations and our trained world model here.

python train.py --yaml_file args_yml/model_base_IRL/halfcheetah_v2_medium.yml --seed 0 --uuid halfcheetah_result 

Alternatively, you can run:

./run.sh
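To sweep several seeds over the same configuration, a small launcher sketch is shown below. It reuses the flags from the train.py command above; the seed list and uuid naming scheme are hypothetical, and the actual launch call is commented out so the script only previews the command lines.

```python
# Hypothetical multi-seed launcher for the train.py command shown above.
import shlex
import subprocess  # uncomment the run() call below to actually launch

CONFIG = "args_yml/model_base_IRL/halfcheetah_v2_medium.yml"

for seed in (0, 1, 2):
    cmd = (
        f"python train.py --yaml_file {CONFIG} "
        f"--seed {seed} --uuid halfcheetah_result_seed{seed}"
    )
    print(cmd)  # preview the command line
    # subprocess.run(shlex.split(cmd), check=True)  # launch for real
```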

Performance

[Performance graphs]