Reinforcement learning project aimed at having a Kinova Gen3 manipulator catch a ball in the air
- Make sure you have the `OmniIsaacGymEnvs` repository cloned to your device. Follow their installation instructions if not.
- In the directory `OmniIsaacGymEnvs/omniisaacgymenvs/cfg/task/`, create a symbolic link to `kinova_ball_catching_RL/config/KinovaTask.yaml`.
- In the directory `OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/`, create a symbolic link to `kinova_ball_catching_RL/config/KinovaTaskPPO.yaml`.
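  For example, a sketch of both links, assuming the two repositories are cloned side by side under your home directory (adjust the paths to your machine; absolute paths are safest for symlink targets):

  ```bash
  # Link this repo's task and train configs into OmniIsaacGymEnvs.
  # Both repo locations are placeholders; use your actual paths.
  ln -s ~/kinova_ball_catching_RL/config/KinovaTask.yaml \
        ~/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/task/KinovaTask.yaml
  ln -s ~/kinova_ball_catching_RL/config/KinovaTaskPPO.yaml \
        ~/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/KinovaTaskPPO.yaml
  ```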
- Navigate to `OmniIsaacGymEnvs/omniisaacgymenvs/utils/task_util.py`.
- Inside the `import_tasks()` function, add `from kinova_task import KinovaTask`.
- Inside the `task_map` dictionary, add an entry `"KinovaTask": KinovaTask`. A sketch of both edits follows below.
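  A minimal sketch of where the two additions land, assuming `task_map` is built inside `import_tasks()`; the elided imports and entries are OmniIsaacGymEnvs' own and stay as they are:

  ```python
  # utils/task_util.py (sketch; only the two marked lines are additions)
  def import_tasks():
      # ... OmniIsaacGymEnvs' own task imports stay unchanged ...
      from kinova_task import KinovaTask  # added: resolved via PYTHONPATH (isaac_scripts)

      task_map = {
          # ... existing OmniIsaacGymEnvs task entries stay unchanged ...
          "KinovaTask": KinovaTask,  # added entry
      }
      return task_map
  ```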
- Add the `isaac_scripts` folder in this repo to your `PYTHONPATH` environment variable manually. Example: `export PYTHONPATH=$PYTHONPATH:/path/to/isaac_scripts`.
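  If you want that setting to persist across shells, you can append the same line to your shell startup file (assuming bash):

  ```bash
  # Single quotes keep $PYTHONPATH from being expanded at echo time.
  echo 'export PYTHONPATH=$PYTHONPATH:/path/to/isaac_scripts' >> ~/.bashrc
  ```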
- Navigate to the `OmniIsaacGymEnvs/omniisaacgymenvs` folder.
- Run `/path/to/your/isaac-sim/python.sh scripts/rl_train.py task=KinovaTask`.
- Additional arguments you can pass to that script include (example invocations follow this list):
  - `headless=True`
  - `num_envs=<how many robots you want to spawn>`
  - `test=True` if you want to examine a policy
  - `checkpoint=/path/to/a/checkpoint` (note that you always have to pass this to examine a policy; just setting `test` to `True` will not load a trained policy)
  - `max_iterations=<how many epochs to run>`. The default is 100 and is pretty quick (about 400,000 timesteps with 256 robots); setting it to 1000 takes much longer but gives very good results.
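For example, two sketch invocations combining these arguments (the isaac-sim and checkpoint paths are placeholders, as above):

```bash
# Train headless for 1000 epochs with 256 parallel robots.
/path/to/your/isaac-sim/python.sh scripts/rl_train.py task=KinovaTask \
    headless=True num_envs=256 max_iterations=1000

# Examine the resulting policy; checkpoint= is required, since
# test=True alone will not load a trained policy.
/path/to/your/isaac-sim/python.sh scripts/rl_train.py task=KinovaTask \
    test=True checkpoint=/path/to/a/checkpoint
```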