Agent code for Multi-Agent Connected Autonomous Driving (MACAD), as described in the paper presented at the Machine Learning for Autonomous Driving Workshop at NeurIPS 2019.

MACAD-Agents

Multi-Agent algorithms for Multi-Agent Connected Autonomous Driving using MACAD-Gym

How to train/test MACAD-Agents?

  1. git clone https://github.com/praveen-palanisamy/macad-agents

If you want to avoid building and running the Docker container, you can follow the instructions in the Running MACAD-Agents without Docker section instead and skip the next two steps.

  2. Build the MACAD-Agents Docker image: docker build --rm -f macad-agents/Dockerfile -t macad-agents:latest .

  3. Run the MACAD-Agents training container: bash run.sh

    You can pick from one of the available multi-agent training options:

    • To train multiple agents using PPO where the agents communicate/share learned weights, modify the last line in run.sh to look like this:

      macad-agents:latest python -m macad_agents.rllib.ppo_multiagent_shared_weights

    • To train multiple agents using IMPALA where the agents communicate/share learned weights, modify the last line in run.sh to look like this:

      macad-agents:latest python -m macad_agents.rllib.impala_multiagent_shared_weights
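In either case, only the module name at the end of run.sh's docker run command changes. A minimal sketch of what the tail of run.sh might look like after editing (the flags shown here are abbreviated placeholders; keep whichever GPU, mount, and -e flags your copy of run.sh already passes):

```shell
# Tail of run.sh (sketch): launch a training script inside the built image.
# Earlier docker-run flags (GPU access, volume mounts, -e variables) omitted.
docker run -it --rm \
    macad-agents:latest \
    python -m macad_agents.rllib.ppo_multiagent_shared_weights
```

Note that python -m takes a dotted module path, so the .py suffix is omitted.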

Running MACAD-Agents without Docker

If you have all the necessary dependencies installed and configured on your host machine, you can run the agent script as shown below: cd macad-agents/src && python -m macad_agents.rllib.ppo_multiagent_shared_weights
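Before the command above will run, the repository's Python dependencies need to be installed and a CARLA server needs to be available. A hedged sketch of the host-side setup (the package names are assumptions drawn from the project's use of MACAD-Gym and RLlib; the repository's own setup files are authoritative, and versions are left unpinned here):

```shell
# Sketch of host-side setup; package names are assumptions, versions unpinned.
git clone https://github.com/praveen-palanisamy/macad-agents
pip install macad-gym      # MACAD-Gym driving environments
pip install "ray[rllib]"   # RLlib, which the PPO/IMPALA agent scripts build on
# A CARLA server binary is also required; the run.sh excerpt below shows the
# environment variables that point the scripts at it.
```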

A brief gist of what you need to set up on your host machine is listed below:

macad-agents/run.sh

Lines 8 to 12 in 35c06f5

-e CARLA_SERVER=/home/software/CARLA/CarlaUE4.sh \
-e CARLA_OUT=./carla_out \
-e XAUTHORITY=$XAUTHORITY \
-e DISPLAY=$DISPLAY \
-e SDL_VIDEODRIVER=offscreen \

Here -e sets an environment variable inside the container, equivalent to running export in a bash shell.
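For the no-Docker path, the same variables can be exported directly in the host shell before launching the agent script. A minimal sketch, assuming CARLA is installed under /home/software/CARLA (adjust the paths for your machine):

```shell
# Host-side equivalents of the container's -e flags (paths are examples).
export CARLA_SERVER=/home/software/CARLA/CarlaUE4.sh  # CARLA server launcher
export CARLA_OUT=./carla_out                          # CARLA output directory
export SDL_VIDEODRIVER=offscreen  # render off-screen; drop this to use a display
```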

Citing

If you find this work or MACAD-Gym useful in your research, please cite:

@misc{palanisamy2019multiagent,
    title={Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning},
    author={Praveen Palanisamy},
    year={2019},
    eprint={1911.04175},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
Citation in other formats:

MLA
Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
APA
Palanisamy, P. (2019). Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
Chicago
Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
Harvard
Palanisamy, P., 2019. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
Vancouver
Palanisamy P. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175. 2019 Nov 11.
