Multi-Agent Models
This folder includes 10 policy-based multi-agent reinforcement learning algorithms.
class models.coma.COMA(args, target_net=None)
class models.facmaddpg.FACMADDPG(args, target_net=None)
class models.iac.IAC(args, target_net=None)
class models.iddpg.IDDPG(args, target_net=None)
class models.ippo.IPPO(args, target_net=None)
class models.maac.MAAC(args, target_net=None)
class models.maddpg.MADDPG(args, target_net=None)
class models.mappo.MAPPO(args, target_net=None)
class models.matd3.MATD3(args, target_net=None)
class models.sqddpg.SQDDPG(args, target_net=None)
If target_net=None, the model does not instantiate a target network; otherwise, a target network is instantiated.
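For illustration, a minimal sketch of constructing a model together with a target network follows; the variable names and the surrounding setup are assumptions, not the repo's actual training code.
# Illustrative only; the actual training script may wire this up differently.
from models.maddpg import MADDPG

# `args` is assumed to be the parsed configuration namespace used throughout the repo.
target_net = MADDPG(args)                            # no target network of its own
behaviour_net = MADDPG(args, target_net=target_net)  # behaviour network tracking a target network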
To add a new multi-agent model, implement a class that inherits from class model.Model(args) under the models folder. Specifically, you must implement the following functions.
def construct_value_net(self)
def get_loss(self)
def get_actions(self)
def value(self, obs, act, last_act=None, last_hid=None)
def construct_model(self)
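As a starting point, a minimal skeleton for a hypothetical new model (e.g. models/mymodel.py) could look like the sketch below; the method bodies are placeholders and the details of the base class API are assumptions, so adapt them to your algorithm.
# Hypothetical skeleton for models/mymodel.py; method bodies are placeholders.
from .model import Model


class MYMODEL(Model):
    def __init__(self, args, target_net=None):
        super(MYMODEL, self).__init__(args)
        self.construct_model()
        if target_net is not None:
            self.target_net = target_net  # assumed convention: keep a reference to the target network

    def construct_value_net(self):
        # Build the critic / value network(s); the architecture is algorithm-specific.
        raise NotImplementedError

    def construct_model(self):
        # Typically builds the policy and value networks, e.g. by calling construct_value_net().
        raise NotImplementedError

    def get_actions(self):
        # Sample or select actions from the policy; algorithm-specific.
        raise NotImplementedError

    def value(self, obs, act, last_act=None, last_hid=None):
        # Return value estimates for the given observations and actions.
        raise NotImplementedError

    def get_loss(self):
        # Compute the policy and value losses used for the update.
        raise NotImplementedError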
Next, you need to register your new multi-agent model in model_registry.py, e.g.:
from .maddpg import MADDPG
from .sqddpg import SQDDPG
from .iac import IAC
from .iddpg import IDDPG
from .coma import COMA
from .maac import MAAC
from .matd3 import MATD3
from .ippo import IPPO
from .mappo import MAPPO
from .facmaddpg import FACMADDPG
from .[your model] import [your model]
Model = dict(maddpg=MADDPG,
sqddpg=SQDDPG,
iac=IAC,
iddpg=IDDPG,
coma=COMA,
maac=MAAC,
matd3=MATD3,
ippo=IPPO,
mappo=MAPPO,
facmaddpg=FACMADDPG,
[your model]=[your model]
)
Strategy = dict(maddpg='pg',
sqddpg='pg',
iac='pg',
iddpg='pg',
coma='pg',
maac='pg',
matd3='pg',
ippo='pg',
mappo='pg',
facmaddpg='pg',
[your model]='pg'
)
Since this repo currently supports only policy-based algorithms, the strategy should be set to pg by default.
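A hedged sketch of how a registered model might then be looked up by name follows; the import path and the `args` variable are assumptions about how the registry is consumed.
# Illustrative lookup only; the actual entry point may differ.
from models.model_registry import Model, Strategy

model_name = 'maddpg'             # or the key you registered for your new model
ModelClass = Model[model_name]    # e.g. MADDPG
strategy = Strategy[model_name]   # 'pg' for all currently supported models
behaviour_net = ModelClass(args)  # `args` is the run configuration namespace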