From 3eab78ba2e11e2107dd89c4e5d9d1dc0ab4aafa3 Mon Sep 17 00:00:00 2001
From: Edouard Leurent
Date: Sun, 8 Dec 2024 13:17:10 +0000
Subject: [PATCH] Fix docs

---
 docs/quickstart.md | 24 +++++------------------
 1 file changed, 5 insertions(+), 19 deletions(-)

diff --git a/docs/quickstart.md b/docs/quickstart.md
index 7c706dfb1..e4e28d444 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -55,8 +55,12 @@ For example, the number of lanes can be changed with:
 
   env = gymnasium.make(
       "highway-v0",
-      config={"lanes_count": 2}
+      config={"lanes_count": 2},
+      render_mode='rgb_array',
   )
+  env.reset()
+  plt.imshow(env.render())
+  plt.show()
 ```
 
 After environment creation, the configuration can be accessed using the
@@ -70,24 +74,6 @@ After environment creation, the configuration can be accessed using the
   pprint.pprint(env.unwrapped.config)
 ```
 
-The config can also be changed after creation
-
-
-```{eval-rst}
-.. jupyter-execute::
-  env = gymnasium.make("highway-v0", render_mode='rgb_array')
-  env.unwrapped.config["lanes_count"] = 2
-  env.reset()
-  plt.imshow(env.render())
-  plt.show()
-```
-
-
-```{note}
-The environment must be {py:meth}`~highway_env.envs.common.abstract.AbstractEnv.reset` for the change of configuration
-to be effective.
-```
-
 ## Training an agent
 
 Reinforcement Learning agents can be trained using libraries such as [eleurent/rl-agents](https://github.com/eleurent/rl-agents),
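
For reference, below is a minimal sketch of the usage pattern the patched quickstart documents: the configuration dict is passed directly to `gymnasium.make()` and the environment is reset before rendering. It assumes `gymnasium`, `highway-env`, and `matplotlib` are installed, and that importing `highway_env` registers the `highway-v0` environment; it is not part of the patch itself.

```python
# Sketch of the pattern shown in the patched docs (not part of the patch).
import gymnasium
import highway_env  # noqa: F401 -- assumed to register highway-v0 on import
import matplotlib.pyplot as plt

# Configuration is passed at creation time; render_mode='rgb_array' makes
# env.render() return an image array suitable for plt.imshow().
env = gymnasium.make(
    "highway-v0",
    config={"lanes_count": 2},
    render_mode="rgb_array",
)
env.reset()  # reset before rendering so the configuration takes effect
plt.imshow(env.render())
plt.show()
```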