Commit: fix sphinx warnings

cpnota committed Mar 17, 2024
1 parent 4da0908 commit 4b7551e
Showing 2 changed files with 7 additions and 7 deletions.
docs/source/conf.py (2 changes: 1 addition & 1 deletion)
@@ -72,4 +72,4 @@
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+# html_static_path = ['_static']
docs/source/guide/basic_concepts.rst (12 changes: 6 additions & 6 deletions)
@@ -160,8 +160,8 @@ A few other quick things to note: ``f.no_grad(x)`` runs a forward pass with ``torch.no_grad()``.
``f.target(x)`` calls the *target network* associated with the ``Approximation`` (an advanced concept used in algorithms such as DQN; see, for example, David Silver's `course notes <http://www0.cs.ucl.ac.uk/staff/d.silver/web/Talks_files/deep_rl.pdf>`_), also with ``torch.no_grad()``.
The ``autonomous-learning-library`` provides a few thin wrappers over ``Approximation`` for particular purposes, such as ``QNetwork``, ``VNetwork``, ``FeatureNetwork``, and several ``Policy`` implementations.
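For context, here is a minimal sketch of how an ``Approximation`` might be constructed and used. The constructor signature shown is an assumption inferred from the surrounding prose, not a verbatim excerpt from the library:

.. code-block:: python

    import torch
    from torch import nn
    from all.approximation import Approximation

    # A small value network; the exact module is arbitrary for this sketch.
    model = nn.Linear(4, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Assumed constructor: Approximation(model, optimizer)
    f = Approximation(model, optimizer)

    x = torch.randn(2, 4)
    y = f.no_grad(x)   # forward pass wrapped in torch.no_grad()
    t = f.target(x)    # forward pass through the target network, also without gradients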

-Environments
-------------
+ALL Environments
+----------------

The importance of the ``Environment`` in reinforcement learning nearly goes without saying.
In the ``autonomous-learning-library``, the prepackaged environments are simply wrappers for `OpenAI Gym <http://gym.openai.com>`_, the de facto standard library for RL environments.
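As a quick illustration, wrapping a Gym environment might look like the following; this sketch assumes the wrapper is exposed as ``all.environments.GymEnvironment``:

.. code-block:: python

    from all.environments import GymEnvironment

    # Wrap a standard Gym environment (assumed wrapper class).
    env = GymEnvironment('CartPole-v0')
    env.reset()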
@@ -216,8 +216,8 @@ Of course, this control loop is not exactly feature-packed.
Generally, it's better to use the ``Experiment`` module described later.
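The control loop itself is collapsed in this diff. A rough sketch of such a loop, under the assumption that the environment exposes a ``state`` object with a ``done`` flag and that the agent consumes that state directly, might look like:

.. code-block:: python

    # Hypothetical minimal control loop; the actual example is collapsed above.
    env.reset()
    while not env.state.done:
        env.step(agent.act(env.state))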


-Presets
--------
+ALL Presets
+-----------

In the ``autonomous-learning-library``, agents are *compositional*, which means that the behavior of a given ``Agent`` depends on the behavior of several other objects.
Users can compose agents with specific behavior by passing appropriate objects into the constructor of the high-level algorithms contained in ``all.agents``.
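To make the compositional idea concrete, here is a hedged sketch of constructing an agent from a preset; the ``dqn`` preset and the builder-style chain are assumptions about the API, not a confirmed excerpt:

.. code-block:: python

    from all.environments import GymEnvironment
    from all.presets.classic_control import dqn

    env = GymEnvironment('CartPole-v0')

    # Assumed builder-style preset API.
    preset = dqn.env(env).build()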
@@ -274,8 +274,8 @@ If a ``Preset`` is loaded from disk, then we can instantiate a test ``Agent`` us



-Experiment
-----------
+ALL Experiments
+---------------

Finally, we have all of the components necessary to introduce the ``run_experiment`` helper function.
``run_experiment`` is the built-in control loop for running reinforcement learning experiments.
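A hedged sketch of calling it, assuming it accepts a list of presets, a list of environments, and a frame budget:

.. code-block:: python

    from all.environments import GymEnvironment
    from all.experiments import run_experiment
    from all.presets.classic_control import dqn

    # Assumed signature: run_experiment(agents, envs, frames)
    run_experiment([dqn], [GymEnvironment('CartPole-v0')], 50000)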
