Retitle tutorials #769

Merged · 5 commits · Jul 20, 2023
5 changes: 5 additions & 0 deletions README.md
@@ -106,6 +106,11 @@ Finally, run the notebooks with
$ jupyter-notebook notebooks
```

Alternatively, you can copy and paste the tutorials into fresh notebooks and avoid installing the library from source. To ensure you have the required plotting dependencies, simply run:
```bash
$ pip install trieste[plotting]
```

## The Trieste Community

### Getting help
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -28,7 +28,7 @@ output :math:`f: X \to \mathbb R`, this is
.. math:: \mathop{\mathrm{argmin}}_{x \in X} f(x) \qquad .

When the objective function has higher-dimensional output, we can still talk of finding the minima,
though the optimal values will form a Pareto set rather than a single point. Trieste provides
though the optimal values will form a `Pareto set <https://en.wikipedia.org/wiki/Pareto_front>`_ rather than a single point. Trieste provides
functionality for optimization of single-valued objective functions, and supports extension to the
higher-dimensional case. It also supports optimization over constrained spaces, learning the
constraints alongside the objective.
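For orientation, here is a minimal sketch of such a single-objective optimization run, following the high-level API shown in Trieste's README; the quadratic toy objective and all numbers are illustrative only, not part of this change.

```python
import tensorflow as tf
import trieste
from trieste.models.gpflow import GaussianProcessRegression, build_gpr
from trieste.objectives.utils import mk_observer

# Illustrative single-valued objective: a quadratic bowl over the unit square.
def objective(x: tf.Tensor) -> tf.Tensor:
    return tf.reduce_sum((x - 0.5) ** 2, axis=-1, keepdims=True)

search_space = trieste.space.Box([0.0, 0.0], [1.0, 1.0])
observer = mk_observer(objective)

# A few random initial observations to fit the first surrogate model.
initial_data = observer(search_space.sample(5))
model = GaussianProcessRegression(build_gpr(initial_data, search_space))

# Run 15 steps of Bayesian optimization with the default acquisition rule.
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
result = bo.optimize(15, initial_data, model)
print(result.try_get_final_dataset())
```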
@@ -1,5 +1,5 @@
# %% [markdown]
# # Active Learning for Gaussian Process Classification Model
# # Active Learning for binary classification

# %%
import gpflow
2 changes: 1 addition & 1 deletion docs/notebooks/asynchronous_greedy_multiprocessing.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Asynchronous Bayesian optimization with Trieste
# # Asynchronous Bayesian Optimization
#
# In this notebook we demonstrate Trieste's ability to perform asynchronous Bayesian optimisation, as is suitable for scenarios where the objective function can be run for several points in parallel but where observations might return at different times. To avoid wasting resources waiting for the evaluation of the whole batch, we immediately request the next point asynchronously, taking into account points that are still being evaluated. Besides saving resources, the asynchronous approach can also potentially [improve sample efficiency](https://arxiv.org/abs/1901.10452) in comparison with synchronous batch strategies, although this is highly dependent on the use case.
#
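For context, the asynchronous workflow described above is built on Trieste's Ask-Tell interface. The following is only a rough, sequential sketch of that loop, assuming `search_space`, `observer`, `initial_data` and `model` are set up as in the other tutorials; the notebook itself distributes the observer calls across worker processes.

```python
from trieste.ask_tell_optimization import AskTellOptimizer

ask_tell = AskTellOptimizer(search_space, initial_data, model)

for _ in range(10):
    new_points = ask_tell.ask()      # propose point(s), accounting for pending evaluations
    new_data = observer(new_points)  # in the real setting this runs asynchronously in a worker
    ask_tell.tell(new_data)          # report each observation as soon as it arrives
```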
2 changes: 1 addition & 1 deletion docs/notebooks/asynchronous_nongreedy_batch_ray.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Asynchronous batch Bayesian optimization
# # Asynchronous batch Bayesian Optimization
#
# As shown in the [Asynchronous Bayesian Optimization](asynchronous_greedy_multiprocessing.ipynb) tutorial, Trieste provides support for running observations asynchronously. In that tutorial we used a greedy batch acquisition function called Local Penalization, and requested one new point whenever an observation was received. We also used the Python multiprocessing module to run distributed observations in parallel.
#
2 changes: 1 addition & 1 deletion docs/notebooks/batch_optimization.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Batch Bayesian Optimization with Batch Expected Improvement, Local Penalization, Kriging Believer and GIBBON
# # Batch Bayesian Optimization

# %% [markdown]
# Sometimes it is practically convenient to query several points at a time. This notebook demonstrates four ways to perform batch Bayesian optimization with Trieste.
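As a rough illustration of the batch setting (reusing the names from the sketch near the top of this page, and assuming the `LocalPenalization` builder referenced in the old title), several points per step can be requested by passing `num_query_points` to the acquisition rule:

```python
from trieste.acquisition import LocalPenalization
from trieste.acquisition.rule import EfficientGlobalOptimization

# Ask for 5 points per optimization step; other batch builders (e.g. batch
# expected improvement or GIBBON) can be swapped in the same way.
batch_rule = EfficientGlobalOptimization(
    builder=LocalPenalization(search_space),
    num_query_points=5,
)
result = bo.optimize(10, initial_data, model, acquisition_rule=batch_rule)
```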
2 changes: 1 addition & 1 deletion docs/notebooks/data_transformation.pct.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# %% [markdown]
# # Data transformation with the help of Ask-Tell interface.
# # Data transformation

# %%
import os
4 changes: 2 additions & 2 deletions docs/notebooks/deep_ensembles.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Bayesian optimization with deep ensembles
# # Deep ensembles
#
# Gaussian processes as surrogate models are hard to beat on smaller datasets and optimization budgets. However, they scale poorly with the amount of data, cannot easily capture non-stationarities, and are rather slow at prediction time. Here we show how uncertainty-aware neural networks can be an effective alternative to Gaussian processes in Bayesian optimisation, in particular for large budgets, non-stationary objective functions, or when predictions need to be made quickly.
#
@@ -25,7 +25,7 @@


# %% [markdown]
# ## Deep ensembles
# ## What are deep ensembles?
#
# Deep neural networks typically output only mean predictions, not posterior distributions as probabilistic models such as Gaussian processes do. Posterior distributions encode not only mean predictions but also *epistemic* uncertainty - the type of uncertainty that stems from model misspecification and can be eliminated with further data. Aleatoric uncertainty, which stems from the stochasticity of the data-generating process, is not contained in the posterior, but can be learned from the data. Bayesian optimization requires probabilistic models because epistemic uncertainty plays a key role in balancing exploration and exploitation.
#
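A rough sketch of how such a model slots into the optimization loop (the default builder arguments used here are an assumption; the notebook body gives the definitive configuration):

```python
from trieste.models.keras import DeepEnsemble, build_keras_ensemble

# Build an ensemble of small feed-forward networks from the initial data and
# wrap it so it exposes the probabilistic interface the optimizer expects.
keras_ensemble = build_keras_ensemble(initial_data)
model = DeepEnsemble(keras_ensemble)
```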
2 changes: 1 addition & 1 deletion docs/notebooks/deep_gaussian_processes.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Using deep Gaussian processes with GPflux for Bayesian optimization.
# # Deep Gaussian processes

# %%
import numpy as np
2 changes: 1 addition & 1 deletion docs/notebooks/expected_improvement.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Noise-free optimization with Expected Improvement
# # Introduction to Bayesian Optimization

# %%
import numpy as np
2 changes: 1 addition & 1 deletion docs/notebooks/explicit_constraints.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Explicit Constraints
# # Explicit constraints

# %% [markdown]
# This notebook demonstrates ways to perform Bayesian optimization with Trieste in the presence of explicit input constraints.
2 changes: 1 addition & 1 deletion docs/notebooks/failure_ego.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # EGO with a failure region
# # Failure regions

# %%
from __future__ import annotations
2 changes: 1 addition & 1 deletion docs/notebooks/feasible_sets.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Bayesian active learning of failure or feasibility regions
# # Active learning of feasibility regions
#
# When designing a system it is important to identify design parameters that may affect the reliability of the system and cause failures, or lead to unsatisfactory performance. Consider designing a communication network that, for some design parameters, would lead to unacceptably long delays for users. The designer of the system would then decide on the maximum acceptable delay and want to identify a *failure region* in the parameter space that leads to longer delays, or conversely, a *feasible region* with safe performance.
#
2 changes: 1 addition & 1 deletion docs/notebooks/multi_objective_ehvi.pct.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# %% [markdown]
# # Multi-objective optimization with Expected HyperVolume Improvement
# # Multi-objective optimization

# %%
import math
2 changes: 1 addition & 1 deletion docs/notebooks/multifidelity_modelling.pct.py
@@ -16,7 +16,7 @@
import gpflow.kernels

# %% [markdown]
# # Multifidelity Modelling with Autoregressive Model
# # Multifidelity Modelling
#
# This tutorial demonstrates the usage of the `MultifidelityAutoregressive` model for fitting multifidelity data. This is an implementation of the AR1 model initially described in <cite data-cite="Kennedy2000"/>.
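For reference, the AR1 structure mentioned here ties each fidelity to the one below it through a learned scaling factor plus an independent discrepancy process; in the two-fidelity case this is the standard Kennedy and O'Hagan formulation (paraphrased, not quoted from the notebook):

.. math:: f_{\text{high}}(x) = \rho \, f_{\text{low}}(x) + \delta(x)

where :math:`\rho` is learned from data and :math:`\delta` is a Gaussian process modelling the discrepancy between fidelities.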

2 changes: 1 addition & 1 deletion docs/notebooks/openai_gym_lunar_lander.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Trieste meets OpenAI Gym
# # OpenAI Gym
#
# This notebook demonstrates how to use Trieste to apply Bayesian optimization to a problem that is slightly more practical than the classical optimization benchmarks used in other tutorials. We will use OpenAI Gym, which is a popular toolkit for reinforcement learning (RL) algorithms.
#
2 changes: 1 addition & 1 deletion docs/notebooks/qhsri-tutorial.pct.py
@@ -13,7 +13,7 @@
# ---

# %% [markdown]
# # Batch HSRI Tutorial
# # Batching with Sharpe Ratio

# %% [markdown]
# Batch Hypervolume Sharpe Ratio Indicator (qHSRI) is a method proposed by Binois et al. (see <cite data-cite="binois2021portfolio"/>) for picking a batch of query points during Bayesian Optimisation. It makes use of the Sharpe ratio, a measure used in investment portfolio selection to carefully balance risk and reward.
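For readers unfamiliar with the finance term, the Sharpe ratio of an asset with expected return :math:`\mu`, return standard deviation :math:`\sigma`, and risk-free rate :math:`r` is (standard definition, not taken from the notebook):

.. math:: S = \frac{\mu - r}{\sigma}

so, loosely speaking, qHSRI favours batches whose expected reward is large relative to their risk.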
2 changes: 1 addition & 1 deletion docs/notebooks/recovering_from_errors.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Recovering from errors during optimization
# # Recovering from errors

# %%
import numpy as np
2 changes: 1 addition & 1 deletion docs/notebooks/rembo.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # High-dimensional Bayesian optimization with Random EMbedding Bayesian Optimization (REMBO).
# # High-dimensional Bayesian Optimization
# This notebook demonstrates a simple method for optimizing a high-dimensional (100-D) problem, where standard BO methods have trouble.
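The core trick behind REMBO, sketched below with purely hypothetical names (`high_dim_objective` stands in for the notebook's 100-D benchmark), is to fix a random linear embedding and run ordinary low-dimensional BO through it:

```python
import numpy as np

D, d = 100, 6                        # ambient and embedding dimensions (illustrative)
A = np.random.randn(D, d)            # fixed random embedding matrix

def high_dim_objective(x: np.ndarray) -> float:
    # Stand-in for the notebook's 100-D benchmark function.
    return float(np.sum(x ** 2))

def embedded_objective(z: np.ndarray) -> float:
    x = np.clip(A @ z, -1.0, 1.0)    # map the low-dimensional point into the 100-D box
    return high_dim_objective(x)

# Standard Bayesian optimization (as in the other tutorials) then runs over the
# d-dimensional z-space while all objective evaluations happen in 100 dimensions.
```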

# %%
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# %% [markdown]
# # Scalable Thompson Sampling using Sparse Gaussian Process Models
# # Scalable Thompson Sampling

# %% [markdown]
# In our other [Thompson sampling notebook](thompson_sampling.pct.py) we demonstrate how to perform batch optimization using a traditional implementation of Thompson sampling that samples exactly from an underlying Gaussian Process surrogate model. Unfortunately, this approach incurs a large computational overhead that scales polynomially with the optimization budget and so cannot be applied to settings with larger optimization budgets, e.g. those where large batches (>>10) of points can be collected.
2 changes: 1 addition & 1 deletion docs/notebooks/thompson_sampling.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Batch-sequential optimization with Thompson sampling
# # Thompson Sampling

# %%
import numpy as np
2 changes: 1 addition & 1 deletion docs/notebooks/visualizing_with_tensorboard.pct.py
@@ -1,5 +1,5 @@
# %% [markdown]
# # Tracking and visualizing optimizations using Tensorboard
# # Visualizing with Tensorboard

# %%
import numpy as np
6 changes: 3 additions & 3 deletions docs/tutorials.rst
@@ -18,7 +18,7 @@ Tutorials
Example optimization problems
-----------------------------

The following tutorials explore various optimization problems using Trieste.
The following tutorials explore various types of optimization problems using Trieste.

.. toctree::
:maxdepth: 1
@@ -52,7 +52,7 @@ The following tutorials (or sections thereof) explain how to use and extend spec
* :doc:`How do I recover a failed optimization loop?<notebooks/recovering_from_errors>`
* :doc:`How do I track and visualize an optimization loop in realtime using TensorBoard?<notebooks/visualizing_with_tensorboard>`
* :doc:`What are the key Python types used in Trieste and how can they be extended?<notebooks/code_overview>`
* :doc:`Does Trieste have interface for external control of the optimization loop, also known as Ask-Tell interface?<notebooks/ask_tell_optimization>`
* :doc:`How do I externally control the optimization loop via an Ask-Tell interface?<notebooks/ask_tell_optimization>`
* :doc:`How do I perform data transformations required for training the model?<notebooks/data_transformation>`
* How do I use Trieste in asynchronous objective evaluation mode?

@@ -87,7 +87,7 @@ then run

$ jupyter-notebook notebooks

Alternatively, you copy and paste the tutorials into fresh notebooks and avoid installing the library from source. To ensure you have the required plotting dependencies, simply run:
Alternatively, you can copy and paste the tutorials into fresh notebooks and avoid installing the library from source. To ensure you have the required plotting dependencies, simply run:

.. code::
