Merge pull request #275 from DavAug/DavAug-patch-1
Update README.md
DavAug authored Jul 4, 2024
2 parents 18e86d9 + ef67982 commit c661d65
Showing 4 changed files with 37 additions and 38 deletions.
9 changes: 4 additions & 5 deletions .github/workflows/unit-test-os-versions.yml
@@ -19,18 +19,17 @@ jobs:
steps:
- uses: actions/checkout@v1

- name: Set up Python 3.8
uses: actions/setup-python@v1
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: 3.8
architecture: x64
python-version: 3.11

- name: install sundials (ubuntu)
if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
sudo apt-get update
sudo apt-get install libsundials-dev
- name: install sundials (macos)
if: ${{ matrix.os == 'macos-latest' }}
run: |
3 changes: 1 addition & 2 deletions README.md
@@ -8,8 +8,7 @@

## About

**Chi** is an open source Python package hosted on GitHub,
which can be used to model dose response dynamics.
**Chi** is an open source Python package for pharmacokinetic and pharmacodynamic (PKPD) modelling.

All features of the software are described in detail in the
[full API documentation](https://chi.readthedocs.io/en/latest/).
58 changes: 29 additions & 29 deletions docs/source/getting_started/fitting_models_to_data.rst
@@ -27,7 +27,7 @@ dynamics and to optimise dosing regimens to target a desired treatment response.

However, at this point, the simulated treatment responses have little to do with
real treatment responses. To describe *real* treatment
responses that we may observe in clinical practice, we need to somehow connect
responses, i.e. treatment responses that we may observe in clinical practice, we need to somehow connect
our model to reality.

The most common approach to relate models to real treatment responses is to
@@ -75,13 +75,13 @@ for a given model structure.
Estimating model parameters from data: Background
*************************************************

Before we can try to find parameter values that describe the observed
treatment response most closely, we first need to agree on what we mean by
"*most closely*" for the relationship between the mechanistic model output and the measurements.
An intuitive way to define this notion of closeness is to use the distance
Before we can try to find better parameter values that describe the observed
treatment response, we first need to agree on what we mean by
"*better*" for the relationship between the mechanistic model output and the measurements.
An intuitive notion of "better" is "closer", quantified by the distance
between the measurements and the model output,
i.e. the difference between the measured values and the
simulated values. Then the model parameters that most closely
simulated values. Then the model parameters that best
describe the measurements would be those parameter values that make the mechanistic
model output perfectly match the measurements, resulting in distances of 0 ng/mL
between the model output and the measurements at all measured time points.
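This distance-based notion of closeness can be sketched in plain Python. The measurement and simulation values below are made up for illustration and are not produced by chi:

```python
# Hypothetical drug concentration measurements (ng/mL) at three time points
# and the mechanistic model outputs at those times for some parameter choice.
measurements = [10.2, 7.6, 3.1]
simulated = [9.8, 8.0, 3.5]

# One simple measure of closeness: the sum of squared distances between
# measured and simulated values. A perfect match would give 0.
sum_of_squares = sum((y - c) ** 2 for y, c in zip(measurements, simulated))
```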
@@ -90,26 +90,25 @@ However, as outlined in Sections 1.3 and 1.4 of the
noisy, and will therefore not perfectly represent the treatment response dynamics.
Consequently, if we were to match the model outputs to measurements perfectly,
we would end up with an inaccurate description of the treatment response
as our model would be paying too much attention to the measurement noise.
that is corrupted by measurement noise.

One way to improve our notion of closeness is to incorporate the measurement
process into our computational model of the treatment response, thereby
explicitly stating that we do not expect the mechanistic model output to match
the measurements perfectly. In Chi, this can be done
One way to overcome this limitation is to change our notion of "better" and incorporate the measurement
process into our computational model of the treatment response. This makes explicit
that we do not expect the mechanistic model output to match
the measurements perfectly. In Chi, the measurement process can be captured
using :class:`chi.ErrorModel` s. Error models promote the single value output
of mechanistic model simulations to a distribution of
values. This distribution characterises a
range of values around the mechanistic model output where measurements may be
expected.
For simulation, this distribution can be used to sample measurement values and
We can use this measurement distribution in two ways: 1. for simulation; and 2. for
parameter estimation. For simulation, the distribution
can be used to sample measurement values and
imitate the measurement process of real treatment responses, see
Section 1.3 in the :doc:`quick_overview` for an example. For parameter estimation,
the distribution can be used to quantify the likelihood with which the observed
measurements would have been generated by our model,
see Section 1.4 in the :doc:`quick_overview`. To account for measurement noise
during the parameter estimation, we therefore
choose to quantify the closeness between the model output an the measurements
using likelihoods.
see Section 1.4 in the :doc:`quick_overview`.
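Both uses of the measurement distribution can be sketched with a Gaussian error model in plain Python. This is a stand-in for chi's error model classes, not chi's actual API:

```python
import math
import random

def gaussian_log_likelihood(y, c, sigma):
    # Log-density of a Gaussian measurement distribution that is centred
    # on the mechanistic model output c with error scale sigma.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) \
        - (y - c) ** 2 / (2 * sigma ** 2)

random.seed(1)
c = 10.0      # mechanistic model output, e.g. a concentration in ng/mL
sigma = 0.5   # error scale parameter

# 1. Simulation: sample a noisy measurement around the model output.
measurement = random.gauss(c, sigma)

# 2. Parameter estimation: quantify the likelihood with which the
#    measurement would have been generated by the model.
log_likelihood = gaussian_log_likelihood(measurement, c, sigma)
```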

Formally, we denote the measurement distribution by :math:`p(y | \psi, t, r)`,
where :math:`y` denotes the measurement value, :math:`\psi` denotes the model parameters,
@@ -122,7 +121,7 @@ of the measurement distribution evaluated at the measurement,
:math:`p(y_1 | \psi, t_1, r^*)`. Note that this
likelihood depends on the choice of model parameters, :math:`\psi`. The model
parameters with the maximum likelihood are
the parameter values that most closely describe the measurements.
the parameter values that "best" describe the measurements.

.. note::
The measurement distribution, :math:`p(y | \psi, t, r)`, is defined
@@ -144,7 +143,7 @@ the parameter values that most closely describe the measurements.
we extend the definition of the model parameters to include :math:`\sigma`,
:math:`\psi = (a_0, k_a, k_e, v, \sigma)`.

We can see that the model output
We can see that the mechanistic model output
defines the mean or Expectation Value of the measurement distribution.

2. If we choose a :class:`chi.LogNormalErrorModel` to describe the difference
@@ -154,7 +153,7 @@ the parameter values that "best" describe the measurements.
.. math::
p(y | \psi, t, r) = \frac{1}{\sqrt{2\pi \sigma ^2}}\frac{1}{y}\mathrm{e}^{-\big(\log y - \log c(\psi, t, r) + \sigma ^2 / 2\big)^2 / 2\sigma ^2}.
One can show that also for this distribution the model output defines the mean
One can show that also for this distribution the mechanistic model output defines the mean
or Expectation Value of the measurement distribution.
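The mean property can be checked numerically: a lognormal distribution whose log-mean is shifted to :math:`\log c - \sigma^2/2` has mean :math:`c`. A plain-Python sketch (not chi code):

```python
import math
import random

random.seed(42)
c = 10.0      # mechanistic model output
sigma = 0.4   # error scale parameter on the log scale

# A lognormal variable exp(N(mu, sigma^2)) has mean exp(mu + sigma^2 / 2).
# Choosing mu = log(c) - sigma^2 / 2 therefore makes the mechanistic model
# output the mean (Expectation Value) of the measurement distribution.
mu = math.log(c) - sigma ** 2 / 2
samples = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
empirical_mean = sum(samples) / len(samples)
```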

The main difference between the two distributions is the shape. The
@@ -306,14 +305,14 @@ likelihood-prior product over the full parameter space,
:math:`p(\mathcal{D}) = \int \mathrm{d} \psi \, p(\mathcal{D}, \psi ) = \int \mathrm{d} \psi \, p(\mathcal{D}| \psi )\, p(\psi)`.
This renders the value of the constant shift for all intents and purposes unknown.

The unknown shift makes it impossible to make statements about the absolute probability
of parameter values. However, it does allow for relative comparisons of
probabilities -- a fact exploited by MCMC algorithms to circumvent the limitation
The unknown shift makes it very difficult to make statements about the absolute probability
of parameter values from the :class:`chi.LogPosterior` alone. However, the unknown shift does allow for relative comparisons of
probabilities as the shift is the same for all parameter values -- a fact exploited by MCMC algorithms to circumvent the limitation
of the partially known log-posterior. MCMC algorithms use
the relative comparison of parameter probabilities to generate random samples from the
posterior distribution, opening a gateway to reconstruct the distribution. The
more random samples are generated, the closer the histogram over the samples will
approximate the posterior distribution. In fact, one can show that the histogram
approximate the original posterior distribution. In fact, one can show that the histogram
will converge to the posterior distribution as the number of samples approaches
infinity. This makes it possible for MCMC algorithms
to reconstruct any posterior distribution from a :class:`chi.LogPosterior`.
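The relative-comparison trick can be illustrated with a minimal Metropolis-Hastings sampler. The one-parameter log-posterior below is a stand-in for a :class:`chi.LogPosterior` (a standard normal, so the result can be checked); note that the sampler only ever uses *differences* of log-posterior values, so the unknown constant shift cancels:

```python
import math
import random

def log_posterior(theta):
    # Stand-in log-posterior, known only up to an additive constant.
    return -0.5 * theta ** 2

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    theta = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal)/p(theta)); only the
        # *ratio* of probabilities enters, so the normalisation cancels.
        log_ratio = log_posterior(proposal) - log_posterior(theta)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis_hastings(50_000)
mean = sum(samples) / len(samples)
variance = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The histogram over `samples` approximates the standard normal posterior; mean and variance converge to 0 and 1 as the number of samples grows.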
@@ -397,12 +396,12 @@ we can see in the second row of the figure that the marginal posterior distribut
substantially differs from the marginal prior distribution. This is because the
drug concentration measurements contain important information about the elimination rate, rendering
rates above 1.5 1/day or below 0.25 1/day as extremely unlikely for the
model of the treatment response. This in in stark contrast to the relatively wide
model of the treatment response. This is in stark contrast to the relatively wide
range of model parameters that we deemed feasible prior to the inference
(see black line). However, the measurements are not conclusive enough
to reduce the distribution of feasible elimination rates to a single value. Similarly,
for the volume of distribution (row 3) and the error scale parameter
(row 4), the measurements lead to substaintial updates relative to the
(row 4), the measurements lead to substantial updates relative to the
prior distribution.
In comparison, the measurements appear less informative about the absorption rate
(see row 1), given that the marginal posterior distribution of
@@ -437,9 +436,10 @@ Let us begin this section by revisiting the right column in the figure above. Th
shows the samples from the three MCMC algorithm runs at each
iteration. For early iterations of the algorithm,
the samples from the MCMC runs look quite distinct -- each run appears to sample
from a different area of the parameter space. In contrast,
the MCMC runs seem to converge and sample from the same area of the parameter space
at later iterations. Intuitively,
from a different area of the parameter space. In contrast, at later iterations
the MCMC runs are harder to distinguish and sample from the same area of the parameter space.

Intuitively,
it does not really make sense for the samples from the MCMC runs to look different
-- after all, we use the same MCMC algorithm to sample from the same posterior distribution.
The histogram over the samples *should* therefore be identical within the limits of
@@ -476,7 +476,7 @@ and is the particular choice of the *second* half important? The answer comes
back to a common limitation of all MCMC algorithms which we can see in the right
column of the figure presented earlier: MCMC algorithms generate samples
from the posterior distribution conditional on the latest generated sample.
For some MCMC algorithms, this conditioning has little influences on sequential samples
For some MCMC algorithms, this conditioning has little influence on sequential samples
because the internal sampling strategy is advanced enough to
sufficiently decorrelate subsequent samples. But for
many MCMC algorithms the conditioned sample substantially influences the sampled value. That
5 changes: 3 additions & 2 deletions setup.py
@@ -9,7 +9,7 @@
setup(
# Module name
name='chi-drm',
version='1.0.0',
version='1.0.1',
description='Package to model dose response dynamics',
long_description=readme,
long_description_content_type="text/markdown",
@@ -36,8 +36,9 @@
'pandas>=0.24',
'pints>=0.4',
'plotly>=4.8.1',
'scipy<=1.12', # 07/2024 - ArviZ seems to not yet keep up with SciPy
'tqdm>=4.46.1',
'xarray>=0.19'
'xarray>=0.19',
],
extras_require={
'docs': [
