Merge pull request #1084 from ICB-DCM/develop
release 0.3.1
PaulJonasJost authored Jun 29, 2023
2 parents 27a48d2 + d6723dd commit 9a75457
Showing 48 changed files with 1,248 additions and 589 deletions.
8 changes: 4 additions & 4 deletions .github/workflows/ci.yml
@@ -73,7 +73,7 @@ jobs:
run: .github/workflows/install_deps.sh amici pysb

- name: Run tests
-timeout-minutes: 25
+timeout-minutes: 35
run: tox -e petab

- name: Coverage
@@ -166,15 +166,15 @@ jobs:

steps:
- name: Check out repository
-uses: actions/checkout@v2
+uses: actions/checkout@v3

- name: Prepare python ${{ matrix.python-version }}
-uses: actions/setup-python@v1
+uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}

- name: Cache
-uses: actions/cache@v1
+uses: actions/cache@v3
with:
path: ~/.cache
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-hierarchical
2 changes: 1 addition & 1 deletion .github/workflows/install_deps.sh
@@ -37,7 +37,7 @@ for par in "$@"; do
pysb)
# bionetgen
wget -q -O bionetgen.tar \
-https://github.com/RuleWorld/bionetgen/releases/download/BioNetGen-2.6.0/BioNetGen-2.6.0-linux.tgz
+https://github.com/RuleWorld/bionetgen/releases/download/BioNetGen-2.8.5/BioNetGen-2.8.5-linux.tar.gz
tar -xf bionetgen.tar
;;

27 changes: 27 additions & 0 deletions CHANGELOG.rst
@@ -6,6 +6,33 @@ Release notes
..........


+0.3.1 (2023-06-22)
+-------------------
+* Visualize:
+    * Parameter plot with hierarchical parameters, noise estimation for splines (#1061)
+* Sampling:
+    * AdaptiveMetropolis failure fix for bounded priors (#1065)
+* Ensembles:
+    * Speed up Ensemble from History (#1063)
+* PEtab support:
+    * Support for petab 0.2.x (#1073)
+    * Remove PetabImporterPysb (#1082)
+* Objective:
+    * AggregatedObjective: objective-specific kwargs for call_unprocessed (#1068)
+* Select:
+    * Use predecessor stored in file (#1059)
+    * Support petab-select version 0.1.8 (#1070)
+* Examples:
+    * Synthetic data: update for libpetab-python v0.2.0 (#1060)
+    * Fix error in sampling_diagnostics which led to test failure (#1092)
+* General:
+    * Test fixes (#1064)
+    * Fix numpy DeprecationWarnings (#1076)
+    * GHA: Fix deprecation warnings (#1075)
+    * Fix bug on existing file without overwrite (#1046)
+    * Fix error in bound checking (#1081)


0.3.0 (2023-05-02)
-------------------

183 changes: 165 additions & 18 deletions doc/example/example_nonlinear_monotone.ipynb

Large diffs are not rendered by default.

@@ -1,19 +1,19 @@
-observableId preequilibrationConditionId simulationConditionId measurement time observableParameters noiseParameters observableTransformation noiseDistribution measurementType
-Activity Inhibitor_0 15.24029495 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_3 15.97660789 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_10 17.89265379 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_25 23.18714697 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_35 16.81210375 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_50 9.17312936 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_75 4.15092812 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_100 2.53355252 5 1 lin normal nonlinear_monotone
-Activity Inhibitor_300 0.58963290 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_0 0 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_3 0.05999885 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_10 0.19999376 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_25 0.49904277 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_35 0.65916874 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_50 0.81495435 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_75 0.91638297 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_100 0.94898071 5 1 lin normal nonlinear_monotone
-Ybar Inhibitor_300 0.98813045 5 1 lin normal nonlinear_monotone
+observableId preequilibrationConditionId simulationConditionId measurement time observableParameters noiseParameters observableTransformation noiseDistribution measurementType
+Activity Inhibitor_0 15.24029495 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_3 15.97660789 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_10 17.89265379 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_25 23.18714697 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_35 16.81210375 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_50 9.17312936 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_75 4.15092812 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_100 2.53355252 5 1 lin normal nonlinear_monotone
+Activity Inhibitor_300 0.5896329 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_0 0 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_3 0.05999885 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_10 0.19999376 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_25 0.49904277 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_35 0.65916874 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_50 0.81495435 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_75 0.91638297 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_100 0.94898071 5 1 lin normal nonlinear_monotone
+Ybar Inhibitor_300 0.98813045 5 1 lin normal nonlinear_monotone
@@ -1,3 +1,3 @@
-observableId observableName observableFormula noiseFormula observableTransformation noiseDistribution
-Activity Activity R_R+RIR noiseParameter1_Activity lin normal
-Ybar Ybar (RI+RIR+2*RIRI)/Rtot noiseParameter1_Ybar lin normal
+observableId observableName observableFormula noiseFormula observableTransformation noiseDistribution
+Activity Activity R_R+RIR noiseParameter1_Activity lin normal
+Ybar Ybar (RI+RIR+2*RIRI)/Rtot noiseParameter1_Ybar lin normal
@@ -1,5 +1,5 @@
-parameterId parameterName parameterScale lowerBound upperBound nominalValue estimate priorType priorParameters
-K1 K1 lin -5 5 0.04 0
-K2 K2 lin -5 5 20 0
-K3 K3 log10 0.1 100000 4000 1
-K5 K5 log10 1.00E-05 100000 0.1 1
+parameterId parameterName parameterScale lowerBound upperBound nominalValue estimate
+K1 K1 lin -5 5 0.04 0
+K2 K2 lin -5 5 20 0
+K3 K3 log10 0.1 100000 4000 1
+K5 K5 log10 0.00001 100000 0.1 1
@@ -0,0 +1,7 @@
+parameterId parameterName parameterScale lowerBound upperBound nominalValue estimate parameterType
+K1 K1 lin -5 5 0.04 0
+K2 K2 lin -5 5 20 0
+K3 K3 log10 0.1 100000 4000 1
+K5 K5 log10 0.00001 100000 0.1 1
+sd_Activity \sigma_{activity} lin 0 inf 1 1 sigma
+sd_Ybar \sigma_{ybar} lin 0 inf 1 1 sigma
382 changes: 232 additions & 150 deletions doc/example/sampling_diagnostics.ipynb

Large diffs are not rendered by default.

55 changes: 32 additions & 23 deletions doc/example/synthetic_data.ipynb

Large diffs are not rendered by default.

20 changes: 11 additions & 9 deletions pypesto/C.py
@@ -195,6 +195,17 @@ class InnerParameterType(str, Enum):

MIN_SIM_RANGE = 1e-16

+SPLINE_PAR_TYPE = 'spline'
+N_SPLINE_PARS = 'n_spline_pars'
+DATAPOINTS = 'datapoints'
+MIN_DATAPOINT = 'min_datapoint'
+MAX_DATAPOINT = 'max_datapoint'
+EXPDATA_MASK = 'expdata_mask'
+CURRENT_SIMULATION = 'current_simulation'
+INNER_NOISE_PARS = 'inner_noise_pars'
+OPTIMIZE_NOISE = 'optimize_noise'
+

###############################################################################
# HISTORY

@@ -210,15 +221,6 @@ class InnerParameterType(str, Enum):
SUFFIXES_HDF5 = ["hdf5", "h5"]
SUFFIXES = SUFFIXES_CSV + SUFFIXES_HDF5

-SPLINE_PAR_TYPE = 'spline'
-N_SPLINE_PARS = 'n_spline_pars'
-DATAPOINTS = 'datapoints'
-MIN_DATAPOINT = 'min_datapoint'
-MAX_DATAPOINT = 'max_datapoint'
-EXPDATA_MASK = 'expdata_mask'
-CURRENT_SIMULATION = 'current_simulation'
-NOISE_PARAMETERS = 'noise_parameters'
-

###############################################################################
# PRIOR
38 changes: 17 additions & 21 deletions pypesto/ensemble/ensemble.py
@@ -1111,29 +1111,25 @@ def entries_per_start(
    ens_ind = [np.flatnonzero(fval <= cutoff) for fval in fval_traces]

    # count the number of candidates per start
-    n_per_start = np.array([len(start) for start in ens_ind])
+    n_theo = np.array([len(start) for start in ens_ind])
+
+    # trim down starts that exceed the limit:
+    n_per_start = [min(n, max_per_start) for n in n_theo]

    # if all possible indices can be included, return
-    if (n_per_start < max_per_start).all() and sum(n_per_start) < max_size:
+    if sum(n_per_start) < max_size:
        return n_per_start

-    # trim down starts that exceed the limit:
-    n_per_start = [min(n, max_per_start) for n in n_per_start]
-
-    # trim down more until it fits the max size
-    decr = 0
-    while sum(n_per_start) > max_size:
-        n_per_start = [min(n, max_per_start - decr) for n in n_per_start]
-        decr += 1
-    # TODO: Possibility. With this implementation we could,
-    # in a scenario where we have more candidates than
-    # max_size, end up with an ensemble of size
-    # `max_size - len(n_starts)` in the worst case. We could introduce
-    # a flag `force_max` that indicates
-    # whether those remaining free slots should be filled by
-    # entries from certain starts. This would bring up the
-    # discussion which starts to choose. One obvious choice
-    # would be the best starts based on their endpoint.
+    n_equally = max_size // len(n_per_start)
+    n_left = max_size % len(n_per_start)
+    # divide numbers equally
+    n_per_start = [min(n, n_equally) for n in n_per_start]
+    # add one more to the first n_left possible (where n_theo > n_equally):
+    to_add = np.where(n_theo > n_equally)[0]
+    if len(to_add) > n_left:
+        to_add = to_add[0:n_left]
+    n_per_start = [
+        n + 1 if i in to_add else n for i, n in enumerate(n_per_start)
+    ]

    return n_per_start

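The new hunk replaces the old iterative decrement loop with a closed-form split of the ensemble budget. Below is a minimal standalone sketch of that allocation logic; the candidate counts and limits are made up for illustration and are not from the commit:

```python
import numpy as np

# Hypothetical candidate counts per optimizer start.
n_theo = np.array([10, 2, 7, 1])
max_per_start, max_size = 8, 13

# Cap each start, as in the new code.
n_per_start = [min(n, max_per_start) for n in n_theo]  # [8, 2, 7, 1]
if sum(n_per_start) >= max_size:
    # Split the budget equally; hand the remainder to the first
    # starts that still have spare candidates.
    n_equally = max_size // len(n_per_start)  # 3
    n_left = max_size % len(n_per_start)  # 1
    n_per_start = [min(n, n_equally) for n in n_per_start]  # [3, 2, 3, 1]
    to_add = np.where(n_theo > n_equally)[0][:n_left]  # [0]
    n_per_start = [
        n + 1 if i in to_add else n for i, n in enumerate(n_per_start)
    ]

print(n_per_start)  # [4, 2, 3, 1] -- 10 members, within max_size
```

As with the removed loop, the budget may remain partially unused when few starts have enough candidates; the equal split simply distributes what is available more evenly instead of trimming all starts by a uniform decrement.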
@@ -1171,7 +1167,7 @@ def get_vector_indices(
        indices = np.round(np.linspace(0, len(candidates) - 1, n_vectors))
        return candidates[indices.astype(int)]
    else:
-        return candidates[:n_vectors]
+        return sorted(candidates, key=lambda i: trace_start[i])[:n_vectors]


def get_percentile_label(percentile: Union[float, int, str]) -> str:
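For context, the changed `else` branch in `get_vector_indices` no longer truncates the candidates by position but picks the `n_vectors` entries with the smallest `trace_start` values (presumably the objective values associated with each candidate). A toy comparison with made-up values:

```python
import numpy as np

candidates = np.array([0, 1, 2, 3, 4])
trace_start = np.array([3.2, 1.1, 2.7, 0.9, 5.0])  # assumed per-candidate fvals
n_vectors = 3

old = candidates[:n_vectors]  # [0, 1, 2] -- first three, regardless of fval
new = sorted(candidates, key=lambda i: trace_start[i])[:n_vectors]  # [3, 1, 2]
```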
4 changes: 4 additions & 0 deletions pypesto/hierarchical/calculator.py
@@ -60,6 +60,10 @@ def initialize(self):
        super().initialize()
        self.inner_solver.initialize()

+    def get_inner_parameter_ids(self) -> List[str]:
+        """Get the ids of the inner parameters."""
+        return self.inner_problem.get_x_ids()
+
    def __call__(
        self,
        x_dct: Dict,
36 changes: 31 additions & 5 deletions pypesto/hierarchical/inner_calculator_collector.py
@@ -20,6 +20,7 @@
    FVAL,
    GRAD,
    HESS,
+    INNER_PARAMETERS,
    MEASUREMENT_TYPE,
    METHOD,
    MODE_RES,
@@ -108,6 +109,7 @@ def __init__(

    def initialize(self):
        """Initialize."""
+        self.best_fval = np.inf
        for calculator in self.inner_calculators:
            calculator.initialize()

@@ -119,6 +121,8 @@ def construct_inner_calculators(
        inner_options: Dict,
    ):
        """Construct inner calculators for each data type."""
+        self.noise_dummy_values = {}
+        self.best_fval = np.inf
        if ORDINAL in self.data_types or CENSORED in self.data_types:
            optimal_scaling_inner_options = {
                key: value
@@ -155,6 +159,9 @@
            spline_calculator = SplineAmiciCalculator(
                spline_inner_problem, spline_inner_solver
            )
+            self.noise_dummy_values = (
+                spline_inner_problem.get_noise_dummy_values(scaled=True)
+            )
            self.inner_calculators.append(spline_calculator)
            # TODO relative data

@@ -216,6 +223,14 @@ def _get_quantitative_data_mask(

        return quantitative_data_mask

+    def get_inner_parameter_ids(self) -> List[str]:
+        """Return the ids of the inner parameters."""
+        return [
+            parameter_id
+            for inner_calculator in self.inner_calculators
+            for parameter_id in inner_calculator.get_inner_parameter_ids()
+        ]

    def __call__(
        self,
        x_dct: Dict,
@@ -277,7 +292,8 @@ def __call__(
        nllh, snllh, s2nllh, chi2, res, sres = init_return_values(
            sensi_orders, mode, dim
        )
-        inner_parameters_dictionary = {}
+        all_inner_pars = {}
+        interpretable_inner_pars = {}

# set order in solver
sensi_order = 0
@@ -287,7 +303,7 @@
            amici_solver.setSensitivityOrder(sensi_order)

        x_dct = copy.deepcopy(x_dct)
-
+        x_dct.update(self.noise_dummy_values)
        # fill in parameters
        amici.parameter_mapping.fill_in_parameters(
            edatas=edatas,
@@ -315,7 +331,8 @@
                RES: res,
                SRES: sres,
                RDATAS: rdatas,
-                X_INNER_OPT: inner_parameters_dictionary,
+                X_INNER_OPT: all_inner_pars,
+                INNER_PARAMETERS: interpretable_inner_pars,
            }
            ret[FVAL] = np.inf
            # if the gradient was requested,
@@ -365,7 +382,10 @@
            nllh += inner_result[FVAL]
            if sensi_order > 0:
                snllh += inner_result[GRAD]
-            inner_parameters_dictionary.update(inner_result[X_INNER_OPT])
+
+            all_inner_pars.update(inner_result[X_INNER_OPT])
+            if INNER_PARAMETERS in inner_result:
+                interpretable_inner_pars.update(inner_result[INNER_PARAMETERS])

        # add result for quantitative data
        if self.quantitative_data_mask is not None:
@@ -392,9 +412,15 @@
            RES: res,
            SRES: sres,
            RDATAS: rdatas,
-            X_INNER_OPT: inner_parameters_dictionary,
        }

+        # Add inner parameters to the return dict
+        # only if the objective value improved.
+        if ret[FVAL] < self.best_fval:
+            ret[X_INNER_OPT] = all_inner_pars
+            ret[INNER_PARAMETERS] = interpretable_inner_pars
+            self.best_fval = ret[FVAL]

        return filter_return_dict(ret)


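The improvement check above means the collector only refreshes the reported inner parameters when the outer objective value beats the best one seen so far. A minimal sketch of that gating pattern, using a hypothetical stand-in class rather than the real `InnerCalculatorCollector` API:

```python
import numpy as np

class Collector:
    """Toy stand-in for the best-value bookkeeping shown in the diff."""

    def __init__(self):
        self.best_fval = np.inf  # also reset in initialize() in the real code

    def __call__(self, fval, all_inner_pars, interpretable_inner_pars):
        ret = {"fval": fval}
        # Attach inner parameters only if this call improved the objective.
        if fval < self.best_fval:
            ret["x_inner_opt"] = all_inner_pars
            ret["inner_parameters"] = interpretable_inner_pars
            self.best_fval = fval
        return ret

c = Collector()
print(c(3.0, {"s": 1.0}, {"s": 1.0}))  # improvement: inner parameters included
print(c(5.0, {"s": 2.0}, {"s": 2.0}))  # worse fval: inner parameters omitted
```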
5 changes: 5 additions & 0 deletions pypesto/hierarchical/optimal_scaling/calculator.py
@@ -76,6 +76,11 @@ def __init__(
    def initialize(self):
        """Initialize."""
        self.inner_solver.initialize()
+        self.inner_problem.initialize()
+
+    def get_inner_parameter_ids(self) -> List[str]:
+        """Get the ids of the inner parameters."""
+        return self.inner_problem.get_x_ids()

    def __call__(
        self,
4 changes: 4 additions & 0 deletions pypesto/hierarchical/optimal_scaling/parameter.py
@@ -85,3 +85,7 @@ def __init__(
        self.estimate = estimate
        self.value = self.dummy_value
        self.censoring_type = censoring_type
+
+    def initialize(self):
+        """Initialize."""
+        self.value = self.dummy_value
12 changes: 12 additions & 0 deletions pypesto/hierarchical/optimal_scaling/problem.py
@@ -153,6 +153,18 @@ def _initialize_groups(self) -> None:
                'have to either be all None, or all not None.'
            )

+    def initialize(self) -> None:
+        """Initialize the subproblem."""
+        # Initialize all parameter values.
+        for x in self.xs.values():
+            x.initialize()
+
+        # Initialize the groups.
+        for group in self.get_groups_for_xs(InnerParameterType.SPLINE):
+            self.groups[group][SURROGATE_DATA] = np.zeros(
+                self.groups[group][NUM_DATAPOINTS]
+            )

    @staticmethod
    def from_petab_amici(
        petab_problem: petab.Problem,
10 changes: 10 additions & 0 deletions pypesto/hierarchical/optimal_scaling/solver.py
@@ -258,6 +258,16 @@ def calculate_gradients(
                or par_opt in already_calculated
            ):
                continue
+            # Current fix for scaling/offset parameters in models.
+            elif par_sim.startswith('observableParameter'):
+                continue
+            # For noise parameters optimized hierarchically, we
+            # do not calculate the gradient.
+            elif (
+                par_sim.startswith('noiseParameter')
+                and par_opt not in par_opt_ids
+            ):
+                continue
            else:
                already_calculated.add(par_opt)

