NeuralDELux
Documentation for NeuralDELux.
NeuralDELux.ADEulerStep
NeuralDELux.ADNeuralDE
NeuralDELux.ADRK4Step
NeuralDELux.ANODEForecastLength
NeuralDELux.AlternativeModelLoss
NeuralDELux.AlternativeModelLossSingleSample
NeuralDELux.AugmentedNeuralDE
NeuralDELux.ForecastLength
NeuralDELux.SciMLEulerStep
NeuralDELux.SciMLNeuralDE
NeuralDELux.DetermineDevice
NeuralDELux.SamePadCircularConv
NeuralDELux.evolve
NeuralDELux.evolve
NeuralDELux.evolve_sol
NeuralDELux.evolve_to_blowup
NeuralDELux.evolve_to_blowup
NeuralDELux.forecast_δ
NeuralDELux.slice_and_batch_trajectory
NeuralDELux.train!
NeuralDELux.train_anode!
NeuralDELux.trajectory
NeuralDELux.ADEulerStep — Type
ADEulerStep
Performs a single Euler step; works on GPUs and with AD (tested with Zygote).
It is called with solve(model, x, ps, st, solver::ADEulerStep, dt; kwargs...).
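Example (a minimal sketch; the toy right-hand side, the state shape, and the assumption that solve with this signature is in scope and returns the stepped state are illustrative, not part of the documented API):

```julia
using Lux, Random, NeuralDELux

rhs = Dense(2 => 2)                            # toy right-hand side f(x) as a Lux layer
ps, st = Lux.setup(Random.default_rng(), rhs)

x  = rand(Float32, 2)                          # current state
dt = 0.01f0                                    # step size
# single AD-compatible explicit Euler step, following the signature quoted above;
# the exact return value is not asserted here
result = solve(rhs, x, ps, st, ADEulerStep(), dt)
```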
NeuralDELux.ADNeuralDE — Type
ADNeuralDE(; model=model, alg=ADRK4Step(), dt=dt, kwargs...)
Model for setting up and training Chaotic Neural Differential Equations with Lux.jl and the NeuralDELux one-step solvers.
Fields:
prob: DEProblem
alg: algorithm to use for the solve command
dt: time step
kwargs: any additional keywords
An instance of the model is called with a trajectory pair (t, x), where t holds the time steps that the NDE is integrated for and x is a trajectory of size N x ... x N_t, in which x[:, ..., 1] is taken as the initial condition.
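Example (a minimal sketch; the network, data shapes, and the assumption that an ADNeuralDE instance sets up and is called like any other Lux layer are illustrative assumptions):

```julia
using Lux, Random, NeuralDELux

rhs  = Chain(Dense(3 => 16, tanh), Dense(16 => 3))   # learned right-hand side
node = ADNeuralDE(model=rhs, alg=ADRK4Step(), dt=0.01f0)

ps, st = Lux.setup(Random.default_rng(), node)       # assumed to set up like a Lux layer

t = range(0.0f0, 0.05f0, length=6)                   # time steps the NDE is integrated for
x = rand(Float32, 3, 6)                              # trajectory N x N_t; x[:, 1] is the initial condition
y_pred, st = node((t, x), ps, st)                    # assumed Lux-style (output, state) return
```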
NeuralDELux.ADRK4Step — Type
ADRK4Step
Performs a single fourth-order Runge-Kutta step; works on GPUs and with AD (tested with Zygote).
It is called with solve(model, x, ps, st, solver::ADRK4Step, dt; kwargs...).
NeuralDELux.ANODEForecastLength — Type
ANODEForecastLength(data; threshold::Number=0.4, metric="norm")
Provides an additional metric that measures how well a model performs on data in terms of its forecast error. The forecast length is defined as the time step at which metric exceeds threshold. The initialized struct can then be called with
fl = ANODEForecastLength(data)
res = fl(model, ps, st)
NeuralDELux.AlternativeModelLoss — Type
AlternativeModelLoss(model, loss, data)
Computes the mean loss with model on data. data is supposed to serve as an iterator.
NeuralDELux.AlternativeModelLossSingleSample — Type
AlternativeModelLossSingleSample(model, loss, data)
Computes the mean loss with model on data. data is supposed to serve as an iterator. data is assumed to be batched along the last dimension, but model only gets single samples as inputs.
NeuralDELux.AugmentedNeuralDE — Type
AugmentedNeuralDE(node_model::Union{ADNeuralDE, SciMLNeuralDE}, size_aug::Tuple, size_orig::Tuple, cat_dim)
Construct an augmented NODE that wraps around an existing node_model with observables of size size_orig and adds size_aug additional dimensions along dimension cat_dim.
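Example (a minimal sketch with made-up sizes; the RHS is assumed to act on the full augmented state):

```julia
using Lux, Random, NeuralDELux

# RHS acting on the augmented state: 3 original + 2 augmented dimensions
rhs  = Chain(Dense(5 => 32, tanh), Dense(32 => 5))
node = ADNeuralDE(model=rhs, alg=ADRK4Step(), dt=0.01f0)

# observables of size (3,), two additional dimensions appended along dimension 1
aug_node = AugmentedNeuralDE(node, (2,), (3,), 1)
```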
NeuralDELux.ForecastLength — Type
ForecastLength(data; threshold::Number=0.4, modes=("forecast_length",), metric="norm", N_avg::Int=30)
Provides an additional metric that measures how well a model performs on data in terms of its forecast error. The forecast length is defined as the time step at which metric exceeds threshold. The initialized struct can then be called with
fl = ForecastLength(data)
res = fl(model, ps, st)
NeuralDELux.SciMLEulerStep — Type
SciMLEulerStep
Does one Euler step using direct AD. Expected to be used like a solver algorithm from OrdinaryDiffEq.jl, i.e. with solve(prob::AbstractDEProblem, SciMLEulerStep()).
NeuralDELux.SciMLNeuralDE — Type
SciMLNeuralDE(model; alg=ADEulerStep(), gpu=nothing, kwargs...)
Model for setting up and training Chaotic Neural Differential Equations with Lux.jl and SciMLSensitivity.jl.
Fields:
prob: DEProblem
alg: algorithm to use for the solve command
kwargs: any additional keyword arguments that should be handed over (e.g. sensealg)
device: the device the model is running on, either DeviceCPU or DeviceCUDA, used for dispatching if Arrays or CuArrays are used
An instance of the model is called with a trajectory pair (t, x), where t holds the time steps that the NDE is integrated for and x is a trajectory of size N x ... x N_t, in which x[:, ..., 1] is taken as the initial condition.
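Example (a minimal sketch; the choice of Tsit5 and the sensitivity algorithm are assumptions about a typical SciMLSensitivity.jl setup, not package defaults):

```julia
using Lux, Random, NeuralDELux, OrdinaryDiffEq, SciMLSensitivity

rhs  = Chain(Dense(3 => 16, tanh), Dense(16 => 3))
node = SciMLNeuralDE(rhs; alg=Tsit5(),
                     sensealg=InterpolatingAdjoint(autojacvec=ZygoteVJP()))

ps, st = Lux.setup(Random.default_rng(), node)

t = range(0.0f0, 0.05f0, length=6)                   # time steps the NDE is integrated for
x = rand(Float32, 3, 6)                              # trajectory N x N_t; x[:, 1] is the initial condition
y_pred, st = node((t, x), ps, st)                    # assumed Lux-style (output, state) return
```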
NeuralDELux.DetermineDevice — Method
DetermineDevice(; gpu::Union{Nothing, Bool}=nothing)
Initializes the device that is used. Returns either DeviceCPU or DeviceCUDA. If no gpu keyword argument is given, it determines automatically if a GPU is available.
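For example, following the signature above:

```julia
using NeuralDELux

dev     = DetermineDevice()            # DeviceCUDA if a GPU is available, DeviceCPU otherwise
dev_cpu = DetermineDevice(gpu=false)   # force the CPU
```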
NeuralDELux.SamePadCircularConv — Function
SamePadCircularConv(kernel, ch, activation=identity)
Wrapper around Lux.Conv that adds circular padding so that the dimensions stay the same.
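Example (a sketch; the kernel and channel argument forms are assumed to mirror Lux.Conv):

```julia
using Lux, NeuralDELux

# 3x3 kernel, 1 => 16 channels, relu activation, circular "same" padding
layer = SamePadCircularConv((3, 3), 1 => 16, relu)
```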
NeuralDELux.evolve — Method
evolve(model::ADNeuralDE, ps, st, ic; tspan::Union{T, Nothing}=nothing, N_t::Union{Integer,Nothing}=nothing) where T
Evolve the model by tspan or N_t (specify only one), starting from the initial condition ic.
NeuralDELux.evolve — Method
evolve(model::SciMLNeuralDE, ps, st, ic; tspan::Union{T, Nothing}=nothing, N_t::Union{Integer,Nothing}=nothing) where T
Evolve the model by tspan or N_t (specify only one), starting from the initial condition ic.
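Example (a minimal sketch; the model setup and sizes are illustrative):

```julia
using Lux, Random, NeuralDELux

rhs  = Chain(Dense(3 => 16, tanh), Dense(16 => 3))
node = ADNeuralDE(model=rhs, alg=ADRK4Step(), dt=0.01f0)
ps, st = Lux.setup(Random.default_rng(), node)

ic   = rand(Float32, 3)                    # initial condition
traj = evolve(node, ps, st, ic; N_t=500)   # evolve for 500 steps; alternatively pass tspan=... (not both)
```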
NeuralDELux.evolve_sol — Method
evolve_sol
Same as evolve but returns a SciML solution object.
NeuralDELux.evolve_to_blowup — Method
evolve_to_blowup(singlestep_solver, x, ps, st, dt, default_time=Inf)
Integrates a longer trajectory from a (trained) single-step solver until it blows up.
NeuralDELux.evolve_to_blowup — Method
evolve_to_blowup(model::SciMLNeuralDE, ps, st, ic::A; default_time=Inf, kwargs...)
Evolves a model that is suspected to blow up; returns the last time step if it does, and default_time otherwise.
NeuralDELux.forecast_δ — Function
forecast_δ(prediction::AbstractArray{T,N}, truth::AbstractArray{T,N}, mode::String="both") where {T,N}
Assumes that the last dimension of the input arrays is the time dimension and N_t long. Returns an N_t-long array, judging how accurate the prediction is.
Supported modes:
"mean": mean between the arrays
"maximum": maximum norm
"norm": normalized, similar to the metric used in Pathak et al.
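Example (a sketch; the array sizes are arbitrary, only the last dimension has to be time):

```julia
using NeuralDELux

prediction = rand(Float32, 3, 100)          # N x N_t
truth      = rand(Float32, 3, 100)
δ = forecast_δ(prediction, truth, "norm")   # one error value per time step
```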
NeuralDELux.slice_and_batch_trajectory — Method
slice_and_batch_trajectory(t::AbstractVector, x, N_batch::Integer)
Slice a single trajectory into multiple ones for the batched dataloader.
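Example (a sketch with arbitrary sizes):

```julia
using NeuralDELux

t = collect(range(0.0f0, 10.0f0, length=1000))   # time axis of one long trajectory
x = rand(Float32, 3, 1000)                       # states, with time along the last dimension
batched = slice_and_batch_trajectory(t, x, 16)   # slices for a batched dataloader
```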
NeuralDELux.train! — Method
model, ps, st, training_results = train!(model, ps, st, loss, train_data, opt_state, η_schedule; τ_range=2:2, N_epochs=1, verbose=true, save_name=nothing, save_results_name=nothing, shuffle_data_order=true, additional_metric=nothing, valid_data=nothing, test_data=nothing, scheduler_offset::Int=0, compute_initial_error::Bool=true, save_mode::Symbol=:valid)
Trains the model with parameters ps and state st using the loss function and train_data, applying opt_state with the learning rate schedule η_schedule for N_epochs. Returns the trained model, ps, st, results. An additional_metric with the signature (model, ps, st) -> value may be specified; it is computed after every epoch. save_mode determines whether the saved model is the one with the lowest error on the :valid set or on the :train set.
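The optimiser state and learning-rate schedule that train! consumes can, for instance, be built with Optimisers.jl and ParameterSchedulers.jl. The sketch below only shows this setup; the loss function and data pipeline are omitted, and all concrete choices are assumptions rather than package requirements.

```julia
using Lux, Random, Optimisers, ParameterSchedulers, NeuralDELux

rhs  = Chain(Dense(3 => 16, tanh), Dense(16 => 3))
node = ADNeuralDE(model=rhs, alg=ADRK4Step(), dt=0.01f0)
ps, st = Lux.setup(Random.default_rng(), node)

opt_state  = Optimisers.setup(Optimisers.AdamW(1.0f-3), ps)            # optimiser state passed to train!
η_schedule = ParameterSchedulers.Exp(start = 1.0f-3, decay = 0.985f0)  # decaying learning-rate schedule

# with a loss and train_data defined for your problem, training follows the signature above:
# model, ps, st, results = train!(node, ps, st, loss, train_data, opt_state, η_schedule; N_epochs=20)
```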
NeuralDELux.train_anode! — Method
model, ps, st, training_results = train_anode!(model, ps, st, loss, train_data, opt_state, η_schedule; τ_range=2:2, N_epochs=1, verbose=true, save_name=nothing, additional_metric=nothing)
Trains the model with parameters ps and state st using the loss function and train_data, applying opt_state with the learning rate schedule η_schedule for N_epochs. Returns the trained model, ps, st, results. An additional_metric with the signature (model, ps, st) -> value may be specified; it is computed after every epoch.
NeuralDELux.trajectory — Method
trajectory(singlestep_solver, x, ps, st)
Integrates a longer trajectory from a (trained) single-step solver. Not implemented for AD / training.