
Commit

build based on 5510050
Documenter.jl committed May 16, 2024
1 parent ec9d4a8 commit 5facbb0
Showing 2 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion dev/api/index.html

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions dev/gradient-lbfgs/index.html
@@ -82,7 +82,7 @@
)</code></pre><pre class="documenter-example-output"><code class="nohighlight hljs ansi">NLPStopping{ADNLPModels.ADNLPModel{Float64, Vector{Float64}, Vector{Int64}}, StoppingMeta{Float64, Float64, Nothing, Stopping.var&quot;#46#54&quot;, Stopping.var&quot;#47#55&quot;{Stopping.var&quot;#46#54&quot;}, typeof(unconstrained_check)}, StopRemoteControl, NLPAtX{Float64, Float64, Vector{Float64}}, VoidStopping{Any, StoppingMeta, StopRemoteControl, GenericState, Nothing, VoidListofStates}, VoidListofStates}
It has no main_stp.
It doesn&#39;t keep track of the state history.
Problem is ADNLPModel - Model with automatic differentiation backend ADNLPModels.ADModelBackend{ADNLPModels.ForwardDiffADGradient, ADNLPModels.ForwardDiffADHvprod, ADNLPModels.ForwardDiffADJprod, ADNLPModels.ForwardDiffADJtprod, ADNLPModels.ForwardDiffADJacobian, ADNLPModels.ForwardDiffADHessian, ADNLPModels.ForwardDiffADGHjvprod}(ADNLPModels.ForwardDiffADGradient(ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(Main.fH), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}, Float64, 2}}}((Partials(1.0, 0.0), Partials(0.0, 1.0)), ForwardDiff.Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}, Float64, 2}[Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}}(6.92372919598254e-310,0.0,6.9237308196854e-310), Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}}(0.0,6.9237676133501e-310,5.0e-324)])), ADNLPModels.ForwardDiffADHvprod(), ADNLPModels.ForwardDiffADJprod(), ADNLPModels.ForwardDiffADJtprod(), ADNLPModels.ForwardDiffADJacobian(0), ADNLPModels.ForwardDiffADHessian(3), ADNLPModels.ForwardDiffADGHjvprod())
Problem is ADNLPModel - Model with automatic differentiation backend ADNLPModels.ADModelBackend{ADNLPModels.ForwardDiffADGradient, ADNLPModels.ForwardDiffADHvprod, ADNLPModels.ForwardDiffADJprod, ADNLPModels.ForwardDiffADJtprod, ADNLPModels.ForwardDiffADJacobian, ADNLPModels.ForwardDiffADHessian, ADNLPModels.ForwardDiffADGHjvprod}(ADNLPModels.ForwardDiffADGradient(ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(Main.fH), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}, Float64, 2}}}((Partials(1.0, 0.0), Partials(0.0, 1.0)), ForwardDiff.Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}, Float64, 2}[Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}}(6.9053667291017e-310,6.9053656893133e-310,6.90537394119613e-310), Dual{ForwardDiff.Tag{typeof(Main.fH), Float64}}(6.9053656893133e-310,6.90536586400225e-310,6.90536672953096e-310)])), ADNLPModels.ForwardDiffADHvprod(), ADNLPModels.ForwardDiffADJprod(), ADNLPModels.ForwardDiffADJtprod(), ADNLPModels.ForwardDiffADJacobian(0), ADNLPModels.ForwardDiffADHessian(3), ADNLPModels.ForwardDiffADGHjvprod())
Problem name: Generic
All variables: ████████████████████ 2 All constraints: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
free: ████████████████████ 2 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
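
The hunk above is the printed `NLPStopping` from the tutorial page; the only change between the two builds is the content of uninitialized `ForwardDiff.Dual` buffers. For orientation, here is a minimal sketch of the kind of setup that produces this output. The objective body and starting point are placeholders (the page's own definitions lie outside the displayed hunks); only the names `ADNLPModel`, `NLPStopping`, `unconstrained_check`, and `fH` are taken from the output itself.

```julia
# Minimal sketch, not the page's exact code: a two-variable objective named fH
# (the tag `Main.fH` appears in the backend printout above) wrapped in an NLPStopping.
# The objective body and starting point below are arbitrary placeholders.
using ADNLPModels, Stopping

fH(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2   # placeholder objective in 2 variables
nlp = ADNLPModel(fH, [-1.2, 1.0])                 # ForwardDiff-backed model, 2 free variables
stp = NLPStopping(nlp, optimality_check = unconstrained_check)
```
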
@@ -107,10 +107,10 @@
</code></pre><p>Our first elementary runs will use separately the steepest descent method and the quasi-Newton method to solve the problem.</p><h2 id="Steepest-descent"><a class="docs-heading-anchor" href="#Steepest-descent">Steepest descent</a><a id="Steepest-descent-1"></a><a class="docs-heading-anchor-permalink" href="#Steepest-descent" title="Permalink"></a></h2><pre><code class="language-julia hljs">reinit!(stp, rstate = true, x = nlp.meta.x0)
steepest_descent(stp)

(status(stp), elapsed_time(stp), get_list_of_states(stp), neval_obj(nlp), neval_grad(nlp))</code></pre><pre class="documenter-example-output"><code class="nohighlight hljs ansi">(:Optimal, 1.2195870876312256, VoidListofStates(), 889, 37)</code></pre><h2 id="BFGS-quasi-Newton"><a class="docs-heading-anchor" href="#BFGS-quasi-Newton">BFGS quasi-Newton</a><a id="BFGS-quasi-Newton-1"></a><a class="docs-heading-anchor-permalink" href="#BFGS-quasi-Newton" title="Permalink"></a></h2><pre><code class="language-julia hljs">reinit!(stp, rstate = true, x = nlp.meta.x0, rcounters = true)
(status(stp), elapsed_time(stp), get_list_of_states(stp), neval_obj(nlp), neval_grad(nlp))</code></pre><pre class="documenter-example-output"><code class="nohighlight hljs ansi">(:Optimal, 1.088163137435913, VoidListofStates(), 889, 37)</code></pre><h2 id="BFGS-quasi-Newton"><a class="docs-heading-anchor" href="#BFGS-quasi-Newton">BFGS quasi-Newton</a><a id="BFGS-quasi-Newton-1"></a><a class="docs-heading-anchor-permalink" href="#BFGS-quasi-Newton" title="Permalink"></a></h2><pre><code class="language-julia hljs">reinit!(stp, rstate = true, x = nlp.meta.x0, rcounters = true)
bfgs_quasi_newton_armijo(stp)

(status(stp), elapsed_time(stp), get_list_of_states(stp), neval_obj(nlp), neval_grad(nlp))</code></pre><pre class="documenter-example-output"><code class="nohighlight hljs ansi">(:Optimal, 0.009244918823242188, VoidListofStates(), 91, 18)</code></pre><h2 id="Mix-of-Algorithms"><a class="docs-heading-anchor" href="#Mix-of-Algorithms">Mix of Algorithms</a><a id="Mix-of-Algorithms-1"></a><a class="docs-heading-anchor-permalink" href="#Mix-of-Algorithms" title="Permalink"></a></h2><pre><code class="language-julia hljs">NLPModels.reset!(nlp)
(status(stp), elapsed_time(stp), get_list_of_states(stp), neval_obj(nlp), neval_grad(nlp))</code></pre><pre class="documenter-example-output"><code class="nohighlight hljs ansi">(:Optimal, 0.009037017822265625, VoidListofStates(), 91, 18)</code></pre><h2 id="Mix-of-Algorithms"><a class="docs-heading-anchor" href="#Mix-of-Algorithms">Mix of Algorithms</a><a id="Mix-of-Algorithms-1"></a><a class="docs-heading-anchor-permalink" href="#Mix-of-Algorithms" title="Permalink"></a></h2><pre><code class="language-julia hljs">NLPModels.reset!(nlp)
stp_warm = NLPStopping(
nlp,
optimality_check = unconstrained_check,
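
The two elementary runs in the hunk above call the page's `steepest_descent` and `bfgs_quasi_newton_armijo` on the same stopping object; between the two builds only the elapsed times differ. As a non-authoritative sketch, reusing the placeholder `stp` and `nlp` from above and resetting the counters before each run (which the page's first run does not do), the comparison can be reproduced like this:

```julia
# Sketch only: run both solvers shown in the hunk above back to back and collect
# (status, time, #obj evals, #grad evals) for each, mirroring the page's calls.
using NLPModels

results = Dict{Symbol,Any}()
for (name, solver) in (:steepest => steepest_descent, :bfgs => bfgs_quasi_newton_armijo)
    reinit!(stp, rstate = true, x = nlp.meta.x0, rcounters = true)  # fresh state and counters
    solver(stp)
    results[name] = (status(stp), elapsed_time(stp), neval_obj(nlp), neval_grad(nlp))
end
results
```
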
@@ -131,7 +131,7 @@
end</code></pre><pre><code class="language-julia hljs">reinit!(stp_warm)
stp_warm.meta.max_iter = 100
bfgs_quasi_newton_armijo(stp_warm, Hk = Hwarm)
(status(stp_warm), elapsed_time(stp_warm), get_list_of_states(stp_warm), neval_obj(nlp), neval_grad(nlp))</code></pre><pre class="documenter-example-output"><code class="nohighlight hljs ansi">(:Optimal, 0.009093046188354492, ListofStates{NLPAtX{Float64, Float64, Vector{Float64}}, VoidListofStates}(-1, 10, Tuple{NLPAtX{Float64, Float64, Vector{Float64}}, VoidListofStates}[(NLPAtX{Float64, Float64, Vector{Float64}} with an iterate of type Vector{Float64} and a score of type Float64.
(status(stp_warm), elapsed_time(stp_warm), get_list_of_states(stp_warm), neval_obj(nlp), neval_grad(nlp))</code></pre><pre class="documenter-example-output"><code class="nohighlight hljs ansi">(:Optimal, 0.008687019348144531, ListofStates{NLPAtX{Float64, Float64, Vector{Float64}}, VoidListofStates}(-1, 10, Tuple{NLPAtX{Float64, Float64, Vector{Float64}}, VoidListofStates}[(NLPAtX{Float64, Float64, Vector{Float64}} with an iterate of type Vector{Float64} and a score of type Float64.
, VoidListofStates()), (NLPAtX{Float64, Float64, Vector{Float64}} with an iterate of type Vector{Float64} and a score of type Float64.
, VoidListofStates()), (NLPAtX{Float64, Float64, Vector{Float64}} with an iterate of type Vector{Float64} and a score of type Float64.
, VoidListofStates()), (NLPAtX{Float64, Float64, Vector{Float64}} with an iterate of type Vector{Float64} and a score of type Float64.
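
The last hunk is the warm-started run: `stp_warm` keeps up to 10 states, and `bfgs_quasi_newton_armijo` is seeded through its `Hk` keyword with an approximation `Hwarm` built outside the displayed lines. The sketch below reconstructs the idea only generically: a hypothetical helper `warm_start_H` accumulates standard inverse-BFGS updates over a few fixed-step gradient steps, and its result is passed as `Hk`. The helper name, step size, and step count are assumptions; only `Hk`, `reinit!`, `stp.meta.max_iter`, and `NLPModels.reset!` come from the hunks.

```julia
# Hedged reconstruction of the warm-start idea, not the page's actual Hwarm loop.
using LinearAlgebra, NLPModels

"Build an inverse-BFGS approximation from a few fixed-step gradient steps (sketch only)."
function warm_start_H(nlp; steps = 5, α = 1e-3)
    n = nlp.meta.nvar
    x = copy(nlp.meta.x0)
    H = Matrix(1.0I, n, n)                 # start from the identity
    g = grad(nlp, x)
    for _ in 1:steps
        xn = x - α * g                     # placeholder fixed step size
        gn = grad(nlp, xn)
        s, y = xn - x, gn - g
        if dot(s, y) > 1e-12               # standard curvature safeguard before a BFGS update
            ρ = 1 / dot(s, y)
            H = (I - ρ * s * y') * H * (I - ρ * y * s') + ρ * s * s'
        end
        x, g = xn, gn
    end
    return H
end

NLPModels.reset!(nlp)
Hwarm = warm_start_H(nlp)
reinit!(stp, rstate = true, x = nlp.meta.x0, rcounters = true)
stp.meta.max_iter = 100                    # same iteration budget as stp_warm in the hunk
bfgs_quasi_newton_armijo(stp, Hk = Hwarm)
(status(stp), elapsed_time(stp), neval_obj(nlp), neval_grad(nlp))
```

The curvature check `dot(s, y) > 1e-12` simply skips updates that would make the approximation indefinite; it is a common safeguard, not something taken from the page.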
