Try using step_callback to get gpflow training loss history #772
I was thinking of getting the whole `OptimizeResult` object, rather than just the loss: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html. When we call `minimize` in gpflow.scipy, it should return it, right? Then `fun` would be the actual loss value, but one also gets all the other stuff that is useful for evaluating the optimisation: `success`, `status`, `nfev`, `nit`, ...
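For reference, a minimal sketch of what scipy's `minimize` returns, using a toy quadratic in place of the gpflow training loss:

```python
from scipy.optimize import minimize

# Toy quadratic standing in for the gpflow training loss.
def loss(x):
    return (x[0] - 3.0) ** 2

res = minimize(loss, x0=[0.0], method="L-BFGS-B")

# `res` is a scipy OptimizeResult: `fun` is the final loss value,
# and the remaining fields describe how the optimisation went.
print(res.fun)      # final objective value (close to 0 here)
print(res.success)  # whether the optimiser reports convergence
print(res.nfev)     # number of function evaluations
print(res.nit)      # number of iterations
```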
We can certainly get the `OptimizeResult`, but I thought that didn't include any loss history (unlike the keras optimizer), just the number of evaluations? Isn't the history what you wanted?
As an aside, how were you expecting to get the result? TrainableProbabilisticModel.optimize currently doesn't return anything (though Optimizer.optimize already does). If you're using BO or AskTell then there is no obvious way to return all the results when optimizing all the models, but I guess we could place them somewhere in the model wrappers (a bit like how the keras history is already available inside DeepEnsemble.model.history).
OK, so you mean we would get a loss value at every iteration of the optimiser, rather than a single end result via `fun` in `OptimizeResult`?
Yes, this was the solution I was thinking of. But I'm not sure we can attach a history object to the gpflow model like we have with the keras model object; if we can't, and we store it in the wrapper instead, perhaps we should have a unified way for all models and store it in the wrapper object.
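To illustrate the per-iteration idea: scipy's `minimize` accepts a `callback` that fires once per iteration, so a loss history can be recorded by re-evaluating the loss there (a toy sketch, not trieste code):

```python
from scipy.optimize import minimize

def loss(x):
    return (x[0] - 3.0) ** 2

history = []  # one loss value per optimiser iteration

# scipy calls `callback` with the current parameter vector after each
# iteration; re-evaluating the loss there builds the history that a
# single final OptimizeResult.fun does not give us.
res = minimize(loss, x0=[0.0], method="L-BFGS-B",
               callback=lambda xk: history.append(loss(xk)))
```

Note this re-evaluates the loss once per iteration, so it is not free, which is one reason to make any history recording opt-in.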
We could also just store the most recent result of calling `Optimizer.optimize` in the optimizer itself, which would let us easily access the `OptimizeResult` object. Or even all the previous results (enabled via a `save_results: bool` option in the Optimizer).
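A rough sketch of what that could look like (the class and attribute names here are hypothetical, not existing trieste API):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional

from scipy.optimize import minimize

@dataclass
class ResultCachingOptimizer:
    """Hypothetical optimizer wrapper that caches OptimizeResult objects."""
    save_results: bool = False         # off by default, in case it is expensive
    last_result: Optional[Any] = None  # most recent OptimizeResult
    results: List[Any] = field(default_factory=list)  # full history, if enabled

    def optimize(self, loss_fn: Callable, x0: Any) -> Any:
        result = minimize(loss_fn, x0, method="L-BFGS-B")
        self.last_result = result
        if self.save_results:
            self.results.append(result)
        return result
```

With `save_results=False` only the latest result is kept; turning it on accumulates one `OptimizeResult` per call to `optimize`.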
I see, that would be useful, but since it can be expensive, perhaps leave it off by default? (depends how expensive it is...)
I think that is the best solution: storing all the history from each active learning step. For keras we could simply copy it from the model attribute, just so that we have a unified way of accessing the optimization history?
Remember that for keras the history is generated by the model calling the fit method, not by the optimizer's minimize method.
We could still attach the resulting history to the optimizer attribute; we would just need to do it from the model wrapper, in the optimize method, no?
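A stdlib-only sketch of that idea, using a stand-in for a keras model so it runs without TensorFlow (all names here are illustrative, not trieste API):

```python
from types import SimpleNamespace

class FakeKerasModel:
    """Stand-in for a compiled keras model: fit() returns a History-like object."""
    def fit(self, *args, **kwargs):
        return SimpleNamespace(history={"loss": [1.0, 0.5, 0.25]})

class EnsembleWrapper:
    """Hypothetical model wrapper that copies the keras training history
    onto the optimizer, so history lives in the same place for all models."""
    def __init__(self, model, optimizer):
        self.model = model
        self.optimizer = optimizer

    def optimize(self, dataset=None):
        # keras generates the history in model.fit, not in the optimizer's
        # minimize, so the wrapper is the natural place to copy it across.
        self.optimizer.history = self.model.fit(dataset)

wrapper = EnsembleWrapper(FakeKerasModel(), SimpleNamespace())
wrapper.optimize()
per_epoch_losses = wrapper.optimizer.history.history["loss"]
```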