
Try using step_callback to get gpflow training loss history #772

Closed
wants to merge 1 commit

Conversation

uri-granta
Collaborator

Related issue(s)/PRs: #617

Summary

Investigate using the step callback mechanism to track training loss history for the gpflow scipy optimizer.

Having said that, it may be better to fix this in gpflow directly, though that would involve figuring out how (or whether) to handle interaction with other callbacks, as well as how to extend OptimizeResult.
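
As a rough, self-contained sketch of the mechanism under investigation (illustrative only, not the code in this PR; the toy data and model are made up, and it assumes gpflow's step_callback is called as step_callback(step, variables, values) after each iteration):

```python
import gpflow
import numpy as np

# Toy data and model, purely for illustration.
X = np.random.rand(20, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(20, 1)
model = gpflow.models.GPR((X, Y), kernel=gpflow.kernels.SquaredExponential())

loss_history = []

def loss_history_callback(step, variables, values):
    # Assign the current iterate to the model variables, then re-evaluate the
    # training loss (this re-evaluation is the extra cost discussed below).
    for variable, value in zip(variables, values):
        variable.assign(value)
    loss_history.append(float(model.training_loss()))

opt = gpflow.optimizers.Scipy()
opt.minimize(
    model.training_loss,
    model.trainable_variables,
    step_callback=loss_history_callback,
)
# loss_history now holds one loss value per optimizer iteration.
```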

Fully backwards compatible: yes

PR checklist

  • The quality checks are all passing
  • The bug case / new feature is covered by tests
  • Any new features are well-documented (in docstrings or notebooks)

Collaborator

@hstojic left a comment


I think there should be a way of getting the output of the minimize call, which should return all that we need (see my comment) - then there is no need to incur the extra cost of computing the loss again

@@ -92,3 +94,30 @@ def compiled_closure() -> tf.Tensor:
builder.closure_builder = closure_builder

return builder.closure_builder((X, Y))


def loss_history_callback(
Collaborator


I was thinking of getting the whole OptimizeResult object, rather than just the loss: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html

When we call minimize in gpflow.scipy, it should return it, right? Then fun should be the actual loss value, but one also gets all the other stuff that is useful for evaluating the optimisation: success, status, nfev, nit, ...
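
For illustration, a small sketch of what that gives (assuming gpflow's Scipy.minimize passes scipy's OptimizeResult straight through, and reusing the model from the sketch in the PR description):

```python
result = gpflow.optimizers.Scipy().minimize(
    model.training_loss, model.trainable_variables
)
print(result.fun)      # final objective (loss) value
print(result.success)  # whether the optimiser reports convergence
print(result.nfev)     # number of objective evaluations
print(result.nit)      # number of iterations
```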

Collaborator Author


We can certainly get the OptimizeResult, but I thought that didn't include any loss history (unlike the keras optimizer), just the number of evaluations? Isn't that what you wanted?

Collaborator Author

@uri-granta Jul 24, 2023


As an aside, how were you expecting to get the result? TrainableProbabilisticModel.optimize currently doesn't return anything (though Optimizer.optimize already does). If you're using BO or AskTell then there is no obvious way to return all the results when optimizing all the models, but I guess we could place them somewhere in the model wrappers (a bit like how the keras history is already available inside DeepEnsemble.model.history).

Collaborator


OK, are you saying we would get a loss for every iteration of the optimiser here, rather than a single end result with fun in OptimizeResult?

Collaborator


we could place them somewhere in the model wrappers (a bit like how the keras history is already available inside DeepEnsemble.model.history)

Yes, this was the solution I was thinking of - but I'm not sure if we can attach a history object to the gpflow model like we have with the keras model object. If not, and we store it in the wrapper, perhaps we should have a unified way for all models and store it in the wrapper object.

Collaborator Author


We could also just store the most recent result of calling Optimizer.optimize in the optimizer itself, which would let us easily access the OptimizeResult object. Or even all the previous results (enabled via a save_results: bool option in the Optimizer).
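
Something like this purely hypothetical sketch (none of these attributes or the save_results option exist in trieste today; the names are illustrative only):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional

@dataclass
class ResultRecordingOptimizer:
    # minimize would be a closure around e.g. gpflow.optimizers.Scipy().minimize
    minimize: Callable[[], Any]
    save_results: bool = False         # if True, keep every result, not just the last
    last_result: Optional[Any] = None  # most recent scipy OptimizeResult
    results: List[Any] = field(default_factory=list)

    def optimize(self) -> Any:
        result = self.minimize()
        self.last_result = result
        if self.save_results:
            self.results.append(result)
        return result
```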

Collaborator


OK, are you saying we would get a loss for every iteration of the optimiser here, rather than a single end result with fun in OptimizeResult?

Yes, that's what this does. Essentially equivalent to history.history["loss"] on the keras result.

I see, that would be useful, but since it can be expensive, perhaps leave it off by default? (Depends how expensive it is...)

Collaborator


We could also just store the most recent result of calling Optimizer.optimize in the optimizer itself, which would let us easily access the OptimizeResult object. Or even all the previous results (enabled via a save_results: bool option in the Optimizer).

I think that is the best solution - all the history from each active learning step.

for keras we can simply copy it from the model attribute, just so that we have a unified way of accessing the optimization history?

Collaborator Author


Remember that for keras the history is generated by calling the model's fit method, not the optimizer's minimize method.

Collaborator


We could still attach the resulting history to the optimizer attribute; we would just need to do it from the model wrapper, in the optimize method, no?
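
Something along these lines, perhaps (purely illustrative; the history attribute and wrapper name are made up, not trieste's actual API):

```python
class KerasWrapperSketch:
    """Illustrative only: the wrapper's optimize copies the keras History onto
    the optimizer, mirroring where a gpflow wrapper would store its scipy results."""

    def __init__(self, model, optimizer):
        self.model = model
        self.optimizer = optimizer

    def optimize(self, x, y):
        keras_history = self.model.fit(x, y, verbose=0)  # keras fit returns a History
        self.optimizer.history = keras_history.history   # e.g. {"loss": [...]}
```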

@uri-granta
Collaborator Author

This will be covered by #774 and GPflow/GPflow#2080

@uri-granta closed this Aug 3, 2023