This repository has been archived by the owner on Sep 16, 2024. It is now read-only.
Thank you very much for your work. I see that your examples are supervised; I wonder if I can use a custom unsupervised loss function for training, or do you have any suggestions?
In addition, I found that the training process is encapsulated in dt.engine, so could TensorboardX be used to visualize intermediate results of the training process?
You can specify which loss to use for each task. If you implement your own loss function, supervised or unsupervised, you can specify it in this example here. By default, all the loss functions accept a tensor of predictions and a tensor of ground truth during the forward pass (you can see some examples here). For the unsupervised case, you can provide some dummy ground truth and then implement the logic however you want.
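To make this concrete, here is a minimal sketch of an unsupervised loss that follows the signature described above (predictions plus ground truth), while simply ignoring the dummy ground truth. The `TotalVariationLoss` name and the choice of a total-variation penalty are illustrative assumptions, not part of this library:

```python
import torch
import torch.nn as nn

class TotalVariationLoss(nn.Module):
    """Hypothetical unsupervised loss: penalises spatial differences
    between neighbouring pixels of the predictions. The ground-truth
    argument is accepted only to match the expected (pred, gt) signature."""
    def forward(self, pred, dummy_gt):
        # Mean absolute difference along height and width.
        dh = (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
        dw = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean()
        return dh + dw

# Usage: pass a placeholder tensor as the "ground truth".
pred = torch.rand(2, 3, 8, 8, requires_grad=True)
dummy = torch.zeros_like(pred)  # never actually used by the loss
loss = TotalVariationLoss()(pred, dummy)
loss.backward()  # gradients flow through the predictions only
```

Any loss with this two-argument forward pass should slot into the same place where the built-in losses are specified.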
At the moment you cannot visualise intermediate results of the training (or validation) process. As you rightly pointed out, the training process is hidden within dt.engine.train, and neither inputs nor predictions are exposed outside it. There is some work in progress that adds an optional visualisation callback, but it is not a priority right now.
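For the curious, a visualisation callback of the kind mentioned above might look like the following. This is purely a hypothetical sketch of the interface, not the actual work in progress; the hook into dt.engine.train does not exist yet, and the recording body stands in for real tensorboardX `SummaryWriter.add_scalar`/`add_image` calls:

```python
class VisualisationCallback:
    """Hypothetical hook: if dt.engine.train accepted a callback, it could
    invoke it each step with the inputs, predictions, and loss. Here we
    just record values so the sketch stays dependency-free; in practice
    the body would forward to a tensorboardX SummaryWriter."""
    def __init__(self):
        self.history = []

    def __call__(self, step, inputs, predictions, loss):
        # e.g. writer.add_scalar("train/loss", loss, step)
        self.history.append((step, float(loss)))

# Simulated training loop standing in for dt.engine.train.
cb = VisualisationCallback()
for step in range(3):
    cb(step, inputs=None, predictions=None, loss=0.5 / (step + 1))
```

Until such a hook lands, logging has to happen outside the engine, e.g. on the aggregate metrics it returns.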