Make Multi Node sampler cycle forever #1424
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/data/1424
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (13 unrelated failures) As of commit 25062f2 with merge base 4ec4548.
FLAKY - The following job failed but was likely due to flakiness present on trunk.
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
LGTM!
```diff
@@ -17,8 +17,12 @@ class StopCriteria:
        dataset is seen exactly once. No wraparound or restart will be performed.

+    3) FIRST_DATASET_EXHAUSTED: Stop when the first dataset is exhausted.
+
+    4) CYCLE_FOREVER: Cycle through the datasets by reinitializing each exhausted source node.
+       This is useful when the trainer wants control over a certain number of steps instead of epochs.
```
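The `CYCLE_FOREVER` semantics described in the docstring above can be sketched as a round-robin reader that restarts any exhausted source instead of stopping. This is an illustrative toy, not the torchdata implementation: `cycle_forever` and the factory-based sources are assumptions made for the example, and sources are assumed non-empty.

```python
# Hypothetical sketch of CYCLE_FOREVER semantics (not the torchdata code):
# read sources round-robin and reinitialize any source that is exhausted,
# so the combined stream never raises StopIteration.
from itertools import islice

def cycle_forever(source_factories):
    """Yield items round-robin; restart a source whenever it is exhausted."""
    iterators = [factory() for factory in source_factories]
    while True:
        for i, factory in enumerate(source_factories):
            try:
                yield next(iterators[i])
            except StopIteration:
                # Reinitialize the exhausted source node and continue
                # (assumes each source yields at least one item).
                iterators[i] = factory()
                yield next(iterators[i])

sources = [lambda: iter([0, 1]), lambda: iter("ab")]
print(list(islice(cycle_forever(sources), 8)))  # → [0, 'a', 1, 'b', 0, 'a', 1, 'b']
```

Because the `StopIteration` is caught and the source rebuilt in place, a caller can take exactly as many steps as it wants with `islice` or a step counter, which is the "steps instead of epochs" use case the docstring mentions.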
Will this call set_epoch on the underlying datasets?
Wondering if we need to do anything special, or can advise users on how to perform that. This otherwise looks good to me
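One user-side pattern for the `set_epoch` concern raised above can be sketched as follows. This is an assumption-laden illustration, not torchdata's API: `EpochAwareSource`, its `set_epoch` method, and `reinitializing_iter` are all hypothetical names invented for the example, showing how an epoch counter could be bumped on each reinitialization so that per-epoch shuffling still advances under endless cycling.

```python
# Hypothetical sketch (not the torchdata implementation): call set_epoch
# each time a source is restarted, so epoch-seeded ordering keeps changing.
from itertools import islice

class EpochAwareSource:
    """Toy source whose order depends on the current epoch number."""
    def __init__(self, data):
        self.data = list(data)
        self.epoch = 0

    def set_epoch(self, epoch):
        self.epoch = epoch

    def __iter__(self):
        # Rotate by epoch as a stand-in for epoch-seeded shuffling.
        k = self.epoch % len(self.data)
        return iter(self.data[k:] + self.data[:k])

def reinitializing_iter(source):
    """Cycle one source forever, bumping the epoch on every restart."""
    epoch = 0
    while True:
        source.set_epoch(epoch)
        yield from source
        epoch += 1

src = EpochAwareSource([1, 2, 3])
print(list(islice(reinitializing_iter(src), 7)))  # → [1, 2, 3, 2, 3, 1, 3]
```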
This PR adds support for continuous cycling through each base node in the multi node weighted sampler. With the current setup this was easy to extend: we already cycle through each node to support `StopCriteria.CYCLE_UNTIL_ALL_DATASETS_EXHAUSTED`; for `StopCriteria.CYCLE_FOREVER` we simply skip raising a `StopIteration`. This should be functionally equivalent to having an infinite sampler plugged on top of each base dataset. The dataset order (as defined by `weights`) remains the same.
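The "infinite sampler on top of each base dataset" equivalence can be sketched with a weighted draw over sources that restarts an exhausted source transparently. This is an illustrative approximation, not the sampler's actual code: `weighted_cycle_forever`, the factory list, and the use of `random.choices` are assumptions for the sketch; the real sampler's weighted selection may differ.

```python
# Illustrative sketch (assumed behavior, not the torchdata source): draw a
# source according to `weights` each step; if it is exhausted, rebuild it
# and draw from the fresh iterator, so iteration never terminates.
import random
from itertools import islice

def weighted_cycle_forever(factories, weights, seed=0):
    rng = random.Random(seed)
    iterators = [f() for f in factories]
    keys = list(range(len(factories)))
    while True:
        i = rng.choices(keys, weights=weights)[0]
        try:
            yield next(iterators[i])
        except StopIteration:
            iterators[i] = factories[i]()  # restart instead of stopping
            yield next(iterators[i])

facts = [lambda: iter(["a1", "a2"]), lambda: iter(["b1"])]
items = list(islice(weighted_cycle_forever(facts, [0.7, 0.3]), 10))
print(len(items))  # always 10: the stream never ends
```

Since every `StopIteration` is absorbed by a restart, the stream length is bounded only by the caller, matching the PR's claim that the trainer controls the number of steps rather than the number of epochs.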