Fix how chunk size is computed for throttling. #1158
Conversation
Signed-off-by: Cody Littley <cody@eigenlabs.org>
Signed-off-by: Cody Littley <cody@eigenlabs.org>
Signed-off-by: Cody Littley <cody@eigenlabs.org>
// computeInMemoryFrameSize computes the size of a blob's chunks in memory.
func computeInMemoryFrameSize(frames []*encoding.Frame) (uint64, error) {
Note: this is a short-term solution. The long-term solution is to not fully deserialize each frame and to leave them as byte arrays. This will result in the in-memory footprint being very close to the serialized footprint, and in that case we can skip this complex measurement.
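For illustration only, here is a minimal sketch of the kind of per-frame measurement described above; the frame fields, sizes, and names below are assumptions, not the actual encoding.Frame layout or this PR's implementation.

package sketch

import "fmt"

// frame stands in for encoding.Frame: assume a fixed-size proof plus a
// slice of 32-byte field-element coefficients.
type frame struct {
	Proof  [64]byte
	Coeffs [][32]byte
}

// computeInMemoryFrameSizeSketch sums the approximate in-memory footprint
// of the frames: each proof plus every coefficient the frame carries.
func computeInMemoryFrameSizeSketch(frames []*frame) (uint64, error) {
	var total uint64
	for _, f := range frames {
		if f == nil {
			return 0, fmt.Errorf("nil frame")
		}
		total += uint64(len(f.Proof)) + uint64(len(f.Coeffs))*32
	}
	return total, nil
}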
The long-term solution is to not fully deserialize each frame and to leave them as byte arrays.
How long-term is this solution? If we're making that change soon enough, I wonder whether this complex measurement logic is necessary in the meantime?
In order for correctness tests to function, a properly sized cache is necessary.
This solution can be replaced when I fix how the relay deserializes frames. This is a priority to complete, but it's a lower priority than finishing correctness testing. I'd like to fix this for the MVP, but it's possible it won't be fixed for the MVP if we can handle load tests without fixing it.
Signed-off-by: Cody Littley <cody@eigenlabs.org>
Signed-off-by: Cody Littley <cody@eigenlabs.org>
Signed-off-by: Cody Littley <cody@eigenlabs.org>
A couple of questions:
	return 0, fmt.Errorf("blob version %d not found in blob params map", header.BlobVersion)
}

return totalChunkSizeBytes / blobParams.NumChunks, nil
blobParams.NumChunks should never be 0, but let's still check for the condition.
if blobParams.NumChunks == 0 {
return 0, fmt.Errorf("numChunks is 0, this should never happen")
}
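For context, a sketch of the per-chunk computation with this guard folded in; the function signature and parameter names are illustrative rather than the exact code in this PR.

package sketch

import "fmt"

// computePerChunkSize divides the total serialized chunk payload by the
// configured chunk count, guarding against a zero count as suggested above.
func computePerChunkSize(totalChunkSizeBytes, numChunks uint64) (uint64, error) {
	if numChunks == 0 {
		return 0, fmt.Errorf("numChunks is 0, this should never happen")
	}
	return totalChunkSizeBytes / numChunks, nil
}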
Signed-off-by: Cody Littley <cody@eigenlabs.org>
Why are these changes needed?
Fixes how chunk sizes are calculated for throttling, and adds some new metrics related to throttling.
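As a rough illustration of how a measured chunk size can drive throttling, here is a sketch built on golang.org/x/time/rate; the limiter wiring and names are assumptions, not this repository's actual throttling or metrics code.

package sketch

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// throttleChunkRequest charges a request's total chunk bytes against a
// byte-rate limiter and rejects the request when the budget is exhausted.
func throttleChunkRequest(limiter *rate.Limiter, perChunkSizeBytes, numChunks uint64) error {
	totalBytes := perChunkSizeBytes * numChunks
	if !limiter.AllowN(time.Now(), int(totalBytes)) {
		return fmt.Errorf("request of %d bytes exceeds the configured byte rate limit", totalBytes)
	}
	return nil
}

A limiter created with rate.NewLimiter(rate.Limit(bytesPerSecond), burstBytes) would then bound the total bytes served per second across requests.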