Pool chunk slices instead of reusing them #6

Merged: 2 commits merged into master from pc/chunk on Oct 31, 2024

Conversation

@PapaCharlie (Member) commented:
Similar to #2, where buffers are reused instead of pooled. Reusing a buffer instead of pooling it means its capacity is never released, so if a large spike in traffic causes the buffer to grow, the resources it holds onto are never returned even as traffic decreases. Those large buffers stick around without being used.

This change uses a `sync.Pool` instead of each `deltaSender` having its own buffer. This way, buffers are automatically released when they aren't used.

Here's a screenshot of the motivation behind this change: these unfreed buffers are responsible for about 20% of the observer's memory usage.

[screenshot: heap profile showing ~3 GB of unfreed buffers]

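For readers outside the PR, here's a minimal sketch of the pooling pattern described above. The `queuedUpdatesPool` and `queuedResourceUpdate` names match the review snippet further down; the package name, struct fields, and the `send` function are illustrative assumptions, not the PR's actual code:

```go
package deltasender // hypothetical package name

import "sync"

// queuedResourceUpdate stands in for the real update struct; its fields
// are omitted here.
type queuedResourceUpdate struct{}

// Pool the slices behind pointers so that Put/Get don't copy the slice
// header into a fresh interface allocation on every call.
var queuedUpdatesPool = sync.Pool{
	New: func() any { return new([]queuedResourceUpdate) },
}

func send(updates []queuedResourceUpdate) {
	// Take a pooled slice instead of holding a long-lived buffer on the
	// sender.
	queue := queuedUpdatesPool.Get().(*[]queuedResourceUpdate)
	*queue = append((*queue)[:0], updates...)

	// ... flush *queue to the client ...

	// Returning the slice lets the GC reclaim oversized buffers once a
	// traffic spike subsides, instead of pinning their capacity forever.
	queuedUpdatesPool.Put(queue)
}
```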
@ZoabKapoor left a comment:
Looks overall reasonable to me! One question: is that screenshot of 3GB from a local deployment? If so, can we add a screenshot of how much memory is used after this change, so we know that it improved things?

@PapaCharlie (Member, Author) replied:
No, the 3GB screencap is from prod, under loads I just can't reproduce locally; I physically do not have the hardware for it, haha. Every time I tried, it was shadowed by the actual ZK data and wasn't significant enough to show up.

```go
	// Return the queue that was too short to the pool
	queuedUpdatesPool.Put(queue)
}
queue = new([]queuedResourceUpdate)
```
@bohhyang commented on Oct 31, 2024:
For my knowledge, this creates a new slice, but what happens to the shorter one? Is it still in the pool or released? Or did it just get extended?

@PapaCharlie (Member, Author) replied:
You can see just above: if the slice we get from the pool is too short, it gets returned to the pool. I left a comment on what it means if `ok` is true.
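In other words, the too-short slice isn't resized or destroyed: it goes back into the pool, where a smaller batch can still pick it up, or the GC can eventually drop it. Here's a sketch of that flow, reusing the pool from the earlier sketch; `getQueue` and `needed` are hypothetical names, not the PR's actual code:

```go
func getQueue(needed int) *[]queuedResourceUpdate {
	queue := queuedUpdatesPool.Get().(*[]queuedResourceUpdate)
	if cap(*queue) < needed {
		// Return the queue that was too short to the pool; it stays
		// available for smaller batches, and the runtime may collect
		// it at a future GC cycle if nothing claims it.
		queuedUpdatesPool.Put(queue)
		queue = new([]queuedResourceUpdate)
	}
	*queue = (*queue)[:0]
	return queue
}
```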

@PapaCharlie merged commit 3c5cb8c into master on Oct 31, 2024; 2 checks passed.
@PapaCharlie deleted the pc/chunk branch on October 31, 2024.