[WIP] Fix client OOM when some bookies are slow #4556

Open
wants to merge 1 commit into master
Conversation

@dao-jun commented Feb 20, 2025

Related to:
apache/pulsar#12169
apache/pulsar#9562
apache/pulsar#10439
#3139
apache/pulsar#14861
etc.

Background

Our customer runs a cluster of 12 bookie nodes and 12 broker nodes.
Pulsar version: 2.6.3
BookKeeper version: 4.11.1

They enabled the BookKeeper client addEntryTimeout feature and set addEntryTimeoutSec to 30.

At first their EWA (ensemble size / write quorum / ack quorum) was 3:3:2, and they encountered broker OOM exceptions.
Following apache/pulsar#12169, we recommended they set EWA to 2:2:2 and observe for a period of time.
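For readers not familiar with the shorthand, here is a minimal sketch of what these two settings mean at the BookKeeper client API level. This is an illustration only, not the customer's configuration: the metadata URI and the empty ledger password are placeholders, and in Pulsar the quorum sizes are normally set through the managedLedgerDefault* broker settings rather than direct createLedger calls.

```java
import org.apache.bookkeeper.client.BookKeeper;
import org.apache.bookkeeper.client.LedgerHandle;
import org.apache.bookkeeper.conf.ClientConfiguration;

public class EwaExample {
    public static void main(String[] args) throws Exception {
        ClientConfiguration conf = new ClientConfiguration();
        // Placeholder metadata service URI.
        conf.setMetadataServiceUri("zk+hierarchical://localhost:2181/ledgers");
        // The addEntryTimeout feature: add requests outstanding longer than
        // this many seconds are failed back to the caller.
        conf.setAddEntryTimeout(30);

        BookKeeper bk = new BookKeeper(conf);

        // "EWA 332": ensemble size 3, write quorum 3, ack quorum 2.
        LedgerHandle lh332 = bk.createLedger(3, 3, 2,
                BookKeeper.DigestType.CRC32, new byte[0]);

        // "EWA 222": ensemble size 2, write quorum 2, ack quorum 2.
        LedgerHandle lh222 = bk.createLedger(2, 2, 2,
                BookKeeper.DigestType.CRC32, new byte[0]);

        lh332.close();
        lh222.close();
        bk.close();
    }
}
```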

After a few days, they encountered broker OOM exceptions again.

So we suspected the broker might have a memory leak and asked them to enable the Netty ByteBuf leak detector (add -Dpulsar.allocator.leak_detection=Paranoid to the broker JVM args and restart).

But searching for the LEAK keyword in their broker logs turned up nothing, which means there was no memory leak in the broker.

We found log lines like `New ensemble: [aaa,bbb] is not adhering to Placement Policy. quarantinedBookies: [xxx]` in their logs, and the quarantined bookie was always the same one.

We checked the monitoring for this bookie and found it had received no traffic for a long time (weeks), so we tried to restart it; it would not shut down until we ran kill -9, which suggests the bookie had run into thread blocking or something similar and could not respond to requests.

After we restarted that bookie, no more broker OOMs occurred and the brokers have been running well.

When I analyzed the broker heap dump, I found several Netty channels holding a large amount of direct memory, and all of these channels were connected to that quarantined bookie (see the heap dump screenshot attached to this PR).
There were 6 channels retaining over 100 MB of direct memory each.

Because our customer enabled the addEntryTimeout feature, broker backpressure does not work in this case.
Enabling failfast can prevent the situation from escalating, but it does not solve the root cause.
If we set EWA to 3:3:2 and one bookie is SLOW or HANGING, an OOM can still happen.
If we set EWA to 2:2:2 and disable addEntryTimeout, and one bookie is SLOW or HANGING, the broker may not be able to serve requests.

The key point is that if a bookie is slow or hanging and failfast is not enabled, the client keeps sending data to it even though the data cannot be sent out, so all of that data backlogs inside the client, as illustrated by the sketch below.
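To make the mechanism concrete, here is a minimal illustration (my own sketch, not code from this PR or from the BookKeeper client) of how Netty behaves when a peer stops reading: every write is parked in the channel's outbound buffer and holds its direct memory until the peer drains it, so a writer that ignores the channel's writability signal accumulates memory without bound. The SlowPeerWriter class and tryWrite method are hypothetical names used only for this illustration.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;

final class SlowPeerWriter {
    private SlowPeerWriter() {}

    /**
     * Writes an entry only while the channel stays below Netty's write-buffer
     * high-water mark; otherwise the caller should back off or fail fast
     * instead of letting buffers backlog indefinitely.
     */
    static boolean tryWrite(Channel channel, ByteBuf entry) {
        if (!channel.isWritable()) {
            // Backpressure signal: the outbound buffer already holds more
            // pending bytes than the configured high-water mark.
            entry.release();
            return false;
        }
        channel.writeAndFlush(entry, channel.voidPromise());
        return true;
    }
}
```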

Motivation

Fix the BookKeeper client running out of memory when a bookie in the ensemble is SLOW or HANGING.

Changes

Close all the channels connected to a quarantined bookie to release the memory backlogged on them (a rough sketch of the idea follows).
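For illustration only, a rough sketch of the idea under stated assumptions: it assumes the client keeps some per-bookie view of its Netty channels. The channelsByBookie map, the QuarantineChannelCloser class, and the onBookieQuarantined hook below are hypothetical names, not the actual BookKeeper client internals touched by this patch.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import io.netty.channel.Channel;
import org.apache.bookkeeper.net.BookieId;

class QuarantineChannelCloser {
    // Hypothetical view of the channels the client holds per bookie.
    private final Map<BookieId, List<Channel>> channelsByBookie = new ConcurrentHashMap<>();

    /**
     * Hypothetical hook invoked when the client quarantines a bookie.
     * Closing the channels fails the outstanding writes and lets Netty
     * release the ByteBufs backlogged for the slow or hanging bookie.
     */
    void onBookieQuarantined(BookieId bookie) {
        List<Channel> channels = channelsByBookie.remove(bookie);
        if (channels == null) {
            return;
        }
        for (Channel ch : channels) {
            // close() is asynchronous; pending writes are failed and their
            // buffers released once the close completes.
            ch.close();
        }
    }
}
```

The intended effect is that the backlogged add requests fail quickly and their buffers are freed, instead of piling up more data for a bookie that cannot accept it.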
