hugepage_reset: Test compatible with different NUMA topologies #4237
base: master
Conversation
Please @JinLiul, could you test this PR whenever you have an 8 NUMA node system again? Thanks!
Hi @mcasquer, tested with an 8 NUMA node system.
As the test sets 8 hugepages, it works fine on systems with 2 NUMA nodes, but having e.g. 8 nodes will lead the on_numa_node variant to fail, since the bound node doesn't have enough hugepages. As the cfg already suggests allocating 1G hugepages at boot time, let the user decide how many hugepages to allocate, and add an informative comment in the cfg as well. Finally, if the system hugepage size is 1GB, allocate enough hugepages at runtime on all valid nodes. Signed-off-by: mcasquer <mcasquer@redhat.com>
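The per-node runtime allocation described above can be sketched roughly as follows. This is a hypothetical standalone illustration, not the actual test code; the helper names (`plan_allocation`, `allocate`) are made up, while the sysfs path follows the standard Linux layout for per-node 1G hugepage counts:

```python
HUGEPAGE_1G_KB = 1048576  # 1 GiB hugepage size, in kB


def node_hugepage_path(node):
    # Standard Linux sysfs entry holding the 1G hugepage count for one node
    return ("/sys/devices/system/node/node%d/hugepages/"
            "hugepages-%dkB/nr_hugepages" % (node, HUGEPAGE_1G_KB))


def plan_allocation(valid_nodes, pages_per_node):
    # Map each valid NUMA node to the number of 1G hugepages to request,
    # so every node the test may bind to has enough pages
    return {node: pages_per_node for node in valid_nodes}


def allocate(plan):
    # Write the requested count into each node's sysfs entry
    # (requires root; on real hosts the kernel may grant fewer pages)
    for node, pages in plan.items():
        with open(node_hugepage_path(node), "w") as f:
            f.write(str(pages))


if __name__ == "__main__":
    # e.g. request 2 x 1G hugepages on nodes 0 and 1
    print(plan_allocation([0, 1], 2))
```

Allocating on every valid node (rather than a fixed total of 8 pages) is what keeps the on_numa_node variant working regardless of how many nodes the host exposes.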
5e1c2c5 to b1013a9
Test results on an 8 NUMA node host (with the test loop a bit tuned 😁)
Please @JinLiul, could you test this PR again? Thanks!
Also passed on a 2 NUMA node host.
ID: 3254