Replies: 4 comments 8 replies
-
Can you explain what specific problem this is causing for you? We have yet to receive any other complaints about this. I believe there must be a privileged pod (probably one of the CNI pods, or perhaps nginx) that is doing this, as there is nothing in RKE2, and nothing I am aware of in core Kubernetes, that sets this at a system level. For example, RKE2's systemd unit sets a higher limit than what you're reporting:
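For reference, here is one way to compare the two values on an affected node; this is a suggested check rather than something from the thread, and it assumes the default rke2-server.service unit name:

```sh
# Per-process open-file limit applied by the rke2-server systemd unit
systemctl show rke2-server --property=LimitNOFILE

# System-wide maximum number of open file handles (the value being reported)
sysctl fs.file-max

# A unit's LimitNOFILE cannot lower fs.file-max; that sysctl can only be
# changed by something with host-level access, e.g. a privileged pod writing
# to /proc/sys/fs/file-max.
```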
-
After the node runs for a while, it basically becomes unusable, e.g.:
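One way to confirm that the node is actually running out of system-wide file handles when this happens (an added suggestion, not part of the original reply):

```sh
# fs.file-nr prints three numbers: allocated handles, allocated-but-unused,
# and the maximum (fs.file-max). If the first value approaches the third,
# the system-wide limit is the bottleneck.
sysctl fs.file-nr
sysctl fs.file-max
```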
-
We've certainly never run into that. What makes you think that this is related to the max file descriptors? What are the resources (CPU/memory/disk) of the nodes in question?
-
On an Ubuntu VM, I see no change when enabling rke2-server with your configuration:
Same with the agent as well.
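A minimal sketch of that kind of before/after check (the exact configuration used in the thread is not shown here):

```sh
# Value before RKE2 starts
sysctl fs.file-max

# Enable and start the server service (use rke2-agent for the agent role)
systemctl enable --now rke2-server

# Value after the node has settled and pods are running
sysctl fs.file-max
```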
-
Environmental Info:
RKE2 Version:
v1.30.5+rke2r1
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
N/A
Describe the bug:
After installing RKE2 (see below), fs.file-max gets set to 65k, which is much too low, especially for control plane nodes.
Steps To Reproduce:
Expected behavior:
fs.file-max to stay the same (2^64)
Actual behavior:
fs.file-max gets reduced to 2^16
Additional context / logs:
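As added context (not from the original report), one way to narrow down what is lowering the value is to check for sysctl configuration on the host; if nothing turns up there, the change is likely coming from a process writing to /proc/sys directly:

```sh
# Look for anything on the host that explicitly configures fs.file-max
grep -r "fs.file-max" /etc/sysctl.conf /etc/sysctl.d /usr/lib/sysctl.d /run/sysctl.d 2>/dev/null

# Current value for comparison
sysctl fs.file-max
```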