
My Proxmox provider allows me to set NODE_IP_RANGES: "[10.4.0.20-10.4.0.254]"; I would also like to set VIP_IP_RANGES. #297

Closed
lknite opened this issue Oct 18, 2024 · 4 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments


lknite commented Oct 18, 2024

Describe the solution you'd like
It seems IPAM is already capable of supporting this; I think maybe the IPAM project just isn't encouraging providers to implement it.

I'd like the IPAM project to add the documentation and stubbed-out methods, or whatever else is necessary, to encourage providers to also implement setting the VIP in addition to the node IPs.

Anything else you would like to add:
I wouldn't be surprised if the Proxmox provider already implements this, but I'm finding a lack of documentation and Slack channel responses. I think I might have read that it was a non-goal; I'd like it to be added as a goal. The IPAM project would only have to decide, it wouldn't need to force anyone. Without the VIP being set automatically, I'm having to look for available IPs and set it myself each time, which is exactly the problem IPAM solved for node IPs.

It would be easier for me to ask the cluster-api provider to add a feature defined by the IPAM project than to create something unique just for myself. Similarly, if I am to be the implementor of the feature for the provider, I'd like to follow a standard so I wouldn't be implementing something unique to a single provider. I had the same need when using the vSphere provider, and I suspect it's pretty much the same for all providers.

/kind feature


schrej commented Oct 21, 2024

This topic has come up multiple times already. In its current form, the IPAM contract is only intended for allocating addresses for the individual machines of a cluster deployed using CAPI, as that was otherwise only possible with rather hacky implementations. Anything else is out of scope of the IPAM contract.

With that said, it's still possible to allocate additional addresses, like a cluster address, using the in-cluster IPAM provider. For other providers that might not be the case; the Infoblox provider, for example, expects (indirect) owner references to a Machine.
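
For illustration, here's a minimal sketch of what claiming a VIP by hand from the in-cluster provider could look like, using a standalone IPAddressClaim that isn't owned by any Machine. The pool, namespace, and claim names are made up, and the import paths and API versions shown (cluster-api's exp/ipam v1beta1 types, the InClusterIPPool kind) may differ between cluster-api and in-cluster-provider releases, so treat it as a sketch rather than a recipe.

```go
// Sketch only: claims one address from an InClusterIPPool, independent of any
// Machine, and prints the allocated VIP. Names ("vip-pool", "default",
// "my-cluster-vip") are placeholders; API versions may differ per release.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/utils/ptr"
	ipamv1 "sigs.k8s.io/cluster-api/exp/ipam/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	ctx := context.Background()

	// Build a client that knows about the CAPI IPAM types.
	scheme := runtime.NewScheme()
	_ = ipamv1.AddToScheme(scheme)
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	// A claim against a pool managed by the in-cluster IPAM provider.
	claim := &ipamv1.IPAddressClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "my-cluster-vip", Namespace: "default"},
		Spec: ipamv1.IPAddressClaimSpec{
			PoolRef: corev1.TypedLocalObjectReference{
				APIGroup: ptr.To("ipam.cluster.x-k8s.io"),
				Kind:     "InClusterIPPool",
				Name:     "vip-pool",
			},
		},
	}
	if err := c.Create(ctx, claim); err != nil {
		panic(err)
	}

	// The provider fulfils the claim by creating an IPAddress object and
	// pointing Status.AddressRef at it; poll until that happens.
	for claim.Status.AddressRef.Name == "" {
		time.Sleep(2 * time.Second)
		if err := c.Get(ctx, client.ObjectKeyFromObject(claim), claim); err != nil {
			panic(err)
		}
	}

	// Read the allocated address back from the IPAddress object.
	var addr ipamv1.IPAddress
	key := client.ObjectKey{Namespace: claim.Namespace, Name: claim.Status.AddressRef.Name}
	if err := c.Get(ctx, key, &addr); err != nil {
		panic(err)
	}
	fmt.Println("allocated VIP:", addr.Spec.Address)
}
```

Whatever actually answers on that address (kube-vip, keepalived, a load balancer) is still up to the user, which is the part described below as provider-specific.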

It also heavily depends on the provider whether it makes sense for it to allocate a cluster address. In cloud environments it's straightforward: just create a load balancer and set its IP as the cluster address. For on-premise providers it depends on the specific kind of infrastructure in use. Proxmox might support something like load balancers (or the provider just abstracts it away somehow), while other providers like metal3 leave the entire topic to the user, since there are multiple options for handling VIPs without load balancers.

If a provider wants to implement this independently, I'd recommend allowing a reference to an IPPool to be set instead of specifying the address directly.
ClusterClass could also be interesting here, and might be an alternative to custom implementations by infra providers.
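
To make that suggestion concrete, here's a hypothetical shape for such a provider API. None of these fields exist in any provider today; FooClusterSpec and the field names are invented for illustration.

```go
// Hypothetical only: a provider-side spec that references an IPPool for the
// cluster VIP instead of requiring a hard-coded controlPlaneEndpoint.
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// FooClusterSpec is an invented infrastructure cluster spec.
type FooClusterSpec struct {
	// ControlPlaneEndpoint is filled in by the provider once the VIP is known.
	ControlPlaneEndpoint clusterv1.APIEndpoint `json:"controlPlaneEndpoint,omitempty"`

	// ControlPlaneEndpointPoolRef, if set, asks the provider to create an
	// IPAddressClaim against this pool and use the resulting address as the
	// cluster VIP, rather than having the user pick a free address manually.
	ControlPlaneEndpointPoolRef *corev1.TypedLocalObjectReference `json:"controlPlaneEndpointPoolRef,omitempty"`
}
```

A ClusterClass patch could set such a field consistently across clusters, which is roughly the alternative mentioned above.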


lknite commented Oct 21, 2024

Here's how I was thinking of implementing it, detailed in a feature request on the Proxmox provider GitHub page:
ionos-cloud/cluster-api-provider-proxmox#304

If you have a minute, take a look and let me know if you have any ideas on how it could be made into a standard that would work for more providers than just Proxmox. It feels like a good start toward a generic standard.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added and then removed the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jan 20, 2025

lknite commented Jan 20, 2025

I'll go ahead and close this out. I understand what you are saying.

@lknite closed this as completed on Jan 20, 2025