
Add IPPoolRef to ProxmoxMachine to override the IP range. #427

Open
mcbenjemaa opened this issue Feb 21, 2025 · 3 comments · May be fixed by #429
Labels
enhancement (New feature or request), kind/feature

Comments

mcbenjemaa (Member) commented Feb 21, 2025

Describe the solution you'd like

Currently, the default network spec is defined in the ProxmoxCluster spec.
Since we started provisioning clusters in various zones, we need a way to change the IP range for specific machines.

Anything else you would like to add:

We started #423 to place specific machines on a chosen set of Proxmox nodes.
We need something similar to override the IP range of the default network while keeping backward compatibility.

Proposal:

Add a new struct in proxmoxmachines_types.go:

import corev1 "k8s.io/api/core/v1"

// IPPoolConfig defines the IPAM pool ref.
type IPPoolConfig struct {
	// IPv4PoolRef is a reference to an IPAM Pool resource, which exposes IPv4 addresses.
	// The network device will use an available IP address from the referenced pool.
	// This can be combined with `IPv6PoolRef` in order to enable dual stack.
	// +optional
	// +kubebuilder:validation:XValidation:rule="self.apiGroup == 'ipam.cluster.x-k8s.io'",message="ipv4PoolRef allows only IPAM apiGroup ipam.cluster.x-k8s.io"
	// +kubebuilder:validation:XValidation:rule="self.kind == 'InClusterIPPool' || self.kind == 'GlobalInClusterIPPool'",message="ipv4PoolRef allows either InClusterIPPool or GlobalInClusterIPPool"
	IPv4PoolRef *corev1.TypedLocalObjectReference `json:"ipv4PoolRef,omitempty"`

	// IPv6PoolRef is a reference to an IPAM pool resource, which exposes IPv6 addresses.
	// The network device will use an available IP address from the referenced pool.
	// This can be combined with `IPv4PoolRef` in order to enable dual stack.
	// +optional
	// +kubebuilder:validation:XValidation:rule="self.apiGroup == 'ipam.cluster.x-k8s.io'",message="ipv6PoolRef allows only IPAM apiGroup ipam.cluster.x-k8s.io"
	// +kubebuilder:validation:XValidation:rule="self.kind == 'InClusterIPPool' || self.kind == 'GlobalInClusterIPPool'",message="ipv6PoolRef allows either InClusterIPPool or GlobalInClusterIPPool"
	IPv6PoolRef *corev1.TypedLocalObjectReference `json:"ipv6PoolRef,omitempty"`
}

Then embed IPPoolConfig into NetworkDevice:

// NetworkDevice defines the required details of a virtual machine network device.
type NetworkDevice struct {
	...

	// IPPoolConfig defines the IP pool ref config.
	// For the default device 'net0' the IP pool is optional;
	// if not set, the default IPAM pool will be used.
	// For additional devices, an IP pool (IPv4/IPv6) is required.
	// +optional
	IPPoolConfig `json:",inline"`
}

And delete the IP pool config from InterfaceConfig.
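
For illustration, a minimal sketch of the precedence this implies for the default device (not code from the provider; the helper name resolveIPv4PoolRef and its placement are hypothetical):

package v1alpha1 // hypothetical placement alongside the API types

import corev1 "k8s.io/api/core/v1"

// resolveIPv4PoolRef sketches the intended precedence: a machine-level
// ipv4PoolRef (the proposed field) overrides the pool derived from the
// ProxmoxCluster default network spec; when the field is unset, behavior
// stays as it is today, which keeps the change backward compatible.
func resolveIPv4PoolRef(device *NetworkDevice, clusterDefault *corev1.TypedLocalObjectReference) *corev1.TypedLocalObjectReference {
	if device != nil && device.IPv4PoolRef != nil {
		return device.IPv4PoolRef
	}
	return clusterDefault
}

The same precedence would apply to ipv6PoolRef for dual-stack setups.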

Environment:

  • Cluster-api-provider-proxmox version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):
wikkyk (Collaborator) commented Feb 25, 2025

This can be done without needing a new API contract, am I seeing this right?

mcbenjemaa (Member, Author) commented:
@wikkyk Yes, without breaking changes, but I just wanted to share the proposal for adding the new field so it complies with the current structure.

wikkyk (Collaborator) commented Feb 25, 2025

I like it.
