Node labels not possible with include_instance_type_in_instance_name #425

Open
ValentinVoigt opened this issue Aug 28, 2024 · 1 comment

@ValentinVoigt

I'm trying to update our clusters to v2, but I'm having trouble with node labels. I think using labels (and possibly taints) on a cluster with include_instance_type_in_instance_name enabled is currently not possible; it works fine on masters, though. These are the last few lines of the output:

[Instance test-cx22-pool-asdf-worker1] [INFO]  systemd: Starting k3s-agent
[Instance test-cx22-pool-asdf-worker1] ...k3s has been deployed to worker test-cx22-pool-asdf-worker1.
[Node labels] 
Adding labels to masters_pool pool workers...
[Node labels] node/test-cx22-master1 not labeled
[Node labels] ...node labels applied
[Node labels] 
Adding labels to asdf pool workers...
error: resource(s) were provided, but no name was specified
[Node labels] : error: resource(s) were provided, but no name was specified

I also did some digging, and I think the issue might be related to this line of code, since it does not take the configured naming scheme into account. Not sure, though.
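
To make the suspected failure mode concrete (this is my guess, not verified against the source): the worker node is actually named test-cx22-pool-asdf-worker1, but the label step seems to compute the node name without the instance type, finds nothing under that name, and then invokes kubectl with a label but no node name. That reproduces the error exactly:

$ kubectl label nodes foo=bar    # what the tool effectively runs when the name comes up empty (my assumption)
error: resource(s) were provided, but no name was specified

With the full name, the same command works:

$ kubectl label nodes test-cx22-pool-asdf-worker1 foo=bar
node/test-cx22-pool-asdf-worker1 labeled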

Workaround: I just removed the labels and taints from the config file. Existing labels and taints will not be removed from the nodes.
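
Until this is fixed, labels and taints can also be applied by hand with plain kubectl (node name taken from the log above; substitute your own keys, values, and taint effect):

$ kubectl label --overwrite node test-cx22-pool-asdf-worker1 foo=bar
$ kubectl taint node test-cx22-pool-asdf-worker1 foo=bar:NoSchedule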

This is the config I used for testing. (I'm using an existing network.)

cluster.yaml
---
hetzner_token:
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.30.4+k3s1
schedule_workloads_on_masters: true
include_instance_type_in_instance_name: true
embedded_registry_mirror:
  enabled: false

networking:
  ssh:
    port: 22
    use_agent: true
    public_key_path: "~/.ssh/id_rsa.pub"
    private_key_path: "~/.ssh/id_rsa"
  allowed_networks:
    ssh:
      - 
      - 
    api:
      - 
      - 
  public_network:
    ipv4: true
    ipv6: false
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
    existing_network_name: "network-1"
  cni:
    enable: true
    encryption: false
    mode: flannel

datastore:
  mode: etcd
  external_datastore_endpoint: postgres://....

masters_pool:
  instance_type: cx22
  instance_count: 1
  location: nbg1
  labels:
    - key: foo
      value: bar

worker_node_pools:
  - name: asdf
    instance_type: cx22
    instance_count: 1
    location: nbg1
    labels:
      - key: foo
        value: bar
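
For reference, these are the node names this config produces with include_instance_type_in_instance_name enabled, as seen in the log above:

$ kubectl get nodes -o name
node/test-cx22-master1
node/test-cx22-pool-asdf-worker1
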
@vitobotta
Owner

Thanks for reporting this! It must have slipped during upgrade testing. Glad you figured out a workaround for now. I will fix this in the upcoming release, unless you are willing to make a PR for it :)
