
Handle new nodes scaled up and down with new names #1569

Open
allensiho opened this issue Jan 4, 2024 · 1 comment
Labels: Enhancement (New feature or request)


allensiho commented Jan 4, 2024

It seems DiskPool requires you to know the node names beforehand:

cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: pool-on-node-3
  namespace: mayastor
spec:
  node: node3
  disks: ["/dev/sdc"]
EOF

This is problematic when you do not have this information in advance: on Azure Kubernetes Service (AKS), nodes are added and removed on demand by the cluster autoscaler, and each new node gets a new name.

I think it would be better to target disk pools by a common node label, if possible.

That way, any new node spun up with this common label would automatically have a disk pool associated with it.

https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler?tabs=azure-cli
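
For illustration, such a pool spec might look something like the sketch below; the nodeSelector field and the openebs.io/role label are both made up here, since the current DiskPool CRD only accepts a fixed node name:

cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: pool-on-storage-nodes
  namespace: mayastor
spec:
  # Hypothetical field: select nodes by label instead of a fixed name.
  nodeSelector:
    openebs.io/role: storage
  disks: ["/dev/sdc"]
EOF

On AKS the common label could be applied at node-pool creation time (for example with the --labels flag of az aks nodepool add), so autoscaled nodes come up already carrying it.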

tiagolobocastro (Contributor) commented

This is an interesting problem, and we might be able to solve it in a few different ways:

  1. A k8s/AKS-specific component could detect pool disks being moved to another node and update the control plane with the new node name (a minimal sketch of the simpler new-node case follows this list).
  2. The data plane could detect new disks and check with the control plane whether any disks have moved.
  3. The control plane itself could probe node disks and check whether any have moved (this can probably work together with 2).
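
For option 1, the component could also cover the new-node case from the issue description. A very rough polling sketch with plain kubectl is below; it assumes new storage nodes carry a common label (openebs.io/role=storage, an invented key for this example) and expose the same device path (/dev/sdc). A real component would watch node and disk events rather than poll:

while true; do
  # Ensure every labelled node has a DiskPool; skip nodes that already do.
  for node in $(kubectl get nodes -l openebs.io/role=storage \
      -o jsonpath='{.items[*].metadata.name}'); do
    kubectl get diskpool "pool-on-${node}" -n mayastor >/dev/null 2>&1 && continue
    cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: pool-on-${node}
  namespace: mayastor
spec:
  node: ${node}
  disks: ["/dev/sdc"]
EOF
  done
  sleep 30
done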
