On a homelab Kubernetes cluster running on Ubuntu 22.04 hosts, I want to install OpenEBS with an NFS provisioner via:
helm install openebs --namespace openebs openebs/openebs --create-namespace --set nfs-provisioner.enabled=true
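For reference, this should be equivalent to passing a values file (assuming the chart wires the NFS provisioner in as a subchart under the nfs-provisioner key, which is what the --set path implies):

helm install openebs openebs/openebs \
  --namespace openebs --create-namespace \
  -f values.yaml

where values.yaml contains:

nfs-provisioner:
  enabled: true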
At first all the pods are running:
kubectl get pods -n openebs
NAME                                           READY   STATUS    RESTARTS   AGE
openebs-localpv-provisioner-7c8ffb99f9-2xfr7   1/1     Running   0          13s
openebs-ndm-mx57f                              1/1     Running   0          13s
openebs-ndm-operator-56c5b679f7-49sgp          1/1     Running   0          13s
openebs-nfs-provisioner-74f4f7cffd-mwqh9       1/1     Running   0          13s
But immediately afterwards, this happens:
kubectl get pods -n openebs
NAME                                           READY   STATUS             RESTARTS      AGE
openebs-localpv-provisioner-85dd945f8b-fjjkf   0/1     CrashLoopBackOff   7 (47s ago)   17m
openebs-ndm-8rg56                              1/1     Running            0             17m
openebs-ndm-operator-56c5b679f7-cgwx8          1/1     Running            0             17m
openebs-nfs-provisioner-74f4f7cffd-kzdpn       0/1     CrashLoopBackOff   4 (73s ago)   6m31s
and the logs of openebs-nfs-provisioner-74f4f7cffd-kzdpn are:
E0417 14:44:28.621613 1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/openebs/endpoints/openebs.io-nfsrwx": context deadline exceeded
I0417 14:44:28.621705 1 leaderelection.go:278] failed to renew lease openebs/openebs.io-nfsrwx: timed out waiting for the condition
F0417 14:44:28.621732 1 controller.go:888] leaderelection lost
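If I read this correctly, the provisioner times out on a PUT to the API server at the kubernetes Service IP (10.96.0.1), loses its leader-election lease, and then exits fatally (the F line), which would explain the crash loop. To rule out general connectivity to the API service, I believe a one-off pod like this should test it (the pod name and curl image are arbitrary choices for the test):

kubectl run api-check -n openebs --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -sk -m 5 https://10.96.0.1:443/healthz

An "ok" response would suggest the Service network is reachable from pods in that namespace, and that the problem is specific to the provisioner.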
I can see that two endpoints are created:
kubectl get endpoints -n openebs
NAME                ENDPOINTS   AGE
openebs.io-local    <none>      13m
openebs.io-nfsrwx   <none>      13m
but they are never assigned any addresses. Has anyone seen this before, and is there a solution?
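Edit: as far as I understand, these two endpoints are the leader-election lock objects rather than Service endpoints, so the current holder (if any) would be recorded in an annotation on the object rather than in the ENDPOINTS column. I can post the output of this if it helps:

kubectl get endpoints openebs.io-nfsrwx -n openebs -o yaml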