I am installing Prometheus in my Kubernetes cluster like this:
helm install prometheus prometheus-community/prometheus \
--namespace reddwarf-monitor \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2"
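After the install the PVCs stayed Pending, so I checked what they were waiting on (substitute the actual claim name that kubectl get shows; prometheus-server is just this chart's default for my release name):

kubectl get pvc -n reddwarf-monitor
# the Events section explains why a claim is stuck, e.g. a missing storage class
kubectl describe pvc prometheus-server -n reddwarf-monitor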
and I found the PVC references a storage class, so I created the storage class like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  namespace: default
provisioner: fuseim.pri/ifs
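One thing worth noting: StorageClass is a cluster-scoped resource, so the namespace: default field above is simply ignored. To confirm the class was actually created, it can be listed directly:

kubectl get storageclass
kubectl describe storageclass gp2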
but the PVC still shows the error storageclass.storage.k8s.io "gp2" not found. I have already created the NFS client provisioner deployment like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 3659b4xxx-pli55.cn-shanghai.nas.aliyuncs.com
            - name: NFS_PATH
              value: /k8s/storageclass
      volumes:
        - name: nfs-client-root
          nfs:
            server: 3659b4ab7c-pli55.cn-shanghai.nas.aliyuncs.com
            path: /data/k8s/storageclass
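To check that the provisioner pod actually comes up and is watching for claims, its logs can be tailed via the label from the deployment above:

kubectl get pods -n default -l app=nfs-client-provisioner
kubectl logs -n default -l app=nfs-client-provisioner --tail=50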
and created the service account plus RBAC like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
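To double-check the RBAC wiring, kubectl auth can-i can impersonate the service account, for example:

kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:default:nfs-client-provisioner
kubectl auth can-i get storageclasses \
  --as=system:serviceaccount:default:nfs-client-provisioner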
Am I missing something? What should I do to make the PVC find the storage class? This is the Kubernetes version info:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:16:20Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.7", GitCommit:"84e1fc493a47446df2e155e70fca768d2653a398", GitTreeState:"clean", BuildDate:"2023-07-19T12:16:45Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.26) exceeds the supported minor version skew of +/-1
Should I put the PVC in the same namespace as the storage class? I could not find any namespace on the storage class.
Finally I found that Kubernetes 1.20 removed support for selfLink (https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1164-remove-selflink). The old quay.io/external_storage/nfs-client-provisioner image depends on selfLink, so switching to an image that does not use it fixes the issue: just replace the deployment image with
gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
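For reference, the swap can be done in place with kubectl set image (the container name comes from the deployment above); once the rollout finishes, the pending claims should be picked up and bound:

kubectl -n default set image deployment/nfs-client-provisioner \
  nfs-client-provisioner=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
kubectl -n default rollout status deployment/nfs-client-provisioner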