I'm using the k8s client-go library (https://github.com/kubernetes/client-go) to control and develop my application.
I have an issue when using a subPath of a persistent volume claim.
For example, I have a pod with two containers, and each container mounts its data to a subPath of the same persistent volume claim (an EFS file system): ORG1/DIR1 and ORG1/DIR2. Details below:
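For reference, this is roughly how I build the clientset in my application (a minimal sketch; the kubeconfig path is just a placeholder):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset loads a kubeconfig and builds the typed clientset my
// application uses to manage pods. The path is only an example.
func newClientset() (*kubernetes.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}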
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "rootpasswd"
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: ORG1/DIR1
  - name: php
    image: php:7.0-apache
    volumeMounts:
    - mountPath: /var/www/html
      name: site-data
      subPath: ORG1/DIR2
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: hpc-vinhha-test
When I delete this pod, Kubernetes only deletes the pod itself; the core library does not delete the pod's data on the persistent volume claim. So the data on the PVC becomes garbage and keeps growing bigger and bigger.
I want to delete all data in the subPaths ORG1/DIR1 and ORG1/DIR2 when the pod is deleted. Today I only delete the pod, roughly as in the sketch below.
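This is more or less what my code does now (assuming a client-go version with the context-aware API; namespace and pod name are taken from the example above). The call removes only the pod object, not the files under the subPaths:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePod deletes the pod object itself. The files the containers wrote
// under ORG1/DIR1 and ORG1/DIR2 on the EFS-backed PVC are left behind.
func deletePod(clientset *kubernetes.Clientset, namespace, name string) error {
	return clientset.CoreV1().Pods(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
}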
This is the YAML of the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"efs-claim","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"efs-sc"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2020-07-10T04:02:51Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: efs-claim
  namespace: default
  resourceVersion: "887409"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/efs-claim
  uid: ab66c2f7-744c-4d6f-a508-2bc90f0b1897
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: efs-sc
  volumeMode: Filesystem
  volumeName: efs-pv-shared
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  phase: Bound
Can you help me with this problem? I'm a newbie with Kubernetes and AWS EFS, so I don't have much experience with them :(
Thanks so much.