The directories to be created/managed should be dynamic, read from the values.yaml file of the Helm release.
The DaemonSet definition I came up with is below:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: create-directory
spec:
  selector:
    matchLabels:
      app: create-directory
  template:
    metadata:
      labels:
        app: create-directory
    spec:
      # use the default service account with the SCC role binding
      serviceAccountName: default
      containers:
        - name: create-directory
          image: "{{ .Values.container.image }}"
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args:
            - |
              {{- range .Values.directories }}
              dir_path="/var/log/at/{{ . }}"
              echo "$dir_path"
              # Check if the directory exists
              if [ -d "$dir_path" ]; then
                echo "Directory $dir_path already exists."
              else
                # Create the directory
                mkdir -p "$dir_path" && echo "Directory $dir_path created."
              fi
              {{- end }}
              while true; do
                sleep 100000
              done
          # run the container as root
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: at-volume
              mountPath: /var/log/at
      volumes:
        - name: at-volume
          hostPath:
            path: /var/log/at
where the values.yaml file for the Helm chart looks like this:
container:
  image: registry.access.redhat.com/ubi8/toolbox:8.5
directories:
  - directory1
  - directory2
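For reference, with these values the {{- range }} block in the DaemonSet renders to a script roughly like this (an illustrative rendering, not part of the chart):

  dir_path="/var/log/at/directory1"
  echo "$dir_path"
  if [ -d "$dir_path" ]; then
    echo "Directory $dir_path already exists."
  else
    mkdir -p "$dir_path" && echo "Directory $dir_path created."
  fi
  dir_path="/var/log/at/directory2"
  ... (the same block repeated for each entry)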
I need guidance on the following:
- Is my approach of using sleep to keep the pod alive, and to stop it from being rescheduled once the directory check script has run, correct, or does the community recommend something else?
- How can I remove directories once they are removed from the directories list in values.yaml?
UPDATE: For (1) I found a solution: "Run-once Kubernetes DaemonSet pods"
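A minimal sketch of that pattern (assuming the usual init-container approach; the linked post may differ in details): do the work once in an initContainer and keep the pod Running with a tiny pause container instead of a sleep loop in the main container:

  spec:
    initContainers:
      - name: create-directory
        image: "{{ .Values.container.image }}"
        command: ["/bin/sh", "-c"]
        args:
          - |
            {{- range .Values.directories }}
            mkdir -p "/var/log/at/{{ . }}"
            {{- end }}
        securityContext:
          runAsUser: 0
        volumeMounts:
          - name: at-volume
            mountPath: /var/log/at
    containers:
      - name: pause
        # registry.k8s.io/pause is a minimal no-op image that simply stays Running
        image: registry.k8s.io/pause:3.9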
For question #2, you can try something along these lines: remove all the directories except the ones listed. See here for more examples.
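A minimal sketch of such a cleanup step (hypothetical, since the exact snippet is not reproduced above): render a keep-list from .Values.directories with Helm, then delete every first-level subdirectory of /var/log/at that is not on it:

  # keep-list rendered by Helm, e.g. "directory1 directory2 "
  keep="{{ range .Values.directories }}{{ . }} {{ end }}"
  for d in /var/log/at/*/; do
    [ -d "$d" ] || continue            # skip if the glob matched nothing
    name="$(basename "$d")"
    case " $keep " in
      *" $name "*) ;;                  # listed in values.yaml: keep it
      *) rm -rf "$d" && echo "Removed $d" ;;
    esac
  done

This could run at the top of the same args script, so each time the values change and the pod is recreated, stale directories are pruned before the current ones are (re)created.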