I have a K8S cluster with an Nginx ingress controller, linkerd, etc.
I want to apply strict network policies, like blocking ingress and egress connections in the entire namespace.
This works, but some services need access to the Kubernetes API server. Since NetworkPolicy rules match IP addresses rather than DNS names, I can't use the service domain kubernetes.default.svc.cluster.local and must provide the API server's IP as a CIDR instead:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kube-server
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 1.2.3.4/32 # K8S API endpoint
      ports:
        - port: 443
I got that IP thanks to this question.
Now, this cluster is not running 24/7; it's shut down and restarted multiple times a week. This causes the K8S API IP to change on each restart, breaking my network policies, so I have to update the rules manually.
Is there any way to solve this issue, or do I need to start thinking about implementing some automation to update the policies after the restart?
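For reference, the automation I'm considering would be roughly the following sketch (it assumes the default `kubernetes` Service in the `default` namespace publishes the API server address in its Endpoints, and that the policy is the `allow-kube-server` one above):

```shell
#!/bin/sh
# Sketch: refresh the policy's CIDR after each cluster restart.

# Look up the current API server IP from the default "kubernetes" Service endpoints.
API_IP=$(kubectl get endpoints kubernetes -n default \
  -o jsonpath='{.subsets[0].addresses[0].ip}')

# Patch the first egress rule's ipBlock in the allow-kube-server policy.
kubectl patch networkpolicy allow-kube-server --type=json -p \
  "[{\"op\":\"replace\",\"path\":\"/spec/egress/0/to/0/ipBlock/cidr\",\"value\":\"${API_IP}/32\"}]"
```

This would have to run on every restart (e.g. from a startup hook), which is exactly the moving part I'd like to avoid.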
You can restrict the traffic based on namespace or pod label selectors instead of an ipBlock, as shown in the official documentation. Just match the labels of the kube-apiserver Pods; labels survive restarts even when IPs don't.
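A minimal sketch of that approach, assuming a kubeadm-style cluster where the API server Pods run in kube-system with the label component=kube-apiserver and listen on port 6443 (verify the labels and port in your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kube-server
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        # Select the kube-apiserver Pods by label rather than by IP.
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              component: kube-apiserver
      ports:
        - port: 6443
```

Because the selector is evaluated against current Pod labels, the rule keeps working after a restart without any patching.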