I have a setup with 2 pods managed by a StatefulSet behind a NodePort Service with an AWS ingress, and I send traffic to this Service through the ALB endpoint. Then I patch the StatefulSet down to 1 pod. I have configured the StatefulSet with a readiness probe and lifecycle hooks like this, to avoid sending traffic to an unhealthy pod.
Liveness and Readiness Probe:
livenessProbe:
  exec:
    command: ["/bin/sh", "-c", "reply=$(curl -s -o /dev/null -w %{http_code} http://127.0.0.1:80/health); if [ \"$reply\" -lt 200 -o \"$reply\" -ge 400 ]; then exit 1; fi; cat /tmp/.health-check"]
  initialDelaySeconds: 30
  timeoutSeconds: 10
readinessProbe:
  exec:
    command: ["/bin/sh", "-c", "reply=$(curl -s -o /dev/null -w %{http_code} http://127.0.0.1:80/health); if [ \"$reply\" -lt 200 -o \"$reply\" -ge 400 ]; then exit 1; fi; cat /tmp/.health-check"]
  timeoutSeconds: 10
  failureThreshold: 2
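Both probes combine two conditions: the HTTP status of the /health endpoint and the presence of the /tmp/.health-check sentinel file that the lifecycle hooks create and remove. A minimal offline sketch of that exit logic, with the curl call replaced by a stubbed status code (the function name and the stub are mine, not from the manifest):

```shell
# probe_check REPLY_CODE HEALTH_FILE
# Mirrors the probe command above: fail on a non-2xx/3xx status,
# then fail if the sentinel file has been removed by the preStop hook.
probe_check() {
  reply="$1"
  health_file="$2"
  if [ "$reply" -lt 200 -o "$reply" -ge 400 ]; then
    return 1
  fi
  cat "$health_file" >/dev/null 2>&1
}
```

With this logic, deleting /tmp/.health-check in preStop makes the readiness probe fail even while the web server itself still answers /health, which is what should pull the pod out of the Service's endpoints.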
Lifecycle Policy:
lifecycle:
  postStart:
    exec:
      command: ["/bin/touch", "/tmp/.health-check"]
  preStop:
    exec:
      command: ["sh", "-c", "rm -r /tmp/.health-check; sleep 60"]
Termination Grace Period:
terminationGracePeriodSeconds: 60
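One thing to watch: the grace-period countdown starts when the pod enters Terminating and includes the time spent in the preStop hook, so with sleep 60 and terminationGracePeriodSeconds: 60 the container can be SIGKILLed the moment the hook returns. Also, with the default readiness periodSeconds of 10 and failureThreshold: 2, it can take roughly 20 s after preStop fires before the pod is marked unready. A sketch of a combination that leaves more headroom (the numbers here are illustrative, not from the question):

```yaml
lifecycle:
  preStop:
    exec:
      # Fail the readiness probe immediately, then keep the server
      # alive long enough for endpoints / ALB targets to drain.
      command: ["sh", "-c", "rm -f /tmp/.health-check; sleep 60"]
# Leave headroom beyond the preStop sleep for a clean shutdown.
terminationGracePeriodSeconds: 90
```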
But when I patch the StatefulSet down to one pod, the second pod goes into the Terminating state and becomes unhealthy, yet it still receives traffic. How can I debug this issue?
Some timestamps and actions:
Patched STS: Sun Jun 25 16:23:28 UTC 2023
Pod became unhealthy (checked using kubectl get pods; READY showed 0/1): Sun Jun 25 16:23:47 UTC 2023
Last request received on Pod-2 (checked in Kibana): Sun Jun 25 16:24:38 UTC 2023
This isn't the desired behavior: once a pod goes into the Terminating state, traffic should no longer be sent to it. You should inspect the Endpoints object to confirm whether the pod is still an active endpoint. Run kubectl get endpoints SERVICE_NAME, or watch it continuously with kubectl get endpoints SERVICE_NAME -w; SERVICE_NAME should be the name of the K8s Service that you created. Confirm whether the unhealthy pod's IP address is still listed there.
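To automate that check, you can pipe the endpoints listing through a small filter. The helper below is hypothetical (the name and usage are mine); it reads the output of something like kubectl get endpoints SERVICE_NAME -o wide on stdin and reports whether a given pod IP is still present:

```shell
# ip_in_endpoints POD_IP
# Exit 0 if POD_IP appears in the endpoints listing read from stdin.
# Intended use (assumed service name):
#   kubectl get endpoints my-service -o wide | ip_in_endpoints 10.0.1.23
ip_in_endpoints() {
  grep -qw "$1"
}
```

If the terminating pod's IP is still listed well after the pod went unready, the problem is on the Kubernetes side (endpoints propagation). If it disappears quickly but traffic keeps arriving, look instead at the ALB target group's own health checks and deregistration delay, since the ALB drains targets on its own schedule.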