GKE autopilot events: Node is not ready: lack of PodCIDR?


GKE Autopilot shows numerous Kubernetes "Node is not ready" events. The logs show: "Runtime network not ready: Network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR"

The nodes do show a PodCIDR: each has a /26, i.e. 2^(32-26) = 64 IP addresses (see below).
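For reference, the per-node capacity follows directly from the mask length; a quick shell check (plain arithmetic, nothing cluster-specific):

```shell
# A /26 mask leaves 32 - 26 = 6 host bits, so each node range holds 2^6 addresses
echo $((2 ** (32 - 26)))
# 64
```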

I have ~30 cronjobs running at intervals ranging from 1 minute to 1 day, plus 2 services. The workloads show no errors or warnings. GKE version: 1.28.3-gke.1286000.

I added secondary IP address ranges following https://cloud.google.com/kubernetes-engine/docs/how-to/multi-pod-cidr#add_more_pod_ranges_autopilot and the question "GKE autopilot with shared vpc ip exhausted".
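For context, the ranges were added roughly as the linked doc describes. This is a sketch only: the cluster name and region below are placeholders (inferred, not confirmed), and the named secondary ranges must already exist on the cluster's subnet.

```shell
# Attach existing secondary subnet ranges as additional pod ranges
# (cluster name and region are assumptions for illustration)
gcloud container clusters update pvlive-autopilot \
    --region=REGION \
    --additional-pod-ipv4-ranges=pvlivesecondaryipv4range2,pvlivesecondaryipv4range3
```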

Pod IPv4 address range (default):        10.84.0.0/17
Cluster pod IPv4 ranges (additional):
  pvlivesecondaryipv4range2 (10.0.0.0/16)
  pvlivesecondaryipv4range3 (240.10.0.0/17)

I expected the nodes to use the additional ranges ("Cluster pod IPv4 ranges (additional)"). The "Cluster pod IPv4 ranges" tooltip in the cluster details says: "GKE will allocate a pod range from the list of default and additional pod ranges to node pools automatically".

However, the nodes do not appear to use the additional ranges; every PodCIDR below comes from the default 10.84.0.0/17 range:

$ kubectl describe node  | egrep 'Name:|PodCIDR|CreationTimestamp:'
Name:               gk3-pvlive-autopilot-pool-2-3a70ba4f-khfj
CreationTimestamp:  Tue, 16 Jan 2024 09:53:01 +0000
PodCIDR:                      10.84.0.64/26
PodCIDRs:                     10.84.0.64/26
Name:               gk3-pvlive-autopilot-pool-2-3b206b5e-hsw5
CreationTimestamp:  Tue, 16 Jan 2024 10:11:02 +0000
PodCIDR:                      10.84.0.0/26
PodCIDRs:                     10.84.0.0/26
Name:               gk3-pvlive-autopilot-pool-2-3b206b5e-rmpd
CreationTimestamp:  Tue, 16 Jan 2024 10:08:56 +0000
PodCIDR:                      10.84.0.192/26
PodCIDRs:                     10.84.0.192/26
Name:               gk3-pvlive-autopilot-pool-2-472e39ca-kspl
CreationTimestamp:  Tue, 16 Jan 2024 08:44:54 +0000
PodCIDR:                      10.84.1.0/26
PodCIDRs:                     10.84.1.0/26
$ 

How can I fix whatever is causing these Kubernetes events and log warnings? Or are they safe to ignore? Thanks
