Kubernetes System Pool starts after User Pool

Today Azure seems to be having issues across several services, and this prompted me to take a closer look at some strange behavior in our Kubernetes cluster.

My configuration is the following (a rough CLI sketch of this layout follows the list):

  1. The cluster is configured as multi-zone
  2. A node pool with one Linux VM (system)
  3. A node pool with one Linux VM (user)
  4. A node pool with one Windows VM (user)
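
For reference, a setup like this can be created roughly as follows. The resource group, cluster, and pool names below are made up, and the Windows pool additionally requires the cluster to be created with the Azure CNI network plugin and Windows admin credentials, which I leave out here:

    # Cluster with a one-node Linux system pool spread across zones
    az aks create --resource-group myRG --name myAKS \
      --nodepool-name syspool --node-count 1 --zones 1 2 3 \
      --generate-ssh-keys

    # One-node Linux user pool
    az aks nodepool add --resource-group myRG --cluster-name myAKS \
      --name usrpool --mode User --node-count 1 --zones 1 2 3

    # One-node Windows user pool (Windows pool names have to be short)
    az aks nodepool add --resource-group myRG --cluster-name myAKS \
      --name winp --mode User --os-type Windows --node-count 1 --zones 1 2 3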

What I noticed is that most of the time the Linux (user) VM starts earlier than the Linux (system) VM, and some pods that were supposed to start on Linux (system) end up on Linux (user), which results in higher resource consumption on Linux (user).
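
For anyone who wants to reproduce the check, something like the following shows which node each system pod landed on and in what order the nodes came up (the agentpool label is the one AKS puts on its nodes; adjust if yours differs):

    # Which node did each kube-system pod land on?
    kubectl get pods -n kube-system -o wide

    # In what order were the nodes created, and which pool does each belong to?
    kubectl get nodes --sort-by=.metadata.creationTimestamp \
      -o custom-columns=NAME:.metadata.name,POOL:.metadata.labels.agentpool,CREATED:.metadata.creationTimestamp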

I know there are some system pods that are supposed to run on Linux (user) nodes, for example to collect metrics or to run the proxy, but that is not the case here.

I double-checked that the pods I am talking about have node affinity for Linux and a toleration for the CriticalAddonsOnly NoSchedule taint.
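
This is roughly how I inspect those settings on one of the affected pods (the pod name is a placeholder, and the exact taint key on your system pool may differ):

    # Show the scheduling constraints of the pod in question
    kubectl get pod <pod-name> -n kube-system -o yaml \
      | grep -A 5 -E 'tolerations:|nodeAffinity:|nodeSelector:'

    # The output should include something along these lines:
    #   tolerations:
    #   - key: CriticalAddonsOnly
    #     operator: Exists
    #     effect: NoSchedule
    #   nodeSelector:
    #     kubernetes.io/os: linux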

So when I delete them, they restart on Linux (system), which shows that the proper place for these pods was Linux (system) from the beginning.
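
What I do is simply delete the pod and watch where the replacement gets scheduled (the pod name is a placeholder):

    # Delete the mis-scheduled pod and watch where its replacement lands
    kubectl delete pod <pod-name> -n kube-system
    kubectl get pods -n kube-system -o wide -w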

Checking more carefully, some pods that were supposed to be on Linux (system) show a warning that no nodes are available, which makes me think that Linux (user) is starting before Linux (system).
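
The warnings show up in the scheduler events, which can be listed with something like this (names are placeholders):

    # Scheduling events for a single pending pod
    kubectl describe pod <pod-name> -n kube-system

    # Or across the namespace, most recent last
    kubectl get events -n kube-system --sort-by=.lastTimestamp \
      | grep -E -i 'FailedScheduling|no nodes available'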

Is there an explanation for this behavior, or a way to fix this problem?
