I've been trying to deploy an IPv6-native EKS cluster for a week now without success. The cluster itself is created successfully, but the managed node groups fail.
I've tried to:
- use IPv6-only public subnets -> it tells me that IPv4 is required (SubnetInvalidConfiguration);
- use dual-stack subnets (IPv4 and IPv6) with "Automatically assign public IPv4" disabled -> the nodes fail to join the cluster: NodeCreationFailure, Message=Instances failed to join the kubernetes cluster;
- put the managed node group in a private subnet -> also doesn't work (SubnetInvalidConfiguration).
I made these attempts using both the AWS Console and the Terraform EKS module.
In the end I figured I must be doing something wrong, so I decided to strictly follow this AWS doc using eksctl, which behind the scenes creates CloudFormation stacks that do all the work: VPC creation, subnets, required roles, node groups and so on. It turns out that even this way it failed, with the same NodeCreationFailure / "Instances failed to join the kubernetes cluster" error as in attempt 2 above.
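For reference, what I fed to eksctl was essentially the ClusterConfig from that doc; a minimal version looks roughly like this (the cluster name, region, Kubernetes version and instance type below are placeholders, not my exact values):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-ipv6-cluster   # placeholder name
  region: eu-west-1       # placeholder region
  version: "1.29"         # placeholder Kubernetes version

# This is the part that makes the cluster IPv6-native
kubernetesNetworkConfig:
  ipFamily: IPv6

# The doc requires these addons plus OIDC for IPv6 clusters
addons:
  - name: vpc-cni
    version: latest
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest

iam:
  withOIDC: true

managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium   # placeholder instance type
    desiredCapacity: 2
```

I ran it with `eksctl create cluster -f cluster.yaml`: the cluster and VPC stacks complete fine, and it's the managed node group stack that fails.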

