AWS EKS in existing VPC: context timeouts with EBS CSI driver


I'm using eksctl to create an EKS cluster:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: xxx-dev
  region: us-east-1
  version: "1.27"

vpc:
  id: vpc-xxx
  subnets:
    private:
      xxx-dev-private-a:
        id: subnet-xxx
      xxx-dev-private-b:
        id: subnet-xxx
      xxx-dev-private-c:
        id: subnet-xxx
    public:
      xxx-dev-public-a:
        id: subnet-xxx
      xxx-dev-public-b:
        id: subnet-xxx
      xxx-dev-public-c:
        id: subnet-xxx

nodeGroups:
  - name: dev-ng-postgres
    privateNetworking: true
    instanceType: t3.xlarge
    desiredCapacity: 3

eksctl create cluster -f eks/eks-cluster.yaml
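
To sanity-check that the cluster actually registered against the intended VPC and that the node group landed in the private subnets, something like this should work (cluster and node group names taken from the config above):

aws eks describe-cluster --name xxx-dev \
    --query "cluster.resourcesVpcConfig.{vpc:vpcId,subnets:subnetIds,clusterSG:clusterSecurityGroupId}"

eksctl get nodegroup --cluster xxx-dev --name dev-ng-postgres -o yaml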

export cluster_name=xxx-dev
oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve

eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace kube-system \
    --cluster xxx-dev \
    --role-name AmazonEKS_EBS_CSI_DriverRole \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve

eksctl create addon --name aws-ebs-csi-driver --cluster xxx-dev \
    --service-account-role-arn arn:aws:iam::account_id:role/AmazonEKS_EBS_CSI_DriverRole --force
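
Before blaming the VPC, it may be worth confirming the IRSA wiring itself: the addon should be ACTIVE with the role ARN attached, the controller's service account should carry the role annotation, and the controller pods should be healthy. A rough verification sketch (the deployment and container names are the driver's defaults as far as I know):

aws eks describe-addon --cluster-name xxx-dev --addon-name aws-ebs-csi-driver \
    --query "addon.{status:status,roleArn:serviceAccountRoleArn}"

kubectl -n kube-system get sa ebs-csi-controller-sa \
    -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'

kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-ebs-csi-driver

kubectl -n kube-system logs deployment/ebs-csi-controller -c ebs-plugin --tail=50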

I'm running the above to initialize the cluster.

I still get context deadline exceeded (timeout) errors when attempting to create EBS volumes. I believe the issue comes from my existing VPC. What should I check to find out what is blocking the requests?
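
For the existing-VPC angle, the usual suspects are the private subnets having no outbound path to the EC2/STS APIs (no NAT gateway and no interface endpoints) or VPC DNS settings being disabled, which makes the CSI controller's AWS API calls hang until the context deadline. A rough checklist, assuming us-east-1 and using the placeholder IDs from the config:

# Do the private subnets have a default route to a NAT gateway (or do interface endpoints exist instead)?
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-xxx" \
    --query "RouteTables[].Routes[]"

# If the subnets are fully private, interface endpoints for ec2 and sts (plus an s3 gateway endpoint) are needed
aws ec2 describe-vpc-endpoints \
    --filters "Name=vpc-id,Values=vpc-xxx" \
    --query "VpcEndpoints[].{service:ServiceName,state:State}"

# DNS support and hostnames must be enabled for private endpoints and the OIDC issuer to resolve
aws ec2 describe-vpc-attribute --vpc-id vpc-xxx --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-xxx --attribute enableDnsHostnames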

EDIT: when I let eksctl create the VPC itself, I can create the EBS volumes.
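
Since the eksctl-created VPC works, diffing its subnet and routing setup against the existing VPC is a quick way to spot the difference (the subnet IDs below are placeholders for one subnet from each VPC):

aws ec2 describe-subnets --subnet-ids subnet-xxx \
    --query "Subnets[].{id:SubnetId,tags:Tags,autoPublicIp:MapPublicIpOnLaunch}"

aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-xxx" \
    --query "RouteTables[].Routes[].{dest:DestinationCidrBlock,nat:NatGatewayId,gw:GatewayId}"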
