How to update EKS cluster ID information as an output inside the Terraform state?


I have deployed an EKS cluster. To use its cluster ID, I was using this code:

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

After deployment I refactored my code into modules, and now I am using an output for the providers:

outputs.tf (this file is in the same directory as eks.tf, which uses the eks module):

output "eks_cluster_id" {
  value = module.eks.cluster_id
}

providers.tf in the root module:

data "aws_eks_cluster" "cluster" {
  name = module.base.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.base.eks_cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

Now the problem is that the original deployment was done without modules, so the Terraform state does not have this cluster ID information under the new module structure. If I run terraform plan (after refactoring into modules), it fails because the cluster_id information is not available for this provider block to connect to the Kubernetes cluster:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

How to solve this?

I think that if I use terraform apply -target=module.base.aws_eks_cluster.this it will update the output information. However, when I tried this, it tried to destroy the cluster that is already created.

There is 1 answer below.

Answered by Marko E:

What I have found works a bit better is a different approach to configuring the kubernetes provider, where the authentication token is obtained through an exec plugin at apply time instead of through data sources that Terraform has to read during plan:

data "aws_region" "selected" {}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id, "--region", data.aws_region.selected.name]
    command     = "aws"
  }
}

The important thing to note here is that you can use any additional options in args that the AWS CLI command provides. As a side note, this works only with AWS CLI v2. Also, used this way it falls back to the default profile; if you are using a profile other than default, you can add --profile <profile name> to the args list (see the sketch after the command below). Finally, to be able to use this cluster and perform actions on it, you need to update the kubeconfig file, which is done by running the following AWS CLI command:

aws eks update-kubeconfig --name <cluster name> [--profile <named profile>]
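For illustration, here is a minimal sketch of the exec block with a named profile added to args (the profile name my-profile is just a placeholder, not something from the original setup):

data "aws_region" "selected" {}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1"
    # --profile tells the AWS CLI to use a named profile instead of the default one
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id, "--region", data.aws_region.selected.name, "--profile", "my-profile"]
    command     = "aws"
  }
}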

There is also an --alias parameter available; if it is omitted, the context name defaults to the cluster ARN. Also note the following:

When update-kubeconfig writes a configuration to a kubeconfig file, the current-context of the kubeconfig file is set to that configuration.

so make sure to check the context prior to applying any Kubernetes manifest files.
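For example, a quick way to check and, if needed, switch the context with kubectl (the context name below is a placeholder, since the actual name depends on your cluster ARN or the --alias you chose):

# show which context kubectl is currently pointing at
kubectl config current-context

# switch to the context written by update-kubeconfig
kubectl config use-context arn:aws:eks:<region>:<account-id>:cluster/<cluster name>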