I am using the following Terraform script to create an Azure HDInsight Kafka cluster:

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "my-resource-group"
  location = "eastus"
}

resource "azurerm_virtual_network" "virtual_network" {
    resource_group_name = azurerm_resource_group.rg.name
    name = "my-vnet"
    location = "eastus"
    address_space = ["10.136.82.0/24"]
}

resource "azurerm_subnet" "subnet" {
  name                 = "subnet-3"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.136.82.64/27"]
}

resource "azurerm_storage_account" "storage_account" {
  name                     = "mystorageaccount" # storage account names allow only lowercase letters and numbers
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "storage_container" {
  name                  = "hdinsight"
  storage_account_name  = azurerm_storage_account.storage_account.name
  container_access_type = "private"
}

resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster" {
  name                = "my-hdicluster"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  cluster_version     = "4.0"
  tier                = "Standard"
  worker_node_count   = 5
  enable_gateway      = false

  component_version {
    kafka = "2.4"
  }

  gateway {
    username = "my-username"
    password = "my-password"
  }

  storage_account {
    storage_container_id = azurerm_storage_container.storage_container.id
    storage_account_key  = azurerm_storage_account.storage_account.primary_access_key
    is_default           = true
  }

  roles {
    head_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id          = azurerm_subnet.subnet.id
      vm_size            = "Standard_D3_V2"
      username           = "my-username"
      password           = "my-password"
    }

    worker_node {
      virtual_network_id       = azurerm_virtual_network.virtual_network.id
      subnet_id                = azurerm_subnet.subnet.id
      vm_size                  = "Standard_D3_V2"
      username                 = "my-username"
      password                 = "my-password"
      number_of_disks_per_node = 3
      target_instance_count    = 3
    }

    zookeeper_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id          = azurerm_subnet.subnet.id
      vm_size            = "Standard_D3_V2"
      username           = "my-username"
      password           = "my-password"
    }
  }
}

I am specifically trying to use these two parameters in the azurerm_hdinsight_kafka_cluster block to get 5 Kafka worker (i.e., Kafka broker) nodes and to disallow any public IP address on this cluster:

worker_node_count   = 5
enable_gateway      = false

Running terraform plan produces these errors:

╷
│ Error: Unsupported argument
│
│   on hdinsight.tf line 44, in resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster":
│   44:   worker_node_count   = 5
│
│ An argument named "worker_node_count" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│   on hdinsight.tf line 45, in resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster":
│   45:   enable_gateway      = false
│
│ An argument named "enable_gateway" is not expected here.
╵

Where do I need to put these two parameters to control the number of Kafka brokers and to disallow public IP addresses?

Accepted answer (kavyaS):

I first tried adding the parameters worker_node_count and enable_gateway at the top level of the azurerm_hdinsight_kafka_cluster block, exactly as in your configuration, and terraform plan reproduced the same errors:

resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster" {
  name                = "my-hdicluster"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  cluster_version     = "4.0"
  tier                = "Standard"
  worker_node_count   = 5
  enable_gateway      = false

  component_version {
    kafka = "2.4"
  }

  gateway {
    username = "xxxx"
    password = "xxxxx"
  }

  storage_account {
    storage_container_id = azurerm_storage_container.storage_container.id
    storage_account_key  = azurerm_storage_account.storage_account.primary_access_key
    is_default           = true
  }

  roles {
    head_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id = azurerm_subnet.subnet.id
      vm_size  = "Standard_D3_V2"
      username = "xxxxx"
      password = "xxxx"
    }

    worker_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id = azurerm_subnet.subnet.id
      vm_size                  = "Standard_D3_V2"
      username                 = "xxxxxx"
      password                 = "xxxxxx"
      number_of_disks_per_node = 3
      target_instance_count    = 3
    }

    zookeeper_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id = azurerm_subnet.subnet.id
      vm_size  = "Standard_D3_V2"
      username = "xxxxxxx"
      password = "xxxxxx"
    }
  }
}

╷
│ Error: Unsupported argument
│
│   on main.tf line 178, in resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster":
│  178:   worker_node_count   = 5
│
│ An argument named "worker_node_count" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│   on main.tf line 179, in resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster":
│  179:   enable_gateway      = false
│
│ An argument named "enable_gateway" is not expected here.
╵


Neither worker_node_count nor enable_gateway is a valid argument for azurerm_hdinsight_kafka_cluster. In the corrected configuration below, the number of Kafka worker (i.e., broker) nodes is instead set with the target_instance_count attribute in the worker_node block. (The gateway itself cannot be disabled: older provider versions exposed a gateway { enabled = ... } flag, but it was deprecated and later removed because HDInsight does not support disabling the gateway.)

resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster" {
  name                = "my-hdicluster"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  cluster_version     = "4.0"
  tier                = "Standard"

 // worker_node_count   = 5
 // enable_gateway      = false

  component_version {
    kafka = "2.4"
  }
  

  gateway {
    username = "kavyagw"
    password = "P@ssw0rd123"
   // public_ip_address_enabled = false
  // enable_gateway = true
  
  }

  storage_account {
    storage_container_id = azurerm_storage_container.storage_container.id
    storage_account_key  = azurerm_storage_account.storage_account.primary_access_key
    is_default           = true
  }

  roles {
    head_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id = azurerm_subnet.subnet.id
      vm_size  = "Standard_D3_V2"
      username = "kavyahn"
      password = "P@ssw0rd123"
    }

    worker_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id = azurerm_subnet.subnet.id
      vm_size                  = "Standard_D3_V2"
      username                 = "kavyawn"
      password                 = "P@ssw0rd123"
      number_of_disks_per_node = 3
      target_instance_count    = 5     
    }

    zookeeper_node {
      virtual_network_id = azurerm_virtual_network.virtual_network.id
      subnet_id = azurerm_subnet.subnet.id
      vm_size  = "Standard_D3_V2"
      username = "kavyazn"
      password = "P@ssw0rd123"
    }
  } 
depends_on = [
    azurerm_network_security_group.hdinsight_nsg
  ]
}
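
If the plan applies cleanly, the broker count can be verified afterwards in the Azure portal (the cluster's size/scaling blade) or in the Ambari hosts view, where five worker nodes should be listed.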

To restrict public network access, see whether the restriction can be made with a network security group (NSG) rule set:

resource "azurerm_network_security_group" "hdinsight_nsg" {
  name                = "hdinsight-nsg"
   location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name

  security_rule {
    name                       = "allow-all-outbound"
    priority                   = 100
    direction                  = "Outbound"
    access                     = "Allow"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
   // destination_address_prefix = "*"
    destination_address_prefix = azurerm_subnet.subnet.address_prefixes[0]
  }

  security_rule {
    name                       = "deny-internet-inbound"
    priority                   = 200
    direction                  = "Inbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "Internet"
    
  }
  
}
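
Note that declaring the NSG by itself does not attach it to anything; the depends_on in the cluster resource only orders creation. Assuming the rules are meant to apply to the cluster's subnet, a minimal sketch of the attachment via azurerm_subnet_network_security_group_association:

resource "azurerm_subnet_network_security_group_association" "hdinsight_nsg_assoc" {
  # Attach the NSG to the subnet used by all cluster roles.
  subnet_id                 = azurerm_subnet.subnet.id
  network_security_group_id = azurerm_network_security_group.hdinsight_nsg.id
}

Also be aware that HDInsight must remain reachable by its management services, so a blanket inbound deny generally needs a higher-priority allow rule for the HDInsightManagement service tag on port 443; see Microsoft's HDInsight network security documentation for the required rules.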


Also see Restrict public connectivity in Azure HDInsight | Microsoft Learn.
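
The mechanism that article describes (setting resourceProviderConnection to outbound so the cluster needs no inbound public connectivity) is surfaced in newer versions of the azurerm provider as a network block on the cluster resource. As a sketch, assuming a provider version that supports it, add inside azurerm_hdinsight_kafka_cluster:

  network {
    # The HDInsight resource provider connects to the cluster outbound-only,
    # removing the need for inbound public connectivity.
    connection_direction = "Outbound"

    # Optional: requires additional Azure Private Link setup for client access.
    private_link_enabled = true
  }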