I am using the following Terraform script to create an Azure HDInsight Kafka cluster:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "rg" {
name = "my-resource-group"
location = "eastus"
}
resource "azurerm_virtual_network" "virtual_network" {
resource_group_name = azurerm_resource_group.rg.name
name = "my-vnet"
location = "eastus"
address_space = ["10.136.82.0/24"]
}
resource "azurerm_subnet" "subnet" {
name = "subnet-3"
resource_group_name = "my-resource-group"
virtual_network_name = "my-vnet"
address_prefixes = ["10.136.82.64/27"]
}
resource "azurerm_storage_account" "storage_account" {
name = "my-storage-account"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "storage_container" {
name = "hdinsight"
storage_account_name = azurerm_storage_account.storage_account.name
container_access_type = "private"
}
resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster" {
name = "my-hdicluster"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
cluster_version = "4.0"
tier = "Standard"
worker_node_count = 5
enable_gateway = false
component_version {
kafka = "2.4"
}
gateway {
username = "my-username"
password = "my-password"
}
storage_account {
storage_container_id = azurerm_storage_container.storage_container.id
storage_account_key = azurerm_storage_account.storage_account.primary_access_key
is_default = true
}
roles {
head_node {
virtual_network_id = azurerm_virtual_network.virtual_network.id
subnet_id = azurerm_subnet.subnet.id
vm_size = "Standard_D3_V2"
username = "my-username"
password = "my-password"
}
worker_node {
virtual_network_id = azurerm_virtual_network.virtual_network.id
subnet_id = azurerm_subnet.subnet.id
vm_size = "Standard_D3_V2"
username = "my-username"
password = "my-password"
number_of_disks_per_node = 3
target_instance_count = 3
}
zookeeper_node {
virtual_network_id = azurerm_virtual_network.virtual_network.id
subnet_id = azurerm_subnet.subnet.id
vm_size = "Standard_D3_V2"
username = "my-username"
password = "my-password"
}
}
}
I am specifically trying to use these two parameters within the azurerm_hdinsight_kafka_cluster block to get 5 Kafka worker (i.e. Kafka broker) nodes and to disallow any public IP address on this cluster:
worker_node_count = 5
enable_gateway    = false
The terraform plan produces these errors:
╷
│ Error: Unsupported argument
│
│   on hdinsight.tf line 44, in resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster":
│   44: worker_node_count = 5
│
│ An argument named "worker_node_count" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│   on hdinsight.tf line 45, in resource "azurerm_hdinsight_kafka_cluster" "kafka_cluster":
│   45: enable_gateway = false
│
│ An argument named "enable_gateway" is not expected here.
╵
Where do I need to put these two parameters to control the number of Kafka brokers and to disallow public IP addresses?
The azurerm_hdinsight_kafka_cluster resource does not support worker_node_count or enable_gateway as top-level arguments, which is why terraform plan rejects both. The number of Kafka worker (broker) nodes is controlled by the target_instance_count attribute inside the roles { worker_node { ... } } block: remove worker_node_count = 5 from the top level and set target_instance_count = 5 instead, as shown below.
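Applied to your script, the worker_node block becomes (all other values copied unchanged from your configuration):

  roles {
    worker_node {
      virtual_network_id       = azurerm_virtual_network.virtual_network.id
      subnet_id                = azurerm_subnet.subnet.id
      vm_size                  = "Standard_D3_V2"
      username                 = "my-username"
      password                 = "my-password"
      number_of_disks_per_node = 3
      target_instance_count    = 5 # 5 Kafka broker nodes
    }
  }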
There is likewise no enable_gateway argument for disallowing public IP addresses. See if the network restriction can be enforced with an NSG rule set attached to the cluster subnet, and also check Restrict public connectivity in Azure HDInsight | Microsoft Learn, which describes the supported pattern (an outbound resource provider connection combined with Private Link).
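A minimal sketch of both approaches, assuming your azurerm provider version supports the network block on HDInsight cluster resources; the NSG name and rules below are illustrative, and the Private Link option still has the networking prerequisites described in the Microsoft Learn article:

# NSG approach: allow HDInsight management traffic, deny other inbound internet traffic.
resource "azurerm_network_security_group" "hdinsight_nsg" {
  name                = "hdinsight-nsg" # illustrative name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  security_rule {
    name                       = "allow-hdinsight-management"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "HDInsight" # Azure service tag for HDInsight management IPs
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "deny-internet-inbound"
    priority                   = 4000
    direction                  = "Inbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

resource "azurerm_subnet_network_security_group_association" "hdinsight" {
  subnet_id                 = azurerm_subnet.subnet.id
  network_security_group_id = azurerm_network_security_group.hdinsight_nsg.id
}

# Restrict-public-connectivity approach: add a network block inside the
# azurerm_hdinsight_kafka_cluster resource (resourceProviderConnection = outbound).
network {
  connection_direction = "Outbound"
  private_link_enabled = true
}

Note that HDInsight itself needs inbound management traffic to provision and operate the cluster, which is why the allow rule for the HDInsight service tag is ordered before the deny rule.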