I need a solution for adding Kubernetes services behind Traffic Manager using Terraform. To do that I need a public IP address for each cluster, but it seems the IP is created under a different subscription after deployment.
I tried different azurerm_traffic_manager_endpoint types, such as azureEndpoints and nestedEndpoints, but the script keeps failing with the same error listed below.
Here are the error and the script I want to deploy:
Error:
creating/updating nestedEndpoints Endpoint "vmap-tmep" (Traffic Manager Profile "vmap-tm" / Resource Group "RG-TEST-TEST"): trafficmanager.EndpointsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="The 'resourceTargetId' property of endpoint 'vmap-tmep' is invalid or missing. The property must be specified only for the following endpoint types: AzureEndpoints, NestedEndpoints. You must have read access to the resource to which it refers."
# Traffic Manager Profile Resource
resource "azurerm_traffic_manager_profile" "tmp" {
  name                   = lower("${var.customer4letter}-${var.env3letter}-${var.locationid3letter}-${var.servicetype}-tm")
  resource_group_name    = azurerm_resource_group.rg.name
  traffic_routing_method = "Weighted"

  dns_config {
    relative_name = lower("${var.customer4letter}-${var.env3letter}-${var.locationid3letter}-${var.servicetype}-tm-dns-test")
    ttl           = 100
  }

  monitor_config {
    protocol                     = "http"
    port                         = 80
    path                         = "/"
    interval_in_seconds          = 30
    timeout_in_seconds           = 9
    tolerated_number_of_failures = 3
  }
}

# Traffic Manager Endpoint Resource
resource "azurerm_traffic_manager_endpoint" "tmep" {
  name                = lower("${var.customer4letter}-${var.env3letter}-${var.locationid3letter}-${var.servicetype}-tmep")
  resource_group_name = azurerm_resource_group.rg.name
  profile_name        = azurerm_traffic_manager_profile.tmp.name
  type                = "nestedEndpoints"
  weight              = 1000
  target_resource_id  = azurerm_kubernetes_cluster.k8s1.id
}
################ K8S nodes pool location 1 ################
resource "azurerm_kubernetes_cluster" "k8s1" {
  name                = lower("${var.customer4letter}-${var.env3letter}-${var.locationid3letter}-${var.servicetype}-k8s")
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "exampleaks1"

  service_principal {
    client_id     = "bsdfsdfs3b"
    client_secret = "353sdfsdfsdfsdfsd9"
  }

  role_based_access_control {
    azure_active_directory {
      managed                = true
      admin_group_object_ids = [var.group_object_id]
      tenant_id              = var.tenant_id
      azure_rbac_enabled     = true
    }
    enabled = true
  }

  linux_profile {
    admin_username = var.adminusername

    ssh_key {
      key_data = file(var.ssh_public_key)
    }
  }

  auto_scaler_profile {
    new_pod_scale_up_delay        = "5s"
    scale_down_delay_after_delete = "10s"
    skip_nodes_with_local_storage = false
  }

  addon_profile {
    azure_policy {
      enabled = true
    }
  }

  default_node_pool {
    enable_auto_scaling          = true
    max_count                    = 5
    max_pods                     = 30
    min_count                    = 1
    name                         = "default"
    only_critical_addons_enabled = false
    #orchestrator_version        = "1.20.7"
    vm_size                      = "Standard_D2_v2"
    os_disk_size_gb              = 30
  }
}
As already discussed, you need to change a few things in your code in order to use Traffic Manager with AKS.
You need to use azureEndpoints instead of nestedEndpoints as the Traffic Manager endpoint type: nestedEndpoints expects its target to be another (child) Traffic Manager profile, which is why passing the AKS cluster ID fails with the error above. Only four resource types currently work as Azure endpoints (Cloud Service, App Service, App Service Slot, and Public IP), so you have to target the public IP used by the AKS cluster.
You have to use the below block:
resource "azurerm_traffic_manager_endpoint" "tmep" {
name = "ansumanaks-tmep"
resource_group_name = data.azurerm_resource_group.rg.name
profile_name = azurerm_traffic_manager_profile.tmp.name
type = "azureEndpoints"
endpoint_status = "enabled"
target_resource_id = (tolist(azurerm_kubernetes_cluster.k8s1.network_profile.0.load_balancer_profile.0.effective_outbound_ips)[0])
}
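Side note on the original problem: when AKS creates the outbound public IP itself, it lands in the cluster's auto-generated node resource group (the MC_* group), which is likely why it seemed to disappear after deployment. If you prefer to look that IP up instead of creating your own, here is a minimal sketch, assuming the azurerm_public_ips data source and that the first attached IP in that group is the load balancer's outbound IP:

data "azurerm_public_ips" "aks_outbound" {
  # AKS-managed resources live in the auto-created node resource group
  resource_group_name = azurerm_kubernetes_cluster.k8s1.node_resource_group
  attachment_status   = "Attached"
}

# Hypothetical wiring for the endpoint above; assumes the first attached
# IP really is the cluster's outbound load balancer IP:
#   target_resource_id = data.azurerm_public_ips.aks_outbound.public_ips[0].id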
For testing, I used the below Terraform code:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "rg"{
name="ansumantest"
}
# Traffic Manager Profile Resource
resource "azurerm_traffic_manager_profile" "tmp" {
name = "ansumanaks-tm"
resource_group_name = data.azurerm_resource_group.rg.name
traffic_routing_method = "Priority"
dns_config {
relative_name = "ansumanaks-tm-dns-test"
ttl = 100
}
monitor_config {
protocol = "http"
port = 80
path = "/"
interval_in_seconds = 30
timeout_in_seconds = 9
tolerated_number_of_failures = 3
}
}
resource "azurerm_public_ip" "example" {
name = "akspublicIP"
resource_group_name = data.azurerm_resource_group.rg.name
location = data.azurerm_resource_group.rg.location
sku = "Standard"
allocation_method = "Static"
domain_name_label = "akstestregion"
}
# Traffic Manager Endpoint Resource
resource "azurerm_traffic_manager_endpoint" "tmep" {
name = "ansumanaks-tmep"
resource_group_name = data.azurerm_resource_group.rg.name
profile_name = azurerm_traffic_manager_profile.tmp.name
type = "azureEndpoints"
endpoint_status = "enabled"
target_resource_id = (tolist(azurerm_kubernetes_cluster.k8s1.network_profile.0.load_balancer_profile.0.effective_outbound_ips)[0])
}
################ K8S nodes pool location 1 ################
resource "azurerm_kubernetes_cluster" "k8s1" {
  name                = "ansumanaks-k8s"
  location            = data.azurerm_resource_group.rg.location
  resource_group_name = data.azurerm_resource_group.rg.name
  dns_prefix          = "exampleaks1"

  service_principal {
    client_id     = "1dd6833b-xxxx-xxxx-xxxx-112c3fb4fb79"
    client_secret = "e997Q~ky5ZWHIxxxxxxxxxxxxxxxxxxxxxxx"
  }

  role_based_access_control {
    azure_active_directory {
      managed            = true
      tenant_id          = "72f988bf-xxxx-xxxx-xxxx-2d7cd011db47"
      azure_rbac_enabled = true
    }
    enabled = true
  }

  network_profile {
    network_plugin = "kubenet"

    load_balancer_profile {
      # Use the static public IP created above as the cluster's outbound IP
      outbound_ip_address_ids = [azurerm_public_ip.example.id]
    }
  }

  linux_profile {
    admin_username = "ansuman"

    ssh_key {
      key_data = file("C:/Users/ansbal/public.pub")
    }
  }

  auto_scaler_profile {
    new_pod_scale_up_delay        = "5s"
    scale_down_delay_after_delete = "10s"
    skip_nodes_with_local_storage = false
  }

  addon_profile {
    azure_policy {
      enabled = true
    }
  }

  default_node_pool {
    enable_auto_scaling          = true
    max_count                    = 5
    max_pods                     = 30
    min_count                    = 1
    name                         = "default"
    only_critical_addons_enabled = false
    #orchestrator_version        = "1.20.7"
    vm_size                      = "Standard_D2_v2"
    os_disk_size_gb              = 30
  }
}
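To check the deployment, you can output the profile's FQDN and browse to it once the endpoint shows as Online; a minimal sketch (the output name is arbitrary):

output "traffic_manager_fqdn" {
  # e.g. ansumanaks-tm-dns-test.trafficmanager.net
  value = azurerm_traffic_manager_profile.tmp.fqdn
}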
Note: I have removed admin_group_object_ids = [var.group_object_id] from the azure_active_directory block due to lack of permissions. You can add it back as per your requirements, as shown in the sketch below.
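If you do want to keep it, a minimal sketch of the variable declaration, assuming the group's object ID is supplied at plan time:

variable "group_object_id" {
  description = "Object ID of the Azure AD group that should get AKS admin access"
  type        = string
}

# Then, inside the azure_active_directory block:
#   admin_group_object_ids = [var.group_object_id]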