
Pulumi and AKS: Public IP not linked to AKS load-balancer


I am trying, using Pulumi, to set up an AKS cluster with a public IP as load-balancer and a domain name. I tried several options using the following project: https://github.com/hostettler/pulumi-az-test

I end up with a working AKS cluster (great!), that has a load-balancer (amazing!), on a public IP (fantastic!), but with an FQDN that is not linked to the public IP of the load-balancer (huh!).

I tried to:

  1. extract the public IP to force the domain, but I did not manage to do it. I get the resource ID of the public IP that was created, but I cannot cast it to a public IP object to set the domain. In other words: how do I load a public IP object from a public IP resource ID such as "/subscriptions/XXXXXX-XXX-XX-XXXx-XXXXXXXXXXXX/resourceGroups/MC_XXXXX_XXXXX_aksCluster5632b82_eastus/providers/Microsoft.Network/publicIPAddresses/XXXXX-XXXX-XXX-XXX-XXXXXXXXX"?
  2. add a public IP to the cluster, but that IP, while having a proper FQDN, is not used by the load-balancer.
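For point 1, an existing Azure resource can generally be looked up from its resource ID with the static `get` method every Pulumi resource class provides; a minimal sketch, where the logical name and the resource ID are placeholders:

    import * as azure from "@pulumi/azure";

    // Look up an existing public IP from its Azure resource ID.
    // "aks-managed-ip" and the truncated ID below are placeholders.
    const existingIp = azure.network.PublicIp.get(
        "aks-managed-ip",
        "/subscriptions/.../providers/Microsoft.Network/publicIPAddresses/...",
    );

    // The resolved resource exposes the same outputs as one created by Pulumi.
    export const managedIpAddress = existingIp.ipAddress;
    export const managedIpFqdn = existingIp.fqdn;

Note that a resource read with `get` is not managed by the stack: it exposes the existing outputs but its properties (such as the domain label) cannot be changed through it.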

Any idea how to address this? Solving either one would be fine with me.

Thanks a lot in advance


Solution

  • I found the problem (or rather a workaround). It stems from the Helm chart I was using: the Bitnami nginx chart seemed to ignore controller.service.loadBalancerIP, so I replaced it with ingress-nginx.

    Thus, replacing

    chart: "nginx",
    version: "13.2.10",
    fetchOpts: {
        repo: "https://charts.bitnami.com/bitnami",
    },
    

    with

    chart: "ingress-nginx",
    version: "4.3.0",
    fetchOpts: {
        repo: "https://kubernetes.github.io/ingress-nginx",
    },
    

    solved the problem. Instead of recreating an external IP address on each deployment, the chart now uses the one provided in controller.service.loadBalancerIP, which is static and has the proper A record.

    import * as azure from "@pulumi/azure";
    import * as k8s from "@pulumi/kubernetes";
    import * as pulumi from "@pulumi/pulumi";
    import * as config from "./config";
    import * as azuread from "@pulumi/azuread";
    
    const current = azuread.getClientConfig({});
    
    // Create a Virtual Network for the cluster
    const aksVnet = new azure.network.VirtualNetwork("aks-net", {
        resourceGroupName: config.resourceGroup.name,
        addressSpaces: ["10.2.0.0/16"],
    });
    
    // Create a Subnet for the cluster
    const aksSubnetId = new azure.network.Subnet("aks-net", {
        resourceGroupName: config.resourceGroup.name,
        virtualNetworkName: aksVnet.name,
        addressPrefixes: ["10.2.1.0/24"],
        serviceEndpoints : ["Microsoft.Sql"],
    },
    {dependsOn: [aksVnet]}
    );
    
    // Now allocate an AKS cluster.
    export const k8sCluster = new azure.containerservice.KubernetesCluster("aksCluster", {
        resourceGroupName: config.resourceGroup.name,
        location: config.location,
        defaultNodePool: {
            name: "aksagentpool",
            nodeCount: config.nodeCount,
            vmSize: config.nodeSize,
            vnetSubnetId: aksSubnetId.id,
        },
        dnsPrefix: `${pulumi.getStack()}-reg-engine`,
    
        linuxProfile: {
            adminUsername: "aksuser",
            sshKey: {
                keyData: config.sshPublicKey,
            },
        },
        networkProfile: {
            networkPlugin : "azure",
            loadBalancerProfile: {
                managedOutboundIpCount: 1,
            },
            loadBalancerSku: "standard",
            outboundType: "loadBalancer",
        },
        identity: {
            type: "SystemAssigned",
        },
    },
    {dependsOn: [aksSubnetId]}
    );
    
    const publicIp = new azure.network.PublicIp("app-engine-ip", {
        resourceGroupName: k8sCluster.nodeResourceGroup,
        allocationMethod: "Static",
        domainNameLabel : "app-engine",
        sku : "Standard",
        tags: {
            service: "kubernetes-api-loadbalancer",
        },
    },
    {dependsOn: [k8sCluster]}
    );
    
    // Expose a K8s provider instance using our custom cluster instance.
    export const k8sProvider = new k8s.Provider("aksK8s", {
        kubeconfig: k8sCluster.kubeConfigRaw,
    });
    
    const clusterSvcsNamespace = new k8s.core.v1.Namespace(
        config.namespace,    
        undefined,
    { provider: k8sProvider });
    export const clusterSvcsNamespaceName = clusterSvcsNamespace.metadata.name;
    
    const regEngineIngress = new k8s.helm.v3.Chart(
        "sample-engine-ingress",
        {
            chart: "ingress-nginx",
            version: "4.3.0",
            fetchOpts: {
                repo: "https://kubernetes.github.io/ingress-nginx",
            },
            namespace: clusterSvcsNamespace.metadata.name,
            values: {
                controller: {
                    replicaCount: 2,
                    nodeSelector: {
                        "kubernetes.io/os": "linux",
                    },
                    service: {
                        loadBalancerIP: publicIp.ipAddress,
                    }
                }
            },
        },
        { provider: k8sProvider },
    );
    
    export const cluster = k8sCluster.name;
    export const kubeConfig = k8sCluster.kubeConfigRaw;
    export const k8sFqdn = k8sCluster.fqdn;
    export const externalIp = publicIp.ipAddress;
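
    Once deployed, one can check that the ingress controller Service actually picked up the static IP (the namespace below is a placeholder for the one created above):

        # Compare the exported stack output with the Service's external IP
        pulumi stack output externalIp
        kubectl get svc -n <namespace> -l app.kubernetes.io/name=ingress-nginx

    The EXTERNAL-IP column of the controller Service should match the exported externalIp.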