
Azure Kubernetes Service - when is it required to grant the AKS Service Principal access to other Azure resources in order to have a connection?


By default, when creating an AKS cluster, a service principal is created for that cluster.

That Service Principal can then be granted access on the level of some other Azure resource (a VM?) so that the cluster and that resource can establish a network connection and communicate (apart from, of course, the general network settings).

I really cannot understand when this is required and when it is not. If, for example, I have a database running on a VM, do I need to grant the AKS service principal access to that VM for the cluster to be able to communicate with it over the network?

Can someone provide me some guidance on this, rather than general documentation? When is the service principal required to be set on the level of those other Azure resources, and when is it not? I cannot find a proper explanation for this. Thank you


Solution

  • Regarding your question about the DB: you do not need to give the service principal any access to that VM. Since the database runs outside of Kubernetes, the cluster does not need to manage that VM in any way. The database could even be in a different data center or hosted on another cloud provider entirely; applications running inside Kubernetes will still be able to communicate with it as long as the traffic is allowed by firewalls etc. A sketch of such a firewall rule follows below.
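
    Connectivity to such a database is purely a network matter. As a minimal sketch, assume a hypothetical network security group db-vm-nsg in front of the database VM, an AKS node subnet of 10.240.0.0/16, and SQL Server listening on port 1433 (all placeholder values); opening the port is all that is needed, and no role assignment for the service principal is involved:

    # Hypothetical NSG rule: allow traffic from the AKS node subnet to the DB port
    az network nsg rule create \
      --resource-group myResourceGroup \
      --nsg-name db-vm-nsg \
      --name AllowAksToDb \
      --priority 100 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes 10.240.0.0/16 \
      --destination-port-ranges 1433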

    I know you did not ask for generic documentation, but the documentation on AKS service principals puts it well:

    To interact with Azure APIs, an AKS cluster requires either an Azure Active Directory (AD) service principal or a managed identity. A service principal or managed identity is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR).

    In other words, the service principal is the identity that the Kubernetes cluster authenticates with when it interacts with other Azure resources (see the sketch after this list for how to look up that identity), such as:

    • Azure container registry: The images that the containers are created from must come from somewhere. If you are storing your custom images in a private registry, the cluster must be authorized to pull images from that registry. If the private registry is an Azure container registry, the service principal must be authorized for those operations
    • Networking: Kubernetes must be able to dynamically configure route tables and to register external IPs for services in a load balancer. Again, the service principal is used as the identity, so it must be authorized
    • Storage: To access disk resources and mount them into pods
    • Azure Container Instances: In case you are using the virtual kubelet to dynamically add compute resources to your cluster, Kubernetes must be allowed to manage containers on ACI.
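
    To see which identity your cluster actually uses, you can query it with the Azure CLI. A minimal sketch, assuming a hypothetical cluster named myAKSCluster in resource group myResourceGroup (on clusters that use a managed identity instead of a service principal, this query returns "msi"):

    # Hypothetical example: look up the appId of the cluster's service principal
    az aks show \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --query servicePrincipalProfile.clientId \
      --output tsv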

    To delegate access to other Azure resources, you can use the Azure CLI to assign a role to an assignee on a certain scope:

    az role assignment create --assignee <appId> --scope <resourceScope> --role Contributor
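
    For example, to cover the container registry and networking cases from the list above, a sketch (assuming a hypothetical registry named myRegistry and a placeholder subnet resource ID) could look like this:

    # Hypothetical example: allow the cluster identity to pull images from ACR
    ACR_ID=$(az acr show --name myRegistry --query id --output tsv)
    az role assignment create --assignee <appId> --role AcrPull --scope $ACR_ID

    # Hypothetical example: allow the cluster identity to manage an existing subnet
    az role assignment create --assignee <appId> --role "Network Contributor" --scope <subnetResourceId>

    For ACR specifically, az aks update --attach-acr myRegistry (together with --resource-group and --name for the cluster) performs the equivalent AcrPull role assignment for you.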
    

    Here is a detailed list of all cluster identity permissions in use.