
terraform backend state file storage using keys instead of AD account


It appears that Terraform uses storage account access keys for backend state files when persisting to an Azure storage account. I wish to use a single storage account with dedicated folders for different service principals, but without cross-folder write access, to avoid accidental overwrites of the state files by different service principals. But since Terraform uses the account keys to update the storage account, every service principal technically has rights to update every file, and a developer would have to take care not to accidentally reference the wrong state file. Any thoughts on how to protect against this?


Solution

  • You can use a SAS token generated for a container, so that it can be used by that service principal only and by no other service principals.

    I tested with something like below:

    data "terraform_remote_state" "foo" {
      backend = "azurerm"
      config = {
        storage_account_name = "cloudshellansuman"
        container_name       = "test"
        key                  = "prod.terraform.tfstate"
        sas_token            = "sp=racwdl&st=2021-09-28T05:49:01Z&se=2023-04-01T13:49:01Z&sv=2020-08-04&sr=c&sig=O87nHO01sPxxxxxxxxxxxxxsyQGQGLSYzlp6F8%3D"
      }
    }

    provider "azurerm" {
      features {}
      use_msi         = true
      subscription_id = "948d4068-xxxxx-xxxxxx-xxxxxxxxxx"
      tenant_id       = "72f988bf-xxxx-xxxxx-xxxxxx-xxxxxxx"
    }

    resource "azurerm_resource_group" "test" {
      name     = "xterraformtest12345"
      location = "east us"
    }
    


    But if I change the container name to another container, the write fails with an authentication error, because the SAS token was issued for the test container, not the test1 container.
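The same sas_token argument also works for the state backend itself, not just for terraform_remote_state. A sketch using partial backend configuration so the token stays out of source control (the file layout and the ARM_SAS_TOKEN variable usage are illustrative, not from the original answer; account and container names are reused from the example above):

```shell
# backend.tf holds only the non-secret backend settings:
cat > backend.tf <<'EOF'
terraform {
  backend "azurerm" {
    storage_account_name = "cloudshellansuman"
    container_name       = "test"
    key                  = "prod.terraform.tfstate"
  }
}
EOF

# Supply the container-scoped SAS token at init time; the azurerm backend
# can also pick it up from the ARM_SAS_TOKEN environment variable.
terraform init -backend-config="sas_token=$ARM_SAS_TOKEN"
```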


    For more information on how to generate a SAS token for containers and how to configure the azurerm backend for Terraform, please refer to the links below:

    Generate shared access signature (SAS) token for containers and blobs with Azure portal. | Microsoft Docs

    Use Azure storage for Terraform remote state
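Generating such a container-scoped SAS can also be scripted with the Azure CLI instead of the portal; a sketch (the account and container names come from the example above; the permissions and expiry mirror the token shown there):

```shell
# Generate a SAS token scoped to one container, authorized with the account
# key. Permissions racwdl = read, add, create, write, delete, list.
az storage container generate-sas \
  --account-name cloudshellansuman \
  --name test \
  --permissions racwdl \
  --expiry 2023-04-01T13:49:01Z \
  --auth-mode key \
  --output tsv
```

A token generated for the test container will not authenticate against any other container in the account, which is what prevents cross-container writes.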


    OR

    Alternatively, you can set the container's authentication method to Azure AD user account, after assigning the Storage Blob Data Contributor/Owner role to the service principal that will use that specific container.

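The role assignment can be scoped to the individual container rather than the whole account, so each service principal can only write to its own container; a sketch with the Azure CLI (the subscription ID, resource group, and app ID are placeholders):

```shell
# Assign Storage Blob Data Contributor to one service principal, scoped to a
# single container. The scope path below is the standard ARM resource ID for
# a blob container.
az role assignment create \
  --assignee "f6a2f33d-xxxx-xxxx-xxx-xxxxx" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/cloudshellansuman/blobServices/default/containers/test1"
```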

    Then you can use something like below:

    data "terraform_remote_state" "foo" {
      backend = "azurerm"
      config = {
        storage_account_name = "cloudshellansuman"
        container_name       = "test1"
        key                  = "prod.terraform.tfstate"
        subscription_id      = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
        tenant_id            = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
        client_id            = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
        client_secret        = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
        use_azuread_auth     = true
      }
    }

    provider "azurerm" {
      subscription_id = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
      tenant_id       = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
      client_id       = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
      client_secret   = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
      features {}
    }

    data "azurerm_resource_group" "test" {
      name = "resourcegroupname"
    }

    resource "azurerm_virtual_network" "example" {
      name                = "example-network"
      resource_group_name = data.azurerm_resource_group.test.name
      location            = data.azurerm_resource_group.test.location
      address_space       = ["10.254.0.0/16"]
    }

    Output:


    If the service principal doesn't have the role assigned to it for the container, it will give an error like below:


    Note: For the first scenario I used a managed system identity, but the same can be achieved with a service principal as well.