
Terraform - How to avoid destroy and create with a single state file


I have Terraform code that creates a Stream Analytics job, along with an input and an output for the job. Below is my Terraform code:

provider "azurerm" {
  version = "=1.44"
}
resource "azurerm_stream_analytics_job" "test_saj" {
  name                                     = "test-stj"
  resource_group_name                      = "myrgname"
  location                                 = "Southeast Asia"
  compatibility_level                      = "1.1"
  data_locale                              = "en-US"
  events_late_arrival_max_delay_in_seconds = 60
  events_out_of_order_max_delay_in_seconds = 50
  events_out_of_order_policy               = "Adjust"
  output_error_policy                      = "Drop"
  streaming_units                          = 3

  tags = {
    environment = "test"
  }
  transformation_query = var.query

}
resource "azurerm_stream_analytics_output_blob" "mpl_saj_op_jk_blob" {
  name                      = var.saj_jk_blob_output_name
  stream_analytics_job_name = "test-stj"
  resource_group_name       = "myrgname"
  storage_account_name      = "mystaname"
  storage_account_key       = "mystakey"
  storage_container_name    = "testupload"
  path_pattern              = "myfolder/{day}"
  date_format               = "yyyy-MM-dd"
  time_format               = "HH"

  depends_on = [azurerm_stream_analytics_job.test_saj]

  serialization {
    type            = "Json"
    encoding        = "UTF8"
    format          = "LineSeparated"
  }
}
resource "azurerm_stream_analytics_stream_input_eventhub" "mpl_saj_ip_eh" {
  name                         = var.saj_joker_event_hub_name
  stream_analytics_job_name    = "test-stj"
  resource_group_name          = "myrgname"
  eventhub_name                = "myehname"
  eventhub_consumer_group_name = "myehcgname"
  servicebus_namespace         = "myehnamespacename"
  shared_access_policy_name    = "RootManageSharedAccessKey"
  shared_access_policy_key     = "ehnamespacekey"

  serialization {
    type     = "Json"
    encoding = "UTF8"
  }

  depends_on = [azurerm_stream_analytics_job.test_saj]
}

The following is my tfvars input file:

query = <<EOT
myqueryhere
EOT
saj_jk_blob_output_name  = "outputtoblob01"
saj_joker_event_hub_name = "inputventhub01"

The creation works fine. My problem is with adding a new input and output for the same Stream Analytics job: I changed only the name values in the tfvars file and ran terraform apply in the same directory where the first apply was run (so against the same state file).

Terraform replaces the existing input and output with the new ones, which is not what I want: I want to keep both the old ones and the new ones. This use case worked when I imported the existing Stream Analytics job with terraform import in a completely different folder and used the same code there. But is there a way to do this without terraform import? Can it be done with a single state file?


Solution

  • State is how Terraform knows which Azure resources to add, update, or delete. What you want cannot be done with a single state file unless you declare separate resources with different names in your configuration files.

    For example, if you want to create two virtual networks, you can declare the resources directly, as below, or use the count meta-argument at the resource level to loop (a count-based sketch follows this example).

    resource "azurerm_virtual_network" "example" {
      name                = "examplevnet1"
      location            = azurerm_resource_group.example.location
      resource_group_name = azurerm_resource_group.example.name
      address_space       = ["10.1.0.0/16"]
    }
    
    resource "azurerm_virtual_network" "example" {
      name                = "examplevnet2"
      location            = azurerm_resource_group.example.location
      resource_group_name = azurerm_resource_group.example.name
      address_space       = ["10.2.0.0/16"]
    }
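
    Alternatively, here is a minimal sketch of the same two networks using the count meta-argument; the interpolated names and address spaces are assumptions for illustration:

    resource "azurerm_virtual_network" "example" {
      count               = 2
      # count.index is 0-based, so add 1 to get examplevnet1 / examplevnet2
      name                = "examplevnet${count.index + 1}"
      location            = azurerm_resource_group.example.location
      resource_group_name = azurerm_resource_group.example.name
      address_space       = ["10.${count.index + 1}.0.0/16"]
    }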
    

    When working with Terraform in a team, you can use remote state to write the state data to a remote data store, which can then be shared between all members of the team. It's recommended to store Terraform state in Azure Storage.
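
    For instance, a minimal sketch of a remote backend pointing at Azure Storage (the resource group, storage account, container, and key names below are placeholders you would replace with your own):

    terraform {
      backend "azurerm" {
        resource_group_name  = "tfstate-rg"     # placeholder
        storage_account_name = "tfstatestorage" # placeholder
        container_name       = "tfstate"        # placeholder
        key                  = "streamanalytics.terraform.tfstate"
      }
    }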

    For more information, see the Terraform workflow described in this blog.
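
    Applied to your configuration, the same idea means keeping the first output block and adding a second azurerm_stream_analytics_output_blob resource under a different Terraform resource name, so both outputs live side by side in one state file. A minimal sketch (the resource name and output name below are illustrative assumptions):

    resource "azurerm_stream_analytics_output_blob" "mpl_saj_op_jk_blob_02" {
      name                      = "outputtoblob02" # assumed second output name
      stream_analytics_job_name = "test-stj"
      resource_group_name       = "myrgname"
      storage_account_name      = "mystaname"
      storage_account_key       = "mystakey"
      storage_container_name    = "testupload"
      path_pattern              = "myfolder/{day}"
      date_format               = "yyyy-MM-dd"
      time_format               = "HH"

      depends_on = [azurerm_stream_analytics_job.test_saj]

      serialization {
        type     = "Json"
        encoding = "UTF8"
        format   = "LineSeparated"
      }
    }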