I have a working SOPS solution that encrypts files using one AWS account's (aws_sops) KMS key and then deploys the secrets to another AWS account's (aws_secrets) Secrets Manager.
This is done by connecting to the aws_sops account, having the .sops.yaml file point at its KMS key, and using a provider alias to deploy the secret.
While this works, it saves the state of the aws_secrets workspace to the aws_sops state file, which means I can't deploy this solution to a Terraform workspace that is already hosted in the aws_secrets account.
Is it possible to switch the solution to use an alias for aws_sops and connect directly to the aws_secrets account? I don't see how to tell SOPS to use the AWS alias instead of the default.
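For reference, the .sops.yaml pointing at the aws_sops KMS key looks roughly like this (a sketch; the key ARN is a placeholder, not the real one):

```yaml
# .sops.yaml in the repo root: tells sops which KMS key to use
# when encrypting files that match the rule below.
creation_rules:
  - path_regex: secrets\.json$
    kms: "arn:aws:kms:eu-west-2:xxx:key/xxx"
```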
provider "aws" {
  alias   = "development"
  profile = "development"
}

provider "aws" {}

provider "sops" {}

terraform {
  backend "s3" {
    bucket  = "xxx-statefile"
    encrypt = true
    key     = "pat/terraform.tfstate"
  }
}

data "sops_file" "sops-secret" {
  source_file = "../secrets.json"
}

resource "aws_secretsmanager_secret" "pipeline" {
  provider                       = aws.development
  name                           = "service-accounts/pipeline/resource-access-pat"
  recovery_window_in_days        = 0
  force_overwrite_replica_secret = true
}

resource "aws_secretsmanager_secret_version" "pipeline" {
  provider      = aws.development
  secret_id     = aws_secretsmanager_secret.pipeline.id
  secret_string = jsonencode({
    "pat" : data.sops_file.sops-secret.data["token"]
  })
}
My first failed attempt was to remove the provider alias from the secrets resources and set it on the data call instead, as that's the only time/place I can see SOPS getting called.
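That attempt looked roughly like this (a sketch reconstructed from the error message below; setting an aws provider alias on a sops_file data source is exactly what Terraform rejects):

```hcl
# Failed attempt: moving the alias onto the sops_file data source.
# Terraform then tries to resolve sops_file through the hashicorp/aws
# provider, which does not implement that data source.
data "sops_file" "test" {
  provider    = aws.development
  source_file = "../secrets.json"
}
```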
But that gets the error:
│ Error: Invalid data source
│
│ on ../data.tf line 1, in data "sops_file" "test":
│ 1: data "sops_file" "test" {
│
│ The provider hashicorp/aws does not support data source "sops_file".
which makes sense, as the sops_file data source is just reading a local file.
It looks like someone had a similar problem and raised a ticket: https://github.com/carlpett/terraform-provider-sops/issues/89
A possible solution was to add a role for the aws_sops account to the SOPS metadata. I've tried adding a role with admin permissions to KMS etc., like:
"sops": {
  "kms": [
    {
      "arn": "arn:aws:kms:eu-west-2:xxx:key/xxx",
      "role": "arn:aws:iam::xxx:role/TerraformAccountAccessRole",
      "created_at": "2023-02-10T13:53:05Z",
      "enc": "xx==",
      "aws_profile": ""
    }
and I tried adding the aws_profile as well:
"sops": {
  "kms": [
    {
      "arn": "arn:aws:kms:xxx:xxx:key/xxx",
      "role": "arn:aws:iam::xxx:role/TerraformAccountAccessRole",
      "created_at": "2023-02-10T13:53:05Z",
      "enc": "xx==",
      "aws_profile": "aws_sops"
    }
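For the aws_profile field to take effect, SOPS looks the profile up in the local shared AWS config, so an entry like this would need to exist (a sketch; the role and source profile shown here are assumptions, not from the original setup):

```ini
# ~/.aws/config -- the profile name must match the "aws_profile"
# value in the SOPS KMS metadata above.
[profile aws_sops]
region = eu-west-2
role_arn = arn:aws:iam::xxx:role/TerraformAccountAccessRole
source_profile = default
```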
but I get an error:
│ Error: Failed to get the data key required to decrypt the SOPS file.
│
│ Group 0: FAILED
│ arn:aws:kms:xxx:xxx:key/xxx: FAILED
│ - | Error creating AWS session: Failed to assume role
│ | "arn:aws:iam::xxx:role/TerraformAccountAccessRole":
│ | AccessDenied: User:
│ | arn:aws:sts::089449186373:assumed-role/AWSReservedSSO_DevOps_xxx/xxx@xxx.com
│ | is not authorized to perform: sts:AssumeRole on resource:
│ | arn:aws:iam::xxx:role/TerraformAccountAccessRole
│ | status code: 403, request id:
│ | d9327e8c-8ffc-4873-9279-112c1c8c7258
│
│ Recovery failed because no master key was able to decrypt the file. In
│ order for SOPS to recover the file, at least one key has to be successful,
│ but none were.
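The AccessDenied above means the calling SSO role is not allowed to assume TerraformAccountAccessRole. A sketch of the trust policy that role would need in the key-owning account (the principal here is the caller's account ID from the error; tighten it to the specific role in practice):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::089449186373:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```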
Hi, I resolved this by updating failed solution 2 (the role in the SOPS metadata) together with these provider versions:
terraform {
  required_version = ">= 1.1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.54"
    }
    sops = {
      source  = "carlpett/sops"
      version = "0.7.2"
    }
  }
}
This tells SOPS, when it tries to decrypt your secret, to assume the role you specified in the KMS metadata.
This worked for me, hope it helps.
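For completeness, a sketch of how the pieces fit together after the fix — the default provider can now target the aws_secrets account directly, so no alias is needed (profile and names are carried over from the question):

```hcl
# Run from a workspace hosted in the aws_secrets account;
# a second AWS provider/alias is no longer required.
provider "aws" {
  profile = "development"
}

provider "sops" {}

# Decryption works because the "role" field in the file's KMS
# metadata tells SOPS to assume into the aws_sops account.
data "sops_file" "sops-secret" {
  source_file = "../secrets.json"
}

resource "aws_secretsmanager_secret" "pipeline" {
  name                           = "service-accounts/pipeline/resource-access-pat"
  recovery_window_in_days        = 0
  force_overwrite_replica_secret = true
}

resource "aws_secretsmanager_secret_version" "pipeline" {
  secret_id     = aws_secretsmanager_secret.pipeline.id
  secret_string = jsonencode({
    "pat" : data.sops_file.sops-secret.data["token"]
  })
}
```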