So far I have two directories: aws/ and k8s/.
Inside aws/ are .tf files describing a VPC, networking, security groups, IAM roles, an EKS cluster, an EKS node group, and a few EFS mounts. These all use the AWS provider, and the state is stored in S3.
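For reference, the backend block in aws/ looks roughly like this (the same bucket and key that I read back again further down):

# Backend for the aws/ state; values here match the remote state config I use later.
terraform {
  backend "s3" {
    bucket = "example-s3-terraform"
    key    = "aws-provider.tfstate"
    region = "us-east-1"
  }
}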
Then in k8s/ I'm using the Kubernetes provider to create Kubernetes resources inside the EKS cluster I created. This state is stored in the same S3 bucket under a different state file.
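The Kubernetes provider is pointed at the EKS cluster roughly like this (just a sketch; var.cluster-name is the same variable I use when creating the cluster):

# Sketch of how k8s/ authenticates the kubernetes provider against the EKS
# cluster created in aws/ (var.cluster-name mirrors the variable used there).
data "aws_eks_cluster" "cluster" {
  name = var.cluster-name
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster-name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}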
I'm having trouble figuring out how to mount the EFS file systems as Persistent Volumes in my pods. I've found docs describing how to do this with an efs-provisioner pod (see How do I use EFS with EKS?). More recent EKS docs now say to use the Amazon EFS CSI Driver, where the first step is to kubectl apply the following file.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
images:
  - name: amazon/aws-efs-csi-driver
    newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver
    newTag: v0.2.0
  - name: quay.io/k8scsi/livenessprobe
    newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-liveness-probe
    newTag: v1.1.0
  - name: quay.io/k8scsi/csi-node-driver-registrar
    newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-node-driver-registrar
    newTag: v1.1.0
Does anyone know how I would do this in Terraform? Or how in general to mount EFS file shares as PVs to an EKS cluster?
@BMW had it right; I was able to get this all into Terraform.
In the aws/ directory I created all my AWS resources (VPC, EKS cluster, workers, etc.) along with the EFS file system and mount targets.
resource "aws_efs_file_system" "example" {
creation_token = "${var.cluster-name}-example"
tags = {
Name = "${var.cluster-name}-example"
}
}
resource "aws_efs_mount_target" "example" {
count = 2
file_system_id = aws_efs_file_system.example.id
subnet_id = aws_subnet.this.*.id[count.index]
security_groups = [aws_security_group.eks-cluster.id]
}
I also export the EFS file system IDs from the aws/ plan.
output "efs_example_fsid" {
value = aws_efs_file_system.example.id
}
After the EKS cluster was created, I had to manually install the EFS CSI driver into the cluster before continuing.
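If you want to keep that step in Terraform as well, something along these lines should work; this is only a sketch using the Helm provider (authenticated against the EKS cluster the same way as the Kubernetes provider) and the upstream aws-efs-csi-driver chart from kubernetes-sigs, not what I actually ran:

# Sketch only: installing the EFS CSI driver via the Helm provider instead of
# by hand. Assumes a helm provider already configured against the cluster.
resource "helm_release" "aws_efs_csi_driver" {
  name       = "aws-efs-csi-driver"
  namespace  = "kube-system"
  repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
  chart      = "aws-efs-csi-driver"
}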
Then in the k8s/ directory I reference the aws/ state file so I can use the EFS file system IDs in the PV creation.
data "terraform_remote_state" "remote" {
backend = "s3"
config = {
bucket = "example-s3-terraform"
key = "aws-provider.tfstate"
region = "us-east-1"
}
}
Then I created the Persistent Volumes using the Kubernetes provider.
resource "kubernetes_persistent_volume" "example" {
metadata {
name = "example-efs-pv"
}
spec {
storage_class_name = "efs-sc"
persistent_volume_reclaim_policy = "Retain"
capacity = {
storage = "2Gi"
}
access_modes = ["ReadWriteMany"]
persistent_volume_source {
nfs {
path = "/"
server = data.terraform_remote_state.remote.outputs.efs_example_fsid
}
}
}
}
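The efs-sc storage class referenced above also has to exist in the cluster, and pods then get the volume through a claim. Roughly like this (a minimal sketch; the names are illustrative rather than copied from my config):

# Minimal sketch of the storage class the PV references and a claim a pod
# could mount; names here are illustrative.
resource "kubernetes_storage_class" "efs" {
  metadata {
    name = "efs-sc"
  }

  storage_provisioner = "efs.csi.aws.com"
}

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-efs-claim"
  }

  spec {
    storage_class_name = "efs-sc"
    access_modes       = ["ReadWriteMany"]

    resources {
      requests = {
        storage = "2Gi"
      }
    }
  }
}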