
Assign roles to EKS cluster in manifest file?


I'm new to Kubernetes and am playing with eksctl to create an EKS cluster in AWS. Here's my simple manifest file:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"

managedNodeGroups:
  - name: ng-sandbox
    instanceType: r5a.xlarge
    privateNetworking: true
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyName: my-ssh-key

fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-sandbox
    selectors:
      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:
      - namespace: sandbox
        labels:
          env: sandbox
          checks: passed

I created two roles: EKSClusterRole for cluster management and EKSWorkerRole for the worker nodes. Where do I use them in this file? I'm looking at the eksctl config file schema page, and it's not clear to me where in the manifest they belong.
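
For reference, I'm creating the cluster from this config (assuming the file above is saved as sandbox.yaml):

eksctl create cluster -f sandbox.yaml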


Solution

  • As you mentioned, it's in the config file schema, under managedNodeGroups: the worker node role is set per node group.

    managedNodeGroups:
      - ...
        iam:
          instanceRoleARN: my-role-arn
          # or
          # instanceRoleName: my-role-name
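
    That covers the worker node role (your EKSWorkerRole). The cluster's service role goes somewhere else: the top-level iam.serviceRoleARN field of the config schema. As a minimal sketch against your manifest (the account ID and role ARNs below are placeholders; substitute the real ARNs of your EKSClusterRole and EKSWorkerRole):

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig

    metadata:
      name: sandbox
      region: us-east-1
      version: "1.18"

    iam:
      # Cluster service role (placeholder ARN for your EKSClusterRole):
      serviceRoleARN: arn:aws:iam::111122223333:role/EKSClusterRole

    managedNodeGroups:
      - name: ng-sandbox
        # ... other node group settings as in your file ...
        iam:
          # Worker node instance role (placeholder ARN for your EKSWorkerRole):
          instanceRoleARN: arn:aws:iam::111122223333:role/EKSWorkerRole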
    

    You should also read about IAM roles and policies in the eksctl documentation.