I'm recreating a cluster with kOps. Normally, on all my kOps clusters, I have 3 master instance groups and a single nodes instance group. This is the default according to the docs, and it's what I'm used to. However, while creating this cluster, kOps is adding the 3 master IGs as well as 3 node IGs:
Using cluster from kubectl context: [my-cluster]
NAME               ROLE    MACHINETYPE  MIN  MAX  ZONES
master-us-east-1a  Master  t2.medium    1    1    us-east-1a
master-us-east-1b  Master  t2.medium    1    1    us-east-1b
master-us-east-1c  Master  t2.medium    1    1    us-east-1c
nodes-us-east-1a   Node    r5.xlarge    2    2    us-east-1a
nodes-us-east-1b   Node    r5.xlarge    2    2    us-east-1b
nodes-us-east-1c   Node    r5.xlarge    2    2    us-east-1c
It should look like this instead:
Using cluster from kubectl context: [my-cluster]
NAME               ROLE    MACHINETYPE  MIN  MAX  ZONES
master-us-east-1a  Master  t2.medium    1    1    us-east-1a
master-us-east-1b  Master  t2.medium    1    1    us-east-1b
master-us-east-1c  Master  t2.medium    1    1    us-east-1c
nodes              Node    r5.xlarge    6    6    us-east-1a,us-east-1b,us-east-1c
I have no idea why it's doing this. I've created clusters fine before using the same script. The only thing I changed is the kOps version, since I upgraded to v1.19, but the changelog doesn't mention anything obvious that would explain this.
My create command is:
kops create cluster \
--yes \
--authorization RBAC \
--cloud aws \
--networking calico \
--image ami-0affd4508a5d2481b \
--topology private \
--api-loadbalancer-class network \
--vpc ${VPC_ID} \
--subnets ${PRIVATE_SUBNETS} \
--utility-subnets ${PUBLIC_SUBNETS} \
--zones ${ZONES} \
--master-zones ${ZONES} \
--node-size ${NODE_SIZE} \
--node-count 6 \
--master-size ${MASTER_SIZE} \
--state s3://${KOPS_STATE_BUCKET} \
--ssh-public-key ${PUB_KEY_LOCATION} \
--api-ssl-certificate ${SSL_CERT_ARN_K8S} \
--admin-access ${API_ACCESS_CIDRS} \
--ssh-access ${SSH_ACCESS_CIDRS} \
${KUBE_CLUSTER_NAME}
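As a side note, the same invocation minus --yes and with --dry-run -o yaml prints the Cluster and InstanceGroup specs kOps would create without touching AWS, which makes the per-AZ node IGs visible up front. A sketch, trimmed to the flags relevant here and assuming these flags behave the same in kOps 1.19:

# Dry-run preview: writes the generated Cluster and InstanceGroup specs to a
# file instead of creating anything. preview.yaml is an illustrative name.
kops create cluster \
  --cloud aws \
  --networking calico \
  --topology private \
  --zones ${ZONES} \
  --master-zones ${ZONES} \
  --node-size ${NODE_SIZE} \
  --node-count 6 \
  --master-size ${MASTER_SIZE} \
  --state s3://${KOPS_STATE_BUCKET} \
  --dry-run \
  -o yaml \
  ${KUBE_CLUSTER_NAME} > preview.yaml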
And the versions:
Kops: Version 1.19.0
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-17T02:13:01Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
The AWS stack was generated via Terraform, but I'm not sure that's related.
And yes, I could just manually create the nodes IG and delete the per-AZ ones, but our architecture is fully scripted and I would like to keep it that way.
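If it helps, here is a hedged sketch of doing exactly that without leaving the script, i.e. replacing the per-AZ node IGs with a single one non-interactively. The file name nodes-ig.yaml is illustrative, and the flags should be double-checked against kOps 1.19:

# Define a single multi-AZ nodes IG as a manifest (no editor involved).
cat > nodes-ig.yaml <<EOF
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: ${KUBE_CLUSTER_NAME}
  name: nodes
spec:
  image: ami-0affd4508a5d2481b
  machineType: ${NODE_SIZE}
  minSize: 6
  maxSize: 6
  role: Node
  subnets:
  - us-east-1a
  - us-east-1b
  - us-east-1c
EOF

# Register the new IG, drop the per-AZ ones, then apply.
kops create -f nodes-ig.yaml --state s3://${KOPS_STATE_BUCKET}
for az in a b c; do
  kops delete ig nodes-us-east-1${az} \
    --name ${KUBE_CLUSTER_NAME} \
    --state s3://${KOPS_STATE_BUCKET} \
    --yes
done
kops update cluster ${KUBE_CLUSTER_NAME} --state s3://${KOPS_STATE_BUCKET} --yes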
In kOps the default ASG layout for AWS changed from a single node IG to one per AZ. There are multiple reasons for that; among them, one ASG per AZ works much better with Cluster Autoscaler, which cannot reliably scale a multi-AZ group when workloads are pinned to a specific zone (for example, by EBS-backed persistent volumes).
It is worth mentioning that kops create cluster is meant as an easy way of testing kOps. For production use, and especially if you are handling multiple clusters, use templates. See https://kops.sigs.k8s.io/getting_started/production/
The outcome of kops create cluster also changes between versions. This is one example; Cilium moving from kube-proxy to eBPF NodePort services is another. If you expect a repeatable outcome, you must use a cluster spec/templates.
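A minimal sketch of that spec-driven flow, assuming kOps 1.19. The template and values file names are hypothetical; double-check the kops toolbox template and kops create secret flags against your version:

# Render a versioned cluster spec (Cluster + InstanceGroup objects) from a
# template plus per-cluster values. cluster.tmpl.yaml and values.yaml are
# illustrative names.
kops toolbox template \
  --template cluster.tmpl.yaml \
  --values values.yaml \
  --name ${KUBE_CLUSTER_NAME} \
  --format-yaml > cluster.yaml

# Register the spec in the state store.
kops create -f cluster.yaml --state s3://${KOPS_STATE_BUCKET}

# The SSH key is registered separately when creating from a spec file.
kops create secret sshpublickey admin \
  -i ${PUB_KEY_LOCATION} \
  --name ${KUBE_CLUSTER_NAME} \
  --state s3://${KOPS_STATE_BUCKET}

# Apply the changes to AWS.
kops update cluster ${KUBE_CLUSTER_NAME} --state s3://${KOPS_STATE_BUCKET} --yes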