I have a Kubernetes cluster and an RDS instance configured in Terraform, and now I want to whitelist the node IPs for the RDS instance. Is there a way to access the node-pool instances from the cluster config? What I basically want for the RDS config is something like
ip_configuration {
  dynamic "authorized_networks" {
    for_each = google_container_cluster.data_lake.network
    iterator = node
    content {
      name  = node.network.ip
      value = node.network.ip
    }
  }
}
But from what I can see, there seems to be no way to get a list of the nodes or their IPs. I tried
ip_configuration {
  authorized_networks {
    value = google_container_cluster.my_cluster.cluster_ipv4_cidr
  }
}
which resulted in the error "Non-routable or private authorized network (10.80.0.0/14).., invalid",
so it looks like this only works with public IPs. Or do I have to set up a separate VPC for that?
I would suggest you first set up a NAT gateway in front of the GKE cluster so that you can manage all your outgoing traffic through a single egress point.
You can use this Terraform module to create and set up the NAT gateway: https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
Module source code: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway
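A minimal sketch of wiring that module in next to your existing cluster config could look like the following; the input names, var.region, and the external_ip output are taken from the linked example and are assumptions you should verify against the module docs for your version:

module "nat" {
  source  = "GoogleCloudPlatform/nat-gateway/google"
  version = "~> 1.2"

  region = var.region
  # Reuse the network/subnetwork the cluster runs in (cluster name taken from your snippet).
  network    = google_container_cluster.data_lake.network
  subnetwork = google_container_cluster.data_lake.subnetwork
}

# The single egress IP all node traffic will leave through (assumed output name).
output "nat_external_ip" {
  value = module.nat.external_ip
}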
With the NAT gateway in place, all of your nodes' traffic will go out through a single IP, and you can whitelist that single IP on the RDS side.
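If the database really is AWS RDS, the whitelisting itself happens on the security group attached to the instance. A rough sketch, assuming an existing aws_security_group.rds, a PostgreSQL port, and the external_ip output from the NAT module above:

# Allow the GKE NAT egress IP to reach the RDS instance.
# aws_security_group.rds, the port, and module.nat.external_ip are assumptions.
resource "aws_security_group_rule" "gke_nat_to_rds" {
  type              = "ingress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  cidr_blocks       = ["${module.nat.external_ip}/32"]
  security_group_id = aws_security_group.rds.id
}

Note that this only works if the RDS instance is publicly accessible, since the traffic from GKE arrives over the internet.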
Since RDS is an AWS service, VPC peering from GCP is not possible; if you were using GCP Cloud SQL instead, that would also be an option.
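And if it is in fact Cloud SQL (the ip_configuration syntax in your snippets is Cloud SQL's), the same NAT IP can go straight into the block you already have; again, external_ip is an assumed output name:

ip_configuration {
  authorized_networks {
    name  = "gke-nat-egress"
    value = "${module.nat.external_ip}/32"
  }
}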