We've set up connectivity between our on-premises network and GCP using GCI. The local device can ping the GKE node's internal IP, but we found that the internal IP isn't fixed. So we chose to use the GKE node's external IP instead, but I'm not sure whether this IP is accessible from the local device, and if it is, how traffic would be routed to the right pod when connecting to the node's external IP.
Do I need to set up something else, such as a NetworkPolicy?
Note: we're using GKE Autopilot mode.
2023/06/06 Updates:
I'm currently attempting to set up an Ingress with an internal IP in Google Kubernetes Engine (GKE), which requires a proxy-only subnet. Additionally, I've tried setting the Service type to LoadBalancer (internal) and assigning a static IP. I also checked that the advertised IP ranges are on the list of nonMasqueradeCIDRs. However, the results have been unsatisfactory: on-premises devices are still unable to ping the pods in GKE.
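For reference, the internal LoadBalancer Service I tried looks roughly like this (the Service name, labels, ports, and static IP below are placeholders, not my actual values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-ilb  # placeholder name
  annotations:
    # Provisions an internal passthrough load balancer on GKE
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # Static internal IP reserved in the cluster's VPC subnet (placeholder)
  loadBalancerIP: 10.128.0.100
  selector:
    app: my-app  # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
```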
The internal IP should only be used when you're reaching the private IP of the GKE node; if you use the external IP, firewall-related issues may occur. To overcome this, you need to add an Egress NAT policy, which lets you set up SNAT based on pod labels and destination IP addresses.
The GKE Egress NAT policy lets you configure the IP masquerade behavior for Autopilot clusters.
GKE supports two automatically generated NAT policies: a default policy and a GKE-managed policy. The default policy is editable and configures the default non-masquerade destinations, so all required changes can be made there. Follow the steps in the official documentation.
When packets are sent to the destinations listed under `cidr` in the EgressNATPolicy, your cluster does not masquerade the source IP address, so the source Pod IP is preserved.
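As a sketch, a no-SNAT rule for an on-premises range in the default policy could look like this (the 192.168.0.0/16 CIDR is just an example; substitute your actual on-premises range):

```yaml
apiVersion: networking.gke.io/v1
kind: EgressNATPolicy
metadata:
  name: default  # the editable default policy
spec:
  action: NoSNAT
  destinations:
    # Traffic to these CIDRs keeps the source Pod IP (no masquerade).
    - cidr: "192.168.0.0/16"  # example on-premises range
```

With a rule like this in place, return traffic from on-premises devices must be routable back to the Pod IP range, so make sure your advertised routes cover it.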
Hope the above information is useful to you.