I use Google Compute Engine for my database and App Engine for my server. If I enable an ephemeral external IP on the Compute Engine instance, my server can connect to it via the INTERNAL network, and I can also reach my server externally. If, however, I configure the instance as INTERNAL ONLY, my server can no longer reach it — even though the server was already using the internal network when the instance had an ephemeral external IP, and the server knows nothing about the instance's ephemeral IP address. In summary:
CE with default network (10.156.0.X & ephemeral external IP) -> App Engine server (via 10.156.0.X) = works!
CE with default network (10.156.0.X only) -> App Engine server = doesn't work
I would have thought that simply removing the external IP address would have no effect on the internal network! I am currently using a Serverless VPC Access connector on my server to reach the GCE instance. Both are part of the same project.
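For reference, the two states can be reproduced with gcloud: describing the instance shows the networkInterfaces blocks quoted below, and removing the access config switches the instance to internal-only. The instance name and zone here are placeholders, not values from my project.

```shell
# Inspect the instance's network interfaces; "accessConfigs" appears only
# while an external IP is attached ("my-instance" and the zone are placeholders).
gcloud compute instances describe my-instance \
  --zone=europe-west3-a \
  --format="yaml(networkInterfaces)"

# Switch the instance to internal-only by removing the ephemeral external IP.
gcloud compute instances delete-access-config my-instance \
  --zone=europe-west3-a \
  --access-config-name="External NAT"
```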
With external IP configuration:
"networkInterfaces": [
  {
    "name": "nic0",
    "network": "projects/x/global/networks/default",
    "accessConfigs": [
      {
        "name": "External NAT",
        "type": "ONE_TO_ONE_NAT",
        "natIP": "xx.xxx.xx.xx",
        "kind": "compute#accessConfig",
        "networkTier": "STANDARD",
        "setPublicPtr": false
      }
    ],
    "subnetwork": "projects/x/regions/europe-west3/subnetworks/default",
    "networkIP": "10.156.0.6",
    "fingerprint": "tkgs6oiAL8E=",
    "kind": "compute#networkInterface"
  }
]
Without external IP:
"networkInterfaces": [
  {
    "name": "nic0",
    "network": "projects/x/global/networks/default",
    "subnetwork": "projects/x/regions/europe-west3/subnetworks/default",
    "networkIP": "10.156.0.6",
    "fingerprint": "uWRkQpIa-fs=",
    "kind": "compute#networkInterface"
  }
]
Database firewall configuration:
{
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [
        "0-65535"
      ]
    },
    {
      "IPProtocol": "udp",
      "ports": [
        "0-65535"
      ]
    },
    {
      "IPProtocol": "icmp"
    }
  ],
  "creationTimestamp": "2020-03-23T08:36:03.630-07:00",
  "description": "Allow internal traffic on the default network",
  "direction": "INGRESS",
  "disabled": false,
  "enableLogging": false,
  "id": "x",
  "kind": "compute#firewall",
  "logConfig": {
    "enable": false
  },
  "name": "default-allow-internal",
  "network": "projects/x/global/networks/default",
  "priority": 65534,
  "selfLink": "projects/x/global/firewalls/default-allow-internal",
  "sourceRanges": [
    "0.0.0.0/0",
    "10.128.0.0/9",
    "10.8.0.0/28"
  ],
  "sourceServiceAccounts": [
    "[email protected]"
  ]
}
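For completeness, a rule like the one above could be recreated with gcloud roughly as follows. This is a sketch of the same fields shown in the JSON (project "x" and the service account are the placeholders from the question); I believe source ranges and source service accounts can be combined on one rule, since the API output above contains both.

```shell
# Recreate the "default-allow-internal" ingress rule from the JSON above.
gcloud compute firewall-rules create default-allow-internal \
  --project=x \
  --network=default \
  --direction=INGRESS \
  --priority=65534 \
  --allow=tcp:0-65535,udp:0-65535,icmp \
  --source-ranges=0.0.0.0/0,10.128.0.0/9,10.8.0.0/28 \
  --source-service-accounts="[email protected]"
```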
Server firewall configuration:
Priority: 1000
Action on match: ALLOW
IP Range: 10.156.0.6
Description: Something
App Engine configuration:
runtime: nodejs16
env: standard
instance_class: F4
handlers:
  - url: .*
    script: auto
env_variables:
  ACCESS_CONTROL_ALLOW_ORIGIN: 'https://x.com'
  ...
  DB_PATH_INTERNAL: 'http://10.156.0.6:8529'
  ...
automatic_scaling:
  min_idle_instances: automatic
  max_idle_instances: automatic
  min_pending_latency: automatic
  max_pending_latency: automatic
network:
  name: default
vpc_access_connector:
  name: projects/x/locations/europe-west3/connectors/x
  egress_setting: private-ip-ranges
service_account: [email protected]
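The connector referenced in the app.yaml above can be inspected to confirm it is in a READY state and to see its /28 range — traffic from App Engine through the connector uses a source IP from that range, so the range must be covered by the firewall's sourceRanges (10.8.0.0/28 in the rule above). The connector name "x" is the placeholder from the question.

```shell
# Show the connector's state and its /28 IP range; this range must be
# allowed as a source range by the VM's ingress firewall rule.
gcloud compute networks vpc-access connectors describe x \
  --region=europe-west3
```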
So I was able to solve my issue. The conceptual problem (based on my non-expert understanding) is that App Engine Standard constantly changes its outbound IP addresses. Hence the response from my database (the Compute Engine VM) could never find the Node server (App Engine) that the request came from. I therefore needed to somehow 'lock down' the IP address of the server/middleware so that the database's response could find it.
There were several issues to address:
https://cloud.google.com/appengine/docs/standard/outbound-ip-addresses
The outbound services in the App Engine standard environment, such as the URL Fetch, Sockets, and Mail APIs, make use of a large pool of IP addresses. The IP address ranges in this pool are subject to routine changes. In fact, two sequential API calls from the same application may appear to originate from two different IP addresses.
If you need to know the IP addresses associated with outbound traffic from your service, you can either find the current IP address ranges for your service, or set up a static IP address for your service.
https://cloud.google.com/vpc/docs/configure-private-google-access
Under the network configuration:
Network: default
Subnetwork: name of the subnetwork created above
Internal IP Address: choose an IP from the subnetwork range above
Primary internal IPv4 address: ephemeral
External IP Address: none
vpc_access_connector:
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
  egress_setting: all-traffic
ensuring the egress_setting is set as above.
I was thrown off by the vague instructions for the Serverless VPC Access connector, which don't state that an outbound IP address needs to be created. Furthermore, I had created a subnetwork for the VM, but not the outbound IP address.
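The missing outbound IP piece is typically set up by routing the connector's egress through Cloud NAT with a reserved static address, so all App Engine traffic leaves from one known IP. A sketch of that setup under assumed placeholder names (the address, router, NAT, and subnet names below are all hypothetical):

```shell
# Reserve a static external IP for App Engine's outbound traffic.
gcloud compute addresses create app-engine-egress-ip \
  --region=europe-west3

# Create a Cloud Router on the default network.
gcloud compute routers create egress-router \
  --network=default \
  --region=europe-west3

# Create a Cloud NAT gateway that NATs the connector's subnet
# ("connector-subnet" is a placeholder) through the reserved address.
gcloud compute routers nats create egress-nat \
  --router=egress-router \
  --region=europe-west3 \
  --nat-custom-subnet-ip-ranges=connector-subnet \
  --nat-external-ip-pool=app-engine-egress-ip
```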