Is there any way to connect Cloud Functions, through a serverless VPC connector on the default network, to a GCE instance with multiple network interfaces where nic0 is on someother network and nic1 is on the default network?
So I have a GCE instance with multiple network interfaces:
nic0 is on someother network
nic1 is on the default network
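For reference, the instance is created roughly like this (instance name, zone, and subnet below are placeholders, not my actual values; the order of the --network-interface flags is what decides which network becomes nic0 and which becomes nic1):

# Sketch only -- instance name, zone, and subnet are examples
gcloud compute instances create my-instance \
    --zone us-central1-a \
    --network-interface network=someother,subnet=someother-subnet \
    --network-interface network=default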
I created a serverless VPC connector on the default network and attached it to a Google Cloud Function so the function can reach the GCE instance.
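The wiring looks roughly like this (connector name, region, IP range, runtime, and function name are placeholders):

# Sketch only -- connector name, region, and /28 range are examples
gcloud compute networks vpc-access connectors create my-connector \
    --network default \
    --region us-central1 \
    --range 10.8.0.0/28

# Attach the connector to the function at deploy time
gcloud functions deploy my-function \
    --runtime python39 \
    --trigger-http \
    --region us-central1 \
    --vpc-connector my-connector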
The problem is that when the network interfaces are swapped (i.e. nic0 is on the default network and nic1 is on someother network), the VPC connector works and the Cloud Function can reach the GCE instance. But when nic0 is on someother network and nic1 is on the default network, the Cloud Function cannot reach the instance.
I tried the following things:
Note: I have the correct IAM permissions set up, as I've successfully connected a Cloud Function to a GCE instance that has only the default network.
Without further configuration, secondary network interfaces only provide access to the immediate subnet they are attached to. This affects serverless VPC connectors, because a connector is, by its very nature, in a different subnet than the one your instance's interface is attached to.
To get around this, you need to add a static route in the instance's guest operating system for the secondary interface. The details vary by OS, but on Debian 9 you can set it up with this command:
sudo ip route add [MY_CONNECTOR_SUBNET] via [ETH1_DEFAULT_ROUTER] dev eth1
Where ETH1_DEFAULT_ROUTER is the .1 (gateway) address of your eth1 subnet, and MY_CONNECTOR_SUBNET is the /28 CIDR range the connector is configured to use (e.g. 10.50.1.0/28, depending on how you set up your connector).
Of course, this route doesn't persist across reboots, as persisting it is also an OS-specific configuration, but it should tell you whether this is the problem in your case.
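As a concrete (hypothetical) example, if your connector uses 10.8.0.0/28 and eth1 sits in a default-network subnet of 10.128.0.0/20, the route would be:

# Example with made-up values: connector range 10.8.0.0/28,
# eth1 subnet 10.128.0.0/20, so the gateway is 10.128.0.1
sudo ip route add 10.8.0.0/28 via 10.128.0.1 dev eth1

One way to persist it on Debian 9, assuming eth1 is managed by ifupdown, is a post-up hook in the eth1 stanza in /etc/network/interfaces (or a file under /etc/network/interfaces.d/):

# Re-adds the route every time eth1 comes up
post-up ip route add 10.8.0.0/28 via 10.128.0.1 dev eth1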
Also, there isn't really anything special about the 'default' network -- it's just an auto-created auto-mode network, and there's no reason this shouldn't have worked when you had the connector attached to nic0's "someother" network. The only 'special' thing here is that nic0 gets the default route for all traffic leaving the VM, so it doesn't need a static route added in order to reach a serverless VPC connector on the same network.
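If you want to double-check which interface the VM would use to reach the connector, you can query the kernel routing table directly; the address below is just a placeholder inside the connector's /28 range:

# Shows the interface and gateway the kernel would pick for that address
ip route get 10.8.0.2

# Lists the routing table so you can confirm the static route is present
ip route show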