I might be asking the wrong question here, but I am trying to create an internal load balancer like this:
I have an API service that is accessible at http://[api_service_name]:3000, and a simple nginx gateway service listening on http://[gateway_service_name]:80 that proxy_passes requests to http://[api_service_name]:3000.
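The nginx config behind the gateway is essentially a pass-through, roughly like this simplified sketch (the real config may have more directives; the service name matches the API Service manifest below):

server {
    listen 80;

    location / {
        # forward all requests to the API service via its Kubernetes DNS name
        proxy_pass http://api-service-name:3000;
    }
}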
My API service's service.yaml file is:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    service: api-service-name
  name: api-service-name
spec:
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
  selector:
    service: api-service-name
status:
  loadBalancer: {}
and my API service's deployment.yaml file is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: api-service-name
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: api-service-name
    spec:
      containers:
      - env:
        ...
        image: ...
        name: api-service-name
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
status: {}
while my nginx service.yaml is
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    service: gateway-service-name
  name: gateway-service-name
spec:
  ports:
  - name: "80"
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    service: gateway-service-name
  type: LoadBalancer
  externalName: gateway-service-name
status:
  loadBalancer: {}
and deployment.yaml is
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: gateway-service-name
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: gateway-service-name
    spec:
      containers:
      - image: ...
        name: gateway-service-name
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
These settings work well for an external load balancer/gateway. When I run kubectl get svc, it prints:
NAME                   CLUSTER-IP                EXTERNAL-IP
gateway-service-name   gateway.int.ip.add.ress   gateway.ext.ip.add.ress
api-service-name       api.int.ip.add.ress       <none>
and I can browse to http://gateway.ext.ip.add.ress/any_available_endpoints just fine.
I am trying to figure out whether I can achieve the same thing without needing an external IP address for my gateway, and use http://gateway.int.ip.add.ress/any_available_endpoints instead.
I tried using the default ClusterIP ServiceType, but it's not working.
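For reference, this is roughly what I tried (the same gateway Service as above, just without type: LoadBalancer, so it falls back to the default ClusterIP type):

apiVersion: v1
kind: Service
metadata:
  labels:
    service: gateway-service-name
  name: gateway-service-name
spec:
  # type defaults to ClusterIP when no type is specified
  type: ClusterIP
  ports:
  - name: "80"
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    service: gateway-service-name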
NOTE: I will be accessing the network through a VPN, and another service that sits on another cluster will access this internally.
UPDATE: I ended up putting my client (web) inside the same cluster, so my gateway doesn't need an external IP address. I am not sure if this is the right approach, but I will keep it like this for now.
A ClusterIP Service is only accessible from other services in the same cluster, so if your service is in ClusterA and your VPN is in ClusterB, the VPN won't be able to reach it as a ClusterIP Service.
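(Within the cluster itself, pods can still reach it through cluster DNS, e.g. http://gateway-service-name from the same namespace, or http://gateway-service-name.default.svc.cluster.local assuming the default namespace and cluster domain, but that does not help traffic arriving over the VPN.)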
One option is to continue to use a public IP with the LoadBalancer Service and configure the firewall to restrict traffic to only traffic originating from your VPN, using the loadBalancerSourceRanges setting on the Service (https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/).
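For example, a sketch of the gateway Service with that setting (the 10.8.0.0/16 range is just a placeholder; use your VPN's actual CIDR):

apiVersion: v1
kind: Service
metadata:
  labels:
    service: gateway-service-name
  name: gateway-service-name
spec:
  type: LoadBalancer
  ports:
  - name: "80"
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    service: gateway-service-name
  # only accept traffic from these source CIDR blocks (placeholder value)
  loadBalancerSourceRanges:
  - 10.8.0.0/16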
If both ClusterA and ClusterB are on the same network (which is the default setting for new clusters), another option is to use type: NodePort for your Service. This will expose the service on a static port of each Node in ClusterA without opening any ports in the default firewall.
If ClusterA has nodes with IPs 10.128.0.2, 10.128.0.3, and 10.128.0.4, and you configure your Service like this:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    service: gateway-service-name
  name: gateway-service-name
spec:
  ports:
  - name: "80"
    port: 80
    nodePort: 80
    protocol: TCP
    targetPort: 80
  selector:
    service: gateway-service-name
  type: NodePort
then you should be able to connect to your service at http://10.128.0.2/any_available_endpoints, http://10.128.0.3/any_available_endpoints, or http://10.128.0.4/any_available_endpoints. (Note that 80 is outside the default NodePort range of 30000-32767, so the cluster must be configured to allow it; otherwise pick a port from that range and include it in the URL, e.g. http://10.128.0.2:30080/any_available_endpoints.)
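You can find the node IPs to use with, for example:

kubectl get nodes -o wide

and read them from the INTERNAL-IP column.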