I have currently provisioned Envoy proxy into Azure Kubernetes Service (AKS) alongside various microservices. I would like to use Envoy so that this environment mirrors our others (Docker, or a local installation for end users) and is driven entirely by configuration. The Envoy proxy basically just redirects traffic based on its routes; it knows each end target via name resolution within the Kubernetes environment.
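For context, the routing described can be sketched as a minimal Envoy static config: one listener that matches a path prefix and forwards to an upstream cluster resolved through Kubernetes DNS. The service name `users-service`, namespace, and ports here are illustrative assumptions, not the actual config:

```yaml
# Minimal Envoy v3 static config sketch; names and ports are assumptions.
static_resources:
  listeners:
  - name: http_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/users" }   # route by path prefix
                route: { cluster: users }
  clusters:
  - name: users
    type: STRICT_DNS               # periodically re-resolves the Service DNS name
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: users
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: users-service.default.svc.cluster.local  # assumed Service name
                port_value: 80
```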
We are looking to migrate our service into the cloud and are therefore using Kubernetes so that it can scale under load. My concern is that the Envoy proxy might not scale correctly.
We have currently set up multiple deployments, and Envoy is one of them. We then expose its ports through a load balancer:
kubectl expose deployment envoy-deployment --name=routing-http-new --type=LoadBalancer
We then expose the other services internally using:
kubectl expose deployment <deployment name>
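For anyone reproducing this declaratively, the two `kubectl expose` commands above correspond roughly to Service manifests like the following sketch; the selector labels, internal service name, and ports are assumptions, not values from the actual cluster:

```yaml
# LoadBalancer Service in front of the Envoy deployment.
apiVersion: v1
kind: Service
metadata:
  name: routing-http-new
spec:
  type: LoadBalancer
  selector:
    app: envoy-deployment      # assumed pod label
  ports:
  - port: 80
    targetPort: 8080           # assumed Envoy listener port
---
# ClusterIP Service (the default type) for an internal deployment.
apiVersion: v1
kind: Service
metadata:
  name: users-service          # illustrative deployment name
spec:
  selector:
    app: users                 # assumed pod label
  ports:
  - port: 80
    targetPort: 8080
```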
What I am trying to understand is whether a deployment like this will scale OK.
A simple image of the environment:
This is a generic way of routing traffic, and I would consider it fine. For scaling to work properly, the Envoy pods should have enough resources to scale out, with their requests and limits set after load testing at peak traffic.
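Concretely, once requests and limits are derived from a load test, a HorizontalPodAutoscaler can handle the scale-out of the Envoy deployment. The sketch below assumes the deployment is named `envoy-deployment`; replica counts and the CPU threshold are placeholders to be tuned from your own peak-traffic numbers:

```yaml
# HPA scaling the Envoy deployment on CPU utilization; numbers are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: envoy-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: envoy-deployment
  minReplicas: 2               # keep at least two for availability
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # tune against load-test results
```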
Make sure the load balancer distributes traffic evenly across the pods; you can use monitoring tools to verify this.
Something like a Kibana dashboard would help to monitor the logs and stats for better visibility.
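On the stats side, Envoy's admin interface exposes Prometheus-format metrics at `/stats/prometheus`, which you can scrape and then graph. A sketch of a Prometheus scrape job for this is below; the admin port 9901 and the `app` pod label are assumptions about your setup:

```yaml
# Prometheus scrape job for Envoy's admin stats endpoint; port and label are assumptions.
scrape_configs:
- job_name: envoy
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
  - role: pod                          # discover pods via the Kubernetes API
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: envoy-deployment            # keep only the Envoy pods (assumed label)
    action: keep
  - source_labels: [__address__]
    regex: "(.+):\\d+"
    replacement: "$1:9901"             # assumed Envoy admin port
    target_label: __address__
```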