I have a simple use case: I have 4 microservices, let's say service-a, service-b, service-c, and service-d.
For the purpose of testing, I want to split traffic between them based on weight.
They will all be accessed over the same path: example.com/
I am planning to go with the NGINX Ingress Controller, but I saw a limitation in the documentation: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary
"Currently a maximum of one canary ingress can be applied per Ingress rule."
There is also a related GitHub issue: https://github.com/kubernetes/ingress-nginx/issues/5848
I am not able to understand what this actually means, and whether it will prevent me from implementing this with 4 services. Does it mean I have to create 4 canary ingresses with a single Ingress rule for all 4 services? All the examples of traffic splitting with an ingress controller only use 2 services. Should I consider Istio instead, since it does not have this limitation?
Can someone please explain this limitation to me with a simple YAML example?
That limitation means that, for a given host/path rule, ingress-nginx honors only one canary Ingress alongside the main Ingress, so the canary annotations can split traffic between two backends (the primary service plus one canary) but not across four. If you need weighted traffic splitting across multiple microservices in AKS, a better approach is Istio, a service mesh whose VirtualService lets you assign arbitrary weights to any number of destinations behind the same host and path.
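To make the limitation concrete, here is a minimal sketch (the Ingress names and the 80/20 weight are made up for illustration): a primary Ingress plus one canary Ingress for the same host and path is all ingress-nginx will honor; a third Ingress for service-c marked as canary on the same rule would simply be ignored.

# Primary Ingress: receives the non-canary share of the traffic
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress        # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 80
---
# Canary Ingress: only ONE canary Ingress is honored per rule,
# so service-c and service-d cannot be added to this split
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress      # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-b
            port:
              number: 80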
Install and set up Istio first.
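A minimal sketch of that step, assuming you use istioctl with the demo profile (adjust the profile and namespace to your setup):

# install Istio into the cluster (assumes istioctl is already on your PATH)
istioctl install --set profile=demo -y

# optional for gateway-only weighted routing, but the usual setup:
# enable automatic sidecar injection in the namespace that holds the services
kubectl label namespace default istio-injection=enabled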
Create a Deployment and a Service for each microservice (service-a, service-b, service-c, and service-d) using your Docker image (I'll use the nginxdemos/hello image here):
# deployment-service-a.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
      - name: service-a
        image: nginxdemos/hello
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: service-a
Repeat the same manifest for service-b, service-c, and service-d, and apply each one with kubectl apply -f <filename>.yaml.
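If you would rather not copy the file by hand, a small shortcut (assuming the manifest above is saved as deployment-service-a.yaml) is to substitute the name and apply in a loop:

# stamp out the same Deployment/Service for the remaining three services
for svc in service-b service-c service-d; do
  sed "s/service-a/${svc}/g" deployment-service-a.yaml | kubectl apply -f -
done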
The next steps involve setting up the Istio Gateway and VirtualService to handle the traffic distribution according to your specified weights. This will route incoming traffic to service-a, service-b, service-c, and service-d at the specified ratios.
Create Istio Gateway
# istio-gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
Create Istio VirtualService
# istio-virtualservice.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-virtualservice
  namespace: default                 # ensure this is the correct namespace for your services
spec:
  hosts:
  - "*"                              # this can be restricted to your domain if needed
  gateways:
  - example-gateway                  # must match the Gateway created above
  http:
  - route:
    - destination:
        host: service-a.default.svc.cluster.local   # adjust the FQDNs as necessary
      weight: 40
    - destination:
        host: service-b.default.svc.cluster.local
      weight: 20
    - destination:
        host: service-c.default.svc.cluster.local
      weight: 10
    - destination:
        host: service-d.default.svc.cluster.local
      weight: 30
Adjust the namespace, host, and port parameters accordingly, then apply both manifests:
kubectl apply -f istio-gateway.yaml
kubectl apply -f istio-virtualservice.yaml
Then find the external IP of the Istio ingress gateway (this is where example.com should point):
kubectl get svc istio-ingressgateway -n istio-system
You can now verify that the VirtualService configuration matches your intended split:
kubectl describe virtualservice my-virtualservice -n default
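To sanity-check the split end to end, you can also hit the gateway's external IP repeatedly and count which backend answers. This is only a rough sketch: it assumes the nginxdemos/hello response includes the pod hostname (its "Server name" line) and that pod names start with the Deployment name.

# grab the external IP of the Istio ingress gateway
GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# send 100 requests and count responses per service by pod-name prefix;
# the counts should roughly follow the 40/20/10/30 weights
for i in $(seq 1 100); do
  curl -s "http://${GATEWAY_IP}/" | grep -m1 -o 'service-[abcd]'
done | sort | uniq -c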