I have two deployments in Kubernetes on Azure, both with three replicas. Both deployments use an OAuth2 reverse proxy to authenticate external users/requests. The manifest file for both deployments looks like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice1
  labels:
    aadpodidbinding: my-pod-identity-binding
spec:
  replicas: 3
  progressDeadlineSeconds: 1800
  selector:
    matchLabels:
      app: myservice1
  template:
    metadata:
      labels:
        app: myservice1
        aadpodidbinding: my-pod-identity-binding
      annotations:
        aadpodidbinding.k8s.io/userAssignedMSIClientID: pod-id-client-id
        aadpodidbinding.k8s.io/subscriptionID: my-subscription-id
        aadpodidbinding.k8s.io/resourceGroup: my-resource-group
        aadpodidbinding.k8s.io/useMSI: 'true'
        aadpodidbinding.k8s.io/clientID: pod-id-client-id
    spec:
      securityContext:
        fsGroup: 2000
      containers:
        - name: myservice1
          image: mycontainerregistry.azurecr.io/myservice1:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
          securityContext:
            runAsUser: 1000
            allowPrivilegeEscalation: false
          readinessProbe:
            initialDelaySeconds: 1
            periodSeconds: 2
            timeoutSeconds: 60
            successThreshold: 1
            failureThreshold: 1
            httpGet:
              scheme: HTTP
              path: /healthcheck
              port: 5000
              httpHeaders:
                - name: Host
                  value: 127.0.0.1
          resources:
            requests:
              memory: "4G"
              cpu: "2"
            limits:
              memory: "8G"
              cpu: "4"
          env:
            - name: MESSAGE
              value: Hello from the external app!!
---
apiVersion: v1
kind: Service
metadata:
  name: myservice1
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5000
  selector:
    app: myservice1
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://myservice1.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://myservice1.com/oauth2/start?rd=https://myservice1.com/oauth2/callback"
    kubernetes.io/ingress.class: nginx-external
    nginx.org/proxy-connect-timeout: 3600s
    nginx.org/proxy-read-timeout: 3600s
    nginx.org/proxy-send-timeout: 3600s
  name: myservice1-external
spec:
  rules:
    - host: myservice1.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice1
                port:
                  number: 80
Now, I want to restrict the communication between the pods in the following ways:
Intra-deployment: I want to deny any communication among the 3 pods of each deployment internally; meaning that all 3 pods can and must only communicate with their corresponding proxy (the Ingress part of the manifest).
Inter-deployment: I want to deny any communication between any two pods belonging to two different deployments; meaning that if, for example, pod1 from deployment1 tries to, let's say, ping or send an HTTP request to pod2 from deployment2, this will be denied.
Allow requests through proxies: the only requests that are allowed to enter must go through the corresponding deployment's proxy.
How do I implement the manifest for the network policy that achieves these requirements?
You can make use of NetworkPolicies: first apply a default deny-all policy, then add policies that allow only the ingress controller's traffic back in, like below.
My networkpolicy.yml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
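
Note that NetworkPolicy objects only take effect on AKS if the cluster has a network policy engine (Azure NPM, Calico or Cilium) enabled; without one they are silently ignored. You can check what is configured (<resource-group> and <cluster-name> are placeholders for your own values):

# Prints "azure", "calico" or "cilium"; empty output means no engine is enabled
az aks show --resource-group <resource-group> --name <cluster-name> \
  --query networkProfile.networkPolicy -o tsv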
I applied it to my Azure Kubernetes cluster like below:
kubectl apply -f networkpolicy.yml
kubectl get networkpolicies
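
With the deny-all policy applied you can verify that pod-to-pod traffic is now blocked. A quick check, assuming curl is available in the myservice1 image and <pod-ip> is the IP of a sibling pod taken from kubectl get pods -o wide:

# Should time out: both egress from the first pod and ingress to the second are denied
kubectl exec deploy/myservice1 -- curl -s --max-time 5 http://<pod-ip>:5000/healthcheck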
Then use the below yml file to allow the ingress controller to reach the backend pods again:
ingress.yml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-access
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.1.0/24
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-backends
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              ingress: "true"
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
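
Putting this together for the two deployments in the question: keep default-deny-all and add one policy per deployment that admits only the ingress controller pods. Since everything else stays denied, this covers all three requirements at once: no traffic among the pods of one deployment, none across deployments, and only proxied requests get in. Below is a sketch for myservice1 (repeat it with app: myservice2 for the other deployment); it assumes the NGINX ingress controller runs in a namespace named ingress-nginx and that its pods carry the community chart's default label app.kubernetes.io/name: ingress-nginx, so verify the actual namespace and pod labels in your cluster and adjust accordingly:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myservice1-allow-ingress-controller-only
spec:
  podSelector:
    matchLabels:
      app: myservice1
  policyTypes:
    - Ingress
  ingress:
    - from:
        # namespaceSelector AND podSelector in a single entry: only
        # ingress controller pods in the ingress-nginx namespace match
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # automatic namespace-name label
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx        # assumed controller pod label
      ports:
        - protocol: TCP
          port: 5000   # the container port behind the ClusterIP service

One caveat: because default-deny-all also denies Egress, the pods lose DNS resolution and all outbound calls; either drop Egress from its policyTypes or add an egress policy that allows UDP/TCP port 53 to kube-dns.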