I am working with the AKS service. I have a TensorFlow Serving image in Azure Container Registry. When I deploy my service, the public service endpoint is neither accessible nor pingable.
My image exposes port 8501, so I am using that as the target port in my YAML.
Here is the YAML file I am using for this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-model-gpu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-model-gpu
  template:
    metadata:
      labels:
        app: my-model-gpu
    spec:
      containers:
      - name: my-model-gpu
        image: dsdemocr.azurecr.io/work-place-safety-gpu
        ports:
        - containerPort: 8501
        resources:
          limits:
            nvidia.com/gpu: 1
      imagePullSecrets:
      - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: my-model-gpu
spec:
  type: LoadBalancer
  ports:
  - port: 8501
    protocol: TCP
    targetPort: 8501
  selector:
    app: my-model-gpu
Below is the service description, from kubectl describe svc my-model-gpu:
Name: my-model-gpu
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-model-gpu","namespace":"default"},"spec":{"ports":[{"port":850...
Selector: app=my-model-gpu
Type: LoadBalancer
IP: 10.0.244.106
LoadBalancer Ingress: 52.183.17.101
Port: <unset> 8501/TCP
TargetPort: 8501/TCP
NodePort: <unset> 31546/TCP
Endpoints: 10.244.0.22:8501
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 10m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 9m8s service-controller Ensured load balancer
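Since the load balancer reports an external IP (52.183.17.101) and the service shows a pod endpoint, a quick TCP check can tell whether anything is actually answering on port 8501 behind the service. A minimal sketch using only the standard library (the host and port below are just the values from the description above, not something I have verified):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example against the service's external IP from the description above:
# tcp_port_open("52.183.17.101", 8501)
```

If this returns False while the load balancer events look healthy, the problem is usually inside the pod (nothing listening on the target port) rather than in the Service itself.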
It looks like I am making some mistake with the port mapping. Any help is much appreciated.
The container I was trying to access had no process listening on port 8501; once I fixed the image so that it actually served on 8501, it worked well.
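For reference, once the container does listen on 8501, TensorFlow Serving's REST API answers on that port, so a small status check confirms the whole path end to end. A sketch (the model name here is hypothetical; substitute the name your image actually serves):

```python
import json
import urllib.request

def model_status(host: str, model: str, port: int = 8501) -> dict:
    """Query TensorFlow Serving's REST model-status endpoint and return the parsed JSON."""
    url = f"http://{host}:{port}/v1/models/{model}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

# Hypothetical usage against the service's external IP:
# model_status("52.183.17.101", "work_place_safety")
```

A successful response contains a model_version_status list, which is a stronger signal than a bare TCP connect because it proves the serving process itself is up.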