azure-machine-learning-service

How to expose port from locally-deployed AzureML container?


I want to be able to debug a running entry_script.py in VS Code. The script runs inside the container that az ml deploy creates with its own docker run command. This is a local deployment, so I'm using a deployment config that looks like this:

{
    "computeType": "LOCAL",
    "port": 32267
}

I was thinking about using ptvsd to set up a VS Code debug server, but then I need to expose/map port 5678 in addition to port 32267 for the endpoint itself. It's not clear to me how to map an additional port (normally done with the -p or -P flags on the docker run command).

Sure, I can EXPOSE the port via the extra_dockerfile_steps configuration, but that won't actually map it to a host port that I can attach to from VS Code.

I tried to determine the run command so I could modify it, but I couldn't find out what that run command is. If I knew how AzureML runs the locally deployed image, I could adjust these flags myself.

Ultimately it felt too hacky - if there was a more supported way through az ml deploy or through the deployment configuration that would be preferred.

This is the code I'm using at the start of the entry_script to enable attachment via ptvsd:

import ptvsd

# 5678 is the default attach port in the VS Code debug configurations.
# Bind to 0.0.0.0 rather than localhost so the debugger is reachable
# from outside the container.
print("Waiting for debugger attach")
ptvsd.enable_attach(address=('0.0.0.0', 5678), redirect_output=True)
ptvsd.wait_for_attach()  # block until VS Code attaches
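On the VS Code side, attaching to a ptvsd server generally uses a "Remote Attach" configuration in launch.json. A minimal sketch (the path mapping is an assumption; localRoot/remoteRoot must match where AzureML places the entry script inside the container, which varies by image):

```json
{
    "name": "Attach to AzureML local container",
    "type": "python",
    "request": "attach",
    "connect": { "host": "localhost", "port": 5678 },
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/var/azureml-app"
        }
    ]
}
```

This only works once port 5678 on the container is actually published to the host, which is the crux of the question.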

Solution

  • Unfortunately, local deployment via az ml deploy doesn't support binding any ports other than the port hosting the scoring server.
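One workaround, since the deployment config can't publish extra ports, is to bypass the AzureML-managed container and run the image it built yourself with the port mappings you need. This is a sketch, not a supported path; the image name and internal scoring port (commonly 5001 in AzureML inference images, but verify with docker inspect) are assumptions:

```shell
# Find the image AzureML built for the local deployment
docker images

# Stop the container AzureML started (get its ID from `docker ps`)
docker stop <container-id>

# Re-run the same image, publishing both the scoring port and the debug port.
# 5001 is assumed to be the internal scoring port; check with:
#   docker inspect <image-name>
docker run -d -p 32267:5001 -p 5678:5678 <image-name>
```

Note that re-created containers are no longer managed by the AzureML service, so commands like `az ml service update` won't see them; this is only practical for an interactive debugging session.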