azure-machine-learning-service

Out-of-memory error webservice deployed with Azure ML Studio


My webservice deployed with Azure Machine Learning Studio exposes a classification model. Since the last re-training and re-deployment, roughly 1% of the cases processed in production fail with one of the following two (possibly correlated) out-of-memory errors:

  1. "The model consumed more memory than was appropriated for it. Maximum allowed memory for the model is 2560 MB. Please check your model for issues."
  2. "The following error occurred during evaluation of R script: R_tryEval: return error: Error: cannot allocate vector of size 57.6 Mb"

I did not manage to find anything in the official documentation that solves this issue.

Note that these errors occur exclusively while consuming the webservice (not during training, evaluation, or deployment).

Also, consuming the webservice in batch mode, as suggested here, is not a viable option for my business use case.
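For reference, the service is consumed through the request-response endpoint, roughly as in the sketch below. The endpoint URL, API key, and input column names are placeholders, not the actual production values:

```python
import requests

# Placeholder URL and key -- copied from the web service's "Consume" page
# in ML Studio (classic) in the real setup.
url = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0&details=true"
api_key = "<api-key>"

# Hypothetical input schema -- the real column names match the
# web service input module of the deployed experiment.
body = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature_1", "feature_2", "feature_3"],
            "Values": [["0.12", "3.4", "red"]],
        }
    },
    "GlobalParameters": {},
}

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + api_key,
}

response = requests.post(url, json=body, headers=headers)
response.raise_for_status()
print(response.json())
```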

Is there a way to increase the memory usage limit for webservices deployed in Azure ML Studio?


Solution

  • Currently, there's no way to increase the memory limit in Studio (classic). We encourage customers to try the Azure Machine Learning designer (preview), which provides similar drag-and-drop ML modules plus scalability, version control, and enterprise security. Furthermore, with the designer, endpoints are deployed to AKS, where no limit other than the cluster's resources is imposed.
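For illustration only, an AKS deployment lets you set the per-replica memory explicitly instead of being bound by the fixed 2560 MB cap. The sketch below uses the Azure ML Python SDK (v1) rather than the designer UI, and the workspace configuration, model, environment, scoring script, and AKS cluster names are all assumptions:

```python
from azureml.core import Workspace, Model, Environment
from azureml.core.compute import AksCompute
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice

# Hypothetical workspace, model, environment, and cluster names.
ws = Workspace.from_config()
model = Model(ws, name="my-classifier")

inference_config = InferenceConfig(
    entry_script="score.py",
    environment=Environment.get(ws, name="my-inference-env"),
)

# Memory and CPU per replica are explicit here, unlike the fixed
# memory cap of a Studio (classic) web service.
deployment_config = AksWebservice.deploy_configuration(
    cpu_cores=2,
    memory_gb=8,
    autoscale_enabled=True,
)

aks_target = AksCompute(ws, "my-aks-cluster")

service = Model.deploy(
    workspace=ws,
    name="classifier-endpoint",
    models=[model],
    inference_config=inference_config,
    deployment_config=deployment_config,
    deployment_target=aks_target,
)
service.wait_for_deployment(show_output=True)
```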