azure-machine-learning-service

Compute Instance: Best practice for custom Anaconda env


I'd like to use a compute instance as my development machine. Are there any best practices on how to handle custom Anaconda environments on these machines?

So far, I do it this way:

conda create --name testenv python=3           # create a new conda environment
conda activate testenv                         # switch into it
conda install ipykernel                        # needed to expose the env as a Jupyter kernel
ipython kernel install --user --name=testenv   # register the kernel for the current user
sudo systemctl restart jupyter.service         # restart Jupyter so the new kernel is picked up

--> Then reload JupyterHub in your browser.
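If the new kernel doesn't show up after the reload, you can check whether it was registered correctly. This is just a sanity check on top of the steps above, not part of the original workflow:

jupyter kernelspec list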

Do you see any drawbacks to doing it this way? I know that some special package combinations from the standard env are lost, but I'd like to know exactly what I've installed on my system. Of course, one could combine this with an environment.yml, as sketched below.
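For illustration, a minimal environment.yml for the setup above could look something like this (the environment name and package list are just placeholders):

name: testenv
channels:
  - defaults
dependencies:
  - python=3
  - ipykernel

The environment could then be recreated with conda env create --file environment.yml, followed by the same kernel registration and Jupyter restart steps as above.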

What do you think?


Solution

  • Your workaround is the best option as of now. I know that the Azure ML product group has been working on exactly this problem, but I can't make any promises about a timeline.

    I share your dream of an easily configurable data science cloud development environment that allows for Git repo cloning and environment creation with a conda yml. We're so close, especially given all the press and announcements around Visual Studio Codespaces!