Tags: python, docker, pip, pytorch, intel-mkl

Adding Intel MKL and MKL-DNN in Docker


I have ML code (e.g. Numpy, Scipy, LightGBM, PyTorch) deployed with Docker. I am using Python with Poetry, installing packages with pip.

What should I do in order to use MKL and MKL-DNN? I know that the most standard way is to use Anaconda, but I cannot (large business, without commercial Anaconda license).

Will pip install mkl suffice?

How do I install MKL-DNN so that PyTorch will use it?


Solution

  • Will pip install mkl suffice?

    No, it will not, see the section in the numpy install docs:

    The NumPy wheels on PyPI, which is what pip installs, are built with OpenBLAS. The OpenBLAS libraries are included in the wheel. This makes the wheel larger, and if a user installs (for example) SciPy as well, they will now have two copies of OpenBLAS on disk.

    So you will need to build NumPy from source against MKL.
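
    To confirm which BLAS backend a given NumPy installation actually links against (before and after rebuilding), you can inspect its build configuration. A minimal check, assuming only that NumPy itself is installed:

    ```python
    import numpy as np

    # Print the BLAS/LAPACK libraries this NumPy build was linked against.
    # A stock wheel from PyPI will typically report OpenBLAS; a build
    # compiled against MKL will mention "mkl" in the library names.
    np.show_config()
    ```

    If the output still mentions OpenBLAS after your rebuild, pip most likely reused a cached wheel rather than compiling from source.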

    I know that the most standard way is to use Anaconda, but I cannot (large business, without commercial Anaconda license).

    Have you considered using miniforge or miniconda? IANAL, but I am fairly certain that only the Anaconda distribution and the Anaconda default channel are off-limits for large-scale commercial use; conda-forge can still be used free of charge. You should be able to install everything you listed from conda-forge. At the very least, you would probably have an easier time compiling PyTorch from source.
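
    As a rough sketch of that route (the `ml` environment name is arbitrary, and exact package pins may vary by platform; `libblas=*=*mkl` is conda-forge's mechanism for selecting the MKL BLAS variant):

    ```shell
    # Create an environment from conda-forge with MKL as the BLAS backend
    # for NumPy/SciPy (instead of the default OpenBLAS variant).
    conda create -n ml -c conda-forge python=3.11 numpy scipy "libblas=*=*mkl"
    conda activate ml

    # PyTorch builds already bundle MKL-DNN (now called oneDNN); you can
    # verify it is enabled without rebuilding anything:
    python -c "import torch; print(torch.backends.mkldnn.is_available())"
    ```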