Tags: docker, dockerfile, openvino

How to set environment variables dynamically by script in Dockerfile?


I build my project with a Dockerfile. The project needs an installation of OpenVINO, and OpenVINO sets some environment variables dynamically via a script whose behavior depends on the architecture. The script is: script to set environment variables

As far as I've learned, a Dockerfile can't set environment variables in the image by running a script.

What approach should I follow to solve this problem?

I need these variables set because later in the build I install OpenCV, which looks up those environment variables.

My first idea: if I add the script to the ~/.bashrc file so the variables are set whenever bash starts, and there is some trick to start bash for a moment during the build, that could solve my problem.

My second idea: build an OpenVINO image, create a container from it, connect to it, and initialize the variables by running the script manually inside the container. Then convert the container back into an image, create a new Dockerfile, and continue the remaining build steps using that image as the base.

OpenVINO Dockerfile example and the line that runs the script

My Dockerfile:

FROM ubuntu:18.04

ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/16612/l_openvino_toolkit_p_2020.2.120.tgz

ENV INSTALLDIR /opt/intel/openvino

# openvino download
RUN curl -LOJ "${DOWNLOAD_LINK}"

# opencv download
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.3.0.zip && \
    wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.3.0.zip

RUN apt-get -y install sudo

# openvino installation
RUN tar -xvzf ./*.tgz && \
    cd l_openvino_toolkit_p_2020.2.120 && \
    sed -i 's/decline/accept/g' silent.cfg && \
    ./install.sh -s silent.cfg && \
    # rm -rf /tmp/* && \
    sudo -E $INSTALLDIR/install_dependencies/install_openvino_dependencies.sh

WORKDIR /home/sa

RUN /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh" && \
    echo "source /opt/intel/openvino/bin/setupvars.sh" >> /home/sa/.bashrc && \
    echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc && \
    $INSTALLDIR/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
    $INSTALLDIR/deployment_tools/demo/demo_squeezenet_download_convert_run.sh

RUN bash

# opencv installation

RUN unzip opencv.zip && \
    unzip opencv_contrib.zip && \
    # rm opencv.zip opencv_contrib.zip && \
    mv opencv-4.3.0 opencv && \
    mv opencv_contrib-4.3.0 opencv_contrib && \
    cd ./opencv && \
    mkdir build && \
    cd build && \
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_INF_ENGINE=ON -D ENABLE_CXX11=ON -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D ENABLE_PRECOMPILED_HEADERS=OFF -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/home/sa/opencv_contrib/modules -D PYTHON_EXECUTABLE=/usr/bin/python3 -D WIDTH_GTK=ON -D BUILD_TESTS=OFF -D BUILD_DOCS=OFF -D WITH_GSTREAMER=OFF -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=OFF .. && \
    make && \
    make install && \
    ldconfig

Solution

  • You need to cause the shell to load that file in every RUN command where you use it, and also at container startup time.

    For startup time, you can use an entrypoint wrapper script:

    #!/bin/sh
    # Load the script of environment variables
    . /opt/intel/openvino/bin/setupvars.sh
    # Run the main container command
    exec "$@"
    

    Then in the Dockerfile, you need to include the environment variable script in RUN commands, and make this script be the image's ENTRYPOINT.

    RUN . /opt/intel/openvino/bin/setupvars.sh && \
        /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
        /opt/intel/openvino/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
    
    RUN ... && \
        . /opt/intel/openvino/bin/setupvars.sh && \
        cmake ... && \
        make && \
        ...
    
     COPY entrypoint.sh .
     ENTRYPOINT ["./entrypoint.sh"]
     CMD same as the command you set in the original image
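    You can verify the wrapper pattern locally without Docker. This sketch (using hypothetical /tmp paths and a stand-in setupvars.sh with a made-up value) shows that a command run through the wrapper sees the variables the sourced script exported:

    ```shell
    # Create a stand-in for setupvars.sh (illustrative value only)
    printf 'export INTEL_OPENVINO_DIR=/opt/intel/openvino\n' > /tmp/setupvars.sh

    # The same wrapper pattern as above, pointed at the stand-in script
    printf '#!/bin/sh\n. /tmp/setupvars.sh\nexec "$@"\n' > /tmp/entrypoint.sh
    chmod +x /tmp/entrypoint.sh

    # The wrapped command inherits the exported variables
    /tmp/entrypoint.sh sh -c 'echo "$INTEL_OPENVINO_DIR"'
    ```

    The `exec "$@"` at the end is what makes the main command replace the wrapper as PID 1, so signals and `docker stop` keep working.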
    

    If you docker exec debugging shells in the container, they won't see these environment variables and you'll need to manually re-read the environment variable script. If you use docker inspect to look at low-level details of the container, it also won't show the environment variables.


    It looks like that script just sets a couple of environment variables (especially $LD_LIBRARY_PATH and $PYTHONPATH), albeit to somewhat long-winded values, and you could just set these with ENV statements in the Dockerfile.
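    For example (the values here are illustrative guesses, not the real ones; extract the actual values from your own image as described below):

    ```dockerfile
    # Illustrative only: copy the real values out of your own image
    ENV INTEL_OPENVINO_DIR=/opt/intel/openvino
    ENV LD_LIBRARY_PATH=/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
    ENV PYTHONPATH=/opt/intel/openvino/python/python3.6
    ```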

    If you look at the docker build output, there are lines like ---> 0123456789ab after each build step; those are valid image IDs that you can docker run. You could run

    docker run --rm 0123456789ab \
      env \
      | sort > env-a
    docker run --rm 0123456789ab \
      sh -c '. /opt/intel/openvino/bin/setupvars.sh && env' \
      | sort > env-b
    

    This will give you two local files with the environment variables with and without running this setup script. Find the differences (say, with comm(1)), put ENV before each line, and add that to your Dockerfile.
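    As a small illustration of that last step, here is the comm(1)/sed pipeline run on two tiny hand-made env dumps (the LD_LIBRARY_PATH value is made up):

    ```shell
    # Fake "before" and "after" environment dumps, already sorted
    printf 'HOME=/root\nPATH=/usr/bin\n' > env-a
    printf 'HOME=/root\nLD_LIBRARY_PATH=/opt/intel/lib\nPATH=/usr/bin\n' > env-b

    # comm -13 prints only the lines unique to env-b,
    # i.e. the variables the setup script added or changed
    comm -13 env-a env-b
    # → LD_LIBRARY_PATH=/opt/intel/lib

    # Prefix each with ENV to paste into the Dockerfile
    comm -13 env-a env-b | sed 's/^/ENV /'
    # → ENV LD_LIBRARY_PATH=/opt/intel/lib
    ```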


    You can't really use .bashrc in Docker. Many common paths don't invoke its startup files: in the language of that documentation, neither a Dockerfile RUN command nor a docker run instruction is an "interactive shell" so those don't read dot files, and usually docker run ... command doesn't invoke a shell at all.

    You also don't need sudo (you are already running as root, and an interactive password prompt will fail); RUN sh -c is redundant (Docker inserts it on its own); and source isn't a standard shell command (prefer the standard ., which will work even on Alpine-based images that don't have shell extensions).
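    A quick demonstration of the last point (the file path and variable are arbitrary):

    ```shell
    # Write a small variable-setting script
    printf 'export DEMO_VAR=ok\n' > /tmp/demo_vars.sh

    # POSIX-standard "." works in any /bin/sh, including dash and
    # Alpine's ash; "source" is a bash/zsh extension and may fail there.
    . /tmp/demo_vars.sh
    echo "$DEMO_VAR"
    ```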