I'm using boot2docker on OS X and cloned the following repo:
https://github.com/enokd/docker-node-hello
It basically has a Dockerfile and a very simple Express app that prints "hello world". Everything runs great when I build and run the image, but of course any changes I make to index.js on my Mac aren't reflected in the running image. I can't find any references on how to set up Docker so that my development environment automatically picks up source code changes, so I feel like I'm "doing it wrong". Any suggestions?
Here's how I'm currently running it (I'm not using Vagrant, and not quite sure if that makes any difference):
$ docker build -t gasi/centos-node-hello .
$ docker run -p 49160:8080 -d gasi/centos-node-hello
$ curl localhost:49160
Update: Added an answer with what I ended up doing.
Update: Added a more current answer using boot2docker 1.3+ and fig.
This is what I ended up doing, so far seems to work but I'm still digging into it:
#!/usr/bin/env bash
# script located in bin/run
NS=mycompany
PROJECT=myproject

# kill and remove old container if it exists
docker kill $PROJECT
docker rm $PROJECT

# tag the previously built image
docker tag $NS/$PROJECT $NS/$PROJECT:old

# build the new image
docker build -t $NS/$PROJECT .

# remove the old image
docker rmi $NS/$PROJECT:old

# run the new container
docker run -dP --name=$PROJECT $NS/$PROJECT /sbin/my_init
In my project root, I simply run:
nodemon -x bin/run
Credit goes to this source.
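One caveat with the script above: on the very first run there is no old container or :old image yet, so docker kill, docker rm, and docker rmi print errors (and would abort the script under set -e). A hedged variant of the cleanup steps that tolerates a missing container:

```shell
# Sketch: same cleanup dance, but ignore failures when nothing exists yet.
# NS and PROJECT are the same variables as in bin/run above.
NS=mycompany
PROJECT=myproject
docker kill $PROJECT 2>/dev/null || true
docker rm $PROJECT 2>/dev/null || true
docker tag $NS/$PROJECT $NS/$PROJECT:old 2>/dev/null || true
```

The `2>/dev/null || true` guard swallows the "no such container" error and keeps the exit status zero, so nodemon doesn't see a failed run.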
Update for docker 1.3 and fig
Fig is great; it took a lot of the complexity out of the script I had before. In addition, boot2docker now natively supports mounting volumes on Mac OS X using VirtualBox's shared folders. This is what works really well for me now:
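Since boot2docker 1.3+ shares /Users into the VM automatically, the simplest alternative to the rsync approach is a plain bind mount (the host path and image name here are hypothetical):

```shell
# /Users/me/myproject is a hypothetical host path; this works because
# boot2docker 1.3+ shares /Users into the VM via VirtualBox shared folders
docker run -p 49160:8080 \
  -v /Users/me/myproject:/home/myProject \
  myNodeImage npm start
```

One reason to still prefer the rsync approach below is that VirtualBox shared folders can be slow for large trees such as node_modules.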
First, the Dockerfile:
FROM ubuntu:14.04
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Set debconf to run non-interactively
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
    build-essential \
    ca-certificates \
    curl \
    git \
    libssl-dev \
    python \
    rsync \
    software-properties-common \
    wget \
    && rm -rf /var/lib/apt/lists/*
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 0.10.33
# Install nvm with node and npm
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.20.0/install.sh | bash \
    && source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
CMD ["npm", "start"]
The fig.yml:

app:
  image: myNodeImage
  working_dir: /home/myProject
  volumes_from:
    - myvols
Here's the new bin/run:
#!/usr/bin/env bash
# This is the bin/run script
docker run --rm --volumes-from myvols myNodeImage \
  rsync \
    --delete \
    --recursive \
    --safe-links \
    --exclude .git --exclude node_modules \
    /data/myProject/ /home/myProject
fig up
I also have a bin/install script that does the node_modules dependency installs. This assumes I've already done an npm install on my host so that any private packages will work. It also works great with npm link: you just need to make a symlink from your /home/linkedProject into $NODE_PATH/linkedProject in your container.
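For the npm link case, the symlink described above can be created inside the container along these lines (linkedProject is a hypothetical package name):

```shell
# Run inside the container, where $NODE_PATH is set by the Dockerfile above;
# fall back to a scratch directory when trying this outside the container.
NODE_PATH="${NODE_PATH:-/tmp/demo_node_modules}"
mkdir -p "$NODE_PATH"
# Link the synced project into the global node_modules so that
# require('linkedProject') resolves to /home/linkedProject
ln -sfn /home/linkedProject "$NODE_PATH/linkedProject"
```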
#!/usr/bin/env bash
# This is the bin/install script
# The chained commands are wrapped in `bash -c` so that the && chain
# runs inside the container rather than on the host.
docker run --rm --volumes-from myvols myNodeImage \
  bash -c 'rm -rf /home/myProject && \
    rsync \
      --delete \
      --recursive \
      --safe-links \
      --exclude .git \
      /data/myProject/ /home/myProject && \
    cd /home/myProject && \
    npm rebuild'
So, to put this all together, here are the steps in order:
Create my data volume container:
docker run -v $HOME/data:/data:ro \
  -v /home \
  -v /path/to/NODE_PATH \
  --name myvols myNodeImage echo Creating my volumes
Run my install script: cd ~/data/myProject && ./bin/install
Run my run script: nodemon -x bin/run
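One final gotcha worth noting: with boot2docker the containers run inside a VM, so curl localhost:PORT from OS X won't reach them unless you've set up extra VirtualBox port forwarding; the VM's address comes from boot2docker ip. A sketch (the container name and internal port are hypothetical):

```shell
# Ask docker which host port was published for the container's port 8080,
# then hit it through the boot2docker VM's IP instead of localhost
PORT=$(docker port myproject 8080 | cut -d: -f2)
curl "$(boot2docker ip):$PORT"
```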