My Jenkins multi-branch pipeline builds and deploys a Docker image of a Node.js app to an EC2 instance. Everything seems to be working, except the Docker container exits immediately after creation. It also doesn't appear to be opening port 3000 like it should.
The image appears correctly with docker images, and its tag increments properly per the Jenkinsfile increment stage. I can only see the exited container with docker ps -a.
I am removing all Docker containers and images with docker rm $(docker ps -a -q) && docker rmi $(docker images -a -q) before each build of the pipeline, but the same issue reappears every time.
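(As an aside, unrelated to the crash: the $(docker ps -a -q) form errors out when there is nothing to remove, since docker rm is then called with no arguments. A guarded variant, assuming GNU or BusyBox xargs with the -r / --no-run-if-empty flag, skips the command entirely on empty input:)

```shell
# Guarded cleanup sketch -- with -r, xargs does not run the command at all
# when its input is empty:
#
#   docker ps -aq     | xargs -r docker rm -f
#   docker images -aq | xargs -r docker rmi -f
#
# The -r behavior demonstrated with echo standing in for docker:
printf '' | xargs -r echo "removed:"          # empty input: prints nothing
printf 'abc123\n' | xargs -r echo "removed:"  # prints: removed: abc123
```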
docker logs <containerID>
is giving an error about modules it can't find, so I'm guessing the problem is in my Dockerfile. I'll attach everything, just to be sure it isn't some odd blunder in another file.
Dockerfile
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 3000
CMD [ "node", "server.js" ]
app folder
images
index.html
package-lock.json
package.json
server.js
server.test.js
docker logs
internal/modules/cjs/loader.js:638
throw err;
^
Error: Cannot find module '/home/node/app/server.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
at Function.Module._load (internal/modules/cjs/loader.js:562:25)
at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
docker-compose.yaml
version: '3.8'
services:
  nodejs-app:
    image: <docker-ID>/<docker-repo>:${IMAGE}
    ports:
      - "3000:3000"
server-cmd.sh
#!/usr/bin/env bash
export IMAGE=$1
docker-compose -f docker-compose.yaml up --detach
echo "success"
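(One thing worth noting about this script: it echoes "success" unconditionally, even when docker-compose fails. A fail-fast sketch, assuming docker-compose is installed on the host, would be:

  set -euo pipefail
  export IMAGE=$1
  docker-compose -f docker-compose.yaml up --detach
  echo "success"

With set -e, the script aborts at the first failing command, so "success" only prints on a real success. The effect, demonstrated with a stand-in failing command in place of docker-compose:)

```shell
# set -e stops the script at the failing command, so "success" is never printed
# and the nonzero exit status propagates to the caller:
bash -c 'set -e; false; echo success' || echo "deploy failed"
```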
Jenkinsfile
#!/usr/bin/env groovy
pipeline {
    agent any
    tools {
        nodejs "node"
    }
    stages {
        stage('increment version') {
            when {
                expression {
                    return env.GIT_BRANCH == "main"
                }
            }
            steps {
                script {
                    dir("app") {
                        echo "incrementing app version..."
                        sh "npm version minor"
                        def packageJson = readJSON file: 'package.json'
                        def version = packageJson.version
                        env.IMAGE_NAME = "$version-$BUILD_NUMBER"
                    }
                }
            }
        }
        stage('Run tests') {
            steps {
                script {
                    dir("app") {
                        sh "npm install"
                        sh "npm run test"
                    }
                }
            }
        }
        stage('Build and Push docker image') {
            when {
                expression {
                    return env.GIT_BRANCH == "main"
                }
            }
            steps {
                script {
                    echo "building the docker image..."
                    withCredentials([usernamePassword(credentialsId: 'dockerhub', passwordVariable: 'PASS', usernameVariable: 'USER')]) {
                        sh "docker build -t <docker-ID>/<docker-repo>:${IMAGE_NAME} ."
                        sh "echo $PASS | docker login -u $USER --password-stdin"
                        sh "docker push <docker-ID>/<docker-repo>:${IMAGE_NAME}"
                    }
                }
            }
        }
        stage('deploy to EC2') {
            when {
                expression {
                    return env.GIT_BRANCH == "main"
                }
            }
            steps {
                script {
                    def shellCmd = "bash ./server-cmd.sh ${IMAGE_NAME}"
                    def ec2Instance = "ec2-user@<ec2-public-IP>"
                    sshagent(['ec2-server-key']) {
                        sh "scp -o StrictHostKeyChecking=no server-cmd.sh ${ec2Instance}:/home/ec2-user"
                        sh "scp -o StrictHostKeyChecking=no docker-compose.yaml ${ec2Instance}:/home/ec2-user"
                        sh "ssh -o StrictHostKeyChecking=no ${ec2Instance} ${shellCmd}"
                    }
                }
            }
        }
        stage('commit version update') {
            when {
                expression {
                    return env.GIT_BRANCH == "main"
                }
            }
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'gitlab', passwordVariable: 'PASS', usernameVariable: 'USER')]) {
                        sh 'git config --global user.email "[email protected]"'
                        sh 'git config --global user.name "jenkins"'
                        sh "git status"
                        sh "git branch"
                        sh "git config --list"
                        sh "git remote set-url origin https://${USER}:${PASS}@gitlab.com/<gitlab-ID>/<gitlab-repo>.git"
                        sh "git add ."
                        sh 'git commit -m "jenkins: version bump"'
                        sh 'git push -f origin HEAD:main'
                    }
                }
            }
        }
    }
}
I've tried a few different Dockerfile configurations; this is the only one that lets the pipeline finish with all stages green and no errors/warnings/hints in the build logs. It's also the only configuration that gets the image onto the EC2 instance and at least starts the container for a moment. That leaves docker logs <containerID>
as my only clue about what might be wrong. I'm lost at this point on how to configure the Dockerfile correctly. Any input would be appreciated!
The problem was in one of the Dockerfile COPY
lines: I was copying from the project root instead of the app subfolder that contains the Node.js files. It should have been COPY --chown=node:node ./app .
instead of COPY --chown=node:node . .
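(For anyone hitting the same trace: the wrong context path explains the exact error. With the build context at the project root, COPY . . lands server.js at /home/node/app/app/server.js, not /home/node/app/server.js, which is precisely the path node complains about. A quick simulation with plain directories, no Docker needed, using a hypothetical layout:)

```shell
# Simulate COPY . . with the build context at the project root.
ctx=$(mktemp -d); dest=$(mktemp -d)
mkdir -p "$ctx/app"
touch "$ctx/app/server.js" "$ctx/Jenkinsfile"

# COPY . .  copies the whole context, keeping the app/ subfolder:
cp -r "$ctx"/. "$dest"/

[ -f "$dest/server.js" ] || echo "no server.js at the destination root"
[ -f "$dest/app/server.js" ] && echo "server.js landed in app/ instead"
```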
I also removed the COPY package*.json ./
line as superfluous. (Correct me if I'm wrong, but it seems to work the same without it.)
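(To the parenthetical: the build does work the same without it, but the separate manifest copy exists for layer caching. If package*.json is copied on its own before npm install, Docker can reuse the cached install layer whenever only the source changed. A sketch keeping both the subfolder fix and the cached install step, assuming the same app/ layout:)

```dockerfile
FROM node:20-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
# Copy only the manifests first, so npm install re-runs only when they change.
COPY --chown=node:node ./app/package*.json ./
USER node
RUN npm install
# Source changes invalidate only this layer, not the install layer above.
COPY --chown=node:node ./app .
EXPOSE 3000
CMD [ "node", "server.js" ]
```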
I changed the FROM
line to node:20-alpine
to clear a deprecation warning.
It seems port 3000 wasn't showing before because the container was never truly starting.
Here is my cleaned up code:
Dockerfile
FROM node:20-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY --chown=node:node ./app .
USER node
RUN npm install
EXPOSE 3000
CMD [ "node", "server.js" ]