Tags: javascript, docker, express, command-line-interface

Dockerize Client - Server CLI Node.js application


My Node.js application features two independent components (an Express.js server and a CLI built with vanilla JS and inquirer.js) that communicate with each other through a REST API. The main reason for doing this is to follow the SOLID single responsibility principle, in case anyone wants to make a different client with a framework such as React.js while keeping the same server-database implementation.

The app is currently running smoothly, but I want to launch it with a single command from a root folder that contains both the client and server folders. After some exploration and experimentation on my own with npm packages, I ended up doing a bit of research to confirm whether my instincts about using Docker were right. This article shed some light on my issue, so I went on looking for more information about Docker.

My progress so far is just creating Dockerfiles for each component and trying to use docker-compose to run the whole environment. I based my approach on this Stack Overflow answer, which gave me a clue about how to work with docker-compose. Then I searched for more information about using Docker for a full-stack project and found this article, which went into more depth on how to dockerize apps that looked like mine.

Unfortunately, I haven't been able to find an answer on how to create a command to run my CLI app from its root folder. Both answers seem satisfactory, but I don't know how to implement them in my project in spite of reading the official documentation.

Now, this is a personal project for my backend development studies and I'd like to understand more about Docker to add it to my skills. That's why, instead of getting just the solution to my question, I think this is a good moment to clarify some concepts I've learned with the community. Please feel free to let me know which of them I need to study again:

  • Docker image: A Docker image is the standardization of a development environment to ensure the team has the same setup and avoid the "it works on my computer, why not on yours" issue.
  • Docker container: Containers have everything an application needs to run in the environment. That includes source code and dependencies.
  • Dockerfile: A Dockerfile stores the sequence of steps Docker follows to create a container: image creation, dependency installation and source code copying.
  • Docker Compose file: Sets up/runs the development environment.
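
To check that I'm connecting these concepts correctly, this is roughly how I picture the flow on the command line (blog-server is just a placeholder name for this sketch):

# build an image from server/Dockerfile
docker build -t blog-server ./server
# start a container from that image, publishing the API port
docker run --rm -p 3000:3000 blog-server
# or let docker-compose build and run everything described in docker-compose.yaml
docker-compose up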

At this point I've only talked about the development environment, but I haven't touched anything related to production. What I mean is, if I ask a user to have fun with my CLI application, he/she won't be able to just run the program unless Docker exists on his/her machine. Isn't Docker the solution to this problem?

What I want to achieve is:

  1. Create a command that users can use to launch the app from the root folder
  2. Run the server first and run the client only afterwards. The client must not be able to run if there is no server running
  3. Run the server in the background, so users don't have to open two terminal windows
  4. Close both client and server on SIGINT or SIGTERM

From my research I know that 2, 3 and 4 can be covered with docker-compose, though I could be wrong. Number 1 is a gray area for me.
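
Just to show where my head is at, here is a rough sketch of how I imagine docker-compose could cover points 2, 3 and 4 (the /health endpoint in the healthcheck is only a guess, my server doesn't expose one yet, and I'm not sure the depends_on condition is supported by every Compose version):

version: "3.8"
services:
  backend:
    build: ./server
    ports:
      - "3000:3000"
    healthcheck:                         # point 2: tell Compose when the API is actually ready
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 5s
      timeout: 3s
      retries: 5
  frontend:
    build: ./client
    depends_on:
      backend:
        condition: service_healthy       # point 2: start the client only once the server is healthy
# point 3: `docker-compose up -d` runs everything in the background
# point 4: `docker-compose down` (or Ctrl+C on a foreground `up`) sends SIGTERM to both containers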

Project structure:

root
  docker-compose.yaml
  client
    Dockerfile
    node_modules
    package.json
    package-lock.json
    ...source code
  server
    Dockerfile
    node_modules
    package.json
    package-lock.json
    ...source code

Please let me know if you need me to expand on anything in my question.

Update

Dave Maze made me realize that I should have included the client and server Dockerfiles plus my docker-compose.yaml to complete the description of the question.

./client/Dockerfile

FROM node:latest
WORKDIR /client
COPY package.json package.json
RUN npm install
RUN npm install dateformat
COPY . .
ENTRYPOINT ["npm", "start"]

./server/Dockerfile

FROM node:latest
WORKDIR /server
COPY package.json package.json
RUN npm install
RUN npm install express-async-handler
ENV NODE_ENV=${NODE_ENV}
COPY . .
ENTRYPOINT ["npm","start"]

./docker-compose.yaml

version: "3.3"
services:
  backend:
    build:
      context: "./server"
    container_name: server
    ports:
    - "3000:3000"
    command: node ./server/src/index.js
  frontend:
    depends_on:
      - backend
    build: ./client

Currently Compose builds the images correctly after I applied some feedback from another developer on these files. That was good news, but the way the program runs does not match the way it was designed, something I should have detailed in the question description as well.

Design:

My CLI program creates and edits blog posts and performs some basic admin actions from a profile called author. It also has a reader profile, which creates comments and lets the user read blog posts. That is achieved by using flags on the command line in the following way:

blogcli -a gives access to the author profile, blogcli -r gives access to the reader profile, and blogcli alone or with wrong options outputs the help.

When I run docker-compose there isn't any issue with the execution of the program, but it runs npm start on the frontend without giving me the chance to pass flags. I'm aware that the ENTRYPOINT line in my ./client/Dockerfile does what it is told, but I wonder if there's a way to let anyone who runs docker-compose up have access to a terminal to fire the program with flags. Hope this makes sense.
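
In case it helps to clarify what I mean, this is roughly what I would expect to be able to do with plain docker run (blog-client is just a placeholder image name, and I'm not sure the extra -- separator for npm is even right here):

# build the client image from ./client/Dockerfile
docker build -t blog-client ./client
# with ENTRYPOINT ["npm", "start"], anything appended after the image name
# ends up after "npm start" inside the container
docker run --rm -it blog-client -- -a    # effectively runs: npm start -- -a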


Solution

  • For a couple of reasons, you don't typically run interactive CLI applications via Compose, and often not in a container at all. Most notably, it takes an extra command to attach to the stdin/stdout of a Compose container. Compose also generally expects its containers not to exit, so you could wind up needing to re-run docker-compose up -d every time you wanted to run the CLI again.
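
    To give a sense of what that extra work looks like, a rough sketch of the "attach" route (not something I'd recommend here) would be a frontend service like the one in the question plus an explicit attach step; the container name is just whatever Compose assigns:

    services:
      frontend:
        build: ./client
        stdin_open: true   # keep STDIN open so the CLI can read input
        tty: true          # allocate a pseudo-terminal for interactive prompts

    docker-compose up -d
    docker attach <frontend-container-name>   # detach again with Ctrl-p Ctrl-q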

    For a CLI application that connects to a backend server, I'd only run the backend in a container. The Compose definition you have for the backend should be fine; you can probably trim it down further to

    version: '3.8'
    services:
      backend:
        build: ./server
        ports:
          - '3000:3000'
    

    Start the server with docker-compose up -d. Because of the ports: block, the server can be reached at http://localhost:3000/ from the host system, outside any container, on most modern Docker setups.
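
    Concretely, starting and sanity-checking the backend might look like this (the curl path is just an example; point it at whatever route your server actually serves):

    docker-compose up -d            # build if needed and start the backend in the background
    docker-compose ps               # confirm the backend container is running
    curl http://localhost:3000/     # quick smoke test against the published port
    docker-compose logs -f backend  # follow the server logs if something looks off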

    Run the CLI directly on the host, the same way you do now.

    cd client
    npm install
    ./blogcli -h http://localhost:3000/ -a
    ./blogcli -h http://localhost:3000/ -r
    

    If you really needed to run the CLI in a container, I might use docker-compose run. This runs a one-off command based on some existing container definition, replacing its command:.

    services:
      backend: # as above
      cli:
        build: ./client
        depends_on:
          - backend
        entrypoint: /app/blogcli
        environment:
          BACKEND_URL: http://backend:3000/
        profiles:
          - cli
    
    docker-compose run --rm cli -a
    docker-compose run --rm cli -r
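
    Tying this back to the goals in the question, a rough sketch of the day-to-day workflow under this setup would be the following; note that docker-compose run gives you an interactive terminal by default, so prompt-based tools like inquirer.js should still work:

    docker-compose up -d backend      # goal 3: the server runs in the background
    docker-compose run --rm cli -a    # goals 1 and 2: one command, and run starts the backend first via depends_on
    docker-compose down               # goal 4: stop and remove both containers when you're done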