ruby-on-rails, docker, docker-compose, dockerfile, docker-machine

docker-compose rebuild after each Gemfile update?


First, I'll explain what's working:

I created a new Rails app using Docker, following these docs. After I run docker-compose up, my Rails app is running at http://docker-ip:port.

Then, in a new terminal, I run a scaffold like this:

docker-compose run --rm app bundle exec rails g scaffold note title body:text

and then:

docker-compose run --rm app bundle exec rake db:migrate

to migrate the database. Then when I go to http://docker-ip:port, my new scaffold works. But scaffolding doesn't need the Rails server to restart.

What's not working:

Now say I need the devise gem. I update my Gemfile locally in Sublime Text and then run:

docker-compose run --rm app bundle install

This installs the new devise gem as expected. But when I run:

docker-compose run --rm app bundle exec rails g devise:install

I get the error:

Could not find bcrypt-3.1.11 in any of the sources
Run `bundle install` to install missing gems.

So basically, after adding devise to the Gemfile, I have to run docker-compose build again, which takes a long time because its bundle install step reinstalls all the required gems from scratch.

So how can I update the Gemfile without rebuilding the image every time?

Or where am I going wrong?


Solution

  • First, about the workaround with docker exec: modifying container state this way is not a good approach. What if you need to run one more instance of the app container? It won't contain the changes made with exec, so you'd have to install the gems there again, or rebuild the image anyway. Needing multiple containers is not a rare case: for example, you run docker-compose up for the dev environment and docker-compose run --rm web bash in a nearby terminal to get a shell in a second app container, where you can run tests, migrations, and generators, or use the Rails console, without stopping the containers launched by docker-compose up, as shown in the sketch below.
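    For example, a typical two-terminal workflow might look like the following sketch (using the web service name from the compose file below):

    # terminal 1: run the dev environment
    docker-compose up

    # terminal 2: open a shell in a second, throwaway app container
    docker-compose run --rm web bash
    # ...then, inside that container, without stopping the first one:
    bundle exec rake db:migrate
    bundle exec rails console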

    Now, about the solution. When you run docker-compose run --rm app bundle install, you create a new container, install the new gems into it (this operation updates Gemfile.lock, and you see those changes because your project directory is mounted into the container), and exit. The container is then removed because of the --rm flag, and changes made in a container never affect the image.
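    That is exactly what the error in the question shows; the effect is easy to reproduce (using the app service name from the question):

    # gems installed here go into a throwaway container...
    docker-compose run --rm app bundle install

    # ...so the next one-off container starts from the unchanged image:
    docker-compose run --rm app bundle exec rails g devise:install
    # => Could not find bcrypt-3.1.11 in any of the sources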

    To avoid rebuilding the image on each gem install, you can add a service to store gems. Here is a modified docker-compose.yml, based on the one from the docs.

    version: '3'
    services:
      db:
        image: postgres
      web:
        build: .
        # install any gems missing from the shared volume, then start the server
        command: bash -c "bundle install && bundle exec rails s -p 3000 -b 0.0.0.0"
        volumes:
          - .:/myapp
          - bundle_cache:/bundle_cache
        ports:
          - "3000:3000"
        depends_on:
          - db
        environment:
          # Bundler installs into (and loads from) the shared volume
          - BUNDLE_PATH=/bundle_cache
      bundle_cache:
        # minimal placeholder service that keeps a reference to the gem volume
        image: busybox
        volumes:
          - bundle_cache:/bundle_cache

    volumes:
      bundle_cache:
    
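    A note on the moving parts: BUNDLE_PATH=/bundle_cache tells Bundler to install gems into the named volume instead of the container's own filesystem, and the busybox bundle_cache service is just a minimal placeholder that keeps a reference to that volume. Once a gem is installed, you can check that it lives in the volume, for example:

    docker-compose run --rm web bundle show devise
    # should print a path under /bundle_cache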

    When a shared volume stores gems for all your app containers, you don't need to rebuild the image when adding new gems at all, until you delete the bundle_cache volume yourself, for example with docker-compose down -v (a plain docker-compose down removes the containers but keeps named volumes, and it's rarely needed anyway). And of course you don't need to re-run bundle install in every container that should see the new gems; they all share the same volume. So it's much easier and saves a lot of time.

    This does, however, require an additional bundle install after the initial docker-compose build, because the /bundle_cache volume is empty when it is first created and mounted into the application container. After that, your gems are stored in the separate volume, and that storage is available to every application container you start.
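    Putting it together, a day-to-day workflow under this setup might look like the following sketch:

    # one-time setup: build the image, then populate the gem volume
    docker-compose build
    docker-compose run --rm web bundle install

    # later, after adding a gem (such as devise) to the Gemfile: no rebuild needed
    docker-compose run --rm web bundle install
    docker-compose run --rm web bundle exec rails g devise:install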