Tags: ruby-on-rails, amazon-web-services, daemon, amazon-elastic-beanstalk

How to configure Elastic Beanstalk to deploy code to an instance but not add it to the load balancer


I am moving a Rails app to AWS and am using EB. I need to run a daemon on a separate instance (I do not want this instance to be serving HTTP requests).

The daemon is part of the app's codebase and will communicate with the same RDS instance as the web server instances. I would like to know, if possible, how I can configure EB to deploy the Rails app to an additional instance, but avoid adding that instance to the load balancer, and (re)start the daemon on that instance after a new revision is deployed.

I realize I could achieve the same result by managing this additional instance myself, outside of EB, but I have a feeling there's a better way. I have done some research without finding what I'm after.

I could also just run the daemon on one of the web server instances, and live with the fact that it's also serving HTTP requests. Since this is acceptable for right now, that's what I'm doing today ... but I want a dedicated instance for that daemon, and it would be great if I didn't have to drop the convenience of EB deployments just for that.

This is the first time I've used Elastic Beanstalk; I have some experience with AWS. I hope my question makes sense. If it doesn't, an answer that points out why it doesn't make sense will be accepted.

Thanks!


Solution

  • With Elastic Beanstalk, this is typically achieved by using a worker tier environment within the same EB application (same codebase, same .eb* files, just different environments).

    Here's an example of a Rails application that is deployed to one web server environment and two specialized worker environments:

    [yacin@mac my_rails_app (master)]$ eb list -v
    Region: us-west-1
    Application: my_rails_app
        Environments: 3
            email-workers-production : ['i-xxxxxxx']
            * web-servers-production : ['i-xxxxxx']
            job1-workers-production : ['i-xxxxxxx', 'i-xxxxxx']
    

    The workers don't have a public HTTP interface; they pull jobs from a queue shared with the front end. The workers can be configured to access the same database, and they support load balancing and autoscaling.
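    In a worker tier environment, a local daemon (sqsd) reads messages from the SQS queue and POSTs each message body to an HTTP endpoint inside your app, so the web and worker environments can share one codebase. A minimal sketch of the handler side, assuming a hypothetical JSON job format and `send_email` job type (both illustrative, not part of the question):

    ```ruby
    require 'json'

    # Hypothetical job handler: a worker-tier controller would receive the
    # raw POST body from sqsd and delegate to something like this.
    def process_job(raw_body)
      job = JSON.parse(raw_body)
      case job['type']
      when 'send_email'
        # EmailSender.deliver(job['payload'])  # app-specific work goes here
        "emailed #{job['payload']['to']}"
      else
        raise ArgumentError, "unknown job type: #{job['type']}"
      end
    end
    ```

    The web environment enqueues jobs to SQS, and the worker environment's sqsd daemon delivers them to this handler, so the daemon logic is deployed with `eb deploy` like everything else.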

    It's a very flexible and scalable approach, but it will require some work to set up. Here are a couple of resources on the subject: Amazon Worker Tier Video Tutorial, Elastic Beanstalk.
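    Creating a worker environment alongside the existing web environment with the EB CLI might look like this (environment names are illustrative; this is a sketch, not a complete setup, and it assumes the EB CLI is configured for the application):

    ```shell
    # Create a worker-tier environment in the same EB application;
    # --tier worker keeps it out of any load balancer.
    eb create email-workers-production --tier worker

    # Deploy the same codebase to that environment on each new revision.
    eb deploy email-workers-production
    ```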