Tags: amazon-ec2, ansible, ansible-inventory

Ansible ec2 only provision required servers


I've got a basic Ansible playbook like so:

---

- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  become: yes
  become_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web

with the following inventory:

[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python

The Provision ec2 servers play does what you'd expect: it creates an ec2 instance, creates a [webservers] host group, and adds the new instance's IP to it.
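For context, the aws role itself isn't shown here; presumably it does something along these lines (a hedged sketch only — the module parameters, AMI ID, and key name are placeholders, not the actual values):

```yaml
# roles/aws/tasks/main.yml — hypothetical reconstruction
- name: Launch an ec2 instance
  amazon.aws.ec2_instance:
    name: webserver
    key_name: my_keypair             # placeholder key pair
    instance_type: t2.micro
    image_id: ami-0123456789abcdef0  # placeholder Ubuntu AMI
    region: us-east-1
    wait: yes
  register: ec2

# add_host mutates the in-memory inventory, so later plays in the
# same run can target the new instance via the webservers group.
- name: Add the new instance to the webservers host group
  add_host:
    name: "{{ item.public_ip_address }}"
    groups: webservers
  loop: "{{ ec2.instances }}"
```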

The Configure {{ application_name }} servers step then configures that server, installing everything I need.

So far so good, this all does exactly what I want and everything seems to work.

Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles: ideally I'd create a dbserver, a webserver, and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.

The only ways I can think of to make this work... well, they don't work.

I tried simply declaring the host groups without hosts in the inventory:

[webservers]

[dbservers]

[memcachedservers]

but that's a syntax error.

I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:

- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws

but those groups don't exist until after the respective step is complete, so I don't think that will work either.

I've seen lots about dynamic inventories, but I haven't been able to understand how they would help me here. I've also looked through countless examples of ansible ec2 provisioning projects; invariably they either provision pre-existing ec2 instances, or create a single instance and install everything on it.


Solution

  • In the end I realised it made much more sense to split the different parts of the stack into separate playbooks, with a full-stack playbook that includes each of them.

    My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:

    ---
    
    - name: Provision ec2 apiservers
      hosts: apiservers  #important bit
      connection: local  #important bit
      vars:
        - host_group: apiservers
        - security_group: blah
      roles:
        - aws
    
    - name: Configure {{ application_name }} apiservers
      hosts: apiservers:!127.0.0.1  #important bit
      become: yes
      become_user: root
      remote_user: ubuntu
      vars_files:
        - env_vars/common.yml
        - env_vars/remote.yml
      vars:
        - setup_git_repo: no
        - update_apt_cache: yes
      roles:
        - common
        - db
        - memcached
        - web
    

    This means the first play in each layer's playbook adds the new host to the apiservers group, and the second play (Configure ... apiservers) can then exclude localhost without triggering a "no hosts matched" error.
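    The "largely the same" remote hosts file presumably seeds each layer's group with the local control machine, so the provisioning play has a host to run against before any real instances exist (an assumed layout, inferred from the exclude pattern above):

    ```ini
    ; hosts — assumed inventory; each group starts with only the control
    ; machine, and add_host appends real instance IPs at runtime.
    [apiservers]
    127.0.0.1 ansible_python_interpreter=/usr/local/bin/python

    [webservers]
    127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
    ```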

    The wrapping playbook is dead simple, just:

    ---
    
    - name: Deploy all the {{ application_name }} things!
      hosts: all
    
    - import_playbook: webservers.yml
    - import_playbook: apiservers.yml
    

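    With this split, any single layer can be deployed in isolation by running its playbook directly (the inventory and wrapper filenames below are assumptions, not taken from the project):

    ```shell
    # Provision and configure just one layer:
    ansible-playbook -i hosts apiservers.yml

    # Or deploy the whole stack via the wrapper playbook:
    ansible-playbook -i hosts site.yml
    ```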
    I'm very much a beginner with regard to ansible, so please take this for what it is: one person's attempt to find something that works. There may be better options, and this could violate best practices all over the place.