I have written two roles with Ansible. The first role (i.e. provision) is executed locally on an instance that has the required IAM permissions to provision EC2 instances (see below):
- name: Provision "{{ count }}" EC2 instances in "{{ region }}"
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    ...
    exact_count: "{{ count }}"
    count_tag: "{{ count_tag }}"
    instance_tags:
      ...
  register: ec2
I then add the private IP addresses to the hosts file.
- name: Add the newly created EC2 instances to the local hosts file
  local_action:
    module: lineinfile
    dest: ./hosts
    regexp: "{{ item.private_ip }}"
    insertafter: '[sit]'
    line: "{{ item.private_ip }}"
  with_items: "{{ ec2.instances }}"
I wait for SSH to be available.
- name: Wait for the SSH process to be available on "{{ sit }}"
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 60
    timeout: 320
    state: started
  with_items: "{{ ec2.instances }}"
The second role (i.e. setupEnv) sets up the environment on the 'sit' hosts, such as users and group directories. I attempt to run the roles sequentially (see the main.yml playbook below):
- hosts: local
  connection: local
  gather_facts: false
  user: svc_ansible_lab
  roles:
    - provision

- hosts: sit
  connection: ssh
  gather_facts: true
  user: ec2-user
  roles:
    - setupEnv
However, only the first role gets executed on the local host. Ansible waits until SSH is available on the provisioned instances, and then the run finishes without attempting the setupEnv role. Is there a way I can make sure the second role is executed on the sit hosts once SSH is available?
The inventory file is not automatically re-read in between plays. Instead of modifying the inventory file, use the add_host module and the in-memory inventory:
- name: Add the newly created EC2 instances to the in-memory inventory
  add_host:
    hostname: "{{ item.private_ip }}"
    groups: sit
  with_items: "{{ ec2.instances }}"
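With that task at the end of the provision role (or the first play), the second play from the question can stay as it is, because hosts added with add_host are visible to all subsequent plays in the same run. A minimal sketch of the resulting main.yml, assuming the same role names and connection settings as in the question:

# First play: provisions the instances locally; the provision role ends with
# the add_host task above, which puts the new IPs into the in-memory "sit" group.
- hosts: local
  connection: local
  gather_facts: false
  user: svc_ansible_lab
  roles:
    - provision

# Second play: targets the in-memory "sit" group; no inventory reload is needed.
- hosts: sit
  connection: ssh
  gather_facts: true
  user: ec2-user
  roles:
    - setupEnv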
Alternatively, you can use the meta module with the refresh_inventory parameter to force Ansible to re-read the inventory file:
- meta: refresh_inventory
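In context, this would go inside the first play, after the lineinfile task that writes the new IPs to ./hosts; a minimal sketch, reusing the task from the question:

- name: Add the newly created EC2 instances to the local hosts file
  local_action:
    module: lineinfile
    dest: ./hosts
    regexp: "{{ item.private_ip }}"
    insertafter: '[sit]'
    line: "{{ item.private_ip }}"
  with_items: "{{ ec2.instances }}"

# Force Ansible to re-read ./hosts so that the next play's "sit" group
# includes the freshly written IP addresses.
- meta: refresh_inventory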