I have a simple Ansible dynamic inventory for AWS servers that looks like this.
---
plugin: aws_ec2
regions:
  - eu-west-2
keyed_groups:
  - key: tags.Name
hostnames:
  # A list in order of precedence for hostname variables.
  - ip-address
compose:
  ansible_host: _Applications_
  ansible_user: "'ubuntu'"
This works fine, except that I also have another instance that's Red Hat, which means that when I try to run a simple ping command on all the hosts, it fails because the ubuntu username is only valid on one of the servers.
Is there a way to group my inventory file so that I can add the ec2-user username for a specific group, maybe based on its tag or something else?
I could do this easily with my static inventory, but I'm not sure how to do it with a dynamic inventory.
I've tried setting my Ansible inventory as an environment variable
export ANSIBLE_INVENTORY=~/Users/inventory
And placed my aws_ec2.yaml in the inventory directory, along with a group_vars directory containing a file for each of my groups, each with a default username:
username: ubuntu
username: ec2-user
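For illustration, the layout I'm describing looks roughly like this (the file names web.yaml and db.yaml are placeholders for whatever my tag-based groups are actually called):
inventory/
├── aws_ec2.yaml
└── group_vars/
    ├── web.yaml     # username: ubuntu
    └── db.yaml      # username: ec2-user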
and then setting my inventory file as such:
compose:
  ansible_user: "{{ username }}"
But when Ansible tries to connect, it's using an admin username and not what's set in my group vars.
Is there a way to set the different usernames needed to connect to the different type of servers?
Per the example for the constructed plugin, you can use the keyed_groups feature to create groups by ansible_distribution:
keyed_groups:
  # this creates a group per distro (distro_CentOS, distro_Debian) and assigns
  # the hosts that have matching values to it, using the default separator "_"
  - prefix: distro
    key: ansible_distribution
And then set ansible_user inside group_vars/distro_Ubuntu.yaml and group_vars/distro_RedHat.yaml.
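For example, a minimal sketch of those two files (assuming your hosts report their ansible_distribution as Ubuntu and RedHat, and using the usernames from your question):
# group_vars/distro_Ubuntu.yaml
ansible_user: ubuntu

# group_vars/distro_RedHat.yaml
ansible_user: ec2-user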
Also from the documentation, this requires fact caching to operate (because otherwise Ansible doesn't know the value of ansible_distribution at the time it's processing the keyed_groups setting).
I don't have access to AWS at the moment, but here's how I'm testing everything locally. Given an inventory that looks like:
$ tree inventory
inventory/
├── 00-hosts.yaml
└── 10-constructed.yaml
Where inventory/00-hosts.yaml looks like:
all:
  hosts:
    host0:
      ansible_host: localhost
And inventory/10-constructed.yaml looks like:
plugin: constructed
strict: false
groups:
  ipmi_hosts: ipmi_host|default(false)
keyed_groups:
  - prefix: "distro"
    key: ansible_distribution
And ansible.cfg looks like:
[defaults]
inventory = inventory
enable_plugins = constructed
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./.facts
The first time I run this playbook:
- hosts: all
  gather_facts: true
  tasks:
    - debug:
        var: group_names
The output of the debug task is:
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [host0] => {
    "group_names": [
        "ungrouped"
    ]
}
But because of the fact gathering and caching performed by the previous playbook run, the second time I run it the output is:
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [host0] => {
    "group_names": [
        "distro_Fedora"
    ]
}
Similarly, before the first playbook run ansible-inventory --graph outputs:
@all:
|--@ungrouped:
| |--host0
But after running the playbook once, I get:
@all:
|--@distro_Fedora:
| |--host0
|--@ungrouped:
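If you don't want to rely on a previous playbook run to populate the cache, one option (a sketch, using the same ansible.cfg caching settings as above) is to prime it by running the setup module ad hoc and then re-checking the graph:
$ ansible all -m setup
$ ansible-inventory --graph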
I've bundled this all into an example repository.