Inventory file (inventory/k8s.yaml):
plugin: kubernetes.core.k8s
connections:
- kubeconfig: ~/.kube/config
  context: 'cluster-2'
Task file (roles/common/tasks/main.yaml):
# Method 1: Using the `kubernetes.core.k8s_info` module to list the pod names:
- name: Get a list of all pods from any namespace
  kubernetes.core.k8s_info:
    kind: Pod
  register: pod_list

- name: Print pod names
  debug:
    msg: "pod_list: {{ pod_list | json_query('resources[*].metadata.name') }}"

# Method 2: Using the `shell` module to list the pod names:
- name: Get pod names
  shell: kubectl get pods
  register: pod_list2

- name: Print pod names
  debug:
    msg: "{{ pod_list2.stdout }}"
Ansible config (ansible.cfg):
[inventory]
enable_plugins = host_list, auto, yaml, ini, kubernetes.core.k8s
Main file (main.yaml):
---
- hosts: localhost
  gather_facts: false
  collections:
    - azure.azcollection
    - kubernetes.core
  roles:
    - "common"
Command used to run the playbook:
ansible-playbook main.yaml -i cluster-2/k8s.yaml -e role=common -e cluster_name=cluster-2
Question: I am running the above configs to get the pods from the remote cluster mentioned in the inventory file. The problem is that I am still getting the pod names from the local cluster, not from cluster-2, in both Method 1 and Method 2.
The k8s plugin should get the list of pods from cluster-2 as described in the inventory file. How can I connect to a remote Kubernetes cluster?
I also checked the output with -vvvv:
ansible-playbook [core 2.14.0]
config file = /Users/test/u/apps/ansible.cfg
configured module search path = ['/Users/test/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/test/Library/Python/3.9/lib/python/site-packages/ansible
ansible collection location = /Users/test/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/test/Library/Python/3.9/bin/ansible-playbook
python version = 3.9.12 (main, Mar 26 2022, 15:52:10) [Clang 13.0.0 (clang-1300.0.29.30)] (/usr/local/opt/[email protected]/bin/python3.9)
jinja version = 3.1.2
libyaml = True
Using /Users/test/u/apps/ansible.cfg as config file
setting up inventory plugins
Loading collection kubernetes.core from /Users/test/.ansible/collections/ansible_collections/kubernetes/core
You're trying to use both the kubernetes inventory plugin and the k8s_info module, and because of that you're getting conflicting results. The two don't have anything to do with each other.

The kubernetes inventory plugin is -- I think -- a weird beast; it produces an Ansible inventory in which the pods in your cluster are presented as Ansible hosts. To see a list of all the pod names in your cluster, you could write a playbook like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: Print pod names
      debug:
        msg: "{{ inventory_hostname }}"
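Assuming the playbook above is saved as list-pods.yaml (the filename is just illustrative), you would run it against the plugin-generated inventory with something like:
ansible-playbook -i inventory/k8s.yaml list-pods.yaml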
This will respect the context you've configured in your kubernetes inventory plugin configuration. For example, if I have the following in inventory/k8s.yaml:
plugin: kubernetes.core.k8s
connections:
- kubeconfig: ./kubeconfig
  context: 'kind-cluster2'
Then the above playbook will list the pod names from kind-cluster2, regardless of the current-context setting in my kubeconfig file. In my test environment, this produces:
PLAY [all] *********************************************************************
TASK [Print pod names] *********************************************************
ok: [kubernetes] => {
"msg": "kubernetes"
}
ok: [coredns-565d847f94-2shl6_coredns] => {
"msg": "coredns-565d847f94-2shl6_coredns"
}
ok: [coredns-565d847f94-md57c_coredns] => {
"msg": "coredns-565d847f94-md57c_coredns"
}
ok: [kube-dns] => {
"msg": "kube-dns"
}
ok: [etcd-cluster2-control-plane_etcd] => {
"msg": "etcd-cluster2-control-plane_etcd"
}
ok: [kube-apiserver-cluster2-control-plane_kube-apiserver] => {
"msg": "kube-apiserver-cluster2-control-plane_kube-apiserver"
}
ok: [kube-controller-manager-cluster2-control-plane_kube-controller-manager] => {
"msg": "kube-controller-manager-cluster2-control-plane_kube-controller-manager"
}
ok: [kube-scheduler-cluster2-control-plane_kube-scheduler] => {
"msg": "kube-scheduler-cluster2-control-plane_kube-scheduler"
}
ok: [kindnet-nc27b_kindnet-cni] => {
"msg": "kindnet-nc27b_kindnet-cni"
}
ok: [kube-proxy-9chgt_kube-proxy] => {
"msg": "kube-proxy-9chgt_kube-proxy"
}
ok: [local-path-provisioner-684f458cdd-925v5_local-path-provisioner] => {
"msg": "local-path-provisioner-684f458cdd-925v5_local-path-provisioner"
}
PLAY RECAP *********************************************************************
coredns-565d847f94-2shl6_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
coredns-565d847f94-md57c_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
etcd-cluster2-control-plane_etcd : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kindnet-nc27b_kindnet-cni : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-apiserver-cluster2-control-plane_kube-apiserver : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-controller-manager-cluster2-control-plane_kube-controller-manager : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-dns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-proxy-9chgt_kube-proxy : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-scheduler-cluster2-control-plane_kube-scheduler : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
local-path-provisioner-684f458cdd-925v5_local-path-provisioner : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The key point here is that your inventory will consist of a list of pods. I've never found this particularly useful.
k8s_info module

The k8s_info module queries a kubernetes cluster for a list of objects. It doesn't care about your inventory configuration -- it will run on whichever target host you've defined for your play (probably localhost) and perform the rough equivalent of kubectl get <whatever>. If you want to use an explicit context, you need to set that as part of your module parameters. For example, to see a list of pods in kind-cluster2, I could use the following playbook:
- hosts: localhost
  gather_facts: false
  tasks:
    - kubernetes.core.k8s_info:
        kind: pod
        kubeconfig: ./kubeconfig
        context: kind-cluster2
      register: pods

    - debug:
        msg: "{{ pods.resources | json_query('[].metadata.name') }}"
Which in my test environment produces the following output:
PLAY [localhost] ***************************************************************
TASK [kubernetes.core.k8s_info] ************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"coredns-565d847f94-2shl6",
"coredns-565d847f94-md57c",
"etcd-cluster2-control-plane",
"kindnet-nc27b",
"kube-apiserver-cluster2-control-plane",
"kube-controller-manager-cluster2-control-plane",
"kube-proxy-9chgt",
"kube-scheduler-cluster2-control-plane",
"local-path-provisioner-684f458cdd-925v5"
]
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
In conclusion: you probably want to use k8s_info rather than the inventory plugin, and you'll need to configure the module properly by setting the context (and possibly the kubeconfig) parameters when you call the module.
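Applied to your roles/common/tasks/main.yaml, Method 1 might look like the following sketch (the kubeconfig path and context name are copied from your inventory file; adjust them to your environment):

# Method 1, with an explicit connection to cluster-2
- name: Get a list of all pods from any namespace
  kubernetes.core.k8s_info:
    kind: Pod
    kubeconfig: ~/.kube/config
    context: cluster-2
  register: pod_list

- name: Print pod names
  debug:
    msg: "pod_list: {{ pod_list | json_query('resources[*].metadata.name') }}"

For Method 2, kubectl itself accepts a --context flag, so the rough equivalent would be kubectl --context cluster-2 get pods.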
Is there any way I can define context and kubeconfig outside of the tasks (globally) if I am using the k8s_info module?
According to the documentation, you could set the K8S_AUTH_KUBECONFIG and K8S_AUTH_CONTEXT environment variables if you want to globally configure the settings for the k8s_info module.
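For example, a minimal sketch of the environment-variable approach (the values mirror your inventory file; since the play targets localhost, the module inherits the environment of the shell that launched ansible-playbook):

export K8S_AUTH_KUBECONFIG=~/.kube/config
export K8S_AUTH_CONTEXT=cluster-2
ansible-playbook main.yaml -i inventory/k8s.yaml

You could also write your task like this: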
- kubernetes.core.k8s_info:
    kind: pod
    kubeconfig: "{{ k8s_kubeconfig }}"
    context: "{{ k8s_context }}"
  register: pods
And then define the k8s_kubeconfig and k8s_context variables somewhere else in your Ansible configuration (e.g., as group vars). This makes it easy to retarget things to a different cluster with only a single change.
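For instance, a minimal group vars sketch (the path and values here are illustrative):

# group_vars/all.yaml
k8s_kubeconfig: ~/.kube/config
k8s_context: cluster-2

Pointing every k8s_info task at a different cluster then only requires editing k8s_context in this one file.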