I have a single-node k3s Kubernetes cluster and am trying to create a deployment on it via Ansible. I know, of course, that I'm using a k8s collection against a k3s cluster, but maybe it can be solved anyhow.
The relevant part of my playbook is:
---
- hosts: k3s_cluster
  become: yes
  tasks:
    - name: Create a Deployment by reading the definition from a local file
      kubernetes.core.k8s:
        api_key: mytoken
        state: present
        src: deployment.yml
The deployment file is simple.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
If I apply it directly via kubectl, it works. The kubeconfig exists on the target host (and also on my local machine) in ~/.kube/config. Specifying that location doesn't help.
In the end the problem was (as the error message indicated) the kubeconfig location. I was using https://github.com/k3s-io/k3s-ansible to deploy k3s.
The Problem: As Sysadmin mentioned in his post, k3s puts the config into /etc/rancher/k3s/k3s.yaml, but the Ansible collection used (https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_module.html) expects it in /home/user/.kube/config.
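To illustrate the mismatch, the task can also be pointed at the k3s-generated file directly via the module's kubeconfig option (a minimal sketch, assuming the task runs with become: yes on the k3s node so it can read the root-owned file):

- name: Create a Deployment, using the k3s default kubeconfig location
  kubernetes.core.k8s:
    kubeconfig: /etc/rancher/k3s/k3s.yaml
    state: present
    src: deployment.yml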
Solution: Either follow Sysadmin's suggestion and set the global variable, but then you still have to adjust the file permissions; or copy /etc/rancher/k3s/k3s.yaml to /home/user/.kube/config (renaming it to config and setting the ownership/permissions in the process).
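A minimal sketch of the copy approach as Ansible tasks, assuming the remote user is called user (the home path, owner, group and modes are assumptions to adapt to your setup):

- name: Ensure the .kube directory exists for the user
  ansible.builtin.file:
    path: /home/user/.kube
    state: directory
    owner: user
    group: user
    mode: "0755"

- name: Copy the k3s kubeconfig to the location the collection expects
  ansible.builtin.copy:
    src: /etc/rancher/k3s/k3s.yaml
    dest: /home/user/.kube/config
    remote_src: yes
    owner: user
    group: user
    mode: "0600"

With the file in place, the kubernetes.core.k8s task should pick it up from the default ~/.kube/config location, or you can still point to it explicitly via the kubeconfig option.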