After starting a cloud instance, I install several packages with an Ansible playbook. This works when I access the instance remotely, but when my program runs the playbook directly, right after creating the instance, it fails because it cannot find a standard package: No package matching 'docker.io' is available. As implied above, the same playbook has no issue finding the package when I later connect to the instance and run it manually.
I tested whether the package is in the apt cache using apt-cache show docker.io
, and apparently it is not. This confuses me a little, since I don't understand how it can magically appear later. But it does explain why Ansible is unable to find the package, since the apt module just takes a look in the cache:
TASK [common : Install Docker] *************************************************
task path: /home/ubuntu/playbook/roles/common/tasks/030-docker.yml:1
Using module file /usr/local/lib/python3.10/dist-packages/ansible/modules/apt.py
Pipelining is enabled.
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<localhost> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ghkszxgwpyzhztutyejukfzkeokujykh ; /usr/bin/python3'"'"' && sleep 0'
The full traceback is:
File "/tmp/ansible_apt_payload_n4snw4g0/ansible_apt_payload.zip/ansible/modules/apt.py", line 511, in package_status
File "/usr/lib/python3/dist-packages/apt/cache.py", line 283, in __getitem__
raise KeyError('The cache has no package named %r' % key)
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"allow_change_held_packages": false,
"allow_downgrade": false,
"allow_unauthenticated": false,
"autoclean": false,
"autoremove": false,
"cache_valid_time": 0,
"clean": false,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"fail_on_autoremove": false,
"force": false,
"force_apt_get": false,
"install_recommends": null,
"lock_timeout": 60,
"name": "docker.io",
"only_upgrade": false,
"package": [
"docker.io"
],
"policy_rc_d": null,
"purge": false,
"state": "present",
"update_cache": null,
"update_cache_retries": 5,
"update_cache_retry_max_delay": 12,
"upgrade": null
}
},
"msg": "No package matching 'docker.io' is available"
}
A path to a solution might therefore be to understand why the cache is not ready yet, and to add a command — before running Ansible, or within Ansible itself — that updates the cache properly.
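Within Ansible, that could look like a pre_task that refreshes the cache before any role runs. A minimal sketch (the play structure here is illustrative, not taken from my actual playbook):

```yaml
# Refresh the apt cache once, before any role tries to install packages.
# update_cache: yes is the apt module's equivalent of running `apt-get update`.
- hosts: all
  become: yes
  pre_tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
```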
Additional info
The specific task is:
- name: Install Docker
apt:
name: docker.io
state: present
tags: install
The distribution in use is Ubuntu 22.04.
Commands run before Ansible
Before installing packages via Ansible, I deactivate automatic updating and upgrading, wait until the dpkg lock is released, run apt update, and execute a simple Ansible ping (which runs without issues).
Deactivating automatic updating:
sudo sed -i 's/APT::Periodic::Unattended-Upgrade "1";/APT::Periodic::Unattended-Upgrade "0";/g' /etc/apt/apt.conf.d/20auto-upgrades
Wait till lock is released:
while sudo lsof /var/lib/dpkg/lock 2> /dev/null; do echo "/var/lib/dpkg/lock locked - wait for 10 seconds"; sleep 10; done
Ping:
ansible -i "~/playbook/ansible_hosts" all -m ping
EDIT 3
While testing, I noticed that with a 20-second wait added between the ping and the playbook execution, the error mentioned above does not occur (at least in the test runs I did). So it is probably a timing issue. Any idea what to wait for?
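One candidate, on Ubuntu cloud images, is cloud-init: its first-boot modules can rewrite the apt sources (e.g. to regional mirrors) and refresh the cache, so a playbook started before it finishes may race against it. A hedged sketch of waiting for it from within the playbook (whether cloud-init is actually the cause here is an assumption):

```yaml
# Block until cloud-init has finished its first-boot provisioning.
# `cloud-init status --wait` returns once all cloud-init stages are done.
- name: Wait for cloud-init to finish
  command: cloud-init status --wait
  changed_when: false
```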
While I still don't know why running apt update beforehand is not enough, adding update_cache: 'yes'
fixed it:
- name: Install Docker
apt:
name: docker.io
state: present
update_cache: 'yes'
tags: install
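If refreshing the cache on every run is a concern, the apt module's cache_valid_time parameter (visible in the module_args dump above) can bound it — the update is skipped when the cache is newer than the given age. A variant of the fix above; the 600-second threshold is an arbitrary choice:

```yaml
- name: Install Docker
  apt:
    name: docker.io
    state: present
    update_cache: yes
    cache_valid_time: 600  # skip the update if the cache was refreshed within 10 minutes
  tags: install
```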