Tags: ansible, virtual-machine, inventory

How to deploy a custom VM with ansible and run subsequent steps on the guest VM via the host?


I have a playbook that I run to deploy a guest VM onto my target node. After the guest VM is fired up, it is reachable from the host machine only, not from the rest of the network. After booting, I also need to run some commands on the guest to configure it and make it available to all network members.

---
- block:
  - name: Verify the deploy VM script
    stat: path="{{ deploy_script }}"
    register: deploy_exists
    failed_when: not deploy_exists.stat.exists
    no_log: True

  rescue:
  - name: Copy the deploy script from Ansible
    copy:
      src: "scripts/new-install.pl"
      dest: "/home/orch"
      owner: "{{ my_user }}"
      group: "{{ my_user }}"
      mode: "0750"
      backup: yes
    register: copy_script

- name: Deploy VM
  shell: "{{ deploy_script }}"

<other tasks>

- name: Run something on the guest VM
  shell: my_other_script
  args:
    chdir: /var/scripts/

- name: Other task on guest VM
  shell: uname -r

<and so on>

How can I run those subsequent steps on the guest VM via the host? My only workaround is to populate a new inventory file with the VM's details and use the host as a bastion host.

[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'

However, I want everything to happen in a single playbook rather than splitting it in two.


Solution

  • I have resolved it myself: I dynamically add the new host to the inventory, and use group vars so the newly created hosts go through the VM manager as a bastion host.

    Playbook:

    ---
    - hosts: "{{ vm_manager }}"
      become_method: sudo
      gather_facts: False
    
      vars_files:
        - vars/vars.yml
        - vars/vault.yml
    
      pre_tasks:
    
      - name: do stuff here on the VM manager
        debug: msg="test"
    
      roles:
        - { role: vm_deploy, become: yes, become_user: root }
    
      tasks:
      - name: Dynamically add newly created VM to the inventory
        add_host:
          hostname: "{{ vm_name }}"
          groups: vms
          ansible_ssh_user: "{{ vm_user }}"
          ansible_ssh_pass: "{{ vm_pass }}"
    
    - name: Run the rest of tasks on the VM through the host machine
      hosts: "{{ vm_name }}"
      become: true
      become_user: root
      become_method: sudo
    
      post_tasks:
      - name: My first task on the VM
        include_role:
          name: my_role_for_the_VM
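
    One caveat worth noting (my addition, not part of the original answer): the second play can start before the guest's SSH daemon is ready to accept connections. A `wait_for_connection` task placed first in the second play guards against that; the timeout value here is an arbitrary example:

      tasks:
      - name: Wait for the guest VM to accept SSH connections
        wait_for_connection:
          timeout: 300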
    

    Inventory:

    [vm_manager]
    vm-manager.local
    
    [vms]
    my-test-01
    my-test-02
    
    [vms:vars]
    ansible_connection=ssh 
    ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'
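
    As a side note (my addition, assuming OpenSSH 7.3 or newer on the control machine), the ProxyCommand can be written more compactly with ProxyJump; the two forms are equivalent:

    [vms:vars]
    ansible_connection=ssh
    ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyJump=username@vm-manager.local'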
    

    Run playbook:

    ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name
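
    If the guest should also work without any static `[vms:vars]` section, the same connection settings can be passed straight to `add_host`, since any extra parameter becomes a host variable. This is a sketch of that variant (my addition, reusing the jump host from the inventory above):

    - name: Dynamically add newly created VM to the inventory
      add_host:
        hostname: "{{ vm_name }}"
        groups: vms
        ansible_ssh_user: "{{ vm_user }}"
        ansible_ssh_pass: "{{ vm_pass }}"
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'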