Ansible dynamic inventory for Azure always empty


I'm working on a multi-cloud setup with Ansible, and Azure support is the latest platform addition. The problem is that the Azure dynamic inventory delivers an empty list of hosts. There are no error messages, just an empty host list:

$> ansible-inventory -vvv --list -i europe.azure_rm.yml
ansible-inventory [core 2.15.0]
config file = /home/tony/projects/pool/ansible.cfg
configured module search path = ['/home/tony/projects/pool/library']
ansible python module location = /home/tony/projects/ansible2.15/lib/python3.10/site-packages/ansible
ansible collection location = /usr/local/share/ansible/ansible_collections:/home/tony/.ansible/collections/ansible_collections
executable location = /home/tony/projects/ansible2.15/bin/ansible-inventory
python version = 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (/home/tony/projects/ansible2.15/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/tony/projects/pool/ansible.cfg as config file
redirecting (type: inventory) ansible.builtin.azure_rm to azure.azcollection.azure_rm
Parsed /home/tony/europe.azure_rm.yml inventory source with ansible_collections.azure.azcollection.plugins.inventory.azure_rm plugin
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}

I'm using Azure collection v1.18.1 (the Ansible and Python versions are visible in the output above) and yes, requirements-azure.txt from the Azure collection is installed in the venv. The inventory plugin is enabled in ansible.cfg:

[inventory]
enable_plugins = ini, yaml, azure_rm

The inventory file is simple and the filename complies with the required naming (it ends in azure_rm.yml):

---
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
    - '*'
auth_source: auto

There is just one resource group in Azure, reserved for test purposes, with several VMs created manually in the Azure Portal (not via an Ansible play). A service principal is created and its scope is set only to this subscription/resource group. These credentials work with the Azure CLI - az delivers the full list of VMs without any problems - so I presume the culprit is somewhere on the Ansible side. The only problem is that there are no error messages, so I don't know where to start. I have dug through similar problems from other people here, but none of them helped. The only question that seemed relatively similar described a dynamic inventory that did not deliver VMs created manually, only those created via Ansible and the very same collection (but there is no solution or follow-up there).
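For reference, this is the sanity check I ran with the Azure CLI (the resource group name and all IDs below are placeholders):

az login --service-principal --username "<client-id>" --password "<client-secret>" --tenant "<tenant-id>"
az vm list -g my_test_rg -o table

What I have tried so far on the Ansible side: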

  • Tried different ways of supplying the credentials (auth_source): as a credentials file, as environment variables, and with "cli" and "auto" (examples after this list).
  • Tried specifying the RG name instead of using '*'.
  • Tried putting "AzureCloud" explicitly as cloud_environment.
  • Tried defining how hostnames are formed explicitly.
  • The resource group and VMs do not have special characters; snake case is used.
  • Tried adding a new venv with a different Python version.
  • Tried starting azure_rm.py directly.
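For completeness, the environment-variable variant looked like this (all values are placeholders; AZURE_SECRET is the service principal's client secret):

export AZURE_SUBSCRIPTION_ID="<subscription-id>"
export AZURE_CLIENT_ID="<client-id>"
export AZURE_SECRET="<client-secret>"
export AZURE_TENANT="<tenant-id>"
ansible-inventory -vvv --list -i europe.azure_rm.yml

And the credentials-file variant (~/.azure/credentials):

[default]
subscription_id=<subscription-id>
client_id=<client-id>
secret=<client-secret>
tenant=<tenant-id>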

Nothing helped. There are no error messages, and the returned host list remains empty.

1 Answer

Michael:

I experienced identical behaviour this week, and the problem turned out to be VMs not being powered on. In the Azure Portal they display as "Stopped (Deallocated)". The Azure CLI will return these VMs with az vm list, even when authenticating as the service principal, but ansible-inventory would return nothing.

After testing across a few different subscriptions, it turned out that only VMs in a running state were returned. For example, on a subscription with three VMs in a resource group, one running and the other two stopped, only the running VM was returned by ansible-inventory.
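You can check the power state of each VM from the CLI; -d/--show-details adds the powerState field (the resource group name and --query projection here are just one example):

az vm list -g my_test_rg -d --query "[].{name:name, powerState:powerState}" -o table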

I am still searching for how to return stopped/deallocated VMs.

Edit: found it; I'm not sure how I missed this in the inventory plugin documentation.

https://docs.ansible.com/ansible/latest/collections/azure/azcollection/azure_rm_inventory.html#parameter-default_host_filters

So if your problem is indeed caused by VMs in a powered-off state, add the following to your azure_rm.yml:

default_host_filters: ["provisioning_state != 'succeeded'"]

This overrides the plugin's default filters, which also exclude any VM whose powerstate is not 'running'; keeping only the provisioning-state filter lets powered-off VMs be returned.
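Applied to the inventory file from the question, the result would look something like this (a sketch, not tested against that subscription):

---
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
    - '*'
auth_source: auto
# Keep only the provisioning-state filter; omitting the default
# powerstate filter lets stopped/deallocated VMs through.
default_host_filters: ["provisioning_state != 'succeeded'"]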