Using Ansible to Create, Delete and List Tenants

Prepare the Environment

In this demonstration I am using Cygwin to run the Python/Ansible scripts in a Windows environment. All my attempts to install Ansible via 'pip install' on Windows failed, and the number of Stack Overflow posts reporting this problem suggests it is a common issue.

Handy hint: To cd to the Windows C: drive in Cygwin, use:

cd /cygdrive/c

To use Ansible with the APIC sandbox host, you will first need to create a few configuration files before you can start writing your playbooks.

Verify that the installed Ansible version is 2.8.x and that it is using Python 3 (run ansible --version to check).

Create the ansible.cfg file

This file tells Ansible that the inventory file is called hosts (unless overridden), disables gathering of facts by default, and disables the saving of retry files when playbooks stop with errors:

ansible.cfg

[defaults]

# Use local hosts file
inventory = hosts

# Disable automatic facts gathering.
gathering = explicit

# Do not create retry files when tasks fail. Comment this if you need
# the default behaviour.
retry_files_enabled = False

hosts

Open the hosts file and verify that the login credentials for Cisco APIC (sandbox version) are correct.

[all:vars]
# This is used to load the data model that feeds into
# all of the playbooks in this collection from the local
# file, i.e. vars/SNV_Retail.yml
customer_name=SNV_Retail

# The ACI Controllers
[aci]
apic ansible_host=sandboxapicdc.cisco.com

[aci:vars]
ansible_port=443
# Login credentials
ansible_user=admin
ansible_password=!v3G@!4@Y
# Set to true for production setups that use trusted certificates!
validate_certs=false

Display the current list of Tenants

tenants.yml

Create a new file named tenants.yml. In this playbook, write a play with two tasks:
1. Use the aci_tenant module to query the list of tenants from the lab APIC. The state parameter 'query' tells the module what operation to perform on the fvTenant class.
2. Use the debug module to print text to the terminal, together with the json_query filter 'current[].fvTenant.attributes.name' to extract a list of tenant names from the nested data returned by the API.

json_query is an Ansible Filter (https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#json-query-filter) that allows you to pull only certain subsets of data out of the complex nested structures. In this case, it queries for a list containing only attributes.name key values from all fvTenant dictionaries.
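To make the extraction concrete, here is a plain-Python sketch of what that JMESPath expression does. The sample data is only an illustration of the shape of the registered aci_tenant output (the real names come from the APIC):

```python
# Illustrative only: a plain-Python equivalent of the JMESPath expression
# 'current[].fvTenant.attributes.name', run against sample data shaped
# like the output registered by the aci_tenant query task.
sample = {
    "current": [
        {"fvTenant": {"attributes": {"name": "common"}}},
        {"fvTenant": {"attributes": {"name": "mgmt"}}},
        {"fvTenant": {"attributes": {"name": "infra"}}},
    ]
}

# Walk the list under "current" and pull out each attributes.name value.
tenant_names = [
    mo["fvTenant"]["attributes"]["name"]
    for mo in sample["current"]
    if "fvTenant" in mo
]
print(tenant_names)  # ['common', 'mgmt', 'infra']
```

The json_query filter saves you from writing this traversal by hand inside the playbook.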

---
- name: LIST TENANTS
  hosts: apic

  tasks:

    - name: List All Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        state: query
      delegate_to: localhost
      register: apic_tenants

    - name: Print Tenant List
      debug:
        msg: "{{ apic_tenants | json_query('current[].fvTenant.attributes.name') }}"

Run this playbook in verbose mode:

ansible-playbook -v tenants.yml

This gives the following output containing the JSON representation of the set of tenants:

Create and list multiple Tenants

Here we create a new playbook file that extends the tenants.yml playbook to create multiple tenants. Declare a vars section below hosts with a variable called tenants_list, containing the list of names ranging from Customer_01 to Customer_05.

Add a new task at the end of the playbook that executes the aci_tenant module for each tenant name in the tenants_list variable to create the tenants on the Cisco APIC.

createAndListTenants.yml

---
- name: CREATE AND LIST TENANTS
  hosts: apic

  vars:
    tenants_list:
      - Customer_01
      - Customer_02
      - Customer_03
      - Customer_04
      - Customer_05

  tasks:
    - name: List All Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        state: query
      delegate_to: localhost
      register: apic_tenants

    - name: Print Tenant List
      debug:
        msg: "{{ apic_tenants | json_query('current[].fvTenant.attributes.name') }}"

    - name: Create Customer_0X Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        tenant: "{{ item }}"
        state: present
      delegate_to: localhost
      loop: "{{ tenants_list }}"

    - name: List All Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        state: query
      delegate_to: localhost
      register: apic_tenants

    - name: Print Tenant List
      debug:
        msg: "{{ apic_tenants | json_query('current[].fvTenant.attributes.name') }}"

On running ansible-playbook createAndListTenants.yml we get the following output:

We can then confirm the tenant list in the APIC GUI:

On running the createAndListTenants.yml playbook again, note that the Create Customer_0X Tenants task reports ok instead of changed, since all five tenants already exist on the fabric. This verifies that the module is idempotent (the outcome of the operation is the same regardless of whether it is executed once or many times).

Delete a Tenant

Write a playbook named deleteTenant.yml with a task that deletes the Customer_05 tenant. The state parameter is now set to absent, which tells the aci_tenant module to delete the tenant from the Cisco APIC.

Add a new task at the end of the playbook that queries for the list of tenants and saves it to the apic_tenants variable. You can copy the task from the tenants.yml file.

This time, instead of simply printing the list of tenants using the debug module, add a task that uses the set_fact module to save the list of tenants to a new variable called apic_tenant_list. Then add another task, Print Tenant List, that prints the apic_tenant_list variable using the debug module.

Finally, check that the Customer_05 tenant has indeed been deleted. Add a task that uses the assert module to check that Customer_05 is not found in the apic_tenant_list variable.

deleteTenant.yml

---
- name: DELETE AND LIST TENANTS
  hosts: apic

  tasks:
    - name: Delete the Customer_05 Tenant
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        tenant: "Customer_05"
        state: absent
      delegate_to: localhost

    - name: List All Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        state: query
      delegate_to: localhost
      register: apic_tenants

    - name: Build Actual Tenant List
      set_fact:
        apic_tenant_list: "{{ apic_tenants | json_query('current[].fvTenant.attributes.name') }}"

    - name: Print Tenant List
      debug:
        var: apic_tenant_list

    - name: Check that Customer_05 has been deleted
      assert:
        that: not 'Customer_05' in apic_tenant_list
        fail_msg: "Customer_05 tenant exists on the APIC!"
        success_msg: "Customer_05 tenant does not exist on the APIC."

Run the deleteTenant.yml playbook from the integrated terminal and confirm in the Cisco APIC Sandbox GUI that the Customer_05 tenant has been deleted.

Delete a Tenant using REST

Modify the first task (Delete the Customer_05 Tenant) to use the aci_rest module and delete the Customer_04 tenant.

The aci_rest module requires connection details for the Cisco APIC. Because it is a generic REST API call module, you must give it full details about what it should do, much as you would when sending HTTP requests with Postman or the Python requests library.
In this case, it is fairly simple: the method is delete (one of the three supported HTTP methods), and the path uniquely identifies the Customer_04 tenant via its Distinguished Name uni/tn-Customer_04, giving /api/mo/uni/tn-Customer_04.json.
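The mapping from Distinguished Name to REST path is mechanical, as this small sketch shows (tenant_dn_path is a hypothetical helper name, not part of any Ansible module):

```python
# Hypothetical helper: build the aci_rest path for a tenant from its
# Distinguished Name (uni/tn-<name>), prefixed with /api/mo and suffixed
# with .json for a JSON payload.
def tenant_dn_path(tenant_name: str) -> str:
    return f"/api/mo/uni/tn-{tenant_name}.json"

print(tenant_dn_path("Customer_04"))  # /api/mo/uni/tn-Customer_04.json
```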

deleteTenantRest.yml

---
- name: DELETE AND LIST TENANTS
  hosts: apic

  tasks:
    - name: Delete the Customer_04 Tenant
      aci_rest:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        path: "/api/mo/uni/tn-Customer_04.json"
        method: delete
      delegate_to: localhost

    - name: List All Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        state: query
      delegate_to: localhost
      register: apic_tenants

    - name: Build Actual Tenant List
      set_fact:
        apic_tenant_list: "{{ apic_tenants | json_query('current[].fvTenant.attributes.name') }}"

    - name: Print Tenant List
      debug:
        var: apic_tenant_list

    - name: Check that Customer_04 has been deleted
      assert:
        that: not 'Customer_04' in apic_tenant_list
        fail_msg: "Customer_04 tenant exists on the APIC!"
        success_msg: "Customer_04 tenant does not exist on the APIC."

Save and run the deleteTenantRest.yml playbook and confirm that Customer_04 tenant has been deleted:

Create a Tenant with aci_rest

The aci_rest module supports querying (GET), creating/modifying (POST), and deleting (DELETE) operations. Create the createAndListTenantsRest.yml playbook to ensure that the five tenants are created, using aci_rest instead of aci_tenant.

Given that the aci_rest module does not know what objects it is manipulating, you must identify the API path, provide an operation type (method), and provide a payload if necessary.

createAndListTenantsRest.yml

---
- name: CREATE AND LIST TENANTS
  hosts: apic

  vars:
    tenants_list:
      - Customer_01
      - Customer_02
      - Customer_03
      - Customer_04
      - Customer_05

  tasks:
    - name: List All Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        state: query
      delegate_to: localhost
      register: apic_tenants

    - name: Print Tenant List
      debug:
        msg: "{{ apic_tenants | json_query('current[].fvTenant.attributes.name') }}"

    - name: Create Customer_0X Tenants
      aci_rest:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        path: /api/mo/uni.json
        method: post
        content:
          fvTenant:
            attributes:
              name: "{{ item }}"
              descr: "{{ item }} - Tenant managed by Ansible."
      delegate_to: localhost
      loop: "{{ tenants_list }}"


    - name: List All Tenants
      aci_tenant:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        validate_certs: "{{ validate_certs }}"

        state: query
      delegate_to: localhost
      register: apic_tenants

    - name: Print Tenant List
      debug:
        msg: "{{ apic_tenants | json_query('current[].fvTenant.attributes.name') }}"

We see that the first task lists only three Customer tenants, since Customer_04 and Customer_05 were deleted earlier, and the final task then lists all five after they have been re-created using REST calls rather than the ACI modules:

Setting up new Tenants the NetDevOps way

Create a file named SNV_Retail.yml. This contains all the data (dictionaries, lists, and so on) that we will need in our playbooks. It defines each bridge domain and the details needed to create it on the APIC, as well as the bridge domain subnets that you will use with a separate module after the bridge domain is created.

This file also provides the domain type and provider fields required by the aci_epg_to_domain module. It is built as a dictionary lookup based on the domain name defined inside the EPG model. In this case, all EPGs are connected via the vCenter_VMM domain.

This file also defines the contract details needed to create the contracts on the APIC. Each contract can use one or more different filters.
The data model defines three contracts:

  • WebServices_CON using web_filter and icmp
  • EmailServices_CON using email_filter and icmp
  • StorageServices_CON using storage_filter and icmp

You may notice that the icmp filter was not defined previously. That is because it already exists on the ACI fabric: it is a commonly used filter, so you do not need to create a new one for the same function.

SNV_Retail.yml

    ---
    # The top-level policy container, this tenant will contain
    # all of the other objects.
    tenant:
      name: "SNV_Retail"
      description: "SNV_Retail Hosted Customer Services - Managed by Ansible"
      
    # VRFs must have a name and belong to their parent tenant.
      vrfs:
        - name: "UserServices_VRF"
          description: "Managed by Ansible"
      
    
    # Bridge Domains must have a name, belong to their parent tenant,
    # and are linked to a specific VRF. They may also include one or more
    # subnet definitions.
      bridge_domains:
        - name: "Services_BD"
          description: "Managed by Ansible"
          vrf: "UserServices_VRF"
          subnet: "10.0.1.254/24"
    
        - name: "Services_BD"
          description: "Managed by Ansible"
          vrf: "UserServices_VRF"
          subnet: "10.0.2.254/24"
    
        - name: "Services_BD"
          description: "Managed by Ansible"
          vrf: "UserServices_VRF"
          subnet: "10.0.3.254/24"
    
        - name: "Users_BD"
          description: "Managed by Ansible"
          vrf: "UserServices_VRF"
          subnet: "10.0.4.254/24"
    
    
    # Application Profiles belong to their parent tenant and serve
    # as policy containers for EPGs and their relationships.
      apps:
        - name: "UserServices_APP"
          description: "Managed by Ansible"
    
    # Endpoint Groups define Endpoint related policy (domain, BD) and allow for
    # contract bindings to implement security policies.
      epgs:
        - name: "Web_EPG"
          description: "Managed by Ansible"
          ap: "UserServices_APP"
          bd: "Services_BD"
          domain: "vCenter_VMM"
    
        - name: "Email_EPG"
          description: "Managed by Ansible"
          ap: "UserServices_APP"
          bd: "Services_BD"
          domain: "vCenter_VMM"
    
        - name: "Storage_EPG"
          description: "Managed by Ansible"
          ap: "UserServices_APP"
          bd: "Services_BD"
          domain: "vCenter_VMM"
    
        - name: "Users_EPG"
          description: "Managed by Ansible"
          ap: "UserServices_APP"
          bd: "Users_BD"
          domain: "vCenter_VMM"
    
    # Filters define stateless traffic flows.
      filters:
        - name: "web_filter"
          description: "Managed by Ansible"
          entry: "http"
          ethertype: "ip"
          ip_protocol: "tcp"
          destination_from: "80"
          destination_to: "80"
      
        - name: "web_filter"
          description: "Managed by Ansible"
          entry: "https"
          ethertype: "ip"
          ip_protocol: "tcp"
          destination_from: "443"
          destination_to: "443"
      
        - name: "email_filter"
          description: "Managed by Ansible"
          entry: "smtp"
          ethertype: "ip"
          ip_protocol: "tcp"
          destination_from: "25"
          destination_to: "25"
      
        - name: "email_filter"
          description: "Managed by Ansible"
          entry: "smtps"
          ethertype: "ip"
          ip_protocol: "tcp"
          destination_from: "587"
          destination_to: "587"
      
        - name: "email_filter"
          description: "Managed by Ansible"
          entry: "imaps"
          ethertype: "ip"
          ip_protocol: "tcp"
          destination_from: "993"
          destination_to: "993"
      
        - name: "storage_filter"
          description: "Managed by Ansible"
          entry: "pgsql"
          ethertype: "ip"
          ip_protocol: "tcp"
          destination_from: "5432"
          destination_to: "5432"
          
      # Contracts define security and connectivity policies that
      # implement specific filters.
      contracts:
        - name: "WebServices_CON"
          filter: "web_filter"
          description: "Managed by Ansible"
      
        - name: "WebServices_CON"
          filter: "icmp"
          description: "Managed by Ansible"
      
        - name: "EmailServices_CON"
          filter: "email_filter"
          description: "Managed by Ansible"
      
        - name: "EmailServices_CON"
          filter: "icmp"
          description: "Managed by Ansible"
      
        - name: "StorageServices_CON"
          filter: "storage_filter"
          description: "Managed by Ansible"
      
        - name: "StorageServices_CON"
          filter: "icmp"
          description: "Managed by Ansible"
        
    # EPGs can be providers and/or consumers for specific contracts, opening
    # up the traffic flow as per the filter definitions.
      contract_bindings:
        # Users_EPG -> Web_EPG (HTTP, HTTPS)
        - epg: "Users_EPG"
          ap: "UserServices_APP"
          contract: "WebServices_CON"
          type: "consumer"
      
        - epg: "Web_EPG"
          ap: "UserServices_APP"
          contract: "WebServices_CON"
          type: "provider"
      
        # Users_EPG -> Email_EPG (25, 587, 993)
        - epg: "Users_EPG"
          ap: "UserServices_APP"
          contract: "EmailServices_CON"
          type: "consumer"
      
        - epg: "Email_EPG"
          ap: "UserServices_APP"
          contract: "EmailServices_CON"
          type: "provider"
      
        # Web_EPG -> Storage_EPG (5432)
        # Email_EPG -> Storage_EPG (5432)
        - epg: "Web_EPG"
          ap: "UserServices_APP"
          contract: "StorageServices_CON"
          type: "consumer"
      
        - epg: "Email_EPG"
          ap: "UserServices_APP"
          contract: "StorageServices_CON"
          type: "consumer"
      
        - epg: "Storage_EPG"
          ap: "UserServices_APP"
          contract: "StorageServices_CON"
          type: "provider"
          
    domains:
      vCenter_VMM:
        domain_type: "vmm"
        vm_provider: "vmware"
    

Create the playbook 01_tenant_infra.yml. Each playbook in this collection runs the same play, named PRE-DEPLOYMENT SETUP AND VALIDATION, at the beginning.
This ensures that all the necessary input variables are provided whether a playbook is run independently or as part of a sequence.
The first task simply asserts that the variables are defined; it is up to the user to ensure that they contain valid values.
The second task (Load Infrastructure Definition) looks for the tenant data model YAML file and loads its contents into the Ansible host vars dictionary created at run-time.

This playbook uses the data model to create a new tenant on the APIC, and then creates child objects such as VRFs, bridge domains, and subnets.

Note that the plays subsequent to PRE-DEPLOYMENT SETUP AND VALIDATION (CREATE TENANT INFRASTRUCTURE and so on) are indented at the same level.

The Create Tenant task creates the tenant using the aci_tenant module. Note that it uses the tenant.name and tenant.description variables from the data model that the first play in this playbook has loaded.

The task that creates the VRF is indented at the same level as the task that created the tenant. It uses a loop that takes a list of VRFs as input. In the SNV_Retail.yml file, the vrfs key contains a list (denoted by the hyphens in YAML), although in this case it has only one element.
The aci_vrf module requires a name for the VRF object and the tenant to which it belongs. Since you are in a loop, the name is found under item.name.

The task that creates the bridge domains is indented in the same way. Each bridge domain belongs to the tenant object, has a name and a description, and a relationship with a VRF.
Use the aci_bd module and a loop that iterates over the tenant.bridge_domains list to set the bridge domain name, description, and VRF.

To create the bridge domain subnets, note that the data model provides each subnet as a single value, for example "10.0.1.254/24". The aci_bd_subnet module, however, takes two parameters, gateway and mask. To provide them, you need to split the string on the / character to extract 10.0.1.254 for the gateway and 24 for the mask.
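The Jinja2 expressions in the playbook, "{{ item.subnet.split('/') | first }}" and "{{ item.subnet.split('/') | last }}", correspond to the following plain-Python split:

```python
# Split a CIDR-style subnet string into the gateway address and mask
# length, as the aci_bd_subnet task does with Jinja2 first/last filters.
subnet = "10.0.1.254/24"
gateway, mask = subnet.split("/")
print(gateway)  # 10.0.1.254
print(mask)     # 24
```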

01_tenant_infra.yml

    ---
    
    - name: PRE-DEPLOYMENT SETUP AND VALIDATION
      hosts: apic
    
      tasks:
    
        # All of these should be defined:
        # ansible_host, ansible_port, ansible_user, ansible_password, validate_certs
        # customer_name
        - name: Test that connection details are defined
          assert:
            that:
              - "ansible_host is defined"
              - "ansible_port is defined"
              - "ansible_user is defined"
              - "ansible_password is defined"
              - "validate_certs is defined"
              - "customer_name is defined"
            fail_msg: "Please ensure that these variables exist: ansible_host,
              ansible_port, ansible_user, ansible_password, validate_certs
              and customer_name!"
            quiet: true
    
        # These variables represent the data model and are used by
        # the rest of the playbook to deploy the policy.
        - name: Load Infrastructure Definition
          include_vars:
            file: "{{ customer_name }}.yml"
    
    
    - name: CREATE TENANT INFRASTRUCTURE
      hosts: apic
    
      tasks:
    
        - name: Create Tenant
          aci_tenant:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            description: "{{ tenant.description }}"
            state: present
          delegate_to: localhost
    
        - name: Create VRF
          aci_vrf:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            vrf: "{{ item.name }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.vrfs }}"
    
        - name: Create Bridge Domains
          aci_bd:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            vrf: "{{ item.vrf }}"
            bd: "{{ item.name }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.bridge_domains }}"
    
        - name: Create Bridge Domain Subnets
          aci_bd_subnet:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            bd: "{{ item.name }}"
            gateway: "{{ item.subnet.split('/') | first }}"
            mask: "{{ item.subnet.split('/') | last }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.bridge_domains }}"
    

The second playbook uses the data model to create a new Application Profile belonging to the SNV_Retail tenant on the APIC, and then creates the four Endpoint Groups (EPGs) for servers to use. Use the following diagram for reference when building ACI policy from the data model.

02_epgs.yml

The task that creates an Application Profile uses a loop statement that iterates over the tenant.apps list and sets the ap and description for the current tenant.
An application profile belongs to a particular tenant and has a name (its unique identifier) and a description.

To create the EPGs, add a loop statement that iterates over the tenant.epgs list and sets the bd, ap, epg, and description for the current tenant.

    ---
    
    - name: PRE-DEPLOYMENT SETUP AND VALIDATION
      hosts: apic
    
      tasks:
    
        # All of these should be defined:
        # host_vars: ansible_host, ansible_port, ansible_user, ansible_password, validate_certs
        # group_vars/all: customer_name
        - name: Test that connection details are defined
          assert:
            that:
              - "ansible_host is defined"
              - "ansible_port is defined"
              - "ansible_user is defined"
              - "ansible_password is defined"
              - "validate_certs is defined"
              - "customer_name is defined"
            fail_msg: "Please ensure that these variables exist: ansible_host,
              ansible_port, ansible_user, ansible_password, validate_certs
              and customer_name!"
            quiet: true
    
        # These variables represent the data model and are used by
        # the rest of the playbook to deploy the policy.
        - name: Load Infrastructure Definition
          include_vars:
            file: "{{ customer_name }}.yml"
    
    
    - name: CREATE APPLICATION PROFILES AND EPGS
      hosts: apic
    
      tasks:
    
        - name: Create Application Profile
          aci_ap:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            ap: "{{ item.name }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.apps }}"
    
        - name: Create EPG
          aci_epg:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            bd: "{{ item.bd }}"
            ap: "{{ item.ap }}"
            epg: "{{ item.name }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.epgs }}"
    
        - name: Add a new physical domain to EPG binding
          aci_epg_to_domain:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            ap: "{{ item.ap }}"
            epg: "{{ item.name }}"
            domain: "{{ item.domain }}"
            domain_type: "{{ domains[item.domain].domain_type }}"
            vm_provider: "{{ domains[item.domain].vm_provider }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.epgs }}"
    

Create the Filters and the Contracts Between EPGs

The next playbook uses the data model to create new ACI filters (stateless ACL entries) for traffic such as Web (HTTP, HTTPS), Email (SMTP, SMTPS, IMAPS), and Database (PGSQL).
These filters are used to define contracts, an ACI policy construct that provisions connectivity and security policies between Endpoint Groups.
The contracts are bound to existing EPGs as consumers or providers to actually apply the connectivity configuration on the ACI fabric.

03_contracts.yml

The filter entries are very similar to access list lines in that they match specific values in the Layer 2-4 headers.
The filter named web_filter has two entries: the first matches http (ip tcp port 80) and the second https (ip tcp port 443). Note that the names http and https are simply labels you choose yourself; it is the ethertype, ip_protocol, and port numbers that identify the traffic.

First, use the aci_filter module to create the filter. This is a container for filter entries, and it belongs to a specific tenant.
Then use the aci_filter_entry module to create each filter entry as specified in the data model (reviewed in the previous step).
Note how each entry belongs to a particular tenant and filter (the parent objects) and uses all the required parameters to identify the traffic.

Use the aci_contract module to create the contract. This is a container for contract subjects, and it belongs to a specific tenant.
Then use the aci_contract_subject module to create each contract subject as specified in the data model.
While you can normally have multiple subjects under the same contract, this model design assumes only one and creates it by appending the -SUB string to the contract name.
Finally, the aci_contract_subject_to_filter module is used to add the various filters to the contract subjects just created.
As there are multiple filters per contract, a loop is needed to add them one by one.

Once all the filters and contracts have been created, add the EPGs to the contracts as either providers or consumers.
This task loops over the list of contract bindings seen in the data model above and uses the aci_epg_to_contract module to create the relationship.

    ---
    
    - name: PRE-DEPLOYMENT SETUP AND VALIDATION
      hosts: apic
    
      tasks:
    
        # All of these should be defined:
        # host_vars: ansible_host, ansible_port,
        #            ansible_user, ansible_password, validate_certs
        # group_vars/all: customer_name
        - name: Test that connection details are defined
          assert:
            that:
              - "ansible_host is defined"
              - "ansible_port is defined"
              - "ansible_user is defined"
              - "ansible_password is defined"
              - "validate_certs is defined"
              - "customer_name is defined"
            fail_msg: "Please ensure that these variables exist: ansible_host,
              ansible_port, ansible_user, ansible_password, validate_certs
              and customer_name!"
            quiet: true
    
        # These variables represent the data model and are used by
        # the rest of the playbook to deploy the policy.
        - name: Load Infrastructure Definition
          include_vars:
            file: "{{ customer_name }}.yml"
    
    - name: CREATE FILTERS AND FILTER ENTRIES
      hosts: apic
    
      tasks:
        - name: Create Filter
          aci_filter:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            filter: "{{ item.name }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.filters }}"
    
        - name: Create Filter Entry
          aci_filter_entry:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            filter: "{{ item.name }}"
            entry: "{{ item.entry }}"
            ether_type: "{{ item.ethertype }}"
            ip_protocol: "{{ item.ip_protocol }}"
            dst_port_start: "{{ item.destination_from }}"
            dst_port_end: "{{ item.destination_to }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.filters }}"
    
    - name: CREATE CONTRACTS AND CONTRACT SUBJECTS
      hosts: apic
    
      tasks:
        - name: Create Contract
          aci_contract:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            contract: "{{ item.name }}"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.contracts }}"
    
        - name: Create Contract Subject
          aci_contract_subject:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            contract: "{{ item.name }}"
            subject: "{{ item.name }}-SUB"
            description: "{{ item.description }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.contracts }}"
    
        - name: Add Filter to Contract Subject
          aci_contract_subject_to_filter:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            contract: "{{ item.name }}"
            subject: "{{ item.name }}-SUB"
            filter: "{{ item.filter }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.contracts }}"
    
    
    - name: BIND CONTRACTS TO EPGS
      hosts: apic
    
      tasks:
        - name: Add Contract to EPG
          aci_epg_to_contract:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
    
            tenant: "{{ tenant.name }}"
            ap: "{{ item.ap }}"
            epg: "{{ item.epg }}"
            contract: "{{ item.contract }}"
            contract_type: "{{ item.type }}"
            state: present
          delegate_to: localhost
          loop: "{{ tenant.contract_bindings }}"
    

    The 00_master.yml playbook simply aggregates all three playbooks in order, starting with the tenant infrastructure, then EPGs, and finally contracts.

    00_master.yml

    ---
    # All playbooks imported here are designed to also execute independently.
    # Note: The workflow is linear and each playbook depends on the policy
    # objects created by the playbooks before it.
    
    ##### Step 1: Create a Tenant and its VRFs and Bridge Domains.
    - name: PROVISION TENANT INFRASTRUCTURE
      import_playbook: 01_tenant_infra.yml
    
    ##### Step 2: Create Application Profiles and Endpoint Groups.
    - name: PROVISION APPLICATION PROFILES AND EPGS
      import_playbook: 02_epgs.yml
    
    ##### Step 3: Create and apply the Security Policy (Contracts).
    - name: PROVISION SECURITY POLICY
      import_playbook: 03_contracts.yml
    

    Run the command ‘ansible-playbook 00_master.yml’; it should produce output similar to the following:

    $ ansible-playbook 00_master.yml
    
    PLAY [PRE-DEPLOYMENT SETUP AND VALIDATION] ***********************************************************************************************************************
    
    TASK [Test that connection details are defined] ******************************************************************************************************************
    ok: [apic]
    
    TASK [Load Infrastructure Definition] ****************************************************************************************************************************
    ok: [apic]
    
    PLAY [CREATE TENANT INFRASTRUCTURE] ******************************************************************************************************************************
    
    TASK [Create Tenant] *********************************************************************************************************************************************
    changed: [apic -> localhost]
    
    TASK [Create VRF] ************************************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'UserServices_VRF', 'description': 'Managed by Ansible'})
    
    TASK [Create Bridge Domains] *************************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'Services_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.1.254/24'})
    ok: [apic -> localhost] => (item={'name': 'Services_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.2.254/24'})
    ok: [apic -> localhost] => (item={'name': 'Services_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.3.254/24'})
    changed: [apic -> localhost] => (item={'name': 'Users_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.4.254/24'})
    
    TASK [Create Bridge Domain Subnets] ******************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'Services_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.1.254/24'})
    changed: [apic -> localhost] => (item={'name': 'Services_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.2.254/24'})
    changed: [apic -> localhost] => (item={'name': 'Services_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.3.254/24'})
    changed: [apic -> localhost] => (item={'name': 'Users_BD', 'description': 'Managed by Ansible', 'vrf': 'UserServices_VRF', 'subnet': '10.0.4.254/24'})
    
    PLAY [PRE-DEPLOYMENT SETUP AND VALIDATION] ***********************************************************************************************************************
    
    TASK [Test that connection details are defined] ******************************************************************************************************************
    ok: [apic]
    
    TASK [Load Infrastructure Definition] ****************************************************************************************************************************
    ok: [apic]
    
    PLAY [CREATE APPLICATION PROFILES AND EPGS] **********************************************************************************************************************
    
    TASK [Create Application Profile] ********************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'UserServices_APP', 'description': 'Managed by Ansible'})
    
    TASK [Create EPG] ************************************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'Web_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Services_BD', 'domain': 'vCenter_VMM'})
    changed: [apic -> localhost] => (item={'name': 'Email_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Services_BD', 'domain': 'vCenter_VMM'})
    changed: [apic -> localhost] => (item={'name': 'Storage_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Services_BD', 'domain': 'vCenter_VMM'})
    changed: [apic -> localhost] => (item={'name': 'Users_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Users_BD', 'domain': 'vCenter_VMM'})
    
    TASK [Add a new physical domain to EPG binding] ******************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'Web_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Services_BD', 'domain': 'vCenter_VMM'})
    changed: [apic -> localhost] => (item={'name': 'Email_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Services_BD', 'domain': 'vCenter_VMM'})
    changed: [apic -> localhost] => (item={'name': 'Storage_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Services_BD', 'domain': 'vCenter_VMM'})
    changed: [apic -> localhost] => (item={'name': 'Users_EPG', 'description': 'Managed by Ansible', 'ap': 'UserServices_APP', 'bd': 'Users_BD', 'domain': 'vCenter_VMM'})
    
    PLAY [PRE-DEPLOYMENT SETUP AND VALIDATION] ***********************************************************************************************************************
    
    TASK [Test that connection details are defined] ******************************************************************************************************************
    ok: [apic]
    
    TASK [Load Infrastructure Definition] ****************************************************************************************************************************
    ok: [apic]
    
    PLAY [CREATE FILTERS AND FILTER ENTRIES] *************************************************************************************************************************
    
    TASK [Create Filter] *********************************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'web_filter', 'description': 'Managed by Ansible', 'entry': 'http', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '80', 'destination_to': '80'})
    ok: [apic -> localhost] => (item={'name': 'web_filter', 'description': 'Managed by Ansible', 'entry': 'https', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '443', 'destination_to': '443'})
    changed: [apic -> localhost] => (item={'name': 'email_filter', 'description': 'Managed by Ansible', 'entry': 'smtp', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '25', 'destination_to': '25'})
    ok: [apic -> localhost] => (item={'name': 'email_filter', 'description': 'Managed by Ansible', 'entry': 'smtps', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '587', 'destination_to': '587'})
    ok: [apic -> localhost] => (item={'name': 'email_filter', 'description': 'Managed by Ansible', 'entry': 'imaps', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '993', 'destination_to': '993'})
    changed: [apic -> localhost] => (item={'name': 'storage_filter', 'description': 'Managed by Ansible', 'entry': 'pgsql', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '5432', 'destination_to': '5432'})
    
    TASK [Create Filter Entry] ***************************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'web_filter', 'description': 'Managed by Ansible', 'entry': 'http', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '80', 'destination_to': '80'})
    changed: [apic -> localhost] => (item={'name': 'web_filter', 'description': 'Managed by Ansible', 'entry': 'https', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '443', 'destination_to': '443'})
    changed: [apic -> localhost] => (item={'name': 'email_filter', 'description': 'Managed by Ansible', 'entry': 'smtp', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '25', 'destination_to': '25'})
    changed: [apic -> localhost] => (item={'name': 'email_filter', 'description': 'Managed by Ansible', 'entry': 'smtps', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '587', 'destination_to': '587'})
    changed: [apic -> localhost] => (item={'name': 'email_filter', 'description': 'Managed by Ansible', 'entry': 'imaps', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '993', 'destination_to': '993'})
    changed: [apic -> localhost] => (item={'name': 'storage_filter', 'description': 'Managed by Ansible', 'entry': 'pgsql', 'ethertype': 'ip', 'ip_protocol': 'tcp', 'destination_from': '5432', 'destination_to': '5432'})
    
    PLAY [CREATE CONTRACTS AND CONTRACT SUBJECTS] ********************************************************************************************************************
    
    TASK [Create Contract] *******************************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'WebServices_CON', 'filter': 'web_filter', 'description': 'Managed by Ansible'})
    ok: [apic -> localhost] => (item={'name': 'WebServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'EmailServices_CON', 'filter': 'email_filter', 'description': 'Managed by Ansible'})
    ok: [apic -> localhost] => (item={'name': 'EmailServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'StorageServices_CON', 'filter': 'storage_filter', 'description': 'Managed by Ansible'})
    ok: [apic -> localhost] => (item={'name': 'StorageServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    
    TASK [Create Contract Subject] ***********************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'WebServices_CON', 'filter': 'web_filter', 'description': 'Managed by Ansible'})
    ok: [apic -> localhost] => (item={'name': 'WebServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'EmailServices_CON', 'filter': 'email_filter', 'description': 'Managed by Ansible'})
    ok: [apic -> localhost] => (item={'name': 'EmailServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'StorageServices_CON', 'filter': 'storage_filter', 'description': 'Managed by Ansible'})
    ok: [apic -> localhost] => (item={'name': 'StorageServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    
    TASK [Add Filter to Contract Subject] ****************************************************************************************************************************
    changed: [apic -> localhost] => (item={'name': 'WebServices_CON', 'filter': 'web_filter', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'WebServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'EmailServices_CON', 'filter': 'email_filter', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'EmailServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'StorageServices_CON', 'filter': 'storage_filter', 'description': 'Managed by Ansible'})
    changed: [apic -> localhost] => (item={'name': 'StorageServices_CON', 'filter': 'icmp', 'description': 'Managed by Ansible'})
    
    PLAY [BIND CONTRACTS TO EPGS] ************************************************************************************************************************************
    
    TASK [Add Contract to EPG] ***************************************************************************************************************************************
    changed: [apic -> localhost] => (item={'epg': 'Users_EPG', 'ap': 'UserServices_APP', 'contract': 'WebServices_CON', 'type': 'consumer'})
    changed: [apic -> localhost] => (item={'epg': 'Web_EPG', 'ap': 'UserServices_APP', 'contract': 'WebServices_CON', 'type': 'provider'})
    changed: [apic -> localhost] => (item={'epg': 'Users_EPG', 'ap': 'UserServices_APP', 'contract': 'EmailServices_CON', 'type': 'consumer'})
    changed: [apic -> localhost] => (item={'epg': 'Email_EPG', 'ap': 'UserServices_APP', 'contract': 'EmailServices_CON', 'type': 'provider'})
    changed: [apic -> localhost] => (item={'epg': 'Web_EPG', 'ap': 'UserServices_APP', 'contract': 'StorageServices_CON', 'type': 'consumer'})
    changed: [apic -> localhost] => (item={'epg': 'Email_EPG', 'ap': 'UserServices_APP', 'contract': 'StorageServices_CON', 'type': 'consumer'})
    changed: [apic -> localhost] => (item={'epg': 'Storage_EPG', 'ap': 'UserServices_APP', 'contract': 'StorageServices_CON', 'type': 'provider'})
    
    PLAY RECAP *******************************************************************************************************************************************************
    apic                       : ok=19   changed=13   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    

    Using the APIC Sandbox GUI, we can verify that the tenant SNV_Retail has the expected EPG-to-contract relationships under the application profile's Topology tab:

    Generating a health report

    System health scores can be verified manually via the sandbox version of the Cisco APIC GUI by navigating to System > Dashboard.
    In the Tenants with Health ≤ section, move the slider to 100 to see the Tenants’ health.

    01_health_report.yml

    The first playbook, 01_health_report.yml, runs three REST API queries to obtain health scores, then renders the returned JSON data into a report.

    ‘Get System Health’ sends a GET request to return the system health score and other information in JSON format.

    The ‘Save Health Markdown Report’ section collects all the data from previous API requests (health_system, health_topology, and health_tenant variables), combines the data with the templates/health_report.j2 Jinja2 template, and saves a Markdown text file with the result to reports/fragments/01_health_report.md.

    ---
    
    - name: QUERY FABRIC AND BUILD HEALTH REPORT
      hosts: apic
      
      tasks:
        - name: Get System Health
          aci_rest:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
            
            path: "/api/mo/topology/health.json"
            method: get
          delegate_to: localhost
          register: health_system
          
        - name: Print System Health Response Data
          debug:
            var: health_system
            verbosity: 1
          
        - name: Get Topology Health
          aci_rest:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
            
            path: "/api/node/class/topSystem.json?rsp-subtree-include=health,required"
            method: get
          delegate_to: localhost
          register: health_topology
          
        - name: Print Topology Health Response Data
          debug:
            var: health_topology
            verbosity: 1
            
        - name: Get Tenant Health
          aci_rest:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
            
            path: "/api/mo/uni/tn-{{ customer_name }}.json?rsp-subtree-include=health"
            method: get
          delegate_to: localhost
          register: health_tenant
          
        - name: Print Tenant Health Response Data
          debug:
            var: health_tenant
            verbosity: 1
            
        - name: Save Health Markdown Report
          template: 
            src: "health_report.j2"
            dest: "reports/fragments/01_health_report.md"
          delegate_to: localhost
    

    Create a health_report.j2 Jinja2 template file in the templates/ folder.
    The template extracts the System Health value from the Cisco APIC response and places it in your report.

    health_report.j2

    ## ACI Health
    
    ## System Health
     - Current System Health: **{{ health_system.imdata[0].fabricHealthTotal.attributes.cur }}**
    
    {% if health_topology %}
    ### Fabric Health
    
    {% for node in health_topology.imdata %}
    #### Node {{ node.topSystem.attributes.name }}
    
     - Current Node Health: **{{ node.topSystem.children.0.healthInst.attributes.cur }}**
     - Node **{{ node.topSystem.attributes.id }}** in **{{ node.topSystem.attributes.fabricDomain }}** with role **{{ node.topSystem.attributes.role }}** and serial number **{{ node.topSystem.attributes.serial }}**
    
    {% endfor %}
    {% endif %}
    
    ## Tenant SNV_Retail Health
     - Current Tenant Health: **{{ health_tenant.imdata[0].fvTenant.children[0].healthInst.attributes.cur }}**
     - Description: **{{ health_tenant.imdata[0].fvTenant.attributes.descr }}**
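
    The attribute paths the template dereferences can be checked in a few lines of Python. The payloads below are minimal samples shaped like APIC's imdata wrapper; the score values are illustrative, not real sandbox output.

```python
# Minimal samples shaped like the registered aci_rest responses;
# the values are illustrative only.
health_system = {
    "imdata": [{"fabricHealthTotal": {"attributes": {"cur": "95"}}}]
}
health_tenant = {
    "imdata": [{"fvTenant": {
        "attributes": {"descr": "Managed by Ansible"},
        "children": [{"healthInst": {"attributes": {"cur": "100"}}}],
    }}]
}

# The same dotted paths the Jinja2 template uses, written as dict lookups:
system_score = (
    health_system["imdata"][0]["fabricHealthTotal"]["attributes"]["cur"]
)
tenant_score = (
    health_tenant["imdata"][0]["fvTenant"]["children"][0]
                 ["healthInst"]["attributes"]["cur"]
)

print(system_score, tenant_score)
```

    Every APIC response wraps its objects in an imdata list, so the template always indexes into imdata first before reaching the class name and its attributes.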
    

    The 02_faults_report.yml playbook issues a REST API query that retrieves all objects of the faultSummary class, with a query parameter that tells the Cisco APIC to return the results ordered descending by their severity attribute. The results are stored in the faults_system variable.

    02_faults_report.yml

    The ‘Get Tenant Faults’ task lists the faults for a specific tenant, supplied dynamically from the inventory variable customer_name, in this case SNV_Retail.
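
    The APIC does the severity ordering server-side via the order-by parameter; the equivalent client-side sort can be sketched in Python. The numeric severity ranks below are an assumption for illustration.

```python
# Assumed severity ranking, highest first; the APIC applies its own
# ordering when order-by=faultSummary.severity|desc is requested.
SEVERITY_RANK = {"critical": 4, "major": 3, "minor": 2, "warning": 1}

# Minimal records shaped like faultSummary objects from imdata:
faults = [
    {"faultSummary": {"attributes": {"code": "F1207", "severity": "warning"}}},
    {"faultSummary": {"attributes": {"code": "F0104", "severity": "critical"}}},
    {"faultSummary": {"attributes": {"code": "F0467", "severity": "minor"}}},
]

# Sort descending by severity, mirroring the order-by query parameter.
faults.sort(
    key=lambda f: SEVERITY_RANK[f["faultSummary"]["attributes"]["severity"]],
    reverse=True,
)

ordered_codes = [f["faultSummary"]["attributes"]["code"] for f in faults]
print(ordered_codes)
```

    Letting the controller sort keeps the playbook simple, since the template can then just iterate over imdata in the order received.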

    ---
    
    - name:  QUERY FABRIC AND BUILD FAULTS REPORT
      hosts: apic
            
      tasks:
        - name: Get System Faults Summary
          aci_rest:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
            
            path: "/api/node/class/faultSummary.json?order-by=faultSummary.severity|desc"
            method: get
          delegate_to: localhost
          register: faults_system
          
        - name: Print System Faults Summary Response Data
          debug:
            var: faults_system
            verbosity: 1
            
        - name: Get Tenant Faults
          aci_rest:
            host: "{{ ansible_host }}"
            port: "{{ ansible_port }}"
            user: "{{ ansible_user }}"
            password: "{{ ansible_password }}"
            validate_certs: "{{ validate_certs }}"
            
            path: "/api/mo/uni/tn-{{ customer_name }}.json?rsp-subtree-include=faults,subtree,no-scoped"
            method: get
          delegate_to: localhost
          register: faults_tenant
          
        - name: Print Tenant Faults Response Data
          debug:
            var: faults_tenant
            verbosity: 1
            
        - name: Save Faults Markdown Report
          template: 
            src: "faults_report.j2"
            dest: "reports/fragments/02_faults_report.md"
          delegate_to: localhost
    

    And the corresponding template file for the fault reports:

    faults_report.j2

    ### System Faults Summary
    
    Total number of fault categories: **{{ faults_system.totalCount }}**
    
    {% for fault in faults_system.imdata %}
    - Severity-**{{ fault.faultSummary.attributes.severity }}**/Type-**{{ fault.faultSummary.attributes.type }}**/Code-**{{ fault.faultSummary.attributes.code }}**/Domain-**{{ fault.faultSummary.attributes.domain }}**
        + Cause: `{{ fault.faultSummary.attributes.cause }}`
        + Count: **{{ fault.faultSummary.attributes.count }}**
        + Description: {{ fault.faultSummary.attributes.descr }}
    {% endfor %}
    

    Finally, we create a master playbook to aggregate the reports generated by the previous playbooks.

    00_master.yml

    The assemble module takes all the files in the reports/fragments folder and concatenates them into the reports/infra_report.md file.

    The fragments are concatenated in string-sorted order, which is why the individual files use numeric prefixes to keep the ordering under control:

    reports/fragments/
    ├── 01_health_report.md
    └── 02_faults_report.md
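
    The assemble step amounts to a string-sorted concatenation, which can be sketched as follows. This is a simplified model of the module's behaviour, not its implementation; the demo file names mirror the fragment naming above.

```python
import tempfile
from pathlib import Path


def assemble(src_dir, dest_file):
    """Concatenate fragment files in string-sorted order -- a simplified
    sketch of what the Ansible assemble module does."""
    fragments = sorted(Path(src_dir).iterdir())  # '01_...' sorts before '02_...'
    Path(dest_file).write_text("".join(p.read_text() for p in fragments))


# Demo with throwaway fragment files, written out of order on purpose:
src = Path(tempfile.mkdtemp())
(src / "02_faults_report.md").write_text("faults\n")
(src / "01_health_report.md").write_text("health\n")
dest = Path(tempfile.mkdtemp()) / "infra_report.md"
assemble(src, dest)

result = dest.read_text()
print(result)  # health fragment first, faults fragment second
```

    Because the sort is lexicographic on file names, the numeric prefixes guarantee the health section always lands before the faults section in the final report.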
    
    ---
    
    - name:  ENVIRONMENT SETUP AND VALIDATION
      hosts: apic
            
      tasks:
      
        # All of these should be defined:
        # ansible_host, ansible_port, ansible_user, ansible_password, validate_certs
        # customer_name
        - name: Test that connection details are defined  
          assert:
            that:
              - 'ansible_host is defined'
              - 'ansible_port is defined'
              - 'ansible_user is defined'
              - 'ansible_password is defined'
              - 'validate_certs is defined'
              - 'customer_name is defined'
        fail_msg: "Please ensure that these variables exist: ansible_host,
              ansible_port, ansible_user, ansible_password, validate_certs
              and customer_name!"
            quiet: true
            
        # Create the reports/fragments folders if they don't already exist
        - name: Ensure the reports/fragments folder exists
          file:
            path: "reports/fragments"
            state: "directory"
          delegate_to: localhost
          
    # All playbooks imported here are designed to also execute independently
    - name: ACI HEALTH REPORT
      import_playbook: 01_health_report.yml
      
    - name: ACI FAULTS REPORT
      import_playbook: 02_faults_report.yml
      
    # Put together all of the reports in one file
    - name: CONSOLIDATE REPORTS INTO FINAL DOCUMENT
      hosts: localhost
      tags: assemble
      
      tasks:
        - name: Assemble the fragments into one file
          assemble:
            src: "reports/fragments"
            dest: "reports/infra_report.md"
    

    Open the generated final report reports/infra_report.md. You should see the health scores followed by the system faults and SNV_Retail tenant faults:

    infra_report.md

    ## ACI Health
    
    ## System Health
     - Current System Health: **0**
    
    ### Fabric Health
    
    #### Node leaf-1
    
     - Current Node Health: **90**
     - Node **101** in **ACI Fabric1** with role **leaf** and serial number **TEP-1-101**
    
    #### Node leaf-2
    
     - Current Node Health: **90**
     - Node **102** in **ACI Fabric1** with role **leaf** and serial number **TEP-1-102**
    
    #### Node spine-1
    
     - Current Node Health: **0**
     - Node **201** in **ACI Fabric1** with role **spine** and serial number **TEP-1-103**
    
    
    ## Tenant SNV_Retail Health
     - Current Tenant Health: **100**
     - Description: **SNV_Retail Hosted Customer Services - Managed by Ansible**
    ### System Faults Summary
    
    Total number of fault categories: **30**
    
    - Severity-**critical**/Type-**operational**/Code-**F0104**/Domain-**infra**
        + Cause: `port-down`
        + Count: **1**
        + Description: This fault occurs when a bond interface on a controller is in the link-down state.
    - Severity-**critical**/Type-**operational**/Code-**F103824**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **1**
        + Description: Threshold crossing alert for class eqptTemp5min, property normalizedLast
    - Severity-**major**/Type-**config**/Code-**F3083**/Domain-**tenant**
        + Cause: `config-error`
        + Count: **44**
        + Description: This fault occurs when multiple MACs have same IP address in the same VRF.
    - Severity-**major**/Type-**environmental**/Code-**F1318**/Domain-**infra**
        + Cause: `equipment-psu-missing`
        + Count: **3**
        + Description: This fault occurs when PSU are not detected correctly
    - Severity-**minor**/Type-**config**/Code-**F0467**/Domain-**tenant**
        + Cause: `configuration-failed`
        + Count: **44**
        + Description: This fault occurs when an End Point Group / End Point Security Group is incompletely or incorrectly configured.
    - Severity-**minor**/Type-**config**/Code-**F1295**/Domain-**infra**
        + Cause: `configuration-failed`
        + Count: **6**
        + Description: This fault is raised when a Date and Time Policy (datetime:Pol) fails to apply due to configuration issues.
    - Severity-**minor**/Type-**operational**/Code-**F1651**/Domain-**infra**
        + Cause: `export-data-failed`
        + Count: **1**
        + Description: This fault occurs when export operation for techsupport or core files did not succeed.
    - Severity-**minor**/Type-**config**/Code-**F0523**/Domain-**tenant**
        + Cause: `configuration-failed`
        + Count: **2**
        + Description: This fault occurs when an End Point Group / End Point Security Group is incompletely or incorrectly configured.
    - Severity-**minor**/Type-**operational**/Code-**F4149**/Domain-**infra**
        + Cause: `oper-state-change`
        + Count: **1**
        + Description: This fault occurs when you remove LC/FM/SUP/SC from the slot
    - Severity-**warning**/Type-**operational**/Code-**F1207**/Domain-**access**
        + Cause: `protocol-arp-adjacency-down`
        + Count: **1**
        + Description: This fault occurs when the operational state of the arp adjacency is down
    - Severity-**warning**/Type-**operational**/Code-**F110344**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **22**
        + Description: Threshold crossing alert for class l2IngrBytesPart5min, property dropRate
    - Severity-**warning**/Type-**operational**/Code-**F112128**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **27**
        + Description: Threshold crossing alert for class l2IngrPkts5min, property dropRate
    - Severity-**warning**/Type-**operational**/Code-**F110473**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **4**
        + Description: Threshold crossing alert for class l2IngrBytesAg15min, property dropRate
    - Severity-**warning**/Type-**operational**/Code-**F112425**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **4**
        + Description: Threshold crossing alert for class l2IngrPktsAg15min, property dropRate
    - Severity-**warning**/Type-**config**/Code-**F1037**/Domain-**infra**
        + Cause: `resolution-failed`
        + Count: **2**
        + Description: The object refers to an object that was not found.
    - Severity-**warning**/Type-**config**/Code-**F1014**/Domain-**infra**
        + Cause: `resolution-failed`
        + Count: **2**
        + Description: The object refers to an object that was not found.
    - Severity-**warning**/Type-**operational**/Code-**F100696**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **18**
        + Description: Threshold crossing alert for class eqptIngrDropPkts5min, property forwardingRate
    - Severity-**warning**/Type-**operational**/Code-**F1360**/Domain-**access**
        + Cause: `protocol-coop-adjacency-down`
        + Count: **2**
        + Description: This fault occurs when the operational state of the coop adjacency is down
    - Severity-**warning**/Type-**config**/Code-**F3057**/Domain-**external**
        + Cause: `product-not-registered`
        + Count: **1**
        + Description: This fault is raised when APIC Controller product is not registered with Cisco Smart Software Manager (CSSM).
    - Severity-**warning**/Type-**config**/Code-**F0956**/Domain-**infra**
        + Cause: `resolution-failed`
        + Count: **4**
        + Description: The object refers to an object that was not found.
    - Severity-**warning**/Type-**operational**/Code-**F110176**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **27**
        + Description: Threshold crossing alert for class l2IngrBytes5min, property dropRate
    - Severity-**warning**/Type-**operational**/Code-**F100264**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **18**
        + Description: Threshold crossing alert for class eqptIngrDropPkts5min, property bufferRate
    - Severity-**warning**/Type-**operational**/Code-**F112296**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **22**
        + Description: Threshold crossing alert for class l2IngrPktsPart5min, property dropRate
    - Severity-**warning**/Type-**config**/Code-**F0955**/Domain-**infra**
        + Cause: `resolution-failed`
        + Count: **2**
        + Description: The object refers to an object that was not found.
    - Severity-**warning**/Type-**operational**/Code-**F96976**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **18**
        + Description: Threshold crossing alert for class eqptEgrDropPkts5min, property errorRate
    - Severity-**warning**/Type-**config**/Code-**F1021**/Domain-**infra**
        + Cause: `resolution-failed`
        + Count: **1**
        + Description: The object refers to an object that was not found.
    - Severity-**warning**/Type-**operational**/Code-**F96760**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **18**
        + Description: Threshold crossing alert for class eqptEgrDropPkts5min, property bufferRate
    - Severity-**warning**/Type-**config**/Code-**F0981**/Domain-**infra**
        + Cause: `resolution-failed`
        + Count: **1**
        + Description: The object refers to an object that was not found.
    - Severity-**warning**/Type-**operational**/Code-**F100480**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **18**
        + Description: Threshold crossing alert for class eqptIngrDropPkts5min, property errorRate
    - Severity-**warning**/Type-**operational**/Code-**F381328**/Domain-**infra**
        + Cause: `threshold-crossed`
        + Count: **12**
        + Description: Threshold crossing alert for class eqptIngrErrPkts5min, property crcLast
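
A summary like the one above groups raw `faultInst` records by fault code and counts the repeats. Below is a minimal Python sketch of that aggregation, assuming a list of fault attribute dicts such as those found under `imdata[].faultInst.attributes` in a class query response (e.g. `GET /api/class/faultInst.json`). The sample records are illustrative only, not real API output:

```python
from collections import Counter

# Illustrative faultInst attribute records (field names follow the ACI
# object model; the values here are made up for the example).
faults = [
    {"severity": "warning", "type": "operational", "code": "F110344",
     "domain": "infra", "cause": "threshold-crossed",
     "descr": "Threshold crossing alert for class l2IngrBytesPart5min, property dropRate"},
    {"severity": "warning", "type": "operational", "code": "F110344",
     "domain": "infra", "cause": "threshold-crossed",
     "descr": "Threshold crossing alert for class l2IngrBytesPart5min, property dropRate"},
    {"severity": "minor", "type": "operational", "code": "F4149",
     "domain": "infra", "cause": "oper-state-change",
     "descr": "This fault occurs when you remove LC/FM/SUP/SC from the slot"},
]

# Count how many times each distinct fault code occurs.
counts = Counter(f["code"] for f in faults)

# Print one summary bullet per unique code, in the style used above.
seen = set()
for f in faults:
    if f["code"] in seen:
        continue
    seen.add(f["code"])
    print(f"- Severity-**{f['severity']}**/Type-**{f['type']}**"
          f"/Code-**{f['code']}**/Domain-**{f['domain']}**")
    print(f"    + Cause: `{f['cause']}`")
    print(f"    + Count: **{counts[f['code']]}**")
    print(f"    + Description: {f['descr']}")
```

In a playbook, the same grouping could be done after registering the query result, but doing it in Python keeps the Jinja2 expressions in the play simple.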