From 25141584984d095fc853acad55d2cf756222534c Mon Sep 17 00:00:00 2001
From: Rafael Guterres Jeffman <rjeffman@redhat.com>
Date: Thu, 25 Aug 2022 14:13:33 -0300
Subject: [PATCH] upstream CI: run PR tests only for affected plugins

The current workflow for bug fixes and enhancements in ansible-freeipa
runs the Ansible playbook tests for all available plugins on every
pull request, even for self-contained modifications.

This patch adds a new workflow for pull requests in which only the
plugins affected by the change are tested. Changes that might affect
other parts of the code also trigger the tests for those parts.

A utility script, utils/set_test_modules, is sourced to set the
variables IPA_ENABLED_MODULES and IPA_ENABLED_TESTS before executing
the tests, effectively limiting which tests are executed. It relies on
the Python script 'utils/get_test_modules.py', which lists all test
modules that should be executed for a given list of modified source
files.
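
For illustration, on a topic branch the script can be sourced locally;
it prints the computed values (which depend on the files that differ
from the base branch):

    $ . utils/set_test_modules
    IPA_ENABLED_MODULES = [...]
    IPA_ENABLED_TESTS = [...]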

Tests are selected for execution based on the plugin name. For example,
a change to 'plugins/modules/ipalocation.py' would trigger all playbook
tests under 'tests/location'. If a test playbook is modified, it is
scheduled to be executed. Changes to any file under
'plugins/module_utils' will force the execution of all tests, since any
module might be affected by that change.
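
For example, the selection for such a change can be checked with:

    $ python utils/get_test_modules.py plugins/modules/ipalocation.py

which prints a comma-separated list containing 'location' plus any
other test suite whose playbooks call the module.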

The nature of the change is not evaluated, so a simple typo fix in a
comment of a file under 'plugins/module_utils' would still schedule
all test playbooks to be executed.

For roles, any file changed under the role directory causes the role
to be included in the tests. Playbook tests for roles must be created
under 'tests/<rolename>_role', where <rolename> is the name of the
role without the 'ipa' prefix; for example, the test playbooks for the
'ipabackup' role are stored under 'tests/backup_role'.
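
For example, assuming 'tests/backup_role' contains test playbooks, the
mapping for the 'ipabackup' role can be checked with:

    $ python utils/get_test_modules.py roles/ipabackup/tasks/main.yml

which would include 'backup_role' in the printed list.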

Since the number of selected tests might be smaller than the number of
test groups used (3), some groups may end up with no tests to run. A
new pytest dependency, pytest-custom_exit_code, was added so that an
empty test group is not reported as a failure.
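
For reference, pytest exits with code 5 when it collects no tests; the
plugin's option, used in the new template, turns that case into a
success:

    pytest -m playbook --suppress-no-test-exit-code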

A new pipeline on Azure needs to be created to use the new test script.
---
 requirements-tests.txt                  |   1 +
 tests/azure/pr-pipeline.yml             |  74 +++++++++
 tests/azure/templates/fast_tests.yml    |  41 +++++
 tests/azure/templates/playbook_fast.yml |  84 ++++++++++
 utils/get_test_modules.py               | 206 ++++++++++++++++++++++++
 utils/set_test_modules                  |  44 +++++
 6 files changed, 450 insertions(+)
 create mode 100644 tests/azure/pr-pipeline.yml
 create mode 100644 tests/azure/templates/fast_tests.yml
 create mode 100644 tests/azure/templates/playbook_fast.yml
 create mode 100644 utils/get_test_modules.py
 create mode 100644 utils/set_test_modules

diff --git a/requirements-tests.txt b/requirements-tests.txt
index 390a5eeb..292e81c6 100644
--- a/requirements-tests.txt
+++ b/requirements-tests.txt
@@ -2,5 +2,6 @@
 pytest>=2.7
 pytest-sourceorder>=0.5
 pytest-split>=0.8.0
+pytest-custom_exit_code>=0.3.0
 pytest-testinfra>=5.0
 pyyaml>=3
diff --git a/tests/azure/pr-pipeline.yml b/tests/azure/pr-pipeline.yml
new file mode 100644
index 00000000..0ca82912
--- /dev/null
+++ b/tests/azure/pr-pipeline.yml
@@ -0,0 +1,74 @@
+---
+trigger:
+- master
+
+pool:
+  vmImage: 'ubuntu-latest'
+
+stages:
+
+# Fedora
+
+- stage: Fedora_Latest
+  dependsOn: []
+  jobs:
+  - template: templates/fast_tests.yml
+    parameters:
+      build_number: $(Build.BuildNumber)
+      scenario: fedora-latest
+      ansible_version: "-core >=2.12,<2.13"
+
+# Galaxy on Fedora
+
+- stage: Galaxy_Fedora_Latest
+  dependsOn: []
+  jobs:
+  - template: templates/fast_tests.yml
+    parameters:
+      build_number: $(Build.BuildNumber)
+      scenario: fedora-latest
+      ansible_version: "-core >=2.12,<2.13"
+
+# CentOS 9 Stream
+
+- stage: CentOS_9_Stream
+  dependsOn: []
+  jobs:
+  - template: templates/fast_tests.yml
+    parameters:
+      build_number: $(Build.BuildNumber)
+      scenario: c9s
+      ansible_version: "-core >=2.12,<2.13"
+
+# CentOS 8 Stream
+
+- stage: CentOS_8_Stream
+  dependsOn: []
+  jobs:
+  - template: templates/fast_tests.yml
+    parameters:
+      build_number: $(Build.BuildNumber)
+      scenario: c8s
+      ansible_version: "-core >=2.12,<2.13"
+
+# CentOS 7
+
+- stage: CentOS_7
+  dependsOn: []
+  jobs:
+  - template: templates/fast_tests.yml
+    parameters:
+      build_number: $(Build.BuildNumber)
+      scenario: centos-7
+      ansible_version: "-core >=2.12,<2.13"
+
+# Rawhide
+
+- stage: Fedora_Rawhide
+  dependsOn: []
+  jobs:
+  - template: templates/fast_tests.yml
+    parameters:
+      build_number: $(Build.BuildNumber)
+      scenario: fedora-rawhide
+      ansible_version: "-core >=2.12,<2.13"
diff --git a/tests/azure/templates/fast_tests.yml b/tests/azure/templates/fast_tests.yml
new file mode 100644
index 00000000..cde72a70
--- /dev/null
+++ b/tests/azure/templates/fast_tests.yml
@@ -0,0 +1,41 @@
+---
+parameters:
+  - name: scenario
+    type: string
+    default: fedora-latest
+  - name: build_number
+    type: string
+  - name: ansible_version
+    type: string
+    default: ""
+
+jobs:
+- template: playbook_fast.yml
+  parameters:
+    group_number: 1
+    number_of_groups: 3
+    build_number: ${{ parameters.build_number }}
+    scenario: ${{ parameters.scenario }}
+    ansible_version: ${{ parameters.ansible_version }}
+
+- template: playbook_fast.yml
+  parameters:
+    group_number: 2
+    number_of_groups: 3
+    build_number: ${{ parameters.build_number }}
+    scenario: ${{ parameters.scenario }}
+    ansible_version: ${{ parameters.ansible_version }}
+
+- template: playbook_fast.yml
+  parameters:
+    group_number: 3
+    number_of_groups: 3
+    build_number: ${{ parameters.build_number }}
+    scenario: ${{ parameters.scenario }}
+    ansible_version: ${{ parameters.ansible_version }}
+
+# - template: pytest_tests.yml
+#   parameters:
+#     build_number: ${{ parameters.build_number }}
+#     scenario: ${{ parameters.scenario }}
+#     ansible_version: ${{ parameters.ansible_version }}
diff --git a/tests/azure/templates/playbook_fast.yml b/tests/azure/templates/playbook_fast.yml
new file mode 100644
index 00000000..ef613cd0
--- /dev/null
+++ b/tests/azure/templates/playbook_fast.yml
@@ -0,0 +1,84 @@
+---
+parameters:
+  - name: group_number
+    type: number
+    default: 1
+  - name: number_of_groups
+    type: number
+    default: 1
+  - name: scenario
+    type: string
+    default: fedora-latest
+  - name: ansible_version
+    type: string
+    default: ""
+  - name: python_version
+    type: string
+    default: 3.x
+  - name: build_number
+    type: string
+
+jobs:
+- job: Test_Group${{ parameters.group_number }}
+  displayName: Run playbook tests ${{ parameters.scenario }} (${{ parameters.group_number }}/${{ parameters.number_of_groups }})
+  timeoutInMinutes: 120
+  variables:
+  - template: variables.yaml
+  - template: variables_${{ parameters.scenario }}.yaml
+  steps:
+  - task: UsePythonVersion@0
+    inputs:
+      versionSpec: '${{ parameters.python_version }}'
+
+  - script: |
+      pip install \
+        "molecule[docker]>=3" \
+        "ansible${{ parameters.ansible_version }}"
+    displayName: Install molecule and Ansible
+
+  - script: ansible-galaxy collection install community.docker ansible.posix
+    displayName: Install Ansible collections
+
+  - script: pip install -r requirements-tests.txt
+    displayName: Install dependencies
+
+  - script: |
+      mkdir -p ~/.ansible/roles ~/.ansible/library ~/.ansible/module_utils
+      cp -a roles/* ~/.ansible/roles
+      cp -a plugins/modules/* ~/.ansible/library
+      cp -a plugins/module_utils/* ~/.ansible/module_utils
+      molecule create -s ${{ parameters.scenario }}
+    displayName: Setup test container
+    env:
+      ANSIBLE_LIBRARY: ./molecule
+
+  - script: |
+      . utils/set_test_modules
+      python utils/check_test_configuration.py ${{ parameters.scenario }}
+    displayName: Check scenario test configuration
+
+  - script: |
+      . utils/set_test_modules
+      # pytest exit code 5 means "no tests were collected"; a PR that does
+      # not touch any tested plugin must not fail, so accept it as success.
+      pytest \
+        -m "playbook" \
+        --verbose \
+        --color=yes \
+        --suppress-no-test-exit-code \
+        --splits=${{ parameters.number_of_groups }} \
+        --group=${{ parameters.group_number }} \
+        --junit-xml=TEST-results-group-${{ parameters.group_number }}.xml \
+      || [ $? -eq 5 ]
+    displayName: Run playbook tests
+    env:
+      IPA_SERVER_HOST: ${{ parameters.scenario }}
+      RUN_TESTS_IN_DOCKER: true
+      IPA_DISABLED_MODULES: ${{ variables.ipa_disabled_modules }}
+      IPA_DISABLED_TESTS: ${{ variables.ipa_disabled_tests }}
+
+  - task: PublishTestResults@2
+    inputs:
+      mergeTestResults: true
+      testRunTitle: PlaybookTests-Build${{ parameters.build_number }}
+    condition: succeededOrFailed()
diff --git a/utils/get_test_modules.py b/utils/get_test_modules.py
new file mode 100644
index 00000000..d3f4c043
--- /dev/null
+++ b/utils/get_test_modules.py
@@ -0,0 +1,206 @@
+"""Filter tests based on plugin modifications."""
+
+import sys
+import os
+from importlib.machinery import SourceFileLoader
+import types
+from unittest import mock
+import yaml
+
+
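+# Original builtin __import__, used by import_mock() to do the real import.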
+PYTHON_IMPORT = __import__
+
+
+def get_plugins_from_playbook(playbook):
+    """Get all plugins called in the given playbook."""
+    def get_tasks(task_block):
+        """
+        Get all plugins used on tasks.
+
+        Recursively process "block", "include_tasks" and "import_tasks".
+        """
+        _result = set()
+        for tasks in task_block:
+            for task in tasks:
+                original_task = task
+                if "." in task:
+                    task = task.split(".")[-1]
+                if task == "block":
+                    _result.update(get_tasks(tasks["block"]))
+                elif task in ["include_tasks", "import_tasks"]:
+                    parent = os.path.dirname(playbook)
+                    include_task = tasks[task]
+                    if isinstance(include_task, dict):
+                        include_file = os.path.join(
+                            parent, include_task["file"]
+                        )
+                    else:
+                        include_file = os.path.join(parent, include_task)
+                    _result.update(get_plugins_from_playbook(include_file))
+                elif task == "include_role":
+                    _result.add(f"_{tasks[original_task]['name']}")
+                elif task.startswith("ipa"):
+                    # assume we are only interested in 'ipa*' modules/roles
+                    _result.add(task)
+                elif task == "role":
+                    # not really a "task", but we'll handle it the same way.
+                    _result.add(f"_{tasks[task]}")
+        return _result
+
+    def load_playbook(filename):
+        """Load playbook file using Python's YAML parser."""
+        if not (filename.endswith("yml") or filename.endswith("yaml")):
+            return []
+        # print("Processing:", playbook)
+        try:
+            with open(filename, "rt") as playbook_file:
+                data = yaml.safe_load(playbook_file)
+        except yaml.scanner.ScannerError:  # If not a YAML/JSON file.
+            return []
+        except yaml.parser.ParserError:  # If not a YAML/JSON file.
+            return []
+        else:
+            return data if data else []
+
+    data = load_playbook(playbook)
+    task_blocks = [t.get("tasks", []) if "tasks" in t else [] for t in data]
+    role_blocks = [t.get("roles", []) if "roles" in t else [] for t in data]
+    # assume file is a list of tasks if no "tasks" entry found.
+    if not task_blocks:
+        task_blocks = [data]
+    _result = set()
+    for task_block in task_blocks:
+        _result.update(get_tasks(task_block))
+    # roles
+    for role_block in role_blocks:
+        _result.update(get_tasks(role_block))
+
+    return _result
+
+
+def import_mock(name, *args):
+    """Intercept 'import' calls and store module name."""
+    if not hasattr(import_mock, "call_list"):
+        setattr(import_mock, "call_list", set())
+    import_mock.call_list.add(name)  # pylint: disable=no-member
+    try:
+        # print("NAME:", name)
+        return PYTHON_IMPORT(name, *args)
+    except ModuleNotFoundError:
+        # We're not really interested in loading the module;
+        # if it can't be imported, it is not something we care about.
+        return mock.Mock()
+    except Exception:  # pylint: disable=broad-except
+        print(
+            "An unexpected error occurred. Do you have all requirements set?",
+            file=sys.stderr
+        )
+        sys.exit(1)
+
+
+def parse_playbooks(test_module):
+    """Load all playbooks for 'test_module' directory."""
+    if test_module.name[0] in [".", "_"] or test_module.name == "pytests":
+        return []
+    _files = set()
+    for arg in os.scandir(test_module):
+        if arg.is_dir():
+            _files.update(parse_playbooks(arg))
+        else:
+            for playbook in get_plugins_from_playbook(arg.path):
+                if playbook.startswith("_"):
+                    source = f"roles/{playbook[1:]}"
+                    if os.path.isdir(source):
+                        _files.add(source)
+                else:
+                    source = f"plugins/modules/{playbook}.py"
+                    if os.path.isfile(source):
+                        _files.add(source)
+                        # If a plugin imports a module from the repository,
+                        # we'll find it by patching the builtin __import__
+                        # function and importing the module from the source
+                        # file. The modules imported as a result of the import
+                        # will be added to the import_mock.call_list list.
+                        with mock.patch(
+                            "builtins.__import__", side_effect=import_mock
+                        ):
+                            # pylint: disable=no-value-for-parameter
+                            loader = SourceFileLoader(playbook, source)
+                            loader.exec_module(types.ModuleType(loader.name))
+                        # pylint: disable=no-member
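+                        # Map imports like "ansible.module_utils.xyz" to
+                        # repository paths ("plugins/module_utils/xyz.py").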
+                        candidates = [
+                            f.split(".")[1:]
+                            for f in import_mock.call_list
+                            if f.startswith("ansible.")
+                        ]
+                        # pylint: enable=no-member
+                        files = [
+                            "plugins/" + "/".join(f) + ".py"
+                            for f in candidates
+                        ]
+                        _files.update([f for f in files if os.path.isfile(f)])
+                    else:
+                        source = f"roles/{playbook}"
+                        if os.path.isdir(source):
+                            _files.add(source)
+
+    return _files
+
+
+def map_test_module_sources(base):
+    """Create a map of 'test-modules' to 'plugin-sources', from 'base'."""
+    # Find root directory of playbook tests.
+    script_dir = os.path.dirname(__file__)
+    test_root = os.path.realpath(os.path.join(script_dir, f"../{base}"))
+    # create modules:source_files map
+    _result = {}
+    for test_module in [d for d in os.scandir(test_root) if d.is_dir()]:
+        _depends_on = parse_playbooks(test_module)
+        if _depends_on:
+            _result[test_module.name] = _depends_on
+    return _result
+
+
+def usage(err=0):
+    """Print the usage message and exit with status 'err'."""
+    print("get_test_modules.py [-h|--help] [-p|--pytest] PY_SRC...")
+    print(
+        """
+Print a comma-separated list of modules that should be tested if
+PY_SRC is modified.
+
+Options:
+
+    -h, --help      Print this message and exit.
+    -p, --pytest    Evaluate pytest tests (playbooks only).
+"""
+    )
+    sys.exit(err)
+
+
+def main():
+    """Program entry point."""
+    if "-h" in sys.argv or "--help" in sys.argv:
+        usage()
+    _base = "tests"
+    if "-p" in sys.argv or "--pytest" in sys.argv:
+        _base = "tests/pytests"
+    call_args = [x for x in sys.argv[1:] if x not in ["-p", "--pytest"]]
+    _mapping = map_test_module_sources(_base)
+    _test_suites = (
+        [
+            _module for _module, _files in _mapping.items()
+            for _arg in call_args
+            for _file in _files
+            if _file.startswith(_arg)
+        ] + [
+            _role for _role in [x for x in _mapping if x.endswith("_role")]
+            for _arg in call_args
+            if _arg.startswith("roles/ipa" + _role[:-5])
+        ]
+    )
+    if _test_suites:
+        print(",".join(sorted(_test_suites)))
+
+
+if __name__ == "__main__":
+    main()
diff --git a/utils/set_test_modules b/utils/set_test_modules
new file mode 100644
index 00000000..b93e38ce
--- /dev/null
+++ b/utils/set_test_modules
@@ -0,0 +1,44 @@
+#!/bin/bash -eu
+# This file should be sourced (. set_test_modules) rather than executed.
+
+#
+# Set "BASE_BRANCH" to a different branch to compare. 
+#
+
+RED="\033[31;1m"
+RST="\033[0m"
+
+die() {
+    echo -e "${RED}${*}${RST}" >&2
+}
+
+TOPDIR="$(dirname "${BASH_SOURCE[0]}")/.."
+
+pushd "${TOPDIR}" >/dev/null 2>&1 || die "Failed to change directory."
+
+files_list=$(mktemp)
+
+BASE_BRANCH=${BASE_BRANCH:-"master"}
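+# Files that differ from the base branch drive the test selection.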
+git diff "${BASE_BRANCH}" --name-only > "${files_list}"
+
+# Get all modules that should have tests executed
+enabled_modules="$(python utils/get_test_modules.py $(cat "${files_list}"))"
+[ -z "${enabled_modules}" ] && enabled_modules="None"
+
+# Get individual tests that should be executed
+mapfile -t tests < <(sed -n "s#.*/\(test_[^/]*\)\.yml#\1#p" "${files_list}" | tr -d " ")
+enabled_tests=""
+[ ${#tests[@]} -gt 0 ] && enabled_tests=$(IFS=, ; echo "${tests[*]}")
+[ -z "${enabled_tests}" ] && enabled_tests="None"
+
+# Prepend the computed values, keeping any pre-set environment values.
+IPA_ENABLED_TESTS="${enabled_tests}${IPA_ENABLED_TESTS:+,${IPA_ENABLED_TESTS}}"
+IPA_ENABLED_MODULES="${enabled_modules}${IPA_ENABLED_MODULES:+,${IPA_ENABLED_MODULES}}"
+
+rm -f "${files_list}"
+
+export IPA_ENABLED_MODULES
+export IPA_ENABLED_TESTS
+
+echo "IPA_ENABLED_MODULES = [${IPA_ENABLED_MODULES}]"
+echo "IPA_ENABLED_TESTS = [${IPA_ENABLED_TESTS}]"
+
+popd >/dev/null 2>&1 || die "Failed to change back to original directory."
-- 
GitLab