Add quickstart testing docs for collections (#77290)

Co-authored-by: Divine Odazie <dodazie@gmail.com>
Co-authored-by: Emmanuel Ugwu <32464178+ugwutotheeshoes@users.noreply.github.com>
Sandra McCann 3 years ago committed by GitHub
.. _collection_integration_tests_about:

Understanding integration tests
===============================

.. note::

  Some collections do not have integration tests.

Integration tests are functional tests of modules and plugins.
With integration tests, we check if a module or plugin satisfies its functional requirements. Simply put, we check that features work as expected and that users get the outcome described in the module or plugin documentation.
There are :ref:`two kinds of integration tests <collections_adding_integration_test>` used in collections:

* integration tests that use Ansible roles
* integration tests that use ``runme.sh``

This section focuses on integration tests that use Ansible roles.

Integration tests check modules with playbooks that invoke those modules. The tests pass standalone parameters and their combinations, check what the module or plugin reports with the :ref:`assert <ansible_collections.ansible.builtin.assert_module>` module, and check the actual state of the system after each task.
Integration test example
-------------------------
Let's say we want to test the ``postgresql_user`` module invoked with the ``name`` parameter. We expect that the module will both create a user based on the provided value of the ``name`` parameter and will report that the system state has changed. We cannot rely on only what the module reports. To be sure that the user has been created, we query our database with another module to see if the user exists.
.. code-block:: yaml

  - name: Create PostgreSQL user and store module's output to the result variable
    postgresql_user:
      name: test_user
    register: result

  - name: Check the module returns what we expect
    assert:
      that:
        - result is changed

  - name: Check actual system state with another module, in other words, that the user exists
    postgresql_query:
      query: SELECT * FROM pg_authid WHERE rolname = 'test_user'
    register: query_result

  - name: We expect it returns one row, check it
    assert:
      that:
        - query_result.rowcount == 1
Details about integration tests
--------------------------------
The basic entity of an Ansible integration test is a ``target``. The target is an :ref:`Ansible role <playbooks_reuse_roles>` stored in the ``tests/integration/targets`` directory of the collection repository. The target role contains everything that is needed to test a module.
The names of targets contain the module or plugin name that they test. Target names that start with ``setup_`` are usually executed as dependencies before module and plugin targets start execution. See :ref:`collection_creating_integration_tests` for details.
To run integration tests, we use the ``ansible-test`` utility that is included in the ``ansible-core`` and ``ansible`` packages. See :ref:`collection_run_integration_tests` for details. After you finish your integration tests, see :ref:`collection_quickstart` to learn how to submit a pull request.
.. _collection_integration_prepare:
Preparing for integration tests for collections
=================================================
To prepare for developing integration tests:
#. :ref:`Set up your local environment <collection_prepare_environment>`.
#. Determine if integration tests already exist.
.. code-block:: bash

  ansible-test integration --list-targets
If a collection already has integration tests, they are stored in ``tests/integration/targets/*`` subdirectories of the collection repository.
If you use ``bash`` and the ``argcomplete`` package is installed with ``pip`` on your system, you can also get a full target list.
.. code-block:: shell

  ansible-test integration <tab><tab>
Alternately, you can check if the ``tests/integration/targets`` directory contains a corresponding directory with the same name as the module. For example, the tests for the ``postgresql_user`` module of the ``community.postgresql`` collection are stored in the ``tests/integration/targets/postgresql_user`` directory of the collection repository. If there is no corresponding target there, then that module does not have integration tests. In this case, consider adding integration tests for the module. See :ref:`collection_creating_integration_tests` for details.
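The directory check described above can also be scripted. Here is a minimal sketch, assuming you run it from the collection root; the module name is only an example:

```shell
# Hypothetical check: does an integration test target exist for a module?
# Run from the collection root directory; "postgresql_user" is an example name.
MODULE=postgresql_user
if [ -d "tests/integration/targets/${MODULE}" ]; then
    echo "Integration tests for ${MODULE} exist"
else
    echo "No integration tests for ${MODULE} yet"
fi
```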
.. _collection_integration_recommendations:
Recommendations on coverage
===========================
Bugfixes
--------
Before fixing code, create a test case in an :ref:`appropriate test target<collection_integration_prepare>` that reproduces the bug provided by the issue reporter and described in the ``Steps to Reproduce`` issue section. :ref:`Run <collection_run_integration_tests>` the tests.
If you fail to reproduce the bug, ask the reporter to provide additional information. The issue may be related to environment settings. Sometimes specific environment issues cannot be reproduced in integration tests; in that case, manual testing by the issue reporter or other interested users is required.
Refactoring code
----------------
When refactoring code, always check that related options are covered in a :ref:`corresponding test target<collection_integration_prepare>`. Do not assume that everything is covered just because a test target exists.
.. _collections_recommendation_modules:
Covering modules / new features
-------------------------------
When covering a module, cover all its options separately and their meaningful combinations. Every possible use of the module should be tested against:
- Idempotency (Does rerunning a task report no changes?)
- Check-mode (Does dry-running a task behave the same as a real run? Does it not make any changes?)
- Return values (Does the module return values consistently under different conditions?)
Each test action has to be tested at least the following times:

- Perform an action in check-mode if supported (this should indicate a change).
- Check with another module that the changes have **not** actually been made.
- Perform the action for real (this should indicate a change).
- Check with another module that the changes have actually been made.
- Perform the action again in check-mode (this should indicate **no** change).
- Perform the action again for real (this should indicate **no** change).
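The cycle above can be sketched in a test task file. This is only an illustration: the ``abstract_file`` module and its parameters are made up, and a state-changing module of your own would take their place:

.. code-block:: yaml

  - name: Create a file in check-mode
    abstract_file:
      path: /tmp/test_file
      state: present
    check_mode: yes
    register: result

  - name: The module must report a change without making it
    assert:
      that:
        - result is changed

  - name: Check with another module that the file has not actually been created
    stat:
      path: /tmp/test_file
    register: file_stat

  - name: The file must not exist yet
    assert:
      that:
        - not file_stat.stat.exists

  - name: Create the file for real
    abstract_file:
      path: /tmp/test_file
      state: present
    register: result

  - name: The module must report a change
    assert:
      that:
        - result is changed

  - name: Create the file again for real (idempotency)
    abstract_file:
      path: /tmp/test_file
      state: present
    register: result

  - name: The module must report no change this time
    assert:
      that:
        - result is not changed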
To check a task:

1. Register the outcome of the task in a variable, for example, ``register: result``. Using the :ref:`assert <ansible_collections.ansible.builtin.assert_module>` module, check:

   #. Whether ``result is changed`` or not.
   #. The expected return values.

2. If the module changes the system state, check the actual system state using at least one other module. For example, if the module changes a file, we can check that the file has been changed by comparing its checksum with the :ref:`stat <ansible_collections.ansible.builtin.stat_module>` module before and after the test tasks.

3. Run the same task with ``check_mode: yes`` (if check-mode is supported by the module). Check with other modules that the actual system state has not been changed.

4. Cover cases when the module must fail. Use the ``ignore_errors: yes`` option and check the returned message with the ``assert`` module.
Example:

.. code-block:: yaml

  - name: Task to fail
    abstract_module:
      ...
    register: result
    ignore_errors: yes

  - name: Check the task fails and its error message
    assert:
      that:
        - result is failed
        - result.msg == 'Message we expect'
Here is a summary:

- Cover options and their sensible combinations.
- Check returned values.
- Cover check-mode if supported.
- Check the system state with other modules.
- Check failure cases and their error messages.

.. _collection_creating_integration_tests:
Creating new integration tests
=================================
This section covers the following cases:
- There are no integration tests for a collection / group of modules in a collection at all.
- You are adding a new module and you want to include integration tests.
- You want to add integration tests for a module that already exists without integration tests.
In other words, there are currently no tests for a module regardless of whether the module exists or not.
If the module already has tests, see :ref:`collection_updating_integration_tests`.
Simplified example
--------------------
Here is a simplified abstract example.
Let's say we are going to add integration tests to a new module in the ``community.abstract`` collection which interacts with some service.
We :ref:`checked<collection_integration_prepare>` and determined that there are no integration tests at all.
We should basically do the following:
1. Install and run the service with a ``setup`` target.
2. Create a test target.
3. Add integration tests for the module.
4. :ref:`Run the tests<collection_run_integration_tests>`.
5. Fix the code and tests as needed, run the tests again, and repeat the cycle until they pass.
.. note::

  You can reuse the ``setup`` target when implementing other targets that also use the same service.
1. Clone the collection to the ``~/ansible_collections/community/abstract`` directory on your local machine.

2. From the ``~/ansible_collections/community/abstract`` directory, create directories for the ``setup`` target:
.. code-block:: bash
mkdir -p tests/integration/targets/setup_abstract_service/tasks
3. Write all the tasks needed to prepare the environment, install, and run the service.
For simplicity, let's imagine that the service is available in the native distribution repositories and no sophisticated environment configuration is required.
Add the following tasks to the ``tests/integration/targets/setup_abstract_service/tasks/main.yml`` file to install and run the service:
.. code-block:: yaml

  - name: Install abstract service
    package:
      name: abstract_service

  - name: Run the service
    systemd:
      name: abstract_service
      state: started
This is a very simplified example.
4. Add the target for the module you are testing.
Let's say the module is called ``abstract_service_info``. Create the following directory structure in the target:
.. code-block:: bash
mkdir -p tests/integration/targets/abstract_service_info/tasks
mkdir -p tests/integration/targets/abstract_service_info/meta
Add all of the needed subdirectories. For example, if you are going to use defaults and files, add the ``defaults`` and ``files`` directories, and so on. The approach is the same as when you are creating a role.
5. To make the ``setup_abstract_service`` target run before the module's target, add the following lines to the ``tests/integration/targets/abstract_service_info/meta/main.yml`` file.
.. code-block:: yaml

  dependencies:
    - setup_abstract_service
6. Start with writing a single stand-alone task to check that your module can interact with the service.
We assume that the ``abstract_service_info`` module fetches some information from the ``abstract_service`` and that it has two connection parameters.
Among other fields, it returns a field called ``version`` containing a service version.
Add the following to ``tests/integration/targets/abstract_service_info/tasks/main.yml``:
.. code-block:: yaml

  - name: Fetch info from abstract service
    abstract_service_info:
      host: 127.0.0.1  # We assume the service accepts local connections by default
      port: 1234       # We assume that the service listens on this port by default
    register: result   # This variable will contain the returned JSON including the server version

  - name: Test the output
    assert:
      that:
        - result.version == '1.0.0'  # Check the version field contains what we expect
7. :ref:`Run the tests<collection_run_integration_tests>` with the ``-vvv`` argument.
If there are any issues with connectivity (for example, the service is not accepting connections) or with the code, the play will fail.
Examine the output to see at which step the failure occurred. Investigate the reason, fix, and run again. Repeat the cycle until the test passes.
8. If the test succeeds, write more tests. Refer to the :ref:`Recommendations on coverage<collection_integration_recommendations>` section for details.
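For example, a follow-up test could cover the failure path. Purely for illustration, we assume here that the module fails with a ``Connection refused`` message when it cannot reach the service:

.. code-block:: yaml

  - name: Try to fetch info from a port nobody is listening on
    abstract_service_info:
      host: 127.0.0.1
      port: 4321  # A hypothetical wrong port
    register: result
    ignore_errors: yes

  - name: Check the module failed with the error message we expect
    assert:
      that:
        - result is failed
        - result.msg is search('Connection refused')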
``community.postgresql`` example
---------------------------------
Here is a real example of writing integration tests from scratch for the ``community.postgresql.postgresql_info`` module.
For the sake of simplicity, we will create very basic tests which we will run using the Ubuntu 20.04 test container.
We use ``Linux`` as a work environment and have ``git`` and ``docker`` installed and running.
We also installed ``ansible-core``.
1. Create the following directories in your home directory:
.. code-block:: bash
mkdir -p ~/ansible_collections/community
2. Fork the `collection repository <https://github.com/ansible-collections/community.postgresql>`_ through the GitHub web interface.
3. Clone the forked repository from your profile to the created path:
.. code-block:: bash
git clone https://github.com/YOURACC/community.postgresql.git ~/ansible_collections/community/postgresql
If you prefer to use the SSH protocol:
.. code-block:: bash
git clone git@github.com:YOURACC/community.postgresql.git ~/ansible_collections/community/postgresql
4. Go to the cloned repository:
.. code-block:: bash
cd ~/ansible_collections/community/postgresql
5. Be sure you are in the default branch:
.. code-block:: bash
git status
6. Check out a test branch:
.. code-block:: bash
git checkout -b postgresql_info_tests
7. Since we already have tests for the ``postgresql_info`` module, we will run the following command:
.. code-block:: bash
rm -rf tests/integration/targets/*
With all of the targets now removed, the current state is as if we do not have any integration tests for the ``community.postgresql`` collection at all. We can now start writing integration tests from scratch.
8. We will start with creating a ``setup`` target that will install all required packages and will launch PostgreSQL. Create the following directories:
.. code-block:: bash
mkdir -p tests/integration/targets/setup_postgresql_db/tasks
9. Create the ``tests/integration/targets/setup_postgresql_db/tasks/main.yml`` file and add the following tasks to it:
.. code-block:: yaml

  - name: Install required packages
    package:
      name:
        - apt-utils
        - postgresql
        - postgresql-common
        - python3-psycopg2

  - name: Initialize PostgreSQL
    shell: . /usr/share/postgresql-common/maintscripts-functions && set_system_locale && /usr/bin/pg_createcluster -u postgres 12 main
    args:
      creates: /etc/postgresql/12/

  - name: Start PostgreSQL service
    service:
      name: postgresql
      state: started
That is enough for our very basic example.
10. Then, create the following directories for the ``postgresql_info`` target:
.. code-block:: bash
mkdir -p tests/integration/targets/postgresql_info/tasks tests/integration/targets/postgresql_info/meta
11. To make the ``setup_postgresql_db`` target run before the ``postgresql_info`` target as a dependency, create the ``tests/integration/targets/postgresql_info/meta/main.yml`` file and add the following code to it:
.. code-block:: yaml

  dependencies:
    - setup_postgresql_db
12. Now we are ready to add our first test task for the ``postgresql_info`` module. Create the ``tests/integration/targets/postgresql_info/tasks/main.yml`` file and add the following code to it:
.. code-block:: yaml

  - name: Test postgresql_info module
    become: yes
    become_user: postgres
    postgresql_info:
      login_user: postgres
      login_db: postgres
    register: result

  - name: Check the module returns what we expect
    assert:
      that:
        - result is not changed
        - result.version.major == 12
        - result.version.minor == 8
In the first task, we run the ``postgresql_info`` module to fetch information from the database we installed and launched with the ``setup_postgresql_db`` target. We are saving the values returned by the module into the ``result`` variable.
In the second task, we check the ``result`` variable (what the first task returned) with the ``assert`` module. We expect that, among other things, the result has the version and reports that the system state has not been changed.
13. Run the tests in the Ubuntu 20.04 docker container:
.. code-block:: bash
ansible-test integration postgresql_info --docker ubuntu2004 -vvv
The tests should pass. If we look at the output, we should see something like the following:
.. code-block:: shell

  TASK [postgresql_info : Check the module returns what we expect] ***************
  ok: [testhost] => {
      "changed": false,
      "msg": "All assertions passed"
  }
If your tests fail when you are working on your project, examine the output to see at which step the failure occurred. Investigate the reason, fix, and run again. Repeat the cycle until the test passes. If the test succeeds, write more tests. Refer to the :ref:`Recommendations on coverage<collection_integration_recommendations>` section for details.

.. _collection_run_integration_tests:
Running integration tests
============================
In the following examples, we will use ``Docker`` to run integration tests locally. Ensure you have :ref:`prepared your environment <collection_prepare_environment>` first.
We assume that you are in the ``~/ansible_collections/NAMESPACE/COLLECTION`` directory.
After you change the tests, you can run them with the following command:
.. code-block:: text
ansible-test integration <target_name> --docker <distro>
The ``target_name`` is a test role directory containing the tests. For example, if the test files you changed are stored in the ``tests/integration/targets/postgresql_info/`` directory and you want to use the ``fedora34`` container image, then the command will be:
.. code-block:: bash
ansible-test integration postgresql_info --docker fedora34
You can use the ``-vv`` or ``-vvv`` argument if you need more detailed output.
In the examples above, the ``fedora34`` test image will be automatically downloaded and used to create and run a test container.
See the :ref:`list of supported container images <test_container_images>`.
In some cases, for example, for platform independent tests, the ``default`` test image is required. Use the ``--docker default`` or just ``--docker`` option without specifying a distribution in this case.
.. note::

  If you have any difficulties with writing or running integration tests or you are not sure if the case can be covered, submit your pull request without the tests. Other contributors can help you with them later if needed.

.. _collection_integration_tests:
*****************************************
Adding integration tests to a collection
*****************************************
This section describes the steps to add integration tests to a collection and how to run them locally using the ``ansible-test`` command.
.. toctree::
   :maxdepth: 1

   collection_integration_about
   collection_integration_updating
   collection_integration_running
   collection_integration_add
.. seealso::

   :ref:`testing_units_modules`
       Unit testing Ansible modules
   `pytest <https://docs.pytest.org/en/latest/>`_
       Pytest framework documentation
   :ref:`developing_testing`
       Ansible Testing Guide
   :ref:`collection_unit_tests`
       Unit testing for collections
   :ref:`testing_integration`
       Integration tests guide
   :ref:`testing_collections`
       Testing collections
   :ref:`testing_resource_modules`
       Resource module integration tests
   :ref:`collection_pr_test`
       How to test a pull request locally

.. _collection_updating_integration_tests:
Adding to an existing integration test
=======================================
The test tasks are stored in the ``tests/integration/targets/<target_name>/tasks`` directory.
The ``main.yml`` file holds test tasks and includes other test files.
Look for a suitable test file to integrate your tests or create and include / import a separate test file.
You can use one of the existing test files as a draft.
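For instance, if you create a separate file for your tests, you include it from ``main.yml``. The file name below is hypothetical:

.. code-block:: yaml

  # tests/integration/targets/postgresql_user/tasks/main.yml
  - import_tasks: postgresql_user_underscore.yml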
When fixing a bug
-----------------
When fixing a bug:
1. :ref:`Determine if integration tests for the module exist<collection_integration_prepare>`. If they do not, see :ref:`collection_creating_integration_tests` section.
2. Add a task which reproduces the bug to an appropriate file within the ``tests/integration/targets/<target_name>/tasks`` directory.
3. :ref:`Run the tests<collection_run_integration_tests>`. The newly added task should fail.
4. If they do not fail, re-check if your environment / test task satisfies the conditions described in the ``Steps to Reproduce`` section of the issue.
5. If you reproduce the bug and tests fail, change the code.
6. :ref:`Run the tests<collection_run_integration_tests>` again.
7. If they fail, repeat steps 5-6 until the tests pass.
Here is an example.
Let's say someone reported an issue in the ``community.postgresql`` collection that when users pass a name containing underscores to the ``postgresql_user`` module, the module fails.
We cloned the collection repository to the ``~/ansible_collections/community/postgresql`` directory and :ref:`prepared our environment <collection_prepare_environment>`. From the collection's root directory, we run ``ansible-test integration --list-targets`` and it shows a target called ``postgresql_user``. It means that we already have tests for the module.
We start with reproducing the bug.
First, we look into the ``tests/integration/targets/postgresql_user/tasks/main.yml`` file. In this particular case, the file imports other files from the ``tasks`` directory. The ``postgresql_user_general.yml`` file looks like an appropriate one to add our tests to.
.. code-block:: yaml

  # General tests:
  - import_tasks: postgresql_user_general.yml
    when: postgres_version_resp.stdout is version('9.4', '>=')
We will add the following code to the file.
.. code-block:: yaml

  # https://github.com/ansible-collections/community.postgresql/issues/NUM
  - name: Test user name containing underscore
    postgresql_user:
      name: underscored_user
    register: result

  - name: Check the module returns what we expect
    assert:
      that:
        - result is changed

  - name: Query the database if the user exists
    postgresql_query:
      query: SELECT * FROM pg_authid WHERE rolname = 'underscored_user'
    register: result

  - name: Check the database returns one row
    assert:
      that:
        - result.rowcount == 1
When we :ref:`run the tests<collection_run_integration_tests>` with ``postgresql_user`` as a test target, this task must fail.
Now that we have a failing test, we will fix the bug and run the same tests again. Once the tests pass, we will consider the bug fixed and will submit a pull request.
When adding a new feature
-------------------------
.. note::

  The process described in this section also applies when you want to add integration tests to a feature that already exists, but is missing integration tests.
If you have not already implemented the new feature, you can start with writing the integration tests for it. Of course they will not work as the code does not yet exist, but it can help you improve your implementation design before you start writing any code.
When adding new features, the process of adding tests consists of the following steps:
1. :ref:`Determine if integration tests for the module exist<collection_integration_prepare>`. If they do not, see :ref:`collection_creating_integration_tests`.
2. Find an appropriate file for your tests within the ``tests/integration/targets/<target_name>/tasks`` directory.
3. Cover your feature with tests. Refer to the :ref:`Recommendations on coverage<collection_integration_recommendations>` section for details.
4. :ref:`Run the tests<collection_run_integration_tests>`.
5. If they fail, see the test output for details. Fix your code or tests and run the tests again.
6. Repeat steps 4-5 until the tests pass.
Here is an example.
Let's say we decided to add a new option called ``add_attribute`` to the ``postgresql_user`` module of the ``community.postgresql`` collection.
The option is boolean. If set to ``yes``, it adds an additional attribute to a database user.
We cloned the collection repository to the ``~/ansible_collections/community/postgresql`` directory and :ref:`prepared our environment<collection_integration_prepare>`. From the collection's root directory, we run ``ansible-test integration --list-targets`` and it shows a target called ``postgresql_user``. Therefore, we already have some tests for the module.
First, we look at the ``tests/integration/targets/<target_name>/tasks/main.yml`` file. In this particular case, the file imports other files from the ``tasks`` directory. The ``postgresql_user_general.yml`` file looks like an appropriate one to add our tests.
.. code-block:: yaml

  # General tests:
  - import_tasks: postgresql_user_general.yml
    when: postgres_version_resp.stdout is version('9.4', '>=')
We will add the following code to the file.
.. code-block:: yaml

  # https://github.com/ansible-collections/community.postgresql/issues/NUM
  # We should also run the same tasks with check_mode: yes. We omit them here for simplicity.
  - name: Test for new_option, create new user WITHOUT the attribute
    postgresql_user:
      name: test_user
      add_attribute: no
    register: result

  - name: Check the module returns what we expect
    assert:
      that:
        - result is changed

  - name: Query the database if the user exists but does not have the attribute (it is NULL)
    postgresql_query:
      query: SELECT * FROM pg_authid WHERE rolname = 'test_user' AND attribute IS NULL
    register: result

  - name: Check the database returns one row
    assert:
      that:
        - result.rowcount == 1

  - name: Test for new_option, create new user WITH the attribute
    postgresql_user:
      name: test_user
      add_attribute: yes
    register: result

  - name: Check the module returns what we expect
    assert:
      that:
        - result is changed

  - name: Query the database if the user has the attribute (it is TRUE)
    postgresql_query:
      query: SELECT * FROM pg_authid WHERE rolname = 'test_user' AND attribute = 't'
    register: result

  - name: Check the database returns one row
    assert:
      that:
        - result.rowcount == 1
Then we :ref:`run the tests<collection_run_integration_tests>` with ``postgresql_user`` passed as a test target.
In reality, we would alternate the tasks above with the same tasks run with the ``check_mode: yes`` option to be sure our option works as expected in check-mode as well. See :ref:`Recommendations on coverage<collection_integration_recommendations>` for details.
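A check-mode variant of such a task might look like the following sketch, still assuming the hypothetical ``add_attribute`` option and ``attribute`` column from the example above:

.. code-block:: yaml

  - name: Test for new_option in check_mode
    postgresql_user:
      name: test_user
      add_attribute: yes
    check_mode: yes
    register: result

  - name: The module must report a change without making it
    assert:
      that:
        - result is changed

  - name: Check the attribute has not actually been set
    postgresql_query:
      query: SELECT * FROM pg_authid WHERE rolname = 'test_user' AND attribute IS NULL
    register: result

  - name: Check the database still returns one row without the attribute
    assert:
      that:
        - result.rowcount == 1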
If we expect a task to fail, we use the ``ignore_errors: yes`` option and check that the task actually failed and returned the message we expect:
.. code-block:: yaml

  - name: Test for fail_when_true option
    postgresql_user:
      name: test_user
      fail_when_true: yes
    register: result
    ignore_errors: yes

  - name: Check the module fails and returns the message we expect
    assert:
      that:
        - result is failed
        - result.msg == 'The message we expect'

.. _collection_pr_test:
****************************
How to test a collection PR
****************************
Reviewers and issue authors can verify a PR fixes the reported bug by testing the PR locally.
.. contents::
   :local:
.. _collection_prepare_environment:
Prepare your environment
========================
We assume that you use Linux as a work environment (you can use a virtual machine as well) and have ``git`` installed.
1. :ref:`Install Ansible <installation_guide>` or ansible-core.
2. Create the following directories in your home directory:
.. code:: bash
mkdir -p ~/ansible_collections/NAMESPACE/COLLECTION_NAME
For example, if the collection is ``community.general``:
.. code:: bash
mkdir -p ~/ansible_collections/community/general
If the collection is ``ansible.posix``:
.. code:: bash
mkdir -p ~/ansible_collections/ansible/posix
3. Clone the forked repository from the author profile to the created path:
.. code:: bash
git clone https://github.com/AUTHOR_ACC/COLLECTION_REPO.git ~/ansible_collections/NAMESPACE/COLLECTION_NAME
4. Go to the cloned repository.
.. code:: bash
cd ~/ansible_collections/NAMESPACE/COLLECTION_NAME
5. Check out the PR branch (you can find the branch name on the PR page):
.. code:: bash
git checkout pr_branch
Test the Pull Request
=====================
1. Include ``~/ansible_collections`` in ``COLLECTIONS_PATHS``. See :ref:`COLLECTIONS_PATHS` for details.
2. Run your playbook using the PR branch and verify the PR fixed the bug.
3. Give feedback on the pull request or the linked issue(s).

.. _collection_unit_tests:
******************************
Add unit tests to a collection
******************************
This section describes all of the steps needed to add unit tests to a collection and how to run them locally using the ``ansible-test`` command.
See :ref:`testing_units_modules` for more details.
.. contents::
   :local:
Understanding the purpose of unit tests
========================================
Unit tests ensure that a section of code (known as a ``unit``) meets its design requirements and behaves as intended. Some collections do not have unit tests, but that does not mean they are not needed.
A ``unit`` is a function or method of a class used in a module or plugin. Unit tests verify that a function with a certain input returns the expected output.
Unit tests should also verify when a function raises or handles exceptions.
Ansible uses `pytest <https://docs.pytest.org/en/latest/>`_ as a testing framework.
See :ref:`testing_units_modules` for complete details.
Inclusion in the Ansible package `requires integration and/or unit tests <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst#requirements-for-collections-to-be-included-in-the-ansible-package>`_. You should have tests for your collection as well as for individual modules and plugins to make your code more reliable. To learn how to get started with integration tests, see :ref:`collection_integration_tests`.
See :ref:`collection_prepare_local` to prepare your environment.
.. _collection_unit_test_required:
Determine if unit tests exist
=============================
Ansible collection unit tests are located in the ``tests/unit`` directory.

The structure of the unit tests matches the structure of the code base, so the tests can reside in the ``tests/unit/plugins/modules/`` and ``tests/unit/plugins/module_utils`` directories. There can be sub-directories if modules are organized by module groups.

If you are adding unit tests for ``my_module``, for example, check to see if the tests already exist in the collection source tree at the path ``tests/unit/plugins/modules/test_my_module.py``.
Example of unit tests
=====================
Let's assume that the following function is in ``my_module``:

.. code:: python

  import datetime
  import decimal


  def convert_to_supported(val):
      """Convert unsupported types to appropriate."""
      if isinstance(val, decimal.Decimal):
          return float(val)
      if isinstance(val, datetime.timedelta):
          return str(val)
      if val == 42:
          raise ValueError("This number is just too cool for us ;)")
      return val
Unit tests for this function should, at a minimum, check the following:

* If the function gets a ``Decimal`` argument, it returns a corresponding ``float`` value.
* If the function gets a ``timedelta`` argument, it returns a corresponding ``str`` value.
* If the function gets ``42`` as an argument, it raises a ``ValueError``.
* If the function gets an argument of any other type, it does nothing and returns the same value.
To write these unit tests for a collection called ``community.mycollection``:
1. If you already have your local environment :ref:`prepared <collection_prepare_local>`, go to the collection root directory.
.. code:: bash

  cd ~/ansible_collections/community/mycollection
2. Create a test file for ``my_module``. If the path does not exist, create it.
.. code:: bash

  touch tests/unit/plugins/modules/test_my_module.py
3. Add the following code to the file:
.. code:: python

  # -*- coding: utf-8 -*-
  from __future__ import (absolute_import, division, print_function)
  __metaclass__ = type

  from datetime import timedelta
  from decimal import Decimal

  import pytest

  from ansible_collections.community.mycollection.plugins.modules.my_module import (
      convert_to_supported,
  )

  # We use the @pytest.mark.parametrize decorator to parametrize the function
  # https://docs.pytest.org/en/latest/how-to/parametrize.html
  # Simply put, the first element of each tuple will be passed to
  # the test_convert_to_supported function as the test_input argument
  # and the second element of each tuple will be passed as
  # the expected argument.
  # In the function's body, we use the assert statement to check
  # if the convert_to_supported function, given the test_input,
  # returns what we expect.
  @pytest.mark.parametrize('test_input, expected', [
      (timedelta(0, 43200), '12:00:00'),
      (Decimal('1.01'), 1.01),
      ('string', 'string'),
      (None, None),
      (1, 1),
  ])
  def test_convert_to_supported(test_input, expected):
      assert convert_to_supported(test_input) == expected


  def test_convert_to_supported_exception():
      with pytest.raises(ValueError, match=r"too cool"):
          convert_to_supported(42)
See :ref:`testing_units_modules` for examples on how to mock ``AnsibleModule`` objects, monkeypatch methods (``module.fail_json``, ``module.exit_json``), emulate API responses, and more.
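One common pattern from that guide — substituting ``fail_json`` so a failure can be asserted on directly — can be sketched without any Ansible imports. The ``FakeAnsibleModule`` class and ``ensure_positive`` helper below are hypothetical stand-ins, not part of any real collection:

```python
import pytest


class FakeAnsibleModule:
    """Hypothetical stand-in for AnsibleModule."""

    def fail_json(self, **kwargs):
        # The real fail_json exits the process; raising keeps the test runnable.
        raise SystemExit(kwargs.get('msg', ''))


def ensure_positive(module, value):
    """Hypothetical helper under test."""
    if value < 0:
        module.fail_json(msg='value must be non-negative')
    return value


def test_ensure_positive_failure():
    with pytest.raises(SystemExit, match='non-negative'):
        ensure_positive(FakeAnsibleModule(), -1)
```

Real collection tests usually monkeypatch the methods of the actual ``AnsibleModule`` object instead of substituting a fake class; see the linked guide for those patterns.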
4. Run the tests using Docker:
.. code:: bash
ansible-test units tests/unit/plugins/modules/test_my_module.py --docker
.. _collection_recommendation_unit:
Recommendations on coverage
===========================
Use the following tips to organize your code and test coverage:
* Make your functions simple. Small functions that do one thing with no or minimal side effects are easier to test.
* Test all possible behaviors of a function, including exception-related ones such as raising, catching, and handling exceptions.
* When a function invokes the ``module.fail_json`` method, check the message passed to it as well.
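As a minimal sketch of that last recommendation (the ``ModuleStub`` class and ``delete_user`` function are hypothetical), a recording stub lets the test assert on the exact message passed to ``fail_json``:

```python
class FailJsonCalled(Exception):
    """Raised by the stub so execution stops, as the real fail_json would."""


class ModuleStub:
    """Hypothetical stand-in that records the arguments fail_json receives."""

    def __init__(self):
        self.fail_args = None

    def fail_json(self, **kwargs):
        self.fail_args = kwargs
        raise FailJsonCalled()


def delete_user(module, name):
    """Hypothetical function under test."""
    if name == 'admin':
        module.fail_json(msg='cannot delete the admin user')


def test_delete_admin_fails_with_message():
    module = ModuleStub()
    try:
        delete_user(module, 'admin')
    except FailJsonCalled:
        pass
    # Check the message, not just the fact that fail_json was called.
    assert module.fail_args['msg'] == 'cannot delete the admin user'
```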
.. seealso::
:ref:`testing_units_modules`
Unit testing Ansible modules
:ref:`developing_testing`
Ansible Testing Guide
:ref:`collection_integration_tests`
Integration testing for collections
:ref:`testing_integration`
Integration tests guide
:ref:`testing_collections`
Testing collections
:ref:`testing_resource_modules`
Resource module integration tests
:ref:`collection_pr_test`
How to test a pull request locally

@ -0,0 +1,14 @@
.. _testing_collections_guide:
**********************************************
Testing Collection Contributions
**********************************************
This section focuses on the different tests a contributor should run on their collection PR.
.. toctree::
:maxdepth: 1
collection_test_pr_locally
collection_unit_tests
collection_integration_tests

@ -1,4 +1,4 @@
.. _collections_contributions:
*************************************
Ansible Collections Contributor Guide
@ -10,11 +10,10 @@ Ansible Collections Contributor Guide
collection_development_process
reporting_collections
create_pr_quick_start
collection_contributors/test_index
documentation_contributions
maintainers
contributing_maintained_collections
other_tools_and_programs
If you have a specific Ansible interest or expertise (for example, VMware, Linode, and so on), consider joining a :ref:`working group <working_group_list>`.

@ -6,6 +6,7 @@ Creating your first collection pull request
This section describes all steps needed to create your first patch and submit a pull request on a collection.
.. _collection_prepare_local:
Prepare your environment
========================

@ -55,6 +55,8 @@ To run only a specific unit test:
You can specify Python requirements in the ``tests/unit/requirements.txt`` file. See :ref:`testing_units` for more information, especially on fixture files.
.. _collections_adding_integration_test:
Adding integration tests
------------------------

@ -97,12 +97,20 @@ exclude_patterns = [
'galaxy',
'network',
'scenario_guides',
'community/collection_contributors/test_index.rst',
'community/collection_contributors/collection_integration_about.rst',
'community/collection_contributors/collection_integration_updating.rst',
'community/collection_contributors/collection_integration_add.rst',
'community/collection_contributors/collection_test_pr_locally.rst',
'community/collection_contributors/collection_integration_tests.rst',
'community/collection_contributors/collection_integration_running.rst',
'community/collection_contributors/collection_unit_tests.rst',
'community/maintainers.rst',
'community/contributions_collections.rst',
'community/create_pr_quick_start.rst',
'community/reporting_collections.rst',
'community/contributing_maintained_collections.rst',
'community/collection_development_process.rst',
'community/maintainers.rst',
'community/maintainers_guidelines.rst',
'community/maintainers_workflow.rst',
'porting_guides/porting_guides.rst',
