These tests will modify files in subdirectories, but will not install or remove packages, nor touch anything outside of those test subdirectories. They will also not reconfigure or bounce system services.
.. note:: Running integration tests within Docker

   To protect your system from any potential changes caused by integration tests, and to ensure that a sensible set of dependencies is available, we recommend that you always run integration tests with the ``--docker`` option. See the `list of supported docker images <https://github.com/ansible/ansible/blob/devel/test/runner/completion/docker.txt>`_ for options.
.. note:: Avoiding pulling new Docker images

   Use the ``--docker-no-pull`` option to avoid pulling the latest container image. This is required when using custom local images that are not available for download.
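   For example, a run against a locally built image might look like the following (the image name and test target shown are illustrative)::

      ansible-test integration --docker my-local-image --docker-no-pull -v ping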
Run as follows for all POSIX platform tests executed by our CI system (the container image and target alias shown below are illustrative; pick any supported image)::
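   ansible-test integration --docker fedora25 -v posix/ci/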
Ansible needs fairly wide-ranging permissions to run the tests in an AWS account. These rights can be provided to a dedicated user, and need to be configured before running the tests.
testing-iam-policy.json.j2
--------------------------
The ``testing-iam-policy.json.j2`` file contains a policy which can be given to the user
running the tests to minimize that user's rights. Please note that while this policy does limit the user to one region, it does not fully restrict the user (primarily due to the limitations of Amazon ARN notation). The user will still have wide privileges for viewing account definitions, and will also be able to manage some resources that are not related to testing (for example, AWS Lambda functions with different names). Tests should not be run in a primary production account in any case.
Other Definitions required
--------------------------
Apart from installing the policy and giving it to the user identity running the tests, a
Lambda role ``ansible_integration_tests`` has to be created which has the Lambda basic execution rights attached.
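One way to create that role is sketched below with the AWS CLI; this assumes the CLI is already configured with credentials that may manage IAM, and uses the AWS-managed ``AWSLambdaBasicExecutionRole`` policy::

   # Create the role with a trust policy that lets the Lambda service assume it
   aws iam create-role \
       --role-name ansible_integration_tests \
       --assume-role-policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'

   # Attach the AWS-managed basic execution policy to the role
   aws iam attach-role-policy \
       --role-name ansible_integration_tests \
       --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole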
* To run the network tests you will need a number of test machines and a suitably configured inventory file. A sample is included in ``test/integration/inventory.network``.
* As with the rest of the integration tests, they can be found grouped by module in ``test/integration/targets/MODULENAME/``.
To filter a set of test cases, set ``limit_to`` to the name of the group; generally this is the name of the module. For example (the module name shown below is illustrative)::
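   ansible-playbook -i test/integration/inventory.network test/integration/network-all.yaml -e "limit_to=eos_facts"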
In addition to positive testing, negative tests are required to ensure user-friendly warnings and errors are generated, rather than backtraces, for example:
.. code-block:: yaml

   - name: test invalid subset (foobar)
     eos_facts:
       provider: "{{ cli }}"
       gather_subset:
         - "foobar"
     register: result
     ignore_errors: true

   - assert:
       that:
         # Failures shouldn't return changes
         - "result.changed == false"
         # It's a failure
         - "result.failed == true"
         # Sensible Failure message
         - "'Subset must be one of' in result.msg"
Conventions
```````````
- Each test case should generally follow the pattern:
  setup -> test -> assert -> test again (idempotent) -> assert -> teardown (if needed) -> done
  (a sketch of this pattern appears after this list). This keeps test playbooks from becoming
  monolithic and difficult to troubleshoot.
- Include a name for each task that is not an assertion. (It's OK to add names
to assertions too. But to make it easy to identify the broken task within a failed
test, at least provide a helpful name for each task.)
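A minimal sketch of that pattern is shown below; the ``my_module`` module, resource name and task names are purely illustrative.

.. code-block:: yaml

   # Setup: put the system into a known state (illustrative module and arguments)
   - name: remove the test resource if it already exists
     my_module:
       name: test_resource
       state: absent

   # Test: make a change and record the result
   - name: create the test resource
     my_module:
       name: test_resource
       state: present
     register: result

   - assert:
       that:
         - "result.changed == true"

   # Test again: the same task should report no change (idempotence)
   - name: create the test resource again
     my_module:
       name: test_resource
       state: present
     register: result

   - assert:
       that:
         - "result.changed == false"

   # Teardown (if needed): leave the system as it was found
   - name: remove the test resource
     my_module:
       name: test_resource
       state: absent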
A top-level playbook is required, such as ``ansible/test/integration/eos.yaml``, which needs to be referenced by ``ansible/test/integration/network-all.yaml``.
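As a rough sketch (the exact wiring may differ; check the existing platform playbooks in ``ansible/test/integration/``), such a playbook can be a play against the ``eos`` inventory group that pulls in the per-module targets as roles, gated on ``limit_to``:

.. code-block:: yaml

   # Illustrative only: a platform playbook that runs the eos_* targets as roles
   - hosts: eos
     gather_facts: no
     connection: local
     roles:
       - { role: eos_facts, when: "limit_to in ['*', 'eos_facts']" }
       - { role: eos_command, when: "limit_to in ['*', 'eos_command']" }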
If you'd like to know more about the plans for improving the testing of Ansible, why not join the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_.