mirror of https://github.com/ansible/ansible.git
Docs how to test (2nd) (#24094)

* Big testing doc refactor
* Combine all the testing documentation in one place to make it easier to find
* Convert everything to RST
* Create testing_network guide
* Create testing landing page
* For each section detail "how to run" and "how to extend testing"
* More examples
* Lots more detail

Branch: pull/24097/head
parent bc22223d63
commit ecbf8e933a
@ -1,205 +0,0 @@
Helping Testing PRs
```````````````````

If you're a developer, one of the most valuable things you can do is look at the GitHub issues list and help fix bugs. We almost always prioritize bug fixing over
feature development, so clearing bugs out of the way is one of the best things you can do.

Even if you're not a developer, helping to test pull requests for bug fixes and features is still immensely valuable.

This goes for testing new features as well as testing bugfixes.

In many cases, code should add tests that prove it works, but that's not always possible and tests are not always comprehensive, especially when a user doesn't have access
to a wide variety of platforms, or is using an API or web service.

In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces.
In any case, things should always be tested manually the first time too.

Thankfully, helping to test Ansible is pretty straightforward, assuming you are already familiar with how Ansible works.

Get Started with A Source Checkout
++++++++++++++++++++++++++++++++++

You can do this by checking out Ansible, making a test branch off the devel branch, merging a GitHub pull request, testing,
and then commenting on that particular issue on GitHub. Here's how:

.. note::
   Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
   sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
   doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
   or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
   other flavors, since some features (apt vs. yum, for example) are specific to those OS versions.

First, you will need to configure your testing environment with the necessary tools required to run our test
suites. You will need at least::

    git
    python-nosetests (sometimes named python-nose)
    python-passlib
    python-mock

If you want to run the full integration test suite you'll also need the following packages installed::

    svn
    hg
    python-pip
    gem

Second, if you haven't already, clone the Ansible source code from GitHub::

    git clone https://github.com/ansible/ansible.git --recursive
    cd ansible/

.. note::
   If you have previously forked the repository on GitHub, you could also clone it from there.

.. note::
   If updating your repo for testing something module related, use "git rebase origin/devel" and then "git submodule update" to fetch
   the latest development versions of modules. Skipping the "git submodule update" step will result in stale module versions.

Activating The Source Checkout
++++++++++++++++++++++++++++++

The Ansible source includes a script that allows you to use Ansible directly from source without requiring a
full installation; it is frequently used by developers on Ansible.

Simply source it (to use the Linux/Unix terminology) to begin using it immediately::

    source ./hacking/env-setup

This script modifies the ``PYTHONPATH`` environment variable (along with a few other things), which will remain
set for as long as your shell session is open.

If you'd like your testing environment to always use the latest source, you could call the command from startup scripts (for example,
`.bash_profile`).

Finding A Pull Request and Checking It Out On A Branch
++++++++++++++++++++++++++++++++++++++++++++++++++++++

Next, find the pull request you'd like to test and make note of the line at the top which describes the source
and destination repositories. It will look something like this::

    Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name

.. note::
   It is important that the PR request target be ansible:devel, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.

The username and branch at the end are the important parts, which will be turned into git commands as follows::

    git checkout -b testing_PRXXXX devel
    git pull https://github.com/someuser/ansible.git feature_branch_name

The first command creates and switches to a new branch named testing_PRXXXX, where the XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the devel branch. The second command pulls the new code from the user's feature branch into the newly created branch.

.. note::
   If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.

.. note::
   Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of `devel`. If the source looks like `someuser:devel`, make sure there is only one commit listed on the pull request.

Finding a Pull Request for Ansible Modules
++++++++++++++++++++++++++++++++++++++++++

Ansible modules are in separate repositories, which are managed as Git submodules. Here's a step-by-step process for checking out a PR for an Ansible extras module, for instance:

1. ``git clone https://github.com/ansible/ansible.git``
2. ``cd ansible``
3. ``git submodule init``
4. ``git submodule update --recursive`` (fetches the submodules)
5. ``cd lib/ansible/modules/extras``
6. ``git fetch origin pull/1234/head:pr/1234`` (fetches the specific PR)
7. ``git checkout pr/1234`` (do your testing here)
8. ``cd /path/to/ansible/clone``
9. ``git submodule update --recursive``

For Those About To Test, We Salute You
++++++++++++++++++++++++++++++++++++++

At this point, you should be ready to begin testing!

If the PR is a bug-fix pull request, the first things to do are to run the suite of unit and integration tests, to ensure
the pull request does not break current functionality:

.. code-block:: shell-session

    # Unit Tests
    make tests

    # Integration Tests
    make -C test/integration

    # Run specific integration test-target 'file' (as root)
    sudo ./test/runner/ansible-test integration file -vv

    # Run specific integration test-target 'file' (in docker)
    ./test/runner/ansible-test integration file --docker

.. note::
   Ansible does provide integration tests for cloud-based modules as well; however, we do not recommend them for every user
   due to the associated costs from the cloud providers. As such, it's typically better to run specific parts of the integration battery
   and skip those tests.

Integration tests aren't the be-all and end-all - in many cases what is fixed might not *have* a test, so determining whether it works means
checking the functionality of the system and making sure it does what it said it would do.

Pull requests for bug-fixes should reference the bug issue number they are fixing.

We encourage users to provide playbook examples for bugs that show how to reproduce the error, and, if available, these playbooks should be used to verify that the bugfix resolves
the issue. You may wish to also do your own review to poke the corners of the change.

Since some reproducers can be quite involved, you might wish to create a testing directory with the issue # as a subdirectory
to keep things organized:

.. code-block:: shell-session

    mkdir -p testing/XXXX  # where XXXX is again the issue # for the original issue or PR
    cd testing/XXXX
    # create files or git clone example playbook repo

While it should go without saying, be sure to read any playbooks before you run them. VMs greatly help with running untrusted content,
though a playbook could still do something to your computing resources that you'd rather it didn't.

Once the files are in place, you can run the provided playbook (if there is one) to test the functionality:

.. code-block:: shell-session

    ansible-playbook -vvv playbook_name.yml

If there's no playbook, you may have to copy and paste playbook snippets or run an ad-hoc command that was pasted in.

Our issue template also includes sections for "Expected Output" and "Actual Output", which should be used to gauge the output
from the provided examples.

If the pull request resolves the issue, please leave a comment on the pull request, showing the following information:

* "Works for me!"
* The output from `ansible --version`.

In some cases, you may wish to share playbook output from the test run as well.

Example:

| Works for me! Tested on `Ansible 1.7.1`. I verified this on CentOS 6.5 and also Ubuntu 14.04.

If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:

| This doesn't work for me.
|
| When I ran this on Ubuntu 16.04 it failed with the following:
|
| \```
| BLARG
| StrackTrace
| RRRARRGGG
| \```

When you are done testing a feature branch, you can remove it with the following command:

.. code-block:: shell-session

    $ git branch -D someuser-feature_branch_name

We understand some users may be inexperienced with git, or other aspects of
the above procedure, so feel free to stop by the ansible-devel list for questions
and we'd be happy to help answer them.

@ -0,0 +1,199 @@
***************
Testing Ansible
***************

.. contents:: Topics

Introduction
============

This document describes:

* how Ansible is tested
* how to test Ansible locally
* how to extend the testing capabilities

Types of tests
==============

At a high level we have the following classifications of tests:

:compile:
  * :doc:`testing_compile`
  * Test python code against a variety of Python versions.
:sanity:
  * :doc:`testing_sanity`
  * Sanity tests are made up of scripts and tools used to perform static code analysis.
  * The primary purpose of these tests is to enforce Ansible coding standards and requirements.
:integration:
  * :doc:`testing_integration`
  * Functional tests of modules and Ansible core functionality.
:units:
  * :doc:`testing_units`
  * Tests directly against individual parts of the code base.

Testing within GitHub & Shippable
=================================

Organization
------------

When Pull Requests (PRs) are created they are tested using Shippable, a Continuous Integration (CI) tool. Results are shown at the end of every PR.

When Shippable detects an error that can be linked back to a file modified in the PR, the relevant lines are added as a GitHub comment. For example::

   The test `ansible-test sanity --test pep8` failed with the following errors:

   lib/ansible/modules/network/foo/bar.py:509:17: E265 block comment should start with '# '

   The test `ansible-test sanity --test validate-modules` failed with the following errors:
   lib/ansible/modules/network/foo/bar.py:0:0: E307 version_added should be 2.4. Currently 2.3
   lib/ansible/modules/network/foo/bar.py:0:0: E316 ANSIBLE_METADATA.metadata_version: required key not provided @ data['metadata_version']. Got None

From the above example we can see that ``--test pep8`` and ``--test validate-modules`` have identified issues. The commands given allow you to run the same tests locally to ensure you've fixed the issues without having to push your changes to GitHub and wait for Shippable, for example:

TBD

If you haven't already got Ansible available, use the local checkout by running::

   source hacking/env-setup

Then run the tests detailed in the GitHub comment::

   ansible-test sanity --test pep8
   ansible-test sanity --test validate-modules

If there isn't a GitHub comment stating what's failed, you can inspect the results by clicking on the "Details" button under the "checks have failed" message at the end of the PR.

Rerunning a failing CI job
--------------------------

Occasionally you may find your PR fails due to a reason unrelated to your change. This could happen for several reasons, including:

* a temporary issue accessing an external resource, such as a yum or git repo
* a timeout creating a virtual machine to run the tests on

If either of these issues appear to be the case, you can rerun the Shippable test by:

* closing and re-opening the PR
* making another change to the PR and pushing to GitHub

If the issue persists, please contact us in ``#ansible-devel`` on Freenode IRC.

How to test a PR
================

If you're a developer, one of the most valuable things you can do is look at the GitHub issues list and help fix bugs. We almost always prioritize bug fixing over feature development, so helping to fix bugs is one of the best things you can do.

Even if you're not a developer, helping to test pull requests for bug fixes and features is still immensely valuable.

Ideally, code should add tests that prove that the code works. That's not always possible and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.

Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.

Setup: Checking out a Pull Request
----------------------------------

You can do this by:

* checking out Ansible
* making a test branch off the main branch
* merging the pull request
* testing
* commenting on that particular issue on GitHub

Here's how:

.. warning::
   Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
   sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
   doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
   or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
   other flavors, since some features (apt vs. yum, for example) are specific to those OS versions.

Create a fresh area to work in::

   git clone https://github.com/ansible/ansible.git ansible-pr-testing
   cd ansible-pr-testing

Next, find the pull request you'd like to test and make note of the line at the top which describes the source
and destination repositories. It will look something like this::

   Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name

.. note:: Only test ``ansible:devel``

   It is important that the PR request target be ansible:devel, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.

The username and branch at the end are the important parts, which will be turned into git commands as follows::

   git checkout -b testing_PRXXXX devel
   git pull https://github.com/someuser/ansible.git feature_branch_name

The first command creates and switches to a new branch named ``testing_PRXXXX``, where the XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the ``devel`` branch. The second command pulls the new code from the user's feature branch into the newly created branch.

.. note::
   If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.

.. note::
   Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of ``devel``. If the source looks like ``someuser:devel``, make sure there is only one commit listed on the pull request.

The Ansible source includes a script that allows you to use Ansible directly from source without requiring a
full installation; it is frequently used by developers on Ansible.

Simply source it (to use the Linux/Unix terminology) to begin using it immediately::

   source ./hacking/env-setup

This script modifies the ``PYTHONPATH`` environment variable (along with a few other things), which will remain
set for as long as your shell session is open.

Testing the Pull Request
------------------------

At this point, you should be ready to begin testing!

Some ideas of what to test are:

* Create a test playbook based on the examples in the PR and check that they function correctly
* Check whether any Python tracebacks are returned (that's a bug)
* Test on different operating systems, or against different library versions
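
The first two ideas above can be combined into a small scratch playbook run at maximum verbosity. This is only a sketch: the ``ping`` task is a placeholder for whatever module or examples the PR under test actually documents, and the guard assumes you may not yet have sourced ``env-setup``:

```shell
# Write a throwaway playbook; replace the ping task with the
# examples documented in the PR under test (ping is a placeholder).
cat > test_pr.yml <<'EOF'
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: exercise the change from the PR
      ping:
EOF

# Run at maximum verbosity so any Python traceback is visible.
# Guarded, in case ansible-playbook is not on PATH yet.
if command -v ansible-playbook >/dev/null 2>&1; then
    ansible-playbook -vvv test_pr.yml || echo "playbook run failed -- inspect the traceback above"
else
    echo "ansible-playbook not found; run 'source ./hacking/env-setup' first"
fi
```

A ``-vvv`` run prints the full module invocation and result, which makes any traceback from the changed code easy to spot.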

Any potential issues should be added as comments on the pull request (and it's acceptable to comment if the feature works as well), remembering to include the output of ``ansible --version``.

Example::

   Works for me! Tested on `Ansible 2.3.0`. I verified this on CentOS 6.5 and also Ubuntu 14.04.

If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:

| This doesn't work for me.
|
| When I ran this on Ubuntu 16.04 it failed with the following:
|
| \```
| BLARG
| StrackTrace
| RRRARRGGG
| \```

Want to know more about testing?
================================

If you'd like to know more about the plans for improving the testing of Ansible, why not join the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_.

@ -0,0 +1,69 @@
*************
Compile Tests
*************

.. contents:: Topics

Overview
========

Compile tests check source files for valid syntax on all supported python versions:

- 2.4 (Ansible 2.3 only)
- 2.6
- 2.7
- 3.5
- 3.6
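
Conceptually, a compile test amounts to byte-compiling each source file under a target interpreter and reporting syntax errors. A rough stand-alone illustration (using whichever ``python3`` is on your PATH, not the full version matrix above):

```shell
# Create a trivially valid module and byte-compile it; a syntax error
# would make py_compile exit non-zero, which is what compile tests
# detect for each supported interpreter.
cat > sample_module.py <<'EOF'
def main():
    return "ok"
EOF
python3 -m py_compile sample_module.py && echo "syntax ok"
```

``ansible-test compile`` performs this style of check across the code base for every interpreter in the support matrix.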

Running compile tests locally
=============================

Compile tests can be run across the whole code base by doing:

.. code:: shell

   cd /path/to/ansible/source
   source hacking/env-setup
   ansible-test compile

Against a single file by doing:

.. code:: shell

   ansible-test compile lineinfile

Or against a specific Python version by doing:

.. code:: shell

   ansible-test compile --python 2.7 lineinfile

For advanced options see the online help:

.. code:: shell

   ansible-test compile --help

Installing dependencies
=======================

``ansible-test`` has a number of dependencies. For ``compile`` tests we suggest running the tests with ``--local``, which is the default.

The dependencies can be installed using the ``--requirements`` argument. For example:

.. code:: shell

   ansible-test compile --requirements lineinfile

The full list of requirements can be found at `test/runner/requirements <https://github.com/ansible/ansible/tree/devel/test/runner/requirements>`_. Requirements files are named after their respective commands. See also the `constraints <https://github.com/ansible/ansible/blob/devel/test/runner/requirements/constraints.txt>`_ applicable to all commands.

Extending compile tests
=======================

If you believe changes are needed to the compile tests, please add a comment on the `Testing Working Group Agenda <https://github.com/ansible/community/blob/master/MEETINGS.md>`_ so it can be discussed.

@ -0,0 +1,72 @@
**********
httptester
**********

.. contents:: Topics

Overview
========

``httptester`` is a docker container used to host certain resources required by :doc:`testing_integration`. This is to avoid CI tests requiring external resources (such as git or package repos) which, if temporarily unavailable, would cause tests to fail.

It is an HTTP testing endpoint which provides the following capabilities:

* httpbin
* nginx
* SSL
* SNI

Source files can be found at `test/utils/docker/httptester/ <https://github.com/ansible/ansible/tree/devel/test/utils/docker/httptester>`_.

Building
========

Docker
------

Both ways of building the ``docker`` image utilize the ``nginx:alpine`` image, but it can
be customized for ``Fedora``, ``Red Hat``, ``CentOS``, ``Ubuntu``,
``Debian`` and other variants of ``Alpine``.

When utilizing ``packer`` or configuring with ``ansible-playbook``,
the services will not automatically start on launch, and will have to be
manually started using::

   cd test/utils/docker/httptester
   ./services.sh

Such as when starting a docker container::

   cd test/utils/docker/httptester
   docker run -ti --rm -p 80:80 -p 443:443 --name httptester ansible/ansible:httptester /services.sh

docker build
------------

::

   cd test/utils/docker/httptester
   docker build -t ansible/ansible:httptester .

packer
------

The ``packer`` build will use ``ansible-playbook`` to perform the
configuration, and will tag the image as ``ansible/ansible:httptester``::

   cd test/utils/docker/httptester
   packer build packer.json

Ansible
=======

::

   cd test/utils/docker/httptester
   ansible-playbook -i hosts -v httptester.yml

Extending httptester
====================

If you have some time to improve ``httptester``, please add a comment on the `Testing Working Group Agenda <https://github.com/ansible/community/blob/master/MEETINGS.md>`_ to avoid duplicated effort.

@ -0,0 +1,235 @@
*****************
Integration tests
*****************

.. contents:: Topics

The Ansible integration test system: tests for playbooks, by playbooks.

Some tests may require credentials, which may be specified with `credentials.yml`.

Some tests may require root.

Quick Start
===========

It is highly recommended that you install and activate the ``argcomplete`` python package.
It provides tab completion in ``bash`` for the ``ansible-test`` test runner.

To get started quickly using Docker containers for testing,
see `Tests in Docker containers`_.

Configuration
=============

Making your own version of ``integration_config.yml`` can allow for setting some
tunable parameters to help run the tests better in your environment. Some
tests (e.g. cloud) will only run when access credentials are provided. For
more information about supported credentials, refer to ``credentials.template``.
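
As a sketch, a local override might be created like the following. The ``output_dir`` key here is illustrative only; consult the ``integration_config.yml`` shipped in the source tree for the tunables your checkout actually supports:

```shell
# Hypothetical local override -- the key name is a placeholder; check the
# integration_config.yml in the source tree for real tunables.
cat > integration_config.yml <<'EOF'
---
# scratch directory that test playbooks may write into
output_dir: /tmp/ansible-test-output
EOF
grep "output_dir" integration_config.yml
```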

Prerequisites
=============

The tests will assume that things like ``hg``, ``svn``, and ``git`` are installed and in the path.

(Complete list pending)
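
A quick way to confirm those tools are present before starting a long test run (purely a convenience check, not part of ``ansible-test`` itself):

```shell
# Report which of the expected version-control tools are on PATH.
for tool in git hg svn; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING -- install it before running the full suite"
    fi
done
```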
|
||||||
|
|
||||||
|
Non-destructive Tests
|
||||||
|
=====================
|
||||||
|
|
||||||
|
These tests will modify files in subdirectories, but will not do things that install or remove packages or things
|
||||||
|
outside of those test subdirectories. They will also not reconfigure or bounce system services.
|
||||||
|
|
||||||
|
.. note:: Running integration tests within Docker
|
||||||
|
|
||||||
|
To protect your system from any potential changes caused by integration tests, and to ensure the a sensible set of dependencies are available we recommend that you always run integration tests with the ``--docker`` option. See the `list of supported docker images <https://github.com/ansible/ansible/blob/devel/test/runner/completion/docker.txt>`_ for options.
|
||||||
|
|
||||||
|
.. note:: Avoiding pulling new Docker images:
|
||||||
|
|
||||||
|
Use the ``--docker-no-pull`` option to avoid pulling the latest container image. This is required when using custom local images that are not available for download.
|
||||||
|
|
||||||
|
Run as follows for all POSIX platform tests executed by our CI system::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration --docker fedora25 -v posix/ci/
|
||||||
|
|
||||||
|
You can select specific tests as well, such as for individual modules::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration -v ping
|
||||||
|
|
||||||
|
By installing ``argcomplete`` you can obtain a full list by doing::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration <tab><tab>
|
||||||
|
|
||||||
|
Destructive Tests
|
||||||
|
=================
|
||||||
|
|
||||||
|
These tests are allowed to install and remove some trivial packages. You will likely want to devote these
|
||||||
|
to a virtual environment, such as Docker. They won't reformat your filesystem::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration --docker fedora25 -v destructive/
|
||||||
|
|
||||||
|
Windows Tests
|
||||||
|
=============
|
||||||
|
|
||||||
|
These tests exercise the ``winrm`` connection plugin and Windows modules. You'll
|
||||||
|
need to define an inventory with a remote Windows 2008 or 2012 Server to use
|
||||||
|
for testing, and enable PowerShell Remoting to continue.
|
||||||
|
|
||||||
|
Running these tests may result in changes to your Windows host, so don't run
|
||||||
|
them against a production/critical Windows environment.
|
||||||
|
|
||||||
|
Enable PowerShell Remoting (run on the Windows host via Remote Desktop):
|
||||||
|
Enable-PSRemoting -Force
|
||||||
|
|
||||||
|
Define Windows inventory::
|
||||||
|
|
||||||
|
cp inventory.winrm.template inventory.winrm
|
||||||
|
${EDITOR:-vi} inventory.winrm
|
||||||
|
|
||||||
|
Run the Windows tests executed by our CI system::
|
||||||
|
|
||||||
|
test/runner/ansible-test windows-integration -v windows/ci/
|
||||||
|
|
||||||
|
Tests in Docker containers
==========================

If you have a Linux system with Docker installed, running integration tests
using the same Docker containers used by the Ansible continuous integration
(CI) system is recommended.

.. note:: Docker on non-Linux

   Using Docker Engine to run Docker on a non-Linux host is not recommended.
   Some tests may fail, depending on the image used for testing.
   Using the ``--docker-privileged`` option may resolve the issue.

Running Integration Tests
-------------------------

To run all CI integration test targets for POSIX platforms in an Ubuntu 16.04 container::

    test/runner/ansible-test integration -v posix/ci/ --docker

You can also run specific tests or select a different Linux distribution.
For example, to run tests for the ``ping`` module on an Ubuntu 14.04 container::

    test/runner/ansible-test integration -v ping --docker ubuntu1404

Container Images
----------------

Python 2
````````

Most container images are for testing with Python 2:

- centos6
- centos7
- fedora24
- fedora25
- opensuse42.1
- opensuse42.2
- ubuntu1204
- ubuntu1404
- ubuntu1604

Python 3
````````

To test with Python 3 use the following images:

- ubuntu1604py3

Network Tests
=============

This page details the specifics of testing Ansible Networking modules.

.. important:: Network testing requirements for Ansible 2.4

   Starting with Ansible 2.4, all network modules MUST include corresponding unit tests to defend functionality.
   The unit tests must be added in the same PR that includes the new network module, or extends functionality.
   Integration tests, although not required, are a welcome addition.
   How to do this is explained in the rest of this document.

Network integration tests can be run by doing::

    cd test/integration
    ANSIBLE_ROLES_PATH=targets ansible-playbook network-all.yaml

.. note::

   * To run the network tests you will need a number of test machines and a suitably configured inventory file. A sample is included in ``test/integration/inventory.network``.
   * As with the rest of the integration tests, they can be found grouped by module in ``test/integration/targets/MODULENAME/``.

To filter a set of test cases, set ``limit_to`` to the name of the group; generally this is the name of the module::

    ANSIBLE_ROLES_PATH=targets ansible-playbook -i inventory.network network-all.yaml -e "limit_to=eos_command"

To filter a single test case, set the ``--tags`` option to ``eapi`` or ``cli``, set ``limit_to`` to the test group, and ``test_case`` to the name of the test::

    ANSIBLE_ROLES_PATH=targets ansible-playbook -i inventory.network network-all.yaml --tags="cli" -e "limit_to=eos_command test_case=notequal"

Writing network integration tests
---------------------------------

Test cases are added to roles based on the module being tested. Test cases
should include both ``cli`` and ``eapi`` test cases. CLI test cases should be
added to ``test/integration/targets/modulename/tests/cli`` and eapi tests should be added to
``test/integration/targets/modulename/tests/eapi``.

In addition to positive testing, negative tests are required to ensure user-friendly warnings and errors are generated, rather than backtraces. For example:

.. code-block:: yaml

    - name: test invalid subset (foobar)
      eos_facts:
        provider: "{{ cli }}"
        gather_subset:
          - "foobar"
      register: result
      ignore_errors: true

    - assert:
        that:
          # Failures shouldn't return changes
          - "result.changed == false"
          # It's a failure
          - "result.failed == true"
          # Sensible failure message
          - "'Subset must be one of' in result.msg"

Conventions
```````````

- Each test case should generally follow the pattern:

  setup -> test -> assert -> test again (idempotent) -> assert -> teardown (if needed) -> done

  This keeps test playbooks from becoming monolithic and difficult to
  troubleshoot.

- Include a name for each task that is not an assertion. (It's OK to add names
  to assertions too. But to make it easy to identify the broken task within a failed
  test, at least provide a helpful name for each task.)

- Files containing test cases must end in ``.yaml``.

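
As a sketch of that pattern, a task file for a hypothetical configuration test might look like the following (the module, commands and values are purely illustrative; follow the conventions of the target's existing tests):

```yaml
---
# Setup: start from a known state
- name: setup - remove VLAN 42
  eos_config:
    lines: no vlan 42
    provider: "{{ cli }}"

# Test
- name: create VLAN 42
  eos_config:
    lines: vlan 42
    provider: "{{ cli }}"
  register: result

- assert:
    that:
      - "result.changed == true"

# Test again (idempotent)
- name: create VLAN 42 again
  eos_config:
    lines: vlan 42
    provider: "{{ cli }}"
  register: result

- assert:
    that:
      - "result.changed == false"

# Teardown
- name: teardown - remove VLAN 42
  eos_config:
    lines: no vlan 42
    provider: "{{ cli }}"
```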
Adding a new Network Platform
`````````````````````````````

A top level playbook is required, such as ``ansible/test/integration/eos.yaml``, which needs to be referenced by ``ansible/test/integration/network-all.yaml``.

Where to find out more
======================

*******************************************
Testing using the Legacy Integration system
*******************************************

.. contents:: Topics

This page details how to run the integration tests that haven't been ported to the new ``ansible-test`` framework.

The following areas are still tested using the legacy ``make tests`` command:

* amazon
* azure
* cloudflare
* cloudscale
* cloudstack
* consul
* exoscale
* gce
* jenkins
* rackspace

Over time the above list will be reduced as tests are ported to the ``ansible-test`` framework.

Running Cloud Tests
===================

Cloud tests exercise capabilities of cloud modules (e.g. ec2_key). These are
not 'tests run in the cloud' so much as tests that leverage the cloud modules
and are organized by cloud provider.

Some AWS tests may use environment variables. It is recommended to either unset any AWS environment variables (such as ``AWS_DEFAULT_PROFILE``, ``AWS_SECRET_ACCESS_KEY``, and so on) or ensure that the environment variables match the credentials provided in ``credentials.yml``, so that the tests run consistently and to their full capability on the expected account. See the `AWS CLI docs <http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html>`_ for information on creating a profile.

Subsets of tests may be run by commenting out unnecessary roles in the appropriate playbook, such as ``test/integration/amazon.yml``.

In order to run cloud tests, you must provide access credentials in a file
named ``credentials.yml``. A sample credentials file named
``credentials.template`` is available for syntax help.

Provide cloud credentials::

    cp credentials.template credentials.yml
    ${EDITOR:-vi} credentials.yml
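
``credentials.template`` is the authoritative reference for the key names. Purely as an illustration, the file pairs provider-specific keys with your secrets, along these lines (every key and value below is hypothetical):

```yaml
# Hypothetical keys for illustration only;
# use the exact names found in credentials.template.
ec2_access_key: AKIAEXAMPLEKEY
ec2_secret_key: example-secret-key
azure_client_id: 00000000-0000-0000-0000-000000000000
```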

Other configuration
===================

In order to run some tests, you must provide access credentials in a file
named ``credentials.yml``. A sample credentials file named
``credentials.template`` is available for syntax help.

Running Tests
=============

The tests are invoked via a ``Makefile``::

    # If you haven't already got Ansible available, use the local checkout by doing:
    source hacking/env-setup

    cd test/integration/
    # TARGET is the name of the test from the list at the top of this page
    # make TARGET
    # e.g.
    make amazon
    # To run all cloud tests you can do:
    make cloud

.. warning:: Possible cost of running cloud tests

   Running cloud integration tests will create and destroy cloud
   resources. Running these tests may result in additional fees associated with
   your cloud account. Care is taken to ensure that created resources are
   removed. However, it is advisable to inspect your AWS console to ensure no
   unexpected resources are running.

*****
PEP 8
*****

.. contents:: Topics

`PEP 8`_ style guidelines are enforced by `pep8`_ on all python files in the repository by default.

Current Rule Set
================

By default all files are tested using the current rule set.
All ``pep8`` tests are executed, except those listed in the `current ignore list`_.

.. warning:: Updating the Rule Set

   Changes to the Rule Set need approval from the Core Team, and must be done via the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_.
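
For a feel of what the checker flags, the sketch below shows one of the common style issues and its compliant form (the function is invented for illustration):

```python
# Illustrative only: the kind of style issue the pep8 checker reports.
#
# Non-compliant (E231 missing whitespace after comma, E225 missing
# whitespace around operator, E704 statement on same line as def):
#
#     def add(a,b): return a+b
#
# Compliant equivalent:
def add(a, b):
    return a + b


print(add(2, 3))
```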

Legacy Rule Set
===============

Files which are listed in the `legacy file list`_ are tested using the legacy rule set.

All ``pep8`` tests are executed, except those listed in the `current ignore list`_ or `legacy ignore list`_.

Files listed in the legacy file list which pass the current rule set will result in an error.
This is intended to prevent regressions on style guidelines for files which pass the more stringent current rule set.

Skipping Tests
==============

Files listed in the `skip list`_ are not tested by ``pep8``.

Removed Files
=============

Files which have been removed from the repository must be removed from the legacy file list and the skip list.

Running Locally
===============

The pep8 check can be run locally with::

    ./test/runner/ansible-test sanity --test pep8 [file-or-directory-path-to-check] ...

.. _PEP 8: https://www.python.org/dev/peps/pep-0008/
.. _pep8: https://pypi.python.org/pypi/pep8
.. _current ignore list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/current-ignore.txt
.. _legacy file list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/legacy-files.txt
.. _legacy ignore list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/legacy-ignore.txt
.. _skip list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/skip.txt

**********
Unit Tests
**********

Unit tests are small isolated tests that target a specific library or module.

.. contents:: Topics

Available Tests
===============

Unit tests can be found in `test/units <https://github.com/ansible/ansible/tree/devel/test/units>`_. Notice that the directory structure matches that of ``lib/ansible/``.
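
As a sketch of that layout convention, a hypothetical helper in ``lib/ansible/module_utils/math_helper.py`` would be tested from ``test/units/module_utils/test_math_helper.py``; the helper and its tests are inlined here so the example is self-contained (all names are invented):

```python
import unittest


def clamp(value, lowest, highest):
    """Hypothetical helper under test."""
    return max(lowest, min(value, highest))


class TestClamp(unittest.TestCase):
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-1, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(99, 0, 10), 10)


# Run the tests programmatically; ansible-test/pytest would normally discover them.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(TestClamp))
```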

Running Tests
=============

Unit tests can be run across the whole code base by doing:

.. code:: shell

    cd /path/to/ansible/source
    source hacking/env-setup
    ansible-test units --tox

Against a single file by doing:

.. code:: shell

    ansible-test units --tox apt

Or against a specific Python version by doing:

.. code:: shell

    ansible-test units --tox --python 2.7 apt

For advanced usage see the online help::

    ansible-test units --help

Installing dependencies
=======================

``ansible-test`` has a number of dependencies. For ``units`` tests we suggest using ``tox``.

The dependencies can be installed using the ``--requirements`` argument. For example:

.. code:: shell

    ansible-test units --tox --python 2.7 --requirements apt

.. note:: tox version requirement

   Using ``ansible-test`` with ``--tox`` requires tox >= 2.5.0.

The full list of requirements can be found at `test/runner/requirements <https://github.com/ansible/ansible/tree/devel/test/runner/requirements>`_. Requirements files are named after their respective commands. See also the `constraints <https://github.com/ansible/ansible/blob/devel/test/runner/requirements/constraints.txt>`_ applicable to all commands.

Extending unit tests
====================

.. warning:: What a unit test isn't

   If you start writing a test that requires external services then you may be writing an integration test, rather than a unit test.

Fixtures files
``````````````

To mock out fetching results from devices, you can use ``fixtures`` to read in pre-generated data.

Text files live in ``test/units/modules/network/PLATFORM/fixtures/``.

Data is loaded using the ``load_fixture`` method.

See the `eos_banner test <https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py>`_ for a practical example.
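
A self-contained sketch of the idea follows; the file name, class, and captured output are illustrative, and the real network tests build ``load_fixture`` on top of the module test harness:

```python
import os
import tempfile
from unittest.mock import patch  # on Python 2, use the third-party ``mock`` package

fixture_dir = tempfile.mkdtemp()


def load_fixture(name):
    """Read pre-generated device output from the fixtures directory."""
    with open(os.path.join(fixture_dir, name)) as f:
        return f.read()


# Pretend this fixture was captured from a real device earlier.
with open(os.path.join(fixture_dir, 'show_version.txt'), 'w') as f:
    f.write('Arista vEOS 4.x\n')


class FakeModule(object):
    """Stand-in for a module that would normally query the device."""

    def get_version(self):
        return self.run_commands('show version')

    def run_commands(self, command):
        raise RuntimeError('should never reach a real device in a unit test')


# Patch the device call so it returns the fixture instead of touching hardware.
with patch.object(FakeModule, 'run_commands',
                  return_value=load_fixture('show_version.txt')):
    version = FakeModule().get_version()

print(version)
```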

Code Coverage
`````````````

Most ``ansible-test`` commands allow you to collect code coverage; this is particularly useful for indicating where to extend testing.

To collect coverage data, add the ``--coverage`` argument to your ``ansible-test`` command line:

.. code:: shell

    ansible-test units --coverage
    ansible-test coverage html

Compile Tests
=============

Compile tests check source files for valid syntax on all supported python versions:

- 2.6
- 2.7
- 3.5
- 3.6

Tests are run with ``ansible-test compile``.
All versions are tested unless the ``--python`` option is used.
All ``*.py`` files are tested unless specific files are specified.

httptester
==========

An HTTP testing endpoint providing httpbin, nginx, SSL and SNI
capabilities, for use as a local HTTP endpoint during testing.

Building
--------

Docker
~~~~~~

Both ways of building the Docker image utilize the ``nginx:alpine`` image, but can
be customized for ``Fedora``, ``Red Hat``, ``CentOS``, ``Ubuntu``,
``Debian`` and other variants of ``Alpine``.

When utilizing ``packer`` or configuring with ``ansible-playbook``
the services will not automatically start on launch, and will have to be
manually started using::

    /services.sh

Such as when starting a docker container::

    docker run -ti --rm -p 80:80 -p 443:443 --name httptester ansible/ansible:httptester /services.sh

docker build
^^^^^^^^^^^^

::

    docker build -t ansible/ansible:httptester .

packer
^^^^^^

The packer build will use ``ansible-playbook`` to perform the
configuration, and will tag the image as ``ansible/ansible:httptester``::

    packer build packer.json

Ansible
~~~~~~~

::

    ansible-playbook -i hosts -v httptester.yml