Code coverage reporting ignored scripts executed during integration tests when
those scripts resided in the temporary working directory used for the test run.
* Add aws_secret module for managing AWS Secrets Manager resources
* Add aws_secret lookup plugin
Also use the data returned by describe_secret everywhere.
* Replace the explicit use of /root with a temporary directory
* aws_secret: rework module
Reworked the module to use a class, avoiding passing the client and module
objects into every function.
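A minimal sketch of that pattern, assuming boto3's Secrets Manager client (the class and method names here are illustrative, not necessarily the module's actual ones):
```
# Illustrative sketch: wrap the boto3 client once so individual operations
# don't each need the client and module passed in.
class SecretsManagerInterface(object):  # hypothetical name
    def __init__(self, module, client):
        self.module = module
        self.client = client

    def get_secret(self, name):
        try:
            return self.client.describe_secret(SecretId=name)
        except self.client.exceptions.ResourceNotFoundException:
            return None

    def delete_secret(self, name, recovery_window):
        return self.client.delete_secret(SecretId=name,
                                         RecoveryWindowInDays=recovery_window)
```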
* Added support for the "recovery_window" parameter, allowing the user to
provide a recovery period.
* Updated the return value to be the API output, providing more details about
the secret.
* Fix Python 3 bug in tests if the role is not removed
* Add unsupported alias due to issue restricting resource for creating secrets
* OpenSUSE - Add OpenSUSE 15 test containers ci_complete
* Reset matrix back to normal
* Set container version instead of latest
* Remove old Docker completion file
This avoids re-installation during integration test runs.
Pinning this requirement shouldn't be needed for consistent test
results when running pylint.
* gitlab_group: refactor module
* gitlab_user: refactor module
* gitlab_group, gitlab_user: pylint
* gitlab_project: refactor module
* gitlab_group, gitlab_project, gitlab_user: Enhance modules
- Add generic loop to update object
- Enhance return messages
- PyLint
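A rough idea of what such a generic update loop looks like against python-gitlab objects (a sketch assuming options is a dict of desired attribute values, not the modules' exact code):
```
# Illustrative: compare desired options against the existing python-gitlab
# object and only write back the attributes that actually differ.
def update_object(obj, options):
    changed = False
    for attr, value in options.items():
        if value is None:
            continue
        if getattr(obj, attr, None) != value:
            setattr(obj, attr, value)
            changed = True
    if changed:
        obj.save()  # python-gitlab persists modified attributes via save()
    return changed
```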
* gitlab_runner: refactor module
* gitlab_hooks: refactor module
* gitlab_deploy_key: refactor module
* gitlab_group: enhance module and documentation
- Enhance function arguments
- Add check_mode break
- Rewrite module documentation
* gitlab_hook: enhance module and documentation
- Rewrite documentation
- Enhance function parameters
- Rename functions
* gitlab_project: enhance module and documentation
- Rewrite documentation
- Enhance function parameters
- Add try/except on project creation
* gitlab_runner: enhance module and documentation
- Rewrite documentation
- Fix Copyright
- Enhance function arguments
- Add check_mode break
- Add missing function: deletion
* gitlab_user: enhance module and documentation
- Rewrite documentation
- Enhance function parameters
- Add check_mode break
- Add try/except on user creation
* gitlab_deploy_key, gitlab_group, gitlab_hooks, gitlab_project,
gitlab_runner, gitlab_user: Fix residual bugs
- Fix Copyright
- Fix result messages
- Add missing check_mode break
* gitlab_deploy_key, gitlab_group, gitlab_hooks, gitlab_project, gitlab_runner, gitlab_user: pylint
* gitlab_runner: Add substitution function for 'cmp' in python3
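Python 3 removed the built-in cmp(), so a small substitute is needed; the usual one-liner looks like this (a sketch of the idea, not necessarily the exact helper added to gitlab_runner):
```
def cmp(a, b):
    # Matches the semantics of Python 2's built-in cmp(): returns -1, 0 or 1.
    return (a > b) - (a < b)
```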
* unit-test: remove deprecated gitlab module tests
- gitlab_deploy_key
- gitlab_hooks
- gitlab_project
They can't be reused because of the change in how the modules communicate with the GitLab instance. The modules no longer make direct calls to the API; they now use a Python library that does the job, so using a pytest mocker to test them won't work.
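Since the modules now go through a python-gitlab client object rather than issuing raw API requests, new tests have to fake that client (or the HTTP layer underneath it). A minimal sketch with the standard library's unittest.mock, assuming the client is created as gitlab.Gitlab(...):
```
from unittest import mock

# Illustrative: replace the python-gitlab client with a fake so that no
# HTTP calls are made while exercising the module logic.
with mock.patch('gitlab.Gitlab') as fake_gitlab:
    instance = fake_gitlab.return_value
    instance.users.list.return_value = []  # pretend no users exist yet
    # ... call into the module code that would normally talk to GitLab ...
```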
* gitlab_deploy_key, gitlab_group, gitlab_hooks, gitlab_project, gitlab_runner, gitlab_user: add copyright
* gitlab_deploy_key, gitlab_group, gitlab_hooks, gitlab_project, gitlab_runner, gitlab_user: Support old parameters format
* module_utils Gitlab: Edit copyright
* gitlab_deploy_key, gitlab_group, gitlab_hooks, gitlab_project,
gitlab_runner, gitlab_user: Unifying module inputs
- Rename verify_ssl to validate_certs to match standards
- Remove unused alias parameters
- Unify parameters type and requirement
- Reorder list order
* gitlab_deploy_key, gitlab_group, gitlab_hooks, gitlab_project, gitlab_runner, gitlab_user: Unifying module outputs
- Use standard output parameter "msg" instead of "return"
- Use snake_case for return values instead of camelCase
* validate-module: remove sanity ignore
* BOTMETA: remove gitlab_* test
- These tests need to be completely rewritten because of the refactoring
of these modules
- TodoList Community Wiki was updated
* gitlab_user: Fix group identifier
* gitlab_project: Fix when group was empty
* gitlab_deploy_key: edit return msg
* module_utils gitlab: fall back to user namespace if project not found
* gitlab modules: Add units tests
* unit test: gitlab module fake current user
* gitlab_user: fix access_level verification
* gitlab unit tests: use decoration instead of with statement
* unit tests: gitlab module skip python 2.6
* unit tests: gitlab module skip library import if python 2.6
* gitlab unit tests: use builtin unittest class
* gitlab unit tests: use custom test class
* unit test: gitlab module lint
* unit tests: move gitlab utils
* unit test: gitlab fix imports
* gitlab_module: edit requirement
The python-gitlab library requires Python >= 2.7
* gitlab_module: add myself as author
* gitlab_modules: add python encoding tag
* gitlab_modules: keep the variable name "validate_certs" consistent
* gitlab_modules: enhance documentation
* gitlab_runner: fix syntax error in documentation
* gitlab_module: use basic_auth module_utils and add deprecation warning
* gitlab_module: documentation corrections
* gitlab_module: python lint
* gitlab_module: deprecate options and aliases for ansible 2.10
* gitlab_group: don't use 'local_action' in documentation example
* gitlab_module: correct return messages
* gitlab_module: use module_util 'missing_required_lib' when python library is missing
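missing_required_lib comes from ansible.module_utils.basic; its typical usage looks roughly like this (the GITLAB_IMP_ERR variable name is just an example):
```
import traceback

from ansible.module_utils.basic import AnsibleModule, missing_required_lib

GITLAB_IMP_ERR = None
try:
    import gitlab  # noqa: F401
    HAS_GITLAB = True
except ImportError:
    GITLAB_IMP_ERR = traceback.format_exc()
    HAS_GITLAB = False


def main():
    module = AnsibleModule(argument_spec=dict())
    if not HAS_GITLAB:
        module.fail_json(msg=missing_required_lib('python-gitlab'),
                         exception=GITLAB_IMP_ERR)
```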
* gitlab_module: fix typo in function name.
* gitlab_modules: unify return msg on check_mode
* gitlab_modules: don't use deprecated options in examples
* promote doc_fragments into actual plugins
change tests' hardcoded paths to doc fragments
avoid running sanity tests on fragments
avoid improper testing of doc_fragments
also change runner paths
fix botmeta
updated comment for fragments
updated docs
Previously empty test targets were ignored by ansible-test.
This would prevent them from participating in dependency analysis.
These targets are actually empty roles, and should be processed as such.
* Further cleanup of integration test inventory.
* Preserve aci and msc inventory in template.
* Update ansible-test inventory template handling.
* Fix classification of inventory file.
Some integration test targets have dependencies on files outside
the `test/integration/targets/` directory tree. Changes to these
dependencies can result in unexpected test failures since they do
not trigger integration tests which depend on them.
* Log dependencies at verbosity level 4.
This makes it easier to debug target dependency issues.
* Scan symlinks for target dependencies.
Some test targets use symlinks to files in other test targets.
These dependencies were previously undetected. This could result in
changes made to dependencies without triggering the dependent tests.
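A simplified sketch of the idea: resolve each symlink inside a target and, if it points into another target, record that target as a dependency (paths and the helper name are illustrative, not the actual ansible-test code):
```
import os

# Illustrative: find targets referenced via symlinks inside a test target.
def symlink_dependencies(target_path, targets_root='test/integration/targets'):
    deps = set()
    for root, dirs, files in os.walk(target_path):
        for name in dirs + files:
            path = os.path.join(root, name)
            if not os.path.islink(path):
                continue
            rel = os.path.relpath(os.path.realpath(path), targets_root)
            if not rel.startswith('..'):
                deps.add(rel.split(os.sep)[0])  # first component is the target name
    return deps
```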
* Track missing target deps with `needs/target/*`.
Some existing test targets have untracked dependencies on other
test targets. This can result in changes to those dependencies
not triggering their dependent tests, resulting in test failures
after a PR is merged.
This PR adds the appropriate `needs/target/*` aliases to track
those dependencies, along with appropriate processing in
ansible-test to handle the new aliases.
* Scan meta dependencies in script targets.
Script targets are often former role targets which were converted
to allow custom invocations of ansible-playbook. These targets still
have their meta dependencies, but they were not being detected.
This could result in changes to dependencies not triggering the
targets which depend on them.
Previously, the following dependencies:
    A used by B
    B used by C
would have been converted to:
    A used by C
    B used by C
instead of being expanded to:
    A used by B
    A used by C
    B used by C
This change preserves the existing dependency when expanding it.
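In code form, the corrected expansion keeps the direct consumers and adds the transitive ones (a sketch of the behaviour, not ansible-test's actual implementation):
```
# consumers maps each dependency to the set of targets that use it directly.
def expand_consumers(consumers):
    expanded = {dep: set(users) for dep, users in consumers.items()}
    changed = True
    while changed:
        changed = False
        for dep, users in expanded.items():
            for user in list(users):
                # whatever uses 'user' also (indirectly) uses 'dep'
                for indirect in consumers.get(user, set()):
                    if indirect not in users:
                        users.add(indirect)
                        changed = True
    return expanded

# Example from above:
# expand_consumers({'A': {'B'}, 'B': {'C'}}) -> {'A': {'B', 'C'}, 'B': {'C'}}
```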
* Move var_blending test inventory into test.
* Remove Amazon specific inventory entry for tests.
* Remove Azure specific inventory entry for tests.
* Move var_precedence test inventory into test.
* Move unicode test inventory into test.
* Remove unused inventory entry.
* Move gathering_facts test inventory into test.
* Move delegate_to test inventory into test.
* Clean up inventory for binary_modules test.
* Clean up integration test inventory.
* Cloudscale integration test setup
CloudProvider and CloudEnvironment classes for Cloudscale integration
tests. This also contains a cloudscale_common role with common
variables for all tests.
* cloudscale_volume module
New cloud module to manage volumes on the cloudscale.ch IaaS service.
It is currently supported only with the `--remote` option.
This makes it easier to troubleshoot new instances which are not
yet supported by the setup scripts used by ansible-test.
* Support skip of platforms by version in tests.
Previously a remote platform could be skipped completely using the alias:
`skip/{platform}` such as `skip/rhel`
Now a specific platform version can be skipped using the alias:
`skip/{platform}{version}` such as `skip/rhel7.6`
This feature is available for platforms specified with the `--remote` option.
* Add skip by version to the docs.
* Provide Kubernetes resource validation to k8s module
Use kubernetes-validate to validate Kubernetes resource
definitions against the published schema
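Roughly, the check boils down to calling the library's validate() on the parsed definition; a hedged sketch (the exact signature and exception names should be checked against kubernetes-validate's documentation):
```
# Sketch only: validate a resource definition against the published schema.
import kubernetes_validate

definition = {
    'apiVersion': 'v1',
    'kind': 'Pod',
    'metadata': {'name': 'example'},
    'spec': {'containers': [{'name': 'app', 'image': 'nginx'}]},
}

try:
    # validate(data, kubernetes_version, strict) is assumed here.
    kubernetes_validate.validate(definition, '1.12', strict=True)
except kubernetes_validate.ValidationError as exc:
    print('resource validation error: %s' % exc)
```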
* Additional tests for kubernetes-validate
* Improve k8s error messages on exceptions
Parse the response body for the message rather than returning
a JSON blob
If we've validated and there are warnings, return those too - they
can be more helpful
```
"msg": "Failed to patch object: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},
\"status\":\"Failure\",\"message\":\"[pos 334]: json: decNum: got first char 'h'\",\"code\":500}\n",
```
vs
```
"msg": "Failed to patch object: [pos 334]: json: decNum: got first char 'h'\nresource
validation error at spec.replicas: 'hello' is not of type u'integer'",
```
* Update versions used
In particular openshift/origin:3.9.0
* Add changelog for k8s validate change
Track the interpreter for each copy of the injector by the interpreter
path instead of the interpreter version. This avoids the possibility
of mixing different interpreters with the same version.
Inject a symlink to the correct python into the copied injector
directory instead of altering the shebang of the injector. This
has the side-effect of also intercepting `python` for integration
tests which simplifies cases where it needs to be directly invoked
without collecting code coverage.
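In effect, rather than rewriting the injector's shebang, a python symlink pointing at the desired interpreter is dropped into the copied injector directory; a simplified sketch (the function name is illustrative):
```
import os

# Keyed by interpreter path, not version, so two different interpreters with
# the same version never share an injector copy.
def link_injector_python(injector_copy_dir, interpreter_path):
    link = os.path.join(injector_copy_dir, 'python')
    if not os.path.lexists(link):
        os.symlink(interpreter_path, link)
```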
* Add support for POST-as-GET if GET fails with 405.
* Bumping ACME test container version to 1.4. This includes letsencrypt/pebble#162 and letsencrypt/pebble#168.
* Also use POST-as-GET for account data retrieval.
This is not yet supported by any ACME server (see letsencrypt/pebble#171),
so we fall back to a regular empty update if a 'malformedRequest' error is
returned.
* Using newest ACME test container image.
Includes letsencrypt/pebble#171 and letsencrypt/pebble#172, which make Pebble behave closer to the current specs.
* Remove workaround for old Pebble version.
* Add changelog entry.
* First try POST-as-GET, then fall back to unauthenticated GET.
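Stripped down, the final behaviour is: send a signed POST with an empty payload first, and only fall back to a plain GET when the server rejects it. A sketch with purely illustrative helper callables (not the acme module_utils code):
```
# Illustrative only: POST-as-GET first, unauthenticated GET as the fallback.
def fetch_resource(url, signed_post, plain_get):
    response = signed_post(url, payload=None)  # empty payload => POST-as-GET
    if response.status_code == 405:
        # Server does not support POST-as-GET for this resource yet.
        response = plain_get(url)
    return response
```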
Trying to get ansible-test working on my fedora-28 system, I noticed I
was getting invalid keys from paramiko. It looks like this is because
ssh-keygen now defaults to RFC4716 format for private / public keys.
For now, we can still use PEM-based SSH keys, but the long-term fix here
is to report a bug to paramiko and support RFC4716 for RSA keys.
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
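Until paramiko handles RFC4716 keys, forcing PEM output with ssh-keygen's -m PEM option is the workaround; a sketch of how a test helper might generate such a key (the wrapper function is illustrative, not ansible-test's actual code):
```
import subprocess

def generate_pem_key(path):
    # Force the legacy PEM format so paramiko can read the key, even where
    # ssh-keygen defaults to RFC4716.
    subprocess.check_call([
        'ssh-keygen', '-q',
        '-t', 'rsa', '-b', '2048',
        '-m', 'PEM',   # private key in PEM format
        '-N', '',      # no passphrase
        '-f', path,
    ])
```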
* Move ansible.compat.tests to test/units/compat/.
* Fix unit test references to ansible.compat.tests.
* Move builtins compat to separate file.
* Fix classification of test/units/compat/ dir.
* ansible-test: add skip/windows/... alias to skip tests on specific Windows versions
* show what tests were skipped
* change logic to only skip if all Windows targets are set to skip
* codestyle improvements
* change warning message based on review
* check args type before running the Windows path
* Add symlinks sanity test.
* Replace legacy test symlinks with actual content.
* Remove dir symlink from template_jinja2_latest.
* Update import test to use generated library dir.
* Fix copy test symlink setup.
* Add unified diff output to environment validation.
This makes it easier to see where the environment changed.
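The unified diff can come straight from the standard library; a minimal sketch of diffing the before/after environment snapshots (taking lists of lines as inputs is an assumption):
```
import difflib

# Illustrative: render environment changes as a unified diff.
def environment_diff(before_lines, after_lines):
    return '\n'.join(difflib.unified_diff(
        before_lines, after_lines,
        fromfile='before', tofile='after', lineterm=''))
```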
* Compare Python interpreters by version to pip shebangs.
This helps expose cases where pip executables use a different
Python interpreter than is expected.
* Query `pip.__version__` instead of using `pip --version`.
This is a much faster way to query the pip version. It also more
closely matches how we invoke pip within ansible-test.
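Importing pip and reading pip.__version__ avoids the much slower `pip --version` subprocess; roughly:
```
import subprocess

# Illustrative: ask the target interpreter for pip's version directly.
def get_pip_version(python_path):
    output = subprocess.check_output(
        [python_path, '-c', 'import pip; print(pip.__version__)'])
    return output.decode().strip()
```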
* Remove redundant environment scan between tests.
This reuses the environment scan from the end of the previous test
as the basis for comparison during the next test.