* clear the configuration candidate when returning to user-view.
* add a changelog fragment for the PR.
* Update 63513-ce_action_wait_prompt_trigger_time_out.yaml
* Update 63513-ce_action_wait_prompt_trigger_time_out.yaml
* postgresql_privs: add support for a type parameter option for types
* postgresql_privs: add support for a type parameter option for types, add changelog fragment
* postgresql_privs: add support for a type parameter option for types, add schema handling
* postgresql_privs: add support for a type parameter option for types, fix typo
* postgresql_privs: add support for a type parameter option for types, add comment
* Deprecate openssl_csr's version.
* Add changelog.
* Change PR so that version will no longer accept values != 1 from 2.14 on.
* Make sure it is a string.
* Add support for format option.
* Improve private key format detection.
* Fix raw format handling.
* Improve error handling.
* Improve raw key handling.
* Add failed raw test.
* Improve raw key loading.
* Simplify tests.
* Add raw format tests.
* Fail if format != 'auto_ignore' is specified for pyopenssl backend.
* Fix quoting.
* Bump version.
* Allow converting private keys between different formats.
* Improve description.
* Add extra args and executable name to podman connection plugin
As with the docker plugin, add extra arguments for the podman
command line. Also add a configurable executable name and check
whether that executable exists on the host, failing if it is not
in PATH.
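A minimal sketch of that executable check, assuming a hypothetical `ensure_executable` helper and option name; the plugin's actual code and option names may differ:

```python
from distutils.spawn import find_executable

from ansible.errors import AnsibleError


def ensure_executable(executable):
    # Fail early if the configured executable cannot be found on PATH.
    path = find_executable(executable)
    if path is None:
        raise AnsibleError('%s command not found in PATH' % executable)
    return path

# e.g. inside the connection plugin (option name is illustrative):
# self.podman_cmd = ensure_executable(self.get_option('podman_executable'))
```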
* Update changelogs/fragments/63166-add-extra-args-executalbe-podman-connection.yaml
Co-Authored-By: Felix Fontein <felix@fontein.de>
* Handle galaxy v2/v3 API diffs for artifact publish response
For publishing a collection artifact
(POST /v3/collections/artifacts/), the response
format is different between v2 and v3.
For v2 galaxy, the 'task' url returned is
a full url with scheme:
{"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
For v3 galaxy, the task url is relative:
{"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
So check which API we are using and update the task url appropriately (sketched below).
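A rough sketch of that check; `api_server` and `available_api_versions` are assumed names here, not necessarily the real attributes:

```python
try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin      # Python 2


def resolve_task_url(api_server, available_api_versions, task):
    # v3 returns a relative path such as '/api/automation-hub/v3/imports/...',
    # while v2 already returns an absolute URL including the scheme.
    if 'v3' in available_api_versions:
        return urljoin(api_server, task)
    return task
```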
* Use full url for all wait_for_import messages
Update unit tests to parameterize the expected
responses and urls.
* update explanatory comment
* Rename n_url to full_url.
* Fix issue with overwrite of the complete path
* Fixes overwrite of the complete path in case there is an extra path stored
in self.api_server
* Normalizes the input to the wait_import_task function so it receives
the same value on both v2 and v3
Builds on #63523
* Update unittests for new call signature
* Add changelog for ansible-galaxy publish API fixes.
The commit 4e895c1 aimed to ensure that TXT record values were sanely
quoted. Sadly it failed to take the scenario of non-existing values
into account. While record values are required for record creation
they are not required for record deletion.
This change rectifies that oversight, saving Ansible from
unsuccessfully trying to operate on NoneType objects.
Resolves #63364
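In essence, the quoting helper just needs to pass `None` through; a minimal sketch with a hypothetical function name, not the module's actual helper:

```python
def quote_txt_value(value):
    # Deleting a record carries no value, so pass None straight through
    # instead of quoting it (the old code failed on NoneType here).
    if value is None:
        return None
    return '"%s"' % value.strip('"')
```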
* Get no_log parameters from subspec
* Add changelog and unit tests
* Handle list of dicts in suboptions
Add fancy error message (this will probably haunt me)
* Update unit tests to test for list of dicts in suboptions
* Add integration tests
* Validate parameters in dict and list
In case it comes in as a string
* Make changes based on feedback, fix tests
* Simplify validators since we only need to validate dicts
Add test for suboptions passed in as strings to ensure they get validated properly and turned into a dictionary.
ci_complete
* Add a few more integration tests
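Roughly, the no_log handling described above amounts to walking the spec and recursing into suboptions, whether the value is a dict or a list of dicts. A simplified sketch (the real module_utils helper also handles string-encoded suboptions and richer error reporting, and scalar no_log values are assumed here):

```python
def collect_no_log_values(argument_spec, params):
    no_log_values = set()
    for name, spec in argument_spec.items():
        value = params.get(name)
        if value is None:
            continue
        if spec.get('no_log'):
            no_log_values.add(value)
        sub_spec = spec.get('options')
        if sub_spec:
            # Suboptions may arrive as a single dict or a list of dicts.
            items = value if isinstance(value, list) else [value]
            for item in items:
                if isinstance(item, dict):
                    no_log_values.update(collect_no_log_values(sub_spec, item))
    return no_log_values
```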
* clean "changed" after it has been processed
without this change, a loop of `debug` tasks with `changed_when`
causes the "changed" status to get lost before output
* runme.sh tests for debug loop status
* fix default collection resolution in adhoc
* if an adhoc command is run with a playbook-dir under a configured collection, default collection resolution is used to resolve unqualified module/action names
* Set ANSIBLE_PLAYBOOK_DIR in integration tests.
* Fix config conflict in ansible integration test.
* add adhoc default collection test
* text-ify warning string
* mysql_replication: add connection_name param for MariaDB multi source support
* mysql_replication: add connection_name param for MariaDB multi source support, add changelog
* add ANSIBLE_PLAYBOOK_DIR envvar support
* allows `ANSIBLE_PLAYBOOK_DIR` envvar as a fallback on CLI types that support `--playbook-dir`. This should have been implemented with #59464, but was missed due to an oversight.
* added basic integration test
* make first-class PLAYBOOK_DIR config entry
* update changelog
Newer versions of ssh-keygen create PEM keys that are not recognized by Paramiko.
Now ansible-test compensates for this by updating the keys it generates so Paramiko will recognize them.
Previously the temporary directory used to run integration tests resided under the user's home directory. This prevented ansible-playbook from detecting the default collection when running tests.
Now the temporary directory is created within the collection to facilitate default collection detection.
This change effectively filters out any network interfaces that were
not explicitly configured for the guest. This fixes unexpected behaviour
where a machine with multiple IP addresses (for example, when Docker is
installed, an internal IPv4 interface is added to communicate with the
container) would show one of the internal addresses in the 'ipv4' field
but no other information about the corresponding hardware interface.
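A rough sketch of the filtering idea using pyVmomi attribute names; the module's actual logic may key off different fields, so treat this as illustrative only:

```python
# Assumes `vm` is a pyVmomi VirtualMachine object.
configured_macs = set()
for device in vm.config.hardware.device:
    # Only network adapters (VirtualEthernetCard subclasses) carry macAddress.
    if hasattr(device, 'macAddress') and device.macAddress:
        configured_macs.add(device.macAddress.lower())

guest_nets = [
    nic for nic in vm.guest.net
    if nic.macAddress and nic.macAddress.lower() in configured_macs
]
```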
This fixes test errors related to failures copying temporary test results files from a remote system back to the local system.
It also speeds up processing of test results and reduces network utilization by avoiding the temporary files.
* Specifying IP addresses needs API version 1.22 or newer.
* Simplify code.
* Use IPAMConfig.IPv*Address instead of IPAddress and GlobalIPv6Address.
* Add changelog.
* Fix syntax errors.
* Add integration test.
* Don't rely on netaddr.
* Normalize IPv6 addresses before comparison.
* Install netaddr, and use it.
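For the normalization step, something along these lines (illustrative values; the module's actual comparison code may differ):

```python
import netaddr


def normalize_ip(address):
    # Collapse different spellings of the same address so the comparison is
    # not fooled by formatting.
    return str(netaddr.IPAddress(address))

changed = normalize_ip('2001:0db8::0001') != normalize_ip('2001:db8::1')  # False
```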
Using a regular recursive resolver to lookup the zone name might not
work when the zone in question belong to a private/internal
domain. The authoritative server being used on the other hand will
definitely know about the zone(s) it's serving.
This approach is also consistent with the nsupdate module already
querying the specified authoritative server for TTL values.
The reason for the implementation having to loop until finding a
direct match is to account for different SOA responses triggered by
CNAMEs and DNAMEs. The previously used `dns.resolver.zone_for_name()`
function does the same.
Resolves #62052
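A minimal sketch of that lookup loop, assuming dnspython and a hypothetical helper name; the module's actual implementation handles more edge cases:

```python
import dns.message
import dns.name
import dns.query
import dns.rdatatype


def lookup_zone(record_name, server_addr):
    # Walk up the name hierarchy, asking the authoritative server for SOA
    # records until an answer matches the queried name exactly; this skips
    # past CNAME/DNAME indirection much like dns.resolver.zone_for_name().
    name = dns.name.from_text(record_name)
    while True:
        query = dns.message.make_query(name, dns.rdatatype.SOA)
        response = dns.query.udp(query, server_addr, timeout=10)
        for rrset in response.answer:
            if rrset.rdtype == dns.rdatatype.SOA and rrset.name == name:
                return name.to_text()
        try:
            name = name.parent()
        except dns.name.NoParent:
            return None
```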
* support creating an image from a volume
* leave filename/volume optional
* enforce volume/filename mutual exclusivity
* bump version_added to 2.10 for volume option
* add changelog fragment
Running from an installed version of ansible-test now results in tests using a dedicated directory for PYTHONPATH instead of using the site-packages directory where ansible is installed.
This provides consistency with tests running from source, which already used a dedicated directory.
Resolves https://github.com/ansible/ansible/issues/62716
* fix get_nc_next.
* add a changelog fragment.
* update the changelog fragment.
* merge two PRs, one depends on the other.
* merge two PRs, one depends on the other.
* update changelog.
* iam_role: Add support for managing MaxSessionDuration
* iam_role: Add support for deleting the IAM Instance Profiles we created
* iam_role: migrate all boto failures to fail_json_aws for consistency
* iam_role: test validity of path so we can throw a more understandable error
* iam_role: (integration tests) Split iam_role integration tests from sts_assume_role tests
- Make the iam_role tests more comprehensive
- Add tests for iam_role_info
* iam_role: (integration tests) Make some of our pauses optional
If the tests appear to be flaky, we may need to enable standard_pauses
Improve tests
- add more unit test cases
- add specific integration test with more cases
Testing shows no major downside to calling .strip() twice in a comprehension vs. using a regular for loop and only calling .strip() once. Going with the comprehension for ease of maintenance and because comprehensions are optimized in CPython.
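For illustration, the two variants being compared look roughly like this (not the actual code):

```python
lines = ['  foo  ', '   ', 'bar']

# Comprehension: .strip() may run twice per line, but stays a single expression.
cleaned = [line.strip() for line in lines if line.strip()]

# Explicit loop: strips once per line at the cost of more code.
cleaned_loop = []
for line in lines:
    stripped = line.strip()
    if stripped:
        cleaned_loop.append(stripped)

assert cleaned == cleaned_loop == ['foo', 'bar']
```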
When creating or deleting an object (e.g. via an API), before/after can
be `None` (or at least represented as such by the library in use). To
avoid modules having to do
`diff={'before': before or '', 'after': after or ''}`
let's just convert `None` to an empty string that can be diffed properly.
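The conversion itself is tiny; a sketch with a hypothetical helper name:

```python
def convert_diff_value(value):
    # A missing before/after state becomes an empty string that can be diffed.
    return '' if value is None else value

diff = {'before': convert_diff_value(None), 'after': convert_diff_value('created\n')}
# -> {'before': '', 'after': 'created\n'}
```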
* Add a representer for AnsibleUnsafeBytes
* changelog
* Add unit tests
Remove native string test until we have time to evaluate how this function should work
Add non-ASCII characters to test cases
* Compare to the string on Python 2
Add a comment in the test about this behavior
* Ensure k8s apply works with check mode
Update the new predicted object with fields from the previous object
before applying in check mode
Don't log output of `file` with `state: absent` on huge virtualenvs!
Fixes #60510
* Use openshift client fix to improve apply for check mode
Use new apply_object method to get a better approximation
of the expected object in check mode.
Requires released upgrade to openshift
* Add changelog fragment for k8s apply check mode fix
* Update changelogs/fragments/60510-k8s-apply-check-mode.yml
Co-Authored-By: Felix Fontein <felix@fontein.de>
* Fix plugin names for collection plugins.
Add an integration test to verify plugin __name__ is correct for collection plugins.
* Fix collection loader PEP 302 compliance.
The `find_module` function now returns `None` if the module cannot be found. Previously it would return `self` for modules which did not exist.
Returning a loader from `find_module` which cannot find the module will result in import errors on Python 2.x when using implicit relative imports.
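The PEP 302 contract in question, sketched with a placeholder existence check (the real collection finder is considerably more involved):

```python
class CollectionFinderSketch(object):
    def __init__(self, known_modules):
        self._known_modules = set(known_modules)

    def find_module(self, fullname, path=None):
        if fullname in self._known_modules:
            return self  # we can provide this module, so act as its loader
        return None      # defer to other finders (and implicit relative imports on Python 2)
```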
* add changelog
* sanity/units/merge fixes
In some remote environments, the `crontab` executable is
overridden with a custom executable, which typically does some
pre/post processing before forwarding to crontab.
Instead of using the hardcoded `/usr/bin/crontab`, this uses
the `get_bin_path` utility to locate the default crontab executable.
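A sketch of the lookup, assuming `module` is the AnsibleModule instance:

```python
# Resolve crontab from PATH instead of hardcoding /usr/bin/crontab, so an
# overridden/wrapper crontab executable is honored.
crontab_cmd = module.get_bin_path('crontab', required=True)
rc, out, err = module.run_command([crontab_cmd, '-l'])
```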
* ce_bgp_neighbor_af: fix a typo in the module's parameter
* ce_bgp_neighbor_af: fix a typo in the module's parameter, add version_added and changelog
* ce_bgp_neighbor_af: fix a typo in the module's parameter, add aliases