* Refactored code
* Added support for Cumulus Linux 2.5.4
* Added support for Cumulus Linux 3.7.3
* Test added
Fixes: #29969
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* Introduce new "required_by' argument_spec option
This PR introduces a new **required_by** argument_spec option which allows you to say *"if parameter A is set, parameter B and C are required as well"*.
- The difference with **required_if** is that **required_if** only adds dependencies when a parameter is set to a specific value, not merely when it is defined.
- The difference with **required_together** is that **required_together** is commutative: *"Parameter A and B are required together, if one of them has been defined"*.
As an example, we need this for the complex options that the xml module provides. One of the issues we often see is that users are not using the correct combination of options, and then are surprised that the module does not perform the requested action(s).
This would be solved by adding the correct dependencies and mutually exclusive options. For us it is important to get this shipped together with the new xml module in Ansible v2.4. (This is related to bugfix https://github.com/ansible/ansible/pull/28657)
```python
module = AnsibleModule(
    argument_spec=dict(
        path=dict(type='path', aliases=['dest', 'file']),
        xmlstring=dict(type='str'),
        xpath=dict(type='str'),
        namespaces=dict(type='dict', default={}),
        state=dict(type='str', default='present', choices=['absent', 'present'], aliases=['ensure']),
        value=dict(type='raw'),
        attribute=dict(type='raw'),
        add_children=dict(type='list'),
        set_children=dict(type='list'),
        count=dict(type='bool', default=False),
        print_match=dict(type='bool', default=False),
        pretty_print=dict(type='bool', default=False),
        content=dict(type='str', choices=['attribute', 'text']),
        input_type=dict(type='str', default='yaml', choices=['xml', 'yaml']),
        backup=dict(type='bool', default=False),
    ),
    supports_check_mode=True,
    required_by=dict(
        add_children=['xpath'],
        attribute=['value', 'xpath'],
        content=['xpath'],
        set_children=['xpath'],
        value=['xpath'],
    ),
    required_if=[
        ['count', True, ['xpath']],
        ['print_match', True, ['xpath']],
    ],
    required_one_of=[
        ['path', 'xmlstring'],
        ['add_children', 'content', 'count', 'pretty_print', 'print_match', 'set_children', 'value'],
    ],
    mutually_exclusive=[
        ['add_children', 'content', 'count', 'print_match', 'set_children', 'value'],
        ['path', 'xmlstring'],
    ],
)
```
* Rebase and fix conflict
* Add modules that use required_by functionality
* Update required_by schema
* Fix rebase issue
* identity: Add GSSAPI support for FreeIPA authentication
This enables the use of GSSAPI for authentication, instead of having
to pass the username and password as part of the playbook run.
If there is GSSAPI support, this makes the password optional and allows
the standard Kerberos environment variables KRB5_CLIENT_KTNAME or
KRB5CCNAME to be used instead.
Note that this depends on the urllib_gssapi library, and GSSAPI is only
enabled if that library is available.
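A minimal sketch of the idea, assuming IPA-style option names (`ipa_host`, `ipa_user`, `ipa_pass`); the real argument handling may differ:
```python
try:
    import urllib_gssapi  # noqa: F401
    HAS_GSSAPI = True
except ImportError:
    HAS_GSSAPI = False


def ipa_argument_spec():
    # With GSSAPI available the password becomes optional; Kerberos credentials
    # from KRB5CCNAME / KRB5_CLIENT_KTNAME are used instead.
    return dict(
        ipa_host=dict(type='str', default='ipa.example.com'),
        ipa_user=dict(type='str', default='admin'),
        ipa_pass=dict(type='str', required=not HAS_GSSAPI, no_log=True),
    )
```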
* identity: Add documentation for GSSAPI authentication for FreeIPA
This documentation describes how to use GSSAPI authentication with the
IPA identity modules.
* identity: Add changelog for GSSAPI support for IPA
This adds the changelog entry for the GSSAPI authentication feature for
the IPA identity module.
* Move docker_ module_utils into subpackage.
* Remove docker_ prefix from module_utils.docker modules.
* Adding jurisdiction for module_utils/docker to $team_docker.
* Making docker* unit tests community supported.
* Linting.
* Python < 2.6 is not supported.
* Refactoring docker-py version comments. Moving them to doc fragments. Cleaning up some indentations.
* facts: solaris: introduce distribution_major version detection for Solaris
Currently, there's no distribution_major in facts module on Solaris OS.
Use "uname -r" output to report major version.
Before the patch we get this on Solaris 11.3:
$ ansible -o solaris11 -m setup -a filter=ansible_distribution_major_version
solaris11 | SUCCESS => {"ansible_facts": {}, "changed": false}
and after this patch, output is the following:
$ ansible -o solaris11 -m setup -a filter=ansible_distribution_major_version
solaris11 | SUCCESS => {"ansible_facts": {"ansible_distribution_major_version": "11"}, "changed": false}
Tested with Solaris 11.3 and Solaris 10 (both are x86_64 VMs)
Includes patch for test/units.
Fixes #18197
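A rough illustration of the parsing (not the actual collector code), assuming `uname -r` returns something like `5.11` on Solaris 11:
```python
def solaris_major_version(uname_r_output):
    # 'uname -r' returns e.g. '5.10' or '5.11'; the part after the dot
    # is the distribution major version ('10', '11').
    return uname_r_output.strip().split('.')[-1]


print(solaris_major_version('5.11\n'))  # -> '11'
```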
* Try to fix test unit
* should work now...
* fixes for W291 (trailing whitespace) and E265 (block comment)
* mock uname_release for solaris 10 and solaris 11
* typo uname_v -> uname_r
* rebase
* fix pep8 E302: 2 blank lines
* remove int() cast to match test case
* use single function for uname_r and uname_v
* add solaris 11.4 OS to distribution test unit
* fix pep8 sanity - E231 missing whitespace
* distribution_major_version variable strip newline
* mocker test function for mock_get_uname with parameters instead of two different functions
* failed to make one function with test unit, revert to use 2 different functions
* try to use single get_uname function
* fix pep8: E703
* parallelize getting mount info
* fixed timeout and capped the thread count at 8
- minor cleanup
- avoid empty mount entries
- set timeout on get
- enforce timeout per mount/thread
- make note on failure per mount
- make note on timeout per mount
- ensure proper pool control
- minor fixes
- less vars, simpler code
- move filter 'pre threading'
- remove timeout for all mounts, now per mount
- also use cpu count from multiprocessing lib
- moved 'bind' options out of thread as per comments
- warn on error, more info on failure to get info
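A simplified sketch of the pattern described above (names are illustrative, not the actual facts code): a ThreadPool capped at min(cpu_count, 8) with a per-mount timeout on each async result:
```python
import multiprocessing
from multiprocessing.pool import ThreadPool


def get_mount_info(mount):
    # placeholder for the per-mount statvfs/size gathering
    return {'mount': mount}


def collect_mounts(mounts, timeout=10):
    pool = ThreadPool(processes=max(1, min(len(mounts), multiprocessing.cpu_count(), 8)))
    results = [(mount, pool.apply_async(get_mount_info, (mount,))) for mount in mounts]
    pool.close()

    collected = []
    for mount, result in results:
        try:
            # enforce the timeout per mount/thread rather than for all mounts at once
            info = result.get(timeout=timeout)
            if info:  # avoid empty mount entries
                collected.append(info)
        except multiprocessing.TimeoutError:
            # note the timeout for this mount and keep going
            collected.append({'mount': mount, 'note': 'timed out while gathering mount info'})
    return collected


print(collect_mounts(['/', '/home']))
```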
* removed info declaration from documentation fragment as this is not implemented
* added optional headers for POST and PUT requests
* updated documentation
* added missing headers field declaration
* removed info choice from state field
* added tests for the new utm_utils function
* fixed class invocation
* added missing required params
* fixed the pytests
* Move get_all_subclasses out of sys_info as it is unrelated to system
information.
* get_all_subclasses now returns a set() instead of a list.
* Don't port get_platform to sys_info as it is deprecated. Code using
the common API should just use platform.system() directly.
* Rename load_platform_subclass() to get_platform_subclass and do not
instantiate the returned class.
* Test the compat shims in module_utils/basic.py separately from the new
API in module_utils/common/sys_info.py and module_utils/common/_utils.py
* Revert "allow caller to deal with timeout (#49449)"
This reverts commit 63279823a7.
Flawed on many levels
* Adds poor API to a public function
* Papers over the fact that the public function is doing something bad
by catching exceptions it cannot handle in the first place
* Papers over the real cause of the issue which is a bug in the timeout
decorator
* Doesn't reraise properly
* Catches the wrong exception
Fixes #49824, Fixes #49817
* Make the timeout decorator properly raise an exception outside of the function's scope
signal handlers which raise exceptions will never work well because the
exception can be raised anywhere in the called code. This leads to
exception race conditions where the exceptions could end up being
handled by unintended pieces of the called code.
The timeout decorator was using just that idiom. It was especially bad
because the decorator syntactically occurs outside of the called code
but because of the signal handler, the exception was being raised inside
of the called code.
This change uses a thread instead of a signal to manage the timeout in
parallel to the execution of the decorated function. Since raising of
the exception happens inside of the decorator, now, instead of inside of
a signal handler, the timeout exception is raised from outside of the
called code as expected which makes reasoning about where exceptions are
to be expected intuitive again.
Fixes #43884
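A minimal sketch of the thread-based approach (not the actual module_utils code): run the decorated function in a worker thread, wait up to the timeout, and raise from the decorator's own frame:
```python
import functools
import threading


class TimeoutException(Exception):
    pass


def timeout(seconds=10):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = {}

            def target():
                try:
                    result['value'] = func(*args, **kwargs)
                except Exception as exc:
                    result['error'] = exc

            worker = threading.Thread(target=target)
            worker.daemon = True
            worker.start()
            worker.join(seconds)
            if worker.is_alive():
                # raised here, outside the called code, not from a signal
                # handler firing somewhere inside it
                raise TimeoutException('timed out after %s seconds' % seconds)
            if 'error' in result:
                raise result['error']
            return result.get('value')
        return wrapper
    return decorator
```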
* Add a common case test.
Adding an integration test driven from our unittests. Most of the time
we'll timeout in run_command which is running things in a subprocess.
Create a test for that specific case in case anything funky comes up
between threading and execve.
* Don't use OSError-based TimeoutError as a base class
Unlike most standard exceptions, OSError has a specific parameter list
with specific meanings. Instead follow the example of other stdlib
functions, concurrent.futures and multiprocessing and define a separate
TimeoutException.
* Add comment and docstring to point out that this is not the Python 3 TimeoutError
Since the 'platform.dist()' and 'platform.linux_distribution()'
methods will be removed from future versions of python, this
provides an alternative to replace Ansible's use of those
methods.
lib/ansible/module_utils/distro.py is a copy of
https://github.com/nir0s/distro/blob/master/distro.py
This module is originally from https://github.com/nir0s/distro
and is licensed under the Apache License, Version 2.0.
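Roughly, usage moves from the deprecated platform calls to the vendored module; the calls shown (id(), version(), linux_distribution()) are the API of the upstream distro package, assuming Ansible is importable:
```python
# Deprecated and removed in newer Python versions:
#   platform.dist()
#   platform.linux_distribution()

# Vendored replacement:
from ansible.module_utils import distro

print(distro.id())         # e.g. 'ubuntu'
print(distro.version())    # e.g. '18.04'
print(distro.linux_distribution(full_distribution_name=False))
```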
* set ansible_os_family from name variable in os-release for clearlinux system
Signed-off-by: Josue David Hernandez Gutierrez <josue.d.hernandez.gutierrez@intel.com>
* Add os_family for clear linux and clear linux mixes
Signed-off-by: Josue David Hernandez Gutierrez <josue.d.hernandez.gutierrez@intel.com>
* Fix for changes in clearlinux
Clear Linux now provides an /etc/os-release file and Ansible was identifying it as NA;
this change allows Ansible to detect it correctly
Signed-off-by: Josue David Hernandez Gutierrez <josue.d.hernandez.gutierrez@intel.com>
* Add changelog fragment for clearlinux changes
Signed-off-by: Josue David Hernandez Gutierrez <josue.d.hernandez.gutierrez@intel.com>
* VMware: Fix module usages in module_utils
* Skip test for Python 2.6 as SSL context is not available in Python 2.6
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* FTD modules: bug fixes and upsert functionality
* Fix sanity checks
* Fix unit tests for Python 2.6
* Log status code for login/logout
* Use string formatting in logging
* More generic comparison code from docker_container to docker_common.
* More flexibility if a is None and method is allow_to_present.
Note that this does not affect docker_container, as there a is never None.
* Update docker_secret and docker_config: simplify labels comparison.
* Added unit tests.
* Use proper subsequence test for allow_more_present for lists.
Note that this does not affect existing code in docker_container, since lists
don't use allow_more_present. Using allow_more_present will only be possible
in Ansible 2.8.
* pep8
* Move ansible.compat.tests to test/units/compat/.
* Fix unit test references to ansible.compat.tests.
* Move builtins compat to separate file.
* Fix classification of test/units/compat/ dir.
* Separate networking tools that may be used by modules outside of networking so changes to networking-only utilities don't trigger AWS integration tests
* Add unit tests for moved network utils
* Add comment to prevent imports from being mistakenly removed
* Move to_bits as well
* Add common and Swagger client utils for FTD modules
* Update FTD HTTP API plugin and add unit tests for it
* Add configuration layer handling object idempotency
* Add ftd_configuration module with unit tests
* Add ftd_file_download and ftd_file_upload modules with unit tests
* Validate operation data and parameters
* Fix ansible-doc, boilerplate and import errors
* Fix pep8 sanity errors
* Update object comparison to work recursively
* Add copyright
* Remove use of simplejson throughout code base. Fixes #42761
* Address failing tests
* Remove simplejson from contrib and other outlying files
* Add changelog fragment for simplejson removal
Now that we don't need to worry about python-2.4 and 2.5, we can make
some improvements to the way AnsiballZ handles modules.
* Change AnsiballZ wrapper to use import to invoke the module
We need the module to think of itself as a script because it could be
coded as:
    main()
or as:
    if __name__ == '__main__':
        main()
Or even as:
    if __name__ == '__main__':
        random_function_name()
A script will invoke all of those. Prior to this change, we invoked
a second Python interpreter on the module so that it really was
a script. However, this means that we have to run python twice (once
for the AnsiballZ wrapper and once for the module). This change makes
the module think that it is a script (because __name__ in the module ==
'__main__') but it's actually being invoked by us importing the module
code.
There are three ways we've come up with to do this.
* The most elegant is to use zipimporter and tell the import mechanism
that the module being loaded is __main__:
* 5959f11c9d/lib/ansible/executor/module_common.py (L175)
* zipimporter is nice because we do not have to extract the module from
the zip file and save it to the disk when we do that. The import
machinery does it all for us.
* The drawback is that modules do not have a __file__ which points
to a real file when they do this. Modules could be using __file__
for a variety of reasons; most of those probably have
replacements (the most common one is to find a writable directory
for temporary files. AnsibleModule.tmpdir should be used instead)
We can monkeypatch __file__ in from AnsibleModule initialization
but that's kind of gross. There's no way I can see to do this
from the wrapper.
* Next, there's imp.load_module():
* https://github.com/abadger/ansible/blob/340edf7489/lib/ansible/executor/module_common.py#L151
* imp has the nice property of allowing us to set __name__ to
__main__ without changing the name of the file itself
* We also don't have to do anything special to set __file__ for
backwards compatibility (although the reason for that is the
drawback):
* Its drawback is that it requires the file to exist on disk so we
have to explicitly extract it from the zipfile and save it to
a temporary file
* The last choice is to use exec to execute the module:
* https://github.com/abadger/ansible/blob/f47a4ccc76/lib/ansible/executor/module_common.py#L175
* The code we would have to maintain for this looks pretty clean.
In the wrapper we create a ModuleType, set __file__ on it, read
the module's contents in from the zip file and then exec it.
* Drawbacks: We still have to explicitly extract the file's contents
from the zip archive instead of letting python's import mechanism
handle it.
* Exec also has hidden performance issues and breaks certain
assumptions that modules could be making about their own code:
http://lucumr.pocoo.org/2011/2/1/exec-in-python/
Our plan is to use imp.load_module() for now, deprecate the use of
__file__ in modules, and switch to zipimport once the deprecation
period for __file__ is over (without monkeypatching a fake __file__ in
via AnsibleModule).
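A hypothetical sketch of the imp-based plan (the real wrapper differs in details): extract the embedded module to a temporary file and import it under the name `__main__`, so its `if __name__ == '__main__':` block still fires without spawning a second interpreter:
```python
import imp
import os
import shutil
import tempfile


def run_module_as_script(module_source):
    # imp needs a real file on disk, which is the main drawback compared
    # to zipimport; the upside is that __file__ stays backwards compatible.
    temp_dir = tempfile.mkdtemp(prefix='ansible_')
    module_path = os.path.join(temp_dir, '__main__.py')
    try:
        with open(module_path, 'wb') as f:
            f.write(module_source)
        # Importing under the name '__main__' makes the module believe it is
        # a script, so guarded main() calls still run.
        imp.load_source('__main__', module_path)
    finally:
        shutil.rmtree(temp_dir)


# run_module_as_script(b"if __name__ == '__main__':\n    print('hi')\n")
```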
* Rename the name of the AnsiBallZ wrapped module
This makes it obvious that the wrapped module isn't the module file that
we distribute. It's part of trying to mitigate the fact that the module
is now named __main__.py in tracebacks.
* Shield all wrapper symbols inside of a function
With the new import code, all symbols in the wrapper become visible in
the module. To mitigate the chance of collisions, move most symbols
into a toplevel function. The only symbols left in the global namespace
are now _ANSIBALLZ_WRAPPER and _ansiballz_main.
revised porting guide entry
Integrate code coverage collection into AnsiballZ.
ci_coverage
ci_complete
d7df072b96 changed how we call
journal.send() from positional arguments to keyword arguments. So we
need to update the test to check for the arguments it was called with in
the keyword args, not in the positional args.
* Properly handle default package manager vs apt
For distros where apt might be installed but is not the default
package manager for the distro, properly identify the default distro
package manager during fact finding and re-use fact finding from
DistributionFactCollector instead of reimplementing small
portions of it in PkgMgrFactCollector
Add unit test to always check the apt + Fedora combination to test
the new code.
Fixes #34014
Signed-off-by: Adam Miller <admiller@redhat.com>
* remove q debugging output I accidentally left behind
Signed-off-by: Adam Miller <admiller@redhat.com>
* add os_family to the conditional so we're only hitting that code path when needed
Signed-off-by: Adam Miller <admiller@redhat.com>
* setup for a _check* pattern for general os_family group pkg_mgr checking
Signed-off-by: Adam Miller <admiller@redhat.com>
* use Mock.patch decorator for os.path.exists in TestPkgMgrFactsAptFedora
Signed-off-by: Adam Miller <admiller@redhat.com>
Move dict_merge from azure_rm_resource module to
module_utils.common.dict_transformations and add tests.
Use dict_merge to provide a fairly realistic, reliable
diff output when k8s-based modules are run in check_mode.
Rename unit tests so that they actually run and reflect
the module_utils they're based on.
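A minimal sketch of the recursive merge moved into module_utils.common.dict_transformations (the real function may handle more edge cases):
```python
def dict_merge(a, b):
    """Recursively merge dict b into dict a and return the result."""
    result = dict(a)
    for key, value in b.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = dict_merge(result[key], value)
        else:
            result[key] = value
    return result


current = {'metadata': {'labels': {'app': 'web'}}, 'replicas': 2}
patch = {'metadata': {'labels': {'tier': 'front'}}, 'replicas': 3}
# A check_mode diff can show dict_merge(current, patch) as the predicted result.
print(dict_merge(current, patch))
```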
* Fix tmpdir on non root become
- also avoid exception if tmpdir and remote_tmp are None
- give 'None' on de-escalation so tempfile will fall back to its default behaviour
and use system dirs
- fix issue with a bad tempdir (not existing/not creatable/not writable),
i.e. nobody and ~/.ansible/tmp
- added tests for blockfile case
* Revert "Temporarily revert c119d54"
This reverts commit 5c614a59a6.
* changes based on PR feedback and changelog fragment
* changes based on the review
* Fix tmpdir when makedirs failed so we just use the system tmp
* Let missing remote_tmp fail
If remote_tmp is missing then there's something more basic wrong in the
communication from the controller to the module-side. It's better to
be alerted in this case than to silently ignore it.
jborean and I have independently checked what happens if the user sets
ansible_remote_tmp to empty string and !!null and both cases work fine.
(null is turned into a default value controller-side. empty string
triggers the warning because it is probably not a directory that the
become user is able to use).
* Issue #39860: Add 'not_contains' method to parsing.py
* Issue #39860 Adds self.negate to Conditional class in lib/ansible/module_utils/network/common/parsing.py
* Issue #39860 Fix singleton-comparison issue per sanity tests
* Issue #39860 'test/integration/targets/nxos_command/tests/cli/not_comparison_operator.yaml' integration test
* Issue #39860 Add unit tests to '../../test/units/module_utils/network/common/test_parsing.py'
* Issue #39860 Fix singleton comparison issue
* Fix E302 expected 2 blank lines, found 1
* Issue #39860 Add license header to unit tests
* Issue #39860 Move integration test to 'test/integration/targets/nxos_command/tests/common/'; remove unnecessary comment from unit test
* Issue #39860 remove unnecessary comment from unit test
When parsing the distribution files such as /etc/os-release, we extract
the full distribution version but not the major version. As such, the
ansible_distribution_major_version ends up being 'NA' whereas the
ansible_distribution_version contains the full version.
Before this patch we get this on openSUSE Leap 15
ansible -o localhost -m setup -a filter=ansible_distribution_major_version
localhost | SUCCESS => {"ansible_facts": {"ansible_distribution_major_version": "NA"}, "changed": false}
After this patch we get this
ansible -o localhost -m setup -a filter=ansible_distribution_major_version
localhost | SUCCESS => {"ansible_facts": {"ansible_distribution_major_version": "15"}, "changed": false}
This also fixes the Tumbleweed distribution test to report a proper
major version and also adds a test for openSUSE Leap 15.0 to avoid
potential future regressions.
Fixes: #41410
* Adds requests.Session like class
* py2 syntax fix
* Add a few examples to the Request docstrings
* Add helper methods and docs
* Fix test failures
* Switch tests to test Request instead of open_url, add simple open_url test to validate functionality
* Fix filename in replace-urlopen code smell test
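An indicative usage sketch of the session-like class from ansible.module_utils.urls; treat the exact constructor arguments as illustrative:
```python
from ansible.module_utils.urls import Request

# A Request instance keeps settings (and cookies) across calls,
# similar to requests.Session.
r = Request(validate_certs=True, timeout=10)

resp = r.get('https://api.github.com/')
print(resp.code)

# Helper methods mirror open_url() for the individual HTTP verbs.
r.post('https://httpbin.org/post', data='{"x": 1}',
       headers={'Content-Type': 'application/json'})
```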
* To fix the following GitHub issues: 35774, 36574 and 39494
* To fix the following GitHub issues: 35774, 36574 and 39494
* To fix the following GitHub issues: 35774, 36574 and 39494
* To fix the following GitHub issues: 35774, 36574 and 39494
* To fix the following GitHub issues: 35774, 36574 and 39494
* To fix the following GitHub issues: 35774, 36574 and 39494
* removed old_name new entry to make ui cleaner
* removed old_name new entry to make ui cleaner
* removed old_name new entry to make ui cleaner
* removed old_name new entry to make ui cleaner
* removed old_name new entry to make ui cleaner
* removed old_name new entry to make ui cleaner
* to resolve the bug 40709
* resolve shippable error
* resolve shippable error
* resolve shippable error
* resolve shippable error
* resolve shippable error
* resolve shippable error
* resolve shippable error
* resolve shippable error
* resolve shippable error
* to fix shippable nios automation error
* modified the name input parsing method
* modified the name input parsing method
* modified the name input parsing method
* modified the name input parsing method
* modified the name input parsing method
* modified the name input parsing method
* modified the name input parsing method
* modified the name input parsing method
* modified the name input parsing method
* shippable error fix
* shippable error fix
* shippable error fix
* shippable error fix
* shippable error fix
* review comment fix
* shippable error fix
* shippable error fix
When running the test test/units/module_utils/urls/test_open_url.py
test_open_url_no_validate_certs, the test fails because of the SSLv2
check.
Test is run on a machine using openssl 1.1.0g. By reading the openssl
man page[1], one can see that support for SSLv2 has been removed.
> Support for SSLv2 and the corresponding SSLv2_method(),
> SSLv2_server_method() and SSLv2_client_method() functions where removed
> in OpenSSL 1.1.0.
>
> SSLv23_method(), SSLv23_server_method() and SSLv23_client_method() were
> deprecated and the preferred TLS_method(), TLS_server_method() and
> TLS_client_method() functions were introduced in OpenSSL 1.1.0.
Hence this commit removes the use of this flag when it is not defined.
[1] https://www.openssl.org/docs/man1.1.0/ssl/SSLv23_method.html
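Conceptually, the guard looks like this (a sketch, not the exact test change): fall back gracefully instead of assuming the SSLv2-related constants exist in the ssl module:
```python
import ssl

# OpenSSL 1.1.0 removed SSLv2 entirely, so any SSLv2-related constant may be
# missing; guard the reference instead of assuming it is there.
LEGACY_OPTIONS = ssl.OP_NO_SSLv3 | getattr(ssl, 'OP_NO_SSLv2', 0)
print(LEGACY_OPTIONS)
```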
* create module tmpdir based on remote_tmp
* Source remote_tmp from controller if possible
* Fixed sanity test and not use lambda
* Added expansion of env vars to the remote tmp
* Fixed sanity issues
* Added note around shell remote_tmp option
* Changed fallback tmp dir to ~/.ansible/tmp to match the shell defaults
* Allow subspec defaults to be processed when the parent argument is not supplied
* Allow this to be configurable via apply_defaults on the parent
* Document attributes of arguments in argument_spec
* Switch manageiq_connection to use apply_defaults
* add choices to api_version in argument_spec
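For illustration (option names are made up, modelled on the manageiq_connection case described above): with apply_defaults=True the suboption defaults are applied even when the parent argument is never supplied:
```python
argument_spec = dict(
    manageiq_connection=dict(
        type='dict',
        # With apply_defaults=True the suboption defaults are filled in even
        # when the user never supplies 'manageiq_connection' at all.
        apply_defaults=True,
        options=dict(
            url=dict(type='str', default='http://localhost'),
            username=dict(type='str'),
            password=dict(type='str', no_log=True),
            validate_certs=dict(type='bool', default=True),
        ),
    ),
)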
* Adding slxos_config module and supporing util functions.
* Adding slxos module_utils load_config test
* Adding slxos_config module tests
* Removing unneeded required false statements from slxos_config module
* Removing version_aded from slxos_config module
* Removing force and save from slxos config module
* Removing save test
* Adding slxos_command module and supporting module_utils.
This commit adds the slx_command module and tests as well as the
required slxos module_utils.
* Update copyright in header
* Adding missing module init
* Cleaning up shebangs/licensing.
* Incorporating feedback
Removing reference to `waitfor` alias in `slxos_command` module.
Adding `Extreme Networks` to `short_description` of `slxos_command` module.
* Adding cliconf tests
* Fixing 3.X tests
* Adding docstrings to test methods for slxos cliconf tests
* Adding slxos terminal tests
* Adding slxos module_utils tests
* Adding Extreme Networks team members to BOTMETA.yml
* Handle duplicate headers, and make it easier for users to use cookies, by providing a pre-built string
* Ensure proper cookie ordering, make key plural
* Add note about cookie sort order
* Add tests for duplicate headers and cookies_string
* Extend tests, normalize headers between py2 and py3
* Add some notes in test code
* Don't use AttributeError, use six.PY3. Use better names.
* Start of tests for ansible.module_utils.urls
* Start adding file for generic functions throughout urls
* Add tests for maybe_add_ssl_handler
* Remove commented out line
* Improve coverage of maybe_add_ssl_handler, test basic_auth_header
* Start tests for open_url
* pep8 and ignore urlopen in test_url_open.py tests
* Extend auth tests, add test for validate_certs=False
* Finish tests for open_url
* Add tests for fetch_url
* Add fetch_url tests to replace-urlopen ignore
* dummy instead of _
* Add BadStatusLine test
* Reorganize/rename tests
* Add tests for RedirectHandlerFactory
* Add POST test to confirm behavior is to convert to GET
* Update tests to handle recent changes to RedirectHandlerFactory
* Special test, just to confirm that aliasing http_error_308 to http_error_307 does not cause issues with urllib2 type redirects
NSO operations can take much longer than 10 seconds as they operate on
real network equipment; set the default timeout to 5 minutes and allow for
user override.
The code falsely assumed that values cannot have cyclic dependencies. Fix by
removing the dependency on self and looking for cycles; if one is found, remove the
dependency to get a partial sort done.
Fix issues in ValueBuilder used in nso_config and nso_verify so that it
can handle leaf-list in NSO 4.5 and detect identityref types from
unions.
Fail gracefully if a type is not found.
* allows ib_spec attrs to be filtered in update
This change will allow the ib_spec entries to be filtered on a change
object by setting the update keyword to false. The default value for
update is true. When the update keyword is set to false, the keyed
entry will be removed from the update object before it is sent to the
api endpoint.
fixes #36563
* fix up pep8 issues
* basic: allow one or more values when a list param has choices
* add unit tests
* optimize a bit
* re-add get_exception import
* a number of existing modules expect to be able to get it from basic.py
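An illustrative spec for the behavior described above; the exact error wording is approximate:
```python
argument_spec = dict(
    protocols=dict(type='list', choices=['http', 'https', 'ssh']),
)

# With a list type plus choices, every supplied element must be one of the
# choices, so one or more values pass validation:
#   protocols: ['https']        -> ok
#   protocols: ['http', 'ssh']  -> ok
#   protocols: ['ftp']          -> fails, roughly "value of protocols must be
#                                  one or more of: http, https, ssh"
```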
* ACI: Change result output as discussed
* Update all modules to use new aci.exit_json()
* Update output_level spec and docs
* Fix integration tests
* Small PEP8 fix
* Assorted fixes to tests and aci_rest
* More test fixes and support for ANSIBLE_DEBUG
* Fix another PEP8 issues
* Move response handling inside ACI module
* Reform of ACI error handling and error output
* Diff multiline json output
* Fix a few more tests
* Revert aci_bd tests
* Small correction
* UI change: existing->current, original->previous
* UI change: config->sent
* Update all modules with RETURN values
* Fix a few more tests
* Improve docstring and add 'raw' return value
* Fix thinko
* Fix sanity/pep8 issues
* Rewrite unit tests to comply with new design
* refactors nios api shared code to handle provider better
This change refactors the shared code to be easily shared between
modules, plugins and dynamic inventory scripts. All parts now implement
the provider arguments uniformly.
This also provides a centralized fix to suppress urllib3 warnings coming
from the requests library implemented by infoblox_client
* fix up pep8 errors
* fix missing var name
Add deps/requires for fact collectors
Fact collectors can now set a required_facts
class attribute that will be a set of the names
of fact collectors they require to be run first.
ie, if a collector needs to know the ansible_distribution,
it should set its required_facts to include 'distribution'
required_facts = set(['distribution'])
If a collector requires another collector, it gets added
to the selected collector names.
We then topologically sort the ordering of the collectors
so that deps work out (ie, 'distribution' will run before
'service_mgr')
required_facts were added to the collectors for:
- network (requires 'distribution', 'platform')
- hardware (requires 'platform')
- service_mgr (requires 'distribution', 'platform')
Fix name references for facts (need 'ansible_' prefix)
in service_mgr
Fixes #30753
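A hedged sketch of how required_facts can drive a dependency-respecting collector order (a simple depth-first sort over collector names; the real implementation lives in module_utils/facts/collector.py and differs in detail):
```python
def order_collectors(collectors):
    """Return collector names ordered so that required_facts run first.

    'collectors' maps a collector name to the set of collector names it
    requires, e.g. {'service_mgr': {'distribution', 'platform'}, ...}.
    """
    ordered = []
    seen = set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in sorted(collectors.get(name, set())):
            visit(dep)
        ordered.append(name)

    for name in sorted(collectors):
        visit(name)
    return ordered


deps = {
    'platform': set(),
    'distribution': {'platform'},
    'network': {'distribution', 'platform'},
    'service_mgr': {'distribution', 'platform'},
    'hardware': {'platform'},
}
print(order_collectors(deps))
# ['platform', 'distribution', 'hardware', 'network', 'service_mgr']
```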
The accumulated collected_facts was being updated
with new facts _after_ filtering them. So only
facts that pass the filter would ever be passed
to other fact collectors.
For 'filter=ansible_service_mgr', even though it requires
the platform and distribution facts and even collects them,
they would get filtered out and never passed to the other
collectors that need them (service_mgr for ex).
Fix is just to add the unfiltered facts to collected_facts.
Adds unit tests for fact filter and collected_facts.
Fixes #32286
The search string used to look for Clear Linux
was changed in 45a9f96774 to
be more specific, but was too specific. Now finding
a substring match for 'Clear Linux' in /usr/lib/os-release
is enough to consider a match.
Since the details of the full name in os-release varies
('Clear Linux Software for Intel Architecture',
'Clear Linux OS for Intel Architecture', etc) the
search string match was failing and would fall back to the
'first word in the release file' method resulting in
ansible_distribution='NAME="Clear'
Also add a meta fact indicating which search string
was matched.
Test case info from:
https://github.com/ansible/ansible/issues/31501#issuecomment-340861535
Fixes #31501
* Allow protection of certain keys during camel_to_snake
Create an `ignore_list` parameter that preserves the case
of the contents of certain dictionaries. Most valuable
for `tags` but other uses might arise.
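A hedged example of the ignore_list behavior (the import path and exact signature of camel_dict_to_snake_dict are treated as indicative): keys inside the listed sub-dicts keep their original case:
```python
from ansible.module_utils.ec2 import camel_dict_to_snake_dict

data = {
    'TargetGroupName': 'web',
    'Tags': {'CostCenter': 'R&D', 'Name': 'Web-Prod'},
}

# Without protection, the tag keys are snake_cased too:
print(camel_dict_to_snake_dict(data))
# {'target_group_name': 'web', 'tags': {'cost_center': 'R&D', 'name': 'Web-Prod'}}

# With ignore_list, the contents of 'Tags' are preserved as-is:
print(camel_dict_to_snake_dict(data, ignore_list=['Tags']))
# {'target_group_name': 'web', 'tags': {'CostCenter': 'R&D', 'Name': 'Web-Prod'}}
```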
* Port ec2_vpc_route_table to boto3
Update tests to reflect fixes in boto3.
* Add RETURN documentation to ec2_vpc_route_table
* Update DOCUMENTATION to be valid yaml
* Add check mode tests
* basic.py: add mock to os.path.exists
* set_*_if_different: if check_mode enabled & file missing: set changed to True
Fixes #32676
Thanks to mscherer and Spredzy for the distributed triplet programming
session!
* update version parsing and move requirements to nso_* modules
prepare for introduction of nso_show module that has other version
requirements than the existing nso_* modules.
* Add nso_show module for retrieving config from Cisco NSO
New module that supports getting configuration and operational data
from Cisco NSO.
* WIP adds network subnetting functions
* adds functions to convert between netmask and masklen
* adds functions to verify netmask and masklen
* adds function to determine network and subnet from an address / mask pair (see the sketch below)
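A hedged, pure-stdlib sketch of the conversion helpers described (IPv4 only for brevity; the real module_utils functions differ in naming and scope):
```python
import socket
import struct


def to_netmask(masklen):
    # 24 -> '255.255.255.0'
    bits = 0xffffffff ^ ((1 << (32 - int(masklen))) - 1)
    return socket.inet_ntoa(struct.pack('!I', bits))


def to_masklen(netmask):
    # '255.255.255.0' -> 24
    bits = struct.unpack('!I', socket.inet_aton(netmask))[0]
    return bin(bits).count('1')


def to_subnet(address, masklen):
    # ('192.168.1.10', 24) -> '192.168.1.0/24'
    addr = struct.unpack('!I', socket.inet_aton(address))[0]
    mask = struct.unpack('!I', socket.inet_aton(to_netmask(masklen)))[0]
    network = socket.inet_ntoa(struct.pack('!I', addr & mask))
    return '%s/%s' % (network, masklen)


print(to_netmask(24))                # 255.255.255.0
print(to_masklen('255.255.255.0'))   # 24
print(to_subnet('192.168.1.10', 24)) # 192.168.1.0/24
```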
* network_common: add a function to get the first 48 bits in an IPv6 address.
ec2_group: only use network bits of a CIDR.
* Add tests for CIDRs with host bits set.
* ec2_group: add warning if CIDR isn't the networking address.
* Fix pep8.
* Improve wording.
* fix import for network utils
* Update tests to use pytest instead of unittest
* add test for to_ipv6_network()
* Fix PEP8
Split the one monolithic test for basic.py into several files
* Split test_basic.py along categories.
This is preliminary to get a handle on things. Eventually we may want
to further split it so each file is only testing a single function.
* Cleanup unused imports from splitting test_basic.py
* Port atomic_move test to pytest.
Working on getting rid of need to maintain procenv
* Split a test of symbolic_mode_to_octal to follow unittest best practices
Each test should only invoke the function under test once
* Port test_argument_spec to pytest.
* Fix suboptions failure
Allow CamelCase version of snake_dict_to_camel_dict
(currently only dromedaryCase is supported)
Add reversible option to camel_dict_to_snake_dict
Add tests for both of these options
* Deprecate Entity, EntityCollection and use subspec in network modules
* As per proposal https://github.com/ansible/proposals/issues/76
deprecate use of Entity, EntityCollection, ComplexDict, ComplexList
and use subspec instead.
* Refactor ios modules
* Refactor eos modules
* Refactor vyos modules
* Refactor nxos modules
* Refactor iosxr modules
* Add support for key in suboptions handling
* Fix CI issues
* Porting tests to pytest
* Achievement Get: No longer need mock/generator.py
* Now done via pytest's parametrization
* Port safe_eval to pytest
* Port text tests to pytest
* Port test_set_mode_if_different to pytest
* Change conftest AnsibleModule fixtures to be more flexible
* Move the AnsibleModules fixtures to module_utils/conftest.py for sharing
* Testing the argspec code requires:
* injecting both the argspec and the arguments.
* Patching the arguments into sys.stdin at a different level
* More porting to obsolete mock/procenv.py
* Port run_command to pytest
* Port known_hosts tests to pytest
* Port safe_eval to pytest
* Port test_distribution_version.py to pytest
* Port test_log to pytest
* Port test__log_invocation to pytest
* Remove unneeded import of procenv in test_postgresql
* Port test_pip to pytest style
* As part of this, create a pytest ansiblemodule fixture in
modules/conftest.py. This is slightly different than the
approach taken in module_utils because here we need to override the
AnsibleModule that the modules will inherit from instead of one that
we're instantiating ourselves.
* Fixup usage of parametrization in test_deprecate_warn
* Check that the pip module failed in our test
* Refactor common network shared and platform specific code into package (part-1)
As per proposal #76 refactor common network shared and platform specific
code into sub-package.
https://github.com/ansible/proposals/issues/76
* ansible.module_utils.network.common - command shared functions
* ansible.module_utils.network.{{ platform }} - where platform is platform specific shared functions
* Fix review comments
* Fix review comments
* jsonify inventory
* smarter import, don't pass kwargs where not needed
* added datetime
* Eventual plan for json utilities to migrate to common/json_utils when we split
basic.py; no need to move jsonify to another file now as we'll do that later.
* json_dict_bytes_to_unicode and json_dict_unicode_to_bytes will also
change names and move to common/text.py at that time (not to json).
Their purpose is to recursively change the elements of a container
(dict, list, set, tuple) into text or bytes, not to json encode or
decode (they could be a generic precursor to that but are not limited
to that.)
* Reimplement the private _SetEncoder which changes sets and datetimes
into objects that are json serializable into a private function
instead. Functions are more flexible, less overhead, and simpler than
an object.
* Remove code that handled simplejson-1.5.x and earlier. Raise an error
if that's the case instead.
* We require python-2.6 or better which has the json module builtin to
the stdlib. So this is only an issue if the stdlib json has been
overridden by a third party module and the simplejson on the system
is 1.5.x or less. (1.5 was released on 2007-01-18)
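A minimal sketch of the function-based replacement for the old _SetEncoder class described above: a `default=` hook for json.dumps that turns sets and datetimes into serializable values (names illustrative):
```python
import datetime
import json


def _json_encode_fallback(obj):
    # A plain function is lighter-weight than a JSONEncoder subclass and does
    # the same job: convert otherwise unserializable objects.
    if isinstance(obj, set):
        return list(obj)
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    raise TypeError('Cannot json serialize %s' % type(obj))


print(json.dumps({'groups': {'web', 'db'}, 'now': datetime.datetime(2018, 1, 1)},
                 default=_json_encode_fallback))
```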
* Make ansible_selinux facts a consistent type
Rather than returning a bool if the Python library is missing, return a dict with one key containing a message explaining there is no way to tell the status of SELinux on the system because the Python library is not present.
* Fix unit test
The /etc/os-release based distro detection doesn't
seem to work for Ubuntu 10.04 (no /etc/os-release?).
So it was testing the next case which was /etc/lsb-release to
see if it is 'Mandriva'. Since the check for existence of
(/etc/lsb-release, Mandriva) was the first non-empty dist
file match, 'ansible_distribution' was being set to 'Mandriva'
expecting to be corrected by the data from the dist file content.
But since the dist file parsing for Mandriva didn't match for
Ubuntu 10.04 /etc/lsb-release _and_ there is no Debian specific
lsb-release check, 'ansible_distribution' stayed at 'Mandriva'
and the dist file checking loop keeps going and eventually off
the end of the list before finding a better match.
Adding a debian/ubuntu specific check for /etc/lsb-release after
the debian os-release sets the info correctly and stops further
checking of dist files.
Fixes #30693
* Fix fact failures cause by ordering of collectors
Some fact collectors need info collected by other facts.
(for ex, service_mgr needs to know 'ansible_system').
This info is passed to the Collector.collect method via
the 'collected_facts' info.
But, the order the fact collectors were running in is
not a set order, so collectors like service_mgr could
run before the PlatformFactCollect ('ansible_system', etc),
so the 'ansible_system' fact would not exist yet.
Depending on the collector and the deps, this can result
in incorrect behavior and wrong or missing facts.
To make the ordering of the collectors more consistent
and predictable, the code that builds that list is now
driven by the order of collectors in default_collectors.py,
and the rest of the code tries to preserve it.
* Flip the loops when building collector names
iterate over the ordered default_collectors list
selecting them for the final list in order instead
of driving it from the unordered collector_names set.
This lets the list returned by select_collector_classes
to stay in the same order as default_collectors.collectors
For collectors that have implicit deps on other fact collectors,
the default collectors can be ordered to include those early.
* default_collectors.py now uses a handful of sub lists of
collectors that can be ordered in default_collectors.collectors.
Fixes #30753, fixes #30623
* Fix 'distribution' fact for ArchLinux
Allow empty wasn't breaking out of the process_dist_files
loop, so an empty /etc/arch-release would continue searching
and eventually try /etc/os-release. The os-release parsing
works, but the distro name there is 'Arch Linux' which does
not match the 2.3 behavior of 'Archlinux'
Add a OS_RELEASE_ALIAS map for the cases where we need to get
the distro name from os-release but use an alias.
We can't include 'Archlinux' in SEARCH_STRING because a name match on its keys
but without a match on the content causes a fallback to using the first
whitespace-separated item from the file content as the name.
For os-release, that is in form 'NAME=Arch Linux'
With os-release returning the right name, this also supports the
case where there is no /etc/arch-release, but there is a /etc/os-release
Fixes#30600
* pep8 and comment cleanup
* Fix pkg_mgr fact on OpenBSD
Add a OpenBSDPkgMgrFactCollector that hardcodes pkg_mgr
to 'openbsd_pkg'. The ansible collector will choose the
OpenBSD collector if the system is OpenBSD and the 'Generic'
one otherwise.
This removes the PkgMgrFactCollector's dependency on the
'system' fact being in collected_facts, which also
avoids ordering issues (if the pkg mgr fact is collected
before the system fact...)
Fixes #30623
* Added the ability to extend the exception list in CloudRetry
* AWSRetry boto and boto3 compatible
* Updated tests to reflect boto/boto3
* Added boto to shippable requirements
* Have base_class and added_exceptions default to None in CloudRetry
AWSRetry - only retry on boto3 exceptions and remove boto requirement from tests
* Make requested changes.
* Fix for issue ansible/ansible#27715
* Also fixing mutually exclusive check
* Updating subspec checks
These changes take into account a spec with all features enabled and do
the following tests for subspecs:
1. Test proper specs
2. Test Alias
3. Test missing required param
4. Test mutually exclusive params
5. Test required if params
6. Test required one of params
7. Test required together params
8. Test required if params with a default value
9. Test basic subspec params
10. Test invalid subspec params
* adds new filter plugins for network use cases
* adds parse_cli filter
* adds parse_cli_textfsm filter
* adds Template class to network_common
* adds conditional function to network_common
* fix up PEP8 issues
previously gather_subset=['!all'] would still gather the
min set of facts, and there was no way to collect no facts.
The 'min' specifier in gather_subset is equivalent to
excluding the minimal_gather_subset facts as well.
gather_subset=['!all', '!min'] will collect no facts
This also lets explicitly added gather_subsets override excludes.
gather_subset=['pkg_mgr', '!all', '!min'] will collect only the pkg_mgr
fact.
* Add 2.0-2.3 facts api compat (ansible_facts(), get_all_facts())
These are intended to provide compatibilty for modules that
use 'ansible.module_utils.facts.ansible_facts' and
'ansible.module_utils.facts.get_all_facts' from 2.0-2.3 facts
API.
Fixes #25686
Some related changes/fixes needed to provide the compat api:
* rm ansible.constants import from module_utils.facts.compat
Just use a hard coded default for gather_subset/gather_timeout
instead of trying to load it from non-existent config if the
module params don't include it.
* include 'external' collectors in compat ansible_facts()
* Add facter/ohai back to the valid collector classes
facter/ohai had gotten removed from the default_collectors
class used as the default list for all_collector_classes by
setup.py and compat.py
That made gather_subset['facter'] fail.
* Add aggregate parameter validation
aggregate parameter validation will support checking each individual dict
to resolve conditions for aliases, no_log, mutually_exclusive,
required, type check, values, required_together, required_one_of
and required_if conditions in argspec. It will also set default values.
eg:
tasks:
  - name: Configure interface attribute with aggregate
    net_interface:
      aggregate:
        - {name: ge-0/0/1, description: test-interface-1, duplex: full, state: present}
        - {name: ge-0/0/2, description: test-interface-2, active: False}
    register: response
    purge: Yes
Usage:
```
from ansible.module_utils.network_common import AggregateCollection
transform = AggregateCollection(module)
param = transform(module.params.get('aggregate'))
```
Aggregate adds support for the `purge` parameter, which instructs the module
to remove resources from the remote device that haven't been explicitly
defined in the aggregate. This is not supported by with_* iterators
Also, it improves performance compared to with_* iteration for network devices
that have separate candidate and running datastores.
For with_* iteration the sequence of operation is
load-config-1 (candidate db) -> commit (running db) -> load_config-2
(candidate db) -> commit (running db) ...
With aggregate the sequence of operation is
load-config-1 (candidate db) -> load-config-2 (candidate db) -> commit
(running db)
As commit is executed only once per task for aggregate, it has a
huge performance benefit for large configurations.
* Fix CI issues
* Fix review comments
* Add support for options validation for aliases, no_log,
mutually_exclusive, required, type check, value check,
required_together, required_one_of and required_if
conditions in sub-argspec.
* Add unit test for options in argspec.
* Reverted aggregate implementation.
* Minor change
* Add multi-level argspec support
* Multi-level argspec support with module's top most
conditionals options.
* Fix unit test failure
* Add parent context in errors for sub options
* Resolve merge conflict
* Fix CI issue
* Make camel_to_snake work on capitalized plurals
`TargetGroupARNs` should become `target_group_arns`, not
`target_group_ar_ns`
Promote `camel_to_snake` to top layer function but prefix
it with an underscore.
Add tests for improved `_camel_to_snake` function.
Reduce use of `re.compile` as it makes no sense when the
compilation result is not reused.
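A hedged sketch of the plural-aware conversion: strip a trailing all-caps abbreviation plus 's' first, then apply the usual two-pass regex (the real helper handles more cases, e.g. a reversible mode):
```python
import re


def _camel_to_snake(name):
    # Handle pluralised abbreviations such as 'TargetGroupARNs' first; otherwise
    # the generic passes below would yield 'target_group_ar_ns'.
    s1 = re.sub(r'[A-Z]{3,}s$', lambda m: '_' + m.group(0).lower(), name)
    if s1.startswith('_') and not name.startswith('_'):
        s1 = s1[1:]
    # Standard CamelCase/camelCase handling.
    s2 = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', s1)
    return re.sub(r'([a-z0-9])([A-Z]+)', r'\1_\2', s2).lower()


print(_camel_to_snake('TargetGroupARNs'))  # target_group_arns
print(_camel_to_snake('CamelCase'))        # camel_case
```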
* Remove unused LooseVersion check
* Fix PLURALs case for camel_to_snake
Also renamed EXPECTED_CAMELIZATION to EXPECTED_SNAKIFICATION
* ACI module_utils library for ACI modules
This PR includes:
- the ACI argument_spec
- an aci_login function
- an experimental aci_request function
- an aci_response function
- included the ACI team
* New prototype using ACIModule
This PR includes:
- A new ACIModule object with various useful methods
* Module argument_spec now accepts a callable for the type argument, which is passed through and called with the value when appropriate. On validation/conversion failure, the name of the callable (or its type as a fallback) is used in the error message.
* adds basic smoke tests for custom callable validator functionality
* Move tests to their own file
* Port to a pytest parametrized test so it's easier to define new tests
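A hedged illustration of passing a callable as the type: the callable receives the raw value and returns the converted value (or raises), and its __name__ shows up in the error message; the converter below is made up for the example:
```python
def timedelta_seconds(value):
    # custom validator/converter: accept '90', '1.5m' or '2h' and return seconds
    value = str(value)
    units = {'s': 1, 'm': 60, 'h': 3600}
    if value[-1] in units:
        return float(value[:-1]) * units[value[-1]]
    return float(value)


argument_spec = dict(
    timeout=dict(type=timedelta_seconds, default='30s'),
)
# On a bad value, AnsibleModule reports an error naming the callable,
# roughly: "... unable to convert to timedelta_seconds: ..."
```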
* Now that _symbolic_mode_to_octal is a classmethod, we don't have to
instantiate an AnsibleModule to test it.
* Add tests for #14994, having more than one operator per role and umask
chmod
Consolidate the module_utils, constants, and config functions that
convert values into booleans into a single function in module_utils.
Port code to use the module_utils.validate.convert_bool.boolean function
instead of mk_boolean.
It was in lib/ansible/modules/system/setup.py since it
was the only thing using it, but move it back to module_utils
and add a ansible_collector.get_ansible_collector() to build
a facts collector just like the one used by setup.py
mv test_setup.py -> test_ansible_collector.py
All the code it was testing is now in ansible_collector
rm code to create 'ansible_facts' subkey from namespace
Just leave it up to the caller to do, and just return a
flat dictionary from AnsibleFactCollector.collect()
restored 'rc' inspection, but only when 'failed' is not specified
removed redundant 'changed' from basic.py as task_executor already adds it
removed redundant filters, they are tests
added aliases to tests removed from filters
fixed tests for the new rc handling
* adds new common functions for declarative intent modules
* adds Entity and EntityCollection
* adds dict_diff and dict_combine
* update for CI PEP8 compliance
* more CI PEP8 fixes
* more PEP8 CI clean up
* refactors the lambda assignments into top level classes
this is to be compliant with the PEP8 CI sanity checks
* one last pep8 ci fix
* Add more mount point statvfs info including sizes
Based on https://github.com/ansible/ansible/pull/12073
facts.utils.get_mount_size() now returns a dict of most
of the posix statvfs data, including block_size and inode
counts.
Update the facts.hardware classes that use get_mount_size() to
use the new info by mount_info.update(mount_statvfs_info) to merge.
* add back unit tests for LinuxHardware mount/fs facts
* add test cases for facts.utils.get_mount_size
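A hedged sketch of what get_mount_size() now returns: most of the posix statvfs data, including sizes, block size and inode counts (field names are indicative):
```python
import os


def get_mount_size(mountpoint):
    mount_size = {}
    try:
        statvfs = os.statvfs(mountpoint)
    except OSError:
        return mount_size
    mount_size['size_total'] = statvfs.f_frsize * statvfs.f_blocks
    mount_size['size_available'] = statvfs.f_frsize * statvfs.f_bavail
    mount_size['block_size'] = statvfs.f_bsize
    mount_size['block_total'] = statvfs.f_blocks
    mount_size['block_available'] = statvfs.f_bavail
    mount_size['block_used'] = statvfs.f_blocks - statvfs.f_bfree
    mount_size['inode_total'] = statvfs.f_files
    mount_size['inode_available'] = statvfs.f_ffree
    mount_size['inode_used'] = statvfs.f_files - statvfs.f_ffree
    return mount_size


# mount_info.update(get_mount_size('/')) merges the sizes into a mount entry.
print(get_mount_size('/'))
```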
* Support NetBSD 7.1+ style ifconfig -a output
network facts on NetBSD after 7.1 cvs would fail
because of format changes in 'ifconfig -a' output.
update code to support new and old format.
add unit tests for both based on
examples from Bruce V Chiarelli.
* wrap use of interfaces.keys() in list() for py3 compat
* sort interface ids for stability
* Fix ansible_cmdline initrd fact for UEFI
UEFI cmdline paths use \ path sep which would
get munged by cmdline fact collection.
* Make CmdLineFactCollector easier to test
extract the parsing of the /proc/cmdline content to
_parse_proc_cmdline()
add a wrapper method for get_file_content _get_proc_cmdline()
Add unit tests of _parse_proc_cmdline based on examples
from issue #23647. Fixes #23647
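A simplified sketch of cmdline parsing into a dict (a plain whitespace split sidesteps the backslash escaping that bit UEFI initrd paths; the real collector's tokenisation may differ):
```python
def _parse_proc_cmdline(data):
    cmdline = {}
    for piece in data.split():
        item = piece.split('=', 1)
        if len(item) == 1:
            cmdline[item[0]] = True
        else:
            cmdline[item[0]] = item[1]
    return cmdline


uefi = r'BOOT_IMAGE=/vmlinuz-4.4.0 root=/dev/sda2 ro initrd=\initramfs-4.4.0.img'
print(_parse_proc_cmdline(uefi)['initrd'])  # \initramfs-4.4.0.img
```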
When operating on arbitrary return data from modules, it is possible to
hit the recursion limit when cleaning out no_log values from the data.
To fix this, we have to switch from recursion to iteration.
Unittest for remove_values recursion limit
Fixes #24560
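A minimal sketch of the recursion-to-iteration idea: walk the structure with an explicit deque instead of recursive calls, so deeply nested module results cannot hit Python's recursion limit (the real remove_values also handles tuples, bytes/text subtleties and more container types):
```python
from collections import deque


def remove_values(value, no_log_strings):
    """Return a copy of 'value' with any no_log strings censored, iteratively."""
    if isinstance(value, (str, bytes)):
        return '********' if value in no_log_strings else value
    if not isinstance(value, (dict, list)):
        return value

    new_value = type(value)()
    queue = deque([(value, new_value)])
    while queue:
        old, new = queue.popleft()
        items = old.items() if isinstance(old, dict) else enumerate(old)
        for key, item in items:
            if isinstance(item, dict):
                child = {}
                queue.append((item, child))
            elif isinstance(item, list):
                child = []
                queue.append((item, child))
            elif isinstance(item, (str, bytes)) and item in no_log_strings:
                child = '********'
            else:
                child = item
            if isinstance(new, dict):
                new[key] = child
            else:
                new.append(child)
    return new_value


print(remove_values({'a': ['secret', {'b': 'secret'}]}, {'secret'}))
```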
Facts Refresh (2.4 roadmap)
This commit implements most of the 2.4 roadmap 'Facts Refresh'
- move facts.py to facts/__init__.py
- move facts Distribution() to its own class
- add a facts/utils.py
- move get_file_content and get_uname_version to facts/utils.py
- move Facts() class from facts/__init__ to facts/facts.py
- mv get_file_lines to facts/utils.py
- mv Ohai()/Facter() class to facts/ohai.py and facter.py
- Start moving fact Hardware() classes to facts/hardware/*.py
- mv HPUX() hardware class to facts/hardware/hpux.py
- move SunOSHardware() fact class to facts/hardware/sunos.py
- move OpenBSDHardware() class to facts/hardware/openbsd.py
- mv FreeBsdHardware() and DragonFlyHardware() to facts/hardware/
- mv NetBSDHardware() to facts/hardware/netbsd.py
- mv Darwin() hardware class to facts/hardware/darwin.py
- pep8/etc cleanups on facts/hardware/*.py
- Mv network facts classes to facts/network/*.py
- mv Virtual fact classes to facts/virtual
- mv Hardware.get_sysctl to facts/sysctl.py:get_sysctl
- Also mv get_uname_version from facts/utils.py -> distribution.py
since distribution.py is the only thing using it.
- add collector.py with new BaseFactCollector
- add a subclass for AnsibleFactCollector
- hook up dict key munging FactNamespaces
- add some test cases for testing the names of facts
- mv timeout stuff to facts.timeout
- rm ansible_facts()/get_all_facts() etc
- Instead of calling facts.ansible_facts(), fact collection
api used by setup.py is now to create an AnsibleFactCollector()
and call it's collect method.
- replace Facts.get_user_facts with UserFactCollector
- add a 'systems' facts package, mv UserFactCollector there
- mv get_dns_facts to DnsFactCollector
- mv get_env_facts to EnvFactCollector
- include the timeout length in exception message
- modules and module_utils that use AnsibleFactCollector
can now theoretically set the 'valid_subsets'
May be useful for network facts modules that currently have
to reimplement a good chunk of facts.py to get gather_subsets
to work.
- get_local_facts -> system/LocalFactCollector
- get_date_time -> system/date_time.py
- get_fips_facts -> system/fips.py
- get_caps_facts() -> system/caps.py
- get_apparmor_facts -> system/apparmor.py
- get_selinux_facts -> system/selinux.py
- get_lsb_facts -> system/lsb.py
- get_service_mgr_facts -> system/service_mgr.py
- Facts.is_systemd_managed -> system/service_mgr.py
- get_pkg_mgr_facts -> system/pkg_mgr.py
- Facts()._get_mount_size_facts() -> facts.utils.get_mount_size()
- add unit test for EnvFactCollector
- add a test case for minimal gather_subsets
- add test case for collect_ids
- Make gather_subset match existing behavior of '!all'
If 'gather_subset' is provided as '!all', the existing behavior
(in 2.2/2.3) is that it means 'don't collect any facts except those
from the Facts() class'. So 'skip everything except
'apparmor', 'caps', 'date_time', 'env', 'fips', 'local', 'lsb',
'pkg_mgr', 'python', 'selinux', 'service_mgr', 'user', 'platform', etc.
The new facts setup was making '!all' mean no facts at all, since
it can add/exclude at a finer granularity. Since that makes more
sense for the ansible collector, and the set of minimal facts to
collect is really more up to setup.py to decide, we do just that.
So if setup.py needs to always collect some gather_subset, even
on !all, setup.py needs to have that subset added to the
list it passes as minimal_gather_subset.
This should fix some intg tests that assume '!all' means that
some facts are still collected (user info and env for example).
If we want to make setup.py collect a more minimal set, we can do that.
- force facts_dicts.keys() to a list so py3 works
- split fact collector tests to test_collectors.py
- convert Facter(Facts) -> other/facter.py:FacterFactCollector
- add FactCollector.collect_with_namespace()
regular .collect() will return a dict with the key names
using the base names ('ip_address', 'service_mgr' etc)
.collect_with_namespace() will return a dict where the key names
have been transformed with the collectors namespace, if there is
one. For most, this means a namespace that adds 'ansible_' to the
start of the key name.
For 'FacterFactCollector', the namespace transforms the key to
'facter_*'.
- add test cases for collect_with_namespace
- move all the concrete 'which facts does setup.py' stuff to setup.py
The caller of AnsibleFactCollector.from_gather_subset() needs to
pass in the list of collector classes now.
- update system/setup.py to import all of the fact classes and pass
in that list.
- split the Distribution fact class up a bit
extracted the 'distro release' file handling (ie, linux
boxes with /etc/release, /etc/os-release etc) into its
own class.
- extract get_cmdline_facts -> cmdline.py
- extract get_public_ssh_host_keys -> system/ssh_pub_keys.py
- extract get_platform_facts -> system/platform.py
platform.py may be a good candidate for further splitting.
- rm test for plain Facts() base class
- let the base class for Collector unit tests provide collected_facts
some Collectors and/or their migrated Facts() subclasses need
to look at facts collected by other modules ('ansible_architecture'
the main one...).
Collector.collect() has the collected_facts arg for this, so add
a class variable to BaseFactsTest so we can specify it.
- mv Ohai to other/ohai.py and convert to Collector
- update hardware/*.py to return facts (no side effects)
- mv AnsibleFactCollector to setup.py
- extract collector class gathering to a module method in
facts/__init__.py (collector_classes_from_gather_subset)
- add a CollectorMetaDataCollector collector used to provide
the 'gather_setup' fact
- add unit test module for 'setup' module
(test/units/modules/system/setup.py)
- Collector init now doesn't need a module, but collect does
An instance of a FactCollector() isn't tied to an AnsibleModule
instance, but the collect() method can be, so optionally pass
in module to FactCollector.collect() (everywhere)
- add a default_collectors for list of default collectors
import and use it from setup.py module
eventually, would like to replace this with a plugin loader
style class finder/loader
- unit tests for module_utils/facts/__init__.py
- add unit tests for ohai facts collector
- remove self.facts side effect on populate() in hardware/sunos.py
- convert OpenBSDHardware() to rm side effects on self.facts
- try to rm some self.facts side effects in Network()
plumb in collected_facts from populate() where it is needed.
stop passing collected_facts into Network() [via cached_facts=,
where it eventually becomes self.facts]
- nothing provides Fact() cached_facts arg now, rm it
Facts() should be internal only implementation so nothing
should be using it.
Of course, now someone will.
- add a Collector.name attr to build a map of name->_fact_ids
To properly exclude a gather_subset spec like '!hardware', we
need to know that 'hardware' also means 'devices', 'dmi', etc.
Before, '!hardware' would remove the 'hardware' collector name
but not 'devices'. Since both would end up in id_collector_map,
we would still end up with the HardwareCollector in the collector
list. End result being that '!hardware' wouldn't stop hardware
from being collected.
So we need to be able to build that map, so add the Collector.name
attribute that is the primary name (like 'hardware') and let
Collector._fact_ids be the other fact ids that a collector is
responsible for.
Construct the aliases_map of Collector.name -> set of _fact_ids
in fact/__init__.py get_collector_names, and use it when we are
populating the exclude set.
- refactor of distribution.py
make the big OS_FAMILY literal a little easier to read
Also keys can now be any string instead of python literals
99% sure the test for 'KDE Neon' was wrong
I don't see how/where it should or could get 'Neon' instead
of 'KDE Neon' as provided in os-release NAME=
Use 'distribution' string for key to OS_MAP
i.e., we don't need to make it a valid python label anymore, so don't.
move _has_dist_file to module as _file_exists
easier to mock without mucking with os.path
mv platform.system() calls to within get_distribution_facts() instead
of Distribution() init.
- remove _json compat module
The code in here was to support:
- a 'json' python module that was not the standard one included
with python since 2.6.
- potentially fall back to simplejson if 'json' was not available.
'json' is available for all supported python versions now so
no longer needed.
- mv get_collector_names -> facts.collector
- mv collector_classes_from_gather_subset -> facts.collector
- mv collector tests from test_facts -> test_collector
- Use six's reduce() in sunos/netbsd hardware facts
- rm extraneous get_uname_version in utils
only system/distribution.py uses it
- Remove Facts() subclass metaclass usage
- using fact_id and a platform id for matching collectors
gut most of Facts() subclasses
rm Facts() subclasses with weird metaclass
only add collectors that match the fact_ids and the platform_info
to the list of collectors used.
At the moment, a collector's platform_id will default to 'Generic', and
any platform matches 'Generic'.
The goal is to select collector classes, including matching the
system's platform, in collector.py instead of relying on
metaclasses in hardware/*. To finish this, the various
Facts() subclasses will need to be replaced entirely with
Collector() subclasses.
use collector classmethod platform_match() to match the platform
This lets the particular class decide if it is compatible with
a given platform_info. platform_info is a dict-like object, so it could be
expanded in the future.
Add a default platform_match to BaseFactCollector that matches
platform_info['system'] == cls._platform
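A sketch of what that default classmethod looks like (simplified; the runtime selection in collector.py is only hinted at in the trailing comment):
```python
class BaseFactCollector(object):
    _platform = 'Generic'

    @classmethod
    def platform_match(cls, platform_info):
        # 'Generic' collectors match everywhere; platform-specific subclasses
        # set _platform to e.g. 'Linux', 'SunOS', 'FreeBSD'
        if cls._platform == 'Generic' or platform_info.get('system') == cls._platform:
            return cls
        return None

# collector.py can then filter candidates at runtime:
# platform_info = {'system': platform.system()}
# collectors = [c for c in candidates if c.platform_match(platform_info)]
```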
The non-empty __init__.py files were needed previously to trigger a module
load on all the collector classes when we import
facts/hardware, so that the Hardware() and related
classes that used __new__ and find_all_subclasses()
would work.
Now that this is done in the collectors, based on platform matching
at runtime, we don't need to do it at py module import/parse
time. So the non-empty __init__.py files are no longer needed,
and there is a more flexible mechanism for selecting
platform-specific stuff.
facts/facts.py is no longer used, rm'ed
- if we don't find an implementing class for a gather spec, just ignore it.
It would be useful to add a warning for this case.
- Fix SD-UX typo (should be HP-UX)
- Port fix for #21893 (0 sockets) to this branch
This re-adds the change from 8ad182059d
that got lost in merge/rebase.
Fixes #21893
- port sunos fact locale fix for #24542 to this branch
based on e558ec19cd. Fixes #24542
Solaris fact fix (#24793)
ensure locale for solaris fact gathering
fixes issue with locale interfering with proper reading of decimals
- raise exceptions in the air like we just don't care.
Pretty much ignore any non-exit exception in facts
collection. And add some test cases.
- added new selinux fact to clarify the python lib
the selinux fact is boolean false when the library is not installed,
and a dictionary/hash otherwise, but this is ambiguous
added a new fact so we can eventually remove the type dichotomy and normalize it as a dict
Re-add of devel commit 85c7a7b844 to
the new code layout, since it got removed in merge/rebase
This is required for modules that may return a non-zero `rc` value for a
successful run, similar to #24865 for Windows fixing **win_chocolatey**.
We also disable the dependency on the `rc` value alone, even if `failed` was
set.
Adapted unit and integration tests to the new scheme.
Updated raw, shell, script, expect to take `rc` into account.
* test/: PEP8 compliancy
- Make PEP8 compliant
* Python3 chokes on casting int to bytes (#24952)
But if we tell the formatter that the var is a number, it works
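Roughly the behavior behind the fix (illustrative values, not the actual diff): casting an int to bytes misbehaves or fails outright on Python 3, while a numeric format directive works on both Python lines.
```python
size = 1024
bytes(size)    # Python 3: 1024 NUL bytes (b'\x00\x00...'), Python 2: '1024'
b'%d' % size   # b'1024' on Python 2 and Python 3.5+ alike
u'%d' % size   # u'1024' everywhere: the formatter is told the var is a number
```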
* [GCP] Healthcheck module
* fix return YAML block
* removed update_ return value; removed python26 check; typos and docs updates
* doc fix
* Updated int test for no-update conditions
* added filter_gcp_fields test
* fixed bug in update where dictionary wasn't built correctly and port was not being set.
* added default values to documentation block
* [GCP] UrlMap module
This module provides support for UrlMaps on Google Cloud Platform. UrlMaps allow users to segment requests by hostname and path and direct those requests to Backend Services.
UrlMaps are a powerful and necessary part of HTTP(S) Global Load Balancing on Google Cloud Platform.
UrlMap takes advantage of the python-api so the appropriate infrastructure has been added to module_utils.
More about UrlMaps can be found at:
https://cloud.google.com/compute/docs/load-balancing/http/url-map
UrlMap API:
https://cloud.google.com/compute/docs/reference/latest/
Google Cloud Platform HTTP(S) Cross-Region Load Balancer:
https://cloud.google.com/compute/docs/load-balancing/http/
* updated documentation, removed parens
* fixed tabs
The timeout for gathering facts needs to be settable from three places
(highest precedence to lowest):
* programmatically
* ansible.cfg (equivalent to the user specifying it explicitly when
calling setup)
* from the default value
The code was changed in b4bd6c80de to
allow the programmatic and default values to work correctly, but
setting it via ansible.cfg/parameter was broken.
This change should fix setting via ansible.cfg and adds unittests for
all three cases.
Fixes #23753
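A hedged sketch of the intended precedence (the helper name and default value are illustrative):
```python
DEFAULT_GATHER_TIMEOUT = 10  # seconds; the hard default (lowest precedence)

def resolve_gather_timeout(programmatic=None, config_value=None):
    # the programmatic setting wins, then ansible.cfg (which reaches the
    # setup module as an explicit parameter), then the default
    if programmatic is not None:
        return programmatic
    if config_value is not None:
        return config_value
    return DEFAULT_GATHER_TIMEOUT
```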
When building in automated build systems, there are sometimes cases
where the user doing the building does not have a .ssh directory. In
this case, we need to mock out some os.path functions so that the
add_host_key() function we're testing won't complain or try to create
one.
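One way such a test can stub out the missing ~/.ssh directory (a sketch using unittest.mock; the real test layout may differ):
```python
from unittest import mock

def test_add_host_key_without_dot_ssh():
    # pretend ~/.ssh does not exist and intercept any attempt to create it,
    # so the build user's real home directory is never touched
    with mock.patch('os.path.isdir', return_value=False), \
         mock.patch('os.path.exists', return_value=False), \
         mock.patch('os.makedirs') as fake_makedirs:
        pass  # call add_host_key(...) here, then assert on fake_makedirs
```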
* Update module_utils.six to latest
We've been held back on the version of six we could use on the module
side to 1.4.x because of python-2.4 compatibility. Now that our minimum
is Python-2.6, we can update to the latest version of six in
module_utils and get rid of the second copy in lib/ansible/compat.
Add support for default credentials. Practically, this means that a playbook creator would not have to specify the service_account_email or credentials_file Ansible parameters.
Default Credentials only work when running on Google Cloud Platform. The 'project_id' is still required.
A test has been added to trigger this condition.
* refactor postgres,
* adds a basic unit test module
* first step towards a common utils module
* set postgresql_db doc argument defaults to what the code actually uses
* unit tests that actually test a missing/found psycopg2, no dependency needed
* add doc fragments, use common args, ansible2ify the imports
* update dict
* add AnsibleModule import
* mv AnsibleModule import to correct file
* restore some database utils we need
* rm some more duplicated pg doc fragments
* change ssl_mode from disable to prefer, add update docs
* use LibraryError pattern for import verification
Per the comments on #21435: basically LibraryError and touching up its usage in pg_db and the tests.
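The pattern, roughly (a sketch of the idea rather than the exact module_utils code):
```python
class LibraryError(Exception):
    pass

try:
    import psycopg2
    HAS_PSYCOPG2 = True
except ImportError:
    HAS_PSYCOPG2 = False

def ensure_libs():
    # raise a typed error instead of failing at import time, so the module
    # can catch it and call fail_json() with a helpful message
    if not HAS_PSYCOPG2:
        raise LibraryError('psycopg2 is not installed; install python-psycopg2')
    return True

# in the module:
# try:
#     ensure_libs()
# except LibraryError as e:
#     module.fail_json(msg=str(e))
```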
* Add tests for `get_fqdn_and_port` method.
Currently tests verify original behavior - returning default `ssh-keyscan` port
Add test around `add_host_key` to verify underlying command arguments
Add some new expectations for `get_fqdn_and_port`
Test that non-standard port is passed to `ssh-keyscan` command
* Ensure ssh hostkey checks respect server port
ssh-keyscan will default to getting the host key for port 22.
If the ssh service is running on a different port, ssh-keyscan
will need to know this.
Tidy up minor flake8 issues
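For the port handling above, the command construction is roughly (an illustrative helper, not the exact known_hosts code):
```python
def build_keyscan_command(keyscan_path, host, port=None):
    # only pass -p when a non-default port is known; ssh-keyscan assumes 22
    cmd = [keyscan_path]
    if port is not None:
        cmd.extend(['-p', str(port)])
    cmd.append(host)
    return cmd

# build_keyscan_command('/usr/bin/ssh-keyscan', 'example.com', 2222)
# -> ['/usr/bin/ssh-keyscan', '-p', '2222', 'example.com']
```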
* Update known_hosts tests for port being None
Ensure that git urls don't try to set a port when a path
is specified
Update known_hosts tests to meet flake8
* Fix stdin swap context for test_known_hosts
Move test_known_hosts from under basic, as it is its own library.
Remove module_utils.known_hosts from pep8 legacy files list
* updates eos modules to use persistent connection socket
* removes split eos shared module and combines into one
* adds singular eos doc frag (eos_local to be removed after module updates)
* updates unit test cases
* Unittests for some of module_common.py
* Port test_run_command to use pytest-mock
The use of addCleanup(patch.stopall) from the unittest idiom was
conflicting with the pytest-mock idiom of closing all patches
automatically. Switching to pytest-mock ensures that the patches are
closed and removing the stopall stops the conflict.
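The pytest-mock idiom in question, sketched (illustrative patch target and values):
```python
def test_run_command_like_code(mocker):
    # mocker.patch registers its own teardown, so there is no
    # addCleanup(patch.stopall) bookkeeping left to conflict with
    fake_popen = mocker.patch('subprocess.Popen')
    fake_popen.return_value.communicate.return_value = (b'out', b'err')
    fake_popen.return_value.returncode = 0
    # ... exercise code that shells out, then assert on fake_popen.call_args ...
```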
* eos module now uses network_cli connection plugin
* adds unit tests for eos module
* eapi support now provided by eapi module
* updates doc fragment for eapi common properties
* Google Cloud Pubsub Module
The Google Cloud Pubsub module allows the Ansible user to:
* Create/Delete Topics
* Create/Delete Subscriptions
* Change subscription from pull to push (and configure endpoint)
* Publish messages to a topic
* Pull messages from a Subscription
An accessory module, gcpubsub_facts, has been added to list topics and subscriptions.
* Added docs for state field to DOCUMENTATION and RETURN blocks.
- Replace nose usage with pytest.
- Remove legacy Shippable integration.sh.
- Update Makefile to use pytest and ansible-test.
- Convert most yield unit tests to pytest parametrize.
Support for the Google API and GCloud-Python clients has been added.
The three libraries:
* GCloud-Python: A new function, get_google_cloud_credentials, should be used. The credentials object returned can be passed to any gcloud-python client. Using this client library requires the installation of gcloud-python. This is the preferred library for new modules.
* Google API: A new function, gcp_api_auth, should be used to take advantage of services requiring this client. This client library should be used if the desired functionality is not available in GCloud-Python. Using this library requires the installation of google-api-python-client.
* libcloud: The existing function, gcp_connect, should be used. The interface and return values have not changed, and existing modules (such as gce, gce_pd and gce_net) should work without modification. Note that the credentials-fetching code has been refactored out of gcp_connect so that it can be reused by all connection functions. To use this function, apache-libcloud must be installed.
Import guards have been added and will only be triggered if a user tries to use a function that is missing dependencies.
Credential-specifying mechanisms (i.e., ansible module params, env vars and libcloud secrets.py) have not changed. They have been refactored, and unit tests have been added to allow for changes going forward. We are deprecating (and removing in a subsequent release) the ability to specify credentials via the libcloud secrets file. Also, we have deprecated (and plan to remove in a subsequent release) the ability to use a p12 pem file for a key - the JSON format is strongly preferred. Deprecation warnings have been added for both of these issues (see the Ansible docs on how to disable deprecation warnings).
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
On openSUSE Tumbleweed, lsb-release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
Python2 seems to allow any integer. Python3.5 on Linux seems to allow
a 32 bit unsigned int. Python3.5 on El Capitan seems to limit it to
a smaller size... perhaps a 16 bit int.
This adds some test data to test_facts.py that
includes mnt points that have a single quote in
the path.
A la https://github.com/ansible/ansible/issues/16855
The bug was already fixed via other changes, but this is
for regression testing.
A parameter of type int should accept int and string, but not float.
A parameter of type float should accept float, int, and string.
Also reset the arguments in another test so that it runs cleanly. This
agrees with what all the other tests are doing.
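An illustrative version of those acceptance rules (not the exact AnsibleModule type-checking code):
```python
def check_type_int(value):
    # ints and integer strings are fine; floats are rejected
    if isinstance(value, int):
        return value
    if isinstance(value, str):
        return int(value)
    raise TypeError('%r cannot be converted to an int' % value)

def check_type_float(value):
    # floats, ints and numeric strings are all acceptable
    if isinstance(value, float):
        return value
    if isinstance(value, (int, str)):
        return float(value)
    raise TypeError('%r cannot be converted to a float' % value)
```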
As suggested in feedback on
https://github.com/ansible/ansible/pull/17575, add
os_family to test_distribution_version. Add the
correct os_family to the existing testcase data
entries.
Also add os_family to the output of
gen_distribution_version_testcase.py so any new
generated entries will contain this data.
* Added aws_retry decorator function with unit tests
* Restructured the code to be used with a base class.
This base class CloudRetry can be reused by any other cloud provider.
This decorator should be used in situations where you need to implement
a backoff algorithm and want to retry based on the status code from the
exception.
* updated documentation
* fixed tabs
* added botocore and boto3 to requirements.txt
* removed cloud.py from py24 tests, as it depends on boto3
* fix relative imports
* updated test to be 2.6 compat
* updated method name from retry to backoff
* readded lxd
* Updated default backoff from 2 seconds to 1.1s.
This will be about a total of 48 seconds in 10 tries. This is
configurable.
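A generic sketch of that decorator shape (illustrative; not the shipped CloudRetry/AWSRetry code):
```python
import functools
import time

def retry_on_status(found_status, tries=10, delay=1.1, backoff=1.1):
    # retry when the raised exception carries a matching status code,
    # sleeping a little longer after each failed attempt
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(tries):
                try:
                    return f(*args, **kwargs)
                except Exception as e:
                    status = getattr(e, 'status_code', None)
                    if status not in found_status or attempt == tries - 1:
                        raise
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator

@retry_on_status(found_status=('Throttling', 'RequestLimitExceeded'))
def describe_instances(client):
    return client.describe_instances()
```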
* Port set_*_if_different functions to python3
* Add surrogate_or_strict and surrogate_or_replace error handlers for
to_text, to_bytes, to_native (see the usage sketch after this list)
* Set default error handler to surrogate_or_replace
* Make use of the new error handlers in the already ported code
* Move the unittests for module_utils._text, as that code isn't part of basic.py
* Cleanup around SEQUENCETYPE. On python2.6+ SEQUENCETYPE includes
strings so make sure code omits those explicitly if necessary
* Allow arg_spec aliases to be other sequence types
* Fix to_native call in selinux_context and selinux_default_context to
use the error handler correctly.
* Port set_mode_if_different to work on python3
* Port atomic_move to work on python3
* Fix check_password_prompt variable which wasn't renamed properly
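A minimal usage sketch of the converters and the new handlers described above (module path per this change; behavior summarized, not quoted):
```python
from ansible.module_utils._text import to_bytes, to_native, to_text

raw = b'caf\xe9'                                    # not valid UTF-8
text = to_text(raw)                                 # default: surrogate_or_replace
strict = to_text(raw, errors='surrogate_or_strict')
data = to_bytes(text, errors='surrogate_or_replace')
native = to_native(text)                            # str on both Python 2 and 3
# the *_or_* handlers prefer surrogateescape where the interpreter supports
# it and otherwise fall back to strict or replace, respectively
```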
tempfile.NamedTemporaryFile keeps a file handle open, causing os.rename() to fail on Windows-based vboxfs: [Errno 26] Text file busy.
Changed NamedTemporaryFile to mkstemp() and added a finally block to unlink the temp file in each and every case.
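The resulting pattern, roughly (an illustrative sketch, not the patched module itself):
```python
import os
import tempfile

def write_via_tempfile(dest, data):
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest) or '.')
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        # the handle is closed before the rename, so vboxfs is happy
        os.rename(tmp_path, dest)
    finally:
        # unlink the temp file in each and every case; if the rename already
        # succeeded there is nothing left to remove
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
```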
Make some python3 fixes to make the unittests pass:
* galaxy imports
* dictionary iteration in role requirements
* swap_stdout helper for unittests
* Normalize to text string in a facts.py function
* Give native strings to selinux library functions.
SELinux takes pathnames as native strings. That means we need to
convert to bytes on python2 and convert to text on python3.
Fixes #17155
* Read kitchen documentation, make module_utils params more like kitchen API
* Remove none nonstring strategy and add strict
* Raise TypeError on invalid nonstring strategy
* Document to_native()
* Make unittests for testing module_utils.text
Fixes #10779
Refactor some of the block device, mount point, and
mtab/fstab facts collection for linux for better
performance on systems with lots of block devices.
Instead of invoking 'lsblk' for every entry in mtab,
invoke it once, then map the results to mtab entries.
Change the args used for invoking 'findmnt', since the
previous combination of args conflicted and would
always fail on some systems depending on the version.
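The shape of the "invoke lsblk once, then map" approach (illustrative helper names and output handling; the real collection code has more edge cases):
```python
def lsblk_uuids(module):
    # one external call instead of one lsblk invocation per mtab entry
    rc, out, err = module.run_command(
        ['lsblk', '--list', '--noheadings', '--paths', '--output', 'NAME,UUID'])
    uuids = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 2:
            uuids[fields[0]] = fields[1]
    return uuids

def mount_facts(module, mtab_entries):
    uuids = lsblk_uuids(module)
    return [{'device': device, 'mount': mount, 'uuid': uuids.get(device, 'N/A')}
            for device, mount in mtab_entries]
```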
Add test cases for facts Hardware()/Network()/Virtual() classes
__new__ method and verify they create the proper subclass based
on the platform.system() results.
Split out all the 'invoke some command and grab its output'
bits related to linux mount paths into their own methods so
it is easier to mock them in unit tests.
Fix the DragonFly* classes that did not define a 'platform'
class attribute. This caused FreeBSD systems to potentially
get the DragonFly* subclasses incorrectly. In practice it
didn't matter much, since the DragonFly* subclasses duplicated
the FreeBSD ones. Actual DragonFly systems would end up with
the generic Hardware() etc. instead of the DragonFly* classes.
Fix Hardware.__new__() on PY3, passing args to __new__
would cause "object() takes no parameters" errors. So
check for PY3 and just call __new__ without the args.
See
https://hg.python.org/cpython/file/44ed0cd3dc6d/Objects/typeobject.c#l2818
for some explanation.
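The PY3-safe __new__ looks roughly like this (simplified; the real class also selects the platform-specific subclass first):
```python
from ansible.module_utils.six import PY3

class Hardware(object):
    def __new__(cls, *args, **kwargs):
        # object.__new__() on Python 3 rejects extra arguments, so only
        # forward them on Python 2
        if PY3:
            return super(Hardware, cls).__new__(cls)
        return super(Hardware, cls).__new__(cls, *args, **kwargs)
```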
A little unittest refactoring
* Add a class decorator to generate tests when using a unittest.TestCase base class
* Add a TestCase subclass with setUp() and tearDown() that sets up
module parameter parsing
* Move test_safe_eval to use the class decorator and ModuleTestCase base
class
* Move testing of set_mode_if_different into its own file and separate
some test methods out so we get better errors and more coverage in
case of errors.
* Naming convention for test cases doesn't need to duplicate information
that's already in the file path.