Facts Refresh (2.4 roadmap)
This commit implements most of the 2.4 roadmap 'Facts Refresh':
- move facts.py to facts/__init__.py
- move facts Distribution() to its own class
- add a facts/utils.py
- move get_file_content and get_uname_version to facts/utils.py
- move Facts() class from facts/__init__ to facts/facts.py
- mv get_file_lines to facts/utils.py
- mv Ohai()/Facter() class to facts/ohai.py and facter.py
- Start moving fact Hardware() classes to facts/hardware/*.py
- mv HPUX() hardware class to facts/hardware/hpux.py
- move SunOSHardware() fact class to facts/hardware/sunos.py
- move OpenBSDHardware() class to facts/hardware/openbsd.py
- mv FreeBsdHardware() and DragonFlyHardware() to facts/hardware/
- mv NetBSDHardware() to facts/hardware/netbsd.py
- mv Darwin() hardware class to facts/hardware/darwin.py
- pep8/etc cleanups on facts/hardware/*.py
- Mv network facts classes to facts/network/*.py
- mv Virtual fact classes to facts/virtual
- mv Hardware.get_sysctl to facts/sysctl.py:get_sysctl
- Also mv get_uname_version from facts/utils.py -> distribution.py
since distribution.py is the only thing using it.
- add collector.py with new BaseFactCollector
- add a subclass for AnsibleFactCollector
- hook up dict key munging via FactNamespaces
- add some test cases for testing the names of facts
- mv timeout stuff to facts.timeout
- rm ansible_facts()/get_all_facts() etc
- Instead of calling facts.ansible_facts(), the fact collection
API used by setup.py is now to create an AnsibleFactCollector()
and call its collect() method (see the sketch below).
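A rough sketch of that call pattern (the import path and the exact signature of from_gather_subset() are assumptions based on this text, not the final API):

```python
# Hypothetical sketch of the new fact collection flow; import path and
# signatures follow the commit text, not necessarily the final tree.
from ansible.module_utils.facts import AnsibleFactCollector  # path is an assumption

def gather_facts(module, gather_subset=None):
    # setup.py builds a collector for the requested subsets and runs it;
    # no more facts.ansible_facts()/get_all_facts() helpers.
    fact_collector = AnsibleFactCollector.from_gather_subset(
        module, gather_subset=gather_subset)
    return fact_collector.collect(module=module)
```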
- replace Facts.get_user_facts with UserFactCollector
- add a 'systems' facts package, mv UserFactCollector there
- mv get_dns_facts to DnsFactCollector
- mv get_env_facts to EnvFactCollector
- include the timeout length in exception message
- modules and module_utils that use AnsibleFactCollector
can now theoretically set the 'valid_subsets'.
This may be useful for network facts modules that currently have
to reimplement a good chunk of facts.py to get gather_subset
to work.
- get_local_facts -> system/LocalFactCollector
- get_date_time -> system/date_time.py
- get_fips_facts -> system/fips.py
- get_caps_facts() -> system/caps.py
- get_apparmor_facts -> system/apparmor.py
- get_selinux_facts -> system/selinux.py
- get_lsb_facts -> system/lsb.py
- get_service_mgr_facts -> system/service_mgr.py
- Facts.is_systemd_managed -> system/service_mgr.py
- get_pkg_mgr_facts -> system/pkg_mgr.py
- Facts()._get_mount_size_facts() -> facts.utils.get_mount_size()
- add unit test for EnvFactCollector
- add a test case for minimal gather_subsets
- add test case for collect_ids
- Make gather_subset match the existing behavior of '!all'
If 'gather_subset' is provided as '!all', the existing behavior
(in 2.2/2.3) is that it means 'don't collect any facts except those
from the Facts() class', i.e. skip everything except
'apparmor', 'caps', 'date_time', 'env', 'fips', 'local', 'lsb',
'pkg_mgr', 'python', 'selinux', 'service_mgr', 'user', 'platform', etc.
The new facts setup was making '!all' mean no facts at all, since
it can add/exclude at a finer granularity. Since that makes more
sense for the ansible collector, and the set of minimal facts to
collect is really more up to setup.py to decide, we do just that.
So if setup.py needs to always collect some gather_subset, even
on '!all', setup.py needs to have that subset added to the
list it passes as minimal_gather_subset (see the sketch below).
This should fix some integration tests that assume '!all' means that
some facts are still collected (user info and env, for example).
If we want setup.py to collect a more minimal set, we can do that.
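Continuing the earlier sketch, a hedged illustration of how setup.py might pin what '!all' still collects (the subset list and keyword name follow the text above and are not guaranteed to match the final code):

```python
# Sketch only (continuing the earlier AnsibleFactCollector sketch): setup.py
# decides the floor for '!all' by passing a minimal_gather_subset.
minimal_gather_subset = frozenset(['apparmor', 'caps', 'date_time', 'env',
                                   'fips', 'local', 'lsb', 'pkg_mgr',
                                   'platform', 'python', 'selinux',
                                   'service_mgr', 'user'])

# fact_collector = AnsibleFactCollector.from_gather_subset(
#     module,                                        # the AnsibleModule instance
#     gather_subset=module.params['gather_subset'],  # may be ['!all']
#     minimal_gather_subset=minimal_gather_subset)
```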
- force facts_dicts.keys() to a list so py3 works
- split fact collector tests to test_collectors.py
- convert Facter(Facts) -> other/facter.py:FacterFactCollector
- add FactCollector.collect_with_namespace()
The regular .collect() returns a dict with the key names
using the base names ('ip_address', 'service_mgr', etc).
.collect_with_namespace() returns a dict where the key names
have been transformed with the collector's namespace, if there is
one. For most collectors, this means a namespace that adds 'ansible_'
to the start of the key name.
For 'FacterFactCollector', the namespace transforms the key to
'facter_*' (see the sketch below).
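A small sketch of that key-name transform (class and attribute names here are illustrative, based on the description above):

```python
# Illustrative namespace object used by collect_with_namespace(); the real
# class name may differ.
class PrefixFactNamespace(object):
    def __init__(self, prefix):
        self.prefix = prefix

    def transform(self, name):
        # 'service_mgr' -> 'ansible_service_mgr' (or 'facter_*' for Facter)
        return '%s%s' % (self.prefix, name)

ansible_namespace = PrefixFactNamespace(prefix='ansible_')
facter_namespace = PrefixFactNamespace(prefix='facter_')
```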
- add test cases for collect_with_namespace
- move all the concrete 'which facts does setup.py collect' logic to setup.py
The caller of AnsibleFactCollector.from_gather_subset() needs to
pass in the list of collector classes now.
- update system/setup.py to import all of the fact classes and pass
in that list.
- split the Distribution fact class up a bit
extracted the 'distro release' file handling (ie, linux
boxes with /etc/release, /etc/os-release etc) into its
own class.
- extract get_cmdline_facts -> cmdline.py
- extract get_public_ssh_host_keys -> system/ssh_pub_keys.py
- extract get_platform_facts -> system/platform.py
platform.py may be a good candidate for further splitting.
- rm test for plain Facts() base class
- let the base class for Collector unit tests provide collected_facts
Some Collectors and/or their migrated Facts() subclasses need
to look at facts collected by other modules ('ansible_architecture'
being the main one).
Collector.collect() has the collected_facts arg for this, so add
a class variable to BaseFactsTest so we can specify it.
- mv Ohai to other/ohai.py and convert to Collector
- update hardware/*.py to return facts (no side effects)
- mv AnsibleFactCollector to setup.py
- extract collector class gathering to a module-level function in
facts/__init__.py (collector_classes_from_gather_subset)
- add a CollectorMetaDataCollector collector used to provide
the 'gather_subset' fact
- add unit test module for 'setup' module
(test/units/modules/system/setup.py)
- Collector init now doesn't need a module, but collect() does
An instance of a FactCollector() isn't tied to an AnsibleModule
instance, but the collect() method can be, so optionally pass
in module to FactCollector.collect() everywhere (see the sketch below)
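A sketch of that contract (the signature is an assumption based on this description):

```python
# Sketch of the Collector call contract: no AnsibleModule at init time,
# but collect() can take one, plus any facts gathered so far.
class BaseFactCollector(object):
    def __init__(self, collectors=None, namespace=None):
        self.namespace = namespace

    def collect(self, module=None, collected_facts=None):
        # subclasses return a plain dict of facts; 'module' is only needed
        # for things like run_command, 'collected_facts' for cross-references
        return {}
```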
- add a default_collectors module for the list of default collectors
import and use it from the setup.py module
eventually, we would like to replace this with a plugin-loader
style class finder/loader
- unit tests for module_utils/facts/__init__.py
- add unit tests for ohai facts collector
- remove self.facts side effect on populate() in hardware/sunos.py
- convert OpenBSDHardware() to rm side effects on self.facts
- try to rm some self.facts side effects in Network()
Plumb in collected_facts from populate() where it is needed.
Stop passing collected_facts into Network() [via cached_facts=,
where it eventually becomes self.facts]
- nothing provides the Facts() cached_facts arg now, rm it
Facts() should be an internal-only implementation, so nothing
should be using it.
Of course, now someone will.
- add a Collector.name attr to build a map of name->_fact_ids
To properly exclude a gather_subset spec like '!hardware', we
need to know that 'hardware' also means 'devices', 'dmi', etc.
Before, '!hardware' would remove the 'hardware' collector name
but not 'devices'. Since both would end up in id_collector_map,
we would still end up with the HardwareCollector in the collector
list. End result being that '!hardware' wouldn't stop hardware
from being collected.
So we need to be able to build that map, so add the Collector.name
attribute that is the primary name (like 'hardware') and let
Collector._fact_ids be the other fact ids that a collector is
responsible for.
Construct the aliases_map of Collector.name -> set of _fact_ids
in facts/__init__.py's get_collector_names(), and use it when we are
populating the exclude set (see the example below).
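For illustration, the shape of that mapping (the concrete fact ids here are examples, not an exhaustive list):

```python
# Example only: a collector advertises a primary name plus the other fact
# ids it is responsible for (BaseFactCollector base omitted for brevity).
class HardwareCollector(object):
    name = 'hardware'
    _fact_ids = set(['devices', 'dmi', 'processor'])

# get_collector_names() can then expand '!hardware' into
# {'hardware', 'devices', 'dmi', 'processor'} when building the exclude set:
aliases_map = {HardwareCollector.name: set(HardwareCollector._fact_ids)}
```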
- refactor of distribution.py
make the big OS_FAMILY literal a little easier to read
Also, keys can now be any string instead of valid python identifiers
99% sure the test for 'KDE Neon' was wrong:
I don't see how/where it should or could get 'Neon' instead
of 'KDE Neon' as provided in os-release NAME=
Use the 'distribution' string as the key into OS_MAP
(ie, we don't need to make it a valid python label anymore, so don't)
move _has_dist_file to a module-level _file_exists
(easier to mock without mucking with os.path)
mv platform.system() calls to within get_distribution_facts() instead
of Distribution() init.
- remove _json compat module
The code in here was to support:
  - a 'json' python module that was not the standard one included
    with python since 2.6
  - potentially falling back to simplejson if 'json' was not available
'json' is available for all supported python versions now, so this is
no longer needed.
- mv get_collector_names -> facts.collector
- mv collector_classes_from_gather_subset -> facts.collector
- mv collector tests from test_facts -> test_collector
- Use six's reduce() in sunos/netbsd hardware facts
- rm extraneous get_uname_version in utils
only system/distribution.py uses it
- Remove Facts() subclass metaclass usage
- using fact_id and a platform id for matching collectors
gut most of the Facts() subclasses
rm Facts() subclasses with the weird metaclass
only add collectors that match the fact_ids and the platform_info
to the list of collectors used.
atm, a collector's platform_id will default to 'Generic', and
any platform matches 'Generic'
the goal is to select collector classes, including matching the
system's platform, in collector.py, instead of relying on
metaclasses in hardware/*. To finish this, the various
Facts() subclasses will need to be replaced entirely with
Collector() subclasses.
use the collector classmethod platform_match() to match the platform
This lets the particular class decide if it is compatible with
a given platform_info. platform_info is a dict-like obj, so it could be
expanded in the future.
Add a default platform_match to BaseFactCollector that matches
platform_info['system'] == cls._platform (see the sketch below)
The non-empty __init__.py files were needed previously to trigger a module
load on all the collector classes when we import
facts/hardware, so that the Hardware() and related
classes that used __new__ and find_all_subclasses()
would work.
Now that this is done in collectors based on platform matching
at runtime, we don't need to do it at py module import/parse
time. So the non-empty __init__.py files are no longer needed,
and there is a more flexible mechanism for selecting
platform-specific stuff.
facts/facts.py is no longer used, rm'ed
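A hedged sketch of the platform matching described here (attribute and helper names follow the text; the selection loop is illustrative):

```python
import platform

class BaseFactCollector(object):
    _platform = 'Generic'

    @classmethod
    def platform_match(cls, platform_info):
        # default: compatible if the running system matches _platform
        if platform_info.get('system', None) == cls._platform:
            return cls
        return None

def find_collectors_for_platform(all_collector_classes):
    # try the real platform first, then 'Generic' so generic collectors
    # match everywhere
    compat_platforms = [{'system': platform.system()}, {'system': 'Generic'}]
    found = []
    for cls in all_collector_classes:
        for platform_info in compat_platforms:
            if cls.platform_match(platform_info) and cls not in found:
                found.append(cls)
    return found
```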
- if we don't find an implementing class for a gather spec, just ignore it.
It would be useful to add a warning about this case.
- Fix SD-UX typo (should be HP-UX)
- Port fix for #21893 (0 sockets) to this branch
This re-adds the change from 8ad182059d
that got lost in merge/rebase
Fixes #21893
- port sunos fact locale fix for #24542 to this branch
based on e558ec19cd
Fixes #24542
Solaris fact fix (#24793)
ensure locale for solaris fact gathering
fixes issue with locale interfering with proper reading of decimals
- raise exceptions in the air like we just don't care.
Pretty much ignore any non-exit exception in facts
collection. And add some test cases.
- added new selinux fact to clarify python lib
the selinux fact is boolean false when the library is not installed,
and a dictionary/hash otherwise, but this is ambiguous
added a new fact so we can eventually remove the type dichotomy and normalize it as a dict
Re-add of devel commit 85c7a7b844 to
the new code layout, since it got removed in merge/rebase
This is required for modules that may return a non-zero `rc` value for a
successful run, similar to #24865 for Windows fixing **win_chocolatey**.
We also no longer depend on the `rc` value alone, even if `failed` was
set.
Adapted unit and integration tests to the new scheme.
Updated raw, shell, script, expect to take `rc` into account.
* test/: PEP8 compliancy
- Make PEP8 compliant
* Python3 chokes on casting int to bytes (#24952)
But if we tell the formatter that the var is a number, it works (see the example below)
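For example (illustrative of the Python 3 behavior, not the actual patch):

```python
num = 5
# b'%s' % num        # TypeError on Python 3: %b requires a bytes-like object
framed = b'%d' % num  # works on Python 2 and Python 3.5+: b'5'
```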
* [GCP] Healthcheck module
* fix return YAML block
* removed update_ return value; removed python26 check; typos and docs updates
* doc fix
* Updated int test for no-update conditions
* added filter_gcp_fields test
* fixed bug in update where dictionary wasn't built correctly and port was not being set.
* added default values to documentation block
* [GCP] UrlMap module
This module provides support for UrlMaps on Google Cloud Platform. UrlMaps allow users to segment requests by hostname and path and direct those requests to Backend Services.
UrlMaps are a powerful and necessary part of HTTP(S) Global Load Balancing on Google Cloud Platform.
UrlMap takes advantage of the python-api so the appropriate infrastructure has been added to module_utils.
More about UrlMaps can be found at:
https://cloud.google.com/compute/docs/load-balancing/http/url-map
UrlMap API:
https://cloud.google.com/compute/docs/reference/latest/
Google Cloud Platform HTTP(S) Cross-Region Load Balancer:
https://cloud.google.com/compute/docs/load-balancing/http/
* updated documentation, removed parens
* fixed tabs
The timeout for gathering facts needs to be settable from three places
(highest precedence to lowest):
* programmatically
* ansible.cfg (equivalent to the user specifying it explicitly when
calling setup)
* from the default value
The code was changed in b4bd6c80de to
allow the programmatic and default values to work correctly, but
setting via ansible.cfg/parameter was broken.
This change should fix setting via ansible.cfg and adds unittests for
all three cases.
Fixes #23753
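A hedged sketch of that precedence (the constant and parameter names are assumptions based on this description):

```python
DEFAULT_GATHER_TIMEOUT = 10   # the built-in default, in seconds

def resolve_gather_timeout(programmatic=None, from_config=None):
    # programmatic setting > ansible.cfg / explicit setup parameter > default
    if programmatic is not None:
        return programmatic
    if from_config is not None:
        return from_config
    return DEFAULT_GATHER_TIMEOUT
```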
When building in automated build systems, there are sometimes cases
where the user doing the building does not have a .ssh directory. In
this case, we need to mock out some os.path functions so that the
add_host_key() function we're testing won't complain or try to create
one.
* Update module_utils.six to latest
We've been held back on the version of six we could use on the module
side to 1.4.x because of python-2.4 compatibility. Now that our minimum
is Python-2.6, we can update to the latest version of six in
module_utils and get rid of the second copy in lib/ansible/compat.
Add support for default credentials. Practically, this means that a playbook creator would not have to specify the service_account_email or credentials_file Ansible parameters.
Default Credentials only work when running on Google Cloud Platform. The 'project_id' is still required.
A test has been added to trigger this condition.
* refactor postgres,
* adds a basic unit test module
* first step towards a common utils module
* set postgresql_db doc argument defaults to what the code actually uses
* unit tests that actually test a missing/found psycopg2, no dependency needed
* add doc fragments, use common args, ansible2ify the imports
* update dict
* add AnsibleModule import
* mv AnsibleModule import to correct file
* restore some database utils we need
* rm some more duplicated pg doc fragments
* change ssl_mode from disable to prefer, add update docs
* use LibraryError pattern for import verification
per comments on #21435: basically LibraryError and touching up its usage in pg_db and the tests.
* Add tests for `get_fqdn_and_port` method.
Currently tests verify original behavior - returning default `ssh-keyscan` port
Add test around `add_host_key` to verify underlying command arguments
Add some new expectations for `get_fqdn_and_port`
Test that non-standard port is passed to `ssh-keyscan` command
* Ensure ssh hostkey checks respect server port
ssh-keyscan will default to getting the host key for port 22.
If the ssh service is running on a different port, ssh-keyscan
will need to know this (see the sketch below).
Tidy up minor flake8 issues
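A minimal sketch of the intent (the function name is illustrative):

```python
def keyscan_cmd(fqdn, port=None):
    # only pass -p when the ssh service is not on the default port
    cmd = ['ssh-keyscan']
    if port is not None and int(port) != 22:
        cmd.extend(['-p', str(port)])
    cmd.append(fqdn)
    return cmd
```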
* Update known_hosts tests for port being None
Ensure that git urls don't try and set port when a path
is specified
Update known_hosts tests to meet flake8
* Fix stdin swap context for test_known_hosts
Move test_known_hosts from under basic, as it is its own library.
Remove module_utils.known_hosts from pep8 legacy files list
* updates eos modules to use persistent connection socket
* removes split eos shared module and combines into one
* adds singular eos doc frag (eos_local to be removed after module updates)
* updates unit test cases
* Unittests for some of module_common.py
* Port test_run_command to use pytest-mock
The use of addCleanup(patch.stopall) from the unittest idiom was
conflicting with the pytest-mock idiom of closing all patches
automatically. Switching to pytest-mock ensures that the patches are
closed and removing the stopall stops the conflict.
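A small illustration of the pytest-mock idiom referenced here (not the actual ported test); the `mocker` fixture reverts its patches automatically at teardown:

```python
def test_run_command_success(mocker):
    fake_popen = mocker.patch('subprocess.Popen')   # undone automatically
    fake_popen.return_value.communicate.return_value = (b'out', b'')
    fake_popen.return_value.returncode = 0
    # ... call the code under test and assert against fake_popen here ...
```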
* eos module now uses network_cli connection plugin
* adds unit tests for eos module
* eapi support now provided by eapi module
* updates doc fragment for eapi common properties
* Google Cloud Pubsub Module
The Google Cloud Pubsub module allows the Ansible user to:
* Create/Delete Topics
* Create/Delete Subscriptions
* Change subscription from pull to push (and configure endpoint)
* Publish messages to a topic
* Pull messages from a Subscription
An accessory module, gcpubsub_facts, has been added to list topics and subscriptions.
* Added docs for state field to DOCUMENTATION and RETURN blocks.
- Replace nose usage with pytest.
- Remove legacy Shippable integration.sh.
- Update Makefile to use pytest and ansible-test.
- Convert most yield unit tests to pytest parametrize.
Support for the Google API and GCloud-Python clients has been added.
The three libraries:
* GCloud-Python: A new function, get_google_cloud_credentials, should be used. The credentials object returned can be passed to any gcloud-python client. Using this client library requires the installation of gcloud-python. This is the preferred library for new modules.
* Google API: A new function, gcp_api_auth, should be used to take advantage of services requiring this client. This client library should be used if the desired functionality is not available in GCloud-Python. Using this library requires the installation of google-api-python-client.
* libcloud: The existing function, gcp_connect, should be used. The interface and return values have not changed, and existing modules (such as gce, gce_pd and gce_net) should work without modification. Note that the credentials-fetching code has been refactored out of gcp_connect so it can be reused by all connection functions. To use this function, apache-libcloud must be installed.
Import guards have been added and will only be triggered if a user tries to use a function that is missing dependencies (see the sketch below).
Credential-specifying mechanisms (i.e., ansible module params, env vars and libcloud secrets.py) have not changed. They have been refactored, and unit tests have been added to allow for changes going forward. We are deprecating (and removing in a subsequent release) the ability to specify credentials via the libcloud secrets file. Also, we have deprecated (and also plan to remove in a subsequent release) the ability to use a p12 pem file for a key; the JSON format is strongly preferred. Deprecation warnings have been added for both of these issues (see the Ansible docs on how to disable deprecation warnings).
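A minimal sketch of such an import guard (the flag name and message are assumptions; the import path may differ by library version):

```python
try:
    from google.cloud import pubsub  # any gcloud-python client
    HAS_GOOGLE_CLOUD = True
except ImportError:
    HAS_GOOGLE_CLOUD = False

# later, only when a module actually needs this client:
# if not HAS_GOOGLE_CLOUD:
#     module.fail_json(msg='This module requires the gcloud-python library')
```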
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
On openSUSE Tumbleweed, lsb-release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
Python2 seems to allow any integer. Python3.5 on Linux seems to allow
a 32 bit unsigned int. Python3.5 on El Capitan seems to limit it to
a smaller size... perhaps a 16 bit int.
This adds some test data to test_facts.py that
includes mount points that have a single quote in
the path.
See https://github.com/ansible/ansible/issues/16855
The bug was already fixed via other changes, but this is
for regression testing.
A parameter of type int should accept int and string, but not float.
A parameter of type float should accept float, int, and string.
Also reset the arguments in another test so that it runs cleanly. This
agrees with what all the other tests are doing.
As suggested in feedback on
https://github.com/ansible/ansible/pull/17575, add
os_family to test_distribution_version. Add the
correct os_family to the existing testcase data
entries.
Also add os_family to the output of
gen_distribution_version_testcase.py so any new
generated entries will contain this data.
* Added aws_retry decorator function with unit tests
* Restructured the code to be used with a base class.
This base class, CloudRetry, can be reused by any other cloud provider.
This decorator should be used in situations where you need to implement
a backoff algorithm and want to retry based on the status code from the
exception (see the sketch after this list).
* updated documentation
* fixed tabs
* added botocore and boto3 to requirements.txt
* removed cloud.py from py24 tests, as it depends on boto3
* fix relative imports
* updated test to be 2.6 compat
* updated method name from retry to backoff
* re-added lxd
* Updated default backoff from 2 seconds to 1.1s.
This will be about a total of 48 seconds in 10 tries. This is
configurable.
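A generic sketch of the decorator pattern (this is not the CloudRetry implementation itself, just an illustration of backoff plus retry keyed on a status code pulled from the exception):

```python
import functools
import time

def retry_on_status(retryable_codes, tries=10, delay=1.1, backoff=1.1):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            sleep = delay
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:          # a real base class narrows this
                    code = getattr(exc, 'status_code', None)
                    if code not in retryable_codes or attempt == tries - 1:
                        raise
                    time.sleep(sleep)
                    sleep *= backoff
        return wrapper
    return decorator
```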
* Port set_*_if_different functions to python3
* Add surrogate_or_strict and surrogate_or_replace error handlers for
to_text, to_bytes, to_native (example after this list)
* Set the default error handler to surrogate_or_replace
* Make use of the new error handlers in the already ported code
* Move the unittests for module_utils._text out, as that code isn't in basic.py
* Cleanup around SEQUENCETYPE. On python2.6+ SEQUENCETYPE includes
strings so make sure code omits those explicitly if necessary
* Allow arg_spec aliases to be other sequence types
* Fix to_native call in selinux_context and selinux_default_context to
use the error handler correctly.
* Port set_mode_if_different to work on python3
* Port atomic_move to work on python3
* Fix check_password_prompt variable which wasn't renamed properly
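An example of the error-handler usage described in this list (module path as used in this series):

```python
from ansible.module_utils._text import to_bytes, to_native, to_text

raw = b'caf\xc3\xa9'
# surrogate_or_replace: use surrogateescape where available, else replace
text = to_text(raw, errors='surrogate_or_replace')
# surrogate_or_strict: use surrogateescape where available, else raise
data = to_bytes(text, errors='surrogate_or_strict')
native = to_native(text)   # str on both python2 and python3
```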
tempfile.NamedTemporaryFile keeps a file handle open, causing os.rename() to fail with windows-based vboxfs: [Errno 26] Text file busy.
Changed NamedTemporaryFile to mkstemp() and added a finally block to unlink the temp file in each and every case (see the sketch below).
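A minimal sketch of that pattern (the destination path is illustrative):

```python
import os
import tempfile

dest = '/path/to/destination'           # illustrative
fd, tmp_path = tempfile.mkstemp()
try:
    with os.fdopen(fd, 'wb') as f:
        f.write(b'payload')
    # no lingering handle on the temp file, so vboxfs allows the rename
    os.rename(tmp_path, dest)
finally:
    if os.path.exists(tmp_path):        # skipped when the rename succeeded
        os.unlink(tmp_path)
```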
Make some python3 fixes to make the unittests pass:
* galaxy imports
* dictionary iteration in role requirements
* swap_stdout helper for unittests
* Normalize to text string in a facts.py function
* Give native strings to selinux library functions.
SELinux takes pathnames as native strings. That means we need to
convert to bytes on python2 and convert to text on python3.
Fixes #17155
* Read kitchen documentation, make module_utils params more like kitchen API
* Remove none nonstring strategy and add strict
* Raise TypeError on invalid nonstring strategy
* Document to_native()
* Make unittests for testing module_utils.text
Fixes #10779
Refactor some of the block device, mount point, and
mtab/fstab facts collection for linux for better
performance on systems with lots of block devices.
Instead of invoking 'lsblk' for every entry in mtab,
invoke it once, then map the results to mtab entries.
Change the args used for invoking 'findmnt' since the
previous combination of args conflicts, so this would
always fail on some systems depending on version.
Add test cases for facts Hardware()/Network()/Virtual() classes
__new__ method and verify they create the proper subclass based
on the platform.system() results.
Split out all the 'invoke some command and grab its output'
bits related to linux mount paths into their own methods so
it is easier to mock them in unit tests.
Fix the DragonFly* classes that did not define a 'platform'
class attribute. This caused FreeBSD systems to potentially
get the DragonFly* subclasses incorrectly. In practice it
didn't matter much since the DragonFly* subclasses duplicated
the FreeBSD ones. Actual DragonFly systems would end up with
the generic Hardware() etc instead of the DragonFly* classes.
Fix Hardware.__new__() on PY3: passing args to __new__
would cause "object() takes no parameters" errors. So
check for PY3 and just call __new__ without the args (sketch below).
See
https://hg.python.org/cpython/file/44ed0cd3dc6d/Objects/typeobject.c#l2818
for some explanation.
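A sketch of the PY3-safe __new__ (it mirrors the described fix in spirit; the subclass selection here is simplified):

```python
import platform

from ansible.module_utils.six import PY3

class Hardware(object):
    platform = 'Generic'

    def __new__(cls, *arguments, **keywords):
        # pick the platform-specific subclass (FreeBSDHardware, etc.)
        subclass = cls
        for sc in cls.__subclasses__():
            if sc.platform == platform.system():
                subclass = sc
        if PY3:
            # object.__new__() on py3 rejects extra arguments
            return super(Hardware, cls).__new__(subclass)
        return super(Hardware, cls).__new__(subclass, *arguments, **keywords)
```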
A little unittest refactoring
* Add a class decorator to generate tests when using a unittest.TestCase base class
* Add a TestCase subclass with setUp() and tearDown() that sets up
module parameter parsing
* Move test_safe_eval to use the class decorator and ModuleTestCase base
class
* Move testing of set_mode_if_different into its own file and separate
some test methods out so we get better errors and more coverage in
case of errors.
* Naming convention for test cases doesn't need to duplicate information
that's already in the file path.
* Port urls.py to python3
Fixes (largely normalizing byte vs text strings) for python3
* Rework what we do with attributes that aren't set already.
* Comments
* add tests for centos6, rhel6 and rhel7
* gen_distribution_version_testcase with python2.6
* remove unused imports
* fix redhat/vmware/... parsing
* add centos7 test case
Updated python module wrapper explode method to drop 'args' file next to module.
Both execute() and excommunicate() debug methods now pass the module args via file to enable debuggers that are picky about stdin.
Updated unit tests to use a context manager for masking/restoring default streams and argv.
* Ziploader proof of concept (jimi-c)
* Cleanups to proof of concept ziploader branch:
* python3 compatible base64 encoding
* zipfile compression (still need to enable toggling this off for
systems without zlib support in python)
* Allow non-wildcard imports (still need to make this recursive so that
we can have module_utils code that imports other module_utils code.)
* Better tracebacks: module filename is kept and module_utils directory
is kept so that tracebacks show the real filenames that the errors
appear in.
* Make sure we import modules that are used into the module_utils files that they are used in.
* Set ansible version in a more pythonic way for ziploader than we were doing in module replacer
* Make it possible to set the module compression as an inventory var
This may be necessary on systems where python has been compiled without
zlib compression.
* Refactoring of module_common code:
* module replacer only replaces values that make sense for that type of
file (example: don't attempt to replace python imports if we're in
a powershell module).
* Implement configurable shebang support for ziploader wrapper
* Implement client-side constants (for SELINUX_SPECIAL_FS and SYSLOG)
via environment variable.
* Remove strip_comments param as we're never going to use it (ruins line
numbering)
* Don't repeat ourselves about detecting REPLACER
* Add an easy way to debug
* Port test-module to the ziploader-aware modify_module()
* strip comments and blank lines from the wrapper so we send less over the wire.
* Comments cleanup
* Remember to write out the module line itself in powershell modules
* 'for line in lines' strips the newlines, so we have to add them back in
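A toy illustration of the ziploader idea (not the real wrapper template): zip the module and the module_utils it needs, base64 the archive, and embed it in a small python stub that unpacks and runs it on the target.

```python
import base64
import io
import zipfile

def build_zipped_module(module_source, module_utils_files):
    # module_utils_files: {relative path -> source text}; keeping the real
    # filenames means tracebacks point at the files the errors appear in.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:  # needs zlib
        zf.writestr('ansible_module.py', module_source)
        for relpath, source in module_utils_files.items():
            zf.writestr(relpath, source)
    return base64.b64encode(buf.getvalue())
```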