When specifying a literal whitelist of AWS EC2 regions in the dynamic
inventory configuration file, it should not be necessary to also include
a literal blacklist, especially as the blacklist is not honored in this
case anyway. By reading the literal blacklist only when necessary, it is
possible for a user to provide a more minimal EC2 configuration file.
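A minimal sketch of the reading logic described above, assuming the `regions` and `regions_exclude` option names from ec2.ini; the actual ec2.py code may differ:

```python
import boto.ec2

def read_regions(config):
    # Only consult the blacklist when no explicit region whitelist is given.
    regions = config.get('ec2', 'regions')
    if regions == 'all':
        excludes = config.get('ec2', 'regions_exclude').split(',')
        return [r.name for r in boto.ec2.regions() if r.name not in excludes]
    return regions.split(',')   # literal whitelist: the blacklist is never read
```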
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* EC2 inventory can now connect using an IAM role
* Fix comment indentation
* Make sure that Ec2Inventory.iam_role is always defined
* Add missing import
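A hedged sketch of connecting with an assumed IAM role via boto's STS API; the role ARN, session name, and wiring are illustrative, and the actual ec2.py implementation may differ:

```python
import boto.ec2
import boto.sts

def connect_with_iam_role(region, iam_role_arn):
    # Assume the role first, then open the EC2 connection with the temporary credentials.
    sts = boto.sts.connect_to_region(region)
    assumed = sts.assume_role(iam_role_arn, 'ansible-ec2-inventory')  # session name is arbitrary
    creds = assumed.credentials
    return boto.ec2.connect_to_region(
        region,
        aws_access_key_id=creds.access_key,
        aws_secret_access_key=creds.secret_key,
        security_token=creds.session_token,
    )
```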
* Add ads server itself as a host in the inventory
* Comment all values in the example
* Add Id to the variable list per device
* Centralize code to add device status to variables
* Fix device variable name for blueprint
* Add Nagios livestatus inventory plugin.
* Add new capabilities for the nagios_livestatus inventory:
- host_field: set the name returned (default: 'name')
- group_field: set the field used for group (default: 'groups')
- host_filter: filter hosts using this filter (default: None)
To be more consistent, prefix was renamed to var_prefix.
* Fix py34 runtests errors caused by print calls.
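A rough sketch of how these options could translate into a Livestatus query over a Unix socket; the socket handling, field names, and filter below are illustrative, not the plugin's exact code:

```python
import json
import socket

def query_livestatus(socket_path, host_field='name', group_field='groups', host_filter=None):
    # Build a Livestatus GET query from the configured field and filter options.
    lines = ['GET hosts', 'Columns: %s %s' % (host_field, group_field)]
    if host_filter:
        lines.append('Filter: %s' % host_filter)
    lines += ['OutputFormat: json', '', '']
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(socket_path)
    sock.sendall('\n'.join(lines).encode('utf-8'))
    sock.shutdown(socket.SHUT_WR)        # signal that the query is complete
    response = b''
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk
    sock.close()
    return json.loads(response.decode('utf-8'))
```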
With this proposed PR, we want to make the use of many EC2 dynamic inventory files more flexible.
We are using multiple AWS accounts. We want to use a different ini file for every account and only one ec2.py.
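A small sketch of the lookup that makes this possible, assuming the EC2_INI_PATH environment variable that ec2.py reads to locate its ini file:

```python
import os

# Choose the ini file per account via an environment variable, falling back to
# the ec2.ini that sits next to the script.
default_ini = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'ec2.ini')
ec2_ini_path = os.environ.get('EC2_INI_PATH', default_ini)
```

A per-account invocation would then look like `EC2_INI_PATH=account1.ini ./ec2.py --list`.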
* Use with statement when doing rw on files
* Deserialize file-like object directly instead of a string
For python 2/3 compatibility reasons, per PR feedback.
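A tiny illustration of both points, using json purely as an example format and a placeholder path:

```python
import json

# 'with' guarantees the file is closed, and json.load() deserializes the
# file-like object directly instead of reading it into a string first.
with open('/tmp/ansible-ec2.cache') as cache_file:   # placeholder path
    data = json.load(cache_file)
```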
* adding route53_hostnames option to set the hostnames from route 53
* checking whether the route53_hostnames option is present as suggested by @s-hertel
* setting route53_hostnames to None when config option not present
* skip the to_safe only when using route53_hostnames option, as suggested by @ryansb
* skipping the to_safe strip only for the hostnames that came from route53 as suggested by @ryansb
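A hedged sketch of the conditional described above; the `to_safe` helper, the route53_names mapping, and the attribute access are assumptions about the script's internals:

```python
def hostname_for(instance, route53_hostnames, route53_names, to_safe):
    # When the route53_hostnames option is set, return the Route 53 name as-is
    # and skip to_safe; otherwise fall back to the usual sanitized DNS name.
    if route53_hostnames and instance.id in route53_names:
        return route53_names[instance.id]
    return to_safe(instance.public_dns_name or instance.private_dns_name)
```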
Allows users to specify a key name in a given project’s `ansible.cfg`
file and thus handle keyring integration with vaults with different
passwords. If no key name is specified, the original default `ansible`
key name will be used.
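A minimal sketch of the keyring lookup with a configurable key name; the `vault_keyname` setting mentioned in the comment is a hypothetical name for whatever ansible.cfg option the script reads:

```python
import getpass
import keyring

def get_vault_password(keyname='ansible', username=None):
    # keyname would come from the project's ansible.cfg (e.g. a hypothetical
    # 'vault_keyname' option); username defaults to the invoking user.
    username = username or getpass.getuser()
    return keyring.get_password(keyname, username)
```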
Other improvements:
* `username` is now optional; defaults to user that invokes the script
* change string interpolation to new `.format()` style
* clean up and expand upon documentation
* enforce PEP 8 compliance
* Add dynamic inventory script and config for Packet.net
* The script and config have been shamelessly cargo
culted from the `ec2.py` and `ec2.ini` dynamic inventory
script.
* This is an initial version and could very well be
enhanced and made better.
Examples:
Run with `PACKET_NET_API_KEY=<MY_AUTH_TOKEN>` set and pass `--list` to get
inventory for all hosts in Packet.net across all projects (the script
defaults to `--list` if no argument is provided).
Pass `--host HOST` instead to get variables for a single host.
* improvements in Packet host dynamic inventory
Ensure the command line profile argument and the AWS_PROFILE environment variable
override the config file
Remove unnecessary `lambda` function
Fix cache file path construction to be more pythonic (and windows-ready)
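The "more pythonic (and windows-ready)" construction presumably means joining path components instead of concatenating strings; a sketch with placeholder names:

```python
import os

cache_dir = os.path.expanduser('~/.ansible/tmp')              # placeholder directory
# os.path.join picks the right separator on every platform, unlike
# cache_dir + '/' + filename.
cache_path = os.path.join(cache_dir, 'ansible-ec2.cache')     # placeholder filename
```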
Use the inkey attribute in the _process_object_types recursive loop to generate the key name checked against the skip_keys directive.
This permits ignoring nested variables, for example summary.vm, to optimize inventory collection
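A rough sketch of that recursion; building the full dotted key from the parent inkey lets an entry like summary.vm in skip_keys prune a whole subtree (names and structure are illustrative, not the script's exact signature):

```python
def serialize(obj, skip_keys, inkey=''):
    # Walk an object's attributes, building dotted key names from the parent key.
    data = {}
    for attr in dir(obj):
        if attr.startswith('_'):
            continue
        keyname = (inkey + '.' + attr).lstrip('.').lower()
        if keyname in skip_keys:
            continue                        # e.g. 'summary.vm' skips that whole branch
        value = getattr(obj, attr)
        if callable(value):
            continue                        # ignore methods
        if hasattr(value, '__dict__'):
            data[attr] = serialize(value, skip_keys, inkey=keyname)
        else:
            data[attr] = value
    return data
```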
* vmware_inventory: permit grouping by custom field
This makes it possible to create instances, assign some custom fields (much like EC2 tags), and then derive groups from those custom fields, as the EC2 inventory does
* vmware_inventory: Customize skip_keys & add resourceconfig to skip_keys
Verify that customfield is a str before processing custom fields for a host
* Fix many PEP 8 code style issues reported by PyCharm
* Improve inventory performance by dropping vim.HostSystem & vim.VirtualMachine collection when depth >= 2
* Declare some class variables properly
* Remove some unused variables
* Add documentation in vmware_inventory.ini for VMWARE_USERNAME & VMWARE_PASSWORD env vars
This forces basic auth to be used. Using the normal HTTPPasswordMgrWithDefaultRealm
password manager from urllib2 fails since collins doesn't send a 401 retry on failure.
More about this can be seen here http://stackoverflow.com/questions/2407126/python-urllib2-basic-auth-problem.
I added a small comment about the format of the host so others don't waste time like I did.
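Forcing basic auth without waiting for a 401 usually means setting the Authorization header yourself; a sketch with urllib2 (Python 2, matching the script's vintage):

```python
import base64
import urllib2

def collins_get(url, username, password):
    request = urllib2.Request(url)
    # Send the credentials preemptively, since Collins never replies with the 401
    # challenge that HTTPPasswordMgrWithDefaultRealm would need.
    auth = base64.b64encode('%s:%s' % (username, password))
    request.add_header('Authorization', 'Basic %s' % auth)
    return urllib2.urlopen(request).read()
```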
- Remove shebangs from:
- ini files
- unit tests
- module_utils
- plugins
- module_docs_fragments
- non-executable Makefiles
- Change non-modules from '/usr/bin/python' to '/usr/bin/env python'.
- Change '/bin/env' to '/usr/bin/env'.
Also removed main functions from unit tests (since they no longer
have a shebang) and fixed a python 3 compatibility issue with
update_bundled.py so it does not need to specify a python 2 shebang.
A script was added to check for unexpected shebangs in files.
This script is run during CI on Shippable.
* [GCE] Caching support for inventory script.
The GCE inventory script now supports reading from a cache rather than making the request each time. The format of the list and host output have not changed.
On script execution, the cache is checked to see if it is older than 'cache_max_age', and if so, it is rebuilt (it can also be explicitly rebuilt).
To support this functionality, the following have been added.
* Config file (gce.ini) changes: A new 'cache' section has been added to the config file, with 'cache_path' and 'cache_max_age' options to allow for configuration. There are intelligent defaults in place if that section and options are not found in the configuration file.
* Command line argument: A new --refresh-cache argument has been added to force the cache to be rebuilt.
* A CloudInventoryCache class, contained in the same file, has been added. As a separate class, it allows for testing (unit tests not included in this PR) and could hopefully be re-used in the future (it contains borrowed code from other inventory scripts).
* load_inventory_from_cache and do_api_calls_and_update_cache methods (largely lifted from other inventory scripts, in the hope of promoting consistency in the future) to determine whether the cache is fresh and rebuild it if necessary.
* A 'main' check, to support the script being imported and testable.
A new dictionary has been added to the list output, located at ['_meta']['stats'] that informs if the cache was used and how long it took to load the inventory (in 'cache_used' and 'inventory_load_time', respectively).
* fixed default value error; change cache time to 300
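A condensed sketch of the caching behavior described above (mtime check against cache_max_age, explicit refresh); this simplifies the CloudInventoryCache idea rather than reproducing its code:

```python
import json
import os
import time

class SimpleInventoryCache(object):
    def __init__(self, cache_path, cache_max_age=300):
        self.cache_path = cache_path
        self.cache_max_age = cache_max_age

    def is_valid(self):
        # The cache is usable if the file exists and is younger than cache_max_age.
        if not os.path.isfile(self.cache_path):
            return False
        return (time.time() - os.path.getmtime(self.cache_path)) < self.cache_max_age

    def load(self):
        with open(self.cache_path) as handle:
            return json.load(handle)

    def save(self, inventory):
        with open(self.cache_path, 'w') as handle:
            json.dump(inventory, handle)

# Rebuild when stale or when --refresh-cache was passed:
#   if args.refresh_cache or not cache.is_valid():
#       inventory = do_api_calls_and_update_cache()
#   else:
#       inventory = cache.load()
```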
This fixes the use of public IPs in the discovered hosts by
ensuring that the use_private_network check doesn't always evaluate
to False if the associated .ini file specifies this option.
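One common cause of a check like this always evaluating one way is comparing the raw ini string instead of a parsed boolean; that cause is an assumption here, but the fix pattern would look like this (file and section names are placeholders):

```python
import ConfigParser   # Python 2, matching these inventory scripts

config = ConfigParser.SafeConfigParser({'use_private_network': 'False'})
config.read('inventory.ini')                                   # placeholder ini name
# getboolean() parses 'True'/'False' style values, so the result reflects what
# the ini file says rather than the truthiness of a non-empty string.
use_private_network = config.getboolean('cloud', 'use_private_network')   # placeholder section
```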
Currently, machine_type does not work if an instance type is set in ovirt: in that case inst.get_instance_type() returns an object, which fails when converting to json. It only works when no instance type is set in ovirt, where inst.get_instance_type() is a Null value. This change makes sure that the correct "instance type" is passed when one is set in ovirt, and Null when it is not.
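A sketch of the guard described above; get_name() on the instance-type object is an assumption about the oVirt SDK, so treat this as illustrative only:

```python
def machine_type_of(inst):
    # inst.get_instance_type() returns an SDK object when an instance type is set,
    # which is not JSON-serializable; extract a plain value instead.
    instance_type = inst.get_instance_type()
    if instance_type is None:
        return None
    return instance_type.get_name()   # assumed accessor; the real attribute may differ
```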
* vmware_inventory: fix the --host option
* Fix skip_key evaluation
* Short circuit deep dives in datastores and resourcegroups
* Put timestamps in the debug output and add a few more
* Implement a user defined proplist to increase performance
* Make all props into dicts
* Update ini with example
* Fix tests
Trying to preserve the meaning of the examples. Not all occurrences in
`docsite/rst/playbooks_lookups.rst` have been changed, for instance, so that
the unchanged examples can be used for testing.
Related to: #17479
* vmware_inventory script improvements
* switch instance finding method to use containerview based searches
* overhaul the serialization method for objects
* Cleanup the debug outputs
* Add a warning about performance
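For reference, a container-view based search with pyVmomi looks roughly like this (connection setup omitted); the script's actual wrapper will differ:

```python
from pyVmomi import vim

def find_virtual_machines(content):
    # Build a container view over the whole inventory tree for VirtualMachine objects.
    view = content.viewManager.CreateContainerView(
        container=content.rootFolder,
        type=[vim.VirtualMachine],
        recursive=True,
    )
    vms = list(view.view)   # the matching managed objects
    # The server-side view should be destroyed once it is no longer needed.
    return vms
```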
* add cloudforms inventory script
based on the foreman inventory script, features:
* cached results (default 600 seconds)
* paginated host results (default 100 hosts)
* ssl verification (default True)
* arguments to flush cache and run in debug mode
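A hedged sketch of paginated host retrieval against the CloudForms REST API; the endpoint and parameters reflect common ManageIQ API usage but are assumptions, not the script's exact calls:

```python
import requests

def fetch_hosts(base_url, username, password, limit=100, verify_ssl=True):
    # Page through /api/vms, 'limit' records at a time, until nothing more is returned.
    hosts, offset = [], 0
    while True:
        response = requests.get(
            '%s/api/vms' % base_url,
            params={'offset': offset, 'limit': limit, 'expand': 'resources'},
            auth=(username, password),
            verify=verify_ssl,
        )
        response.raise_for_status()
        resources = response.json().get('resources', [])
        if not resources:
            break
        hosts.extend(resources)
        offset += limit
    return hosts
```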
* suggested rework
* removed second cache / dict with duplicate info
* added purge_actions configuration option to remove the actions from a host (defaults to False)
* added prefer_ip_address configuration option to give the option of using the ip address instead of the name (defaults to True)
* removed self variables; just use the arguments directly
* added --pretty command line option to pretty print results
* renamed _resolve_params to _resolve_host
* implement suggestions
* removed not used import
* added warnings to help debug connection issues
* renamed self.cache to self.hosts for clarity
* now will use the first ip address as ansible_ssh_host
* flipped default for prefer_ip_address config option to false - preserve name, and specify ansible_ssh_host as ip address
* added checks and warnings to configuration options, sane defaults for all except required:
** `url` - the first part of the cloudforms server url (https://cfme.example.com)
** `username` - the cloudforms username to log in with
** `password` - the password for the cloudforms user specified
* removed redundant call to fetch host information (since we’re paging results, no need to split the calls)
* added warning for unexpected responses from CloudForms
* debug for the returned string now prints the string instead of forcing it to JSON
* removed no longer needed methods to fetch host information
* using ‘key in list’ instead of ‘list.has_key(key)’
* correctly formatted groups and allowed nested groups
* now create groups for `location`, `type` and `vendor`, with appropriate sub-groups and children
* made to_safe honor config option to clean group names for ansible consumption
* remove prefer_ip_address configuration option
no longer needed since we will specify `ansible_ssh_host` as the returned ip address.
* removed dns_name
no longer needed, will preserve `host[name]` as name in Ansible.
* purge actions from hostvars
changed purge_actions to True
* flake8 suggestion for whitespace
* fix undefined r variable in warning output
use the correct ret variable
* Default purge_actions to True
We probably don’t need them, but it is configurable, so just default to remove them.
* Add configuration option to nest cloudforms tags
disabled by default, the nest_tags option will expand cloudforms tags into a nested group/subgroup structure. Otherwise, it will use the whole tag name.
* re-added purging of the actions
removed in error during a previous clean up.
* fixed undefined variable
specified the correct variable for logging.
cache_path is used to calculate cache_dir; the script doesn't actually read cache_dir from this file.
This makes the setting work (otherwise the default is always used).
* Fix broken indentation in vmware inventory
* Allow script to be a symlink without breaking ini path.
* Add some more properties to the bad_types list
* Encode unicode strings to ascii. Fixes #16763
Updated as per @ryansb comments. The EC2 inventory script will now fail
with a useful message when boto3 is not installed and the user is trying
to read RDS cluster information.
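The failure mode described above is typically handled with a guarded import; a minimal sketch (the message text is illustrative):

```python
try:
    import boto3
    HAS_BOTO3 = True
except ImportError:
    HAS_BOTO3 = False

def ensure_boto3_for_rds():
    # Fail with a clear message instead of a traceback when boto3 is missing.
    if not HAS_BOTO3:
        raise SystemExit("Reading RDS cluster information requires boto3; "
                         "install it or disable that option in ec2.ini")
```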
When making calls to the AWS EC2 API with the DescribeTags action, if the
number of filter values is greater than or equal to 200, the result is a
400 Bad Request reply with the error message:
"Error connecting to AWS backend.\n The maximum number of filter values specified on a single call is 200".
The change calls get_all_tags with at most 199 filter values at a time
until all of them are consumed.
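A sketch of that chunking with boto's get_all_tags; the filter name and chunk size follow the description above, while the surrounding code is simplified:

```python
def get_tags_in_chunks(conn, resource_ids, chunk_size=199):
    # DescribeTags rejects calls with 200 or more filter values, so query in
    # chunks of at most 199 resource ids and merge the results.
    tags = []
    for start in range(0, len(resource_ids), chunk_size):
        chunk = resource_ids[start:start + chunk_size]
        tags.extend(conn.get_all_tags(filters={'resource-id': chunk}))
    return tags
```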
Enables an LXC server's configuration as an inventory source for LXC
containers.
In LXC, containers can be defined with an "lxc.group" configuration
option that is normally used with lxc-autostart -g. Here, we are using
the same option to build Ansible inventory groups.
In addition to being grouped according to their lxc.group entry (or
entries, as LXC allows a single container to be in multiple groups),
we also add all containers (including those with no lxc.group entry)
to the "all" group.
vault-keyring.py was using an older version of
the ansible.constants.load_config_file() API.
The newer version returns a tuple, which caused
the config load to fail and a catch-all exception
to blame it on a missing section.
Update to new API, and catch the ConfigParser error
specifically.
Fixes #15984
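A sketch of the updated call; the tuple shape and the import path are inferred from the description above:

```python
import ConfigParser

from ansible import constants as C

try:
    # The newer API returns a (parser, path) tuple rather than a bare parser.
    config, config_path = C.load_config_file()
except ConfigParser.Error as e:
    raise SystemExit("Could not parse the ansible configuration: %s" % e)
```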
* Initial work on Brook.io dynamic inventory
* Handle error cases in Brook.io dynamic inventory
* Remove defaults from brook.ini
* Update Brook.io dynamic inventory for libbrookv0.3
Use authentication api to obtain a valid JWT from an API Token.
* Remove defaults from brook.ini
add cobbler api authentication options: username and password, which
can be provided if authentication is enabled or cobbler api is behind
a proxy that needs authentication.
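A sketch of username/password authentication against the Cobbler XML-RPC API; whether the token is needed for the read calls the script makes is an assumption:

```python
import xmlrpclib

def connect(api_url, username=None, password=None):
    server = xmlrpclib.Server(api_url)
    token = None
    if username is not None:
        # Authenticate when credentials are configured (or when a fronting proxy requires them).
        token = server.login(username, password)
    return server, token
```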
Add support for a new option in the openstack inventory so that, should
one cloud be unavailable, you can still list hosts from any other
OpenStack clouds you have configured.
This is exposed as an option under the extra ansible config section in
the OpenStack clouds.yaml.