* adding a route53_hostnames option to set the hostnames from Route 53
* checking whether the route53_hostnames option is present as suggested by @s-hertel
* setting route53_hostnames to None when config option not present
* skip the to_safe only when using route53_hostnames option, as suggested by @ryansb
* skipping the to_safe strip only for the hostnames that came from route53 as suggested by @ryansb
* Add dynamic inventory script and config for Packet.net
* The script and config have been shamelessly cargo-culted
from the `ec2.py` dynamic inventory script and its `ec2.ini`
config.
* This is an initial version and could well be
enhanced further.
Examples:
`PACKET_NET_API_KEY=<MY_AUTH_TOKEN> ./packet_net.py --list` to get inventory for
all hosts in Packet.net across all projects (defaults to `--list`
if no argument is provided).
`PACKET_NET_API_KEY=<MY_AUTH_TOKEN> ./packet_net.py --host HOST` to get variables
for a single host.
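A rough sketch of the argument handling this implies (option names mirror the `ec2.py` conventions the script was copied from; illustrative, not a quote of the actual script):
```python
import argparse

def parse_cli_args():
    # Defaults to --list when no argument is provided, as described above.
    parser = argparse.ArgumentParser(
        description='Produce an Ansible inventory from Packet.net')
    parser.add_argument('--list', action='store_true', default=True,
                        help='List devices across all projects (default)')
    parser.add_argument('--host', action='store',
                        help='Get all variables about a specific device')
    return parser.parse_args()
```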
* improvements in Packet host dynamic inventory
Ensure the command line profile argument and AWS_PROFILE environment variable
override the config file
Remove unnecessary `lambda` function
Fix cache file path construction to be more Pythonic (and Windows-ready)
use the inkey attribute in the _process_object_types recursive loop to generate the key name checked against the skip_keys directive.
This permits ignoring nested variables, for example `summary.vm`, to speed up inventory collection.
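A minimal sketch of how carrying the parent key (`inkey`) through the recursion lets a nested path such as `summary.vm` be skipped; the function name and signature are illustrative, not the real `_process_object_types`:
```python
def serialize(obj, skip_keys, inkey=''):
    # Walk a nested dict and drop any subtree whose dotted path is in skip_keys.
    out = {}
    for key, value in obj.items():
        path = '%s.%s' % (inkey, key) if inkey else key
        if path.lower() in skip_keys:
            continue
        if isinstance(value, dict):
            out[key] = serialize(value, skip_keys, inkey=path)
        else:
            out[key] = value
    return out

raw = {'summary': {'vm': {'memory': 2048}, 'runtime': {'powerState': 'poweredOn'}}}
print(serialize(raw, skip_keys=['summary.vm']))
# {'summary': {'runtime': {'powerState': 'poweredOn'}}}
```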
* vmware_inventory: permit grouping by custom field
This makes it possible to create instances, assign custom fields (much like EC2 tags) and then build groups from those custom fields, as the EC2 inventory does for tags.
* vmware_inventory: Customize skip_keys & add resourceconfig to skip_keys
Verify that customfield is a str before processing custom fields for a host
* Fix many PEP 8 code-style issues reported by PyCharm
* Improve inventory performance by dropping vim.HostSystem & vim.VirtualMachine collection when depth >= 2
* Declare some class variables properly
* Remove some unused variables
* Add documentation in vmware_inventory.ini for VMWARE_USERNAME & VMWARE_PASSWORD env vars
This forces basic auth to be used. Using the normal HTTPPasswordMgrWithDefaultRealm
password manager from urllib2 fails since Collins doesn't send a 401 challenge to trigger a retry with credentials.
More about this can be seen here: http://stackoverflow.com/questions/2407126/python-urllib2-basic-auth-problem.
I added a small comment about the format of the host so others don't waste time like I did.
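For illustration, forcing basic auth means building the Authorization header up front rather than waiting for a 401 challenge the password manager would respond to; a sketch using the Python 3 urllib module (the script itself uses urllib2):
```python
import base64
from urllib.request import Request, urlopen

def collins_get(url, username, password):
    # Collins never sends a 401 challenge, so attach the credentials directly.
    token = base64.b64encode(('%s:%s' % (username, password)).encode('utf-8'))
    request = Request(url)
    request.add_header('Authorization', 'Basic %s' % token.decode('ascii'))
    return urlopen(request)
```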
- Remove shebangs from:
- ini files
- unit tests
- module_utils
- plugins
- module_docs_fragments
- non-executable Makefiles
- Change non-modules from '/usr/bin/python' to '/usr/bin/env python'.
- Change '/bin/env' to '/usr/bin/env'.
Also removed main functions from unit tests (since they no longer
have a shebang) and fixed a Python 3 compatibility issue with
update_bundled.py so it does not need to specify a Python 2 shebang.
A script was added to check for unexpected shebangs in files.
This script is run during CI on Shippable.
* [GCE] Caching support for inventory script.
The GCE inventory script now supports reading from a cache rather than making the API request each time. The format of the list and host output has not changed.
On script execution, the cache is checked to see if it is older than 'cache_max_age', and if so, it is rebuilt (it can also be explicitly rebuilt).
To support this functionality, the following have been added.
* Config file (gce.ini) changes: A new 'cache' section has been added to the config file, with 'cache_path' and 'cache_max_age' options to allow for configuration. There are intelligent defaults in place if that section and options are not found in the configuration file.
* Command line argument: A new --refresh-cache argument has been added to force the cache to be rebuilt.
* A CloudInventoryCache class, contained in the same file, has been added (see the sketch after this list). As a separate class, it allows for testing (unit tests not included in this PR) and could hopefully be re-used in the future (it contains code borrowed from other inventory scripts).
* load_inventory_from_cache and do_api_calls_and_update_cache methods (largely lifted from other inventory scripts, in the hope of promoting consistency in the future) to determine if the cache is fresh and rebuild it if necessary.
* A 'main' check, to support the script being imported and testable.
A new dictionary has been added to the list output, located at ['_meta']['stats'] that informs if the cache was used and how long it took to load the inventory (in 'cache_used' and 'inventory_load_time', respectively).
* fixed default value error; change cache time to 300
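A minimal sketch of the cache freshness logic described above; the class name matches the description, but the real implementation may differ in detail:
```python
import json
import os
import time

class CloudInventoryCache(object):
    def __init__(self, cache_path='/tmp', cache_max_age=300,
                 cache_name='ansible-cloud-cache'):
        self.cache_path = os.path.join(cache_path, '%s.cache' % cache_name)
        self.cache_max_age = cache_max_age

    def is_valid(self):
        # Fresh if the cache file exists and is younger than cache_max_age.
        if os.path.isfile(self.cache_path):
            return os.path.getmtime(self.cache_path) + self.cache_max_age > time.time()
        return False

    def get_all_data_from_cache(self):
        with open(self.cache_path) as cache:
            return json.load(cache)

    def write_to_cache(self, data):
        with open(self.cache_path, 'w') as cache:
            json.dump(data, cache)
```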
This fixes the use of public IPs in the discovered hosts by
ensuring that the use_private_network check doesn't always evaluate
to False if the associated .ini file specifies this option.
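A sketch of the kind of config read that honours the option; the section name, option placement and file name are illustrative, not the actual patch:
```python
try:
    import configparser                   # Python 3
except ImportError:
    import ConfigParser as configparser   # Python 2

config = configparser.ConfigParser()
config.read('inventory.ini')  # illustrative file name

# Read the option as a boolean, falling back to a default only when the
# .ini file does not set it at all.
use_private_network = False
if config.has_option('defaults', 'use_private_network'):
    use_private_network = config.getboolean('defaults', 'use_private_network')
```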
Currently machine_type does not work if an instance type is set in oVirt: in that case inst.get_instance_type() returns an object, which fails when converting to JSON. It only works when no instance type is set in oVirt, where inst.get_instance_type() is a null value. This change makes sure that the correct instance type name is passed when one is set in oVirt, and null when it is not.
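Roughly along these lines; a sketch, assuming the instance object exposes get_instance_type() and the returned object exposes its name via get_name() (an assumption about the oVirt SDK accessors, not quoted from the patch):
```python
def machine_type_for(inst):
    # Assumption: get_instance_type() returns an object when an instance type
    # is set in oVirt and None otherwise; only the type's name is JSON-serialisable.
    instance_type = inst.get_instance_type()
    return instance_type.get_name() if instance_type is not None else None
```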
* vmware_inventory: fix the --host option
* Fix skip_key evaluation
* Short circuit deep dives in datastores and resourcegroups
* Put timestamps in the debug output and add a few more
* Implement a user defined proplist to increase performance (see the sketch after this list)
* Make all props into dicts
* Update ini with example
* Fix tests
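A minimal sketch of resolving a user-defined proplist of dotted property paths against a VM object instead of serialising everything; the helper name is illustrative:
```python
def facts_from_proplist(vm, proplist):
    # Resolve dotted paths such as 'config.name' or 'guest.ipAddress'
    # against the VM object, rather than walking the whole object tree.
    facts = {}
    for prop in proplist:
        obj = vm
        for part in prop.split('.'):
            obj = getattr(obj, part, None)
            if obj is None:
                break
        facts[prop] = obj
    return facts
```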
Trying to preserve the meaning of the examples. Not all occurrences in
`docsite/rst/playbooks_lookups.rst` have been changed, for instance, so that
the unchanged examples can still be used for testing.
Related to: #17479
* vmware_inventory script improvements
* switch instance finding method to use containerview based searches
* overhaul the serialization method for objects
* Cleanup the debug outputs
* Add a warning about performance
* add cloudforms inventory script
based on the Foreman inventory script; features:
* cached results (default 600 seconds)
* paginated host results (default 100 hosts)
* ssl verification (default True)
* arguments to flush cache and run in debug mode
* suggested rework
* removed second cache / dict with duplicate info
* added purge_actions configuration option to remove the actions from a host (defaults to False)
* added a prefer_ip_address configuration option to give the option of using the IP address instead of the name (defaults to True)
* removed self variables; just use the arguments directly
* added --pretty command line option to pretty print results
* renamed _resolve_params to _resolve_host
* implement suggestions
* removed unused import
* added warnings to help debug connection issues
* renamed self.cache to self.hosts for clarity
* now will use the first ip address as ansible_ssh_host
* flipped the default for the prefer_ip_address config option to false - preserve the name, and specify ansible_ssh_host as the IP address
* added checks and warnings to configuration options, sane defaults for all except required:
** `url` - the first part of the cloudforms server url (https://cfme.example.com)
** `username` - the cloudforms username to log in with
** `password` - the password for the cloudforms user specified
* removed redundant call to fetch host information (since we’re paging results, no need to split the calls)
* added warning for unexpected responses from CloudForms
* debug for the returned string now prints the string instead of forcing it to JSON
* removed no longer needed methods to fetch host information
* using ‘key in list’ instead of ‘list.has_key(key)’
* correctly formatted groups and allowed nested groups
* now create groups for `location`, `type` and `vendor`, with appropriate sub-groups and children
* made to_safe honor config option to clean group names for ansible consumption
* remove prefer_ip_address configuration option
no longer needed since we will specify `ansible_ssh_host` as the returned ip address.
* removed dns_name
no longer needed, will preserve `host[name]` as name in Ansible.
* purge actions from hostvars
changed purge_actions to True
* flake8 suggestion for whitespace
* fix undefined r variable in warning output
use the correct ret variable
* Default purge_actions to True
We probably don't need them, but it is configurable, so just default to removing them.
* Add configuration option to nest cloudforms tags
disabled by default, the nest_tags option will expand cloudforms tags into a nested group/subgroup structure (see the sketch after this list). Otherwise, it will use the whole tag name.
* added purging the actions
removed in previous clean up in error.
* fixed undefined variable
specified the correct variable for logging.
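A rough sketch of the nest_tags expansion, assuming path-like CloudForms tag names such as `/environment/prod` (group-name cleaning via to_safe is left out):
```python
def tag_groups(tag_name, nest_tags=False):
    # nest_tags off: the whole tag name becomes a single group.
    # nest_tags on: each path component becomes a nested group/subgroup.
    if not nest_tags:
        return [tag_name]
    return [part for part in tag_name.split('/') if part]

print(tag_groups('/environment/prod'))                  # ['/environment/prod']
print(tag_groups('/environment/prod', nest_tags=True))  # ['environment', 'prod']
```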
cache_path is used to calculate cache_dir; the script doesn't actually read cache_dir from this file.
This makes the setting work (otherwise it always uses the default).
* Fix broken indentation in vmware inventory
* Allow script to be a symlink without breaking ini path.
* Add some more properties to the bad_types list
* Encode unicode strings to ASCII. Fixes #16763
Updated as per @ryansb's comments. The EC2 inventory script will now fail
with a useful message when boto3 is not installed and the user is trying
to read RDS cluster information.
When making calls to the AWS EC2 API with the DescribeTags action, if the
number of filter values is greater than or equal to 200, the call results in
a 400 Bad Request reply and the error message is:
"Error connecting to AWS backend.\n The maximum number of filter values specified on a single call is 200".
The change is to call get_all_tags with at most 199 filter
values at a time until all are consumed.
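In code, that consumption in chunks of at most 199 looks roughly like this (`connection` is a boto EC2 connection; the helper name and filter key are illustrative):
```python
def get_tags_in_chunks(connection, resource_ids, chunk_size=199):
    # AWS rejects DescribeTags calls carrying 200 or more filter values,
    # so ask for the tags of at most 199 resources per call.
    tags = []
    for i in range(0, len(resource_ids), chunk_size):
        chunk = resource_ids[i:i + chunk_size]
        tags.extend(connection.get_all_tags(filters={'resource-id': chunk}))
    return tags
```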
Enables an LXC server's configuration as an inventory source for LXC
containers.
In LXC, containers can be defined with an "lxc.group" configuration
option that is normally used with lxc-autostart -g. Here, we are using
the same option to build Ansible inventory groups.
In addition to being grouped according to their lxc.group entry (or
entries, as LXC allows a single container to be in multiple groups),
we also add all containers (including those with no lxc.group entry)
to the "all" group.
* Initial work on Brook.io dynamic inventory
* Handle error cases in Brook.io dynamic inventory
* Remove defaults from brook.ini
* Update Brook.io dynamic inventory for libbrook v0.3
Use authentication api to obtain a valid JWT from an API Token.
* Remove defaults from brook.ini
add cobbler api authentication options: username and password, which
can be provided if authentication is enabled or cobbler api is behind
a proxy that needs authentication.
Add support for a new option to the openstack inventory. This is so that,
should one cloud be unavailable, you can still list hosts from any
other openstack clouds you have configured.
This is exposed as an option under the ansible section of the extra config
in the openstack clouds.yaml.
Fix openstack inventory for when we have multiple servers with the same
name but different IDs. Instead of giving every server with the same
name the details of the first server returned with that name, add the
individual servers as they are returned.
This was a logic bug where in a loop over a list of servers we always
added the first server in that list despite having more than one server.
The EC2 inventory script reads its configuration from an INI file. The `instance_filters` option controls which EC2 instances are retrieved for inventory. Filling in this option and running the inventory script with Python 3 crashes with the following error:
```python
Traceback (most recent call last):
File "./contrib/inventory/ec2.py", line 1328, in <module>
Ec2Inventory()
File "./contrib/inventory/ec2.py", line 163, in __init__
self.read_settings()
File "./contrib/inventory/ec2.py", line 393, in read_settings
for instance_filter in config.get('ec2', 'instance_filters', '').split(','):
TypeError: get() takes 3 positional arguments but 4 were given
```
The problem is the third positional argument to the config.get() call: Python 3's configparser only accepts a default via the `fallback` keyword argument.
The fix handles an empty `instance_filters` option on both Python 2 and 3.
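A sketch of a read that behaves the same on both versions (Python 2's ConfigParser.get() has no `fallback` keyword, so guard with has_option() instead of passing a default positionally):
```python
try:
    import configparser                   # Python 3
except ImportError:
    import ConfigParser as configparser   # Python 2

config = configparser.ConfigParser()
config.read('ec2.ini')

if config.has_option('ec2', 'instance_filters'):
    instance_filters = config.get('ec2', 'instance_filters')
else:
    instance_filters = ''

# An empty option yields no filters instead of a crash.
filters = [f for f in instance_filters.split(',') if f.strip()]
```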
There are cases where the host list that comes back from the cloud
contains duplicates. This causes us to report those hosts with UUIDs, which we do to
support truly different servers that share a name. However, in the case
where duplicate host entries have the same UUID, we know it's a data
hiccup.
The OpenStack inventory lists hostnames as the UUIDs because hostnames
are not guaranteed to be unique on OpenStack. However, for the common
case, this is just confusing.
The new behavior is a visible change, so make it an opt-in via config.
Only turn the hostnames to UUIDs if there are duplicate hostnames.
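Conceptually the opt-in comes down to counting names first and falling back to the UUID only for names that repeat; a sketch (the `name`/`id` fields follow the usual OpenStack server representation):
```python
from collections import Counter

def inventory_names(servers, use_hostnames=True):
    # Keep the plain hostname unless it occurs more than once, in which
    # case use the server UUID so duplicate names stay distinguishable.
    counts = Counter(server['name'] for server in servers)
    names = {}
    for server in servers:
        if use_hostnames and counts[server['name']] == 1:
            names[server['id']] = server['name']
        else:
            names[server['id']] = server['id']
    return names
```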
If enabled, this will convert tags of the form "a,b,c" to a list and use
the results to create additional inventory groups.
This is based on PR #8676 by nickpeck (but not a straight rebase; both
the code and the nomenclature have been changed here).
Closes #8676
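The expansion itself is straightforward; a sketch (the option name expand_csv_tags and its surroundings are assumptions about the config, not quoted from it):
```python
def expand_csv_tag(value, expand_csv_tags=True):
    # "a,b,c" becomes ['a', 'b', 'c']; each element can then seed an
    # additional inventory group. Other values pass through unchanged.
    if expand_csv_tags and value and ',' in value:
        return [v.strip() for v in value.split(',')]
    return value

print(expand_csv_tag('a,b,c'))   # ['a', 'b', 'c']
print(expand_csv_tag('plain'))   # plain
```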
This allows the EC2 inventory plugin to be used with
the same configuration against different EC2 accounts
A profile can be passed using the --profile argument or the
EC2_PROFILE environment variable, e.g.
```
EC2_PROFILE=prod ansible-playbook -i ec2.py playbook.yml
```
Added documentation on profiles to the EC2 dynamic inventory doc.
Profiles are only used if the --profile argument is given
or EC2_PROFILE is set, to maintain compatibility with boto < 2.24.
Works around a minor bug in boto where trying to use
a security token with a profile fails (boto/boto#2100)
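In boto terms, the profile is only forwarded when one was requested, roughly like this (sketch; profile_name needs boto >= 2.24):
```python
import boto.ec2

def connect(region, boto_profile=None):
    # Only pass profile_name when a profile was requested, so older boto
    # releases without profile support keep working.
    if boto_profile:
        return boto.ec2.connect_to_region(region, profile_name=boto_profile)
    return boto.ec2.connect_to_region(region)
```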
Replace .iteritems() with six.iteritems() everywhere except in
module_utils (because there's no 'six' on the remote host). And except
in lib/ansible/galaxy/data/metadata_template.j2, because I'm not sure
six is available there.
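For reference, the mechanical change looks like this:
```python
import six

data = {'a': 1, 'b': 2}

# Before (Python 2 only):
#     for key, value in data.iteritems():
# After (Python 2 and 3):
for key, value in six.iteritems(data):
    print(key, value)
```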
Configured with environment variables -- e.g.:
ANSIBLE_INVENTORY_CONSUL_IO_LOG_ENABLED=1 ANSIBLE_INVENTORY_CONSUL_IO_LOG_LEVEL=DEBUG /path/to/consul_io.py --list
This gives some verbose logging, including showing all HTTP requests being
made, which I am finding useful, as I am trying to improve the performance of
this script.
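A minimal sketch of wiring logging to those variables (the real consul_io.py may arrange this differently):
```python
import logging
import os

if os.environ.get('ANSIBLE_INVENTORY_CONSUL_IO_LOG_ENABLED'):
    level_name = os.environ.get('ANSIBLE_INVENTORY_CONSUL_IO_LOG_LEVEL', 'DEBUG')
    # A root logger at the requested level also surfaces the per-request
    # debug output from the HTTP library used to talk to Consul.
    logging.basicConfig(level=getattr(logging, level_name.upper(), logging.DEBUG))
```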