* fix mismatch between documentation and param definition
* removed some E324 from ignore.txt
* fixed mistake
* remove one more E324
* removed function app
* fixing append tags
* leaving append tags for later
Edit to the notepadplusplus example.
Removed `.install` as per the Chocolatey documentation; the package is just `notepadplusplus`.
docs found here: https://chocolatey.org/packages/notepadplusplus
* Allow idempotent use of ec2_ami_copy
When `tag_equality` is set true, use tags to determine
whether AMIs in different accounts are the same, and don't
copy the AMI again if they are.
Use AnsibleAWSModule and make imports more consistent
with other modules
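A minimal sketch of what such a tag-based equality check could look like (the helper name and the boto3 calls below are illustrative assumptions, not the module's actual code):

```python
import boto3


def copied_image_exists(region, source_tags):
    """Return True if an AMI owned by this account already carries the same tags."""
    ec2 = boto3.client('ec2', region_name=region)
    for image in ec2.describe_images(Owners=['self'])['Images']:
        tags = {t['Key']: t['Value'] for t in image.get('Tags', [])}
        if tags == source_tags:
            # Same tags -> treat as the same AMI and skip the copy.
            return True
    return False
```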
* Update version added
* More code review changes
* Review changes - Recommended way to start EC2 connection
pip 10 gives exit code 1 for empty argument lists (pip < 10 gave exit code 0);
see also https://github.com/pypa/pip/pull/4210
To still allow playbooks to pass when given empty lists, don't call
pip in that case, but show a warning instead.
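A hedged sketch of that guard as it could appear in a pip-style module (the argument names are illustrative):

```python
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='list'),
            requirements=dict(type='str'),
        ),
    )
    packages = module.params['name'] or []
    requirements = module.params['requirements']

    # pip >= 10 exits with status 1 when handed an empty argument list,
    # so don't invoke pip at all if there is nothing to install.
    if not packages and not requirements:
        module.warn('No packages or requirements supplied; pip was not called.')
        module.exit_json(changed=False)

    # ... normal pip handling would continue here ...


if __name__ == '__main__':
    main()
```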
Check the datatype of the device instead of comparing devices directly in
vmware_guest. Also, added test cases to check this behavior.
DPVG is not supported in the current version of vcsim
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
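A rough illustration of comparing by device class instead of comparing device objects directly, assuming pyVmomi's vim device types (the helper name is made up for the example):

```python
from pyVmomi import vim


def is_same_device_type(existing_device, desired_device_class):
    """Compare a configured vSphere device by its class, not by object identity."""
    return isinstance(existing_device, desired_device_class)


# Example: check whether an existing NIC is a VMXNET3 adapter.
# is_same_device_type(nic, vim.vm.device.VirtualVmxnet3)
```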
* Fix for Azure loadbalancer tags.
It was possible to add tags to an Azure loadbalancer, but the tags were never set in Azure.
This patch fixes that.
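A hedged sketch of carrying the tags through to Azure with the azure-mgmt-network SDK (resource names and client setup are assumptions, and the exact create/update method name varies between SDK versions):

```python
from azure.mgmt.network.models import LoadBalancer

# Sketch only: make sure the tags from the task parameters end up on the
# LoadBalancer model that is sent to Azure; otherwise they are silently lost.
lb_params = LoadBalancer(
    location='westeurope',                    # assumed location
    tags={'env': 'test', 'owner': 'ops'},     # tags from the task parameters
)
# network_client.load_balancers.create_or_update('myResourceGroup',
#                                                'myLoadBalancer', lb_params)
```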
* Pass shippable tests
* azure_rm_loadbalancer_facts requires rg
Getting facts of all loadbalancers via azure_rm_loadbalancer_facts requires a resource group.
This fix adds the resource group as a parameter to the list() call.
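For reference, a sketch of that list() call with the resource group supplied (`network_client` is assumed to be an authenticated NetworkManagementClient):

```python
# List load balancers scoped to the given resource group.
items = network_client.load_balancers.list('myResourceGroup')
results = [item.as_dict() for item in items]
```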
* Revert changes in azure_rm_loadbalancer_facts
The changes belong to another pull request.
Previously the test frameworks (DCI, Zuul) were installing the various
dependencies, which meant the list of what was required was duplicated.
Having everything defined in ansible-test makes it easier for people to
run tests locally.
Also, this allows the tests to work correctly on both Python 2 and Python 3.
* Add members to bigip_gtm_pool
* Add monitors to bigip_gtm_pool
* Add availability_requirements to bigip_gtm_pool
* Refactor bigip_gtm_pool
* Normalize the product value returned by gtm facts
* Corrected various documentation issues
* Updated various F5 coding conventions
* Add partition to bigip_static_route
* Added more unit tests
* Refactor bigip_gtm_virtual_server
* Add translation_address to bigip_gtm_virtual_server
* Add translation_port to bigip_gtm_virtual_server
* Add availability_requirements to bigip_gtm_virtual_server
* Add monitors to bigip_gtm_virtual_server
* Add virtual_server_dependencies to bigip_gtm_virtual_server
* Add link to bigip_gtm_virtual_server
* Add limits to bigip_gtm_virtual_server
* Add partition to bigip_gtm_virtual_server
* Fix bigip_gtm_server to correctly create other server types
* Add type to virtual_server
* Add address_translation to virtual_server
* Add port_translation to virtual_server
* Add ip_protocol to virtual_server
* Add firewall_enforced_policy to virtual_server
* Add firewall_staged_policy to virtual_server
* Add security_log_profiles to virtual_server
* configurable list of facts modules
- allow for args dict for specific modules
- add way to pass parameters
- avoid facts polluting tests
- move to 'facts gathered' flag
- add 'gathering' setting tests
Based on the documentation, 'wait_timeout' is 'Used in conjunction with instance_ids option'. This led me to believe that I could not use this parameter to try to solve the 'Waited too long for ELB instances to be healthy' error I was experiencing.
This seems a little like duplicating code, since all of the called
functions need it, but prev_state isn't part of argument parsing, so it
doesn't belong in the top-level main() function.
It feels like this repeats itself because it pulls the creation of
a byte string for the path into every state function. However, it actually
cleans up the API by passing a single parameter for the path
instead of sending it in twice.
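A hedged sketch of the shape this gives each state handler: it receives the path once, derives the byte-string form itself, and takes prev_state explicitly (the function and variable names here are illustrative, not the file module's actual API):

```python
from ansible.module_utils._text import to_bytes


def ensure_absent(path, prev_state):
    """Illustrative state handler: takes the path once and builds the
    byte-string form internally instead of having main() pass both."""
    b_path = to_bytes(path, errors='surrogate_or_strict')
    result = {'path': path, 'changed': False}
    if prev_state != 'absent':
        # ... removal logic using b_path would go here ...
        result['changed'] = True
    return result
```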
Well-organized programs should have only a few successful exit points.
This commit moves all of the successful exit points for the file module
into the main() function. Other functions return their results to the
main function, which can then choose whether there is more processing to
do before exiting.
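A sketch of that control flow, with the state handlers returning result dictionaries and main() owning the single successful exit (get_state and the ensure_* helpers are hypothetical stand-ins for the real state functions):

```python
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict(
        path=dict(type='path', required=True),
        state=dict(type='str', default='file',
                   choices=['absent', 'directory', 'file']),
    ))
    path = module.params['path']
    prev_state = get_state(path)          # hypothetical helper

    if module.params['state'] == 'absent':
        result = ensure_absent(path, prev_state)
    elif module.params['state'] == 'directory':
        result = ensure_directory(path, prev_state)
    else:
        result = ensure_file(path, prev_state)

    # The only successful exit point lives here in main().
    module.exit_json(**result)
```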
Use an exception to return failures rather than fail_json(). This way
we can easily catch the failures if the calling code decides it can deal
with them. This has the side effect of making it easier to unit test this
code, as we can catch the expected exceptions instead of having to catch
the interpreter exiting and then parse stdout for the expected data.
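A minimal sketch of the pattern, assuming a module-local exception class (the class name is made up for illustration):

```python
from ansible.module_utils.basic import AnsibleModule


class ModuleFailure(Exception):
    """Hypothetical exception carrying the kwargs fail_json() would receive."""

    def __init__(self, msg, **results):
        super(ModuleFailure, self).__init__(msg)
        self.msg = msg
        self.results = results


def ensure_file(path, prev_state):
    if prev_state == 'directory':
        # Raise instead of calling module.fail_json() here; callers (and
        # unit tests) can catch this without the interpreter exiting.
        raise ModuleFailure('%s is a directory, cannot continue' % path, path=path)
    return {'path': path, 'changed': False}


def main():
    module = AnsibleModule(argument_spec=dict(path=dict(type='path', required=True)))
    try:
        result = ensure_file(module.params['path'], 'file')
    except ModuleFailure as exc:
        # The single failure exit point: translate the exception into fail_json().
        module.fail_json(msg=exc.msg, **exc.results)
    module.exit_json(**result)


if __name__ == '__main__':
    main()
```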