Merge remote-tracking branch 'upstream/devel' into devel

Timothy Vandenbrande
commit 68c85950b0

@ -0,0 +1,15 @@
sudo: false
language: python
python:
- "2.7"
addons:
apt:
sources:
- deadsnakes
packages:
- python2.4
- python2.6
script:
- python2.4 -m compileall -fq -x 'cloud/|monitoring/zabbix.*\.py|/layman\.py|/maven_artifact\.py|clustering/consul.*\.py|notification/pushbullet\.py' .
- python2.6 -m compileall -fq .
- python2.7 -m compileall -fq .

@ -1,28 +1,37 @@
Welcome To Ansible GitHub
=========================
Contributing to ansible-modules-extras
======================================
Hi! Nice to see you here!
The Ansible Extras Modules are written and maintained by the Ansible community, according to the following contribution guidelines.
If you'd like to contribute code
================================
Please see [this web page](http://docs.ansible.com/community.html) for information about the contribution process. Important license agreement information is also included on that page.
If you'd like to contribute code to an existing module
======================================================
Each module in Extras is maintained by the owner of that module; each module's owner is indicated in the documentation section of the module itself. Any pull request for a module that is given a +1 by the owner in the comments will be merged by the Ansible team.
If you'd like to contribute a new module
========================================
Ansible welcomes new modules. Please be certain that you've read the [module development guide and standards](http://docs.ansible.com/developing_modules.html) thoroughly before submitting your module.
Each new module requires approval from two current module owners for inclusion. The Ansible community reviews new modules as often as possible, but please be patient; there are a lot of new module submissions in the pipeline, and it takes time to evaluate a new module for its adherence to module standards.
Once your module is accepted, you become responsible for maintenance of that module, which means responding to pull requests and issues in a reasonably timely manner.
If you'd like to ask a question
===============================
Please see [this web page](http://docs.ansible.com/community.html) for community information, which includes pointers on how to ask questions on the [mailing lists](http://docs.ansible.com/community.html#mailing-list-information) and IRC.
The github issue tracker is not the best place for questions for various reasons, but both IRC and the mailing list are very helpful places for those things, and that page has the pointers to those.
If you'd like to contribute code
================================
Please see [this web page](http://docs.ansible.com/community.html) for information about the contribution process. Important license agreement information is also included on that page.
The Github issue tracker is not the best place for questions for various reasons, but both IRC and the mailing list are very helpful places for those things, and that page has the pointers to those.
If you'd like to file a bug
===========================
I'd also read the community page above, but in particular, make sure you copy [this issue template](https://github.com/ansible/ansible/blob/devel/ISSUE_TEMPLATE.md) into your ticket description. We have a friendly neighborhood bot that will remind you if you forget :) This template helps us organize tickets faster and prevents asking some repeated questions, so it's very helpful to us and we appreciate your help with it.
Read the community page above, but in particular, make sure you copy [this issue template](https://github.com/ansible/ansible/blob/devel/ISSUE_TEMPLATE.md) into your ticket description. We have a friendly neighborhood bot that will remind you if you forget :) This template helps us organize tickets faster and prevents asking some repeated questions, so it's very helpful to us and we appreciate your help with it.
Also please make sure you are testing on the latest released version of Ansible or the development branch.
Thanks!

@ -0,0 +1,160 @@
New module reviewers
====================
The following list represents all current GitHub module reviewers. It currently comprises all Ansible module authors, past and present.
Two +1 votes by any of these module reviewers on a new module pull request will result in the inclusion of that module into Ansible Extras.
Active
======
"Adam Garside (@fabulops)"
"Adam Keech (@smadam813)"
"Adam Miller (@maxamillion)"
"Alex Coomans (@drcapulet)"
"Alexander Bulimov (@abulimov)"
"Alexander Saltanov (@sashka)"
"Alexander Winkler (@dermute)"
"Andrew de Quincey (@adq)"
"André Paramés (@andreparames)"
"Andy Hill (@andyhky)"
"Artūras `arturaz` Šlajus (@arturaz)"
"Augustus Kling (@AugustusKling)"
"BOURDEL Paul (@pb8226)"
"Balazs Pocze (@banyek)"
"Ben Whaley (@bwhaley)"
"Benno Joy (@bennojoy)"
"Bernhard Weitzhofer (@b6d)"
"Boyd Adamson (@brontitall)"
"Brad Olson (@bradobro)"
"Brian Coca (@bcoca)"
"Brice Burgess (@briceburg)"
"Bruce Pennypacker (@bpennypacker)"
"Carson Gee (@carsongee)"
"Chris Church (@cchurch)"
"Chris Hoffman (@chrishoffman)"
"Chris Long (@alcamie101)"
"Chris Schmidt (@chrisisbeef)"
"Christian Berendt (@berendt)"
"Christopher H. Laco (@claco)"
"Cristian van Ee (@DJMuggs)"
"Dag Wieers (@dagwieers)"
"Dane Summers (@dsummersl)"
"Daniel Jaouen (@danieljaouen)"
"Daniel Schep (@dschep)"
"Dariusz Owczarek (@dareko)"
"Darryl Stoflet (@dstoflet)"
"David CHANIAL (@davixx)"
"David Stygstra (@stygstra)"
"Derek Carter (@goozbach)"
"Dimitrios Tydeas Mengidis (@dmtrs)"
"Doug Luce (@dougluce)"
"Dylan Martin (@pileofrogs)"
"Elliott Foster (@elliotttf)"
"Eric Johnson (@erjohnso)"
"Evan Duffield (@scicoin-project)"
"Evan Kaufman (@EvanK)"
"Evgenii Terechkov (@evgkrsk)"
"Franck Cuny (@franckcuny)"
"Gareth Rushgrove (@garethr)"
"Hagai Kariti (@hkariti)"
"Hector Acosta (@hacosta)"
"Hiroaki Nakamura (@hnakamur)"
"Ivan Vanderbyl (@ivanvanderbyl)"
"Jakub Jirutka (@jirutka)"
"James Cammarata (@jimi-c)"
"James Laska (@jlaska)"
"James S. Martin (@jsmartin)"
"Jan-Piet Mens (@jpmens)"
"Jayson Vantuyl (@jvantuyl)"
"Jens Depuydt (@jensdepuydt)"
"Jeroen Hoekx (@jhoekx)"
"Jesse Keating (@j2sol)"
"Jim Dalton (@jsdalton)"
"Jim Richardson (@weaselkeeper)"
"Jimmy Tang (@jcftang)"
"Johan Wiren (@johanwiren)"
"John Dewey (@retr0h)"
"John Jarvis (@jarv)"
"John Whitbeck (@jwhitbeck)"
"Jon Hawkesworth (@jhawkesworth)"
"Jonas Pfenniger (@zimbatm)"
"Jonathan I. Davila (@defionscode)"
"Joseph Callen (@jcpowermac)"
"Kevin Carter (@cloudnull)"
"Lester Wade (@lwade)"
"Lorin Hochstein (@lorin)"
"Manuel Sousa (@manuel-sousa)"
"Mark Theunissen (@marktheunissen)"
"Matt Coddington (@mcodd)"
"Matt Hite (@mhite)"
"Matt Makai (@makaimc)"
"Matt Martz (@sivel)"
"Matt Wright (@mattupstate)"
"Matthew Vernon (@mcv21)"
"Matthew Williams (@mgwilliams)"
"Matthias Vogelgesang (@matze)"
"Max Riveiro (@kavu)"
"Michael Gregson (@mgregson)"
"Michael J. Schultz (@mjschultz)"
"Michael Warkentin (@mwarkentin)"
"Mischa Peters (@mischapeters)"
"Monty Taylor (@emonty)"
"Nandor Sivok (@dominis)"
"Nate Coraor (@natefoo)"
"Nate Kingsley (@nate-kingsley)"
"Nick Harring (@NickatEpic)"
"Patrick Callahan (@dirtyharrycallahan)"
"Patrick Ogenstad (@ogenstad)"
"Patrick Pelletier (@skinp)"
"Patrik Lundin (@eest)"
"Paul Durivage (@angstwad)"
"Pavel Antonov (@softzilla)"
"Pepe Barbe (@elventear)"
"Peter Mounce (@petemounce)"
"Peter Oliver (@mavit)"
"Peter Sprygada (@privateip)"
"Peter Tan (@tanpeter)"
"Philippe Makowski (@pmakowski)"
"Phillip Gentry, CX Inc (@pcgentry)"
"Quentin Stafford-Fraser (@quentinsf)"
"Ramon de la Fuente (@ramondelafuente)"
"Raul Melo (@melodous)"
"Ravi Bhure (@ravibhure)"
"René Moser (@resmo)"
"Richard Hoop (@rhoop)"
"Richard Isaacson (@risaacson)"
"Rick Mendes (@rickmendes)"
"Romeo Theriault (@romeotheriault)"
"Scott Anderson (@tastychutney)"
"Sebastian Kornehl (@skornehl)"
"Serge van Ginderachter (@srvg)"
"Sergei Antipov (@UnderGreen)"
"Seth Edwards (@sedward)"
"Silviu Dicu (@silviud)"
"Simon JAILLET (@jails)"
"Stephen Fromm (@sfromm)"
"Steve (@groks)"
"Steve Gargan (@sgargan)"
"Steve Smith (@tarka)"
"Takashi Someda (@tksmd)"
"Taneli Leppä (@rosmo)"
"Tim Bielawa (@tbielawa)"
"Tim Bielawa (@tbielawa)"
"Tim Mahoney (@timmahoney)"
"Timothy Appnel (@tima)"
"Tom Bamford (@tombamford)"
"Trond Hindenes (@trondhindenes)"
"Vincent Van der Kussen (@vincentvdk)"
"Vincent Viallet (@zbal)"
"WAKAYAMA Shirou (@shirou)"
"Will Thames (@willthames)"
"Willy Barro (@willybarro)"
"Xabier Larrakoetxea (@slok)"
"Yeukhon Wong (@yeukhon)"
"Zacharie Eakin (@zeekin)"
"berenddeboer (@berenddeboer)"
"bleader (@bleader)"
"curtis (@ccollicutt)"
Retired
=======
None yet :)

@ -0,0 +1 @@
1.8.2

@ -0,0 +1,88 @@
Guidelines for AWS modules
--------------------------
Naming your module
==================
Base the name of the module on the part of AWS that
you actually use. (A good rule of thumb is to take
whatever module you use with boto as a starting point).
Don't further abbreviate names - if something is a well
known abbreviation due to it being a major component of
AWS, that's fine, but don't create new ones independently
(e.g. VPC, ELB, etc. are fine)
Using boto
==========
Wrap the `import` statements in a try block and fail the
module later on if the import fails
```
try:
    import boto
    import boto.module.that.you.use
    HAS_BOTO = True
except ImportError:
    HAS_BOTO = False

<lots of code here>

def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(
        dict(
            module_specific_parameter=dict(),
        )
    )

    module = AnsibleModule(
        argument_spec=argument_spec,
    )

    if not HAS_BOTO:
        module.fail_json(msg='boto required for this module')
```
Try to keep backward compatibility with relatively recent
versions of boto. That means that if you want to implement some
functionality that uses a new feature of boto, it should only
fail if that feature actually needs to be run, with a message
saying which version of boto is needed.
Use feature testing (e.g. `hasattr(boto.module, 'shiny_new_method')`)
to check whether boto supports a feature rather than version checking
e.g. from the `ec2` module:
```
if boto_supports_profile_name_arg(ec2):
    params['instance_profile_name'] = instance_profile_name
else:
    if instance_profile_name is not None:
        module.fail_json(
            msg="instance_profile_name parameter requires Boto version 2.5.0 or higher")
```
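For illustration, a helper like `boto_supports_profile_name_arg` can be written as a
simple capability probe. This is a minimal sketch, not necessarily the exact
implementation used by the `ec2` module:
```
def boto_supports_profile_name_arg(ec2):
    # Probe the connection object for the capability instead of comparing
    # boto version strings; instance_profile_name was added in boto 2.5.0.
    run_instances = getattr(ec2, 'run_instances', None)
    if run_instances is None:
        return False
    return 'instance_profile_name' in run_instances.func_code.co_varnames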
Connecting to AWS
=================
For EC2 you can just use
```
ec2 = ec2_connect(module)
```
For other modules, you should use `get_aws_connection_info` and then
`connect_to_aws`. To connect to an example `xyz` service:
```
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
xyz = connect_to_aws(boto.xyz, region, **aws_connect_params)
```
The reason for using `get_aws_connection_info` and `connect_to_aws`
(and even `ec2_connect` uses those under the hood) rather than doing it
yourself is that they handle some of the more esoteric connection
options such as security tokens and boto profiles.
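As a rough sketch (error handling trimmed, and `xyz` is still just a placeholder
service name), the pieces above usually fit together in `main()` like this:
```
def main():
    argument_spec = ec2_argument_spec()
    module = AnsibleModule(argument_spec=argument_spec)

    if not HAS_BOTO:
        module.fail_json(msg='boto required for this module')

    region, ec2_url, aws_connect_params = get_aws_connection_info(module)
    if not region:
        module.fail_json(msg="region must be specified")

    try:
        xyz = connect_to_aws(boto.xyz, region, **aws_connect_params)
    except boto.exception.NoAuthHandlerFound, e:
        module.fail_json(msg=str(e))

    # ... call the xyz connection and exit_json with the results ...

from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
```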

@ -19,9 +19,13 @@ DOCUMENTATION = """
module: cloudtrail
short_description: manage CloudTrail creation and deletion
description:
- Creates or deletes CloudTrail configuration. Ensures logging is also enabled. This module has a dependency on python-boto >= 2.21.
- Creates or deletes CloudTrail configuration. Ensures logging is also enabled.
version_added: "2.0"
author: Ted Timmons
author:
- "Ansible Core Team"
- "Ted Timmons"
requirements:
- "boto >= 2.21"
options:
state:
description:
@ -85,21 +89,17 @@ EXAMPLES = """
s3_key_prefix='' region=us-east-1
- name: remove cloudtrail
local_action: cloudtrail state=absent name=main region=us-east-1
local_action: cloudtrail state=disabled name=main region=us-east-1
"""
import time
import sys
import os
from collections import Counter
boto_import_failed = False
HAS_BOTO = False
try:
import boto
import boto.cloudtrail
from boto.regioninfo import RegionInfo
HAS_BOTO = True
except ImportError:
boto_import_failed = True
HAS_BOTO = False
class CloudTrailManager:
"""Handles cloudtrail configuration"""
@ -150,9 +150,6 @@ class CloudTrailManager:
def main():
if not has_libcloud:
module.fail_json(msg='boto is required.')
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
state={'required': True, 'choices': ['enabled', 'disabled'] },
@ -164,6 +161,10 @@ def main():
required_together = ( ['state', 's3_bucket_name'] )
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True, required_together=required_together)
if not HAS_BOTO:
module.fail_json(msg='boto is required.')
ec2_url, access_key, secret_key, region = get_ec2_creds(module)
aws_connect_params = dict(aws_access_key_id=access_key,
aws_secret_access_key=secret_key)

@ -0,0 +1,275 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
---
module: dynamodb_table
short_description: Create, update or delete AWS Dynamo DB tables.
description:
- Create or delete AWS Dynamo DB tables.
- Can update the provisioned throughput on existing tables.
- Returns the status of the specified table.
author: Alan Loi (@loia)
version_added: "2.0"
requirements:
- "boto >= 2.13.2"
options:
state:
description:
- Create or delete the table
required: false
choices: ['present', 'absent']
default: 'present'
name:
description:
- Name of the table.
required: true
hash_key_name:
description:
- Name of the hash key.
- Required when C(state=present).
required: false
hash_key_type:
description:
- Type of the hash key.
required: false
choices: ['STRING', 'NUMBER', 'BINARY']
default: 'STRING'
range_key_name:
description:
- Name of the range key.
required: false
range_key_type:
description:
- Type of the range key.
required: false
choices: ['STRING', 'NUMBER', 'BINARY']
default: 'STRING'
read_capacity:
description:
- Read throughput capacity (units) to provision.
required: false
default: 1
write_capacity:
description:
- Write throughput capacity (units) to provision.
required: false
default: 1
region:
description:
- The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used.
required: false
aliases: ['aws_region', 'ec2_region']
extends_documentation_fragment: aws
"""
EXAMPLES = '''
# Create dynamo table with hash and range primary key
- dynamodb_table:
name: my-table
region: us-east-1
hash_key_name: id
hash_key_type: STRING
range_key_name: create_time
range_key_type: NUMBER
read_capacity: 2
write_capacity: 2
# Update capacity on existing dynamo table
- dynamodb_table:
name: my-table
region: us-east-1
read_capacity: 10
write_capacity: 10
# Delete dynamo table
- dynamodb_table:
name: my-table
region: us-east-1
state: absent
'''
RETURN = '''
table_status:
description: The current status of the table.
returned: success
type: string
sample: ACTIVE
'''
import traceback

try:
import boto
import boto.dynamodb2
from boto.dynamodb2.table import Table
from boto.dynamodb2.fields import HashKey, RangeKey
from boto.dynamodb2.types import STRING, NUMBER, BINARY
from boto.exception import BotoServerError, JSONResponseError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
if HAS_BOTO:
    DYNAMO_TYPE_MAP = {
        'STRING': STRING,
        'NUMBER': NUMBER,
        'BINARY': BINARY
    }
def create_or_update_dynamo_table(connection, module):
table_name = module.params.get('name')
hash_key_name = module.params.get('hash_key_name')
hash_key_type = module.params.get('hash_key_type')
range_key_name = module.params.get('range_key_name')
range_key_type = module.params.get('range_key_type')
read_capacity = module.params.get('read_capacity')
write_capacity = module.params.get('write_capacity')
schema = [
HashKey(hash_key_name, DYNAMO_TYPE_MAP.get(hash_key_type)),
RangeKey(range_key_name, DYNAMO_TYPE_MAP.get(range_key_type))
]
throughput = {
'read': read_capacity,
'write': write_capacity
}
result = dict(
region=module.params.get('region'),
table_name=table_name,
hash_key_name=hash_key_name,
hash_key_type=hash_key_type,
range_key_name=range_key_name,
range_key_type=range_key_type,
read_capacity=read_capacity,
write_capacity=write_capacity,
)
try:
table = Table(table_name, connection=connection)
if dynamo_table_exists(table):
result['changed'] = update_dynamo_table(table, throughput=throughput, check_mode=module.check_mode)
else:
if not module.check_mode:
Table.create(table_name, connection=connection, schema=schema, throughput=throughput)
result['changed'] = True
if not module.check_mode:
result['table_status'] = table.describe()['Table']['TableStatus']
except BotoServerError:
result['msg'] = 'Failed to create/update dynamo table due to error: ' + traceback.format_exc()
module.fail_json(**result)
else:
module.exit_json(**result)
def delete_dynamo_table(connection, module):
table_name = module.params.get('name')
result = dict(
region=module.params.get('region'),
table_name=table_name,
)
try:
table = Table(table_name, connection=connection)
if dynamo_table_exists(table):
if not module.check_mode:
table.delete()
result['changed'] = True
else:
result['changed'] = False
except BotoServerError:
result['msg'] = 'Failed to delete dynamo table due to error: ' + traceback.format_exc()
module.fail_json(**result)
else:
module.exit_json(**result)
def dynamo_table_exists(table):
try:
table.describe()
return True
except JSONResponseError, e:
if e.message and e.message.startswith('Requested resource not found'):
return False
else:
raise e
def update_dynamo_table(table, throughput=None, check_mode=False):
table.describe() # populate table details
if has_throughput_changed(table, throughput):
if not check_mode:
return table.update(throughput=throughput)
else:
return True
return False
def has_throughput_changed(table, new_throughput):
if not new_throughput:
return False
return new_throughput['read'] != table.throughput['read'] or \
new_throughput['write'] != table.throughput['write']
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
state=dict(default='present', choices=['present', 'absent']),
name=dict(required=True, type='str'),
hash_key_name=dict(required=True, type='str'),
hash_key_type=dict(default='STRING', type='str', choices=['STRING', 'NUMBER', 'BINARY']),
range_key_name=dict(type='str'),
range_key_type=dict(default='STRING', type='str', choices=['STRING', 'NUMBER', 'BINARY']),
read_capacity=dict(default=1, type='int'),
write_capacity=dict(default=1, type='int'),
))
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
connection = connect_to_aws(boto.dynamodb2, region, **aws_connect_params)
state = module.params.get('state')
if state == 'present':
create_or_update_dynamo_table(connection, module)
elif state == 'absent':
delete_dynamo_table(connection, module)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()

@ -0,0 +1,208 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_ami_copy
short_description: copies AMI between AWS regions, return new image id
description:
- Copies AMI from a source region to a destination region. This module has a dependency on python-boto >= 2.5
version_added: "2.0"
options:
source_region:
description:
- the source region that AMI should be copied from
required: true
region:
description:
- the destination region that AMI should be copied to
required: true
aliases: ['aws_region', 'ec2_region', 'dest_region']
source_image_id:
description:
- the id of the image in source region that should be copied
required: true
name:
description:
- The name of the new image to copy
required: false
default: null
description:
description:
- An optional human-readable string describing the contents and purpose of the new AMI.
required: false
default: null
wait:
description:
- wait for the copied AMI to be in state 'available' before returning.
required: false
default: "no"
choices: [ "yes", "no" ]
wait_timeout:
description:
- how long before wait gives up, in seconds
required: false
default: 1200
tags:
description:
- a hash/dictionary of tags to add to the new copied AMI; '{"key":"value"}' and '{"key":"value","key":"value"}'
required: false
default: null
author: Amir Moulavi <amir.moulavi@gmail.com>
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Basic AMI Copy
- local_action:
module: ec2_ami_copy
source_region: eu-west-1
dest_region: us-east-1
source_image_id: ami-xxxxxxx
name: SuperService-new-AMI
description: latest patch
tags: '{"Name":"SuperService-new-AMI", "type":"SuperService"}'
wait: yes
register: image_id
'''
import sys
import time
try:
import boto
import boto.ec2
from boto.vpc import VPCConnection
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def copy_image(module, ec2):
"""
Copies an AMI
module : AnsibleModule object
ec2: authenticated ec2 connection object
"""
source_region = module.params.get('source_region')
source_image_id = module.params.get('source_image_id')
name = module.params.get('name')
description = module.params.get('description')
tags = module.params.get('tags')
wait_timeout = int(module.params.get('wait_timeout'))
wait = module.params.get('wait')
try:
params = {'source_region': source_region,
'source_image_id': source_image_id,
'name': name,
'description': description
}
image_id = ec2.copy_image(**params).image_id
except boto.exception.BotoServerError, e:
module.fail_json(msg="%s: %s" % (e.error_code, e.error_message))
img = wait_until_image_is_recognized(module, ec2, wait_timeout, image_id, wait)
img = wait_until_image_is_copied(module, ec2, wait_timeout, img, image_id, wait)
register_tags_if_any(module, ec2, tags, image_id)
module.exit_json(msg="AMI copy operation complete", image_id=image_id, state=img.state, changed=True)
# register tags to the copied AMI in dest_region
def register_tags_if_any(module, ec2, tags, image_id):
if tags:
try:
ec2.create_tags([image_id], tags)
except Exception as e:
module.fail_json(msg=str(e))
# wait here until the image is copied (i.e. the state becomes available
def wait_until_image_is_copied(module, ec2, wait_timeout, img, image_id, wait):
wait_timeout = time.time() + wait_timeout
while wait and wait_timeout > time.time() and (img is None or img.state != 'available'):
img = ec2.get_image(image_id)
time.sleep(3)
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="timed out waiting for image to be copied")
return img
# wait until the image is recognized.
def wait_until_image_is_recognized(module, ec2, wait_timeout, image_id, wait):
for i in range(wait_timeout):
try:
return ec2.get_image(image_id)
except boto.exception.EC2ResponseError, e:
# This exception we expect initially right after registering the copy with EC2 API
if 'InvalidAMIID.NotFound' in e.error_code and wait:
time.sleep(1)
else:
# On any other exception we should fail
module.fail_json(
msg="Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help: " + str(
e))
else:
module.fail_json(msg="timed out waiting for image to be recognized")
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
source_region=dict(required=True),
source_image_id=dict(required=True),
name=dict(),
description=dict(default=""),
wait=dict(type='bool', default=False),
wait_timeout=dict(default=1200),
tags=dict(type='dict')))
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
    module.fail_json(msg='boto required for this module')
try:
ec2 = ec2_connect(module)
except boto.exception.NoAuthHandlerFound, e:
module.fail_json(msg=str(e))
try:
region, ec2_url, boto_params = get_aws_connection_info(module)
vpc = connect_to_aws(boto.vpc, region, **boto_params)
except boto.exception.NoAuthHandlerFound, e:
module.fail_json(msg = str(e))
if not region:
module.fail_json(msg="region must be specified")
copy_image(module, ec2)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()

@ -0,0 +1,404 @@
#!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_eni
short_description: Create and optionally attach an Elastic Network Interface (ENI) to an instance
description:
- Create and optionally attach an Elastic Network Interface (ENI) to an instance. If an ENI ID is provided, an attempt is made to update the existing ENI. By passing 'None' as the instance_id, an ENI can be detached from an instance.
version_added: "2.0"
author: Rob White, wimnat [at] gmail.com, @wimnat
options:
eni_id:
description:
- The ID of the ENI
required: false
default: null
instance_id:
description:
- Instance ID that you wish to attach ENI to. To detach an ENI from an instance, use 'None'.
required: false
default: null
private_ip_address:
description:
- Private IP address.
required: false
default: null
subnet_id:
description:
- ID of subnet in which to create the ENI. Only required when state=present.
required: true
description:
description:
- Optional description of the ENI.
required: false
default: null
security_groups:
description:
- List of security groups associated with the interface. Only used when state=present.
required: false
default: null
state:
description:
- Create or delete ENI.
required: false
default: present
choices: [ 'present', 'absent' ]
device_index:
description:
- The index of the device for the network interface attachment on the instance.
required: false
default: 0
force_detach:
description:
- Force detachment of the interface. This applies either when explicitly detaching the interface by setting instance_id to None or when deleting an interface with state=absent.
required: false
default: no
delete_on_termination:
description:
- Delete the interface when the instance it is attached to is terminated. You can only specify this flag when the interface is being modified, not on creation.
required: false
source_dest_check:
description:
- By default, interfaces perform source/destination checks. NAT instances however need this check to be disabled. You can only specify this flag when the interface is being modified, not on creation.
required: false
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Create an ENI. As no security group is defined, ENI will be created in default security group
- ec2_eni:
private_ip_address: 172.31.0.20
subnet_id: subnet-xxxxxxxx
state: present
# Create an ENI and attach it to an instance
- ec2_eni:
instance_id: i-xxxxxxx
device_index: 1
private_ip_address: 172.31.0.20
subnet_id: subnet-xxxxxxxx
state: present
# Destroy an ENI, detaching it from any instance if necessary
- ec2_eni:
eni_id: eni-xxxxxxx
force_detach: yes
state: absent
# Update an ENI
- ec2_eni:
eni_id: eni-xxxxxxx
description: "My new description"
state: present
# Detach an ENI from an instance
- ec2_eni:
eni_id: eni-xxxxxxx
instance_id: None
state: present
### Delete an interface on termination
# First create the interface
- ec2_eni:
instance_id: i-xxxxxxx
device_index: 1
private_ip_address: 172.31.0.20
subnet_id: subnet-xxxxxxxx
state: present
register: eni
# Modify the interface to enable the delete_on_termination flag
- ec2_eni:
eni_id: "{{ eni.interface.id }}"
delete_on_termination: true
'''
import time
import xml.etree.ElementTree as ET
import re
try:
import boto.ec2
from boto.exception import BotoServerError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def get_error_message(xml_string):
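# Callers pass in e.args[2] from a BotoServerError, which is the raw XML
# error body returned by the EC2 API; extract its human-readable <Message>.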
root = ET.fromstring(xml_string)
for message in root.findall('.//Message'):
return message.text
def get_eni_info(interface):
interface_info = {'id': interface.id,
'subnet_id': interface.subnet_id,
'vpc_id': interface.vpc_id,
'description': interface.description,
'owner_id': interface.owner_id,
'status': interface.status,
'mac_address': interface.mac_address,
'private_ip_address': interface.private_ip_address,
'source_dest_check': interface.source_dest_check,
'groups': dict((group.id, group.name) for group in interface.groups),
}
if interface.attachment is not None:
interface_info['attachment'] = {'attachment_id': interface.attachment.id,
'instance_id': interface.attachment.instance_id,
'device_index': interface.attachment.device_index,
'status': interface.attachment.status,
'attach_time': interface.attachment.attach_time,
'delete_on_termination': interface.attachment.delete_on_termination,
}
return interface_info
def wait_for_eni(eni, status):
while True:
time.sleep(3)
eni.update()
# If the status is detached we just need attachment to disappear
if eni.attachment is None:
if status == "detached":
break
else:
if status == "attached" and eni.attachment.status == "attached":
break
def create_eni(connection, module):
instance_id = module.params.get("instance_id")
if instance_id == 'None':
instance_id = None
do_detach = True
else:
do_detach = False
device_index = module.params.get("device_index")
subnet_id = module.params.get('subnet_id')
private_ip_address = module.params.get('private_ip_address')
description = module.params.get('description')
security_groups = module.params.get('security_groups')
changed = False
try:
eni = compare_eni(connection, module)
if eni is None:
eni = connection.create_network_interface(subnet_id, private_ip_address, description, security_groups)
if instance_id is not None:
try:
eni.attach(instance_id, device_index)
except BotoServerError as ex:
eni.delete()
raise
changed = True
# Wait to allow creation / attachment to finish
wait_for_eni(eni, "attached")
eni.update()
except BotoServerError as e:
module.fail_json(msg=get_error_message(e.args[2]))
module.exit_json(changed=changed, interface=get_eni_info(eni))
def modify_eni(connection, module):
eni_id = module.params.get("eni_id")
instance_id = module.params.get("instance_id")
if instance_id == 'None':
instance_id = None
do_detach = True
else:
do_detach = False
device_index = module.params.get("device_index")
subnet_id = module.params.get('subnet_id')
private_ip_address = module.params.get('private_ip_address')
description = module.params.get('description')
security_groups = module.params.get('security_groups')
force_detach = module.params.get("force_detach")
source_dest_check = module.params.get("source_dest_check")
delete_on_termination = module.params.get("delete_on_termination")
changed = False
try:
# Get the eni with the eni_id specified
eni_result_set = connection.get_all_network_interfaces(eni_id)
eni = eni_result_set[0]
if description is not None:
if eni.description != description:
connection.modify_network_interface_attribute(eni.id, "description", description)
changed = True
if security_groups is not None:
if sorted(get_sec_group_list(eni.groups)) != sorted(security_groups):
connection.modify_network_interface_attribute(eni.id, "groupSet", security_groups)
changed = True
if source_dest_check is not None:
if eni.source_dest_check != source_dest_check:
connection.modify_network_interface_attribute(eni.id, "sourceDestCheck", source_dest_check)
changed = True
if delete_on_termination is not None:
if eni.attachment is not None:
if eni.attachment.delete_on_termination is not delete_on_termination:
connection.modify_network_interface_attribute(eni.id, "deleteOnTermination", delete_on_termination, eni.attachment.id)
changed = True
else:
module.fail_json(msg="Can not modify delete_on_termination as the interface is not attached")
if eni.attachment is not None and instance_id is None and do_detach is True:
eni.detach(force_detach)
wait_for_eni(eni, "detached")
changed = True
else:
if instance_id is not None:
eni.attach(instance_id, device_index)
wait_for_eni(eni, "attached")
changed = True
except BotoServerError as e:
print e
module.fail_json(msg=get_error_message(e.args[2]))
eni.update()
module.exit_json(changed=changed, interface=get_eni_info(eni))
def delete_eni(connection, module):
eni_id = module.params.get("eni_id")
force_detach = module.params.get("force_detach")
try:
eni_result_set = connection.get_all_network_interfaces(eni_id)
eni = eni_result_set[0]
if force_detach is True:
if eni.attachment is not None:
eni.detach(force_detach)
# Wait to allow detachment to finish
wait_for_eni(eni, "detached")
eni.update()
eni.delete()
changed = True
else:
eni.delete()
changed = True
module.exit_json(changed=changed)
except BotoServerError as e:
msg = get_error_message(e.args[2])
regex = re.compile('The networkInterface ID \'.*\' does not exist')
if regex.search(msg) is not None:
module.exit_json(changed=False)
else:
module.fail_json(msg=get_error_message(e.args[2]))
def compare_eni(connection, module):
eni_id = module.params.get("eni_id")
subnet_id = module.params.get('subnet_id')
private_ip_address = module.params.get('private_ip_address')
description = module.params.get('description')
security_groups = module.params.get('security_groups')
try:
all_eni = connection.get_all_network_interfaces(eni_id)
for eni in all_eni:
remote_security_groups = get_sec_group_list(eni.groups)
if (eni.subnet_id == subnet_id) and (eni.private_ip_address == private_ip_address) and (eni.description == description) and (remote_security_groups == security_groups):
return eni
except BotoServerError as e:
module.fail_json(msg=get_error_message(e.args[2]))
return None
def get_sec_group_list(groups):
# Build list of remote security groups
remote_security_groups = []
for group in groups:
remote_security_groups.append(group.id.encode())
return remote_security_groups
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
eni_id = dict(default=None),
instance_id = dict(default=None),
private_ip_address = dict(),
subnet_id = dict(),
description = dict(),
security_groups = dict(type='list'),
device_index = dict(default=0, type='int'),
state = dict(default='present', choices=['present', 'absent']),
force_detach = dict(default='no', type='bool'),
source_dest_check = dict(default=None, type='bool'),
delete_on_termination = dict(default=None, type='bool')
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.ec2, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
state = module.params.get("state")
eni_id = module.params.get("eni_id")
if state == 'present':
if eni_id is None:
if module.params.get("subnet_id") is None:
module.fail_json(msg="subnet_id must be specified when state=present")
create_eni(connection, module)
else:
modify_eni(connection, module)
elif state == 'absent':
if eni_id is None:
module.fail_json(msg="eni_id must be specified")
else:
delete_eni(connection, module)
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

@ -0,0 +1,135 @@
#!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_eni_facts
short_description: Gather facts about ec2 ENI interfaces in AWS
description:
- Gather facts about ec2 ENI interfaces in AWS
version_added: "2.0"
author: "Rob White (@wimnat)"
options:
eni_id:
description:
- The ID of the ENI. Pass this option to gather facts about a particular ENI, otherwise, all ENIs are returned.
required: false
default: null
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Gather facts about all ENIs
- ec2_eni_facts:
# Gather facts about a particular ENI
- ec2_eni_facts:
eni_id: eni-xxxxxxx
'''
import xml.etree.ElementTree as ET
try:
import boto.ec2
from boto.exception import BotoServerError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def get_error_message(xml_string):
root = ET.fromstring(xml_string)
for message in root.findall('.//Message'):
return message.text
def get_eni_info(interface):
interface_info = {'id': interface.id,
'subnet_id': interface.subnet_id,
'vpc_id': interface.vpc_id,
'description': interface.description,
'owner_id': interface.owner_id,
'status': interface.status,
'mac_address': interface.mac_address,
'private_ip_address': interface.private_ip_address,
'source_dest_check': interface.source_dest_check,
'groups': dict((group.id, group.name) for group in interface.groups),
}
if interface.attachment is not None:
interface_info['attachment'] = {'attachment_id': interface.attachment.id,
'instance_id': interface.attachment.instance_id,
'device_index': interface.attachment.device_index,
'status': interface.attachment.status,
'attach_time': interface.attachment.attach_time,
'delete_on_termination': interface.attachment.delete_on_termination,
}
return interface_info
def list_eni(connection, module):
eni_id = module.params.get("eni_id")
interface_dict_array = []
try:
all_eni = connection.get_all_network_interfaces(eni_id)
except BotoServerError as e:
module.fail_json(msg=get_error_message(e.args[2]))
for interface in all_eni:
interface_dict_array.append(get_eni_info(interface))
module.exit_json(interfaces=interface_dict_array)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
eni_id = dict(default=None)
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.ec2, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
list_eni(connection, module)
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

@ -0,0 +1,159 @@
#!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_vpc_igw
short_description: Manage an AWS VPC Internet gateway
description:
- Manage an AWS VPC Internet gateway
version_added: "2.0"
author: Robert Estelle, @erydo
options:
vpc_id:
description:
- The VPC ID for the VPC in which to manage the Internet Gateway.
required: true
default: null
state:
description:
- Create or terminate the IGW
required: false
default: present
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Ensure that the VPC has an Internet Gateway.
# The Internet Gateway ID can be accessed via {{igw.gateway_id}} for use
# in setting up NATs etc.
local_action:
module: ec2_vpc_igw
vpc_id: "{{ vpc.vpc_id }}"
region: "{{ vpc.vpc.region }}"
state: present
register: igw
'''
import sys # noqa
try:
import boto.ec2
import boto.vpc
from boto.exception import EC2ResponseError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
if __name__ != '__main__':
raise
class AnsibleIGWException(Exception):
pass
def ensure_igw_absent(vpc_conn, vpc_id, check_mode):
igws = vpc_conn.get_all_internet_gateways(
filters={'attachment.vpc-id': vpc_id})
if not igws:
return {'changed': False}
if check_mode:
return {'changed': True}
for igw in igws:
try:
vpc_conn.detach_internet_gateway(igw.id, vpc_id)
vpc_conn.delete_internet_gateway(igw.id)
except EC2ResponseError as e:
raise AnsibleIGWException(
'Unable to delete Internet Gateway, error: {0}'.format(e))
return {'changed': True}
def ensure_igw_present(vpc_conn, vpc_id, check_mode):
igws = vpc_conn.get_all_internet_gateways(
filters={'attachment.vpc-id': vpc_id})
if len(igws) > 1:
raise AnsibleIGWException(
'EC2 returned more than one Internet Gateway for VPC {0}, aborting'
.format(vpc_id))
if igws:
return {'changed': False, 'gateway_id': igws[0].id}
else:
if check_mode:
return {'changed': True, 'gateway_id': None}
try:
igw = vpc_conn.create_internet_gateway()
vpc_conn.attach_internet_gateway(igw.id, vpc_id)
return {'changed': True, 'gateway_id': igw.id}
except EC2ResponseError as e:
raise AnsibleIGWException(
'Unable to create Internet Gateway, error: {0}'.format(e))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
vpc_id = dict(required=True),
state = dict(choices=['present', 'absent'], default='present')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
if not HAS_BOTO:
module.fail_json(msg='boto is required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.ec2, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
vpc_id = module.params.get('vpc_id')
state = module.params.get('state', 'present')
try:
if state == 'present':
result = ensure_igw_present(connection, vpc_id, check_mode=module.check_mode)
elif state == 'absent':
result = ensure_igw_absent(connection, vpc_id, check_mode=module.check_mode)
except AnsibleIGWException as e:
module.fail_json(msg=str(e))
module.exit_json(**result)
from ansible.module_utils.basic import * # noqa
from ansible.module_utils.ec2 import * # noqa
if __name__ == '__main__':
main()

@ -0,0 +1,91 @@
#!/usr/bin/python
DOCUMENTATION = '''
---
module: ec2_win_password
short_description: gets the default administrator password for ec2 windows instances
description:
- Gets the default administrator password from any EC2 Windows instance. The instance is referenced by its id (e.g. i-XXXXXXX). This module has a dependency on python-boto.
version_added: "2.0"
author: "Rick Mendes (@rickmendes)"
options:
instance_id:
description:
- The instance id to get the password data from.
required: true
key_file:
description:
- path to the file containing the key pair used on the instance
required: true
region:
description:
- The AWS region to use. Must be specified if ec2_url is not used. If not specified then the value of the EC2_REGION environment variable, if any, is used.
required: false
default: null
aliases: [ 'aws_region', 'ec2_region' ]
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Example of getting a password
tasks:
- name: get the Administrator password
ec2_win_password:
profile: my-boto-profile
instance_id: i-XXXXXX
region: us-east-1
key_file: "~/aws-creds/my_test_key.pem"
'''
from base64 import b64decode
from os.path import expanduser
from Crypto.Cipher import PKCS1_v1_5
from Crypto.PublicKey import RSA
try:
import boto.ec2
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
instance_id = dict(required=True),
key_file = dict(required=True),
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='Boto required for this module.')
instance_id = module.params.get('instance_id')
key_file = expanduser(module.params.get('key_file'))
ec2 = ec2_connect(module)
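# get_password_data returns the Administrator password encrypted with the
# public key of the instance's key pair, base64 encoded; it is decoded and
# then decrypted with the matching private key (RSA PKCS#1 v1.5) below.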
data = ec2.get_password_data(instance_id)
decoded = b64decode(data)
f = open(key_file, 'r')
key = RSA.importKey(f.read())
cipher = PKCS1_v1_5.new(key)
sentinel = 'password decryption failed!!!'
try:
decrypted = cipher.decrypt(decoded, sentinel)
except ValueError as e:
decrypted = None
if decrypted == None:
module.exit_json(win_password='', changed=False)
else:
module.exit_json(win_password=decrypted, changed=True)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()

@ -0,0 +1,410 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_account
short_description: Manages account on Apache CloudStack based clouds.
description:
- Create, disable, lock, enable and remove accounts.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of account.
required: true
username:
description:
- Username of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
password:
description:
- Password of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
first_name:
description:
- First name of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
last_name:
description:
- Last name of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
email:
description:
- Email of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
timezone:
description:
- Timezone of the user to be created if account did not exist.
required: false
default: null
network_domain:
description:
- Network domain of the account.
required: false
default: null
account_type:
description:
- Type of the account.
required: false
default: 'user'
choices: [ 'user', 'root_admin', 'domain_admin' ]
domain:
description:
- Domain the account is related to.
required: false
default: 'ROOT'
state:
description:
- State of the account.
required: false
default: 'present'
choices: [ 'present', 'absent', 'enabled', 'disabled', 'locked' ]
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# create an account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
username: customer_xy
password: S3Cur3
last_name: Doe
first_name: John
email: john.doe@example.com
domain: CUSTOMERS
# Lock an existing account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: locked
# Disable an existing account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: disabled
# Enable an existing account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: enabled
# Remove an account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: absent
'''
RETURN = '''
---
name:
description: Name of the account.
returned: success
type: string
sample: linus@example.com
account_type:
description: Type of the account.
returned: success
type: string
sample: user
account_state:
description: State of the account.
returned: success
type: string
sample: enabled
network_domain:
description: Network domain of the account.
returned: success
type: string
sample: example.local
domain:
description: Domain the account is related.
returned: success
type: string
sample: ROOT
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackAccount(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.account = None
self.account_types = {
'user': 0,
'root_admin': 1,
'domain_admin': 2,
}
def get_account_type(self):
account_type = self.module.params.get('account_type')
return self.account_types[account_type]
def get_account(self):
if not self.account:
args = {}
args['listall'] = True
args['domainid'] = self.get_domain('id')
accounts = self.cs.listAccounts(**args)
if accounts:
account_name = self.module.params.get('name')
for a in accounts['account']:
if account_name in [ a['name'] ]:
self.account = a
break
return self.account
def enable_account(self):
account = self.get_account()
if not account:
self.module.fail_json(msg="Failed: account not present")
if account['state'].lower() != 'enabled':
self.result['changed'] = True
args = {}
args['id'] = account['id']
args['account'] = self.module.params.get('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
res = self.cs.enableAccount(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
account = res['account']
return account
def lock_account(self):
return self.lock_or_disable_account(lock=True)
def disable_account(self):
return self.lock_or_disable_account()
def lock_or_disable_account(self, lock=False):
account = self.get_account()
if not account:
self.module.fail_json(msg="Failed: account not present")
# we need to enable the account to lock it.
if lock and account['state'].lower() == 'disabled':
account = self.enable_account()
if lock and account['state'].lower() != 'locked' \
or not lock and account['state'].lower() != 'disabled':
self.result['changed'] = True
args = {}
args['id'] = account['id']
args['account'] = self.module.params.get('name')
args['domainid'] = self.get_domain('id')
args['lock'] = lock
if not self.module.check_mode:
account = self.cs.disableAccount(**args)
if 'errortext' in account:
self.module.fail_json(msg="Failed: '%s'" % account['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
account = self._poll_job(account, 'account')
return account
def present_account(self):
missing_params = []
if not self.module.params.get('email'):
missing_params.append('email')
if not self.module.params.get('username'):
missing_params.append('username')
if not self.module.params.get('password'):
missing_params.append('password')
if not self.module.params.get('first_name'):
missing_params.append('first_name')
if not self.module.params.get('last_name'):
missing_params.append('last_name')
if missing_params:
self.module.fail_json(msg="missing required arguments: %s" % ','.join(missing_params))
account = self.get_account()
if not account:
self.result['changed'] = True
args = {}
args['account'] = self.module.params.get('name')
args['domainid'] = self.get_domain('id')
args['accounttype'] = self.get_account_type()
args['networkdomain'] = self.module.params.get('network_domain')
args['username'] = self.module.params.get('username')
args['password'] = self.module.params.get('password')
args['firstname'] = self.module.params.get('first_name')
args['lastname'] = self.module.params.get('last_name')
args['email'] = self.module.params.get('email')
args['timezone'] = self.module.params.get('timezone')
if not self.module.check_mode:
res = self.cs.createAccount(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
account = res['account']
return account
def absent_account(self):
account = self.get_account()
if account:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.deleteAccount(id=account['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'account')
return account
def get_result(self, account):
if account:
if 'name' in account:
self.result['name'] = account['name']
if 'accounttype' in account:
for key,value in self.account_types.items():
if value == account['accounttype']:
self.result['account_type'] = key
break
if 'state' in account:
self.result['account_state'] = account['state']
if 'domain' in account:
self.result['domain'] = account['domain']
if 'networkdomain' in account:
self.result['network_domain'] = account['networkdomain']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(choices=['present', 'absent', 'enabled', 'disabled', 'locked' ], default='present'),
account_type = dict(choices=['user', 'root_admin', 'domain_admin'], default='user'),
network_domain = dict(default=None),
domain = dict(default='ROOT'),
email = dict(default=None),
first_name = dict(default=None),
last_name = dict(default=None),
username = dict(default=None),
password = dict(default=None),
timezone = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_acc = AnsibleCloudStackAccount(module)
state = module.params.get('state')
if state in ['absent']:
account = acs_acc.absent_account()
elif state in ['enabled']:
account = acs_acc.enable_account()
elif state in ['disabled']:
account = acs_acc.disable_account()
elif state in ['locked']:
account = acs_acc.lock_account()
else:
account = acs_acc.present_account()
result = acs_acc.get_result(account)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -25,7 +25,7 @@ short_description: Manages affinity groups on Apache CloudStack based clouds.
description:
- Create and remove affinity groups.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
name:
description:
@ -47,6 +47,16 @@ options:
required: false
default: 'present'
choices: [ 'present', 'absent' ]
domain:
description:
- Domain the affinity group is related to.
required: false
default: null
account:
description:
- Account the affinity group is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
@ -56,14 +66,12 @@ extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# Create an affinity group
- local_action:
module: cs_affinitygroup
name: haproxy
affinty_type: host anti-affinity
# Remove an affinity group
- local_action:
module: cs_affinitygroup
@ -104,20 +112,21 @@ class AnsibleCloudStackAffinityGroup(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
self.affinity_group = None
def get_affinity_group(self):
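# Look up the affinity group by name or id; the result is cached on the instance for later calls.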
if not self.affinity_group:
affinity_group_name = self.module.params.get('name')
affinity_group = self.module.params.get('name')
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
affinity_groups = self.cs.listAffinityGroups()
affinity_groups = self.cs.listAffinityGroups(**args)
if affinity_groups:
for a in affinity_groups['affinitygroup']:
if a['name'] == affinity_group_name:
if affinity_group in [ a['name'], a['id'] ]:
self.affinity_group = a
break
return self.affinity_group
@ -142,10 +151,12 @@ class AnsibleCloudStackAffinityGroup(AnsibleCloudStack):
if not affinity_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['type'] = self.get_affinity_type()
args = {}
args['name'] = self.module.params.get('name')
args['type'] = self.get_affinity_type()
args['description'] = self.module.params.get('description')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
res = self.cs.createAffinityGroup(**args)
@ -156,7 +167,6 @@ class AnsibleCloudStackAffinityGroup(AnsibleCloudStack):
poll_async = self.module.params.get('poll_async')
if res and poll_async:
affinity_group = self._poll_job(res, 'affinitygroup')
return affinity_group
@ -165,8 +175,10 @@ class AnsibleCloudStackAffinityGroup(AnsibleCloudStack):
if affinity_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args = {}
args['name'] = self.module.params.get('name')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
res = self.cs.deleteAffinityGroup(**args)
@ -177,7 +189,6 @@ class AnsibleCloudStackAffinityGroup(AnsibleCloudStack):
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'affinitygroup')
return affinity_group
@ -189,6 +200,10 @@ class AnsibleCloudStackAffinityGroup(AnsibleCloudStack):
self.result['description'] = affinity_group['description']
if 'type' in affinity_group:
self.result['affinity_type'] = affinity_group['type']
if 'domain' in affinity_group:
self.result['domain'] = affinity_group['domain']
if 'account' in affinity_group:
self.result['account'] = affinity_group['account']
return self.result
@ -199,11 +214,17 @@ def main():
affinty_type = dict(default=None),
description = dict(default=None),
state = dict(choices=['present', 'absent'], default='present'),
domain = dict(default=None),
account = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
@ -225,6 +246,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -0,0 +1,221 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_facts
short_description: Gather facts on instances of Apache CloudStack based clouds.
description:
- This module fetches data from the metadata API in CloudStack. The module must be called from within the instance itself.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
filter:
description:
- Filter for a specific fact.
required: false
default: null
choices:
- cloudstack_service_offering
- cloudstack_availability_zone
- cloudstack_public_hostname
- cloudstack_public_ipv4
- cloudstack_local_hostname
- cloudstack_local_ipv4
- cloudstack_instance_id
- cloudstack_user_data
requirements: [ 'yaml' ]
'''
EXAMPLES = '''
# Gather all facts on instances
- name: Gather cloudstack facts
cs_facts:
# Gather specific fact on instances
- name: Gather cloudstack facts
cs_facts: filter=cloudstack_instance_id
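# Example only: use a gathered fact in a later task (assumes cs_facts ran on the instance first)
- name: Show the instance id
  debug: msg='This instance has the id {{ cloudstack_instance_id }}'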
'''
RETURN = '''
---
cloudstack_availability_zone:
description: zone the instance is deployed in.
returned: success
type: string
sample: ch-gva-2
cloudstack_instance_id:
description: UUID of the instance.
returned: success
type: string
sample: ab4e80b0-3e7e-4936-bdc5-e334ba5b0139
cloudstack_local_hostname:
description: local hostname of the instance.
returned: success
type: string
sample: VM-ab4e80b0-3e7e-4936-bdc5-e334ba5b0139
cloudstack_local_ipv4:
description: local IPv4 of the instance.
returned: success
type: string
sample: 185.19.28.35
cloudstack_public_hostname:
description: public hostname of the instance.
returned: success
type: string
sample: VM-ab4e80b0-3e7e-4936-bdc5-e334ba5b0139
cloudstack_public_ipv4:
description: public IPv4 of the instance.
returned: success
type: string
sample: 185.19.28.35
cloudstack_service_offering:
description: service offering of the instance.
returned: success
type: string
sample: Micro 512mb 1cpu
cloudstack_user_data:
description: data of the instance provided by users.
returned: success
type: dict
sample: { "bla": "foo" }
'''
import os
try:
import yaml
has_lib_yaml = True
except ImportError:
has_lib_yaml = False
CS_METADATA_BASE_URL = "http://%s/latest/meta-data"
CS_USERDATA_BASE_URL = "http://%s/latest/user-data"
class CloudStackFacts(object):
def __init__(self):
self.facts = ansible_facts(module)
self.api_ip = None
self.fact_paths = {
'cloudstack_service_offering': 'service-offering',
'cloudstack_availability_zone': 'availability-zone',
'cloudstack_public_hostname': 'public-hostname',
'cloudstack_public_ipv4': 'public-ipv4',
'cloudstack_local_hostname': 'local-hostname',
'cloudstack_local_ipv4': 'local-ipv4',
'cloudstack_instance_id': 'instance-id'
}
def run(self):
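# Without a filter, gather every metadata fact plus user data; with a filter, fetch only the requested fact.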
result = {}
filter = module.params.get('filter')
if not filter:
for key,path in self.fact_paths.iteritems():
result[key] = self._fetch(CS_METADATA_BASE_URL + "/" + path)
result['cloudstack_user_data'] = self._get_user_data_json()
else:
if filter == 'cloudstack_user_data':
result['cloudstack_user_data'] = self._get_user_data_json()
elif filter in self.fact_paths:
result[filter] = self._fetch(CS_METADATA_BASE_URL + "/" + self.fact_paths[filter])
return result
def _get_user_data_json(self):
try:
# this data comes from users, we try what we can to parse it...
return yaml.load(self._fetch(CS_USERDATA_BASE_URL))
except:
return None
def _fetch(self, path):
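# Resolve the metadata server IP, format it into the URL template and return the raw response body (None on failure).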
api_ip = self._get_api_ip()
if not api_ip:
return None
api_url = path % api_ip
(response, info) = fetch_url(module, api_url, force=True)
if response:
data = response.read()
else:
data = None
return data
def _get_dhcp_lease_file(self):
"""Return the path of the lease file."""
default_iface = self.facts['default_ipv4']['interface']
dhcp_lease_file_locations = [
'/var/lib/dhcp/dhclient.%s.leases' % default_iface, # debian / ubuntu
'/var/lib/dhclient/dhclient-%s.leases' % default_iface, # centos 6
'/var/lib/dhclient/dhclient--%s.lease' % default_iface, # centos 7
'/var/db/dhclient.leases.%s' % default_iface, # openbsd
]
for file_path in dhcp_lease_file_locations:
if os.path.exists(file_path):
return file_path
module.fail_json(msg="Could not find dhclient leases file.")
def _get_api_ip(self):
"""Return the IP of the DHCP server."""
if not self.api_ip:
dhcp_lease_file = self._get_dhcp_lease_file()
for line in open(dhcp_lease_file):
if 'dhcp-server-identifier' in line:
# get IP of string "option dhcp-server-identifier 185.19.28.176;"
line = line.translate(None, ';')
self.api_ip = line.split()[2]
break
if not self.api_ip:
module.fail_json(msg="No dhcp-server-identifier found in leases file.")
return self.api_ip
def main():
global module
module = AnsibleModule(
argument_spec = dict(
filter = dict(default=None, choices=[
'cloudstack_service_offering',
'cloudstack_availability_zone',
'cloudstack_public_hostname',
'cloudstack_public_ipv4',
'cloudstack_local_hostname',
'cloudstack_local_ipv4',
'cloudstack_instance_id',
'cloudstack_user_data',
]),
),
supports_check_mode=False
)
if not has_lib_yaml:
module.fail_json(msg="missing python library: yaml")
cs_facts = CloudStackFacts().run()
cs_facts_result = dict(changed=False, ansible_facts=cs_facts)
module.exit_json(**cs_facts_result)
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
from ansible.module_utils.facts import *
main()

@ -19,29 +19,45 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_firewall
short_description: Manages firewall rules on Apache CloudStack based clouds.
description:
- Creates and removes firewall rules.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
ip_address:
description:
- Public IP address the rule is assigned to.
required: true
- Public IP address the ingress rule is assigned to.
- Required if C(type=ingress).
required: false
default: null
network:
description:
- Network the egress rule is related to.
- Required if C(type=egress).
required: false
default: null
state:
description:
- State of the firewall rule.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
type:
description:
- Type of the firewall rule.
required: false
default: 'ingress'
choices: [ 'ingress', 'egress' ]
protocol:
description:
- Protocol of the firewall rule.
- C(all) is only available if C(type=egress)
required: false
default: 'tcp'
choices: [ 'tcp', 'udp', 'icmp' ]
choices: [ 'tcp', 'udp', 'icmp', 'all' ]
cidr:
description:
- CIDR (full notation) to be used for firewall rule.
@ -52,9 +68,10 @@ options:
- Start port for this rule. Considered if C(protocol=tcp) or C(protocol=udp).
required: false
default: null
aliases: [ 'port' ]
end_port:
description:
- End port for this rule. Considered if C(protocol=tcp) or C(protocol=udp).
- End port for this rule. Considered if C(protocol=tcp) or C(protocol=udp). If not specified, equals C(start_port).
required: false
default: null
icmp_type:
@ -67,37 +84,47 @@ options:
- Error code for this icmp message. Considered if C(protocol=icmp).
required: false
default: null
domain:
description:
- Domain the firewall rule is related to.
required: false
default: null
account:
description:
- Account the firewall rule is related to.
required: false
default: null
project:
description:
- Name of the project.
- Name of the project the firewall rule is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# Allow inbound port 80/tcp from 1.2.3.4 to 4.3.2.1
- local_action:
module: cs_firewall
ip_address: 4.3.2.1
start_port: 80
end_port: 80
port: 80
cidr: 1.2.3.4/32
# Allow inbound tcp/udp port 53 to 4.3.2.1
- local_action:
module: cs_firewall
ip_address: 4.3.2.1
start_port: 53
end_port: 53
port: 53
protocol: '{{ item }}'
with_items:
- tcp
- udp
# Ensure firewall rule is removed
- local_action:
module: cs_firewall
@ -106,6 +133,70 @@ EXAMPLES = '''
end_port: 8888
cidr: 17.0.0.0/8
state: absent
# Allow all outbound traffic
- local_action:
module: cs_firewall
network: my_network
type: egress
protocol: all
# Allow only HTTP outbound traffic for an IP
- local_action:
module: cs_firewall
network: my_network
type: egress
port: 80
cidr: 10.101.1.20
'''
RETURN = '''
---
ip_address:
description: IP address of the rule if C(type=ingress)
returned: success
type: string
sample: 10.100.212.10
type:
description: Type of the rule.
returned: success
type: string
sample: ingress
cidr:
description: CIDR of the rule.
returned: success
type: string
sample: 0.0.0.0/0
protocol:
description: Protocol of the rule.
returned: success
type: string
sample: tcp
start_port:
description: Start port of the rule.
returned: success
type: int
sample: 80
end_port:
description: End port of the rule.
returned: success
type: int
sample: 80
icmp_code:
description: ICMP code of the rule.
returned: success
type: int
sample: 1
icmp_type:
description: ICMP type of the rule.
returned: success
type: int
sample: 1
network:
description: Name of the network if C(type=egress)
returned: success
type: string
sample: my_network
'''
try:
@ -122,38 +213,57 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
self.firewall_rule = None
def get_end_port(self):
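# Fall back to start_port if end_port is not given, so single-port rules only need 'port'/'start_port'.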
if self.module.params.get('end_port'):
return self.module.params.get('end_port')
return self.module.params.get('start_port')
def get_firewall_rule(self):
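# Find an existing rule by matching the CIDR together with either the tcp/udp port range, the icmp type/code, or the egress 'all' protocol.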
if not self.firewall_rule:
cidr = self.module.params.get('cidr')
protocol = self.module.params.get('protocol')
start_port = self.module.params.get('start_port')
end_port = self.module.params.get('end_port')
icmp_code = self.module.params.get('icmp_code')
icmp_type = self.module.params.get('icmp_type')
cidr = self.module.params.get('cidr')
protocol = self.module.params.get('protocol')
start_port = self.module.params.get('start_port')
end_port = self.get_end_port()
icmp_code = self.module.params.get('icmp_code')
icmp_type = self.module.params.get('icmp_type')
fw_type = self.module.params.get('type')
if protocol in ['tcp', 'udp'] and not (start_port and end_port):
self.module.fail_json(msg="no start_port or end_port set for protocol '%s'" % protocol)
self.module.fail_json(msg="missing required argument for protocol '%s': start_port or end_port" % protocol)
if protocol == 'icmp' and not icmp_type:
self.module.fail_json(msg="no icmp_type set")
self.module.fail_json(msg="missing required argument for protocol 'icmp': icmp_type")
if protocol == 'all' and fw_type != 'egress':
self.module.fail_json(msg="protocol 'all' could only be used for type 'egress'" )
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
if fw_type == 'egress':
args['networkid'] = self.get_network(key='id')
if not args['networkid']:
self.module.fail_json(msg="missing required argument for type egress: network")
firewall_rules = self.cs.listEgressFirewallRules(**args)
else:
args['ipaddressid'] = self.get_ip_address('id')
if not args['ipaddressid']:
self.module.fail_json(msg="missing required argument for type ingress: ip_address")
firewall_rules = self.cs.listFirewallRules(**args)
args = {}
args['ipaddressid'] = self.get_ip_address_id()
args['projectid'] = self.get_project_id()
firewall_rules = self.cs.listFirewallRules(**args)
if firewall_rules and 'firewallrule' in firewall_rules:
for rule in firewall_rules['firewallrule']:
type_match = self._type_cidr_match(rule, cidr)
protocol_match = self._tcp_udp_match(rule, protocol, start_port, end_port) \
or self._icmp_match(rule, protocol, icmp_code, icmp_type)
or self._icmp_match(rule, protocol, icmp_code, icmp_type) \
or self._egress_all_match(rule, protocol, fw_type)
if type_match and protocol_match:
self.firewall_rule = rule
@ -168,6 +278,12 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
and end_port == int(rule['endport'])
def _egress_all_match(self, rule, protocol, fw_type):
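# Protocol 'all' exists only for egress rules, so matching protocol and rule type is enough here.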
return protocol in ['all'] \
and protocol == rule['protocol'] \
and fw_type == 'egress'
def _icmp_match(self, rule, protocol, icmp_code, icmp_type):
return protocol == 'icmp' \
and protocol == rule['protocol'] \
@ -179,22 +295,58 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
return cidr == rule['cidrlist']
def get_network(self, key=None, network=None):
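# Resolve a network by display text, name or id within the given account, domain, project and zone.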
if not network:
network = self.module.params.get('network')
if not network:
return None
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
networks = self.cs.listNetworks(**args)
if not networks:
self.module.fail_json(msg="No networks available")
for n in networks['network']:
if network in [ n['displaytext'], n['name'], n['id'] ]:
return self._get_by_key(key, n)
self.module.fail_json(msg="Network '%s' not found" % network)
def create_firewall_rule(self):
firewall_rule = self.get_firewall_rule()
if not firewall_rule:
self.result['changed'] = True
args = {}
args['cidrlist'] = self.module.params.get('cidr')
args['protocol'] = self.module.params.get('protocol')
args['startport'] = self.module.params.get('start_port')
args['endport'] = self.module.params.get('end_port')
args['icmptype'] = self.module.params.get('icmp_type')
args['icmpcode'] = self.module.params.get('icmp_code')
args['ipaddressid'] = self.get_ip_address_id()
if not self.module.check_mode:
firewall_rule = self.cs.createFirewallRule(**args)
args = {}
args['cidrlist'] = self.module.params.get('cidr')
args['protocol'] = self.module.params.get('protocol')
args['startport'] = self.module.params.get('start_port')
args['endport'] = self.get_end_port()
args['icmptype'] = self.module.params.get('icmp_type')
args['icmpcode'] = self.module.params.get('icmp_code')
fw_type = self.module.params.get('type')
if not self.module.check_mode:
if fw_type == 'egress':
args['networkid'] = self.get_network(key='id')
res = self.cs.createEgressFirewallRule(**args)
else:
args['ipaddressid'] = self.get_ip_address('id')
res = self.cs.createFirewallRule(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
firewall_rule = self._poll_job(res, 'firewallrule')
return firewall_rule
@ -202,42 +354,82 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
firewall_rule = self.get_firewall_rule()
if firewall_rule:
self.result['changed'] = True
args = {}
args = {}
args['id'] = firewall_rule['id']
fw_type = self.module.params.get('type')
if not self.module.check_mode:
res = self.cs.deleteFirewallRule(**args)
if fw_type == 'egress':
res = self.cs.deleteEgressFirewallRule(**args)
else:
res = self.cs.deleteFirewallRule(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'firewallrule')
return firewall_rule
def get_result(self, firewall_rule):
if firewall_rule:
self.result['type'] = self.module.params.get('type')
if 'cidrlist' in firewall_rule:
self.result['cidr'] = firewall_rule['cidrlist']
if 'startport' in firewall_rule:
self.result['start_port'] = int(firewall_rule['startport'])
if 'endport' in firewall_rule:
self.result['end_port'] = int(firewall_rule['endport'])
if 'protocol' in firewall_rule:
self.result['protocol'] = firewall_rule['protocol']
if 'ipaddress' in firewall_rule:
self.result['ip_address'] = firewall_rule['ipaddress']
if 'icmpcode' in firewall_rule:
self.result['icmp_code'] = int(firewall_rule['icmpcode'])
if 'icmptype' in firewall_rule:
self.result['icmp_type'] = int(firewall_rule['icmptype'])
if 'networkid' in firewall_rule:
self.result['network'] = self.get_network(key='displaytext', network=firewall_rule['networkid'])
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
ip_address = dict(required=True, default=None),
ip_address = dict(default=None),
network = dict(default=None),
cidr = dict(default='0.0.0.0/0'),
protocol = dict(choices=['tcp', 'udp', 'icmp'], default='tcp'),
protocol = dict(choices=['tcp', 'udp', 'icmp', 'all'], default='tcp'),
type = dict(choices=['ingress', 'egress'], default='ingress'),
icmp_type = dict(type='int', default=None),
icmp_code = dict(type='int', default=None),
start_port = dict(type='int', default=None),
start_port = dict(type='int', aliases=['port'], default=None),
end_port = dict(type='int', default=None),
state = dict(choices=['present', 'absent'], default='present'),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_one_of = (
['ip_address', 'network'],
),
required_together = (
['start_port', 'end_port'],
['icmp_type', 'icmp_code'],
['api_key', 'api_secret', 'api_url'],
),
mutually_exclusive = (
['icmp_type', 'start_port'],
['icmp_type', 'end_port'],
['ip_address', 'network'],
),
supports_check_mode=True
)
@ -259,6 +451,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -23,9 +23,9 @@ DOCUMENTATION = '''
module: cs_instance
short_description: Manages instances and virtual machines on Apache CloudStack based clouds.
description:
- Deploy, start, restart, stop and destroy instances on Apache CloudStack, Citrix CloudPlatform and Exoscale.
- Deploy, start, restart, stop and destroy instances.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
name:
description:
@ -49,22 +49,29 @@ options:
choices: [ 'deployed', 'started', 'stopped', 'restarted', 'destroyed', 'expunged', 'present', 'absent' ]
service_offering:
description:
- Name or id of the service offering of the new instance. If not set, first found service offering is used.
- Name or id of the service offering of the new instance.
- If not set, first found service offering is used.
required: false
default: null
template:
description:
- Name or id of the template to be used for creating the new instance. Required when using C(state=present). Mutually exclusive with C(ISO) option.
- Name or id of the template to be used for creating the new instance.
- Required when using C(state=present).
- Mutually exclusive with C(ISO) option.
required: false
default: null
iso:
description:
- Name or id of the ISO to be used for creating the new instance. Required when using C(state=present). Mutually exclusive with C(template) option.
- Name or id of the ISO to be used for creating the new instance.
- Required when using C(state=present).
- Mutually exclusive with C(template) option.
required: false
default: null
hypervisor:
description:
- Name the hypervisor to be used for creating the new instance. Relevant when using C(state=present) and option C(ISO) is used. If not set, first found hypervisor will be used.
- Name of the hypervisor to be used for creating the new instance.
- Relevant when using C(state=present), but only considered if not set on ISO/template.
- If not set or found on ISO/template, first found hypervisor will be used.
required: false
default: null
choices: [ 'KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM' ]
@ -82,7 +89,7 @@ options:
aliases: [ 'network' ]
ip_address:
description:
- IPv4 address for default instance's network during creation
- IPv4 address for default instance's network during creation.
required: false
default: null
ip6_address:
@ -106,6 +113,16 @@ options:
required: false
default: []
aliases: [ 'security_group' ]
domain:
description:
- Domain the instance is related to.
required: false
default: null
account:
description:
- Account the instance is related to.
required: false
default: null
project:
description:
- Name of the project the instance to be deployed in.
@ -113,7 +130,8 @@ options:
default: null
zone:
description:
- Name of the zone in which the instance should be deployed. If not set, default zone is used.
- Name of the zone in which the instance should be deployed.
- If not set, default zone is used.
required: false
default: null
ssh_key:
@ -138,7 +156,7 @@ options:
description:
- Force stop/start the instance if required to apply changes, otherwise a running instance will not be changed.
required: false
default: true
default: false
tags:
description:
- List of tags. Tags are a list of dictionaries having keys C(key) and C(value).
@ -154,8 +172,7 @@ extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# Create an instance on CloudStack from an ISO
# Create an instance from an ISO
# NOTE: Names of offerings and ISOs depend on the CloudStack configuration.
- local_action:
module: cs_instance
@ -172,7 +189,6 @@ EXAMPLES = '''
- Sync Integration
- Storage Integration
# For changing a running instance, use the 'force' parameter
- local_action:
module: cs_instance
@ -182,7 +198,6 @@ EXAMPLES = '''
service_offering: 2cpu_2gb
force: yes
# Create or update an instance on Exoscale's public cloud
- local_action:
module: cs_instance
@ -193,19 +208,13 @@ EXAMPLES = '''
tags:
- { key: admin, value: john }
- { key: foo, value: bar }
register: vm
- debug: msg='default ip {{ vm.default_ip }} and is in state {{ vm.state }}'
# Ensure an instance is stopped
- local_action: cs_instance name=web-vm-1 state=stopped
# Ensure an instance is running
- local_action: cs_instance name=web-vm-1 state=started
# Remove an instance
- local_action: cs_instance name=web-vm-1 state=absent
'''
@ -248,10 +257,20 @@ password:
type: string
sample: Ge2oe7Do
ssh_key:
description: Name of ssh key deployed to instance.
description: Name of SSH key deployed to instance.
returned: success
type: string
sample: key@work
domain:
description: Domain the instance is related to.
returned: success
type: string
sample: example domain
account:
description: Account the instance is related to.
returned: success
type: string
sample: example account
project:
description: Name of project the instance is related to.
returned: success
@ -263,7 +282,7 @@ default_ip:
type: string
sample: 10.23.37.42
public_ip:
description: Public IP address with instance via static nat rule.
description: Public IP address associated with the instance via a static NAT rule.
returned: success
type: string
sample: 1.2.3.4
@ -307,6 +326,16 @@ tags:
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
hypervisor:
description: Hypervisor related to this instance.
returned: success
type: string
sample: KVM
instance_name:
description: Internal name of the instance (ROOT admin only).
returned: success
type: string
sample: i-44-3992-VM
'''
import base64
@ -326,6 +355,8 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.instance = None
self.template = None
self.iso = None
def get_service_offering_id(self):
@ -342,7 +373,7 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
self.module.fail_json(msg="Service offering '%s' not found" % service_offering)
def get_template_or_iso_id(self):
def get_template_or_iso(self, key=None):
template = self.module.params.get('template')
iso = self.module.params.get('iso')
@ -352,20 +383,35 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
if template and iso:
self.module.fail_json(msg="Template are ISO are mutually exclusive.")
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
if template:
templates = self.cs.listTemplates(templatefilter='executable')
if self.template:
return self._get_by_key(key, self.template)
args['templatefilter'] = 'executable'
templates = self.cs.listTemplates(**args)
if templates:
for t in templates['template']:
if template in [ t['displaytext'], t['name'], t['id'] ]:
return t['id']
self.template = t
return self._get_by_key(key, self.template)
self.module.fail_json(msg="Template '%s' not found" % template)
elif iso:
isos = self.cs.listIsos()
if self.iso:
return self._get_by_key(key, self.iso)
args['isofilter'] = 'executable'
isos = self.cs.listIsos(**args)
if isos:
for i in isos['iso']:
if iso in [ i['displaytext'], i['name'], i['id'] ]:
return i['id']
self.iso = i
return self._get_by_key(key, self.iso)
self.module.fail_json(msg="ISO '%s' not found" % iso)
@ -375,7 +421,10 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
if not disk_offering:
return None
disk_offerings = self.cs.listDiskOfferings()
args = {}
args['domainid'] = self.get_domain('id')
disk_offerings = self.cs.listDiskOfferings(**args)
if disk_offerings:
for d in disk_offerings['diskoffering']:
if disk_offering in [ d['displaytext'], d['name'], d['id'] ]:
@ -388,9 +437,12 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
if not instance:
instance_name = self.module.params.get('name')
args = {}
args['projectid'] = self.get_project_id()
args['zoneid'] = self.get_zone_id()
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
instances = self.cs.listVirtualMachines(**args)
if instances:
for v in instances['virtualmachine']:
@ -405,9 +457,12 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
if not network_names:
return None
args = {}
args['zoneid'] = self.get_zone_id()
args['projectid'] = self.get_project_id()
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
networks = self.cs.listNetworks(**args)
if not networks:
self.module.fail_json(msg="No networks available")
@ -457,13 +512,14 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
self.result['changed'] = True
args = {}
args['templateid'] = self.get_template_or_iso_id()
args['zoneid'] = self.get_zone_id()
args['templateid'] = self.get_template_or_iso(key='id')
args['zoneid'] = self.get_zone('id')
args['serviceofferingid'] = self.get_service_offering_id()
args['projectid'] = self.get_project_id()
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['diskofferingid'] = self.get_disk_offering_id()
args['networkids'] = self.get_network_ids()
args['hypervisor'] = self.get_hypervisor()
args['userdata'] = self.get_user_data()
args['keyboard'] = self.module.params.get('keyboard')
args['ipaddress'] = self.module.params.get('ip_address')
@ -475,6 +531,10 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
args['securitygroupnames'] = ','.join(self.module.params.get('security_groups'))
args['affinitygroupnames'] = ','.join(self.module.params.get('affinity_groups'))
template_iso = self.get_template_or_iso()
if 'hypervisor' not in template_iso:
args['hypervisor'] = self.get_hypervisor()
instance = None
if not self.module.check_mode:
instance = self.cs.deployVirtualMachine(**args)
@ -498,12 +558,12 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
args_instance_update['group'] = self.module.params.get('group')
args_instance_update['displayname'] = self.get_display_name()
args_instance_update['userdata'] = self.get_user_data()
args_instance_update['ostypeid'] = self.get_os_type_id()
args_instance_update['ostypeid'] = self.get_os_type('id')
args_ssh_key = {}
args_ssh_key['id'] = instance['id']
args_ssh_key['keypair'] = self.module.params.get('ssh_key')
args_ssh_key['projectid'] = self.get_project_id()
args_ssh_key['projectid'] = self.get_project('id')
if self._has_changed(args_service_offering, instance) or \
self._has_changed(args_instance_update, instance) or \
@ -668,8 +728,16 @@ class AnsibleCloudStackInstance(AnsibleCloudStack):
self.result['display_name'] = instance['displayname']
if 'group' in instance:
self.result['group'] = instance['group']
if 'domain' in instance:
self.result['domain'] = instance['domain']
if 'account' in instance:
self.result['account'] = instance['account']
if 'project' in instance:
self.result['project'] = instance['project']
if 'hypervisor' in instance:
self.result['hypervisor'] = instance['hypervisor']
if 'instancename' in instance:
self.result['instance_name'] = instance['instancename']
if 'publicip' in instance:
self.result['public_ip'] = instance['publicip']
if 'passwordenabled' in instance:
@ -729,9 +797,11 @@ def main():
disk_offering = dict(default=None),
disk_size = dict(type='int', default=None),
keyboard = dict(choices=['de', 'de-ch', 'es', 'fi', 'fr', 'fr-be', 'fr-ch', 'is', 'it', 'jp', 'nl-be', 'no', 'pt', 'uk', 'us'], default=None),
hypervisor = dict(default=None),
hypervisor = dict(choices=['KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM'], default=None),
security_groups = dict(type='list', aliases=[ 'security_group' ], default=[]),
affinity_groups = dict(type='list', aliases=[ 'affinity_group' ], default=[]),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
user_data = dict(default=None),
zone = dict(default=None),
@ -740,9 +810,13 @@ def main():
tags = dict(type='list', aliases=[ 'tag' ], default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
@ -781,6 +855,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -0,0 +1,233 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_instancegroup
short_description: Manages instance groups on Apache CloudStack based clouds.
description:
- Create and remove instance groups.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the instance group.
required: true
domain:
description:
- Domain the instance group is related to.
required: false
default: null
account:
description:
- Account the instance group is related to.
required: false
default: null
project:
description:
- Project the instance group is related to.
required: false
default: null
state:
description:
- State of the instance group.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create an instance group
- local_action:
module: cs_instancegroup
name: loadbalancers
# Remove an instance group
- local_action:
module: cs_instancegroup
name: loadbalancers
state: absent
'''
RETURN = '''
---
id:
description: ID of the instance group.
returned: success
type: string
sample: 04589590-ac63-4ffc-93f5-b698b8ac38b6
name:
description: Name of the instance group.
returned: success
type: string
sample: webservers
created:
description: Date when the instance group was created.
returned: success
type: string
sample: 2015-05-03T15:05:51+0200
domain:
description: Domain the instance group is related to.
returned: success
type: string
sample: example domain
account:
description: Account the instance group is related to.
returned: success
type: string
sample: example account
project:
description: Project the instance group is related to.
returned: success
type: string
sample: example project
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackInstanceGroup(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.instance_group = None
def get_instance_group(self):
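# Find an existing instance group by name or id, scoped to account, domain and project.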
if self.instance_group:
return self.instance_group
name = self.module.params.get('name')
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
instance_groups = self.cs.listInstanceGroups(**args)
if instance_groups:
for g in instance_groups['instancegroup']:
if name in [ g['name'], g['id'] ]:
self.instance_group = g
break
return self.instance_group
def present_instance_group(self):
instance_group = self.get_instance_group()
if not instance_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
if not self.module.check_mode:
res = self.cs.createInstanceGroup(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
instance_group = res['instancegroup']
return instance_group
def absent_instance_group(self):
instance_group = self.get_instance_group()
if instance_group:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.deleteInstanceGroup(id=instance_group['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
return instance_group
def get_result(self, instance_group):
if instance_group:
if 'id' in instance_group:
self.result['id'] = instance_group['id']
if 'created' in instance_group:
self.result['created'] = instance_group['created']
if 'name' in instance_group:
self.result['name'] = instance_group['name']
if 'project' in instance_group:
self.result['project'] = instance_group['project']
if 'domain' in instance_group:
self.result['domain'] = instance_group['domain']
if 'account' in instance_group:
self.result['account'] = instance_group['account']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(default='present', choices=['present', 'absent']),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_ig = AnsibleCloudStackInstanceGroup(module)
state = module.params.get('state')
if state in ['absent']:
instance_group = acs_ig.absent_instance_group()
else:
instance_group = acs_ig.present_instance_group()
result = acs_ig.get_result(instance_group)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -25,7 +25,7 @@ short_description: Manages ISOs images on Apache CloudStack based clouds.
description:
- Register and remove ISO images.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
name:
description:
@ -73,6 +73,16 @@ options:
- Register the ISO to be bootable. Only used if C(state) is present.
required: false
default: true
domain:
description:
- Domain the ISO is related to.
required: false
default: null
account:
description:
- Account the ISO is related to.
required: false
default: null
project:
description:
- Name of the project the ISO to be registered in.
@ -99,7 +109,6 @@ extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# Register an ISO if ISO name does not already exist.
- local_action:
module: cs_iso
@ -107,23 +116,20 @@ EXAMPLES = '''
url: http://mirror.switch.ch/ftp/mirror/debian-cd/current/amd64/iso-cd/debian-7.7.0-amd64-netinst.iso
os_type: Debian GNU/Linux 7(64-bit)
# Register an ISO with given name if ISO md5 checksum does not already exist.
- local_action:
module: cs_iso
name: Debian 7 64-bit
url: http://mirror.switch.ch/ftp/mirror/debian-cd/current/amd64/iso-cd/debian-7.7.0-amd64-netinst.iso
os_type:
os_type: Debian GNU/Linux 7(64-bit)
checksum: 0b31bccccb048d20b551f70830bb7ad0
# Remove an ISO by name
- local_action:
module: cs_iso
name: Debian 7 64-bit
state: absent
# Remove an ISO by checksum
- local_action:
module: cs_iso
@ -169,6 +175,21 @@ created:
returned: success
type: string
sample: 2015-03-29T14:57:06+0200
domain:
description: Domain the ISO is related to.
returned: success
type: string
sample: example domain
account:
description: Account the ISO is related to.
returned: success
type: string
sample: example account
project:
description: Project the ISO is related to.
returned: success
type: string
sample: example project
'''
try:
@ -185,20 +206,26 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
self.iso = None
def register_iso(self):
iso = self.get_iso()
if not iso:
args = {}
args['zoneid'] = self.get_zone_id()
args['projectid'] = self.get_project_id()
args['bootable'] = self.module.params.get('bootable')
args['ostypeid'] = self.get_os_type_id()
args = {}
args['zoneid'] = self.get_zone('id')
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['bootable'] = self.module.params.get('bootable')
args['ostypeid'] = self.get_os_type('id')
args['name'] = self.module.params.get('name')
args['displaytext'] = self.module.params.get('name')
args['checksum'] = self.module.params.get('checksum')
args['isdynamicallyscalable'] = self.module.params.get('is_dynamically_scalable')
args['isfeatured'] = self.module.params.get('is_featured')
args['ispublic'] = self.module.params.get('is_public')
if args['bootable'] and not args['ostypeid']:
self.module.fail_json(msg="OS type 'os_type' is requried if 'bootable=true'.")
@ -206,13 +233,6 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
if not args['url']:
self.module.fail_json(msg="URL is requried.")
args['name'] = self.module.params.get('name')
args['displaytext'] = self.module.params.get('name')
args['checksum'] = self.module.params.get('checksum')
args['isdynamicallyscalable'] = self.module.params.get('is_dynamically_scalable')
args['isfeatured'] = self.module.params.get('is_featured')
args['ispublic'] = self.module.params.get('is_public')
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.registerIso(**args)
@ -222,11 +242,14 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
def get_iso(self):
if not self.iso:
args = {}
args['isready'] = self.module.params.get('is_ready')
args['isofilter'] = self.module.params.get('iso_filter')
args['projectid'] = self.get_project_id()
args['zoneid'] = self.get_zone_id()
args = {}
args['isready'] = self.module.params.get('is_ready')
args['isofilter'] = self.module.params.get('iso_filter')
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
# if checksum is set, we only look on that.
checksum = self.module.params.get('checksum')
@ -249,10 +272,12 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
iso = self.get_iso()
if iso:
self.result['changed'] = True
args = {}
args['id'] = iso['id']
args['projectid'] = self.get_project_id()
args['zoneid'] = self.get_zone_id()
args = {}
args['id'] = iso['id']
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
if not self.module.check_mode:
res = self.cs.deleteIso(**args)
return iso
@ -274,17 +299,25 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
self.result['is_ready'] = iso['isready']
if 'created' in iso:
self.result['created'] = iso['created']
if 'project' in iso:
self.result['project'] = iso['project']
if 'domain' in iso:
self.result['domain'] = iso['domain']
if 'account' in iso:
self.result['account'] = iso['account']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True, default=None),
name = dict(required=True),
url = dict(default=None),
os_type = dict(default=None),
zone = dict(default=None),
iso_filter = dict(default='self', choices=[ 'featured', 'self', 'selfexecutable','sharedexecutable','executable', 'community' ]),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
checksum = dict(default=None),
is_ready = dict(choices=BOOLEANS, default=False),
@ -293,9 +326,13 @@ def main():
is_dynamically_scalable = dict(choices=BOOLEANS, default=False),
state = dict(choices=['present', 'absent'], default='present'),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
@ -317,6 +354,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -0,0 +1,637 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_network
short_description: Manages networks on Apache CloudStack based clouds.
description:
- Create, update, restart and delete networks.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name (case sensitive) of the network.
required: true
displaytext:
description:
- Displaytext of the network.
- If not specified, C(name) will be used as displaytext.
required: false
default: null
network_offering:
description:
- Name of the offering for the network.
- Required if C(state=present).
required: false
default: null
start_ip:
description:
- The beginning IPv4 address of the network's IP range.
- Only considered on create.
required: false
default: null
end_ip:
description:
- The ending IPv4 address of the network's IP range.
- If not specified, value of C(start_ip) is used.
- Only considered on create.
required: false
default: null
gateway:
description:
- The gateway of the network.
- Required for shared networks and isolated networks when it belongs to a VPC.
- Only considered on create.
required: false
default: null
netmask:
description:
- The netmask of the network.
- Required for shared networks and isolated networks when it belongs to a VPC.
- Only considered on create.
required: false
default: null
start_ipv6:
description:
- The beginning IPv6 address of the network's IP range.
- Only considered on create.
required: false
default: null
end_ipv6:
description:
- The ending IPv6 address of the network's IP range.
- If not specified, value of C(start_ipv6) is used.
- Only considered on create.
required: false
default: null
cidr_ipv6:
description:
- CIDR of IPv6 network, must be at least /64.
- Only considered on create.
required: false
default: null
gateway_ipv6:
description:
- The gateway of the IPv6 network.
- Required for shared networks.
- Only considered on create.
required: false
default: null
vlan:
description:
- The ID or VID of the network.
required: false
default: null
vpc:
description:
- Name, display text or id of the VPC the network belongs to.
required: false
default: null
isolated_pvlan:
description:
- The isolated private vlan for this network.
required: false
default: null
clean_up:
description:
- Cleanup old network elements.
- Only considered on C(state=restarted).
required: false
default: false
acl_type:
description:
- Access control type.
- Only considered on create.
required: false
default: account
choices: [ 'account', 'domain' ]
network_domain:
description:
- The network domain.
required: false
default: null
state:
description:
- State of the network.
required: false
default: present
choices: [ 'present', 'absent', 'restarted' ]
zone:
description:
- Name of the zone in which the network should be deployed.
- If not set, default zone is used.
required: false
default: null
project:
description:
- Name of the project the network to be deployed in.
required: false
default: null
domain:
description:
- Domain the network is related to.
required: false
default: null
account:
description:
- Account the network is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# create a network
- local_action:
module: cs_network
name: my network
zone: gva-01
network_offering: DefaultIsolatedNetworkOfferingWithSourceNatService
network_domain: example.com
# update a network
- local_action:
module: cs_network
name: my network
displaytext: network of domain example.local
network_domain: example.local
# restart a network with clean up
- local_action:
module: cs_network
name: my network
clean_up: yes
state: restarted
# remove a network
- local_action:
module: cs_network
name: my network
state: absent
'''
RETURN = '''
---
id:
description: ID of the network.
returned: success
type: string
sample: 04589590-ac63-4ffc-93f5-b698b8ac38b6
name:
description: Name of the network.
returned: success
type: string
sample: web project
displaytext:
description: Display text of the network.
returned: success
type: string
sample: web project
dns1:
description: IP address of the 1st nameserver.
returned: success
type: string
sample: 1.2.3.4
dns2:
description: IP address of the 2nd nameserver.
returned: success
type: string
sample: 1.2.3.4
cidr:
description: IPv4 network CIDR.
returned: success
type: string
sample: 10.101.64.0/24
gateway:
description: IPv4 gateway.
returned: success
type: string
sample: 10.101.64.1
netmask:
description: IPv4 netmask.
returned: success
type: string
sample: 255.255.255.0
cidr_ipv6:
description: IPv6 network CIDR.
returned: success
type: string
sample: 2001:db8::/64
gateway_ipv6:
description: IPv6 gateway.
returned: success
type: string
sample: 2001:db8::1
state:
description: State of the network.
returned: success
type: string
sample: Implemented
zone:
description: Name of zone.
returned: success
type: string
sample: ch-gva-2
domain:
description: Domain the network is related to.
returned: success
type: string
sample: ROOT
account:
description: Account the network is related to.
returned: success
type: string
sample: example account
project:
description: Name of project.
returned: success
type: string
sample: Production
tags:
description: List of resource tags associated with the network.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
acl_type:
description: Access type of the network (Domain, Account).
returned: success
type: string
sample: Account
broadcast_domaintype:
description: Broadcast domain type of the network.
returned: success
type: string
sample: Vlan
type:
description: Type of the network.
returned: success
type: string
sample: Isolated
traffic_type:
description: Traffic type of the network.
returned: success
type: string
sample: Guest
state:
description: State of the network (Allocated, Implemented, Setup).
returned: success
type: string
sample: Allocated
is_persistent:
description: Whether the network is persistent or not.
returned: success
type: boolean
sample: false
network_domain:
description: The network domain
returned: success
type: string
sample: example.local
network_offering:
description: The network offering name.
returned: success
type: string
sample: DefaultIsolatedNetworkOfferingWithSourceNatService
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackNetwork(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.network = None
def get_or_fallback(self, key=None, fallback_key=None):
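# Return the value of 'key', falling back to 'fallback_key' if it is unset (e.g. displaytext falls back to name).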
value = self.module.params.get(key)
if not value:
value = self.module.params.get(fallback_key)
return value
def get_vpc(self, key=None):
vpc = self.module.params.get('vpc')
if not vpc:
return None
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['zoneid'] = self.get_zone(key='id')
vpcs = self.cs.listVPCs(**args)
if vpcs:
for v in vpcs['vpc']:
if vpc in [ v['name'], v['displaytext'], v['id'] ]:
return self._get_by_key(key, v)
self.module.fail_json(msg="VPC '%s' not found" % vpc)
def get_network_offering(self, key=None):
network_offering = self.module.params.get('network_offering')
if not network_offering:
self.module.fail_json(msg="missing required arguments: network_offering")
args = {}
args['zoneid'] = self.get_zone(key='id')
network_offerings = self.cs.listNetworkOfferings(**args)
if network_offerings:
for no in network_offerings['networkoffering']:
if network_offering in [ no['name'], no['displaytext'], no['id'] ]:
return self._get_by_key(key, no)
self.module.fail_json(msg="Network offering '%s' not found" % network_offering)
def _get_args(self):
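# Arguments shared by the createNetwork and updateNetwork API calls.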
args = {}
args['name'] = self.module.params.get('name')
args['displaytext'] = self.get_or_fallback('displaytext','name')
args['networkdomain'] = self.module.params.get('network_domain')
args['networkofferingid'] = self.get_network_offering(key='id')
return args
def get_network(self):
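# Look up the network by name, display text or id; the result is cached on first lookup.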
if not self.network:
network = self.module.params.get('name')
args = {}
args['zoneid'] = self.get_zone(key='id')
args['projectid'] = self.get_project(key='id')
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
networks = self.cs.listNetworks(**args)
if networks:
for n in networks['network']:
if network in [ n['name'], n['displaytext'], n['id']]:
self.network = n
break
return self.network
def present_network(self):
network = self.get_network()
if not network:
network = self.create_network(network)
else:
network = self.update_network(network)
return network
def update_network(self, network):
args = self._get_args()
args['id'] = network['id']
if self._has_changed(args, network):
self.result['changed'] = True
if not self.module.check_mode:
network = self.cs.updateNetwork(**args)
if 'errortext' in network:
self.module.fail_json(msg="Failed: '%s'" % network['errortext'])
poll_async = self.module.params.get('poll_async')
if network and poll_async:
network = self._poll_job(network, 'network')
return network
def create_network(self, network):
self.result['changed'] = True
args = self._get_args()
args['acltype'] = self.module.params.get('acl_type')
args['zoneid'] = self.get_zone(key='id')
args['projectid'] = self.get_project(key='id')
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['startip'] = self.module.params.get('start_ip')
args['endip'] = self.get_or_fallback('end_ip', 'start_ip')
args['netmask'] = self.module.params.get('netmask')
args['gateway'] = self.module.params.get('gateway')
args['startipv6'] = self.module.params.get('start_ipv6')
args['endipv6'] = self.get_or_fallback('end_ipv6', 'start_ipv6')
args['ip6cidr'] = self.module.params.get('cidr_ipv6')
args['ip6gateway'] = self.module.params.get('gateway_ipv6')
args['vlan'] = self.module.params.get('vlan')
args['isolatedpvlan'] = self.module.params.get('isolated_pvlan')
args['subdomainaccess'] = self.module.params.get('subdomain_access')
args['vpcid'] = self.get_vpc(key='id')
if not self.module.check_mode:
res = self.cs.createNetwork(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
network = res['network']
return network
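# Restart (and optionally clean up) the network; only allowed in state Implemented or Setup.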
def restart_network(self):
network = self.get_network()
if not network:
self.module.fail_json(msg="No network named '%s' found." % self.module.params('name'))
# Restarting only available for these states
if network['state'].lower() in [ 'implemented', 'setup' ]:
self.result['changed'] = True
args = {}
args['id'] = network['id']
args['cleanup'] = self.module.params.get('clean_up')
if not self.module.check_mode:
network = self.cs.restartNetwork(**args)
if 'errortext' in network:
self.module.fail_json(msg="Failed: '%s'" % network['errortext'])
poll_async = self.module.params.get('poll_async')
if network and poll_async:
network = self._poll_job(network, 'network')
return network
def absent_network(self):
network = self.get_network()
if network:
self.result['changed'] = True
args = {}
args['id'] = network['id']
if not self.module.check_mode:
res = self.cs.deleteNetwork(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'network')
return network
def get_result(self, network):
if network:
if 'id' in network:
self.result['id'] = network['id']
if 'name' in network:
self.result['name'] = network['name']
if 'displaytext' in network:
self.result['displaytext'] = network['displaytext']
if 'dns1' in network:
self.result['dns1'] = network['dns1']
if 'dns2' in network:
self.result['dns2'] = network['dns2']
if 'cidr' in network:
self.result['cidr'] = network['cidr']
if 'broadcastdomaintype' in network:
self.result['broadcast_domaintype'] = network['broadcastdomaintype']
if 'netmask' in network:
self.result['netmask'] = network['netmask']
if 'gateway' in network:
self.result['gateway'] = network['gateway']
if 'ip6cidr' in network:
self.result['cidr_ipv6'] = network['ip6cidr']
if 'ip6gateway' in network:
self.result['gateway_ipv6'] = network['ip6gateway']
if 'state' in network:
self.result['state'] = network['state']
if 'type' in network:
self.result['type'] = network['type']
if 'traffictype' in network:
self.result['traffic_type'] = network['traffictype']
if 'zone' in network:
self.result['zone'] = network['zonename']
if 'domain' in network:
self.result['domain'] = network['domain']
if 'account' in network:
self.result['account'] = network['account']
if 'project' in network:
self.result['project'] = network['project']
if 'acltype' in network:
self.result['acl_type'] = network['acltype']
if 'networkdomain' in network:
self.result['network_domain'] = network['networkdomain']
if 'networkofferingname' in network:
self.result['network_offering'] = network['networkofferingname']
if 'ispersistent' in network:
self.result['is_persistent'] = network['ispersistent']
if 'tags' in network:
self.result['tags'] = []
for tag in network['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
displaytext = dict(default=None),
network_offering = dict(default=None),
zone = dict(default=None),
start_ip = dict(default=None),
end_ip = dict(default=None),
gateway = dict(default=None),
netmask = dict(default=None),
start_ipv6 = dict(default=None),
end_ipv6 = dict(default=None),
cidr_ipv6 = dict(default=None),
gateway_ipv6 = dict(default=None),
vlan = dict(default=None),
vpc = dict(default=None),
isolated_pvlan = dict(default=None),
clean_up = dict(type='bool', choices=BOOLEANS, default=False),
network_domain = dict(default=None),
state = dict(choices=['present', 'absent', 'restarted' ], default='present'),
acl_type = dict(choices=['account', 'domain'], default='account'),
project = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
poll_async = dict(type='bool', choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
['start_ip', 'netmask', 'gateway'],
['start_ipv6', 'cidr_ipv6', 'gateway_ipv6'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_network = AnsibleCloudStackNetwork(module)
state = module.params.get('state')
if state in ['absent']:
network = acs_network.absent_network()
elif state in ['restarted']:
network = acs_network.restart_network()
else:
network = acs_network.present_network()
result = acs_network.get_result(network)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -0,0 +1,437 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_portforward
short_description: Manages port forwarding rules on Apache CloudStack based clouds.
description:
- Create, update and remove port forwarding rules.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
ip_address:
description:
- Public IP address the rule is assigned to.
required: true
vm:
description:
- Name of the virtual machine the port forwarding rule is created for.
- Required if C(state=present).
required: false
default: null
state:
description:
- State of the port forwarding rule.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
protocol:
description:
- Protocol of the port forwarding rule.
required: false
default: 'tcp'
choices: [ 'tcp', 'udp' ]
public_port:
description:
- Start public port for this rule.
required: true
public_end_port:
description:
- End public port for this rule.
- If not specified, equal to C(public_port).
required: false
default: null
private_port:
description:
- Start private port for this rule.
required: true
private_end_port:
description:
- End private port for this rule.
- If not specified, equal to C(private_port).
required: false
default: null
open_firewall:
description:
- Whether the firewall rule for the public port should be created while creating the new rule.
- Use M(cs_firewall) for managing firewall rules.
required: false
default: false
vm_guest_ip:
description:
- VM guest NIC secondary IP address for the port forwarding rule.
required: false
default: null
domain:
description:
- Domain the C(vm) is related to.
required: false
default: null
account:
description:
- Account the C(vm) is related to.
required: false
default: null
project:
description:
- Name of the project the C(vm) is located in.
required: false
default: null
zone:
description:
- Name of the zone the virtual machine is in.
- If not set, default zone is used.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# 1.2.3.4:80 -> web01:8080
- local_action:
module: cs_portforward
ip_address: 1.2.3.4
vm: web01
public_port: 80
private_port: 8080
# forward SSH and open firewall
- local_action:
module: cs_portforward
ip_address: '{{ public_ip }}'
vm: '{{ inventory_hostname }}'
public_port: '{{ ansible_ssh_port }}'
private_port: 22
open_firewall: true
# forward DNS traffic, but do not open firewall
- local_action:
module: cs_portforward
ip_address: 1.2.3.4
vm: '{{ inventory_hostname }}'
public_port: 53
private_port: 53
protocol: udp
open_firewall: false
# remove ssh port forwarding
- local_action:
module: cs_portforward
ip_address: 1.2.3.4
public_port: 22
private_port: 22
state: absent
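# Illustrative addition, not part of the original examples: forward a public
# port range using the documented public_end_port/private_end_port options
- local_action:
module: cs_portforward
ip_address: 1.2.3.4
vm: web01
public_port: 8000
public_end_port: 8010
private_port: 8000
private_end_port: 8010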
'''
RETURN = '''
---
ip_address:
description: Public IP address.
returned: success
type: string
sample: 1.2.3.4
protocol:
description: Protocol.
returned: success
type: string
sample: tcp
private_port:
description: Start port on the virtual machine's IP address.
returned: success
type: int
sample: 80
private_end_port:
description: End port on the virtual machine's IP address.
returned: success
type: int
public_port:
description: Start port on the public IP address.
returned: success
type: int
sample: 80
public_end_port:
description: End port on the public IP address.
returned: success
type: int
sample: 80
tags:
description: Tags related to the port forwarding.
returned: success
type: list
sample: []
vm_name:
description: Name of the virtual machine.
returned: success
type: string
sample: web-01
vm_display_name:
description: Display name of the virtual machine.
returned: success
type: string
sample: web-01
vm_guest_ip:
description: IP of the virtual machine.
returned: success
type: string
sample: 10.101.65.152
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackPortforwarding(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.portforwarding_rule = None
self.vm_default_nic = None
def get_public_end_port(self):
if not self.module.params.get('public_end_port'):
return self.module.params.get('public_port')
return self.module.params.get('public_end_port')
def get_private_end_port(self):
if not self.module.params.get('private_end_port'):
return self.module.params.get('private_port')
return self.module.params.get('private_end_port')
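# Return the guest IP to forward to: the default NIC's primary IP, or a verified secondary IP if vm_guest_ip is set.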
def get_vm_guest_ip(self):
vm_guest_ip = self.module.params.get('vm_guest_ip')
default_nic = self.get_vm_default_nic()
if not vm_guest_ip:
return default_nic['ipaddress']
for secondary_ip in default_nic['secondaryip']:
if vm_guest_ip == secondary_ip['ipaddress']:
return vm_guest_ip
self.module.fail_json(msg="Secondary IP '%s' not assigned to VM" % vm_guest_ip)
def get_vm_default_nic(self):
if self.vm_default_nic:
return self.vm_default_nic
nics = self.cs.listNics(virtualmachineid=self.get_vm(key='id'))
if nics:
for n in nics['nic']:
if n['isdefault']:
self.vm_default_nic = n
return self.vm_default_nic
self.module.fail_json(msg="No default IP address of VM '%s' found" % self.module.params.get('vm'))
def get_portforwarding_rule(self):
if not self.portforwarding_rule:
protocol = self.module.params.get('protocol')
public_port = self.module.params.get('public_port')
public_end_port = self.get_public_end_port()
private_port = self.module.params.get('private_port')
private_end_port = self.get_private_end_port()
args = {}
args['ipaddressid'] = self.get_ip_address(key='id')
args['projectid'] = self.get_project(key='id')
portforwarding_rules = self.cs.listPortForwardingRules(**args)
if portforwarding_rules and 'portforwardingrule' in portforwarding_rules:
for rule in portforwarding_rules['portforwardingrule']:
if protocol == rule['protocol'] \
and public_port == int(rule['publicport']):
self.portforwarding_rule = rule
break
return self.portforwarding_rule
def present_portforwarding_rule(self):
portforwarding_rule = self.get_portforwarding_rule()
if portforwarding_rule:
portforwarding_rule = self.update_portforwarding_rule(portforwarding_rule)
else:
portforwarding_rule = self.create_portforwarding_rule()
return portforwarding_rule
def create_portforwarding_rule(self):
args = {}
args['protocol'] = self.module.params.get('protocol')
args['publicport'] = self.module.params.get('public_port')
args['publicendport'] = self.get_public_end_port()
args['privateport'] = self.module.params.get('private_port')
args['privateendport'] = self.get_private_end_port()
args['openfirewall'] = self.module.params.get('open_firewall')
args['vmguestip'] = self.get_vm_guest_ip()
args['ipaddressid'] = self.get_ip_address(key='id')
args['virtualmachineid'] = self.get_vm(key='id')
portforwarding_rule = None
self.result['changed'] = True
if not self.module.check_mode:
portforwarding_rule = self.cs.createPortForwardingRule(**args)
poll_async = self.module.params.get('poll_async')
if poll_async:
portforwarding_rule = self._poll_job(portforwarding_rule, 'portforwardingrule')
return portforwarding_rule
def update_portforwarding_rule(self, portforwarding_rule):
args = {}
args['protocol'] = self.module.params.get('protocol')
args['publicport'] = self.module.params.get('public_port')
args['publicendport'] = self.get_public_end_port()
args['privateport'] = self.module.params.get('private_port')
args['privateendport'] = self.get_private_end_port()
args['openfirewall'] = self.module.params.get('open_firewall')
args['vmguestip'] = self.get_vm_guest_ip()
args['ipaddressid'] = self.get_ip_address(key='id')
args['virtualmachineid'] = self.get_vm(key='id')
if self._has_changed(args, portforwarding_rule):
self.result['changed'] = True
if not self.module.check_mode:
# API broken in 4.2.1? Workaround: use remove/create instead of update
# portforwarding_rule = self.cs.updatePortForwardingRule(**args)
self.absent_portforwarding_rule()
portforwarding_rule = self.cs.createPortForwardingRule(**args)
poll_async = self.module.params.get('poll_async')
if poll_async:
portforwarding_rule = self._poll_job(portforwarding_rule, 'portforwardingrule')
return portforwarding_rule
def absent_portforwarding_rule(self):
portforwarding_rule = self.get_portforwarding_rule()
if portforwarding_rule:
self.result['changed'] = True
args = {}
args['id'] = portforwarding_rule['id']
if not self.module.check_mode:
res = self.cs.deletePortForwardingRule(**args)
poll_async = self.module.params.get('poll_async')
if poll_async:
self._poll_job(res, 'portforwardingrule')
return portforwarding_rule
def get_result(self, portforwarding_rule):
if portforwarding_rule:
if 'id' in portforwarding_rule:
self.result['id'] = portforwarding_rule['id']
if 'virtualmachinedisplayname' in portforwarding_rule:
self.result['vm_display_name'] = portforwarding_rule['virtualmachinedisplayname']
if 'virtualmachinename' in portforwarding_rule:
self.result['vm_name'] = portforwarding_rule['virtualmachinename']
if 'ipaddress' in portforwarding_rule:
self.result['ip_address'] = portforwarding_rule['ipaddress']
if 'vmguestip' in portforwarding_rule:
self.result['vm_guest_ip'] = portforwarding_rule['vmguestip']
if 'publicport' in portforwarding_rule:
self.result['public_port'] = int(portforwarding_rule['publicport'])
if 'publicendport' in portforwarding_rule:
self.result['public_end_port'] = int(portforwarding_rule['publicendport'])
if 'privateport' in portforwarding_rule:
self.result['private_port'] = int(portforwarding_rule['privateport'])
if 'privateendport' in portforwarding_rule:
self.result['private_end_port'] = int(portforwarding_rule['privateendport'])
if 'protocol' in portforwarding_rule:
self.result['protocol'] = portforwarding_rule['protocol']
if 'tags' in portforwarding_rule:
self.result['tags'] = []
for tag in portforwarding_rule['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
ip_address = dict(required=True),
protocol= dict(choices=['tcp', 'udp'], default='tcp'),
public_port = dict(type='int', required=True),
public_end_port = dict(type='int', default=None),
private_port = dict(type='int', required=True),
private_end_port = dict(type='int', default=None),
state = dict(choices=['present', 'absent'], default='present'),
open_firewall = dict(choices=BOOLEANS, default=False),
vm_guest_ip = dict(default=None),
vm = dict(default=None),
zone = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_pf = AnsibleCloudStackPortforwarding(module)
state = module.params.get('state')
if state in ['absent']:
pf_rule = acs_pf.absent_portforwarding_rule()
else:
pf_rule = acs_pf.present_portforwarding_rule()
result = acs_pf.get_result(pf_rule)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -0,0 +1,342 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_project
short_description: Manages projects on Apache CloudStack based clouds.
description:
- Create, update, suspend, activate and remove projects.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the project.
required: true
displaytext:
description:
- Displaytext of the project.
- If not specified, C(name) will be used as displaytext.
required: false
default: null
state:
description:
- State of the project.
required: false
default: 'present'
choices: [ 'present', 'absent', 'active', 'suspended' ]
domain:
description:
- Domain the project is related to.
required: false
default: null
account:
description:
- Account the project is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create a project
- local_action:
module: cs_project
name: web
# Rename a project
- local_action:
module: cs_project
name: web
displaytext: my web project
# Suspend an existing project
- local_action:
module: cs_project
name: web
state: suspended
# Activate an existing project
- local_action:
module: cs_project
name: web
state: active
# Remove a project
- local_action:
module: cs_project
name: web
state: absent
'''
RETURN = '''
---
id:
description: ID of the project.
returned: success
type: string
sample: 04589590-ac63-4ffc-93f5-b698b8ac38b6
name:
description: Name of the project.
returned: success
type: string
sample: web project
displaytext:
description: Display text of the project.
returned: success
type: string
sample: web project
state:
description: State of the project.
returned: success
type: string
sample: Active
domain:
description: Domain the project is related to.
returned: success
type: string
sample: example domain
account:
description: Account the project is related to.
returned: success
type: string
sample: example account
tags:
description: List of resource tags associated with the project.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackProject(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.project = None
def get_displaytext(self):
displaytext = self.module.params.get('displaytext')
if not displaytext:
displaytext = self.module.params.get('name')
return displaytext
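# Look up and cache the project by name (case-insensitive) or ID within the account and domain.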
def get_project(self):
if not self.project:
project = self.module.params.get('name')
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
projects = self.cs.listProjects(**args)
if projects:
for p in projects['project']:
if project.lower() in [ p['name'].lower(), p['id']]:
self.project = p
break
return self.project
def present_project(self):
project = self.get_project()
if not project:
project = self.create_project(project)
else:
project = self.update_project(project)
return project
def update_project(self, project):
args = {}
args['id'] = project['id']
args['displaytext'] = self.get_displaytext()
if self._has_changed(args, project):
self.result['changed'] = True
if not self.module.check_mode:
project = self.cs.updateProject(**args)
if 'errortext' in project:
self.module.fail_json(msg="Failed: '%s'" % project['errortext'])
poll_async = self.module.params.get('poll_async')
if project and poll_async:
project = self._poll_job(project, 'project')
return project
def create_project(self, project):
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['displaytext'] = self.get_displaytext()
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
project = self.cs.createProject(**args)
if 'errortext' in project:
self.module.fail_json(msg="Failed: '%s'" % project['errortext'])
poll_async = self.module.params.get('poll_async')
if project and poll_async:
project = self._poll_job(project, 'project')
return project
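# Suspend or activate the project if its current state differs from the requested one.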
def state_project(self, state=None):
project = self.get_project()
if not project:
self.module.fail_json(msg="No project named '%s' found." % self.module.params('name'))
if project['state'].lower() != state:
self.result['changed'] = True
args = {}
args['id'] = project['id']
if not self.module.check_mode:
if state == 'suspended':
project = self.cs.suspendProject(**args)
else:
project = self.cs.activateProject(**args)
if 'errortext' in project:
self.module.fail_json(msg="Failed: '%s'" % project['errortext'])
poll_async = self.module.params.get('poll_async')
if project and poll_async:
project = self._poll_job(project, 'project')
return project
def absent_project(self):
project = self.get_project()
if project:
self.result['changed'] = True
args = {}
args['id'] = project['id']
if not self.module.check_mode:
res = self.cs.deleteProject(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'project')
return project
def get_result(self, project):
if project:
if 'name' in project:
self.result['name'] = project['name']
if 'displaytext' in project:
self.result['displaytext'] = project['displaytext']
if 'account' in project:
self.result['account'] = project['account']
if 'domain' in project:
self.result['domain'] = project['domain']
if 'state' in project:
self.result['state'] = project['state']
if 'tags' in project:
self.result['tags'] = []
for tag in project['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
displaytext = dict(default=None),
state = dict(choices=['present', 'absent', 'active', 'suspended' ], default='present'),
domain = dict(default=None),
account = dict(default=None),
poll_async = dict(type='bool', choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_project = AnsibleCloudStackProject(module)
state = module.params.get('state')
if state in ['absent']:
project = acs_project.absent_project()
elif state in ['active', 'suspended']:
project = acs_project.state_project(state=state)
else:
project = acs_project.present_project()
result = acs_project.get_result(project)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -25,7 +25,7 @@ short_description: Manages security groups on Apache CloudStack based clouds.
description:
- Create and remove security groups.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
name:
description:
@ -51,14 +51,12 @@ extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# Create a security group
- local_action:
module: cs_securitygroup
name: default
description: default security group
# Remove a security group
- local_action:
module: cs_securitygroup
@ -94,9 +92,6 @@ class AnsibleCloudStackSecurityGroup(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
self.security_group = None
@ -104,7 +99,7 @@ class AnsibleCloudStackSecurityGroup(AnsibleCloudStack):
if not self.security_group:
sg_name = self.module.params.get('name')
args = {}
args['projectid'] = self.get_project_id()
args['projectid'] = self.get_project('id')
sgs = self.cs.listSecurityGroups(**args)
if sgs:
for s in sgs['securitygroup']:
@ -121,7 +116,7 @@ class AnsibleCloudStackSecurityGroup(AnsibleCloudStack):
args = {}
args['name'] = self.module.params.get('name')
args['projectid'] = self.get_project_id()
args['projectid'] = self.get_project('id')
args['description'] = self.module.params.get('description')
if not self.module.check_mode:
@ -140,7 +135,7 @@ class AnsibleCloudStackSecurityGroup(AnsibleCloudStack):
args = {}
args['name'] = self.module.params.get('name')
args['projectid'] = self.get_project_id()
args['projectid'] = self.get_project('id')
if not self.module.check_mode:
res = self.cs.deleteSecurityGroup(**args)
@ -167,9 +162,13 @@ def main():
state = dict(choices=['present', 'absent'], default='present'),
project = dict(default=None),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
@ -191,6 +190,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -25,7 +25,7 @@ short_description: Manages security group rules on Apache CloudStack based cloud
description:
- Add and remove security group rules.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
security_group:
description:
@ -102,7 +102,6 @@ EXAMPLES = '''
port: 80
cidr: 1.2.3.4/32
# Allow tcp/udp outbound added to security group 'default'
- local_action:
module: cs_securitygroup_rule
@ -115,7 +114,6 @@ EXAMPLES = '''
- tcp
- udp
# Allow inbound icmp from 0.0.0.0/0 added to security group 'default'
- local_action:
module: cs_securitygroup_rule
@ -124,7 +122,6 @@ EXAMPLES = '''
icmp_code: -1
icmp_type: -1
# Remove rule inbound port 80/tcp from 0.0.0.0/0 from security group 'default'
- local_action:
module: cs_securitygroup_rule
@ -132,7 +129,6 @@ EXAMPLES = '''
port: 80
state: absent
# Allow inbound port 80/tcp from security group web added to security group 'default'
- local_action:
module: cs_securitygroup_rule
@ -194,9 +190,6 @@ class AnsibleCloudStackSecurityGroupRule(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
def _tcp_udp_match(self, rule, protocol, start_port, end_port):
@ -229,18 +222,21 @@ class AnsibleCloudStackSecurityGroupRule(AnsibleCloudStack):
and cidr == rule['cidr']
def get_end_port(self):
if self.module.params.get('end_port'):
return self.module.params.get('end_port')
return self.module.params.get('start_port')
def _get_rule(self, rules):
user_security_group_name = self.module.params.get('user_security_group')
cidr = self.module.params.get('cidr')
protocol = self.module.params.get('protocol')
start_port = self.module.params.get('start_port')
end_port = self.module.params.get('end_port')
end_port = self.get_end_port()
icmp_code = self.module.params.get('icmp_code')
icmp_type = self.module.params.get('icmp_type')
if not end_port:
end_port = start_port
if protocol in ['tcp', 'udp'] and not (start_port and end_port):
self.module.fail_json(msg="no start_port or end_port set for protocol '%s'" % protocol)
@ -268,7 +264,7 @@ class AnsibleCloudStackSecurityGroupRule(AnsibleCloudStack):
security_group_name = self.module.params.get('security_group')
args = {}
args['securitygroupname'] = security_group_name
args['projectid'] = self.get_project_id()
args['projectid'] = self.get_project('id')
sgs = self.cs.listSecurityGroups(**args)
if not sgs or 'securitygroup' not in sgs:
self.module.fail_json(msg="security group '%s' not found" % security_group_name)
@ -295,26 +291,23 @@ class AnsibleCloudStackSecurityGroupRule(AnsibleCloudStack):
args['protocol'] = self.module.params.get('protocol')
args['startport'] = self.module.params.get('start_port')
args['endport'] = self.module.params.get('end_port')
args['endport'] = self.get_end_port()
args['icmptype'] = self.module.params.get('icmp_type')
args['icmpcode'] = self.module.params.get('icmp_code')
args['projectid'] = self.get_project_id()
args['projectid'] = self.get_project('id')
args['securitygroupid'] = security_group['id']
if not args['endport']:
args['endport'] = args['startport']
rule = None
res = None
type = self.module.params.get('type')
if type == 'ingress':
sg_type = self.module.params.get('type')
if sg_type == 'ingress':
rule = self._get_rule(security_group['ingressrule'])
if not rule:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.authorizeSecurityGroupIngress(**args)
elif type == 'egress':
elif sg_type == 'egress':
rule = self._get_rule(security_group['egressrule'])
if not rule:
self.result['changed'] = True
@ -327,22 +320,25 @@ class AnsibleCloudStackSecurityGroupRule(AnsibleCloudStack):
poll_async = self.module.params.get('poll_async')
if res and poll_async:
security_group = self._poll_job(res, 'securitygroup')
return security_group
key = sg_type + "rule" # ingressrule / egressrule
if key in security_group:
rule = security_group[key][0]
return rule
def remove_rule(self):
security_group = self.get_security_group()
rule = None
res = None
type = self.module.params.get('type')
if type == 'ingress':
sg_type = self.module.params.get('type')
if sg_type == 'ingress':
rule = self._get_rule(security_group['ingressrule'])
if rule:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.revokeSecurityGroupIngress(id=rule['ruleid'])
elif type == 'egress':
elif sg_type == 'egress':
rule = self._get_rule(security_group['egressrule'])
if rule:
self.result['changed'] = True
@ -355,34 +351,30 @@ class AnsibleCloudStackSecurityGroupRule(AnsibleCloudStack):
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'securitygroup')
return security_group
return rule
def get_result(self, security_group_rule):
type = self.module.params.get('type')
key = 'ingressrule'
if type == 'egress':
key = 'egressrule'
self.result['type'] = type
self.result['type'] = self.module.params.get('type')
self.result['security_group'] = self.module.params.get('security_group')
if key in security_group_rule and security_group_rule[key]:
if 'securitygroupname' in security_group_rule[key][0]:
self.result['user_security_group'] = security_group_rule[key][0]['securitygroupname']
if 'cidr' in security_group_rule[key][0]:
self.result['cidr'] = security_group_rule[key][0]['cidr']
if 'protocol' in security_group_rule[key][0]:
self.result['protocol'] = security_group_rule[key][0]['protocol']
if 'startport' in security_group_rule[key][0]:
self.result['start_port'] = security_group_rule[key][0]['startport']
if 'endport' in security_group_rule[key][0]:
self.result['end_port'] = security_group_rule[key][0]['endport']
if 'icmpcode' in security_group_rule[key][0]:
self.result['icmp_code'] = security_group_rule[key][0]['icmpcode']
if 'icmptype' in security_group_rule[key][0]:
self.result['icmp_type'] = security_group_rule[key][0]['icmptype']
if security_group_rule:
rule = security_group_rule
if 'securitygroupname' in rule:
self.result['user_security_group'] = rule['securitygroupname']
if 'cidr' in rule:
self.result['cidr'] = rule['cidr']
if 'protocol' in rule:
self.result['protocol'] = rule['protocol']
if 'startport' in rule:
self.result['start_port'] = rule['startport']
if 'endport' in rule:
self.result['end_port'] = rule['endport']
if 'icmpcode' in rule:
self.result['icmp_code'] = rule['icmpcode']
if 'icmptype' in rule:
self.result['icmp_type'] = rule['icmptype']
return self.result
@ -402,9 +394,14 @@ def main():
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['icmp_type', 'icmp_code'],
['api_key', 'api_secret', 'api_url'],
),
mutually_exclusive = (
['icmp_type', 'start_port'],
@ -432,6 +429,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -23,15 +23,26 @@ DOCUMENTATION = '''
module: cs_sshkeypair
short_description: Manages SSH keys on Apache CloudStack based clouds.
description:
- Create, register and remove SSH keys.
- If no key was found and no public key was provided, a new SSH
private/public key pair will be created and the private key will be returned.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
name:
description:
- Name of public key.
required: true
domain:
description:
- Domain the public key is related to.
required: false
default: null
account:
description:
- Account the public key is related to.
required: false
default: null
project:
description:
- Name of the project the public key is to be registered in.
@ -52,7 +63,6 @@ extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# create a new private / public key pair:
- local_action: cs_sshkeypair name=linus@example.com
register: key
@ -103,18 +113,16 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
self.ssh_key = None
def register_ssh_key(self, public_key):
ssh_key = self.get_ssh_key()
args = {}
args['projectid'] = self.get_project_id()
args['name'] = self.module.params.get('name')
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
res = None
if not ssh_key:
@ -142,9 +150,11 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
ssh_key = self.get_ssh_key()
if not ssh_key:
self.result['changed'] = True
args = {}
args['projectid'] = self.get_project_id()
args['name'] = self.module.params.get('name')
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
if not self.module.check_mode:
res = self.cs.createSSHKeyPair(**args)
ssh_key = res['keypair']
@ -155,9 +165,11 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
ssh_key = self.get_ssh_key()
if ssh_key:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['projectid'] = self.get_project_id()
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
if not self.module.check_mode:
res = self.cs.deleteSSHKeyPair(**args)
return ssh_key
@ -165,9 +177,11 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
def get_ssh_key(self):
if not self.ssh_key:
args = {}
args['projectid'] = self.get_project_id()
args['name'] = self.module.params.get('name')
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
ssh_keys = self.cs.listSSHKeyPairs(**args)
if ssh_keys and 'sshkeypair' in ssh_keys:
@ -179,10 +193,8 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
if ssh_key:
if 'fingerprint' in ssh_key:
self.result['fingerprint'] = ssh_key['fingerprint']
if 'name' in ssh_key:
self.result['name'] = ssh_key['name']
if 'privatekey' in ssh_key:
self.result['private_key'] = ssh_key['privatekey']
return self.result
@ -196,14 +208,20 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True, default=None),
name = dict(required=True),
public_key = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
state = dict(choices=['present', 'absent'], default='present'),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
@ -231,6 +249,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -0,0 +1,633 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_template
short_description: Manages templates on Apache CloudStack based clouds.
description:
- Register a template from a URL, create a template from a ROOT volume of a stopped VM or its snapshot, and delete templates.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the template.
required: true
url:
description:
- URL of where the template is hosted.
- Mutually exclusive with C(vm).
required: false
default: null
vm:
description:
- Name of the VM the template will be created from, using either its ROOT volume or, alternatively, a snapshot.
- VM must be in stopped state if created from its volume.
- Mutually exclusive with C(url).
required: false
default: null
snapshot:
description:
- Name of the snapshot, created from the VM ROOT volume, that the template will be created from.
- C(vm) is required together with this argument.
required: false
default: null
os_type:
description:
- OS type that best represents the OS of this template.
required: false
default: null
checksum:
description:
- The MD5 checksum value of this template.
- If set, we search by checksum instead of name.
required: false
default: null
is_ready:
description:
- This flag is used for searching existing templates.
- If set to C(true), it will only list templates ready for deployment, e.g. successfully downloaded and installed.
- Recommended to set it to C(false).
required: false
default: false
is_public:
description:
- Register the template to be publicly available to all users.
- Only used if C(state) is present.
required: false
default: false
is_featured:
description:
- Register the template to be featured.
- Only used if C(state) is present.
required: false
default: false
is_dynamically_scalable:
description:
- Register the template having XS/VMWare tools installed in order to support dynamic scaling of VM CPU/memory.
- Only used if C(state) is present.
required: false
default: false
project:
description:
- Name of the project the template is to be registered in.
required: false
default: null
zone:
description:
- Name of the zone you wish the template to be registered in or deleted from.
- If not specified, first found zone will be used.
required: false
default: null
template_filter:
description:
- Name of the filter used to search for the template.
required: false
default: 'self'
choices: [ 'featured', 'self', 'selfexecutable', 'sharedexecutable', 'executable', 'community' ]
hypervisor:
description:
- Name of the hypervisor to be used for creating the new template.
- Relevant when using C(state=present).
required: false
default: null
choices: [ 'KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM' ]
requires_hvm:
description:
- True if this template requires HVM.
required: false
default: false
password_enabled:
description:
- True if the template supports the password reset feature.
required: false
default: false
template_tag:
description:
- The tag for this template.
required: false
default: null
sshkey_enabled:
description:
- True if the template supports the sshkey upload feature.
required: false
default: false
is_routing:
description:
- True if the template type is routing, i.e. the template is used to deploy a router.
- Only considered if C(url) is used.
required: false
default: false
format:
description:
- The format for the template.
- Relevant when using C(state=present).
required: false
default: null
choices: [ 'QCOW2', 'RAW', 'VHD', 'OVA' ]
is_extractable:
description:
- True if the template or its derivatives are extractable.
required: false
default: false
details:
description:
- Template details in key/value pairs.
required: false
default: null
bits:
description:
- 32 or 64 bits support.
required: false
default: '64'
displaytext:
description:
- The display text of the template.
required: false
default: null
state:
description:
- State of the template.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Register a systemvm template
- local_action:
module: cs_template
name: systemvm-4.5
url: "http://packages.shapeblue.com/systemvmtemplate/4.5/systemvm64template-4.5-vmware.ova"
hypervisor: VMware
format: OVA
zone: tokio-ix
os_type: Debian GNU/Linux 7(64-bit)
is_routing: yes
# Create a template from a stopped virtual machine's volume
- local_action:
module: cs_template
name: debian-base-template
vm: debian-base-vm
os_type: Debian GNU/Linux 7(64-bit)
zone: tokio-ix
password_enabled: yes
is_public: yes
# Create a template from a virtual machine's root volume snapshot
- local_action:
module: cs_template
name: debian-base-template
vm: debian-base-vm
snapshot: ROOT-233_2015061509114
os_type: Debian GNU/Linux 7(64-bit)
zone: tokio-ix
password_enabled: yes
is_public: yes
# Remove a template
- local_action:
module: cs_template
name: systemvm-4.2
state: absent
'''
RETURN = '''
---
name:
description: Name of the template.
returned: success
type: string
sample: Debian 7 64-bit
displaytext:
description: Displaytext of the template.
returned: success
type: string
sample: Debian 7.7 64-bit minimal 2015-03-19
checksum:
description: MD5 checksum of the template.
returned: success
type: string
sample: 0b31bccccb048d20b551f70830bb7ad0
status:
description: Status of the template.
returned: success
type: string
sample: Download Complete
is_ready:
description: True if the template is ready to be deployed from.
returned: success
type: boolean
sample: true
is_public:
description: True if the template is public.
returned: success
type: boolean
sample: true
is_featured:
description: True if the template is featured.
returned: success
type: boolean
sample: true
is_extractable:
description: True if the template is extractable.
returned: success
type: boolean
sample: true
format:
description: Format of the template.
returned: success
type: string
sample: OVA
os_type:
description: Type of the OS.
returned: success
type: string
sample: CentOS 6.5 (64-bit)
password_enabled:
description: True if the reset password feature is enabled, false otherwise.
returned: success
type: boolean
sample: false
sshkey_enabled:
description: True if the template is SSH key enabled, false otherwise.
returned: success
type: boolean
sample: false
cross_zones:
description: True if the template is managed across all zones, false otherwise.
returned: success
type: boolean
sample: false
template_type:
description: Type of the template.
returned: success
type: string
sample: USER
created:
description: Date of registering.
returned: success
type: string
sample: 2015-03-29T14:57:06+0200
template_tag:
description: Template tag related to this template.
returned: success
type: string
sample: special
hypervisor:
description: Hypervisor related to this template.
returned: success
type: string
sample: VMware
tags:
description: List of resource tags associated with the template.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
zone:
description: Name of zone the template is registered in.
returned: success
type: string
sample: zuerich
domain:
description: Domain the template is related to.
returned: success
type: string
sample: example domain
account:
description: Account the template is related to.
returned: success
type: string
sample: example account
project:
description: Name of project the template is related to.
returned: success
type: string
sample: Production
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackTemplate(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
def _get_args(self):
args = {}
args['name'] = self.module.params.get('name')
args['displaytext'] = self.module.params.get('displaytext')
args['bits'] = self.module.params.get('bits')
args['isdynamicallyscalable'] = self.module.params.get('is_dynamically_scalable')
args['isextractable'] = self.module.params.get('is_extractable')
args['isfeatured'] = self.module.params.get('is_featured')
args['ispublic'] = self.module.params.get('is_public')
args['passwordenabled'] = self.module.params.get('password_enabled')
args['requireshvm'] = self.module.params.get('requires_hvm')
args['templatetag'] = self.module.params.get('template_tag')
args['ostypeid'] = self.get_os_type(key='id')
if not args['ostypeid']:
self.module.fail_json(msg="Missing required arguments: os_type")
if not args['displaytext']:
args['displaytext'] = self.module.params.get('name')
return args
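# Return the ROOT volume of the configured VM; fail if it cannot be found.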
def get_root_volume(self, key=None):
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['virtualmachineid'] = self.get_vm(key='id')
args['type'] = "ROOT"
volumes = self.cs.listVolumes(**args)
if volumes:
return self._get_by_key(key, volumes['volume'][0])
self.module.fail_json(msg="Root volume for '%s' not found" % self.get_vm('name'))
def get_snapshot(self, key=None):
snapshot = self.module.params.get('snapshot')
if not snapshot:
return None
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['volumeid'] = self.get_root_volume('id')
snapshots = self.cs.listSnapshots(**args)
if snapshots:
for s in snapshots['snapshot']:
if snapshot in [ s['name'], s['id'] ]:
return self._get_by_key(key, s)
self.module.fail_json(msg="Snapshot '%s' not found" % snapshot)
def create_template(self):
template = self.get_template()
if not template:
self.result['changed'] = True
args = self._get_args()
snapshot_id = self.get_snapshot(key='id')
if snapshot_id:
args['snapshotid'] = snapshot_id
else:
args['volumeid'] = self.get_root_volume('id')
if not self.module.check_mode:
template = self.cs.createTemplate(**args)
if 'errortext' in template:
self.module.fail_json(msg="Failed: '%s'" % template['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
template = self._poll_job(template, 'template')
return template
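# Register a new template from a URL unless a matching template already exists.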
def register_template(self):
template = self.get_template()
if not template:
self.result['changed'] = True
args = self._get_args()
args['url'] = self.module.params.get('url')
args['format'] = self.module.params.get('format')
args['checksum'] = self.module.params.get('checksum')
args['isextractable'] = self.module.params.get('is_extractable')
args['isrouting'] = self.module.params.get('is_routing')
args['sshkeyenabled'] = self.module.params.get('sshkey_enabled')
args['hypervisor'] = self.get_hypervisor()
args['zoneid'] = self.get_zone(key='id')
args['domainid'] = self.get_domain(key='id')
args['account'] = self.get_account(key='name')
args['projectid'] = self.get_project(key='id')
if not self.module.check_mode:
res = self.cs.registerTemplate(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
template = res['template']
return template
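# Find an existing template, matching by checksum if one is given, otherwise by name.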
def get_template(self):
args = {}
args['isready'] = self.module.params.get('is_ready')
args['templatefilter'] = self.module.params.get('template_filter')
args['zoneid'] = self.get_zone(key='id')
args['domainid'] = self.get_domain(key='id')
args['account'] = self.get_account(key='name')
args['projectid'] = self.get_project(key='id')
# If checksum is set, we only search by checksum.
checksum = self.module.params.get('checksum')
if not checksum:
args['name'] = self.module.params.get('name')
templates = self.cs.listTemplates(**args)
if templates:
# If checksum is set, only a template matching the checksum is returned.
if not checksum:
return templates['template'][0]
else:
for i in templates['template']:
if i['checksum'] == checksum:
return i
return None
def remove_template(self):
template = self.get_template()
if template:
self.result['changed'] = True
args = {}
args['id'] = template['id']
args['zoneid'] = self.get_zone(key='id')
if not self.module.check_mode:
res = self.cs.deleteTemplate(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'template')
return template
def get_result(self, template):
if template:
if 'displaytext' in template:
self.result['displaytext'] = template['displaytext']
if 'name' in template:
self.result['name'] = template['name']
if 'hypervisor' in template:
self.result['hypervisor'] = template['hypervisor']
if 'zonename' in template:
self.result['zone'] = template['zonename']
if 'checksum' in template:
self.result['checksum'] = template['checksum']
if 'format' in template:
self.result['format'] = template['format']
if 'isready' in template:
self.result['is_ready'] = template['isready']
if 'ispublic' in template:
self.result['is_public'] = template['ispublic']
if 'isfeatured' in template:
self.result['is_featured'] = template['isfeatured']
if 'isextractable' in template:
self.result['is_extractable'] = template['isextractable']
# and yes! it is really camelCase!
if 'crossZones' in template:
self.result['cross_zones'] = template['crossZones']
if 'ostypename' in template:
self.result['os_type'] = template['ostypename']
if 'templatetype' in template:
self.result['template_type'] = template['templatetype']
if 'passwordenabled' in template:
self.result['password_enabled'] = template['passwordenabled']
if 'sshkeyenabled' in template:
self.result['sshkey_enabled'] = template['sshkeyenabled']
if 'status' in template:
self.result['status'] = template['status']
if 'created' in template:
self.result['created'] = template['created']
if 'templatetag' in template:
self.result['template_tag'] = template['templatetag']
if 'tags' in template:
self.result['tags'] = []
for tag in template['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
if 'domain' in template:
self.result['domain'] = template['domain']
if 'account' in template:
self.result['account'] = template['account']
if 'project' in template:
self.result['project'] = template['project']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
displaytext = dict(default=None),
url = dict(default=None),
vm = dict(default=None),
snapshot = dict(default=None),
os_type = dict(default=None),
is_ready = dict(type='bool', choices=BOOLEANS, default=False),
is_public = dict(type='bool', choices=BOOLEANS, default=True),
is_featured = dict(type='bool', choices=BOOLEANS, default=False),
is_dynamically_scalable = dict(type='bool', choices=BOOLEANS, default=False),
is_extractable = dict(type='bool', choices=BOOLEANS, default=False),
is_routing = dict(type='bool', choices=BOOLEANS, default=False),
checksum = dict(default=None),
template_filter = dict(default='self', choices=['featured', 'self', 'selfexecutable', 'sharedexecutable', 'executable', 'community']),
hypervisor = dict(choices=['KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM'], default=None),
requires_hvm = dict(type='bool', choices=BOOLEANS, default=False),
password_enabled = dict(type='bool', choices=BOOLEANS, default=False),
template_tag = dict(default=None),
sshkey_enabled = dict(type='bool', choices=BOOLEANS, default=False),
format = dict(choices=['QCOW2', 'RAW', 'VHD', 'OVA'], default=None),
details = dict(default=None),
bits = dict(type='int', choices=[ 32, 64 ], default=64),
state = dict(choices=['present', 'absent'], default='present'),
zone = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(type='bool', choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
mutually_exclusive = (
['url', 'vm'],
),
required_together = (
['api_key', 'api_secret', 'api_url'],
['format', 'url', 'hypervisor'],
),
required_one_of = (
['url', 'vm'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_tpl = AnsibleCloudStackTemplate(module)
state = module.params.get('state')
if state in ['absent']:
tpl = acs_tpl.remove_template()
else:
url = module.params.get('url')
if url:
tpl = acs_tpl.register_template()
else:
tpl = acs_tpl.create_template()
result = acs_tpl.get_result(tpl)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -25,7 +25,7 @@ short_description: Manages VM snapshots on Apache CloudStack based clouds.
description:
- Create, remove and revert VM from snapshots.
version_added: '2.0'
author: René Moser
author: "René Moser (@resmo)"
options:
name:
description:
@ -62,6 +62,16 @@ options:
required: false
default: 'present'
choices: [ 'present', 'absent', 'revert' ]
domain:
description:
- Domain the VM snapshot is related to.
required: false
default: null
account:
description:
- Account the VM snapshot is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
@ -71,7 +81,6 @@ extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# Create a VM snapshot of disk and memory before an upgrade
- local_action:
module: cs_vmsnapshot
@ -79,7 +88,6 @@ EXAMPLES = '''
vm: web-01
snapshot_memory: yes
# Revert a VM to a snapshot after a failed upgrade
- local_action:
module: cs_vmsnapshot
@ -87,7 +95,6 @@ EXAMPLES = '''
vm: web-01
state: revert
# Remove a VM snapshot after successful upgrade
- local_action:
module: cs_vmsnapshot
@ -134,6 +141,21 @@ description:
returned: success
type: string
sample: snapshot brought to you by Ansible
domain:
  description: Domain the vm snapshot is related to.
returned: success
type: string
sample: example domain
account:
description: Account the vm snapshot is related to.
returned: success
type: string
sample: example account
project:
description: Name of project the vm snapshot is related to.
returned: success
type: string
sample: Production
'''
try:
@ -150,16 +172,15 @@ class AnsibleCloudStackVmSnapshot(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
def get_snapshot(self):
args = {}
args['virtualmachineid'] = self.get_vm_id()
args['projectid'] = self.get_project_id()
args['name'] = self.module.params.get('name')
args = {}
args['virtualmachineid'] = self.get_vm('id')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
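        # The listVMSnapshot call below is filtered by the resolved VM, account,
        # domain and project ids plus the snapshot display name, so only the
        # matching snapshot (if any) is returned.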
snapshots = self.cs.listVMSnapshot(**args)
if snapshots:
@ -172,11 +193,11 @@ class AnsibleCloudStackVmSnapshot(AnsibleCloudStack):
if not snapshot:
self.result['changed'] = True
args = {}
args['virtualmachineid'] = self.get_vm_id()
args['name'] = self.module.params.get('name')
args['description'] = self.module.params.get('description')
args['snapshotmemory'] = self.module.params.get('snapshot_memory')
args = {}
args['virtualmachineid'] = self.get_vm('id')
args['name'] = self.module.params.get('name')
args['description'] = self.module.params.get('description')
args['snapshotmemory'] = self.module.params.get('snapshot_memory')
if not self.module.check_mode:
res = self.cs.createVMSnapshot(**args)
@ -242,6 +263,12 @@ class AnsibleCloudStackVmSnapshot(AnsibleCloudStack):
self.result['name'] = snapshot['name']
if 'description' in snapshot:
self.result['description'] = snapshot['description']
if 'domain' in snapshot:
self.result['domain'] = snapshot['domain']
if 'account' in snapshot:
self.result['account'] = snapshot['account']
if 'project' in snapshot:
self.result['project'] = snapshot['project']
return self.result
@ -251,15 +278,22 @@ def main():
name = dict(required=True, aliases=['displayname']),
vm = dict(required=True),
description = dict(default=None),
project = dict(default=None),
zone = dict(default=None),
snapshot_memory = dict(choices=BOOLEANS, default=False),
state = dict(choices=['present', 'absent', 'revert'], default='present'),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(default='get'),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
@ -283,6 +317,9 @@ def main():
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
except Exception, e:
module.fail_json(msg='Exception: %s' % str(e))
module.exit_json(**result)
# import module snippets

@ -78,8 +78,10 @@ options:
default: null
aliases: []
requirements: [ "libcloud" ]
author: Peter Tan <ptan@google.com>
requirements:
- "python >= 2.6"
- "apache-libcloud"
author: "Peter Tan (@tanpeter)"
'''
EXAMPLES = '''

@ -26,7 +26,7 @@ short_description: Manage LXC Containers
version_added: 1.8.0
description:
- Management of LXC containers
author: Kevin Carter
author: "Kevin Carter (@cloudnull)"
options:
name:
description:
@ -38,6 +38,7 @@ options:
- lvm
- loop
- btrfs
- overlayfs
description:
- Backend storage type for the container.
required: false
@ -112,6 +113,24 @@ options:
- Set the log level for a container where *container_log* was set.
required: false
default: INFO
clone_name:
version_added: "2.0"
description:
- Name of the new cloned server. This is only used when state is
clone.
required: false
    default: null
clone_snapshot:
version_added: "2.0"
required: false
choices:
- true
- false
description:
      - Create a snapshot of the container when cloning. This is not supported
by all container storage backends. Enabling this may fail if the
backing store does not support snapshots.
default: false
archive:
choices:
- true
@ -142,14 +161,21 @@ options:
- absent
- frozen
description:
- Start a container right after it's created.
- Define the state of a container. If you clone a container using
        `clone_name` the newly cloned container is created in a stopped state.
The running container will be stopped while the clone operation is
happening and upon completion of the clone the original container
state will be restored.
required: false
default: started
container_config:
description:
- list of 'key=value' options to use when configuring a container.
required: false
requirements: ['lxc >= 1.0', 'python2-lxc >= 0.1']
requirements:
- 'lxc >= 1.0 # OS package'
- 'python >= 2.6 # OS Package'
- 'lxc-python2 >= 0.1 # PIP Package from https://github.com/lxc/python2-lxc'
notes:
- Containers must have a unique name. If you attempt to create a container
    with a name that already exists in the user's namespace the module will
@ -169,7 +195,8 @@ notes:
creating the archive.
- If your distro does not have a package for "python2-lxc", which is a
requirement for this module, it can be installed from source at
"https://github.com/lxc/python2-lxc"
"https://github.com/lxc/python2-lxc" or installed via pip using the package
name lxc-python2.
"""
EXAMPLES = """
@ -203,6 +230,7 @@ EXAMPLES = """
- name: Create filesystem container
lxc_container:
name: test-container-config
backing_store: dir
container_log: true
template: ubuntu
state: started
@ -216,7 +244,7 @@ EXAMPLES = """
# Create an lvm container, run a complex command in it, add additional
# configuration to it, create an archive of it, and finally leave the container
# in a frozen state. The container archive will be compressed using bzip2
- name: Create an lvm container
- name: Create a frozen lvm container
lxc_container:
name: test-container-lvm
container_log: true
@ -241,14 +269,6 @@ EXAMPLES = """
- name: Debug info on container "test-container-lvm"
debug: var=lvm_container_info
- name: Get information on a given container.
lxc_container:
name: test-container-config
register: config_container_info
- name: debug info on container "test-container"
debug: var=config_container_info
- name: Run a command in a container and ensure it's in a "stopped" state.
lxc_container:
name: test-container-started
@ -263,19 +283,19 @@ EXAMPLES = """
container_command: |
echo 'hello world.' | tee /opt/frozen
- name: Start a container.
- name: Start a container
lxc_container:
name: test-container-stopped
state: started
- name: Run a command in a container and then restart it.
- name: Run a command in a container and then restart it
lxc_container:
name: test-container-started
state: restarted
container_command: |
echo 'hello world.' | tee /opt/restarted
- name: Run a complex command within a "running" container.
- name: Run a complex command within a "running" container
lxc_container:
name: test-container-started
container_command: |
@ -295,7 +315,53 @@ EXAMPLES = """
archive: true
archive_path: /opt/archives
- name: Destroy a container.
# Create a container using overlayfs, create an archive of it, create a
# snapshot clone of the container and finally leave the container
# in a frozen state. The container archive will be compressed using gzip.
- name: Create an overlayfs container archive and clone it
lxc_container:
name: test-container-overlayfs
container_log: true
template: ubuntu
state: started
backing_store: overlayfs
template_options: --release trusty
clone_snapshot: true
clone_name: test-container-overlayfs-clone-snapshot
archive: true
archive_compression: gzip
register: clone_container_info
- name: debug info on container "test-container-overlayfs"
debug: var=clone_container_info
- name: Clone a container using snapshot
lxc_container:
name: test-container-overlayfs-clone-snapshot
backing_store: overlayfs
clone_name: test-container-overlayfs-clone-snapshot2
clone_snapshot: true
- name: Create a new container and clone it
lxc_container:
name: test-container-new-archive
backing_store: dir
clone_name: test-container-new-archive-clone
- name: Archive and clone a container then destroy it
lxc_container:
name: test-container-new-archive
state: absent
clone_name: test-container-new-archive-destroyed-clone
archive: true
archive_compression: gzip
- name: Start a cloned container.
lxc_container:
name: test-container-new-archive-destroyed-clone
state: started
- name: Destroy a container
lxc_container:
name: "{{ item }}"
state: absent
@ -305,15 +371,22 @@ EXAMPLES = """
- test-container-frozen
- test-container-lvm
- test-container-config
- test-container-overlayfs
- test-container-overlayfs-clone
- test-container-overlayfs-clone-snapshot
- test-container-overlayfs-clone-snapshot2
- test-container-new-archive
- test-container-new-archive-clone
- test-container-new-archive-destroyed-clone
"""
try:
import lxc
except ImportError:
msg = 'The lxc module is not importable. Check the requirements.'
print("failed=True msg='%s'" % msg)
raise SystemExit(msg)
HAS_LXC = False
else:
HAS_LXC = True
# LXC_COMPRESSION_MAP is a map of available compression types when creating
@ -351,6 +424,15 @@ LXC_COMMAND_MAP = {
'directory': '--dir',
'zfs_root': '--zfsroot'
}
},
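    # Maps module parameters to lxc-clone command-line flags; for example the
    # module's 'name' becomes '--orig' and 'clone_name' becomes '--new'.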
'clone': {
'variables': {
'backing_store': '--backingstore',
'lxc_path': '--lxcpath',
'fs_size': '--fssize',
'name': '--orig',
'clone_name': '--new'
}
}
}
@ -369,6 +451,9 @@ LXC_BACKING_STORE = {
],
'loop': [
'lv_name', 'vg_name', 'thinpool', 'zfs_root'
],
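    # For each backing store, the options listed here are stripped from the
    # build variables in _get_vars() because they do not apply to that store;
    # overlayfs therefore drops all of the LVM/ZFS sizing options.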
'overlayfs': [
'lv_name', 'vg_name', 'fs_type', 'fs_size', 'thinpool', 'zfs_root'
]
}
@ -388,7 +473,8 @@ LXC_ANSIBLE_STATES = {
'stopped': '_stopped',
'restarted': '_restarted',
'absent': '_destroyed',
'frozen': '_frozen'
'frozen': '_frozen',
'clone': '_clone'
}
@ -439,18 +525,16 @@ def create_script(command):
f.close()
# Ensure the script is executable.
os.chmod(script_file, 0755)
    os.chmod(script_file, 01755)
# Get temporary directory.
tempdir = tempfile.gettempdir()
# Output log file.
stdout = path.join(tempdir, 'lxc-attach-script.log')
stdout_file = open(stdout, 'ab')
stdout_file = open(path.join(tempdir, 'lxc-attach-script.log'), 'ab')
# Error log file.
stderr = path.join(tempdir, 'lxc-attach-script.err')
stderr_file = open(stderr, 'ab')
stderr_file = open(path.join(tempdir, 'lxc-attach-script.err'), 'ab')
# Execute the script command.
try:
@ -482,6 +566,7 @@ class LxcContainerManagement(object):
self.container_name = self.module.params['name']
self.container = self.get_container_bind()
self.archive_info = None
self.clone_info = None
def get_container_bind(self):
return lxc.Container(name=self.container_name)
@ -502,15 +587,15 @@ class LxcContainerManagement(object):
return num
@staticmethod
def _container_exists(name):
def _container_exists(container_name):
"""Check if a container exists.
:param name: Name of the container.
:param container_name: Name of the container.
:type: ``str``
        :returns: True if the container is found, otherwise False.
        :rtype: ``bool``
"""
if [i for i in lxc.list_containers() if i == name]:
if [i for i in lxc.list_containers() if i == container_name]:
return True
else:
return False
@ -543,6 +628,7 @@ class LxcContainerManagement(object):
"""
# Remove incompatible storage backend options.
variables = variables.copy()
for v in LXC_BACKING_STORE[self.module.params['backing_store']]:
variables.pop(v, None)
@ -624,7 +710,7 @@ class LxcContainerManagement(object):
for option_line in container_config:
# Look for key in config
if option_line.startswith(key):
_, _value = option_line.split('=')
_, _value = option_line.split('=', 1)
config_value = ' '.join(_value.split())
line_index = container_config.index(option_line)
# If the sanitized values don't match replace them
@ -655,6 +741,68 @@ class LxcContainerManagement(object):
self._container_startup()
self.container.freeze()
def _container_create_clone(self):
"""Clone a new LXC container from an existing container.
This method will clone an existing container to a new container using
the `clone_name` variable as the new container name. The method will
create a container if the container `name` does not exist.
Note that cloning a container will ensure that the original container
is "stopped" before the clone can be done. Because this operation can
require a state change the method will return the original container
to its prior state upon completion of the clone.
Once the clone is complete the new container will be left in a stopped
state.
"""
# Ensure that the state of the original container is stopped
container_state = self._get_state()
if container_state != 'stopped':
self.state_change = True
self.container.stop()
build_command = [
self.module.get_bin_path('lxc-clone', True),
]
build_command = self._add_variables(
variables_dict=self._get_vars(
variables=LXC_COMMAND_MAP['clone']['variables']
),
build_command=build_command
)
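        # Illustrative only: with default parameters the assembled command
        # resembles "lxc-clone --orig <name> --new <clone_name>", with
        # "--backingstore", "--lxcpath", "--fssize" and "--snapshot" appended
        # when the corresponding options are set.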
# Load logging for the instance when creating it.
if self.module.params.get('clone_snapshot') in BOOLEANS_TRUE:
build_command.append('--snapshot')
# Check for backing_store == overlayfs if so force the use of snapshot
# If overlay fs is used and snapshot is unset the clone command will
# fail with an unsupported type.
elif self.module.params.get('backing_store') == 'overlayfs':
build_command.append('--snapshot')
rc, return_data, err = self._run_command(build_command)
if rc != 0:
message = "Failed executing lxc-clone."
self.failure(
err=err, rc=rc, msg=message, command=' '.join(
build_command
)
)
else:
self.state_change = True
# Restore the original state of the origin container if it was
# not in a stopped state.
if container_state == 'running':
self.container.start()
elif container_state == 'frozen':
self.container.start()
self.container.freeze()
return True
def _create(self):
"""Create a new LXC container.
@ -709,9 +857,9 @@ class LxcContainerManagement(object):
rc, return_data, err = self._run_command(build_command)
if rc != 0:
msg = "Failed executing lxc-create."
message = "Failed executing lxc-create."
self.failure(
err=err, rc=rc, msg=msg, command=' '.join(build_command)
err=err, rc=rc, msg=message, command=' '.join(build_command)
)
else:
self.state_change = True
@ -751,7 +899,7 @@ class LxcContainerManagement(object):
:rtype: ``str``
"""
if self._container_exists(name=self.container_name):
if self._container_exists(container_name=self.container_name):
return str(self.container.state).lower()
else:
return str('absent')
@ -794,7 +942,7 @@ class LxcContainerManagement(object):
rc=1,
                msg='The container [ %s ] failed to start. Check that lxc is'
' available and that the container is in a functional'
' state.'
' state.' % self.container_name
)
def _check_archive(self):
@ -808,6 +956,23 @@ class LxcContainerManagement(object):
'archive': self._container_create_tar()
}
def _check_clone(self):
"""Create a compressed archive of a container.
This will store archive_info in as self.archive_info
"""
clone_name = self.module.params.get('clone_name')
if clone_name:
if not self._container_exists(container_name=clone_name):
self.clone_info = {
'cloned': self._container_create_clone()
}
else:
self.clone_info = {
'cloned': False
}
def _destroyed(self, timeout=60):
"""Ensure a container is destroyed.
@ -816,12 +981,15 @@ class LxcContainerManagement(object):
"""
for _ in xrange(timeout):
if not self._container_exists(name=self.container_name):
if not self._container_exists(container_name=self.container_name):
break
# Check if the container needs to have an archive created.
self._check_archive()
# Check if the container is to be cloned
self._check_clone()
if self._get_state() != 'stopped':
self.state_change = True
self.container.stop()
@ -852,7 +1020,7 @@ class LxcContainerManagement(object):
"""
self.check_count(count=count, method='frozen')
if self._container_exists(name=self.container_name):
if self._container_exists(container_name=self.container_name):
self._execute_command()
# Perform any configuration updates
@ -871,6 +1039,9 @@ class LxcContainerManagement(object):
# Check if the container needs to have an archive created.
self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else:
self._create()
count += 1
@ -886,7 +1057,7 @@ class LxcContainerManagement(object):
"""
self.check_count(count=count, method='restart')
if self._container_exists(name=self.container_name):
if self._container_exists(container_name=self.container_name):
self._execute_command()
# Perform any configuration updates
@ -896,8 +1067,14 @@ class LxcContainerManagement(object):
self.container.stop()
self.state_change = True
# Run container startup
self._container_startup()
# Check if the container needs to have an archive created.
self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else:
self._create()
count += 1
@ -913,7 +1090,7 @@ class LxcContainerManagement(object):
"""
self.check_count(count=count, method='stop')
if self._container_exists(name=self.container_name):
if self._container_exists(container_name=self.container_name):
self._execute_command()
# Perform any configuration updates
@ -925,6 +1102,9 @@ class LxcContainerManagement(object):
# Check if the container needs to have an archive created.
self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else:
self._create()
count += 1
@ -940,7 +1120,7 @@ class LxcContainerManagement(object):
"""
self.check_count(count=count, method='start')
if self._container_exists(name=self.container_name):
if self._container_exists(container_name=self.container_name):
container_state = self._get_state()
if container_state == 'running':
pass
@ -965,6 +1145,9 @@ class LxcContainerManagement(object):
# Check if the container needs to have an archive created.
self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else:
self._create()
count += 1
@ -1007,18 +1190,18 @@ class LxcContainerManagement(object):
all_lvms = [i.split() for i in stdout.splitlines()][1:]
return [lv_entry[0] for lv_entry in all_lvms if lv_entry[1] == vg]
def _get_vg_free_pe(self, name):
def _get_vg_free_pe(self, vg_name):
"""Return the available size of a given VG.
:param name: Name of volume.
:type name: ``str``
        :param vg_name: Name of the volume group.
        :type vg_name: ``str``
        :returns: free size and unit of measurement of the VG
:type: ``tuple``
"""
build_command = [
'vgdisplay',
name,
vg_name,
'--units',
'g'
]
@ -1027,7 +1210,7 @@ class LxcContainerManagement(object):
self.failure(
err=err,
rc=rc,
msg='failed to read vg %s' % name,
msg='failed to read vg %s' % vg_name,
command=' '.join(build_command)
)
@ -1036,17 +1219,17 @@ class LxcContainerManagement(object):
_free_pe = free_pe[0].split()
return float(_free_pe[-2]), _free_pe[-1]
def _get_lv_size(self, name):
def _get_lv_size(self, lv_name):
"""Return the available size of a given LV.
:param name: Name of volume.
:type name: ``str``
:param lv_name: Name of volume.
:type lv_name: ``str``
:returns: size and measurement of an LV
:type: ``tuple``
"""
vg = self._get_lxc_vg()
lv = os.path.join(vg, name)
lv = os.path.join(vg, lv_name)
build_command = [
'lvdisplay',
lv,
@ -1080,7 +1263,7 @@ class LxcContainerManagement(object):
"""
vg = self._get_lxc_vg()
free_space, messurement = self._get_vg_free_pe(name=vg)
free_space, messurement = self._get_vg_free_pe(vg_name=vg)
if free_space < float(snapshot_size_gb):
message = (
@ -1183,25 +1366,25 @@ class LxcContainerManagement(object):
return archive_name
def _lvm_lv_remove(self, name):
def _lvm_lv_remove(self, lv_name):
"""Remove an LV.
:param name: The name of the logical volume
:type name: ``str``
:param lv_name: The name of the logical volume
:type lv_name: ``str``
"""
vg = self._get_lxc_vg()
build_command = [
self.module.get_bin_path('lvremove', True),
"-f",
"%s/%s" % (vg, name),
"%s/%s" % (vg, lv_name),
]
rc, stdout, err = self._run_command(build_command)
if rc != 0:
self.failure(
err=err,
rc=rc,
msg='Failed to remove LVM LV %s/%s' % (vg, name),
msg='Failed to remove LVM LV %s/%s' % (vg, lv_name),
command=' '.join(build_command)
)
@ -1213,31 +1396,71 @@ class LxcContainerManagement(object):
:param temp_dir: path to the temporary local working directory
:type temp_dir: ``str``
"""
# This loop is created to support overlayfs archives. This should
# squash all of the layers into a single archive.
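        # For an overlayfs-backed container the rootfs string typically looks
        # like "overlayfs:/lower/dir:/upper/dir" (illustrative), so splitting
        # on ':' yields the "overlayfs" marker plus one path per layer.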
fs_paths = container_path.split(':')
if 'overlayfs' in fs_paths:
fs_paths.pop(fs_paths.index('overlayfs'))
for fs_path in fs_paths:
# Set the path to the container data
fs_path = os.path.dirname(fs_path)
# Run the sync command
build_command = [
self.module.get_bin_path('rsync', True),
'-aHAX',
fs_path,
temp_dir
]
rc, stdout, err = self._run_command(
build_command,
unsafe_shell=True
)
if rc != 0:
self.failure(
err=err,
rc=rc,
msg='failed to perform archive',
command=' '.join(build_command)
)
def _unmount(self, mount_point):
"""Unmount a file system.
:param mount_point: path on the file system that is mounted.
:type mount_point: ``str``
"""
build_command = [
self.module.get_bin_path('rsync', True),
'-aHAX',
container_path,
temp_dir
self.module.get_bin_path('umount', True),
mount_point,
]
rc, stdout, err = self._run_command(build_command, unsafe_shell=True)
rc, stdout, err = self._run_command(build_command)
if rc != 0:
self.failure(
err=err,
rc=rc,
msg='failed to perform archive',
msg='failed to unmount [ %s ]' % mount_point,
command=' '.join(build_command)
)
def _unmount(self, mount_point):
"""Unmount a file system.
def _overlayfs_mount(self, lowerdir, upperdir, mount_point):
"""mount an lv.
:param lowerdir: name/path of the lower directory
:type lowerdir: ``str``
:param upperdir: name/path of the upper directory
:type upperdir: ``str``
:param mount_point: path on the file system that is mounted.
:type mount_point: ``str``
"""
build_command = [
self.module.get_bin_path('umount', True),
self.module.get_bin_path('mount', True),
'-t overlayfs',
'-o lowerdir=%s,upperdir=%s' % (lowerdir, upperdir),
'overlayfs',
mount_point,
]
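        # Roughly equivalent shell invocation (illustrative):
        #   mount -t overlayfs -o lowerdir=<lowerdir>,upperdir=<upperdir> overlayfs <mount_point>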
rc, stdout, err = self._run_command(build_command)
@ -1245,8 +1468,8 @@ class LxcContainerManagement(object):
self.failure(
err=err,
rc=rc,
msg='failed to unmount [ %s ]' % mount_point,
command=' '.join(build_command)
msg='failed to mount overlayfs:%s:%s to %s -- Command: %s'
% (lowerdir, upperdir, mount_point, build_command)
)
def _container_create_tar(self):
@ -1275,13 +1498,15 @@ class LxcContainerManagement(object):
# Test if the containers rootfs is a block device
block_backed = lxc_rootfs.startswith(os.path.join(os.sep, 'dev'))
# Test if the container is using overlayfs
overlayfs_backed = lxc_rootfs.startswith('overlayfs')
mount_point = os.path.join(work_dir, 'rootfs')
# Set the snapshot name if needed
snapshot_name = '%s_lxc_snapshot' % self.container_name
# Set the path to the container data
container_path = os.path.dirname(lxc_rootfs)
container_state = self._get_state()
try:
# Ensure the original container is stopped or frozen
@ -1292,7 +1517,7 @@ class LxcContainerManagement(object):
self.container.stop()
# Sync the container data from the container_path to work_dir
self._rsync_data(container_path, temp_dir)
self._rsync_data(lxc_rootfs, temp_dir)
if block_backed:
if snapshot_name not in self._lvm_lv_list():
@ -1301,7 +1526,7 @@ class LxcContainerManagement(object):
# Take snapshot
size, measurement = self._get_lv_size(
name=self.container_name
lv_name=self.container_name
)
self._lvm_snapshot_create(
source_lv=self.container_name,
@ -1322,25 +1547,33 @@ class LxcContainerManagement(object):
' up old snapshot of containers before continuing.'
% snapshot_name
)
# Restore original state of container
if container_state == 'running':
if self._get_state() == 'frozen':
self.container.unfreeze()
else:
self.container.start()
elif overlayfs_backed:
lowerdir, upperdir = lxc_rootfs.split(':')[1:]
self._overlayfs_mount(
lowerdir=lowerdir,
upperdir=upperdir,
mount_point=mount_point
)
# Set the state as changed and set a new fact
self.state_change = True
return self._create_tar(source_dir=work_dir)
finally:
if block_backed:
if block_backed or overlayfs_backed:
# unmount snapshot
self._unmount(mount_point)
if block_backed:
# Remove snapshot
self._lvm_lv_remove(snapshot_name)
# Restore original state of container
if container_state == 'running':
if self._get_state() == 'frozen':
self.container.unfreeze()
else:
self.container.start()
# Remove tmpdir
shutil.rmtree(temp_dir)
@ -1374,6 +1607,9 @@ class LxcContainerManagement(object):
if self.archive_info:
outcome.update(self.archive_info)
if self.clone_info:
outcome.update(self.clone_info)
self.module.exit_json(
changed=self.state_change,
lxc_container=outcome
@ -1450,6 +1686,14 @@ def main():
choices=[n for i in LXC_LOGGING_LEVELS.values() for n in i],
default='INFO'
),
clone_name=dict(
type='str',
required=False
),
clone_snapshot=dict(
choices=BOOLEANS,
default='false'
),
archive=dict(
choices=BOOLEANS,
default='false'
@ -1466,6 +1710,11 @@ def main():
supports_check_mode=False,
)
if not HAS_LXC:
module.fail_json(
msg='The `lxc` module is not importable. Check the requirements.'
)
lv_name = module.params.get('lv_name')
if not lv_name:
module.params['lv_name'] = module.params.get('name')
@ -1477,4 +1726,3 @@ def main():
# import module bits
from ansible.module_utils.basic import *
main()

@ -20,7 +20,7 @@
DOCUMENTATION = '''
---
module: ovirt
author: Vincent Van der Kussen
author: "Vincent Van der Kussen (@vincentvdk)"
short_description: oVirt/RHEV platform management
description:
- allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the oVirt/RHEV platform
@ -152,7 +152,9 @@ options:
aliases: []
choices: ['present', 'absent', 'shutdown', 'started', 'restarted']
requirements: [ "ovirt-engine-sdk" ]
requirements:
- "python >= 2.6"
- "ovirt-engine-sdk-python"
'''
EXAMPLES = '''
# Basic example provisioning from image.

@ -0,0 +1,433 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: proxmox
short_description: management of instances in Proxmox VE cluster
description:
- allows you to create/delete/stop instances in Proxmox VE cluster
version_added: "2.0"
options:
api_host:
description:
- the host of the Proxmox VE cluster
required: true
api_user:
description:
- the user to authenticate with
required: true
api_password:
description:
- the password to authenticate with
- you can use PROXMOX_PASSWORD environment variable
default: null
required: false
vmid:
description:
- the instance id
default: null
required: true
validate_certs:
description:
- enable / disable https certificate verification
default: false
required: false
type: boolean
node:
description:
      - Proxmox VE node on which the new VM will be created
      - required only for C(state=present)
      - for other states it will be autodiscovered
default: null
required: false
password:
description:
- the instance root password
- required only for C(state=present)
default: null
required: false
hostname:
description:
- the instance hostname
- required only for C(state=present)
default: null
required: false
ostemplate:
description:
      - the template used for creating the VM
- required only for C(state=present)
default: null
required: false
disk:
description:
- hard disk size in GB for instance
default: 3
required: false
cpus:
description:
      - number of allocated cpus for instance
default: 1
required: false
memory:
description:
- memory size in MB for instance
default: 512
required: false
swap:
description:
- swap memory size in MB for instance
default: 0
required: false
netif:
description:
- specifies network interfaces for the container
default: null
required: false
type: string
ip_address:
description:
- specifies the address the container will be assigned
default: null
required: false
type: string
onboot:
description:
- specifies whether a VM will be started during system bootup
default: false
required: false
type: boolean
storage:
description:
- target storage
default: 'local'
required: false
type: string
cpuunits:
description:
- CPU weight for a VM
default: 1000
required: false
type: integer
nameserver:
description:
- sets DNS server IP address for a container
default: null
required: false
type: string
searchdomain:
description:
- sets DNS search domain for a container
default: null
required: false
type: string
timeout:
description:
- timeout for operations
default: 30
required: false
type: integer
force:
description:
- forcing operations
- can be used only with states C(present), C(stopped), C(restarted)
      - with C(state=present) the force option allows overwriting an existing container
      - with states C(stopped) and C(restarted) it allows force stopping the instance
default: false
required: false
type: boolean
state:
description:
- Indicate desired state of the instance
choices: ['present', 'started', 'absent', 'stopped', 'restarted']
default: present
notes:
  - Requires proxmoxer and requests modules on host. These modules can be installed with pip.
requirements: [ "proxmoxer", "requests" ]
author: "Sergei Antipov @UnderGreen"
'''
EXAMPLES = '''
# Create new container with minimal options
- proxmox: vmid=100 node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' password='123456' hostname='example.org' ostemplate='local:vztmpl/ubuntu-14.04-x86_64.tar.gz'
# Create new container with minimal options and force (it will overwrite an existing container)
- proxmox: vmid=100 node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' password='123456' hostname='example.org' ostemplate='local:vztmpl/ubuntu-14.04-x86_64.tar.gz' force=yes
# Create new container with minimal options using the PROXMOX_PASSWORD environment variable (you should export it beforehand)
- proxmox: vmid=100 node='uk-mc02' api_user='root@pam' api_host='node1' password='123456' hostname='example.org' ostemplate='local:vztmpl/ubuntu-14.04-x86_64.tar.gz'
# Start container
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=started
# Stop container
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=stopped
# Stop container with force
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' force=yes state=stopped
# Restart container (you can't restart a stopped or mounted container)
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=restarted
# Remove container
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=absent
'''
import os
import time
try:
from proxmoxer import ProxmoxAPI
HAS_PROXMOXER = True
except ImportError:
HAS_PROXMOXER = False
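# The helpers below return (possibly empty) lists so callers can use them
# directly in truth tests, e.g. "if get_instance(proxmox, vmid):" or
# "if not node_check(proxmox, node):".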
def get_instance(proxmox, vmid):
return [ vm for vm in proxmox.cluster.resources.get(type='vm') if vm['vmid'] == int(vmid) ]
def content_check(proxmox, node, ostemplate, storage):
return [ True for cnt in proxmox.nodes(node).storage(storage).content.get() if cnt['volid'] == ostemplate ]
def node_check(proxmox, node):
return [ True for nd in proxmox.nodes.get() if nd['node'] == node ]
def create_instance(module, proxmox, vmid, node, disk, storage, cpus, memory, swap, timeout, **kwargs):
proxmox_node = proxmox.nodes(node)
taskid = proxmox_node.openvz.create(vmid=vmid, storage=storage, memory=memory, swap=swap,
cpus=cpus, disk=disk, **kwargs)
while timeout:
if ( proxmox_node.tasks(taskid).status.get()['status'] == 'stopped'
and proxmox_node.tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for creating VM. Last line in task before timeout: %s'
% proxmox_node.tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def start_instance(module, proxmox, vm, vmid, timeout):
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.start.post()
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for starting VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def stop_instance(module, proxmox, vm, vmid, timeout, force):
if force:
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.shutdown.post(forceStop=1)
else:
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.shutdown.post()
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for stopping VM. Last line in task before timeout: %s'
                          % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def umount_instance(module, proxmox, vm, vmid, timeout):
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.umount.post()
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for unmounting VM. Last line in task before timeout: %s'
                          % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def main():
module = AnsibleModule(
argument_spec = dict(
api_host = dict(required=True),
api_user = dict(required=True),
api_password = dict(no_log=True),
vmid = dict(required=True),
validate_certs = dict(type='bool', choices=BOOLEANS, default='no'),
node = dict(),
password = dict(no_log=True),
hostname = dict(),
ostemplate = dict(),
disk = dict(type='int', default=3),
cpus = dict(type='int', default=1),
memory = dict(type='int', default=512),
swap = dict(type='int', default=0),
netif = dict(),
ip_address = dict(),
onboot = dict(type='bool', choices=BOOLEANS, default='no'),
storage = dict(default='local'),
cpuunits = dict(type='int', default=1000),
nameserver = dict(),
searchdomain = dict(),
timeout = dict(type='int', default=30),
force = dict(type='bool', choices=BOOLEANS, default='no'),
state = dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted']),
)
)
if not HAS_PROXMOXER:
module.fail_json(msg='proxmoxer required for this module')
state = module.params['state']
api_user = module.params['api_user']
api_host = module.params['api_host']
api_password = module.params['api_password']
vmid = module.params['vmid']
validate_certs = module.params['validate_certs']
node = module.params['node']
disk = module.params['disk']
cpus = module.params['cpus']
memory = module.params['memory']
swap = module.params['swap']
storage = module.params['storage']
timeout = module.params['timeout']
# If password not set get it from PROXMOX_PASSWORD env
if not api_password:
try:
api_password = os.environ['PROXMOX_PASSWORD']
except KeyError, e:
module.fail_json(msg='You should set api_password param or use PROXMOX_PASSWORD environment variable')
try:
proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs)
except Exception, e:
module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)
if state == 'present':
try:
if get_instance(proxmox, vmid) and not module.params['force']:
module.exit_json(changed=False, msg="VM with vmid = %s is already exists" % vmid)
elif not (node, module.params['hostname'] and module.params['password'] and module.params['ostemplate']):
module.fail_json(msg='node, hostname, password and ostemplate are mandatory for creating vm')
elif not node_check(proxmox, node):
module.fail_json(msg="node '%s' not exists in cluster" % node)
elif not content_check(proxmox, node, module.params['ostemplate'], storage):
module.fail_json(msg="ostemplate '%s' not exists on node %s and storage %s"
% (module.params['ostemplate'], node, storage))
create_instance(module, proxmox, vmid, node, disk, storage, cpus, memory, swap, timeout,
password = module.params['password'],
hostname = module.params['hostname'],
ostemplate = module.params['ostemplate'],
netif = module.params['netif'],
ip_address = module.params['ip_address'],
onboot = int(module.params['onboot']),
cpuunits = module.params['cpuunits'],
nameserver = module.params['nameserver'],
searchdomain = module.params['searchdomain'],
force = int(module.params['force']))
module.exit_json(changed=True, msg="deployed VM %s from template %s" % (vmid, module.params['ostemplate']))
except Exception, e:
module.fail_json(msg="creation of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'started':
try:
vm = get_instance(proxmox, vmid)
if not vm:
                module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is already running" % vmid)
if start_instance(module, proxmox, vm, vmid, timeout):
module.exit_json(changed=True, msg="VM %s started" % vmid)
except Exception, e:
module.fail_json(msg="starting of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'stopped':
try:
vm = get_instance(proxmox, vmid)
if not vm:
                module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'mounted':
if module.params['force']:
if umount_instance(module, proxmox, vm, vmid, timeout):
module.exit_json(changed=True, msg="VM %s is shutting down" % vmid)
else:
module.exit_json(changed=False, msg=("VM %s is already shutdown, but mounted. "
"You can use force option to umount it.") % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'stopped':
module.exit_json(changed=False, msg="VM %s is already shutdown" % vmid)
if stop_instance(module, proxmox, vm, vmid, timeout, force = module.params['force']):
module.exit_json(changed=True, msg="VM %s is shutting down" % vmid)
except Exception, e:
module.fail_json(msg="stopping of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'restarted':
try:
vm = get_instance(proxmox, vmid)
if not vm:
                module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if ( proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'stopped'
or proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'mounted' ):
module.exit_json(changed=False, msg="VM %s is not running" % vmid)
if ( stop_instance(module, proxmox, vm, vmid, timeout, force = module.params['force']) and
start_instance(module, proxmox, vm, vmid, timeout) ):
module.exit_json(changed=True, msg="VM %s is restarted" % vmid)
except Exception, e:
module.fail_json(msg="restarting of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'absent':
try:
vm = get_instance(proxmox, vmid)
if not vm:
module.exit_json(changed=False, msg="VM %s does not exist" % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is running. Stop it before deletion." % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'mounted':
module.exit_json(changed=False, msg="VM %s is mounted. Stop it with force option before deletion." % vmid)
taskid = proxmox.nodes(vm[0]['node']).openvz.delete(vmid)
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
module.exit_json(changed=True, msg="VM %s removed" % vmid)
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for removing VM. Last line in task before timeout: %s'
                              % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
except Exception, e:
module.fail_json(msg="deletion of VM %s failed with exception: %s" % ( vmid, e ))
# import module snippets
from ansible.module_utils.basic import *
main()

@ -0,0 +1,232 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: proxmox_template
short_description: management of OS templates in Proxmox VE cluster
description:
- allows you to upload/delete templates in Proxmox VE cluster
version_added: "2.0"
options:
api_host:
description:
- the host of the Proxmox VE cluster
required: true
api_user:
description:
- the user to authenticate with
required: true
api_password:
description:
- the password to authenticate with
- you can use PROXMOX_PASSWORD environment variable
default: null
required: false
validate_certs:
description:
- enable / disable https certificate verification
default: false
required: false
type: boolean
node:
description:
      - Proxmox VE node on which to operate with the template
default: null
required: true
src:
description:
      - path to the template file to upload
- required only for C(state=present)
default: null
required: false
aliases: ['path']
template:
description:
- the template name
- required only for states C(absent), C(info)
default: null
required: false
content_type:
description:
- content type
- required only for C(state=present)
default: 'vztmpl'
required: false
choices: ['vztmpl', 'iso']
storage:
description:
- target storage
default: 'local'
required: false
type: string
timeout:
description:
- timeout for operations
default: 30
required: false
type: integer
force:
description:
      - can be used only with C(state=present); an existing template will be overwritten
default: false
required: false
type: boolean
state:
description:
- Indicate desired state of the template
choices: ['present', 'absent']
default: present
notes:
  - Requires proxmoxer and requests modules on host. These modules can be installed with pip.
requirements: [ "proxmoxer", "requests" ]
author: "Sergei Antipov @UnderGreen"
'''
EXAMPLES = '''
# Upload new openvz template with minimal options
- proxmox_template: node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' src='~/ubuntu-14.04-x86_64.tar.gz'
# Upload new openvz template with minimal options using the PROXMOX_PASSWORD environment variable (you should export it beforehand)
- proxmox_template: node='uk-mc02' api_user='root@pam' api_host='node1' src='~/ubuntu-14.04-x86_64.tar.gz'
# Upload new openvz template with all options and force overwrite
- proxmox_template: node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' storage='local' content_type='vztmpl' src='~/ubuntu-14.04-x86_64.tar.gz' force=yes
# Delete template with minimal options
- proxmox_template: node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' template='ubuntu-14.04-x86_64.tar.gz' state=absent
'''
import os
import time
try:
from proxmoxer import ProxmoxAPI
HAS_PROXMOXER = True
except ImportError:
HAS_PROXMOXER = False
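# A volid combines storage, content type and template name in the form
# '<storage>:<content_type>/<template>', for example (illustrative)
# 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz'.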
def get_template(proxmox, node, storage, content_type, template):
return [ True for tmpl in proxmox.nodes(node).storage(storage).content.get()
if tmpl['volid'] == '%s:%s/%s' % (storage, content_type, template) ]
def upload_template(module, proxmox, api_host, node, storage, content_type, realpath, timeout):
taskid = proxmox.nodes(node).storage(storage).upload.post(content=content_type, filename=open(realpath))
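    # The upload returns a task id; poll the task status on the API host's node
    # until it reports 'stopped' with exitstatus 'OK', or fail once 'timeout'
    # seconds have elapsed.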
while timeout:
task_status = proxmox.nodes(api_host.split('.')[0]).tasks(taskid).status.get()
if task_status['status'] == 'stopped' and task_status['exitstatus'] == 'OK':
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for uploading template. Last line in task before timeout: %s'
                          % proxmox.nodes(node).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def delete_template(module, proxmox, node, storage, content_type, template, timeout):
volid = '%s:%s/%s' % (storage, content_type, template)
proxmox.nodes(node).storage(storage).content.delete(volid)
while timeout:
if not get_template(proxmox, node, storage, content_type, template):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for deleting template.')
time.sleep(1)
return False
def main():
module = AnsibleModule(
argument_spec = dict(
api_host = dict(required=True),
api_user = dict(required=True),
api_password = dict(no_log=True),
validate_certs = dict(type='bool', choices=BOOLEANS, default='no'),
node = dict(),
src = dict(),
template = dict(),
content_type = dict(default='vztmpl', choices=['vztmpl','iso']),
storage = dict(default='local'),
timeout = dict(type='int', default=30),
force = dict(type='bool', choices=BOOLEANS, default='no'),
state = dict(default='present', choices=['present', 'absent']),
)
)
if not HAS_PROXMOXER:
module.fail_json(msg='proxmoxer required for this module')
state = module.params['state']
api_user = module.params['api_user']
api_host = module.params['api_host']
api_password = module.params['api_password']
validate_certs = module.params['validate_certs']
node = module.params['node']
storage = module.params['storage']
timeout = module.params['timeout']
# If password not set get it from PROXMOX_PASSWORD env
if not api_password:
try:
api_password = os.environ['PROXMOX_PASSWORD']
except KeyError, e:
module.fail_json(msg='You should set api_password param or use PROXMOX_PASSWORD environment variable')
try:
proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs)
except Exception, e:
module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)
if state == 'present':
try:
content_type = module.params['content_type']
src = module.params['src']
from ansible import utils
realpath = utils.path_dwim(None, src)
template = os.path.basename(realpath)
if get_template(proxmox, node, storage, content_type, template) and not module.params['force']:
                module.exit_json(changed=False, msg='template with volid=%s:%s/%s already exists' % (storage, content_type, template))
            elif not src:
                module.fail_json(msg='src param for uploading template file is mandatory')
            elif not (os.path.exists(realpath) and os.path.isfile(realpath)):
                module.fail_json(msg='template file at path %s does not exist' % realpath)
if upload_template(module, proxmox, api_host, node, storage, content_type, realpath, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s uploaded' % (storage, content_type, template))
except Exception, e:
module.fail_json(msg="uploading of template %s failed with exception: %s" % ( template, e ))
elif state == 'absent':
try:
content_type = module.params['content_type']
template = module.params['template']
if not template:
module.fail_json(msg='template param is mandatory')
elif not get_template(proxmox, node, storage, content_type, template):
module.exit_json(changed=False, msg='template with volid=%s:%s/%s is already deleted' % (storage, content_type, template))
if delete_template(module, proxmox, node, storage, content_type, template, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s deleted' % (storage, content_type, template))
except Exception, e:
module.fail_json(msg="deleting of template %s failed with exception: %s" % ( template, e ))
# import module snippets
from ansible.module_utils.basic import *
main()

@ -55,8 +55,13 @@ options:
- XML document used with the define command
required: false
default: null
requirements: [ "libvirt" ]
author: Michael DeHaan, Seth Vidal
requirements:
- "python >= 2.6"
- "libvirt-python"
author:
- "Ansible Core Team"
- "Michael DeHaan"
- "Seth Vidal"
'''
EXAMPLES = '''

@ -0,0 +1,227 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_alarm
short_description: Create or delete a Rackspace Cloud Monitoring alarm.
description:
- Create or delete a Rackspace Cloud Monitoring alarm that associates an
existing rax_mon_entity, rax_mon_check, and rax_mon_notification_plan with
criteria that specify what conditions will trigger which levels of
notifications. Rackspace monitoring module flow | rax_mon_entity ->
rax_mon_check -> rax_mon_notification -> rax_mon_notification_plan ->
*rax_mon_alarm*
version_added: "2.0"
options:
state:
description:
- Ensure that the alarm with this C(label) exists or does not exist.
choices: [ "present", "absent" ]
required: false
default: present
label:
description:
- Friendly name for this alarm, used to achieve idempotence. Must be a String
between 1 and 255 characters long.
required: true
entity_id:
description:
- ID of the entity this alarm is attached to. May be acquired by registering
the value of a rax_mon_entity task.
required: true
check_id:
description:
- ID of the check that should be alerted on. May be acquired by registering
the value of a rax_mon_check task.
required: true
notification_plan_id:
description:
- ID of the notification plan to trigger if this alarm fires. May be acquired
by registering the value of a rax_mon_notification_plan task.
required: true
criteria:
description:
- Alarm DSL that describes alerting conditions and their output states. Must
be between 1 and 16384 characters long. See
http://docs.rackspace.com/cm/api/v1.0/cm-devguide/content/alerts-language.html
for a reference on the alerting language.
disabled:
description:
- If yes, create this alarm, but leave it in an inactive state. Defaults to
no.
choices: [ "yes", "no" ]
metadata:
description:
- Arbitrary key/value pairs to accompany the alarm. Must be a hash of String
keys and values between 1 and 255 characters long.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Alarm example
gather_facts: False
hosts: local
connection: local
tasks:
- name: Ensure that a specific alarm exists.
rax_mon_alarm:
credentials: ~/.rax_pub
state: present
label: uhoh
entity_id: "{{ the_entity['entity']['id'] }}"
check_id: "{{ the_check['check']['id'] }}"
notification_plan_id: "{{ defcon1['notification_plan']['id'] }}"
criteria: >
if (rate(metric['average']) > 10) {
return new AlarmStatus(WARNING);
}
return new AlarmStatus(OK);
register: the_alarm
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def alarm(module, state, label, entity_id, check_id, notification_plan_id, criteria,
disabled, metadata):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
    if criteria and (len(criteria) < 1 or len(criteria) > 16384):
module.fail_json(msg='criteria must be between 1 and 16384 characters long')
# Coerce attributes.
changed = False
alarm = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = [a for a in cm.list_alarms(entity_id) if a.label == label]
if existing:
alarm = existing[0]
if state == 'present':
should_create = False
should_update = False
should_delete = False
if len(existing) > 1:
module.fail_json(msg='%s existing alarms have the label %s.' %
(len(existing), label))
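        # If the existing alarm points at a different check or notification
        # plan it has to be recreated (deleted and created again); otherwise a
        # change in criteria, metadata or the disabled flag only needs an
        # in-place update.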
if alarm:
if check_id != alarm.check_id or notification_plan_id != alarm.notification_plan_id:
should_delete = should_create = True
should_update = (disabled and disabled != alarm.disabled) or \
(metadata and metadata != alarm.metadata) or \
(criteria and criteria != alarm.criteria)
if should_update and not should_delete:
cm.update_alarm(entity=entity_id, alarm=alarm,
criteria=criteria, disabled=disabled,
label=label, metadata=metadata)
changed = True
if should_delete:
alarm.delete()
changed = True
else:
should_create = True
if should_create:
alarm = cm.create_alarm(entity=entity_id, check=check_id,
notification_plan=notification_plan_id,
criteria=criteria, disabled=disabled, label=label,
metadata=metadata)
changed = True
else:
for a in existing:
a.delete()
changed = True
if alarm:
alarm_dict = {
"id": alarm.id,
"label": alarm.label,
"check_id": alarm.check_id,
"notification_plan_id": alarm.notification_plan_id,
"criteria": alarm.criteria,
"disabled": alarm.disabled,
"metadata": alarm.metadata
}
module.exit_json(changed=changed, alarm=alarm_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
entity_id=dict(required=True),
check_id=dict(required=True),
notification_plan_id=dict(required=True),
criteria=dict(),
disabled=dict(type='bool', default=False),
metadata=dict(type='dict')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
entity_id = module.params.get('entity_id')
check_id = module.params.get('check_id')
notification_plan_id = module.params.get('notification_plan_id')
criteria = module.params.get('criteria')
disabled = module.boolean(module.params.get('disabled'))
metadata = module.params.get('metadata')
setup_rax_module(module, pyrax)
alarm(module, state, label, entity_id, check_id, notification_plan_id,
criteria, disabled, metadata)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,313 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_check
short_description: Create or delete a Rackspace Cloud Monitoring check for an
existing entity.
description:
- Create or delete a Rackspace Cloud Monitoring check associated with an
existing rax_mon_entity. A check is a specific test or measurement that is
performed, possibly from different monitoring zones, on the systems you
monitor. Rackspace monitoring module flow | rax_mon_entity ->
*rax_mon_check* -> rax_mon_notification -> rax_mon_notification_plan ->
rax_mon_alarm
version_added: "2.0"
options:
state:
description:
- Ensure that a check with this C(label) exists or does not exist.
choices: ["present", "absent"]
entity_id:
description:
- ID of the rax_mon_entity to target with this check.
required: true
label:
description:
- Defines a label for this check, between 1 and 64 characters long.
required: true
check_type:
description:
- The type of check to create. C(remote.) checks may be created on any
rax_mon_entity. C(agent.) checks may only be created on rax_mon_entities
that have a non-null C(agent_id).
choices:
- remote.dns
- remote.ftp-banner
- remote.http
- remote.imap-banner
- remote.mssql-banner
- remote.mysql-banner
- remote.ping
- remote.pop3-banner
- remote.postgresql-banner
- remote.smtp-banner
- remote.smtp
- remote.ssh
- remote.tcp
- remote.telnet-banner
- agent.filesystem
- agent.memory
- agent.load_average
- agent.cpu
- agent.disk
- agent.network
- agent.plugin
required: true
monitoring_zones_poll:
description:
- Comma-separated list of the names of the monitoring zones the check should
run from. Available monitoring zones include mzdfw, mzhkg, mziad, mzlon,
mzord and mzsyd. Required for remote.* checks; prohibited for agent.* checks.
target_hostname:
description:
- One of `target_hostname` and `target_alias` is required for remote.* checks,
but prohibited for agent.* checks. The hostname this check should target.
Must be a valid IPv4, IPv6, or FQDN.
target_alias:
description:
- One of `target_alias` and `target_hostname` is required for remote.* checks,
but prohibited for agent.* checks. Use the corresponding key in the entity's
`ip_addresses` hash to resolve an IP address to target.
details:
description:
- Additional details specific to the check type. Must be a hash of strings
between 1 and 255 characters long, or an array or object containing 0 to
256 items.
disabled:
description:
- If "yes", ensure the check is created, but don't actually use it yet.
choices: [ "yes", "no" ]
metadata:
description:
- Hash of arbitrary key-value pairs to accompany this check if it fires.
Keys and values must be strings between 1 and 255 characters long.
period:
description:
- The number of seconds between each time the check is performed. Must be
greater than the minimum period set on your account.
timeout:
description:
- The number of seconds this check will wait when attempting to collect
results. Must be less than the period.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Create a monitoring check
gather_facts: False
hosts: local
connection: local
tasks:
- name: Associate a check with an existing entity.
rax_mon_check:
credentials: ~/.rax_pub
state: present
entity_id: "{{ the_entity['entity']['id'] }}"
label: the_check
check_type: remote.ping
monitoring_zones_poll: mziad,mzord,mzdfw
details:
count: 10
metadata:
hurf: durf
register: the_check
'''
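# A minimal illustrative sketch (not part of the module's documented EXAMPLES):
# per the option notes above, an agent.* check targets an entity with a non-null
# agent_id and omits monitoring_zones_poll, target_hostname and target_alias.
# Hypothetical task:
#
#   - name: Watch memory on an agent-enabled entity
#     rax_mon_check:
#       credentials: ~/.rax_pub
#       state: present
#       entity_id: "{{ the_entity['entity']['id'] }}"
#       label: memory_check
#       check_type: agent.memory
#     register: the_memory_check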
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def cloud_check(module, state, entity_id, label, check_type,
monitoring_zones_poll, target_hostname, target_alias, details,
disabled, metadata, period, timeout):
# Coerce attributes.
if monitoring_zones_poll and not isinstance(monitoring_zones_poll, list):
monitoring_zones_poll = [monitoring_zones_poll]
if period:
period = int(period)
if timeout:
timeout = int(timeout)
changed = False
check = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
entity = cm.get_entity(entity_id)
if not entity:
module.fail_json(msg='Failed to instantiate entity. "%s" may not be'
' a valid entity id.' % entity_id)
existing = [e for e in entity.list_checks() if e.label == label]
if existing:
check = existing[0]
if state == 'present':
if len(existing) > 1:
module.fail_json(msg='%s existing checks have a label of %s.' %
(len(existing), label))
should_delete = False
should_create = False
should_update = False
if check:
# Details may include keys set to default values that are not
# included in the initial creation.
#
# Only force a recreation of the check if one of the *specified*
# keys is missing or has a different value.
if details:
for (key, value) in details.iteritems():
if key not in check.details:
should_delete = should_create = True
elif value != check.details[key]:
should_delete = should_create = True
should_update = label != check.label or \
(target_hostname and target_hostname != check.target_hostname) or \
(target_alias and target_alias != check.target_alias) or \
(disabled != check.disabled) or \
(metadata and metadata != check.metadata) or \
(period and period != check.period) or \
(timeout and timeout != check.timeout) or \
(monitoring_zones_poll and monitoring_zones_poll != check.monitoring_zones_poll)
if should_update and not should_delete:
check.update(label=label,
disabled=disabled,
metadata=metadata,
monitoring_zones_poll=monitoring_zones_poll,
timeout=timeout,
period=period,
target_alias=target_alias,
target_hostname=target_hostname)
changed = True
else:
# The check doesn't exist yet.
should_create = True
if should_delete:
check.delete()
if should_create:
check = cm.create_check(entity,
label=label,
check_type=check_type,
target_hostname=target_hostname,
target_alias=target_alias,
monitoring_zones_poll=monitoring_zones_poll,
details=details,
disabled=disabled,
metadata=metadata,
period=period,
timeout=timeout)
changed = True
elif state == 'absent':
if check:
check.delete()
changed = True
else:
module.fail_json(msg='state must be either present or absent.')
if check:
check_dict = {
"id": check.id,
"label": check.label,
"type": check.type,
"target_hostname": check.target_hostname,
"target_alias": check.target_alias,
"monitoring_zones_poll": check.monitoring_zones_poll,
"details": check.details,
"disabled": check.disabled,
"metadata": check.metadata,
"period": check.period,
"timeout": check.timeout
}
module.exit_json(changed=changed, check=check_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
entity_id=dict(required=True),
label=dict(required=True),
check_type=dict(required=True),
monitoring_zones_poll=dict(),
target_hostname=dict(),
target_alias=dict(),
details=dict(type='dict', default={}),
disabled=dict(type='bool', default=False),
metadata=dict(type='dict', default={}),
period=dict(type='int'),
timeout=dict(type='int'),
state=dict(default='present', choices=['present', 'absent'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
entity_id = module.params.get('entity_id')
label = module.params.get('label')
check_type = module.params.get('check_type')
monitoring_zones_poll = module.params.get('monitoring_zones_poll')
target_hostname = module.params.get('target_hostname')
target_alias = module.params.get('target_alias')
details = module.params.get('details')
disabled = module.boolean(module.params.get('disabled'))
metadata = module.params.get('metadata')
period = module.params.get('period')
timeout = module.params.get('timeout')
state = module.params.get('state')
setup_rax_module(module, pyrax)
cloud_check(module, state, entity_id, label, check_type,
monitoring_zones_poll, target_hostname, target_alias, details,
disabled, metadata, period, timeout)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,192 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_entity
short_description: Create or delete a Rackspace Cloud Monitoring entity
description:
- Create or delete a Rackspace Cloud Monitoring entity, which represents a device
to monitor. Entities associate checks and alarms with a target system and
provide a convenient, centralized place to store IP addresses. Rackspace
monitoring module flow | *rax_mon_entity* -> rax_mon_check ->
rax_mon_notification -> rax_mon_notification_plan -> rax_mon_alarm
version_added: "2.0"
options:
label:
description:
- Defines a name for this entity. Must be a non-empty string between 1 and
255 characters long.
required: true
state:
description:
- Ensure that an entity with this C(name) exists or does not exist.
choices: ["present", "absent"]
agent_id:
description:
- Rackspace monitoring agent on the target device to which this entity is
bound. Necessary to collect C(agent.) rax_mon_checks against this entity.
named_ip_addresses:
description:
- Hash of IP addresses that may be referenced by name by rax_mon_checks
added to this entity. Must be a dictionary with keys that are names
between 1 and 64 characters long, and values that are valid IPv4 or IPv6
addresses.
metadata:
description:
- Hash of arbitrary C(name), C(value) pairs that are passed to associated
rax_mon_alarms. Names and values must all be between 1 and 255 characters
long.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Entity example
gather_facts: False
hosts: local
connection: local
tasks:
- name: Ensure an entity exists
rax_mon_entity:
credentials: ~/.rax_pub
state: present
label: my_entity
named_ip_addresses:
web_box: 192.168.0.10
db_box: 192.168.0.11
metadata:
hurf: durf
register: the_entity
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def cloud_monitoring(module, state, label, agent_id, named_ip_addresses,
metadata):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for entity in cm.list_entities():
if label == entity.label:
existing.append(entity)
entity = None
if existing:
entity = existing[0]
if state == 'present':
should_update = False
should_delete = False
should_create = False
if len(existing) > 1:
module.fail_json(msg='%s existing entities have the label %s.' %
(len(existing), label))
if entity:
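# named_ip_addresses cannot be changed on an existing entity, so a difference
# forces the entity to be deleted and recreated; agent_id and metadata changes
# are applied with an in-place update instead.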
if named_ip_addresses and named_ip_addresses != entity.ip_addresses:
should_delete = should_create = True
# Change an existing Entity, unless there's nothing to do.
should_update = agent_id and agent_id != entity.agent_id or \
(metadata and metadata != entity.metadata)
if should_update and not should_delete:
entity.update(agent_id, metadata)
changed = True
if should_delete:
entity.delete()
else:
should_create = True
if should_create:
# Create a new Entity.
entity = cm.create_entity(label=label, agent=agent_id,
ip_addresses=named_ip_addresses,
metadata=metadata)
changed = True
else:
# Delete the existing Entities.
for e in existing:
e.delete()
changed = True
if entity:
entity_dict = {
"id": entity.id,
"name": entity.name,
"agent_id": entity.agent_id,
}
module.exit_json(changed=changed, entity=entity_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
agent_id=dict(),
named_ip_addresses=dict(type='dict', default={}),
metadata=dict(type='dict', default={})
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
agent_id = module.params.get('agent_id')
named_ip_addresses = module.params.get('named_ip_addresses')
metadata = module.params.get('metadata')
setup_rax_module(module, pyrax)
cloud_monitoring(module, state, label, agent_id, named_ip_addresses, metadata)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,176 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_notification
short_description: Create or delete a Rackspace Cloud Monitoring notification.
description:
- Create or delete a Rackspace Cloud Monitoring notification that specifies a
channel that can be used to communicate alarms, such as email, webhooks, or
PagerDuty. Rackspace monitoring module flow | rax_mon_entity -> rax_mon_check ->
*rax_mon_notification* -> rax_mon_notification_plan -> rax_mon_alarm
version_added: "2.0"
options:
state:
description:
- Ensure that the notification with this C(label) exists or does not exist.
choices: ['present', 'absent']
label:
description:
- Defines a friendly name for this notification. String between 1 and 255
characters long.
required: true
notification_type:
description:
- A supported notification type.
choices: ["webhook", "email", "pagerduty"]
required: true
details:
description:
- Dictionary of key-value pairs used to initialize the notification.
Required keys and meanings vary with notification type. See
http://docs.rackspace.com/cm/api/v1.0/cm-devguide/content/
service-notification-types-crud.html for details.
required: true
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Monitoring notification example
gather_facts: False
hosts: local
connection: local
tasks:
- name: Email me when something goes wrong.
rax_mon_notification:
credentials: ~/.rax_pub
label: omg
notification_type: email
details:
address: me@mailhost.com
register: the_notification
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def notification(module, state, label, notification_type, details):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
notification = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for n in cm.list_notifications():
if n.label == label:
existing.append(n)
if existing:
notification = existing[0]
if state == 'present':
should_update = False
should_delete = False
should_create = False
if len(existing) > 1:
module.fail_json(msg='%s existing notifications are labelled %s.' %
(len(existing), label))
if notification:
should_delete = (notification_type != notification.type)
should_update = (details != notification.details)
if should_update and not should_delete:
notification.update(details=details)
changed = True
if should_delete:
notification.delete()
else:
should_create = True
if should_create:
notification = cm.create_notification(notification_type,
label=label, details=details)
changed = True
else:
for n in existing:
n.delete()
changed = True
if notification:
notification_dict = {
"id": notification.id,
"type": notification.type,
"label": notification.label,
"details": notification.details
}
module.exit_json(changed=changed, notification=notification_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
notification_type=dict(required=True, choices=['webhook', 'email', 'pagerduty']),
details=dict(required=True, type='dict')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
notification_type = module.params.get('notification_type')
details = module.params.get('details')
setup_rax_module(module, pyrax)
notification(module, state, label, notification_type, details)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,181 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_notification_plan
short_description: Create or delete a Rackspace Cloud Monitoring notification
plan.
description:
- Create or delete a Rackspace Cloud Monitoring notification plan by
associating existing rax_mon_notifications with severity levels. Rackspace
monitoring module flow | rax_mon_entity -> rax_mon_check ->
rax_mon_notification -> *rax_mon_notification_plan* -> rax_mon_alarm
version_added: "2.0"
options:
state:
description:
- Ensure that the notification plan with this C(label) exists or does not
exist.
choices: ['present', 'absent']
label:
description:
- Defines a friendly name for this notification plan. String between 1 and
255 characters long.
required: true
critical_state:
description:
- Notification list to use when the alarm state is CRITICAL. Must be an
array of valid rax_mon_notification ids.
warning_state:
description:
- Notification list to use when the alarm state is WARNING. Must be an array
of valid rax_mon_notification ids.
ok_state:
description:
- Notification list to use when the alarm state is OK. Must be an array of
valid rax_mon_notification ids.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Example notification plan
gather_facts: False
hosts: local
connection: local
tasks:
- name: Establish who gets called when.
rax_mon_notification_plan:
credentials: ~/.rax_pub
state: present
label: defcon1
critical_state:
- "{{ everyone['notification']['id'] }}"
warning_state:
- "{{ opsfloor['notification']['id'] }}"
register: defcon1
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def notification_plan(module, state, label, critical_state, warning_state, ok_state):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
notification_plan = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for n in cm.list_notification_plans():
if n.label == label:
existing.append(n)
if existing:
notification_plan = existing[0]
if state == 'present':
should_create = False
should_delete = False
if len(existing) > 1:
module.fail_json(msg='%s notification plans are labelled %s.' %
(len(existing), label))
if notification_plan:
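# Notification plans are never updated in place here: if any of the supplied
# critical/warning/ok notification lists differ from the existing plan, the
# plan is deleted and recreated with the new lists.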
should_delete = (critical_state and critical_state != notification_plan.critical_state) or \
(warning_state and warning_state != notification_plan.warning_state) or \
(ok_state and ok_state != notification_plan.ok_state)
if should_delete:
notification_plan.delete()
should_create = True
else:
should_create = True
if should_create:
notification_plan = cm.create_notification_plan(label=label,
critical_state=critical_state,
warning_state=warning_state,
ok_state=ok_state)
changed = True
else:
for np in existing:
np.delete()
changed = True
if notification_plan:
notification_plan_dict = {
"id": notification_plan.id,
"critical_state": notification_plan.critical_state,
"warning_state": notification_plan.warning_state,
"ok_state": notification_plan.ok_state,
"metadata": notification_plan.metadata
}
module.exit_json(changed=changed, notification_plan=notification_plan_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
critical_state=dict(type='list'),
warning_state=dict(type='list'),
ok_state=dict(type='list')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
critical_state = module.params.get('critical_state')
warning_state = module.params.get('warning_state')
ok_state = module.params.get('ok_state')
setup_rax_module(module, pyrax)
notification_plan(module, state, label, critical_state, warning_state, ok_state)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -25,10 +25,11 @@ short_description: Manage VMware vSphere Datacenters
description:
- Manage VMware vSphere Datacenters
version_added: 2.0
author: Joseph Callen
author: "Joseph Callen (@jcpowermac)"
notes:
- Tested on vSphere 5.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
hostname:

@ -0,0 +1,151 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Dag Wieers <dag@wieers.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: vsphere_copy
short_description: Copy a file to a vCenter datastore
description: Upload files to a vCenter datastore
version_added: 2.0
author: Dag Wieers (@dagwieers) <dag@wieers.com>
options:
host:
description:
- The vCenter server on which the datastore is available.
required: true
login:
description:
- The login name to authenticate on the vCenter server.
required: true
password:
description:
- The password to authenticate on the vCenter server.
required: true
src:
description:
- The file to push to vCenter
required: true
datacenter:
description:
- The datacenter on the vCenter server that holds the datastore.
required: true
datastore:
description:
- The datastore on the vCenter server to push files to.
required: true
path:
description:
- The file to push to the datastore on the vCenter server.
required: true
notes:
- "This module ought to be run from a system that can access vCenter directly and has the file to transfer.
It can be the normal remote target or you can change it either by using C(transport: local) or using C(delegate_to)."
- Tested on vSphere 5.5
'''
EXAMPLES = '''
- vsphere_copy: host=vhost login=vuser password=vpass src=/some/local/file datacenter='DC1 Someplace' datastore=datastore1 path=some/remote/file
transport: local
- vsphere_copy: host=vhost login=vuser password=vpass src=/other/local/file datacenter='DC2 Someplace' datastore=datastore2 path=other/remote/file
delegate_to: other_system
'''
import atexit
import base64
import httplib
import urllib
import mmap
import errno
import socket
def vmware_path(datastore, datacenter, path):
''' Constructs a URL path that VSphere accepts reliably '''
path = "/folder/%s" % path.lstrip("/")
if not path.startswith("/"):
path = "/" + path
params = dict( dsName = datastore )
if datacenter:
params["dcPath"] = datacenter
params = urllib.urlencode(params)
return "%s?%s" % (path, params)
def main():
module = AnsibleModule(
argument_spec = dict(
host = dict(required=True, aliases=[ 'hostname' ]),
login = dict(required=True, aliases=[ 'username' ]),
password = dict(required=True),
src = dict(required=True, aliases=[ 'name' ]),
datacenter = dict(required=True),
datastore = dict(required=True),
dest = dict(required=True, aliases=[ 'path' ]),
),
# Implementing check-mode using HEAD is impossible, since size/date is not 100% reliable
supports_check_mode = False,
)
host = module.params.get('host')
login = module.params.get('login')
password = module.params.get('password')
src = module.params.get('src')
datacenter = module.params.get('datacenter')
datastore = module.params.get('datastore')
dest = module.params.get('dest')
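# Memory-map the source file so its contents can be sent as the PUT body without
# reading the whole file into memory; file, mapping and connection are all closed
# automatically at interpreter exit via atexit.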
fd = open(src, "rb")
atexit.register(fd.close)
data = mmap.mmap(fd.fileno(), 0, access=mmap.ACCESS_READ)
atexit.register(data.close)
conn = httplib.HTTPSConnection(host)
atexit.register(conn.close)
remote_path = vmware_path(datastore, datacenter, dest)
auth = base64.encodestring('%s:%s' % (login, password)).rstrip()
headers = {
"Content-Type": "application/octet-stream",
"Content-Length": str(len(data)),
"Authorization": "Basic %s" % auth,
}
# URL is only used in JSON output (helps troubleshooting)
url = 'https://%s%s' % (host, remote_path)
try:
conn.request("PUT", remote_path, body=data, headers=headers)
except socket.error, e:
if isinstance(e.args, tuple) and e[0] == errno.ECONNRESET:
# VSphere resets connection if the file is in use and cannot be replaced
module.fail_json(msg='Failed to upload, image probably in use', status=e[0], reason=str(e), url=url)
else:
module.fail_json(msg=str(e), status=e[0], reason=str(e), url=url)
resp = conn.getresponse()
if resp.status in range(200, 300):
module.exit_json(changed=True, status=resp.status, reason=resp.reason, url=url)
else:
module.fail_json(msg='Failed to upload', status=resp.status, reason=resp.reason, length=resp.length, version=resp.version, headers=resp.getheaders(), chunked=resp.chunked, url=url)
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

@ -0,0 +1,180 @@
#! /usr/bin/python
#
# Create a Webfaction application using Ansible and the Webfaction API
#
# Valid application types can be found by looking here:
# http://docs.webfaction.com/xmlrpc-api/apps.html#application-types
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_app
short_description: Add or remove applications on a Webfaction host
description:
- Add or remove applications on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the application
required: true
state:
description:
- Whether the application should exist
required: false
choices: ['present', 'absent']
default: "present"
type:
description:
- The type of application to create. See the Webfaction docs at http://docs.webfaction.com/xmlrpc-api/apps.html for a list.
required: true
autostart:
description:
- Whether the app should restart with an autostart.cgi script
required: false
default: "no"
extra_info:
description:
- Any extra parameters required by the app
required: false
default: null
port_open:
description:
- Whether the application's port should be opened.
required: false
default: false
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
- name: Create a test app
webfaction_app:
name="my_wsgi_app1"
state=present
type=mod_wsgi35-python27
login_name={{webfaction_user}}
login_password={{webfaction_passwd}}
'''
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
type = dict(required=True),
autostart = dict(required=False, choices=BOOLEANS, default=False),
extra_info = dict(required=False, default=""),
port_open = dict(required=False, choices=BOOLEANS, default=False),
login_name = dict(required=True),
login_password = dict(required=True),
),
supports_check_mode=True
)
app_name = module.params['name']
app_type = module.params['type']
app_state = module.params['state']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
app_list = webfaction.list_apps(session_id)
app_map = dict([(i['name'], i) for i in app_list])
existing_app = app_map.get(app_name)
result = {}
# Here's where the real stuff happens
if app_state == 'present':
# Does an app with this name already exist?
if existing_app:
if existing_app['type'] != app_type:
module.fail_json(msg="App already exists with different type. Please fix by hand.")
# If it exists with the right type, we don't change it
# Should check other parameters.
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, create the app
result.update(
webfaction.create_app(
session_id, app_name, app_type,
module.boolean(module.params['autostart']),
module.params['extra_info'],
module.boolean(module.params['port_open'])
)
)
elif app_state == 'absent':
# If the app's already not there, nothing changed.
if not existing_app:
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, delete the app
result.update(
webfaction.delete_app(session_id, app_name)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(app_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,184 @@
#! /usr/bin/python
#
# Create a webfaction database using Ansible and the Webfaction API
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser and Andy Baker 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_db
short_description: Add or remove a database on Webfaction
description:
- Add or remove a database on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the database
required: true
state:
description:
- Whether the database should exist
required: false
choices: ['present', 'absent']
default: "present"
type:
description:
- The type of database to create.
required: true
choices: ['mysql', 'postgresql']
password:
description:
- The password for the new database user.
required: false
default: None
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
# This will also create a default DB user with the same
# name as the database, and the specified password.
- name: Create a database
webfaction_db:
name: "{{webfaction_user}}_db1"
password: mytestsql
type: mysql
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
# Note that, for symmetry's sake, deleting a database using
# 'state: absent' will also delete the matching user.
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
# The type of database to create: 'mysql' or 'postgresql'.
type = dict(required=True),
password = dict(required=False, default=None),
login_name = dict(required=True),
login_password = dict(required=True),
),
supports_check_mode=True
)
db_name = module.params['name']
db_state = module.params['state']
db_type = module.params['type']
db_passwd = module.params['password']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
db_list = webfaction.list_dbs(session_id)
db_map = dict([(i['name'], i) for i in db_list])
existing_db = db_map.get(db_name)
user_list = webfaction.list_db_users(session_id)
user_map = dict([(i['username'], i) for i in user_list])
existing_user = user_map.get(db_name)
result = {}
# Here's where the real stuff happens
if db_state == 'present':
# Does an database with this name already exist?
if existing_db:
# Yes, but of a different type - fail
if existing_db['db_type'] != db_type:
module.fail_json(msg="Database already exists but is a different type. Please fix by hand.")
# If it exists with the right type, we don't change anything.
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, create the db
# and default user.
result.update(
webfaction.create_db(
session_id, db_name, db_type, db_passwd
)
)
elif db_state == 'absent':
# If this isn't a dry run...
if not module.check_mode:
if not (existing_db or existing_user):
module.exit_json(changed = False,)
if existing_db:
# Delete the db if it exists
result.update(
webfaction.delete_db(session_id, db_name, db_type)
)
if existing_user:
# Delete the default db user if it exists
result.update(
webfaction.delete_db_user(session_id, db_name, db_type)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(db_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,171 @@
#! /usr/bin/python
#
# Create Webfaction domains and subdomains using Ansible and the Webfaction API
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_domain
short_description: Add or remove domains and subdomains on Webfaction
description:
- Add or remove domains or subdomains on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- If you are I(deleting) domains by using C(state=absent), then note that if you specify subdomains, just those particular subdomains will be deleted. If you don't specify subdomains, the domain will be deleted.
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the domain
required: true
state:
description:
- Whether the domain should exist
required: false
choices: ['present', 'absent']
default: "present"
subdomains:
description:
- Any subdomains to create.
required: false
default: null
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
- name: Create a test domain
webfaction_domain:
name: mydomain.com
state: present
subdomains:
- www
- blog
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
- name: Delete test domain and any subdomains
webfaction_domain:
name: mydomain.com
state: absent
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
subdomains = dict(required=False, default=[]),
login_name = dict(required=True),
login_password = dict(required=True),
),
supports_check_mode=True
)
domain_name = module.params['name']
domain_state = module.params['state']
domain_subdomains = module.params['subdomains']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
domain_list = webfaction.list_domains(session_id)
domain_map = dict([(i['domain'], i) for i in domain_list])
existing_domain = domain_map.get(domain_name)
result = {}
# Here's where the real stuff happens
if domain_state == 'present':
# Does an app with this name already exist?
if existing_domain:
if set(existing_domain['subdomains']) >= set(domain_subdomains):
# If it exists with the right subdomains, we don't change anything.
module.exit_json(
changed = False,
)
positional_args = [session_id, domain_name] + domain_subdomains
if not module.check_mode:
# If this isn't a dry run, create the app
# print positional_args
result.update(
webfaction.create_domain(
*positional_args
)
)
elif domain_state == 'absent':
# If the app's already not there, nothing changed.
if not existing_domain:
module.exit_json(
changed = False,
)
positional_args = [session_id, domain_name] + domain_subdomains
if not module.check_mode:
# If this isn't a dry run, delete the app
result.update(
webfaction.delete_domain(*positional_args)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(domain_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,139 @@
#! /usr/bin/python
#
# Create webfaction mailbox using Ansible and the Webfaction API
#
# ------------------------------------------
# (c) Quentin Stafford-Fraser and Andy Baker 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_mailbox
short_description: Add or remove mailboxes on Webfaction
description:
- Add or remove mailboxes on a Webfaction account. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
mailbox_name:
description:
- The name of the mailbox
required: true
mailbox_password:
description:
- The password for the mailbox
required: true
default: null
state:
description:
- Whether the mailbox should exist
required: false
choices: ['present', 'absent']
default: "present"
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
- name: Create a mailbox
webfaction_mailbox:
mailbox_name="mybox"
mailbox_password="myboxpw"
state=present
login_name={{webfaction_user}}
login_password={{webfaction_passwd}}
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec=dict(
mailbox_name=dict(required=True),
mailbox_password=dict(required=True),
state=dict(required=False, choices=['present', 'absent'], default='present'),
login_name=dict(required=True),
login_password=dict(required=True),
),
supports_check_mode=True
)
mailbox_name = module.params['mailbox_name']
site_state = module.params['state']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
mailbox_list = webfaction.list_mailboxes(session_id)
existing_mailbox = mailbox_name in mailbox_list
result = {}
# Here's where the real stuff happens
if site_state == 'present':
# Does a mailbox with this name already exist?
if existing_mailbox:
module.exit_json(changed=False,)
positional_args = [session_id, mailbox_name]
if not module.check_mode:
# If this isn't a dry run, create the mailbox
result.update(webfaction.create_mailbox(*positional_args))
elif site_state == 'absent':
# If the mailbox is already not there, nothing changed.
if not existing_mailbox:
module.exit_json(changed=False)
if not module.check_mode:
# If this isn't a dry run, delete the mailbox
result.update(webfaction.delete_mailbox(session_id, mailbox_name))
else:
module.fail_json(msg="Unknown state specified: {}".format(site_state))
module.exit_json(changed=True, result=result)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,208 @@
#! /usr/bin/python
#
# Create Webfaction website using Ansible and the Webfaction API
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_site
short_description: Add or remove a website on a Webfaction host
description:
- Add or remove a website on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- Sadly, you I(do) need to know your webfaction hostname for the C(host) parameter. But at least, unlike the API, you don't need to know the IP address - you can use a DNS name.
- If a site of the same name exists in the account but on a different host, the operation will exit.
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the website
required: true
state:
description:
- Whether the website should exist
required: false
choices: ['present', 'absent']
default: "present"
host:
description:
- The webfaction host on which the site should be created.
required: true
https:
description:
- Whether or not to use HTTPS
required: false
choices: BOOLEANS
default: 'false'
site_apps:
description:
- A mapping of URLs to apps
required: false
subdomains:
description:
- A list of subdomains associated with this site.
required: false
default: null
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
- name: create website
webfaction_site:
name: testsite1
state: present
host: myhost.webfaction.com
subdomains:
- 'testsite1.my_domain.org'
site_apps:
- ['testapp1', '/']
https: no
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
# You can specify an IP address or hostname.
host = dict(required=True),
https = dict(required=False, choices=BOOLEANS, default=False),
subdomains = dict(required=False, default=[]),
site_apps = dict(required=False, default=[]),
login_name = dict(required=True),
login_password = dict(required=True),
),
supports_check_mode=True
)
site_name = module.params['name']
site_state = module.params['state']
site_host = module.params['host']
site_ip = socket.gethostbyname(site_host)
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
site_list = webfaction.list_websites(session_id)
site_map = dict([(i['name'], i) for i in site_list])
existing_site = site_map.get(site_name)
result = {}
# Here's where the real stuff happens
if site_state == 'present':
# Does a site with this name already exist?
if existing_site:
# If yes, but it's on a different IP address, then fail.
# If we wanted to allow relocation, we could add a 'relocate=true' option
# which would get the existing IP address, delete the site there, and create it
# at the new address. A bit dangerous, perhaps, so for now we'll require manual
# deletion if it's on another host.
if existing_site['ip'] != site_ip:
module.fail_json(msg="Website already exists with a different IP address. Please fix by hand.")
# If it's on this host and the key parameters are the same, nothing needs to be done.
if (existing_site['https'] == module.boolean(module.params['https'])) and \
(set(existing_site['subdomains']) == set(module.params['subdomains'])) and \
(dict(existing_site['website_apps']) == dict(module.params['site_apps'])):
module.exit_json(
changed = False
)
positional_args = [
session_id, site_name, site_ip,
module.boolean(module.params['https']),
module.params['subdomains'],
]
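# site_apps is supplied as a list of [app_name, url_path] pairs (see the example
# above); each pair is converted to a tuple before being passed to the API call.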
for a in module.params['site_apps']:
positional_args.append( (a[0], a[1]) )
if not module.check_mode:
# If this isn't a dry run, create or modify the site
result.update(
webfaction.create_website(
*positional_args
) if not existing_site else webfaction.update_website(
*positional_args
)
)
elif site_state == 'absent':
# If the site's already not there, nothing changed.
if not existing_site:
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, delete the site
result.update(
webfaction.delete_website(session_id, site_name, site_ip)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(site_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -20,7 +20,7 @@
DOCUMENTATION = """
module: consul
short_description: "Add, modify & delete services within a consul cluster.
See http://conul.io for more details."
See http://consul.io for more details."
description:
- registers services and checks for an agent with a consul cluster. A service
is some process running on the agent node that should be advertised by
@ -38,10 +38,11 @@ description:
changed occurred. An api method is planned to supply this metadata so at that
stage change management will be added.
requirements:
- "python >= 2.6"
- python-consul
- requests
version_added: "1.9"
author: Steve Gargan (steve.gargan@gmail.com)
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
state:
description:

@ -25,11 +25,12 @@ description:
rules in a consul cluster via the agent. For more details on using and
configuring ACLs, see https://www.consul.io/docs/internals/acl.html.
requirements:
- "python >= 2.6"
- python-consul
- pyhcl
- requests
version_added: "1.9"
author: Steve Gargan (steve.gargan@gmail.com)
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
mgmt_token:
description:

@ -28,10 +28,11 @@ description:
represents a prefix then Note that when a value is removed, the existing
value if any is returned as part of the results.
requirements:
- "python >= 2.6"
- python-consul
- requests
version_added: "1.9"
author: Steve Gargan (steve.gargan@gmail.com)
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
state:
description:

@ -26,10 +26,11 @@ description:
to implement distributed locks. In depth documentation for working with
sessions can be found here http://www.consul.io/docs/internals/sessions.html
requirements:
- "python >= 2.6"
- python-consul
- requests
version_added: "1.9"
author: Steve Gargan (steve.gargan@gmail.com)
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
state:
description:

@ -0,0 +1,177 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import datetime
try:
import pexpect
HAS_PEXPECT = True
except ImportError:
HAS_PEXPECT = False
DOCUMENTATION = '''
---
module: expect
version_added: 2.0
short_description: Executes a command and responds to prompts
description:
- The M(expect) module executes a command and responds to prompts
- The given command will be executed on all selected nodes. It will not be
processed through the shell, so variables like C($HOME) and operations
like C("<"), C(">"), C("|"), and C("&") will not work
options:
command:
description:
- The command to run.
required: true
creates:
description:
- a filename, when it already exists, this step will B(not) be run.
required: false
removes:
description:
- a filename, when it does not exist, this step will B(not) be run.
required: false
chdir:
description:
- cd into this directory before running the command
required: false
responses:
description:
- Mapping of expected string and string to respond with
required: true
timeout:
description:
- Amount of time in seconds to wait for the expected strings
default: 30
echo:
description:
- Whether or not to echo out your response strings
default: false
requirements:
- python >= 2.6
- pexpect >= 3.3
notes:
- If you want to run a command through the shell (say you are using C(<),
C(>), C(|), etc), you must specify a shell in the command such as
C(/bin/bash -c "/path/to/something | grep else")
author: "Matt Martz (@sivel)"
'''
EXAMPLES = '''
- expect:
command: passwd username
responses:
(?i)password: "MySekretPa$$word"
'''
def main():
module = AnsibleModule(
argument_spec=dict(
command=dict(required=True),
chdir=dict(),
creates=dict(),
removes=dict(),
responses=dict(type='dict', required=True),
timeout=dict(type='int', default=30),
echo=dict(type='bool', default=False),
)
)
if not HAS_PEXPECT:
module.fail_json(msg='The pexpect python module is required')
chdir = module.params['chdir']
args = module.params['command']
creates = module.params['creates']
removes = module.params['removes']
responses = module.params['responses']
timeout = module.params['timeout']
echo = module.params['echo']
events = dict()
for key, value in responses.iteritems():
events[key.decode()] = u'%s\n' % value.rstrip('\n').decode()
if args.strip() == '':
module.fail_json(rc=256, msg="no command given")
if chdir:
chdir = os.path.abspath(os.path.expanduser(chdir))
os.chdir(chdir)
if creates:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of command executions.
v = os.path.expanduser(creates)
if os.path.exists(v):
module.exit_json(
cmd=args,
stdout="skipped, since %s exists" % v,
changed=False,
stderr=False,
rc=0
)
if removes:
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of command executions.
v = os.path.expanduser(removes)
if not os.path.exists(v):
module.exit_json(
cmd=args,
stdout="skipped, since %s does not exist" % v,
changed=False,
stderr=False,
rc=0
)
startd = datetime.datetime.now()
try:
out, rc = pexpect.runu(args, timeout=timeout, withexitstatus=True,
events=events, cwd=chdir, echo=echo)
except pexpect.ExceptionPexpect, e:
module.fail_json(msg='%s' % e)
endd = datetime.datetime.now()
delta = endd - startd
if out is None:
out = ''
module.exit_json(
cmd=args,
stdout=out.rstrip('\r\n'),
rc=rc,
start=str(startd),
end=str(endd),
delta=str(delta),
changed=True,
)
# import module snippets
from ansible.module_utils.basic import *
main()
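A small usage sketch for the expect module above, illustrating the shell wrapping described in its notes; the command, paths, and prompt regex are hypothetical placeholders, not part of the module's EXAMPLES:
- expect:
    command: /bin/bash -c "/path/to/installer.sh | tee /tmp/install.log"
    responses:
      (?i)are you sure: "yes"
    timeout: 120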

@ -87,11 +87,19 @@ options:
required: false
default: present
choices: [ "present", "absent" ]
update_password:
required: false
default: always
choices: ['always', 'on_create']
version_added: "2.1"
description:
- C(always) will update passwords if they differ. C(on_create) will only set the password for newly created users.
notes:
- Requires the pymongo Python package on the remote host, version 2.4.2+. This
can be installed using pip or the OS package manager. @see http://api.mongodb.org/python/current/installation.html
requirements: [ "pymongo" ]
author: Elliott Foster
author: "Elliott Foster (@elliotttf)"
'''
EXAMPLES = '''
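Not part of the module's EXAMPLES block: a hedged sketch of the new update_password option (database, user, and password values are placeholders):
- mongodb_user: database=burgers name=bob password=12345 state=present update_password=on_create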
@ -134,7 +142,15 @@ else:
# MongoDB module specific support methods.
#
def user_find(client, user):
for mongo_user in client["admin"].system.users.find():
if mongo_user['user'] == user:
return mongo_user
return False
def user_add(module, client, db_name, user, password, roles):
#pymongo's user_add is a _create_or_update_user so we won't know if the user was created or updated
#without reproducing a lot of the logic in database.py of pymongo
db = client[db_name]
if roles is None:
db.add_user(user, password, False)
@ -147,9 +163,13 @@ def user_add(module, client, db_name, user, password, roles):
err_msg = err_msg + ' (Note: you must be on mongodb 2.4+ and pymongo 2.5+ to use the roles param)'
module.fail_json(msg=err_msg)
def user_remove(client, db_name, user):
db = client[db_name]
db.remove_user(user)
def user_remove(module, client, db_name, user):
exists = user_find(client, user)
if exists:
db = client[db_name]
db.remove_user(user)
else:
module.exit_json(changed=False, user=user)
def load_mongocnf():
config = ConfigParser.RawConfigParser()
@ -184,6 +204,7 @@ def main():
ssl=dict(default=False),
roles=dict(default=None, type='list'),
state=dict(default='present', choices=['absent', 'present']),
update_password=dict(default="always", choices=["always", "on_create"]),
)
)
@ -201,6 +222,7 @@ def main():
ssl = module.params['ssl']
roles = module.params['roles']
state = module.params['state']
update_password = module.params['update_password']
try:
if replica_set:
@ -208,32 +230,30 @@ def main():
else:
client = MongoClient(login_host, int(login_port), ssl=ssl)
# try to authenticate as a target user to check if it already exists
try:
client[db_name].authenticate(user, password)
if state == 'present':
module.exit_json(changed=False, user=user)
except OperationFailure:
if state == 'absent':
module.exit_json(changed=False, user=user)
if login_user is None and login_password is None:
mongocnf_creds = load_mongocnf()
if mongocnf_creds is not False:
login_user = mongocnf_creds['user']
login_password = mongocnf_creds['password']
elif login_password is None and login_user is not None:
elif login_password is None or login_user is None:
module.fail_json(msg='when supplying login arguments, both login_user and login_password must be provided')
if login_user is not None and login_password is not None:
client.admin.authenticate(login_user, login_password)
elif LooseVersion(PyMongoVersion) >= LooseVersion('3.0'):
if db_name != "admin":
module.fail_json(msg='The localhost login exception only allows the first admin account to be created')
#else: this has to be the first admin user added
except ConnectionFailure, e:
module.fail_json(msg='unable to connect to database: %s' % str(e))
if state == 'present':
if password is None:
module.fail_json(msg='password parameter required when adding a user')
if password is None and update_password == 'always':
module.fail_json(msg='password parameter required when adding a user unless update_password is set to on_create')
if update_password != 'always' and user_find(client, user):
password = None
try:
user_add(module, client, db_name, user, password, roles)
@ -242,7 +262,7 @@ def main():
elif state == 'absent':
try:
user_remove(client, db_name, user)
user_remove(module, client, db_name, user)
except OperationFailure, e:
module.fail_json(msg='Unable to remove user: %s' % str(e))

@ -98,7 +98,7 @@ notes:
this needs to be in the redis.conf in the masterauth variable
requirements: [ redis ]
author: Xabier Larrakoetxea
author: "Xabier Larrakoetxea (@slok)"
'''
EXAMPLES = '''

@ -26,6 +26,9 @@ description:
- This module can be used to join nodes to a cluster, check
the status of the cluster.
version_added: "1.2"
author:
- "James Martin (@jsmartin)"
- "Drew Kerrigan (@drewkerrigan)"
options:
command:
description:

@ -30,6 +30,7 @@ short_description: Manage MySQL replication
description:
- Manages MySQL server replication, slave, master status get and change master host.
version_added: "1.3"
author: "Balazs Pocze (@banyek)"
options:
mode:
description:
@ -93,7 +94,7 @@ options:
master_ssl:
description:
- same as mysql variable
possible values: 0,1
choices: [ 0, 1 ]
master_ssl_ca:
description:
- same as mysql variable
@ -109,7 +110,12 @@ options:
master_ssl_cipher:
description:
- same as mysql variable
master_auto_position:
description:
      - Whether the host uses GTID-based replication or not
required: false
default: null
version_added: "2.0"
'''
EXAMPLES = '''
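Not part of the module's EXAMPLES block: a hedged sketch of the new master_auto_position option used together with changemaster (host and credentials are placeholders):
- mysql_replication: mode=changemaster master_host=192.0.2.1 master_user=repl master_password=secret master_auto_position=yes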
@ -242,6 +248,7 @@ def main():
login_port=dict(default=3306, type='int'),
login_unix_socket=dict(default=None),
mode=dict(default="getslave", choices=["getmaster", "getslave", "changemaster", "stopslave", "startslave"]),
master_auto_position=dict(default=False, type='bool'),
master_host=dict(default=None),
master_user=dict(default=None),
master_password=dict(default=None),
@ -279,6 +286,7 @@ def main():
master_ssl_cert = module.params["master_ssl_cert"]
master_ssl_key = module.params["master_ssl_key"]
master_ssl_cipher = module.params["master_ssl_cipher"]
master_auto_position = module.params["master_auto_position"]
if not mysqldb_found:
module.fail_json(msg="the python mysqldb module is required")
@ -376,6 +384,8 @@ def main():
if master_ssl_cipher:
chm.append("MASTER_SSL_CIPHER=%(master_ssl_cipher)s")
chm_params['master_ssl_cipher'] = master_ssl_cipher
if master_auto_position:
chm.append("MASTER_AUTO_POSITION = 1")
changemaster(cursor, chm, chm_params)
module.exit_json(changed=True)
elif mode in "startslave":

@ -65,7 +65,7 @@ notes:
- This module uses I(psycopg2), a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed on
the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the C(postgresql), C(libpq-dev), and C(python-psycopg2) packages on the remote host before using this module.
requirements: [ psycopg2 ]
author: Daniel Schep
author: "Daniel Schep (@dschep)"
'''
EXAMPLES = '''

@ -95,7 +95,7 @@ notes:
systems, install the postgresql, libpq-dev, and python-psycopg2 packages
on the remote host before using this module.
requirements: [ psycopg2 ]
author: Jens Depuydt
author: "Jens Depuydt (@jensdepuydt)"
'''
EXAMPLES = '''

@ -67,7 +67,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek
author: "Dariusz Owczarek (@dareko)"
"""
EXAMPLES = """

@ -59,7 +59,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek
author: "Dariusz Owczarek (@dareko)"
"""
EXAMPLES = """

@ -75,7 +75,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek
author: "Dariusz Owczarek (@dareko)"
"""
EXAMPLES = """

@ -91,7 +91,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek
author: "Dariusz Owczarek (@dareko)"
"""
EXAMPLES = """

@ -107,7 +107,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek
author: "Dariusz Owczarek (@dareko)"
"""
EXAMPLES = """
@ -233,7 +233,10 @@ def present(user_facts, cursor, user, profile, resource_pool,
changed = False
query_fragments = ["alter user {0}".format(user)]
if locked is not None and locked != (user_facts[user_key]['locked'] == 'True'):
state = 'lock' if locked else 'unlock'
if locked:
state = 'lock'
else:
state = 'unlock'
query_fragments.append("account {0}".format(state))
changed = True
if password and password != user_facts[user_key]['password']:

@ -22,7 +22,9 @@
DOCUMENTATION = '''
---
module: patch
author: Luis Alberto Perez Lazaro, Jakub Jirutka
author:
- "Jakub Jirutka (@jirutka)"
- "Luis Alberto Perez Lazaro (@luisperlaz)"
version_added: 1.9
description:
- Apply patch files using the GNU patch tool.
@ -63,6 +65,14 @@ options:
required: false
type: "int"
default: "0"
backup:
version_added: "2.0"
description:
- passes --backup --version-control=numbered to patch,
producing numbered backup copies
required: false
type: "bool"
default: "False"
note:
- This module requires GNU I(patch) utility to be installed on the remote host.
'''
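A hedged sketch of the new backup option for the patch module (file names and paths are placeholders):
- patch: src=fix-build.patch basedir=/opt/app strip=1 backup=yes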
@ -99,7 +109,7 @@ def is_already_applied(patch_func, patch_file, basedir, dest_file=None, strip=0)
return rc == 0
def apply_patch(patch_func, patch_file, basedir, dest_file=None, strip=0, dry_run=False):
def apply_patch(patch_func, patch_file, basedir, dest_file=None, strip=0, dry_run=False, backup=False):
opts = ['--quiet', '--forward', '--batch', '--reject-file=-',
"--strip=%s" % strip, "--directory='%s'" % basedir,
"--input='%s'" % patch_file]
@ -107,10 +117,12 @@ def apply_patch(patch_func, patch_file, basedir, dest_file=None, strip=0, dry_ru
opts.append('--dry-run')
if dest_file:
opts.append("'%s'" % dest_file)
if backup:
opts.append('--backup --version-control=numbered')
(rc, out, err) = patch_func(opts)
if rc != 0:
msg = out if not err else err
msg = err or out
raise PatchError(msg)
@ -122,6 +134,9 @@ def main():
'basedir': {},
'strip': {'default': 0, 'type': 'int'},
'remote_src': {'default': False, 'type': 'bool'},
            # NB: for the 'backup' parameter, the semantics are slightly different from the standard,
            # since patch will create numbered copies, not strftime("%Y-%m-%d@%H:%M:%S~") suffixed ones
'backup': { 'default': False, 'type': 'bool' }
},
required_one_of=[['dest', 'basedir']],
supports_check_mode=True
@ -154,8 +169,8 @@ def main():
changed = False
if not is_already_applied(patch_func, p.src, p.basedir, dest_file=p.dest, strip=p.strip):
try:
apply_patch(patch_func, p.src, p.basedir, dest_file=p.dest, strip=p.strip,
dry_run=module.check_mode)
apply_patch( patch_func, p.src, p.basedir, dest_file=p.dest, strip=p.strip,
dry_run=module.check_mode, backup=p.backup )
changed = True
except PatchError, e:
module.fail_json(msg=str(e))

@ -0,0 +1,214 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Manuel Sousa <manuel.sousa@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: rabbitmq_binding
author: "Manuel Sousa (@manuel-sousa)"
version_added: "2.0"
short_description: This module manages rabbitMQ bindings
description:
- This module uses rabbitMQ Rest API to create/delete bindings
requirements: [ python requests ]
options:
state:
description:
- Whether the exchange should be present or absent
            - Only present is implemented at the moment
choices: [ "present", "absent" ]
required: false
default: present
name:
description:
- source exchange to create binding on
required: true
aliases: [ "src", "source" ]
login_user:
description:
- rabbitMQ user for connection
required: false
default: guest
login_password:
description:
- rabbitMQ password for connection
required: false
        default: guest
login_host:
description:
- rabbitMQ host for connection
required: false
default: localhost
login_port:
description:
- rabbitMQ management api port
required: false
default: 15672
vhost:
description:
- rabbitMQ virtual host
- default vhost is /
required: false
default: "/"
destination:
description:
- destination exchange or queue for the binding
required: true
aliases: [ "dst", "dest" ]
destination_type:
description:
- Either queue or exchange
required: true
choices: [ "queue", "exchange" ]
aliases: [ "type", "dest_type" ]
routing_key:
description:
- routing key for the binding
- default is #
required: false
default: "#"
arguments:
description:
- extra arguments for exchange. If defined this argument is a key/value dictionary
required: false
default: {}
'''
EXAMPLES = '''
# Bind myQueue to directExchange with routing key info
- rabbitmq_binding: name=directExchange destination=myQueue type=queue routing_key=info
# Bind directExchange to topicExchange with routing key *.info
- rabbitmq_binding: name=topicExchange destination=topicExchange type=exchange routing_key="*.info"
'''
import requests
import urllib
import json
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(default='present', choices=['present', 'absent'], type='str'),
name = dict(required=True, aliases=[ "src", "source" ], type='str'),
login_user = dict(default='guest', type='str'),
login_password = dict(default='guest', type='str', no_log=True),
login_host = dict(default='localhost', type='str'),
login_port = dict(default='15672', type='str'),
vhost = dict(default='/', type='str'),
destination = dict(required=True, aliases=[ "dst", "dest"], type='str'),
destination_type = dict(required=True, aliases=[ "type", "dest_type"], choices=[ "queue", "exchange" ],type='str'),
routing_key = dict(default='#', type='str'),
arguments = dict(default=dict(), type='dict')
),
supports_check_mode = True
)
if module.params['destination_type'] == "queue":
dest_type="q"
else:
dest_type="e"
url = "http://%s:%s/api/bindings/%s/e/%s/%s/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name'],
dest_type,
module.params['destination'],
urllib.quote(module.params['routing_key'],'')
)
    # Check if binding already exists
r = requests.get( url, auth=(module.params['login_user'],module.params['login_password']))
if r.status_code==200:
binding_exists = True
response = r.json()
elif r.status_code==404:
binding_exists = False
response = r.text
else:
module.fail_json(
msg = "Invalid response from RESTAPI when trying to check if exchange exists",
details = r.text
)
if module.params['state']=='present':
change_required = not binding_exists
else:
change_required = binding_exists
# Exit if check_mode
if module.check_mode:
module.exit_json(
changed= change_required,
name = module.params['name'],
details = response,
arguments = module.params['arguments']
)
# Do changes
if change_required:
if module.params['state'] == 'present':
url = "http://%s:%s/api/bindings/%s/e/%s/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name'],
dest_type,
module.params['destination']
)
r = requests.post(
url,
auth = (module.params['login_user'],module.params['login_password']),
headers = { "content-type": "application/json"},
data = json.dumps({
"routing_key": module.params['routing_key'],
"arguments": module.params['arguments']
})
)
elif module.params['state'] == 'absent':
r = requests.delete( url, auth = (module.params['login_user'],module.params['login_password']))
if r.status_code == 204 or r.status_code == 201:
module.exit_json(
changed = True,
name = module.params['name'],
destination = module.params['destination']
)
else:
module.fail_json(
msg = "Error creating exchange",
status = r.status_code,
details = r.text
)
else:
module.exit_json(
changed = False,
name = module.params['name']
)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -0,0 +1,218 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Manuel Sousa <manuel.sousa@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: rabbitmq_exchange
author: "Manuel Sousa (@manuel-sousa)"
version_added: "2.0"
short_description: This module manages rabbitMQ exchanges
description:
- This module uses rabbitMQ Rest API to create/delete exchanges
requirements: [ python requests ]
options:
name:
description:
- Name of the exchange to create
required: true
state:
description:
- Whether the exchange should be present or absent
            - Only present is implemented at the moment
choices: [ "present", "absent" ]
required: false
default: present
login_user:
description:
- rabbitMQ user for connection
required: false
default: guest
login_password:
description:
- rabbitMQ password for connection
required: false
        default: guest
login_host:
description:
- rabbitMQ host for connection
required: false
default: localhost
login_port:
description:
- rabbitMQ management api port
required: false
default: 15672
vhost:
description:
- rabbitMQ virtual host
required: false
default: "/"
durable:
description:
- whether exchange is durable or not
required: false
choices: [ "yes", "no" ]
default: yes
exchange_type:
description:
- type for the exchange
required: false
choices: [ "fanout", "direct", "headers", "topic" ]
aliases: [ "type" ]
default: direct
auto_delete:
description:
            - if the exchange should delete itself after all queues/exchanges have unbound from it
required: false
choices: [ "yes", "no" ]
default: no
internal:
description:
- exchange is available only for other exchanges
required: false
choices: [ "yes", "no" ]
default: no
arguments:
description:
- extra arguments for exchange. If defined this argument is a key/value dictionary
required: false
default: {}
'''
EXAMPLES = '''
# Create direct exchange
- rabbitmq_exchange: name=directExchange
# Create topic exchange on vhost
- rabbitmq_exchange: name=topicExchange type=topic vhost=myVhost
'''
import requests
import urllib
import json
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(default='present', choices=['present', 'absent'], type='str'),
name = dict(required=True, type='str'),
login_user = dict(default='guest', type='str'),
login_password = dict(default='guest', type='str', no_log=True),
login_host = dict(default='localhost', type='str'),
login_port = dict(default='15672', type='str'),
vhost = dict(default='/', type='str'),
durable = dict(default=True, choices=BOOLEANS, type='bool'),
auto_delete = dict(default=False, choices=BOOLEANS, type='bool'),
internal = dict(default=False, choices=BOOLEANS, type='bool'),
exchange_type = dict(default='direct', aliases=['type'], type='str'),
arguments = dict(default=dict(), type='dict')
),
supports_check_mode = True
)
url = "http://%s:%s/api/exchanges/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name']
)
# Check if exchange already exists
r = requests.get( url, auth=(module.params['login_user'],module.params['login_password']))
if r.status_code==200:
exchange_exists = True
response = r.json()
elif r.status_code==404:
exchange_exists = False
response = r.text
else:
module.fail_json(
msg = "Invalid response from RESTAPI when trying to check if exchange exists",
details = r.text
)
if module.params['state']=='present':
change_required = not exchange_exists
else:
change_required = exchange_exists
# Check if attributes change on existing exchange
if not change_required and r.status_code==200 and module.params['state'] == 'present':
if not (
response['durable'] == module.params['durable'] and
response['auto_delete'] == module.params['auto_delete'] and
response['internal'] == module.params['internal'] and
response['type'] == module.params['exchange_type']
):
module.fail_json(
msg = "RabbitMQ RESTAPI doesn't support attribute changes for existing exchanges"
)
# Exit if check_mode
if module.check_mode:
module.exit_json(
changed= change_required,
name = module.params['name'],
details = response,
arguments = module.params['arguments']
)
# Do changes
if change_required:
if module.params['state'] == 'present':
r = requests.put(
url,
auth = (module.params['login_user'],module.params['login_password']),
headers = { "content-type": "application/json"},
data = json.dumps({
"durable": module.params['durable'],
"auto_delete": module.params['auto_delete'],
"internal": module.params['internal'],
"type": module.params['exchange_type'],
"arguments": module.params['arguments']
})
)
elif module.params['state'] == 'absent':
r = requests.delete( url, auth = (module.params['login_user'],module.params['login_password']))
if r.status_code == 204:
module.exit_json(
changed = True,
name = module.params['name']
)
else:
module.fail_json(
msg = "Error creating exchange",
status = r.status_code,
details = r.text
)
else:
module.exit_json(
changed = False,
name = module.params['name']
)
# import module snippets
from ansible.module_utils.basic import *
main()
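A hedged sketch for the rabbitmq_exchange module above, combining the durable, internal, and auto_delete flags documented in its options (exchange name is a placeholder):
- rabbitmq_exchange: name=internalFanout type=fanout internal=yes durable=yes auto_delete=no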

@ -25,7 +25,7 @@ short_description: Adds or removes parameters to RabbitMQ
description:
- Manage dynamic, cluster-wide parameters for RabbitMQ
version_added: "1.1"
author: Chris Hoffman
author: '"Chris Hoffman (@chrishoffman)"'
options:
component:
description:

@ -25,7 +25,7 @@ short_description: Adds or removes plugins to RabbitMQ
description:
- Enables or disables RabbitMQ plugins
version_added: "1.1"
author: Chris Hoffman
author: '"Chris Hoffman (@chrishoffman)"'
options:
names:
description:

@ -26,7 +26,7 @@ short_description: Manage the state of policies in RabbitMQ.
description:
- Manage the state of a virtual host in RabbitMQ.
version_added: "1.5"
author: John Dewey
author: "John Dewey (@retr0h)"
options:
name:
description:

@ -0,0 +1,263 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Manuel Sousa <manuel.sousa@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: rabbitmq_queue
author: "Manuel Sousa (@manuel-sousa)"
version_added: "2.0"
short_description: This module manages rabbitMQ queues
description:
- This module uses rabbitMQ Rest API to create/delete queues
requirements: [ python requests ]
options:
name:
description:
- Name of the queue to create
required: true
state:
description:
- Whether the queue should be present or absent
            - Only present is implemented at the moment
choices: [ "present", "absent" ]
required: false
default: present
login_user:
description:
- rabbitMQ user for connection
required: false
default: guest
login_password:
description:
- rabbitMQ password for connection
required: false
        default: guest
login_host:
description:
- rabbitMQ host for connection
required: false
default: localhost
login_port:
description:
- rabbitMQ management api port
required: false
default: 15672
vhost:
description:
- rabbitMQ virtual host
required: false
default: "/"
durable:
description:
- whether queue is durable or not
required: false
choices: [ "yes", "no" ]
default: yes
auto_delete:
description:
            - if the queue should delete itself after all consumers have finished using it
required: false
choices: [ "yes", "no" ]
default: no
message_ttl:
description:
- How long a message can live in queue before it is discarded (milliseconds)
required: False
default: forever
auto_expires:
description:
- How long a queue can be unused before it is automatically deleted (milliseconds)
required: false
default: forever
max_length:
description:
- How many messages can the queue contain before it starts rejecting
required: false
default: no limit
dead_letter_exchange:
description:
- Optional name of an exchange to which messages will be republished if they
- are rejected or expire
required: false
default: None
dead_letter_routing_key:
description:
- Optional replacement routing key to use when a message is dead-lettered.
- Original routing key will be used if unset
required: false
default: None
arguments:
description:
- extra arguments for queue. If defined this argument is a key/value dictionary
required: false
default: {}
'''
EXAMPLES = '''
# Create a queue
- rabbitmq_queue: name=myQueue
# Create a queue on remote host
- rabbitmq_queue: name=myRemoteQueue login_user=user login_password=secret login_host=remote.example.org
'''
import requests
import urllib
import json
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(default='present', choices=['present', 'absent'], type='str'),
name = dict(required=True, type='str'),
login_user = dict(default='guest', type='str'),
login_password = dict(default='guest', type='str', no_log=True),
login_host = dict(default='localhost', type='str'),
login_port = dict(default='15672', type='str'),
vhost = dict(default='/', type='str'),
durable = dict(default=True, choices=BOOLEANS, type='bool'),
auto_delete = dict(default=False, choices=BOOLEANS, type='bool'),
message_ttl = dict(default=None, type='int'),
auto_expires = dict(default=None, type='int'),
max_length = dict(default=None, type='int'),
dead_letter_exchange = dict(default=None, type='str'),
dead_letter_routing_key = dict(default=None, type='str'),
arguments = dict(default=dict(), type='dict')
),
supports_check_mode = True
)
url = "http://%s:%s/api/queues/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name']
)
# Check if queue already exists
r = requests.get( url, auth=(module.params['login_user'],module.params['login_password']))
if r.status_code==200:
queue_exists = True
response = r.json()
elif r.status_code==404:
queue_exists = False
response = r.text
else:
module.fail_json(
msg = "Invalid response from RESTAPI when trying to check if queue exists",
details = r.text
)
if module.params['state']=='present':
change_required = not queue_exists
else:
change_required = queue_exists
# Check if attributes change on existing queue
if not change_required and r.status_code==200 and module.params['state'] == 'present':
if not (
response['durable'] == module.params['durable'] and
response['auto_delete'] == module.params['auto_delete'] and
(
( 'x-message-ttl' in response['arguments'] and response['arguments']['x-message-ttl'] == module.params['message_ttl'] ) or
( 'x-message-ttl' not in response['arguments'] and module.params['message_ttl'] is None )
) and
(
( 'x-expires' in response['arguments'] and response['arguments']['x-expires'] == module.params['auto_expires'] ) or
( 'x-expires' not in response['arguments'] and module.params['auto_expires'] is None )
) and
(
( 'x-max-length' in response['arguments'] and response['arguments']['x-max-length'] == module.params['max_length'] ) or
( 'x-max-length' not in response['arguments'] and module.params['max_length'] is None )
) and
(
( 'x-dead-letter-exchange' in response['arguments'] and response['arguments']['x-dead-letter-exchange'] == module.params['dead_letter_exchange'] ) or
( 'x-dead-letter-exchange' not in response['arguments'] and module.params['dead_letter_exchange'] is None )
) and
(
( 'x-dead-letter-routing-key' in response['arguments'] and response['arguments']['x-dead-letter-routing-key'] == module.params['dead_letter_routing_key'] ) or
( 'x-dead-letter-routing-key' not in response['arguments'] and module.params['dead_letter_routing_key'] is None )
)
):
module.fail_json(
msg = "RabbitMQ RESTAPI doesn't support attribute changes for existing queues",
)
# Copy parameters to arguments as used by RabbitMQ
for k,v in {
'message_ttl': 'x-message-ttl',
'auto_expires': 'x-expires',
'max_length': 'x-max-length',
'dead_letter_exchange': 'x-dead-letter-exchange',
'dead_letter_routing_key': 'x-dead-letter-routing-key'
}.items():
if module.params[k]:
module.params['arguments'][v] = module.params[k]
# Exit if check_mode
if module.check_mode:
module.exit_json(
changed= change_required,
name = module.params['name'],
details = response,
arguments = module.params['arguments']
)
# Do changes
if change_required:
if module.params['state'] == 'present':
r = requests.put(
url,
auth = (module.params['login_user'],module.params['login_password']),
headers = { "content-type": "application/json"},
data = json.dumps({
"durable": module.params['durable'],
"auto_delete": module.params['auto_delete'],
"arguments": module.params['arguments']
})
)
elif module.params['state'] == 'absent':
r = requests.delete( url, auth = (module.params['login_user'],module.params['login_password']))
if r.status_code == 204:
module.exit_json(
changed = True,
name = module.params['name']
)
else:
module.fail_json(
msg = "Error creating queue",
status = r.status_code,
details = r.text
)
else:
module.exit_json(
changed = False,
name = module.params['name']
)
# import module snippets
from ansible.module_utils.basic import *
main()
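A hedged sketch for the rabbitmq_queue module above; the module maps message_ttl, auto_expires, max_length and the dead-letter options onto the corresponding x-* queue arguments, so a task might look like this (queue and exchange names and values are placeholders):
- rabbitmq_queue: name=workQueue durable=yes message_ttl=60000 max_length=10000 dead_letter_exchange=deadLetters dead_letter_routing_key=expired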

@ -25,7 +25,7 @@ short_description: Adds or removes users to RabbitMQ
description:
- Add or remove users to RabbitMQ and assign permissions
version_added: "1.1"
author: Chris Hoffman
author: '"Chris Hoffman (@chrishoffman)"'
options:
user:
description:

@ -26,7 +26,7 @@ short_description: Manage the state of a virtual host in RabbitMQ
description:
- Manage the state of a virtual host in RabbitMQ
version_added: "1.1"
author: Chris Hoffman
author: '"Chris Hoffman (@choffman)"'
options:
name:
description:

@ -22,7 +22,7 @@ DOCUMENTATION = '''
---
module: airbrake_deployment
version_added: "1.2"
author: Bruce Pennypacker
author: "Bruce Pennypacker (@bpennypacker)"
short_description: Notify airbrake about app deployments
description:
- Notify airbrake about app deployments (see http://help.airbrake.io/kb/api-2/deploy-tracking)
@ -61,8 +61,7 @@ options:
default: 'yes'
choices: ['yes', 'no']
# informational: requirements for nodes
requirements: [ urllib, urllib2 ]
requirements: []
'''
EXAMPLES = '''
@ -72,6 +71,8 @@ EXAMPLES = '''
revision=4.2
'''
import urllib
# ===========================================
# Module execution.
#

@ -3,7 +3,7 @@
DOCUMENTATION = '''
---
module: bigpanda
author: BigPanda
author: "Hagai Kariti (@hkariti)"
short_description: Notify BigPanda about deployments
version_added: "1.8"
description:
@ -162,7 +162,7 @@ def main():
module.exit_json(changed=True, **deployment)
else:
module.fail_json(msg=json.dumps(info))
except Exception as e:
except Exception, e:
module.fail_json(msg=str(e))
# import module snippets

@ -34,7 +34,7 @@ short_description: Manage boundary meters
description:
- This module manages boundary meters
version_added: "1.3"
author: curtis@serverascode.com
author: "curtis (@ccollicutt)"
requirements:
- Boundary API access
- bprobe is required to send data, but not to register a meter
@ -213,7 +213,7 @@ def download_request(module, name, apiid, apikey, cert_type):
cert_file = open(cert_file_path, 'w')
cert_file.write(body)
        cert_file.close()
os.chmod(cert_file_path, 0o600)
os.chmod(cert_file_path, 0600)
except:
module.fail_json("Could not write to certificate file")

@ -0,0 +1,132 @@
#!/usr/bin/python
# (c) 2014-2015, Epic Games, Inc.
import requests
import time
import json
DOCUMENTATION = '''
---
module: circonus_annotation
short_description: create an annotation in circonus
description:
    - Create an annotation event with a given category, title and description. Optionally a start, a stop or a duration can be provided.
author: "Nick Harring (@NickatEpic)"
version_added: 2.0
requirements:
- urllib3
- requests
- time
options:
api_key:
description:
- Circonus API key
required: true
category:
description:
- Annotation Category
required: true
description:
description:
- Description of annotation
required: true
title:
description:
- Title of annotation
required: true
start:
description:
- Unix timestamp of event start, defaults to now
required: false
stop:
description:
- Unix timestamp of event end, defaults to now + duration
required: false
duration:
description:
- Duration in seconds of annotation, defaults to 0
required: false
'''
EXAMPLES = '''
# Create a simple annotation event with a source, defaults to start and end time of now
- circonus_annotation:
api_key: XXXXXXXXXXXXXXXXX
title: 'App Config Change'
description: 'This is a detailed description of the config change'
category: 'This category groups like annotations'
# Create an annotation with a duration of 5 minutes and a default start time of now
- circonus_annotation:
api_key: XXXXXXXXXXXXXXXXX
title: 'App Config Change'
description: 'This is a detailed description of the config change'
category: 'This category groups like annotations'
duration: 300
# Create an annotation with explicit start and stop times
- circonus_annotation:
api_key: XXXXXXXXXXXXXXXXX
title: 'App Config Change'
description: 'This is a detailed description of the config change'
category: 'This category groups like annotations'
    start: 1395940006
    stop: 1395954407
'''
def post_annotation(annotation, api_key):
''' Takes annotation dict and api_key string'''
base_url = 'https://api.circonus.com/v2'
    annotate_post_endpoint = '/annotation'
    resp = requests.post(base_url + annotate_post_endpoint,
headers=build_headers(api_key), data=json.dumps(annotation))
resp.raise_for_status()
return resp
def create_annotation(module):
''' Takes ansible module object '''
annotation = {}
if module.params['duration'] != None:
duration = module.params['duration']
else:
duration = 0
if module.params['start'] != None:
start = module.params['start']
else:
start = int(time.time())
if module.params['stop'] != None:
stop = module.params['stop']
else:
stop = int(time.time())+ duration
annotation['start'] = int(start)
annotation['stop'] = int(stop)
annotation['category'] = module.params['category']
annotation['description'] = module.params['description']
annotation['title'] = module.params['title']
return annotation
def build_headers(api_token):
'''Takes api token, returns headers with it included.'''
headers = {'X-Circonus-App-Name': 'ansible',
'Host': 'api.circonus.com', 'X-Circonus-Auth-Token': api_token,
'Accept': 'application/json'}
return headers
def main():
'''Main function, dispatches logic'''
module = AnsibleModule(
argument_spec=dict(
start=dict(required=False, type='int'),
stop=dict(required=False, type='int'),
category=dict(required=True),
title=dict(required=True),
description=dict(required=True),
duration=dict(required=False, type='int'),
api_key=dict(required=True)
)
)
annotation = create_annotation(module)
try:
resp = post_annotation(annotation, module.params['api_key'])
except requests.exceptions.RequestException, err_str:
module.fail_json(msg='Request Failed', reason=err_str)
module.exit_json(changed=True, annotation=resp.json())
from ansible.module_utils.basic import *
main()

@ -14,7 +14,7 @@ description:
- "Allows to post events to DataDog (www.datadoghq.com) service."
- "Uses http://docs.datadoghq.com/api/#events API."
version_added: "1.3"
author: Artūras 'arturaz' Šlajus <x11@arturaz.net>
author: "Artūras `arturaz` Šlajus (@arturaz)"
notes: []
requirements: [urllib2]
options:
@ -71,7 +71,7 @@ datadog_event: title="Testing from ansible" text="Test!" priority="low"
# Post an event with several tags
datadog_event: title="Testing from ansible" text="Test!"
api_key="6873258723457823548234234234"
tags=aa,bb,cc
tags=aa,bb,#host:{{ inventory_hostname }}
'''
import socket
@ -86,7 +86,7 @@ def main():
priority=dict(
required=False, default='normal', choices=['normal', 'low']
),
tags=dict(required=False, default=None),
tags=dict(required=False, default=None, type='list'),
alert_type=dict(
required=False, default='info',
choices=['error', 'warning', 'info', 'success']
@ -116,7 +116,7 @@ def post_event(module):
if module.params['date_happened'] != None:
body['date_happened'] = module.params['date_happened']
if module.params['tags'] != None:
body['tags'] = module.params['tags'].split(",")
body['tags'] = module.params['tags']
if module.params['aggregation_key'] != None:
body['aggregation_key'] = module.params['aggregation_key']
if module.params['source_type_name'] != None:
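Since tags is now a list parameter, a hedged sketch of passing tags in YAML list form for datadog_event (the api_key reuses the placeholder from the example above):
- datadog_event:
    title: "Testing from ansible"
    text: "Test!"
    api_key: "6873258723457823548234234234"
    tags:
      - aa
      - bb
      - "host:{{ inventory_hostname }}"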

@ -0,0 +1,283 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Sebastian Kornehl <sebastian.kornehl@asideas.de>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# import module snippets
# Import Datadog
try:
from datadog import initialize, api
HAS_DATADOG = True
except:
HAS_DATADOG = False
DOCUMENTATION = '''
---
module: datadog_monitor
short_description: Manages Datadog monitors
description:
- "Manages monitors within Datadog"
- "Options like described on http://docs.datadoghq.com/api/"
version_added: "2.0"
author: "Sebastian Kornehl (@skornehl)"
notes: []
requirements: [datadog]
options:
api_key:
description: ["Your DataDog API key."]
required: true
app_key:
description: ["Your DataDog app key."]
required: true
state:
description: ["The designated state of the monitor."]
required: true
        choices: ['present', 'absent', 'mute', 'unmute']
type:
description: ["The type of the monitor."]
required: false
default: null
choices: ['metric alert', 'service check']
query:
description: ["he monitor query to notify on with syntax varying depending on what type of monitor you are creating."]
required: false
default: null
name:
description: ["The name of the alert."]
required: true
message:
description: ["A message to include with notifications for this monitor. Email notifications can be sent to specific users by using the same '@username' notation as events."]
required: false
default: null
silenced:
description: ["Dictionary of scopes to timestamps or None. Each scope will be muted until the given POSIX timestamp or forever if the value is None. "]
required: false
default: ""
notify_no_data:
description: ["A boolean indicating whether this monitor will notify when data stops reporting.."]
required: false
default: False
no_data_timeframe:
description: ["The number of minutes before a monitor will notify when data stops reporting. Must be at least 2x the monitor timeframe for metric alerts or 2 minutes for service checks."]
required: false
default: 2x timeframe for metric, 2 minutes for service
timeout_h:
description: ["The number of hours of the monitor not reporting data before it will automatically resolve from a triggered state."]
required: false
default: null
renotify_interval:
description: ["The number of minutes after the last notification before a monitor will re-notify on the current status. It will only re-notify if it's not resolved."]
required: false
default: null
escalation_message:
description: ["A message to include with a re-notification. Supports the '@username' notification we allow elsewhere. Not applicable if renotify_interval is None"]
required: false
default: null
notify_audit:
description: ["A boolean indicating whether tagged users will be notified on changes to this monitor."]
required: false
default: False
thresholds:
description: ["A dictionary of thresholds by status. Because service checks can have multiple thresholds, we don't define them directly in the query."]
required: false
default: {'ok': 1, 'critical': 1, 'warning': 1}
'''
EXAMPLES = '''
# Create a metric monitor
datadog_monitor:
type: "metric alert"
name: "Test monitor"
state: "present"
query: "datadog.agent.up".over("host:host1").last(2).count_by_status()"
message: "Some message."
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
# Deletes a monitor
datadog_monitor:
name: "Test monitor"
state: "absent"
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
# Mutes a monitor
datadog_monitor:
name: "Test monitor"
state: "mute"
silenced: '{"*":None}'
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
# Unmutes a monitor
datadog_monitor:
name: "Test monitor"
state: "unmute"
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
'''
def main():
module = AnsibleModule(
argument_spec=dict(
api_key=dict(required=True),
app_key=dict(required=True),
            state=dict(required=True, choices=['present', 'absent', 'mute', 'unmute']),
            type=dict(required=False, choices=['metric alert', 'service check']),
name=dict(required=True),
query=dict(required=False),
message=dict(required=False, default=None),
silenced=dict(required=False, default=None, type='dict'),
notify_no_data=dict(required=False, default=False, choices=BOOLEANS),
no_data_timeframe=dict(required=False, default=None),
timeout_h=dict(required=False, default=None),
renotify_interval=dict(required=False, default=None),
escalation_message=dict(required=False, default=None),
notify_audit=dict(required=False, default=False, choices=BOOLEANS),
thresholds=dict(required=False, type='dict', default={'ok': 1, 'critical': 1, 'warning': 1}),
)
)
# Prepare Datadog
if not HAS_DATADOG:
module.fail_json(msg='datadogpy required for this module')
options = {
'api_key': module.params['api_key'],
'app_key': module.params['app_key']
}
initialize(**options)
if module.params['state'] == 'present':
install_monitor(module)
elif module.params['state'] == 'absent':
delete_monitor(module)
elif module.params['state'] == 'mute':
mute_monitor(module)
elif module.params['state'] == 'unmute':
unmute_monitor(module)
def _get_monitor(module):
for monitor in api.Monitor.get_all():
if monitor['name'] == module.params['name']:
return monitor
return {}
def _post_monitor(module, options):
try:
msg = api.Monitor.create(type=module.params['type'], query=module.params['query'],
name=module.params['name'], message=module.params['message'],
options=options)
if 'errors' in msg:
module.fail_json(msg=str(msg['errors']))
else:
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def _equal_dicts(a, b, ignore_keys):
ka = set(a).difference(ignore_keys)
kb = set(b).difference(ignore_keys)
return ka == kb and all(a[k] == b[k] for k in ka)
def _update_monitor(module, monitor, options):
try:
msg = api.Monitor.update(id=monitor['id'], query=module.params['query'],
name=module.params['name'], message=module.params['message'],
options=options)
if 'errors' in msg:
module.fail_json(msg=str(msg['errors']))
elif _equal_dicts(msg, monitor, ['creator', 'overall_state']):
module.exit_json(changed=False, msg=msg)
else:
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def install_monitor(module):
options = {
"silenced": module.params['silenced'],
"notify_no_data": module.boolean(module.params['notify_no_data']),
"no_data_timeframe": module.params['no_data_timeframe'],
"timeout_h": module.params['timeout_h'],
"renotify_interval": module.params['renotify_interval'],
"escalation_message": module.params['escalation_message'],
"notify_audit": module.boolean(module.params['notify_audit']),
}
if module.params['type'] == "service check":
options["thresholds"] = module.params['thresholds']
monitor = _get_monitor(module)
if not monitor:
_post_monitor(module, options)
else:
_update_monitor(module, monitor, options)
def delete_monitor(module):
monitor = _get_monitor(module)
if not monitor:
module.exit_json(changed=False)
try:
msg = api.Monitor.delete(monitor['id'])
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def mute_monitor(module):
monitor = _get_monitor(module)
if not monitor:
module.fail_json(msg="Monitor %s not found!" % module.params['name'])
elif monitor['options']['silenced']:
module.fail_json(msg="Monitor is already muted. Datadog does not allow to modify muted alerts, consider unmuting it first.")
elif (module.params['silenced'] is not None
and len(set(monitor['options']['silenced']) - set(module.params['silenced'])) == 0):
module.exit_json(changed=False)
try:
if module.params['silenced'] is None or module.params['silenced'] == "":
msg = api.Monitor.mute(id=monitor['id'])
else:
msg = api.Monitor.mute(id=monitor['id'], silenced=module.params['silenced'])
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def unmute_monitor(module):
monitor = _get_monitor(module)
if not monitor:
module.fail_json(msg="Monitor %s not found!" % module.params['name'])
elif not monitor['options']['silenced']:
module.exit_json(changed=False)
try:
msg = api.Monitor.unmute(monitor['id'])
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
main()
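A hedged sketch of a service check monitor for the datadog_monitor module above, showing the thresholds dictionary described in its options; the query, monitor name, and API/app keys are illustrative placeholders:
datadog_monitor:
  type: "service check"
  name: "Test service check monitor"
  state: "present"
  query: '"ntp.in_sync".over("*").last(2).count_by_status()'
  thresholds: { ok: 1, warning: 1, critical: 2 }
  message: "NTP is out of sync."
  api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
  app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"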

@ -29,9 +29,8 @@ short_description: create an annotation in librato
description:
- Create an annotation event on the given annotation stream :name. If the annotation stream does not exist, it will be created automatically
version_added: "1.6"
author: Seth Edwards
author: "Seth Edwards (@sedward)"
requirements:
- urllib2
- base64
options:
user:
@ -107,11 +106,7 @@ EXAMPLES = '''
'''
try:
import urllib2
HAS_URLLIB2 = True
except ImportError:
HAS_URLLIB2 = False
import urllib2
def post_annotation(module):
user = module.params['user']
@ -138,11 +133,11 @@ def post_annotation(module):
headers = {}
headers['Content-Type'] = 'application/json'
headers['Authorization'] = b"Basic " + base64.b64encode(user + b":" + api_key).strip()
headers['Authorization'] = "Basic " + base64.b64encode(user + ":" + api_key).strip()
req = urllib2.Request(url, json_body, headers)
try:
response = urllib2.urlopen(req)
except urllib2.HTTPError as e:
except urllib2.HTTPError, e:
module.fail_json(msg="Request Failed", reason=e.reason)
response = response.read()
module.exit_json(changed=True, annotation=response)

@ -19,8 +19,8 @@
DOCUMENTATION = '''
---
module: logentries
author: Ivan Vanderbyl
short_description: Module for tracking logs via logentries.com
author: "Ivan Vanderbyl (@ivanvanderbyl)"
short_description: Module for tracking logs via logentries.com
description:
- Sends logs to LogEntries in realtime
version_added: "1.6"

@ -39,7 +39,7 @@ options:
default: null
choices: [ "present", "started", "stopped", "restarted", "monitored", "unmonitored", "reloaded" ]
requirements: [ ]
author: Darryl Stoflet
author: "Darryl Stoflet (@dstoflet)"
'''
EXAMPLES = '''
@ -77,7 +77,7 @@ def main():
# Process 'name' Running - restart pending
parts = line.split()
if len(parts) > 2 and parts[0].lower() == 'process' and parts[1] == "'%s'" % name:
return ' '.join(parts[2:])
return ' '.join(parts[2:]).lower()
else:
return ''
@ -86,7 +86,8 @@ def main():
module.run_command('%s %s %s' % (MONIT, command, name), check_rc=True)
return status()
present = status() != ''
process_status = status()
present = process_status != ''
if not present and not state == 'present':
module.fail_json(msg='%s process not presently configured with monit' % name, name=name, state=state)
@ -102,7 +103,7 @@ def main():
module.exit_json(changed=True, name=name, state=state)
module.exit_json(changed=False, name=name, state=state)
running = 'running' in status()
running = 'running' in process_status
if running and state in ['started', 'monitored']:
module.exit_json(changed=False, name=name, state=state)
@ -119,7 +120,7 @@ def main():
if module.check_mode:
module.exit_json(changed=True)
status = run_command('unmonitor')
if status in ['not monitored']:
if status in ['not monitored'] or 'unmonitor pending' in status:
module.exit_json(changed=True, name=name, state=state)
module.fail_json(msg='%s process not unmonitored' % name, status=status)

@ -9,7 +9,7 @@
# Tim Bielawa <tbielawa@redhat.com>
#
# This software may be freely redistributed under the terms of the GNU
# general public license version 2.
# general public license version 2 or any later version.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
@ -30,10 +30,12 @@ options:
action:
description:
- Action to take.
- servicegroup options were added in 2.0.
required: true
default: null
choices: [ "downtime", "enable_alerts", "disable_alerts", "silence", "unsilence",
"silence_nagios", "unsilence_nagios", "command" ]
"silence_nagios", "unsilence_nagios", "command", "servicegroup_service_downtime",
"servicegroup_host_downtime" ]
host:
description:
- Host to operate on in Nagios.
@ -51,6 +53,12 @@ options:
Only usable with the C(downtime) action.
required: false
default: Ansible
comment:
version_added: "2.0"
description:
- Comment for C(downtime) action.
required: false
default: Scheduling downtime
minutes:
description:
- Minutes to schedule downtime for.
@ -65,6 +73,11 @@ options:
aliases: [ "service" ]
required: true
default: null
servicegroup:
version_added: "2.0"
description:
      - The Servicegroup we want to set downtimes/alerts for.
        B(Required) option when using the C(servicegroup_service_downtime) and C(servicegroup_host_downtime) actions.
command:
description:
- The raw command to send to nagios, which
@ -73,7 +86,7 @@ options:
required: true
default: null
author: Tim Bielawa
author: "Tim Bielawa (@tbielawa)"
requirements: [ "Nagios" ]
'''
@ -84,12 +97,22 @@ EXAMPLES = '''
# schedule an hour of HOST downtime
- nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}
# schedule an hour of HOST downtime, with a comment describing the reason
- nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}
comment='This host needs disciplined'
# schedule downtime for ALL services on HOST
- nagios: action=downtime minutes=45 service=all host={{ inventory_hostname }}
# schedule downtime for a few services
- nagios: action=downtime services=frob,foobar,qeuz host={{ inventory_hostname }}
# set 30 minutes downtime for all services in servicegroup foo
- nagios: action=servicegroup_service_downtime minutes=30 servicegroup=foo host={{ inventory_hostname }}
# set 30 minutes downtime for all hosts in servicegroup foo
- nagios: action=servicegroup_host_downtime minutes=30 servicegroup=foo host={{ inventory_hostname }}
# enable SMART disk alerts
- nagios: action=enable_alerts service=smart host={{ inventory_hostname }}
@ -169,13 +192,18 @@ def main():
'silence_nagios',
'unsilence_nagios',
'command',
'servicegroup_host_downtime',
'servicegroup_service_downtime',
]
module = AnsibleModule(
argument_spec=dict(
action=dict(required=True, default=None, choices=ACTION_CHOICES),
author=dict(default='Ansible'),
comment=dict(default='Scheduling downtime'),
host=dict(required=False, default=None),
servicegroup=dict(required=False, default=None),
minutes=dict(default=30),
cmdfile=dict(default=which_cmdfile()),
services=dict(default=None, aliases=['service']),
@ -185,11 +213,12 @@ def main():
action = module.params['action']
host = module.params['host']
servicegroup = module.params['servicegroup']
minutes = module.params['minutes']
services = module.params['services']
cmdfile = module.params['cmdfile']
command = module.params['command']
##################################################################
# Required args per action:
# downtime = (minutes, service, host)
@ -217,6 +246,20 @@ def main():
except Exception:
module.fail_json(msg='invalid entry for minutes')
######################################################################
if action in ['servicegroup_service_downtime', 'servicegroup_host_downtime']:
# Make sure there's an actual servicegroup selected
if not servicegroup:
module.fail_json(msg='no servicegroup selected to set downtime for')
# Make sure minutes is a number
try:
m = int(minutes)
if not isinstance(m, types.IntType):
module.fail_json(msg='minutes must be a number')
except Exception:
module.fail_json(msg='invalid entry for minutes')
##################################################################
if action in ['enable_alerts', 'disable_alerts']:
if not services:
@ -258,7 +301,9 @@ class Nagios(object):
self.module = module
self.action = kwargs['action']
self.author = kwargs['author']
self.comment = kwargs['comment']
self.host = kwargs['host']
self.servicegroup = kwargs['servicegroup']
self.minutes = int(kwargs['minutes'])
self.cmdfile = kwargs['cmdfile']
self.command = kwargs['command']
@ -293,7 +338,7 @@ class Nagios(object):
cmdfile=self.cmdfile)
def _fmt_dt_str(self, cmd, host, duration, author=None,
comment="Scheduling downtime", start=None,
comment=None, start=None,
svc=None, fixed=1, trigger=0):
"""
Format an external-command downtime string.
@ -326,6 +371,9 @@ class Nagios(object):
if not author:
author = self.author
if not comment:
comment = self.comment
if svc is not None:
dt_args = [svc, str(start), str(end), str(fixed), str(trigger),
str(duration_s), author, comment]
@ -356,7 +404,7 @@ class Nagios(object):
notif_str = "[%s] %s" % (entry_time, cmd)
if host is not None:
notif_str += ";%s" % host
if svc is not None:
notif_str += ";%s" % svc
@ -796,42 +844,42 @@ class Nagios(object):
return return_str_list
else:
return "Fail: could not write to the command file"
def silence_nagios(self):
"""
This command is used to disable notifications for all hosts and services
in nagios.
This is a 'SHUT UP, NAGIOS' command
"""
cmd = 'DISABLE_NOTIFICATIONS'
self._write_command(self._fmt_notif_str(cmd))
def unsilence_nagios(self):
"""
This command is used to enable notifications for all hosts and services
in nagios.
This is an 'OK, NAGIOS, GO' command
"""
cmd = 'ENABLE_NOTIFICATIONS'
self._write_command(self._fmt_notif_str(cmd))
def nagios_cmd(self, cmd):
"""
This sends an arbitrary command to nagios
It prepends the submitted time and appends a \n
You just have to provide the properly formatted command
"""
pre = '[%s]' % int(time.time())
post = '\n'
cmdstr = '%s %s %s' % (pre, cmd, post)
self._write_command(cmdstr)
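# For illustration only: nagios_cmd('DEL_ALL_HOST_COMMENTS;web01') would append something
# like "[1416699339] DEL_ALL_HOST_COMMENTS;web01 \n" to the external command file (the
# '%s %s %s' format leaves a space before the newline); host name and timestamp are examples.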
def act(self):
"""
Figure out what you want to do from ansible, and then do the
@ -847,6 +895,12 @@ class Nagios(object):
self.schedule_svc_downtime(self.host,
services=self.services,
minutes=self.minutes)
elif self.action == "servicegroup_host_downtime":
if self.servicegroup:
self.schedule_servicegroup_host_downtime(servicegroup=self.servicegroup, minutes=self.minutes)
elif self.action == "servicegroup_service_downtime":
if self.servicegroup:
self.schedule_servicegroup_svc_downtime(servicegroup=self.servicegroup, minutes=self.minutes)
# toggle the host AND service alerts
elif self.action == 'silence':
@ -871,13 +925,13 @@ class Nagios(object):
services=self.services)
elif self.action == 'silence_nagios':
self.silence_nagios()
elif self.action == 'unsilence_nagios':
self.unsilence_nagios()
elif self.action == 'command':
self.nagios_cmd(self.command)
# wtf?
else:
self.module.fail_json(msg="unknown action specified: '%s'" % \

@ -22,14 +22,14 @@ DOCUMENTATION = '''
---
module: newrelic_deployment
version_added: "1.2"
author: Matt Coddington
author: "Matt Coddington (@mcodd)"
short_description: Notify newrelic about app deployments
description:
- Notify newrelic about app deployments (see http://newrelic.github.io/newrelic_api/NewRelicApi/Deployment.html)
- Notify newrelic about app deployments (see https://docs.newrelic.com/docs/apm/new-relic-apm/maintenance/deployment-notifications#api)
options:
token:
description:
- API token.
- API token, to place in the x-api-key header.
required: true
app_name:
description:
@ -72,8 +72,7 @@ options:
choices: ['yes', 'no']
version_added: 1.5.1
# informational: requirements for nodes
requirements: [ urllib, urllib2 ]
requirements: []
'''
EXAMPLES = '''
@ -83,6 +82,8 @@ EXAMPLES = '''
revision=1.0
'''
import urllib
# ===========================================
# Module execution.
#
@ -102,6 +103,7 @@ def main():
environment=dict(required=False),
validate_certs = dict(default='yes', type='bool'),
),
required_one_of=[['app_name', 'application_id']],
supports_check_mode=True
)

@ -7,7 +7,10 @@ short_description: Create PagerDuty maintenance windows
description:
- This module will let you create PagerDuty maintenance windows
version_added: "1.2"
author: Justin Johns
author:
- "Andrew Newdigate (@suprememoocow)"
- "Dylan Silva (@thaumos)"
- "Justin Johns"
requirements:
- PagerDuty API access
options:

@ -7,7 +7,9 @@ short_description: Pause/unpause Pingdom alerts
description:
- This module will let you pause/unpause Pingdom alerts
version_added: "1.2"
author: Justin Johns
author:
- "Dylan Silva (@thaumos)"
- "Justin Johns"
requirements:
- "This pingdom python library: https://github.com/mbabineau/pingdom-python"
options:

@ -22,7 +22,7 @@ DOCUMENTATION = '''
---
module: rollbar_deployment
version_added: 1.6
author: Max Riveiro
author: "Max Riveiro (@kavu)"
short_description: Notify Rollbar about app deployments
description:
- Notify Rollbar about app deployments
@ -76,6 +76,7 @@ EXAMPLES = '''
comment='Test Deploy'
'''
import urllib
def main():

@ -0,0 +1,336 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Anders Ingemann <aim@secoya.dk>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: sensu_check
short_description: Manage Sensu checks
version_added: 2.0
description:
- Manage the checks that should be run on a machine by I(Sensu).
- Most options do not have a default and will not be added to the check definition unless specified.
- All defaults except I(path), I(state), I(backup) and I(metric) are not managed by this module,
- they are simply specified for your convenience.
options:
name:
description:
- The name of the check
- This is the key that is used to determine whether a check exists
required: true
state:
description: Whether the check should be present or not
choices: [ 'present', 'absent' ]
required: false
default: present
path:
description:
- Path to the json file of the check to be added/removed.
- Will be created if it does not exist (unless I(state=absent)).
- The parent folders need to exist when I(state=present), otherwise an error will be thrown
required: false
default: /etc/sensu/conf.d/checks.json
backup:
description:
- Create a backup file (if yes), including the timestamp information so
- you can get the original file back if you somehow clobbered it incorrectly.
choices: [ 'yes', 'no' ]
required: false
default: no
command:
description:
- Path to the sensu check to run (not required when I(state=absent))
required: true
handlers:
description:
- List of handlers to notify when the check fails
required: false
default: []
subscribers:
description:
- List of subscribers/channels this check should run for
- See sensu_subscribers to subscribe a machine to a channel
required: false
default: []
interval:
description:
- Check interval in seconds
required: false
default: null
timeout:
description:
- Timeout for the check
required: false
default: 10
handle:
description:
- Whether the check should be handled or not
choices: [ 'yes', 'no' ]
required: false
default: yes
subdue_begin:
description:
- When to disable handling of check failures
required: false
default: null
subdue_end:
description:
- When to enable handling of check failures
required: false
default: null
dependencies:
description:
- Other checks this check depends on, if dependencies fail,
- handling of this check will be disabled
required: false
default: []
metric:
description: Whether the check is a metric
choices: [ 'yes', 'no' ]
required: false
default: no
standalone:
description:
- Whether the check should be scheduled by the sensu client or server
- This option obviates the need for specifying the I(subscribers) option
choices: [ 'yes', 'no' ]
required: false
default: no
publish:
description:
- Whether the check should be scheduled at all.
- You can still issue it via the sensu api
choices: [ 'yes', 'no' ]
required: false
default: yes
occurrences:
description:
- Number of event occurrences before the handler should take action
required: false
default: 1
refresh:
description:
- Number of seconds handlers should wait before taking second action
required: false
default: null
aggregate:
description:
- Classifies the check as an aggregate check,
- making it available via the aggregate API
choices: [ 'yes', 'no' ]
required: false
default: no
low_flap_threshold:
description:
- The low threshold for flap detection
required: false
default: null
high_flap_threshold:
description:
- The high threshold for flap detection
required: false
default: null
requirements: [ ]
author: Anders Ingemann
'''
EXAMPLES = '''
# Fetch metrics about the CPU load every 60 seconds,
# the sensu server has a handler called 'relay' which forwards stats to graphite
- name: get cpu metrics
sensu_check: name=cpu_load
command=/etc/sensu/plugins/system/cpu-mpstat-metrics.rb
metric=yes handlers=relay subscribers=common interval=60
# Check whether nginx is running
- name: check nginx process
sensu_check: name=nginx_running
command='/etc/sensu/plugins/processes/check-procs.rb -f /var/run/nginx.pid'
handlers=default subscribers=nginx interval=60
# Stop monitoring the disk capacity.
# Note that the check will still show up in the sensu dashboard,
# to remove it completely you need to issue a DELETE request to the sensu api.
- name: check disk
sensu_check: name=check_disk_capacity state=absent
'''
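# For illustration only: with the defaults documented above, the "check nginx process"
# example would leave /etc/sensu/conf.d/checks.json looking roughly like this (shape only,
# actual values depend on the parameters passed):
#
# {
#   "checks": {
#     "nginx_running": {
#       "command": "/etc/sensu/plugins/processes/check-procs.rb -f /var/run/nginx.pid",
#       "handlers": ["default"],
#       "subscribers": ["nginx"],
#       "interval": 60
#     }
#   }
# }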
def sensu_check(module, path, name, state='present', backup=False):
changed = False
reasons = []
try:
import json
except ImportError:
import simplejson as json
stream = None
try:
try:
stream = open(path, 'r')
config = json.load(stream)
except IOError, e:
if e.errno == 2: # File not found, non-fatal
if state == 'absent':
reasons.append('file did not exist and state is `absent\'')
return changed, reasons
config = {}
else:
module.fail_json(msg=str(e))
except ValueError:
msg = '{path} contains invalid JSON'.format(path=path)
module.fail_json(msg=msg)
finally:
if stream:
stream.close()
if 'checks' not in config:
if state == 'absent':
reasons.append('`checks\' section did not exist and state is `absent\'')
return changed, reasons
config['checks'] = {}
changed = True
reasons.append('`checks\' section did not exist')
if state == 'absent':
if name in config['checks']:
del config['checks'][name]
changed = True
reasons.append('check was present and state is `absent\'')
if state == 'present':
if name not in config['checks']:
check = {}
config['checks'][name] = check
changed = True
reasons.append('check was absent and state is `present\'')
else:
check = config['checks'][name]
simple_opts = ['command',
'handlers',
'subscribers',
'interval',
'timeout',
'handle',
'dependencies',
'standalone',
'publish',
'occurrences',
'refresh',
'aggregate',
'low_flap_threshold',
'high_flap_threshold',
]
for opt in simple_opts:
if module.params[opt] is not None:
if opt not in check or check[opt] != module.params[opt]:
check[opt] = module.params[opt]
changed = True
reasons.append('`{opt}\' did not exist or was different'.format(opt=opt))
else:
if opt in check:
del check[opt]
changed = True
reasons.append('`{opt}\' was removed'.format(opt=opt))
if module.params['metric']:
if 'type' not in check or check['type'] != 'metric':
check['type'] = 'metric'
changed = True
reasons.append('`type\' was not defined or not `metric\'')
if not module.params['metric'] and 'type' in check:
del check['type']
changed = True
reasons.append('`type\' was defined')
if module.params['subdue_begin'] is not None and module.params['subdue_end'] is not None:
subdue = {'begin': module.params['subdue_begin'],
'end': module.params['subdue_end'],
}
if 'subdue' not in check or check['subdue'] != subdue:
check['subdue'] = subdue
changed = True
reasons.append('`subdue\' did not exist or was different')
else:
if 'subdue' in check:
del check['subdue']
changed = True
reasons.append('`subdue\' was removed')
if changed and not module.check_mode:
if backup:
module.backup_local(path)
stream = None
try:
try:
stream = open(path, 'w')
stream.write(json.dumps(config, indent=2) + '\n')
except IOError, e:
module.fail_json(msg=str(e))
finally:
if stream:
stream.close()
return changed, reasons
def main():
arg_spec = {'name': {'type': 'str', 'required': True},
'path': {'type': 'str', 'default': '/etc/sensu/conf.d/checks.json'},
'state': {'type': 'str', 'default': 'present', 'choices': ['present', 'absent']},
'backup': {'type': 'bool', 'default': 'no'},
'command': {'type': 'str'},
'handlers': {'type': 'list'},
'subscribers': {'type': 'list'},
'interval': {'type': 'int'},
'timeout': {'type': 'int'},
'handle': {'type': 'bool'},
'subdue_begin': {'type': 'str'},
'subdue_end': {'type': 'str'},
'dependencies': {'type': 'list'},
'metric': {'type': 'bool', 'default': 'no'},
'standalone': {'type': 'bool'},
'publish': {'type': 'bool'},
'occurrences': {'type': 'int'},
'refresh': {'type': 'int'},
'aggregate': {'type': 'bool'},
'low_flap_threshold': {'type': 'int'},
'high_flap_threshold': {'type': 'int'},
}
required_together = [['subdue_begin', 'subdue_end']]
module = AnsibleModule(argument_spec=arg_spec,
required_together=required_together,
supports_check_mode=True)
if module.params['state'] != 'absent' and module.params['command'] is None:
module.fail_json(msg="missing required arguments: %s" % ",".join(['command']))
path = module.params['path']
name = module.params['name']
state = module.params['state']
backup = module.params['backup']
changed, reasons = sensu_check(module, path, name, state, backup)
module.exit_json(path=path, changed=changed, msg='OK', name=name, reasons=reasons)
from ansible.module_utils.basic import *
main()

@ -8,7 +8,7 @@ short_description: Send code deploy and annotation events to stackdriver
description:
- Send code deploy and annotation events to Stackdriver
version_added: "1.6"
author: Ben Whaley
author: "Ben Whaley (@bwhaley)"
options:
key:
description:

@ -6,7 +6,7 @@ module: uptimerobot
short_description: Pause and start Uptime Robot monitoring
description:
- This module will let you start and pause Uptime Robot Monitoring
author: Nate Kingsley
author: "Nate Kingsley (@nate-kingsley)"
version_added: "1.9"
requirements:
- Valid Uptime Robot API Key

@ -1,7 +1,7 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, René Moser <mail@renemoser.net>
#
# (c) 2013-2014, Epic Games, Inc.
#
# This file is part of Ansible
#
@ -22,191 +22,188 @@
DOCUMENTATION = '''
---
module: zabbix_group
short_description: Add or remove a host group to Zabbix.
short_description: Zabbix host groups creates/deletes
description:
- This module uses the Zabbix API to add and remove host groups.
version_added: '1.8'
requirements: [ 'zabbix-api' ]
- Create host groups if they do not exist.
- Delete existing host groups if they exist.
version_added: "1.8"
author:
- "(@cove)"
- "Tony Minfei Ding"
- "Harrison Gu (@harrisongu)"
requirements:
- "python >= 2.6"
- zabbix-api
options:
state:
description:
- Whether the host group should be added or removed.
required: false
default: present
choices: [ 'present', 'absent' ]
host_group:
description:
- Name of the host group to be added or removed.
required: true
default: null
aliases: [ ]
server_url:
description:
- Url of Zabbix server, with protocol (http or https) e.g.
https://monitoring.example.com/zabbix. C(url) is an alias
for C(server_url). If not set environment variable
C(ZABBIX_SERVER_URL) is used.
- Url of Zabbix server, with protocol (http or https).
C(url) is an alias for C(server_url).
required: true
default: null
aliases: [ 'url' ]
aliases: [ "url" ]
login_user:
description:
- Zabbix user name. If not set environment variable
C(ZABBIX_LOGIN_USER) is used.
- Zabbix user name.
required: true
default: null
login_password:
description:
- Zabbix user password. If not set environment variable
C(ZABBIX_LOGIN_PASSWORD) is used.
- Zabbix user password.
required: true
state:
description:
- Create or delete host group.
required: false
default: "present"
choices: [ "present", "absent" ]
timeout:
description:
- The timeout of API request (seconds).
default: 10
host_groups:
description:
- List of host groups to create or delete.
required: true
aliases: [ "host_group" ]
notes:
- The module has been tested with Zabbix Server 2.2.
author: René Moser
- Too many concurrent updates to the same group may cause Zabbix to return errors, see examples for a workaround if needed.
'''
EXAMPLES = '''
---
# Add a new host group to Zabbix
- zabbix_group: host_group='Linux servers'
server_url=https://monitoring.example.com/zabbix
login_user=ansible
login_password=secure
# Add a new host group, login data is provided by environment variables:
# ZABBIX_LOGIN_USER, ZABBIX_LOGIN_PASSWORD, ZABBIX_SERVER_URL:
- zabbix_group: host_group=Webservers
# Remove a host group from Zabbix
- zabbix_group: host_group='Linux servers'
state=absent
server_url=https://monitoring.example.com/zabbix
login_user=ansible
login_password=secure
# Base create host groups example
- name: Create host groups
local_action:
module: zabbix_group
server_url: http://monitor.example.com
login_user: username
login_password: password
state: present
host_groups:
- Example group1
- Example group2
# Limit the Zabbix group creations to one host since Zabbix can return an error when doing concurrent updates
- name: Create host groups
local_action:
module: zabbix_group
server_url: http://monitor.example.com
login_user: username
login_password: password
state: present
host_groups:
- Example group1
- Example group2
when: inventory_hostname==groups['group_name'][0]
'''
try:
from zabbix_api import ZabbixAPI
from zabbix_api import ZabbixAPI, ZabbixAPISubClass
from zabbix_api import Already_Exists
HAS_ZABBIX_API = True
except ImportError:
HAS_ZABBIX_API = False
def create_group(zbx, host_group):
try:
result = zbx.hostgroup.create(
{
'name': host_group
}
)
except BaseException as e:
return 1, None, str(e)
return 0, result['groupids'], None
def get_group(zbx, host_group):
try:
result = zbx.hostgroup.get(
{
'filter':
{
'name': host_group,
}
}
)
except BaseException as e:
return 1, None, str(e)
return 0, result[0]['groupid'], None
def delete_group(zbx, group_id):
try:
zbx.hostgroup.delete([ group_id ])
except BaseException as e:
return 1, None, str(e)
return 0, None, None
def check_group(zbx, host_group):
try:
result = zbx.hostgroup.exists(
{
'name': host_group
}
)
except BaseException as e:
return 1, None, str(e)
return 0, result, None
class HostGroup(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
# create host group(s) if not exists
def create_host_group(self, group_names):
try:
group_add_list = []
for group_name in group_names:
result = self._zapi.hostgroup.exists({'name': group_name})
if not result:
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.hostgroup.create({'name': group_name})
group_add_list.append(group_name)
except Already_Exists:
return group_add_list
return group_add_list
except Exception, e:
self._module.fail_json(msg="Failed to create host group(s): %s" % e)
# delete host group(s)
def delete_host_group(self, group_ids):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.hostgroup.delete(group_ids)
except Exception, e:
self._module.fail_json(msg="Failed to delete host group(s), Exception: %s" % e)
# get group ids by name
def get_group_ids(self, host_groups):
group_ids = []
group_list = self._zapi.hostgroup.get({'output': 'extend', 'filter': {'name': host_groups}})
for group in group_list:
group_id = group['groupid']
group_ids.append(group_id)
return group_ids, group_list
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(default='present', choices=['present', 'absent']),
host_group=dict(required=True, default=None),
server_url=dict(default=None, aliases=['url']),
login_user=dict(default=None),
login_password=dict(default=None),
server_url=dict(required=True, aliases=['url']),
login_user=dict(required=True),
login_password=dict(required=True, no_log=True),
host_groups=dict(required=True, aliases=['host_group']),
state=dict(default="present", choices=['present','absent']),
timeout=dict(type='int', default=10)
),
supports_check_mode=True,
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg='Missing requried zabbix-api module (check docs or install with: pip install zabbix-api)')
module.fail_json(msg="Missing requried zabbix-api module (check docs or install with: pip install zabbix-api)")
try:
login_user = module.params['login_user'] or os.environ['ZABBIX_LOGIN_USER']
login_password = module.params['login_password'] or os.environ['ZABBIX_LOGIN_PASSWORD']
server_url = module.params['server_url'] or os.environ['ZABBIX_SERVER_URL']
except KeyError, e:
module.fail_json(msg='Missing login data: %s is not set.' % e.message)
host_group = module.params['host_group']
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
host_groups = module.params['host_groups']
state = module.params['state']
timeout = module.params['timeout']
zbx = None
# login to zabbix
try:
zbx = ZabbixAPI(server_url)
zbx = ZabbixAPI(server_url, timeout=timeout)
zbx.login(login_user, login_password)
except BaseException as e:
module.fail_json(msg='Failed to connect to Zabbix server: %s' % e)
changed = False
msg = ''
if state == 'present':
(rc, exists, error) = check_group(zbx, host_group)
if rc != 0:
module.fail_json(msg='Failed to check host group %s existance: %s' % (host_group, error))
if not exists:
if module.check_mode:
changed = True
else:
(rc, group, error) = create_group(zbx, host_group)
if rc == 0:
changed = True
else:
module.fail_json(msg='Failed to get host group: %s' % error)
if state == 'absent':
(rc, exists, error) = check_group(zbx, host_group)
if rc != 0:
module.fail_json(msg='Failed to check host group %s existance: %s' % (host_group, error))
if exists:
if module.check_mode:
changed = True
else:
(rc, group_id, error) = get_group(zbx, host_group)
if rc != 0:
module.fail_json(msg='Failed to get host group: %s' % error)
(rc, _, error) = delete_group(zbx, group_id)
if rc == 0:
changed = True
else:
module.fail_json(msg='Failed to remove host group: %s' % error)
module.exit_json(changed=changed)
except Exception, e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
hostGroup = HostGroup(module, zbx)
group_ids = []
group_list = []
if host_groups:
group_ids, group_list = hostGroup.get_group_ids(host_groups)
if state == "absent":
# delete host groups
if group_ids:
delete_group_names = []
hostGroup.delete_host_group(group_ids)
for group in group_list:
delete_group_names.append(group['name'])
module.exit_json(changed=True,
result="Successfully deleted host group(s): %s." % ",".join(delete_group_names))
else:
module.exit_json(changed=False, result="No host group(s) to delete.")
else:
# create host groups
group_add_list = hostGroup.create_host_group(host_groups)
if len(group_add_list) > 0:
module.exit_json(changed=True, result="Successfully created host group(s): %s" % group_add_list)
else:
module.exit_json(changed=False)
from ansible.module_utils.basic import *
main()

@ -26,9 +26,13 @@ short_description: Zabbix host creates/updates/deletes
description:
- This module allows you to create, modify and delete Zabbix host entries and associated group and template data.
version_added: "2.0"
author: Tony Minfei Ding, Harrison Gu
author:
- "(@cove)"
- "Tony Minfei Ding"
- "Harrison Gu (@harrisongu)"
requirements:
- zabbix-api python module
- "python >= 2.6"
- zabbix-api
options:
server_url:
description:
@ -59,24 +63,32 @@ options:
default: None
status:
description:
- 'Monitoring status of the host. Possible values are: "enabled" and "disabled".'
- Monitoring status of the host.
required: false
choices: ['enabled', 'disabled']
default: "enabled"
state:
description:
- 'Possible values are: "present" and "absent". If the host already exists, and the state is "present", it will just to update the host is the associated data is different. "absent" will remove a host if it exists.'
- State of the host.
- On C(present), it will create if host does not exist or update the host if the associated data is different.
- On C(absent) will remove a host if it exists.
required: false
choices: ['present', 'absent']
default: "present"
timeout:
description:
- The timeout of API request(seconds).
- The timeout of API request (seconds).
default: 10
proxy:
description:
- The name of the Zabbix Proxy to be used
default: None
interfaces:
description:
- List of interfaces to be created for the host (see example below).
- 'Available values are: dns, ip, main, port, type and useip.'
- Please review the interface documentation for more information on the supported properties
- https://www.zabbix.com/documentation/2.0/manual/appendix/api/hostinterface/definitions#host_interface
- 'https://www.zabbix.com/documentation/2.0/manual/appendix/api/hostinterface/definitions#host_interface'
required: false
default: []
'''
@ -110,11 +122,11 @@ EXAMPLES = '''
ip: 10.xx.xx.xx
dns: ""
port: 12345
proxy: a.zabbix.proxy
'''
import logging
import copy
from ansible.module_utils.basic import *
try:
from zabbix_api import ZabbixAPI, ZabbixAPISubClass
@ -167,21 +179,25 @@ class Host(object):
template_ids.append(template_id)
return template_ids
def add_host(self, host_name, group_ids, status, interfaces):
def add_host(self, host_name, group_ids, status, interfaces, proxy_id):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
host_list = self._zapi.host.create({'host': host_name, 'interfaces': interfaces, 'groups': group_ids, 'status': status})
parameters = {'host': host_name, 'interfaces': interfaces, 'groups': group_ids, 'status': status}
if proxy_id:
parameters['proxy_hostid'] = proxy_id
host_list = self._zapi.host.create(parameters)
if len(host_list) >= 1:
return host_list['hostids'][0]
except Exception, e:
self._module.fail_json(msg="Failed to create host %s: %s" % (host_name, e))
def update_host(self, host_name, group_ids, status, host_id, interfaces, exist_interface_list):
def update_host(self, host_name, group_ids, status, host_id, interfaces, exist_interface_list, proxy_id):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update({'hostid': host_id, 'groups': group_ids, 'status': status})
parameters = {'hostid': host_id, 'groups': group_ids, 'status': status, 'proxy_hostid': proxy_id}
self._zapi.host.update(parameters)
interface_list_copy = exist_interface_list
if interfaces:
for interface in interfaces:
@ -227,6 +243,14 @@ class Host(object):
else:
return host_list[0]
# get proxyid by proxy name
def get_proxyid_by_proxy_name(self, proxy_name):
proxy_list = self._zapi.proxy.get({'output': 'extend', 'filter': {'host': [proxy_name]}})
if len(proxy_list) < 1:
self._module.fail_json(msg="Proxy not found: %s" % proxy_name)
else:
return proxy_list[0]['proxyid']
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
group_ids = []
@ -287,7 +311,7 @@ class Host(object):
# check all the properties before link or clear template
def check_all_properties(self, host_id, host_groups, status, interfaces, template_ids,
exist_interfaces, host):
exist_interfaces, host, proxy_id):
# get the existing host's groups
exist_host_groups = self.get_host_groups_by_host_id(host_id)
if set(host_groups) != set(exist_host_groups):
@ -307,6 +331,9 @@ class Host(object):
if set(list(template_ids)) != set(exist_template_ids):
return True
if host['proxy_hostid'] != proxy_id:
return True
return False
# link or clear template of the host
@ -335,14 +362,15 @@ def main():
argument_spec=dict(
server_url=dict(required=True, aliases=['url']),
login_user=dict(required=True),
login_password=dict(required=True),
login_password=dict(required=True, no_log=True),
host_name=dict(required=True),
host_groups=dict(required=False),
link_templates=dict(required=False),
status=dict(default="enabled"),
state=dict(default="present"),
timeout=dict(default=10),
interfaces=dict(required=False)
status=dict(default="enabled", choices=['enabled', 'disabled']),
state=dict(default="present", choices=['present', 'absent']),
timeout=dict(type='int', default=10),
interfaces=dict(required=False),
proxy=dict(required=False)
),
supports_check_mode=True
)
@ -360,6 +388,7 @@ def main():
state = module.params['state']
timeout = module.params['timeout']
interfaces = module.params['interfaces']
proxy = module.params['proxy']
# convert enabled to 0; disabled to 1
status = 1 if status == "disabled" else 0
@ -389,6 +418,11 @@ def main():
if interface['type'] == 1:
ip = interface['ip']
proxy_id = "0"
if proxy:
proxy_id = host.get_proxyid_by_proxy_name(proxy)
# check if host exist
is_host_exist = host.is_host_exist(host_name)
@ -414,10 +448,10 @@ def main():
if len(exist_interfaces) > interfaces_len:
if host.check_all_properties(host_id, host_groups, status, interfaces, template_ids,
exist_interfaces, zabbix_host_obj):
exist_interfaces, zabbix_host_obj, proxy_id):
host.link_or_clear_template(host_id, template_ids)
host.update_host(host_name, group_ids, status, host_id,
interfaces, exist_interfaces)
interfaces, exist_interfaces, proxy_id)
module.exit_json(changed=True,
result="Successfully update host %s (%s) and linked with template '%s'"
% (host_name, ip, link_templates))
@ -425,8 +459,8 @@ def main():
module.exit_json(changed=False)
else:
if host.check_all_properties(host_id, host_groups, status, interfaces, template_ids,
exist_interfaces_copy, zabbix_host_obj):
host.update_host(host_name, group_ids, status, host_id, interfaces, exist_interfaces)
exist_interfaces_copy, zabbix_host_obj, proxy_id):
host.update_host(host_name, group_ids, status, host_id, interfaces, exist_interfaces, proxy_id)
host.link_or_clear_template(host_id, template_ids)
module.exit_json(changed=True,
result="Successfully update host %s (%s) and linked with template '%s'"
@ -441,7 +475,7 @@ def main():
module.fail_json(msg="Specify at least one interface for creating host '%s'." % host_name)
# create host
host_id = host.add_host(host_name, group_ids, status, interfaces)
host_id = host.add_host(host_name, group_ids, status, interfaces, proxy_id)
host.link_or_clear_template(host_id, template_ids)
module.exit_json(changed=True, result="Successfully added host %s (%s) and linked with template '%s'" % (
host_name, ip, link_templates))

@ -26,9 +26,12 @@ short_description: Zabbix host macro creates/updates/deletes
description:
- manages Zabbix host macros, it can create, update or delete them.
version_added: "2.0"
author: Dean Hailin Song
author:
- "(@cave)"
- Dean Hailin Song
requirements:
- zabbix-api python module
- "python >= 2.6"
- zabbix-api
options:
server_url:
description:
@ -57,12 +60,15 @@ options:
required: true
state:
description:
- 'Possible values are: "present" and "absent". If the macro already exists, and the state is "present", it will just to update the macro if needed.'
- State of the macro.
- On C(present), it will create if macro does not exist or update the macro if the associated data is different.
- On C(absent) will remove a macro if it exists.
required: false
choices: ['present', 'absent']
default: "present"
timeout:
description:
- The timeout of API request(seconds).
- The timeout of API request (seconds).
default: 10
'''
@ -81,7 +87,6 @@ EXAMPLES = '''
import logging
import copy
from ansible.module_utils.basic import *
try:
from zabbix_api import ZabbixAPI, ZabbixAPISubClass
@ -168,12 +173,12 @@ def main():
argument_spec=dict(
server_url=dict(required=True, aliases=['url']),
login_user=dict(required=True),
login_password=dict(required=True),
login_password=dict(required=True, no_log=True),
host_name=dict(required=True),
macro_name=dict(required=True),
macro_value=dict(required=True),
state=dict(default="present"),
timeout=dict(default=10)
state=dict(default="present", choices=['present', 'absent']),
timeout=dict(type='int', default=10)
),
supports_check_mode=True
)

@ -26,9 +26,10 @@ short_description: Create Zabbix maintenance windows
description:
- This module will let you create Zabbix maintenance windows.
version_added: "1.8"
author: Alexander Bulimov
author: "Alexander Bulimov (@abulimov)"
requirements:
- zabbix-api python module
- "python >= 2.6"
- zabbix-api
options:
state:
description:
@ -47,12 +48,10 @@ options:
description:
- Zabbix user name.
required: true
default: null
login_password:
description:
- Zabbix user password.
required: true
default: null
host_names:
description:
- Hosts to manage maintenance window for.
@ -82,7 +81,6 @@ options:
description:
- Unique name of maintenance window.
required: true
default: null
desc:
description:
- Short description of maintenance window.
@ -272,9 +270,9 @@ def main():
host_names=dict(type='list', required=False, default=None, aliases=['host_name']),
minutes=dict(type='int', required=False, default=10),
host_groups=dict(type='list', required=False, default=None, aliases=['host_group']),
login_user=dict(required=True, default=None),
login_password=dict(required=True, default=None),
name=dict(required=True, default=None),
login_user=dict(required=True),
login_password=dict(required=True, no_log=True),
name=dict(required=True),
desc=dict(required=False, default="Created by Ansible"),
collect_data=dict(type='bool', required=False, default=True),
),

@ -27,9 +27,13 @@ short_description: Zabbix screen creates/updates/deletes
description:
- This module allows you to create, modify and delete Zabbix screens and associated graph data.
version_added: "2.0"
author: Tony Minfei Ding, Harrison Gu
author:
- "(@cove)"
- "Tony Minfei Ding"
- "Harrison Gu (@harrisongu)"
requirements:
- zabbix-api python module
- "python >= 2.6"
- zabbix-api
options:
server_url:
description:
@ -46,15 +50,15 @@ options:
required: true
timeout:
description:
- The timeout of API request(seconds).
- The timeout of API request (seconds).
default: 10
zabbix_screens:
description:
- List of screens to be created/updated/deleted (see example).
- If the screen(s) have already been added, the screen name(s) won't be updated.
- When creating or updating screen(s), the screen_name, host_group are required.
- When deleting screen(s), the screen_name is required.
- 'The available states are: present(default) and absent. If the screen(s) already exists, and the state is not "absent", the screen(s) will just be updated as needed.'
- When creating or updating screen(s), C(screen_name), C(host_group) are required.
- When deleting screen(s), the C(screen_name) is required.
- 'The available states are: C(present) (default) and C(absent). If the screen(s) already exists, and the state is not C(absent), the screen(s) will just be updated as needed.'
required: true
notes:
- Too many concurrent updates to the same screen may cause Zabbix to return errors, see examples for a workaround if needed.
@ -123,8 +127,6 @@ EXAMPLES = '''
when: inventory_hostname==groups['group_name'][0]
'''
from ansible.module_utils.basic import *
try:
from zabbix_api import ZabbixAPI, ZabbixAPISubClass
from zabbix_api import ZabbixAPIException
@ -315,9 +317,9 @@ def main():
argument_spec=dict(
server_url=dict(required=True, aliases=['url']),
login_user=dict(required=True),
login_password=dict(required=True),
timeout=dict(default=10),
screens=dict(required=True)
login_password=dict(required=True, no_log=True),
timeout=dict(type='int', default=10),
screens=dict(type='dict', required=True)
),
supports_check_mode=True
)
@ -411,5 +413,7 @@ def main():
else:
module.exit_json(changed=False)
# <<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -28,7 +28,7 @@ version_added: 1.8
short_description: Manage A10 Networks AX/SoftAX/Thunder/vThunder devices
description:
- Manage slb server objects on A10 Networks devices via aXAPI
author: Mischa Peters
author: "Mischa Peters (@mischapeters)"
notes:
- Requires A10 Networks aXAPI 2.1
options:
@ -183,28 +183,35 @@ def main():
json_post = {
'server': {
'name': slb_server,
'host': slb_server_ip,
'status': axapi_enabled_disabled(slb_server_status),
'port_list': slb_server_ports,
'name': slb_server,
}
}
# add optional module parameters
if slb_server_ip:
json_post['server']['host'] = slb_server_ip
if slb_server_ports:
json_post['server']['port_list'] = slb_server_ports
if slb_server_status:
json_post['server']['status'] = axapi_enabled_disabled(slb_server_status)
slb_server_data = axapi_call(module, session_url + '&method=slb.server.search', json.dumps({'name': slb_server}))
slb_server_exists = not axapi_failure(slb_server_data)
changed = False
if state == 'present':
if not slb_server_ip:
module.fail_json(msg='you must specify an IP address when creating a server')
if not slb_server_exists:
if not slb_server_ip:
module.fail_json(msg='you must specify an IP address when creating a server')
result = axapi_call(module, session_url + '&method=slb.server.create', json.dumps(json_post))
if axapi_failure(result):
module.fail_json(msg="failed to create the server: %s" % result['response']['err']['msg'])
changed = True
else:
def needs_update(src_ports, dst_ports):
def port_needs_update(src_ports, dst_ports):
'''
Checks to determine if the port definitions of the src_ports
array are in or different from those in dst_ports. If there is
@ -227,12 +234,24 @@ def main():
# every port from the src exists in the dst, and none of them were different
return False
def status_needs_update(current_status, new_status):
'''
Check to determine if we want to change the status of a server.
If there is a difference between the current status of the server and
the desired status, return true, otherwise false.
'''
if current_status != new_status:
return True
return False
defined_ports = slb_server_data.get('server', {}).get('port_list', [])
current_status = slb_server_data.get('server', {}).get('status')
# we check for a needed update both ways, in case ports
# are missing from either the ones specified by the user
# or from those on the device
if needs_update(defined_ports, slb_server_ports) or needs_update(slb_server_ports, defined_ports):
# we check for a needed update several ways
# - in case ports are missing from the ones specified by the user
# - in case ports are missing from those on the device
# - in case we are changing the status of a server
if port_needs_update(defined_ports, slb_server_ports) or port_needs_update(slb_server_ports, defined_ports) or status_needs_update(current_status, axapi_enabled_disabled(slb_server_status)):
result = axapi_call(module, session_url + '&method=slb.server.update', json.dumps(json_post))
if axapi_failure(result):
module.fail_json(msg="failed to update the server: %s" % result['response']['err']['msg'])
@ -249,10 +268,10 @@ def main():
result = axapi_call(module, session_url + '&method=slb.server.delete', json.dumps({'name': slb_server}))
changed = True
else:
result = dict(msg="the server was not present")
result = dict(msg="the server was not present")
# if the config has changed, save the config unless otherwise requested
if changed and write_config:
# save the config if it has changed or if a write was explicitly requested
if changed or write_config:
write_result = axapi_call(module, session_url + '&method=system.action.write_memory')
if axapi_failure(write_result):
module.fail_json(msg="failed to save the configuration: %s" % write_result['response']['err']['msg'])

@ -28,7 +28,7 @@ version_added: 1.8
short_description: Manage A10 Networks AX/SoftAX/Thunder/vThunder devices
description:
- Manage slb service-group objects on A10 Networks devices via aXAPI
author: Mischa Peters
author: "Mischa Peters (@mischapeters)"
notes:
- Requires A10 Networks aXAPI 2.1
- When a server doesn't exist and is added to the service-group the server will be created

@ -28,7 +28,7 @@ version_added: 1.8
short_description: Manage A10 Networks AX/SoftAX/Thunder/vThunder devices
description:
- Manage slb virtual server objects on A10 Networks devices via aXAPI
author: Mischa Peters
author: "Mischa Peters (@mischapeters)"
notes:
- Requires A10 Networks aXAPI 2.1
requirements:

@ -81,8 +81,8 @@ options:
default: 'yes'
choices: ['yes', 'no']
requirements: [ "urllib", "urllib2" ]
author: Nandor Sivok
requirements: []
author: "Nandor Sivok (@dominis)"
'''
EXAMPLES = '''
@ -99,7 +99,7 @@ ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=api
import base64
import socket
import urllib
class netscaler(object):

@ -93,7 +93,7 @@ options:
default: null
requirements: [ dnsimple ]
author: Alex Coomans
author: "Alex Coomans (@drcapulet)"
'''
EXAMPLES = '''
