Conflicts:
	packaging/os/pkgng.py
commit c3a0a3376a

@ -0,0 +1,16 @@
sudo: false
language: python
python:
  - "2.7"
addons:
  apt:
    sources:
      - deadsnakes
    packages:
      - python2.4
      - python2.6
script:
  - python2.4 -m compileall -fq -x 'cloud/|monitoring/zabbix.*\.py|/dnf\.py|/layman\.py|/maven_artifact\.py|clustering/consul.*\.py|notification/pushbullet\.py' .
  - python2.6 -m compileall -fq .
  - python2.7 -m compileall -fq .
#- ./test-docs.sh extras

@ -1,28 +1,37 @@
Contributing to ansible-modules-extras
======================================
The Ansible Extras Modules are written and maintained by the Ansible community, according to the following contribution guidelines.
If you'd like to contribute code
================================
Please see [this web page](http://docs.ansible.com/community.html) for information about the contribution process. Important license agreement information is also included on that page.
If you'd like to contribute code to an existing module
======================================================
Each module in Extras is maintained by the owner of that module; each module's owner is indicated in the documentation section of the module itself. Any pull request for a module that is given a +1 by the owner in the comments will be merged by the Ansible team.
If you'd like to contribute a new module
========================================
Ansible welcomes new modules. Please be certain that you've read the [module development guide and standards](http://docs.ansible.com/developing_modules.html) thoroughly before submitting your module.
Each new module requires two current module owners to approve it for inclusion. The Ansible community reviews new modules as often as possible, but please be patient; there are a lot of new module submissions in the pipeline, and it takes time to evaluate a new module for its adherence to module standards.
Once your module is accepted, you become responsible for maintenance of that module, which means responding to pull requests and issues in a reasonably timely manner.
If you'd like to ask a question
===============================
Please see [this web page](http://docs.ansible.com/community.html) for community information, which includes pointers on how to ask questions on the [mailing lists](http://docs.ansible.com/community.html#mailing-list-information) and IRC.
The Github issue tracker is not the best place for questions for various reasons, but both IRC and the mailing list are very helpful places for those things, and that page has the pointers to those.
If you'd like to file a bug
===========================
Read the community page above, but in particular, make sure you copy [this issue template](https://github.com/ansible/ansible/blob/devel/ISSUE_TEMPLATE.md) into your ticket description. We have a friendly neighborhood bot that will remind you if you forget :) This template helps us organize tickets faster and prevents asking some repeated questions, so it's very helpful to us and we appreciate your help with it.
Also please make sure you are testing on the latest released version of Ansible or the development branch.
Thanks!

@ -0,0 +1,160 @@
New module reviewers
====================
The following list represents all current GitHub module reviewers. It currently comprises all Ansible module authors, past and present.
Two +1 votes by any of these module reviewers on a new module pull request will result in the inclusion of that module into Ansible Extras.
Active
======
"Adam Garside (@fabulops)"
"Adam Keech (@smadam813)"
"Adam Miller (@maxamillion)"
"Alex Coomans (@drcapulet)"
"Alexander Bulimov (@abulimov)"
"Alexander Saltanov (@sashka)"
"Alexander Winkler (@dermute)"
"Andrew de Quincey (@adq)"
"André Paramés (@andreparames)"
"Andy Hill (@andyhky)"
"Artūras `arturaz` Šlajus (@arturaz)"
"Augustus Kling (@AugustusKling)"
"BOURDEL Paul (@pb8226)"
"Balazs Pocze (@banyek)"
"Ben Whaley (@bwhaley)"
"Benno Joy (@bennojoy)"
"Bernhard Weitzhofer (@b6d)"
"Boyd Adamson (@brontitall)"
"Brad Olson (@bradobro)"
"Brian Coca (@bcoca)"
"Brice Burgess (@briceburg)"
"Bruce Pennypacker (@bpennypacker)"
"Carson Gee (@carsongee)"
"Chris Church (@cchurch)"
"Chris Hoffman (@chrishoffman)"
"Chris Long (@alcamie101)"
"Chris Schmidt (@chrisisbeef)"
"Christian Berendt (@berendt)"
"Christopher H. Laco (@claco)"
"Cristian van Ee (@DJMuggs)"
"Dag Wieers (@dagwieers)"
"Dane Summers (@dsummersl)"
"Daniel Jaouen (@danieljaouen)"
"Daniel Schep (@dschep)"
"Dariusz Owczarek (@dareko)"
"Darryl Stoflet (@dstoflet)"
"David CHANIAL (@davixx)"
"David Stygstra (@stygstra)"
"Derek Carter (@goozbach)"
"Dimitrios Tydeas Mengidis (@dmtrs)"
"Doug Luce (@dougluce)"
"Dylan Martin (@pileofrogs)"
"Elliott Foster (@elliotttf)"
"Eric Johnson (@erjohnso)"
"Evan Duffield (@scicoin-project)"
"Evan Kaufman (@EvanK)"
"Evgenii Terechkov (@evgkrsk)"
"Franck Cuny (@franckcuny)"
"Gareth Rushgrove (@garethr)"
"Hagai Kariti (@hkariti)"
"Hector Acosta (@hacosta)"
"Hiroaki Nakamura (@hnakamur)"
"Ivan Vanderbyl (@ivanvanderbyl)"
"Jakub Jirutka (@jirutka)"
"James Cammarata (@jimi-c)"
"James Laska (@jlaska)"
"James S. Martin (@jsmartin)"
"Jan-Piet Mens (@jpmens)"
"Jayson Vantuyl (@jvantuyl)"
"Jens Depuydt (@jensdepuydt)"
"Jeroen Hoekx (@jhoekx)"
"Jesse Keating (@j2sol)"
"Jim Dalton (@jsdalton)"
"Jim Richardson (@weaselkeeper)"
"Jimmy Tang (@jcftang)"
"Johan Wiren (@johanwiren)"
"John Dewey (@retr0h)"
"John Jarvis (@jarv)"
"John Whitbeck (@jwhitbeck)"
"Jon Hawkesworth (@jhawkesworth)"
"Jonas Pfenniger (@zimbatm)"
"Jonathan I. Davila (@defionscode)"
"Joseph Callen (@jcpowermac)"
"Kevin Carter (@cloudnull)"
"Lester Wade (@lwade)"
"Lorin Hochstein (@lorin)"
"Manuel Sousa (@manuel-sousa)"
"Mark Theunissen (@marktheunissen)"
"Matt Coddington (@mcodd)"
"Matt Hite (@mhite)"
"Matt Makai (@makaimc)"
"Matt Martz (@sivel)"
"Matt Wright (@mattupstate)"
"Matthew Vernon (@mcv21)"
"Matthew Williams (@mgwilliams)"
"Matthias Vogelgesang (@matze)"
"Max Riveiro (@kavu)"
"Michael Gregson (@mgregson)"
"Michael J. Schultz (@mjschultz)"
"Michael Warkentin (@mwarkentin)"
"Mischa Peters (@mischapeters)"
"Monty Taylor (@emonty)"
"Nandor Sivok (@dominis)"
"Nate Coraor (@natefoo)"
"Nate Kingsley (@nate-kingsley)"
"Nick Harring (@NickatEpic)"
"Patrick Callahan (@dirtyharrycallahan)"
"Patrick Ogenstad (@ogenstad)"
"Patrick Pelletier (@skinp)"
"Patrik Lundin (@eest)"
"Paul Durivage (@angstwad)"
"Pavel Antonov (@softzilla)"
"Pepe Barbe (@elventear)"
"Peter Mounce (@petemounce)"
"Peter Oliver (@mavit)"
"Peter Sprygada (@privateip)"
"Peter Tan (@tanpeter)"
"Philippe Makowski (@pmakowski)"
"Phillip Gentry, CX Inc (@pcgentry)"
"Quentin Stafford-Fraser (@quentinsf)"
"Ramon de la Fuente (@ramondelafuente)"
"Raul Melo (@melodous)"
"Ravi Bhure (@ravibhure)"
"René Moser (@resmo)"
"Richard Hoop (@rhoop)"
"Richard Isaacson (@risaacson)"
"Rick Mendes (@rickmendes)"
"Romeo Theriault (@romeotheriault)"
"Scott Anderson (@tastychutney)"
"Sebastian Kornehl (@skornehl)"
"Serge van Ginderachter (@srvg)"
"Sergei Antipov (@UnderGreen)"
"Seth Edwards (@sedward)"
"Silviu Dicu (@silviud)"
"Simon JAILLET (@jails)"
"Stephen Fromm (@sfromm)"
"Steve (@groks)"
"Steve Gargan (@sgargan)"
"Steve Smith (@tarka)"
"Takashi Someda (@tksmd)"
"Taneli Leppä (@rosmo)"
"Tim Bielawa (@tbielawa)"
"Tim Bielawa (@tbielawa)"
"Tim Mahoney (@timmahoney)"
"Timothy Appnel (@tima)"
"Tom Bamford (@tombamford)"
"Trond Hindenes (@trondhindenes)"
"Vincent Van der Kussen (@vincentvdk)"
"Vincent Viallet (@zbal)"
"WAKAYAMA Shirou (@shirou)"
"Will Thames (@willthames)"
"Willy Barro (@willybarro)"
"Xabier Larrakoetxea (@slok)"
"Yeukhon Wong (@yeukhon)"
"Zacharie Eakin (@zeekin)"
"berenddeboer (@berenddeboer)"
"bleader (@bleader)"
"curtis (@ccollicutt)"
Retired
=======
None yet :)

@ -0,0 +1 @@
${version}

@ -0,0 +1,88 @@
Guidelines for AWS modules
--------------------------
Naming your module
==================
Base the name of the module on the part of AWS that
you actually use. (A good rule of thumb is to take
whatever module you use with boto as a starting point.)
Don't further abbreviate names - if something is a well-known
abbreviation because it is a major component of AWS, that's fine
(e.g. VPC, ELB), but don't create new abbreviations independently.
Using boto
==========
Wrap the `import` statements in a try block and fail the
module later on if the import fails
```
try:
    import boto
    import boto.module.that.you.use
    HAS_BOTO = True
except ImportError:
    HAS_BOTO = False

<lots of code here>

def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(
        dict(
            module_specific_parameter=dict(),
        )
    )
    module = AnsibleModule(
        argument_spec=argument_spec,
    )
    if not HAS_BOTO:
        module.fail_json(msg='boto required for this module')
```
Try to keep backward compatibility with relatively recent
versions of boto. That means that if you want to implement some
functionality that uses a new feature of boto, it should only
fail if that feature is actually needed at runtime, with a message
stating which version of boto is required.
Use feature testing (e.g. `hasattr('boto.module', 'shiny_new_method')`)
to check whether boto supports a feature, rather than version checking.
For example, from the `ec2` module:
```
if boto_supports_profile_name_arg(ec2):
    params['instance_profile_name'] = instance_profile_name
else:
    if instance_profile_name is not None:
        module.fail_json(
            msg="instance_profile_name parameter requires Boto version 2.5.0 or higher")
```
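For illustration, such a feature test can be written by inspecting the connection method's signature rather than comparing boto version strings. The following is only a minimal sketch, not the actual `ec2` module implementation; the method and argument names here are assumptions:
```
import inspect

def boto_supports_profile_name_arg(ec2):
    # Hypothetical sketch: look for the keyword argument itself instead of
    # comparing boto.__version__, so the module only fails when the feature
    # is actually requested on an older boto.
    run_instances_args = inspect.getargspec(ec2.run_instances).args
    return 'instance_profile_name' in run_instances_args
```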
Connecting to AWS
=================
For EC2 you can just use
```
ec2 = ec2_connect(module)
```
For other modules, you should use `get_aws_connection_info` and then
`connect_to_aws`. To connect to an example `xyz` service:
```
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
xyz = connect_to_aws(boto.xyz, region, **aws_connect_params)
```
The reason for using `get_aws_connection_info` and `connect_to_aws`
rather than doing it yourself (even `ec2_connect` uses them under
the hood) is that they handle some of the more esoteric connection
options, such as security tokens and boto profiles.
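Putting this together, the connection step in a module's `main()` typically ends up looking like the sketch below; the `xyz` service name is a placeholder, and the error handling mirrors what the modules in this pull request do:
```
# Resolve region/credentials from module params, environment and boto config.
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if not region:
    module.fail_json(msg="region must be specified")

try:
    xyz = connect_to_aws(boto.xyz, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
    module.fail_json(msg=str(e))
```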

@ -19,9 +19,13 @@ DOCUMENTATION = """
module: cloudtrail
short_description: manage CloudTrail creation and deletion
description:
- Creates or deletes CloudTrail configuration. Ensures logging is also enabled.
version_added: "2.0"
author:
- "Ansible Core Team"
- "Ted Timmons"
requirements:
- "boto >= 2.21"
options:
state:
description:
@ -85,21 +89,17 @@ EXAMPLES = """
s3_key_prefix='' region=us-east-1
- name: remove cloudtrail
local_action: cloudtrail state=disabled name=main region=us-east-1
"""
HAS_BOTO = False
try:
import boto
import boto.cloudtrail
from boto.regioninfo import RegionInfo
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
class CloudTrailManager:
"""Handles cloudtrail configuration"""
@ -150,9 +150,6 @@ class CloudTrailManager:
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
state={'required': True, 'choices': ['enabled', 'disabled'] },
@ -164,6 +161,10 @@ def main():
required_together = ( ['state', 's3_bucket_name'] )
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True, required_together=required_together)
if not HAS_BOTO:
module.fail_json(msg='boto is required.')
ec2_url, access_key, secret_key, region = get_ec2_creds(module)
aws_connect_params = dict(aws_access_key_id=access_key,
aws_secret_access_key=secret_key)

@ -0,0 +1,286 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
---
module: dynamodb_table
short_description: Create, update or delete AWS Dynamo DB tables.
description:
- Create or delete AWS Dynamo DB tables.
- Can update the provisioned throughput on existing tables.
- Returns the status of the specified table.
version_added: "2.0"
author: Alan Loi (@loia)
requirements:
- "boto >= 2.13.2"
options:
state:
description:
- Create or delete the table
required: false
choices: ['present', 'absent']
default: 'present'
name:
description:
- Name of the table.
required: true
hash_key_name:
description:
- Name of the hash key.
- Required when C(state=present).
required: false
default: null
hash_key_type:
description:
- Type of the hash key.
required: false
choices: ['STRING', 'NUMBER', 'BINARY']
default: 'STRING'
range_key_name:
description:
- Name of the range key.
required: false
default: null
range_key_type:
description:
- Type of the range key.
required: false
choices: ['STRING', 'NUMBER', 'BINARY']
default: 'STRING'
read_capacity:
description:
- Read throughput capacity (units) to provision.
required: false
default: 1
write_capacity:
description:
- Write throughput capacity (units) to provision.
required: false
default: 1
region:
description:
- The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used.
required: false
aliases: ['aws_region', 'ec2_region']
extends_documentation_fragment: aws
"""
EXAMPLES = '''
# Create dynamo table with hash and range primary key
- dynamodb_table:
name: my-table
region: us-east-1
hash_key_name: id
hash_key_type: STRING
range_key_name: create_time
range_key_type: NUMBER
read_capacity: 2
write_capacity: 2
# Update capacity on existing dynamo table
- dynamodb_table:
name: my-table
region: us-east-1
read_capacity: 10
write_capacity: 10
# Delete dynamo table
- dynamodb_table:
name: my-table
region: us-east-1
state: absent
'''
RETURN = '''
table_status:
description: The current status of the table.
returned: success
type: string
sample: ACTIVE
'''
import traceback
try:
import boto
import boto.dynamodb2
from boto.dynamodb2.table import Table
from boto.dynamodb2.fields import HashKey, RangeKey
from boto.dynamodb2.types import STRING, NUMBER, BINARY
from boto.exception import BotoServerError, NoAuthHandlerFound, JSONResponseError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
# Only build the type map when boto imported successfully, so a missing boto
# produces the fail_json message in main() rather than a NameError at import time.
if HAS_BOTO:
    DYNAMO_TYPE_MAP = {
        'STRING': STRING,
        'NUMBER': NUMBER,
        'BINARY': BINARY
    }
def create_or_update_dynamo_table(connection, module):
table_name = module.params.get('name')
hash_key_name = module.params.get('hash_key_name')
hash_key_type = module.params.get('hash_key_type')
range_key_name = module.params.get('range_key_name')
range_key_type = module.params.get('range_key_type')
read_capacity = module.params.get('read_capacity')
write_capacity = module.params.get('write_capacity')
schema = [
HashKey(hash_key_name, DYNAMO_TYPE_MAP.get(hash_key_type)),
RangeKey(range_key_name, DYNAMO_TYPE_MAP.get(range_key_type))
]
throughput = {
'read': read_capacity,
'write': write_capacity
}
result = dict(
region=module.params.get('region'),
table_name=table_name,
hash_key_name=hash_key_name,
hash_key_type=hash_key_type,
range_key_name=range_key_name,
range_key_type=range_key_type,
read_capacity=read_capacity,
write_capacity=write_capacity,
)
try:
table = Table(table_name, connection=connection)
if dynamo_table_exists(table):
result['changed'] = update_dynamo_table(table, throughput=throughput, check_mode=module.check_mode)
else:
if not module.check_mode:
Table.create(table_name, connection=connection, schema=schema, throughput=throughput)
result['changed'] = True
if not module.check_mode:
result['table_status'] = table.describe()['Table']['TableStatus']
except BotoServerError:
result['msg'] = 'Failed to create/update dynamo table due to error: ' + traceback.format_exc()
module.fail_json(**result)
else:
module.exit_json(**result)
def delete_dynamo_table(connection, module):
table_name = module.params.get('name')
result = dict(
region=module.params.get('region'),
table_name=table_name,
)
try:
table = Table(table_name, connection=connection)
if dynamo_table_exists(table):
if not module.check_mode:
table.delete()
result['changed'] = True
else:
result['changed'] = False
except BotoServerError:
result['msg'] = 'Failed to delete dynamo table due to error: ' + traceback.format_exc()
module.fail_json(**result)
else:
module.exit_json(**result)
def dynamo_table_exists(table):
try:
table.describe()
return True
except JSONResponseError, e:
if e.message and e.message.startswith('Requested resource not found'):
return False
else:
raise e
def update_dynamo_table(table, throughput=None, check_mode=False):
table.describe() # populate table details
if has_throughput_changed(table, throughput):
if not check_mode:
return table.update(throughput=throughput)
else:
return True
return False
def has_throughput_changed(table, new_throughput):
if not new_throughput:
return False
return new_throughput['read'] != table.throughput['read'] or \
new_throughput['write'] != table.throughput['write']
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
state=dict(default='present', choices=['present', 'absent']),
name=dict(required=True, type='str'),
hash_key_name=dict(required=True, type='str'),
hash_key_type=dict(default='STRING', type='str', choices=['STRING', 'NUMBER', 'BINARY']),
range_key_name=dict(type='str'),
range_key_type=dict(default='STRING', type='str', choices=['STRING', 'NUMBER', 'BINARY']),
read_capacity=dict(default=1, type='int'),
write_capacity=dict(default=1, type='int'),
))
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if not region:
module.fail_json(msg='region must be specified')
try:
connection = connect_to_aws(boto.dynamodb2, region, **aws_connect_params)
except (NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
state = module.params.get('state')
if state == 'present':
create_or_update_dynamo_table(connection, module)
elif state == 'absent':
delete_dynamo_table(connection, module)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
if __name__ == '__main__':
main()

@ -0,0 +1,208 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_ami_copy
short_description: copies AMI between AWS regions, returns new image id
description:
- Copies AMI from a source region to a destination region. This module has a dependency on python-boto >= 2.5
version_added: "2.0"
options:
source_region:
description:
- the source region that AMI should be copied from
required: true
region:
description:
- the destination region that AMI should be copied to
required: true
aliases: ['aws_region', 'ec2_region', 'dest_region']
source_image_id:
description:
- the id of the image in source region that should be copied
required: true
name:
description:
- The name of the new image to copy
required: false
default: null
description:
description:
- An optional human-readable string describing the contents and purpose of the new AMI.
required: false
default: null
wait:
description:
- wait for the copied AMI to be in state 'available' before returning.
required: false
default: "no"
choices: [ "yes", "no" ]
wait_timeout:
description:
- how long before wait gives up, in seconds
required: false
default: 1200
tags:
description:
- a hash/dictionary of tags to add to the new copied AMI; '{"key":"value"}' and '{"key":"value","key":"value"}'
required: false
default: null
author: Amir Moulavi <amir.moulavi@gmail.com>
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Basic AMI Copy
- local_action:
module: ec2_ami_copy
source_region: eu-west-1
dest_region: us-east-1
source_image_id: ami-xxxxxxx
name: SuperService-new-AMI
description: latest patch
tags: '{"Name":"SuperService-new-AMI", "type":"SuperService"}'
wait: yes
register: image_id
'''
import sys
import time
try:
import boto
import boto.ec2
from boto.vpc import VPCConnection
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def copy_image(module, ec2):
"""
Copies an AMI
module : AnsibleModule object
ec2: authenticated ec2 connection object
"""
source_region = module.params.get('source_region')
source_image_id = module.params.get('source_image_id')
name = module.params.get('name')
description = module.params.get('description')
tags = module.params.get('tags')
wait_timeout = int(module.params.get('wait_timeout'))
wait = module.params.get('wait')
try:
params = {'source_region': source_region,
'source_image_id': source_image_id,
'name': name,
'description': description
}
image_id = ec2.copy_image(**params).image_id
except boto.exception.BotoServerError, e:
module.fail_json(msg="%s: %s" % (e.error_code, e.error_message))
img = wait_until_image_is_recognized(module, ec2, wait_timeout, image_id, wait)
img = wait_until_image_is_copied(module, ec2, wait_timeout, img, image_id, wait)
register_tags_if_any(module, ec2, tags, image_id)
module.exit_json(msg="AMI copy operation complete", image_id=image_id, state=img.state, changed=True)
# register tags to the copied AMI in dest_region
def register_tags_if_any(module, ec2, tags, image_id):
if tags:
try:
ec2.create_tags([image_id], tags)
except Exception as e:
module.fail_json(msg=str(e))
# wait here until the image is copied (i.e. the state becomes available)
def wait_until_image_is_copied(module, ec2, wait_timeout, img, image_id, wait):
wait_timeout = time.time() + wait_timeout
while wait and wait_timeout > time.time() and (img is None or img.state != 'available'):
img = ec2.get_image(image_id)
time.sleep(3)
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="timed out waiting for image to be copied")
return img
# wait until the image is recognized.
def wait_until_image_is_recognized(module, ec2, wait_timeout, image_id, wait):
for i in range(wait_timeout):
try:
return ec2.get_image(image_id)
except boto.exception.EC2ResponseError, e:
# This exception we expect initially right after registering the copy with EC2 API
if 'InvalidAMIID.NotFound' in e.error_code and wait:
time.sleep(1)
else:
# On any other exception we should fail
module.fail_json(
msg="Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help: " + str(
e))
else:
module.fail_json(msg="timed out waiting for image to be recognized")
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
source_region=dict(required=True),
source_image_id=dict(required=True),
name=dict(),
description=dict(default=""),
wait=dict(type='bool', default=False),
wait_timeout=dict(default=1200),
tags=dict(type='dict')))
module = AnsibleModule(argument_spec=argument_spec)
# Check for boto after AnsibleModule is created so fail_json is available.
if not HAS_BOTO:
    module.fail_json(msg='boto required for this module')
try:
ec2 = ec2_connect(module)
except boto.exception.NoAuthHandlerFound, e:
module.fail_json(msg=str(e))
try:
region, ec2_url, boto_params = get_aws_connection_info(module)
vpc = connect_to_aws(boto.vpc, region, **boto_params)
except boto.exception.NoAuthHandlerFound, e:
module.fail_json(msg = str(e))
if not region:
module.fail_json(msg="region must be specified")
copy_image(module, ec2)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()

@ -0,0 +1,404 @@
#!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_eni
short_description: Create and optionally attach an Elastic Network Interface (ENI) to an instance
description:
- Create and optionally attach an Elastic Network Interface (ENI) to an instance. If an ENI ID is provided, an attempt is made to update the existing ENI. By passing 'None' as the instance_id, an ENI can be detached from an instance.
version_added: "2.0"
author: Rob White, wimnat [at] gmail.com, @wimnat
options:
eni_id:
description:
- The ID of the ENI
required: false
default: null
instance_id:
description:
- Instance ID that you wish to attach ENI to. To detach an ENI from an instance, use 'None'.
required: false
default: null
private_ip_address:
description:
- Private IP address.
required: false
default: null
subnet_id:
description:
- ID of subnet in which to create the ENI. Only required when state=present.
required: true
description:
description:
- Optional description of the ENI.
required: false
default: null
security_groups:
description:
- List of security groups associated with the interface. Only used when state=present.
required: false
default: null
state:
description:
- Create or delete ENI.
required: false
default: present
choices: [ 'present', 'absent' ]
device_index:
description:
- The index of the device for the network interface attachment on the instance.
required: false
default: 0
force_detach:
description:
- Force detachment of the interface. This applies either when explicitly detaching the interface by setting instance_id to None or when deleting an interface with state=absent.
required: false
default: no
delete_on_termination:
description:
- Delete the interface when the instance it is attached to is terminated. You can only specify this flag when the interface is being modified, not on creation.
required: false
source_dest_check:
description:
- By default, interfaces perform source/destination checks. NAT instances however need this check to be disabled. You can only specify this flag when the interface is being modified, not on creation.
required: false
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Create an ENI. As no security group is defined, ENI will be created in default security group
- ec2_eni:
private_ip_address: 172.31.0.20
subnet_id: subnet-xxxxxxxx
state: present
# Create an ENI and attach it to an instance
- ec2_eni:
instance_id: i-xxxxxxx
device_index: 1
private_ip_address: 172.31.0.20
subnet_id: subnet-xxxxxxxx
state: present
# Destroy an ENI, detaching it from any instance if necessary
- ec2_eni:
eni_id: eni-xxxxxxx
force_detach: yes
state: absent
# Update an ENI
- ec2_eni:
eni_id: eni-xxxxxxx
description: "My new description"
state: present
# Detach an ENI from an instance
- ec2_eni:
eni_id: eni-xxxxxxx
instance_id: None
state: present
### Delete an interface on termination
# First create the interface
- ec2_eni:
instance_id: i-xxxxxxx
device_index: 1
private_ip_address: 172.31.0.20
subnet_id: subnet-xxxxxxxx
state: present
register: eni
# Modify the interface to enable the delete_on_termination flag
- ec2_eni:
eni_id: "{{ eni.interface.id }}"
delete_on_termination: true
'''
import time
import xml.etree.ElementTree as ET
import re
try:
import boto.ec2
from boto.exception import BotoServerError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def get_error_message(xml_string):
root = ET.fromstring(xml_string)
for message in root.findall('.//Message'):
return message.text
def get_eni_info(interface):
interface_info = {'id': interface.id,
'subnet_id': interface.subnet_id,
'vpc_id': interface.vpc_id,
'description': interface.description,
'owner_id': interface.owner_id,
'status': interface.status,
'mac_address': interface.mac_address,
'private_ip_address': interface.private_ip_address,
'source_dest_check': interface.source_dest_check,
'groups': dict((group.id, group.name) for group in interface.groups),
}
if interface.attachment is not None:
interface_info['attachment'] = {'attachment_id': interface.attachment.id,
'instance_id': interface.attachment.instance_id,
'device_index': interface.attachment.device_index,
'status': interface.attachment.status,
'attach_time': interface.attachment.attach_time,
'delete_on_termination': interface.attachment.delete_on_termination,
}
return interface_info
def wait_for_eni(eni, status):
while True:
time.sleep(3)
eni.update()
# If the status is detached we just need attachment to disappear
if eni.attachment is None:
if status == "detached":
break
else:
if status == "attached" and eni.attachment.status == "attached":
break
def create_eni(connection, module):
instance_id = module.params.get("instance_id")
if instance_id == 'None':
instance_id = None
do_detach = True
else:
do_detach = False
device_index = module.params.get("device_index")
subnet_id = module.params.get('subnet_id')
private_ip_address = module.params.get('private_ip_address')
description = module.params.get('description')
security_groups = module.params.get('security_groups')
changed = False
try:
eni = compare_eni(connection, module)
if eni is None:
eni = connection.create_network_interface(subnet_id, private_ip_address, description, security_groups)
if instance_id is not None:
try:
eni.attach(instance_id, device_index)
except BotoServerError as ex:
eni.delete()
raise
changed = True
# Wait to allow creation / attachment to finish
wait_for_eni(eni, "attached")
eni.update()
except BotoServerError as e:
module.fail_json(msg=get_error_message(e.args[2]))
module.exit_json(changed=changed, interface=get_eni_info(eni))
def modify_eni(connection, module):
eni_id = module.params.get("eni_id")
instance_id = module.params.get("instance_id")
if instance_id == 'None':
instance_id = None
do_detach = True
else:
do_detach = False
device_index = module.params.get("device_index")
subnet_id = module.params.get('subnet_id')
private_ip_address = module.params.get('private_ip_address')
description = module.params.get('description')
security_groups = module.params.get('security_groups')
force_detach = module.params.get("force_detach")
source_dest_check = module.params.get("source_dest_check")
delete_on_termination = module.params.get("delete_on_termination")
changed = False
try:
# Get the eni with the eni_id specified
eni_result_set = connection.get_all_network_interfaces(eni_id)
eni = eni_result_set[0]
if description is not None:
if eni.description != description:
connection.modify_network_interface_attribute(eni.id, "description", description)
changed = True
if security_groups is not None:
if sorted(get_sec_group_list(eni.groups)) != sorted(security_groups):
connection.modify_network_interface_attribute(eni.id, "groupSet", security_groups)
changed = True
if source_dest_check is not None:
if eni.source_dest_check != source_dest_check:
connection.modify_network_interface_attribute(eni.id, "sourceDestCheck", source_dest_check)
changed = True
if delete_on_termination is not None:
if eni.attachment is not None:
if eni.attachment.delete_on_termination is not delete_on_termination:
connection.modify_network_interface_attribute(eni.id, "deleteOnTermination", delete_on_termination, eni.attachment.id)
changed = True
else:
module.fail_json(msg="Can not modify delete_on_termination as the interface is not attached")
if eni.attachment is not None and instance_id is None and do_detach is True:
eni.detach(force_detach)
wait_for_eni(eni, "detached")
changed = True
else:
if instance_id is not None:
eni.attach(instance_id, device_index)
wait_for_eni(eni, "attached")
changed = True
except BotoServerError as e:
module.fail_json(msg=get_error_message(e.args[2]))
eni.update()
module.exit_json(changed=changed, interface=get_eni_info(eni))
def delete_eni(connection, module):
eni_id = module.params.get("eni_id")
force_detach = module.params.get("force_detach")
try:
eni_result_set = connection.get_all_network_interfaces(eni_id)
eni = eni_result_set[0]
if force_detach is True:
if eni.attachment is not None:
eni.detach(force_detach)
# Wait to allow detachment to finish
wait_for_eni(eni, "detached")
eni.update()
eni.delete()
changed = True
else:
eni.delete()
changed = True
module.exit_json(changed=changed)
except BotoServerError as e:
msg = get_error_message(e.args[2])
regex = re.compile('The networkInterface ID \'.*\' does not exist')
if regex.search(msg) is not None:
module.exit_json(changed=False)
else:
module.fail_json(msg=get_error_message(e.args[2]))
def compare_eni(connection, module):
eni_id = module.params.get("eni_id")
subnet_id = module.params.get('subnet_id')
private_ip_address = module.params.get('private_ip_address')
description = module.params.get('description')
security_groups = module.params.get('security_groups')
try:
all_eni = connection.get_all_network_interfaces(eni_id)
for eni in all_eni:
remote_security_groups = get_sec_group_list(eni.groups)
if (eni.subnet_id == subnet_id) and (eni.private_ip_address == private_ip_address) and (eni.description == description) and (remote_security_groups == security_groups):
return eni
except BotoServerError as e:
module.fail_json(msg=get_error_message(e.args[2]))
return None
def get_sec_group_list(groups):
# Build list of remote security groups
remote_security_groups = []
for group in groups:
remote_security_groups.append(group.id.encode())
return remote_security_groups
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
eni_id = dict(default=None),
instance_id = dict(default=None),
private_ip_address = dict(),
subnet_id = dict(),
description = dict(),
security_groups = dict(type='list'),
device_index = dict(default=0, type='int'),
state = dict(default='present', choices=['present', 'absent']),
force_detach = dict(default='no', type='bool'),
source_dest_check = dict(default=None, type='bool'),
delete_on_termination = dict(default=None, type='bool')
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.ec2, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
state = module.params.get("state")
eni_id = module.params.get("eni_id")
if state == 'present':
if eni_id is None:
if module.params.get("subnet_id") is None:
module.fail_json(msg="subnet_id must be specified when state=present")
create_eni(connection, module)
else:
modify_eni(connection, module)
elif state == 'absent':
if eni_id is None:
module.fail_json(msg="eni_id must be specified")
else:
delete_eni(connection, module)
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

@ -0,0 +1,135 @@
#!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_eni_facts
short_description: Gather facts about ec2 ENI interfaces in AWS
description:
- Gather facts about ec2 ENI interfaces in AWS
version_added: "2.0"
author: "Rob White (@wimnat)"
options:
eni_id:
description:
- The ID of the ENI. Pass this option to gather facts about a particular ENI, otherwise, all ENIs are returned.
required: false
default: null
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Gather facts about all ENIs
- ec2_eni_facts:
# Gather facts about a particular ENI
- ec2_eni_facts:
eni_id: eni-xxxxxxx
'''
import xml.etree.ElementTree as ET
try:
import boto.ec2
from boto.exception import BotoServerError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def get_error_message(xml_string):
root = ET.fromstring(xml_string)
for message in root.findall('.//Message'):
return message.text
def get_eni_info(interface):
interface_info = {'id': interface.id,
'subnet_id': interface.subnet_id,
'vpc_id': interface.vpc_id,
'description': interface.description,
'owner_id': interface.owner_id,
'status': interface.status,
'mac_address': interface.mac_address,
'private_ip_address': interface.private_ip_address,
'source_dest_check': interface.source_dest_check,
'groups': dict((group.id, group.name) for group in interface.groups),
}
if interface.attachment is not None:
interface_info['attachment'] = {'attachment_id': interface.attachment.id,
'instance_id': interface.attachment.instance_id,
'device_index': interface.attachment.device_index,
'status': interface.attachment.status,
'attach_time': interface.attachment.attach_time,
'delete_on_termination': interface.attachment.delete_on_termination,
}
return interface_info
def list_eni(connection, module):
eni_id = module.params.get("eni_id")
interface_dict_array = []
try:
all_eni = connection.get_all_network_interfaces(eni_id)
except BotoServerError as e:
module.fail_json(msg=get_error_message(e.args[2]))
for interface in all_eni:
interface_dict_array.append(get_eni_info(interface))
module.exit_json(interfaces=interface_dict_array)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
eni_id = dict(default=None)
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.ec2, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
list_eni(connection, module)
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

@ -0,0 +1,159 @@
#!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_vpc_igw
short_description: Manage an AWS VPC Internet gateway
description:
- Manage an AWS VPC Internet gateway
version_added: "2.0"
author: Robert Estelle, @erydo
options:
vpc_id:
description:
- The VPC ID for the VPC in which to manage the Internet Gateway.
required: true
default: null
state:
description:
- Create or terminate the IGW
required: false
default: present
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Ensure that the VPC has an Internet Gateway.
# The Internet Gateway ID can be accessed via {{igw.gateway_id}} for use
# in setting up NATs etc.
local_action:
module: ec2_vpc_igw
vpc_id: "{{ vpc.vpc_id }}"
region: "{{ vpc.vpc.region }}"
state: present
register: igw
'''
import sys # noqa
try:
import boto.ec2
import boto.vpc
from boto.exception import EC2ResponseError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
if __name__ != '__main__':
raise
class AnsibleIGWException(Exception):
pass
def ensure_igw_absent(vpc_conn, vpc_id, check_mode):
igws = vpc_conn.get_all_internet_gateways(
filters={'attachment.vpc-id': vpc_id})
if not igws:
return {'changed': False}
if check_mode:
return {'changed': True}
for igw in igws:
try:
vpc_conn.detach_internet_gateway(igw.id, vpc_id)
vpc_conn.delete_internet_gateway(igw.id)
except EC2ResponseError as e:
raise AnsibleIGWException(
'Unable to delete Internet Gateway, error: {0}'.format(e))
return {'changed': True}
def ensure_igw_present(vpc_conn, vpc_id, check_mode):
igws = vpc_conn.get_all_internet_gateways(
filters={'attachment.vpc-id': vpc_id})
if len(igws) > 1:
raise AnsibleIGWException(
'EC2 returned more than one Internet Gateway for VPC {0}, aborting'
.format(vpc_id))
if igws:
return {'changed': False, 'gateway_id': igws[0].id}
else:
if check_mode:
return {'changed': True, 'gateway_id': None}
try:
igw = vpc_conn.create_internet_gateway()
vpc_conn.attach_internet_gateway(igw.id, vpc_id)
return {'changed': True, 'gateway_id': igw.id}
except EC2ResponseError as e:
raise AnsibleIGWException(
'Unable to create Internet Gateway, error: {0}'.format(e))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
vpc_id = dict(required=True),
state = dict(choices=['present', 'absent'], default='present')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
if not HAS_BOTO:
module.fail_json(msg='boto is required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.ec2, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
vpc_id = module.params.get('vpc_id')
state = module.params.get('state', 'present')
try:
if state == 'present':
result = ensure_igw_present(connection, vpc_id, check_mode=module.check_mode)
elif state == 'absent':
result = ensure_igw_absent(connection, vpc_id, check_mode=module.check_mode)
except AnsibleIGWException as e:
module.fail_json(msg=str(e))
module.exit_json(**result)
from ansible.module_utils.basic import * # noqa
from ansible.module_utils.ec2 import * # noqa
if __name__ == '__main__':
main()

@ -0,0 +1,157 @@
#!/usr/bin/python
DOCUMENTATION = '''
---
module: ec2_win_password
short_description: gets the default administrator password for ec2 windows instances
description:
- Gets the default administrator password from any EC2 Windows instance. The instance is referenced by its id (e.g. i-XXXXXXX). This module has a dependency on python-boto.
version_added: "2.0"
author: "Rick Mendes (@rickmendes)"
options:
instance_id:
description:
- The instance id to get the password data from.
required: true
key_file:
description:
- Path to the file containing the key pair used on the instance.
required: true
key_passphrase:
version_added: "2.0"
description:
- The passphrase for the instance key pair. The key must use DES or 3DES encryption for this module to decrypt it. You can use openssl to convert your password-protected keys if they do not use DES or 3DES, e.g. openssl rsa -in current_key -out new_key -des3.
required: false
default: null
region:
description:
- The AWS region to use. Must be specified if ec2_url is not used. If not specified then the value of the EC2_REGION environment variable, if any, is used.
required: false
default: null
aliases: [ 'aws_region', 'ec2_region' ]
wait:
version_added: "2.0"
description:
- Whether or not to wait for the password to be available before returning.
required: false
default: "no"
choices: [ "yes", "no" ]
wait_timeout:
version_added: "2.0"
description:
- Number of seconds to wait before giving up.
required: false
default: 120
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Example of getting a password
tasks:
- name: get the Administrator password
ec2_win_password:
profile: my-boto-profile
instance_id: i-XXXXXX
region: us-east-1
key_file: "~/aws-creds/my_test_key.pem"
# Example of getting a password with a password protected key
tasks:
- name: get the Administrator password
ec2_win_password:
profile: my-boto-profile
instance_id: i-XXXXXX
region: us-east-1
key_file: "~/aws-creds/my_protected_test_key.pem"
key_passphrase: "secret"
# Example of waiting for a password
tasks:
- name: get the Administrator password
ec2_win_password:
profile: my-boto-profile
instance_id: i-XXXXXX
region: us-east-1
key_file: "~/aws-creds/my_test_key.pem"
wait: yes
wait_timeout: 45
'''
from base64 import b64decode
from os.path import expanduser
from Crypto.Cipher import PKCS1_v1_5
from Crypto.PublicKey import RSA
import datetime
import time
try:
import boto.ec2
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
instance_id = dict(required=True),
key_file = dict(required=True),
key_passphrase = dict(no_log=True, default=None, required=False),
wait = dict(type='bool', default=False, required=False),
wait_timeout = dict(default=120, required=False),
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='Boto required for this module.')
instance_id = module.params.get('instance_id')
key_file = expanduser(module.params.get('key_file'))
key_passphrase = module.params.get('key_passphrase')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
ec2 = ec2_connect(module)
if wait:
start = datetime.datetime.now()
end = start + datetime.timedelta(seconds=wait_timeout)
while datetime.datetime.now() < end:
data = ec2.get_password_data(instance_id)
decoded = b64decode(data)
if wait and not decoded:
time.sleep(5)
else:
break
else:
data = ec2.get_password_data(instance_id)
decoded = b64decode(data)
if wait and datetime.datetime.now() >= end:
module.fail_json(msg = "wait for password timeout after %d seconds" % wait_timeout)
f = open(key_file, 'r')
key = RSA.importKey(f.read(), key_passphrase)
cipher = PKCS1_v1_5.new(key)
sentinel = 'password decryption failed!!!'
try:
decrypted = cipher.decrypt(decoded, sentinel)
except ValueError as e:
decrypted = None
if decrypted is None:
module.exit_json(win_password='', changed=False)
else:
if wait:
elapsed = datetime.datetime.now() - start
module.exit_json(win_password=decrypted, changed=True, elapsed=elapsed.seconds)
else:
module.exit_json(win_password=decrypted, changed=True)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()

@ -0,0 +1,164 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
module: route53_zone
short_description: add or delete Route53 zones
description:
- Creates and deletes Route53 private and public zones
version_added: "2.0"
options:
zone:
description:
- "The DNS zone record (eg: foo.com.)"
required: true
state:
description:
- Whether or not the zone should exist
required: false
default: present
choices: [ "present", "absent" ]
vpc_id:
description:
- The VPC ID the zone should be a part of (if this is going to be a private zone)
required: false
default: null
vpc_region:
description:
- The VPC Region the zone should be a part of (if this is going to be a private zone)
required: false
default: null
comment:
description:
- Comment associated with the zone
required: false
default: ''
extends_documentation_fragment: aws
author: "Christopher Troup (@minichate)"
'''
import time
try:
import boto
import boto.ec2
from boto import route53
from boto.route53 import Route53Connection
from boto.route53.zone import Zone
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def main():
module = AnsibleModule(
argument_spec=dict(
zone=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
vpc_id=dict(default=None),
vpc_region=dict(default=None),
comment=dict(default=''),
)
)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
zone_in = module.params.get('zone').lower()
state = module.params.get('state').lower()
vpc_id = module.params.get('vpc_id')
vpc_region = module.params.get('vpc_region')
comment = module.params.get('comment')
private_zone = vpc_id is not None and vpc_region is not None
_, _, aws_connect_kwargs = get_aws_connection_info(module)
# connect to the route53 endpoint
try:
conn = Route53Connection(**aws_connect_kwargs)
except boto.exception.BotoServerError, e:
module.fail_json(msg=e.error_message)
results = conn.get_all_hosted_zones()
zones = {}
for r53zone in results['ListHostedZonesResponse']['HostedZones']:
zone_id = r53zone['Id'].replace('/hostedzone/', '')
zone_details = conn.get_hosted_zone(zone_id)['GetHostedZoneResponse']
if vpc_id and 'VPCs' in zone_details:
# this is to deal with this boto bug: https://github.com/boto/boto/pull/2882
if isinstance(zone_details['VPCs'], dict):
if zone_details['VPCs']['VPC']['VPCId'] == vpc_id:
zones[r53zone['Name']] = zone_id
else: # Forward compatibility for when boto fixes that bug
if vpc_id in [v['VPCId'] for v in zone_details['VPCs']]:
zones[r53zone['Name']] = zone_id
else:
zones[r53zone['Name']] = zone_id
record = {
'private_zone': private_zone,
'vpc_id': vpc_id,
'vpc_region': vpc_region,
'comment': comment,
}
if state == 'present' and zone_in in zones:
if private_zone:
details = conn.get_hosted_zone(zones[zone_in])
if 'VPCs' not in details['GetHostedZoneResponse']:
module.fail_json(
msg="Can't change VPC from public to private"
)
vpc_details = details['GetHostedZoneResponse']['VPCs']['VPC']
current_vpc_id = vpc_details['VPCId']
current_vpc_region = vpc_details['VPCRegion']
if current_vpc_id != vpc_id:
module.fail_json(
msg="Can't change VPC ID once a zone has been created"
)
if current_vpc_region != vpc_region:
module.fail_json(
msg="Can't change VPC Region once a zone has been created"
)
record['zone_id'] = zones[zone_in]
record['name'] = zone_in
module.exit_json(changed=False, set=record)
elif state == 'present':
result = conn.create_hosted_zone(zone_in, **record)
hosted_zone = result['CreateHostedZoneResponse']['HostedZone']
zone_id = hosted_zone['Id'].replace('/hostedzone/', '')
record['zone_id'] = zone_id
record['name'] = zone_in
module.exit_json(changed=True, set=record)
elif state == 'absent' and zone_in in zones:
conn.delete_hosted_zone(zones[zone_in])
module.exit_json(changed=True)
elif state == 'absent':
module.exit_json(changed=False)
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()

@ -0,0 +1,184 @@
#!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: s3_logging
short_description: Manage logging facility of an s3 bucket in AWS
description:
- Manage logging facility of an s3 bucket in AWS
version_added: "2.0"
author: Rob White (@wimnat)
options:
name:
description:
- "Name of the s3 bucket."
required: true
region:
description:
- "AWS region to create the bucket in. If not set then the value of the AWS_REGION and EC2_REGION environment variables are checked, followed by the aws_region and ec2_region settings in the Boto config file. If none of those are set the region defaults to the S3 Location: US Standard."
required: false
default: null
state:
description:
- "Enable or disable logging."
required: false
default: present
choices: [ 'present', 'absent' ]
target_bucket:
description:
- "The bucket to log to. Required when state=present."
required: false
default: null
target_prefix:
description:
- "The prefix that should be prepended to the generated log files written to the target_bucket."
required: false
default: ""
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
- name: Enable logging of s3 bucket mywebsite.com to s3 bucket mylogs
s3_logging:
name: mywebsite.com
target_bucket: mylogs
target_prefix: logs/mywebsite.com
state: present
- name: Remove logging on an s3 bucket
s3_logging:
name: mywebsite.com
state: absent
'''
try:
import boto.ec2
from boto.s3.connection import OrdinaryCallingFormat, Location
from boto.exception import BotoServerError, S3CreateError, S3ResponseError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def compare_bucket_logging(bucket, target_bucket, target_prefix):
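# Compare the bucket's current logging status with the desired target bucket and prefix;
# returns True when they already match (no change needed), False otherwise.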
bucket_log_obj = bucket.get_logging_status()
if bucket_log_obj.target != target_bucket or bucket_log_obj.prefix != target_prefix:
return False
else:
return True
def enable_bucket_logging(connection, module):
bucket_name = module.params.get("name")
target_bucket = module.params.get("target_bucket")
target_prefix = module.params.get("target_prefix")
changed = False
try:
bucket = connection.get_bucket(bucket_name)
except S3ResponseError as e:
module.fail_json(msg=e.message)
try:
if not compare_bucket_logging(bucket, target_bucket, target_prefix):
# Before we can enable logging we must give the log-delivery group WRITE and READ_ACP permissions to the target bucket
try:
target_bucket_obj = connection.get_bucket(target_bucket)
except S3ResponseError as e:
if e.status == 301:
module.fail_json(msg="the logging target bucket must be in the same region as the bucket being logged")
else:
module.fail_json(msg=e.message)
target_bucket_obj.set_as_logging_target()
bucket.enable_logging(target_bucket, target_prefix)
changed = True
except S3ResponseError as e:
module.fail_json(msg=e.message)
module.exit_json(changed=changed)
def disable_bucket_logging(connection, module):
bucket_name = module.params.get("name")
changed = False
try:
bucket = connection.get_bucket(bucket_name)
if not compare_bucket_logging(bucket, None, None):
bucket.disable_logging()
changed = True
except S3ResponseError as e:
module.fail_json(msg=e.message)
module.exit_json(changed=changed)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
name = dict(required=True),
target_bucket = dict(required=False, default=None),
target_prefix = dict(required=False, default=""),
state = dict(required=False, default='present', choices=['present', 'absent'])
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region in ('us-east-1', '', None):
# S3ism for the US Standard region
location = Location.DEFAULT
else:
# Boto uses symbolic names for locations but region strings will
# actually work fine for everything except us-east-1 (US Standard)
location = region
try:
connection = boto.s3.connect_to_region(location, is_secure=True, calling_format=OrdinaryCallingFormat(), **aws_connect_params)
# use this as fallback because connect_to_region seems to fail in boto + non 'classic' aws accounts in some cases
if connection is None:
connection = boto.connect_s3(**aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
state = module.params.get("state")
if state == 'present':
enable_bucket_logging(connection, module)
elif state == 'absent':
disable_bucket_logging(connection, module)
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
if __name__ == '__main__':
main()

@ -0,0 +1,154 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: sts_assume_role
short_description: Assume a role using AWS Security Token Service and obtain temporary credentials
description:
- Assume a role using AWS Security Token Service and obtain temporary credentials
version_added: "2.0"
author: Boris Ekelchik (@bekelchik)
options:
role_arn:
description:
- The Amazon Resource Name (ARN) of the role that the caller is assuming (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#Identifiers_ARNs)
required: true
role_session_name:
description:
- Name of the role's session - will be used by CloudTrail
required: true
policy:
description:
- Supplemental policy to use in addition to assumed role's policies.
required: false
default: null
duration_seconds:
description:
- The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds.
required: false
default: null
external_id:
description:
- A unique identifier that is used by third parties to assume a role in their customers' accounts.
required: false
default: null
mfa_serial_number:
description:
- The identification number of the MFA device that is associated with the user who is making the AssumeRole call.
required: false
default: null
mfa_token:
description:
- The value provided by the MFA device, if the trust policy of the role being assumed requires MFA.
required: false
default: null
notes:
- In order to use the assumed role in a following playbook task you must pass the access_key, secret_key and session_token returned in sts_creds
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Assume an existing role (more details: http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)
sts_assume_role:
role_arn: "arn:aws:iam::123456789012:role/someRole"
role_session_name: "someRoleSession"
register: assumed_role
# Use the assumed role above to tag an instance in account 123456789012
ec2_tag:
aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
security_token: "{{ assumed_role.sts_creds.session_token }}"
resource: i-xyzxyz01
state: present
tags:
MyNewTag: value
'''
import sys
import time
try:
import boto.sts
from boto.exception import BotoServerError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def assume_role_policy(connection, module):
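# Call STS AssumeRole with the module parameters and exit with the temporary
# credentials (sts_creds) and the assumed-role user details (sts_user).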
role_arn = module.params.get('role_arn')
role_session_name = module.params.get('role_session_name')
policy = module.params.get('policy')
duration_seconds = module.params.get('duration_seconds')
external_id = module.params.get('external_id')
mfa_serial_number = module.params.get('mfa_serial_number')
mfa_token = module.params.get('mfa_token')
changed = False
try:
assumed_role = connection.assume_role(role_arn, role_session_name, policy, duration_seconds, external_id, mfa_serial_number, mfa_token)
changed = True
except BotoServerError, e:
module.fail_json(msg=str(e))
module.exit_json(changed=changed, sts_creds=assumed_role.credentials.__dict__, sts_user=assumed_role.user.__dict__)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
role_arn = dict(required=True, default=None),
role_session_name = dict(required=True, default=None),
duration_seconds = dict(required=False, default=None, type='int'),
external_id = dict(required=False, default=None),
policy = dict(required=False, default=None),
mfa_serial_number = dict(required=False, default=None),
mfa_token = dict(required=False, default=None)
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
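# connect_to_aws needs an explicit region; fail if none could be determined from the module parameters, environment or boto config.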
if region:
try:
connection = connect_to_aws(boto.sts, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
try:
assume_role_policy(connection, module)
except BotoServerError, e:
module.fail_json(msg=str(e))
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()

@ -0,0 +1,408 @@
#!/usr/bin/python
#
# Copyright (c) 2015 CenturyLink
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>
#
DOCUMENTATION = '''
module: clc_group
short_description: Create/delete Server Groups at Centurylink Cloud
description:
- Create or delete Server Groups at Centurylink Cloud
version_added: "2.0"
options:
name:
description:
- The name of the Server Group
required: True
description:
description:
- A description of the Server Group
required: False
parent:
description:
- The parent group of the server group. If parent is not provided, it creates the group at top level.
required: False
location:
description:
- Datacenter to create the group in. If location is not provided, the group gets created in the default datacenter
associated with the account
required: False
state:
description:
- Whether to create or delete the group
default: present
choices: ['present', 'absent']
wait:
description:
- Whether to wait for the tasks to finish before returning.
choices: [ True, False ]
default: True
required: False
requirements:
- python = 2.7
- requests >= 2.5.0
- clc-sdk
notes:
- To use this module, it is required to set the below environment variables which enable access to the
Centurylink Cloud
- CLC_V2_API_USERNAME, the account login id for the centurylink cloud
- CLC_V2_API_PASSWD, the account password for the centurylink cloud
- Alternatively, the module accepts the API token and account alias. The API token can be generated using the
CLC account login and password via the HTTP api call @ https://api.ctl.io/v2/authentication/login
- CLC_V2_API_TOKEN, the API token generated from https://api.ctl.io/v2/authentication/login
- CLC_ACCT_ALIAS, the account alias associated with the centurylink cloud
- Users can set CLC_V2_API_URL to specify an endpoint for pointing to a different CLC environment.
'''
EXAMPLES = '''
# Create a Server Group
---
- name: Create Server Group
hosts: localhost
gather_facts: False
connection: local
tasks:
- name: Create / Verify a Server Group at CenturyLink Cloud
clc_group:
name: 'My Cool Server Group'
parent: 'Default Group'
state: present
register: clc
- name: debug
debug: var=clc
# Delete a Server Group
---
- name: Delete Server Group
hosts: localhost
gather_facts: False
connection: local
tasks:
- name: Delete / Verify Absent a Server Group at CenturyLink Cloud
clc_group:
name: 'My Cool Server Group'
parent: 'Default Group'
state: absent
register: clc
- name: debug
debug: var=clc
'''
__version__ = '${version}'
from distutils.version import LooseVersion
try:
import requests
except ImportError:
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
try:
import clc as clc_sdk
from clc import CLCException
except ImportError:
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
class ClcGroup(object):
clc = None
root_group = None
def __init__(self, module):
"""
Construct module
"""
self.clc = clc_sdk
self.module = module
self.group_dict = {}
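# Maps group name -> (group, parent_group) tuples; populated by _get_group_tree_for_datacenter().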
if not CLC_FOUND:
self.module.fail_json(
msg='clc-python-sdk required for this module')
if not REQUESTS_FOUND:
self.module.fail_json(
msg='requests library is required for this module')
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Execute the main code path, and handle the request
:return: none
"""
location = self.module.params.get('location')
group_name = self.module.params.get('name')
parent_name = self.module.params.get('parent')
group_description = self.module.params.get('description')
state = self.module.params.get('state')
self._set_clc_credentials_from_env()
self.group_dict = self._get_group_tree_for_datacenter(
datacenter=location)
if state == "absent":
changed, group, requests = self._ensure_group_is_absent(
group_name=group_name, parent_name=parent_name)
else:
changed, group, requests = self._ensure_group_is_present(
group_name=group_name, parent_name=parent_name, group_description=group_description)
if requests:
self._wait_for_requests_to_complete(requests)
self.module.exit_json(changed=changed, group=group_name)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
name=dict(required=True),
description=dict(default=None),
parent=dict(default=None),
location=dict(default=None),
state=dict(default='present', choices=['present', 'absent']),
wait=dict(type='bool', default=True))
return argument_spec
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _ensure_group_is_absent(self, group_name, parent_name):
"""
Ensure that group_name is absent by deleting it if necessary
:param group_name: string - the name of the clc server group to delete
:param parent_name: string - the name of the parent group for group_name
:return: changed, group
"""
changed = False
requests = []
if self._group_exists(group_name=group_name, parent_name=parent_name):
if not self.module.check_mode:
request = self._delete_group(group_name)
requests.append(request)
changed = True
return changed, group_name, requests
def _delete_group(self, group_name):
"""
Delete the provided server group
:param group_name: string - the server group to delete
:return: none
"""
response = None
group, parent = self.group_dict.get(group_name)
try:
response = group.Delete()
except CLCException, ex:
self.module.fail_json(msg='Failed to delete group :{0}. {1}'.format(
group_name, ex.response_text
))
return response
def _ensure_group_is_present(
self,
group_name,
parent_name,
group_description):
"""
Checks to see if a server group exists, creates it if it doesn't.
:param group_name: the name of the group to validate/create
:param parent_name: the name of the parent group for group_name
:param group_description: a short description of the server group (used when creating)
:return: (changed, group) -
changed: Boolean- whether a change was made,
group: A clc group object for the group
"""
assert self.root_group, "Implementation Error: Root Group not set"
parent = parent_name if parent_name is not None else self.root_group.name
description = group_description
changed = False
parent_exists = self._group_exists(group_name=parent, parent_name=None)
child_exists = self._group_exists(
group_name=group_name,
parent_name=parent)
if parent_exists and child_exists:
group, parent = self.group_dict[group_name]
changed = False
elif parent_exists and not child_exists:
if not self.module.check_mode:
self._create_group(
group=group_name,
parent=parent,
description=description)
changed = True
else:
self.module.fail_json(
msg="parent group: " +
parent +
" does not exist")
return changed, group_name, None
def _create_group(self, group, parent, description):
"""
Create the provided server group
:param group: clc_sdk.Group - the group to create
:param parent: clc_sdk.Parent - the parent group for {group}
:param description: string - a text description of the group
:return: clc_sdk.Group - the created group
"""
response = None
(parent, grandparent) = self.group_dict[parent]
try:
response = parent.Create(name=group, description=description)
except CLCException, ex:
self.module.fail_json(msg='Failed to create group :{0}. {1}'.format(
group, ex.response_text
))
return response
def _group_exists(self, group_name, parent_name):
"""
Check to see if a group exists
:param group_name: string - the group to check
:param parent_name: string - the parent of group_name
:return: boolean - whether the group exists
"""
result = False
if group_name in self.group_dict:
(group, parent) = self.group_dict[group_name]
if parent_name is None or parent_name == parent.name:
result = True
return result
def _get_group_tree_for_datacenter(self, datacenter=None):
"""
Walk the tree of groups for a datacenter
:param datacenter: string - the datacenter to walk (ex: 'UC1')
:return: a dictionary of groups and parents
"""
self.root_group = self.clc.v2.Datacenter(
location=datacenter).RootGroup()
return self._walk_groups_recursive(
parent_group=None,
child_group=self.root_group)
def _walk_groups_recursive(self, parent_group, child_group):
"""
Walk a parent-child tree of groups, starting with the provided child group
:param parent_group: clc_sdk.Group - the parent group to start the walk
:param child_group: clc_sdk.Group - the child group to start the walk
:return: a dictionary of groups and parents
"""
result = {str(child_group): (child_group, parent_group)}
groups = child_group.Subgroups().groups
if len(groups) > 0:
for group in groups:
if group.type != 'default':
continue
result.update(self._walk_groups_recursive(child_group, group))
return result
def _wait_for_requests_to_complete(self, requests_lst):
"""
Waits until the CLC requests are complete if the wait argument is True
:param requests_lst: The list of CLC request objects
:return: none
"""
if not self.module.params['wait']:
return
for request in requests_lst:
request.WaitUntilComplete()
for request_details in request.requests:
if request_details.Status() != 'succeeded':
self.module.fail_json(
msg='Unable to process group request')
@staticmethod
def _set_user_agent(clc):
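# Tag CLC SDK requests with this module's version so the API calls can be identified server side.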
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
module = AnsibleModule(
argument_spec=ClcGroup._define_module_argument_spec(),
supports_check_mode=True)
clc_group = ClcGroup(module)
clc_group.process_request()
from ansible.module_utils.basic import * # pylint: disable=W0614
if __name__ == '__main__':
main()

@ -0,0 +1,353 @@
#!/usr/bin/python
#
# Copyright (c) 2015 CenturyLink
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>
#
DOCUMENTATION = '''
module: clc_publicip
short_description: Add and Delete public ips on servers in CenturyLink Cloud.
description:
- An Ansible module to add or delete public ip addresses on an existing server or servers in CenturyLink Cloud.
version_added: "2.0"
options:
protocol:
description:
- The protocol that the public IP will listen for.
default: TCP
choices: ['TCP', 'UDP', 'ICMP']
required: False
ports:
description:
- A list of ports to expose.
required: True
server_ids:
description:
- A list of servers to create public ips on.
required: True
state:
description:
- Determine whether to create or delete public IPs. If present, the module will not create a second public ip if one
already exists.
default: present
choices: ['present', 'absent']
required: False
wait:
description:
- Whether to wait for the tasks to finish before returning.
choices: [ True, False ]
default: True
required: False
requirements:
- python = 2.7
- requests >= 2.5.0
- clc-sdk
notes:
- To use this module, it is required to set the below environment variables which enable access to the
Centurylink Cloud
- CLC_V2_API_USERNAME, the account login id for the centurylink cloud
- CLC_V2_API_PASSWD, the account password for the centurylink cloud
- Alternatively, the module accepts the API token and account alias. The API token can be generated using the
CLC account login and password via the HTTP api call @ https://api.ctl.io/v2/authentication/login
- CLC_V2_API_TOKEN, the API token generated from https://api.ctl.io/v2/authentication/login
- CLC_ACCT_ALIAS, the account alias associated with the centurylink cloud
- Users can set CLC_V2_API_URL to specify an endpoint for pointing to a different CLC environment.
'''
EXAMPLES = '''
# Note - You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD environment variables before running these examples
- name: Add Public IP to Server
hosts: localhost
gather_facts: False
connection: local
tasks:
- name: Create Public IP For Servers
clc_publicip:
protocol: 'TCP'
ports:
- 80
server_ids:
- UC1ACCTSRVR01
- UC1ACCTSRVR02
state: present
register: clc
- name: debug
debug: var=clc
- name: Delete Public IP from Server
hosts: localhost
gather_facts: False
connection: local
tasks:
- name: Delete Public IP From Servers
clc_publicip:
server_ids:
- UC1ACCTSRVR01
- UC1ACCTSRVR02
state: absent
register: clc
- name: debug
debug: var=clc
'''
__version__ = '${version}'
from distutils.version import LooseVersion
try:
import requests
except ImportError:
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
try:
import clc as clc_sdk
from clc import CLCException
except ImportError:
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
class ClcPublicIp(object):
clc = clc_sdk
module = None
group_dict = {}
def __init__(self, module):
"""
Construct module
"""
self.module = module
if not CLC_FOUND:
self.module.fail_json(
msg='clc-python-sdk required for this module')
if not REQUESTS_FOUND:
self.module.fail_json(
msg='requests library is required for this module')
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Process the request - Main Code Path
:param params: dictionary of module parameters
:return: Returns with either an exit_json or fail_json
"""
self._set_clc_credentials_from_env()
params = self.module.params
server_ids = params['server_ids']
ports = params['ports']
protocol = params['protocol']
state = params['state']
requests = []
changed_server_ids = []
changed = False
if state == 'present':
changed, changed_server_ids, requests = self.ensure_public_ip_present(
server_ids=server_ids, protocol=protocol, ports=ports)
elif state == 'absent':
changed, changed_server_ids, requests = self.ensure_public_ip_absent(
server_ids=server_ids)
else:
return self.module.fail_json(msg="Unknown State: " + state)
self._wait_for_requests_to_complete(requests)
return self.module.exit_json(changed=changed,
server_ids=changed_server_ids)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
server_ids=dict(type='list', required=True),
protocol=dict(default='TCP', choices=['TCP', 'UDP', 'ICMP']),
ports=dict(type='list', required=True),
wait=dict(type='bool', default=True),
state=dict(default='present', choices=['present', 'absent']),
)
return argument_spec
def ensure_public_ip_present(self, server_ids, protocol, ports):
"""
Ensures each of the given server ids has a public ip available
:param server_ids: the list of server ids
:param protocol: the ip protocol
:param ports: the list of ports to expose
:return: (changed, changed_server_ids, results)
changed: A flag indicating if there is any change
changed_server_ids : the list of server ids that are changed
results: The result list from clc public ip call
"""
changed = False
results = []
changed_server_ids = []
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
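# Only servers that do not already have a public ip need to be changed.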
servers_to_change = [
server for server in servers if len(
server.PublicIPs().public_ips) == 0]
ports_to_expose = [{'protocol': protocol, 'port': port}
for port in ports]
for server in servers_to_change:
if not self.module.check_mode:
result = self._add_publicip_to_server(server, ports_to_expose)
results.append(result)
changed_server_ids.append(server.id)
changed = True
return changed, changed_server_ids, results
def _add_publicip_to_server(self, server, ports_to_expose):
result = None
try:
result = server.PublicIPs().Add(ports_to_expose)
except CLCException, ex:
self.module.fail_json(msg='Failed to add public ip to the server : {0}. {1}'.format(
server.id, ex.response_text
))
return result
def ensure_public_ip_absent(self, server_ids):
"""
Ensures any public ip on the given server ids is removed
:param server_ids: the list of server ids
:return: (changed, changed_server_ids, results)
changed: A flag indicating if there is any change
changed_server_ids : the list of server ids that are changed
results: The result list from clc public ip call
"""
changed = False
results = []
changed_server_ids = []
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
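# Only servers that currently have at least one public ip need to be changed.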
servers_to_change = [
server for server in servers if len(
server.PublicIPs().public_ips) > 0]
for server in servers_to_change:
if not self.module.check_mode:
result = self._remove_publicip_from_server(server)
results.append(result)
changed_server_ids.append(server.id)
changed = True
return changed, changed_server_ids, results
def _remove_publicip_from_server(self, server):
try:
for ip_address in server.PublicIPs().public_ips:
result = ip_address.Delete()
except CLCException, ex:
self.module.fail_json(msg='Failed to remove public ip from the server : {0}. {1}'.format(
server.id, ex.response_text
))
return result
def _wait_for_requests_to_complete(self, requests_lst):
"""
Waits until the CLC requests are complete if the wait argument is True
:param requests_lst: The list of CLC request objects
:return: none
"""
if not self.module.params['wait']:
return
for request in requests_lst:
request.WaitUntilComplete()
for request_details in request.requests:
if request_details.Status() != 'succeeded':
self.module.fail_json(
msg='Unable to process public ip request')
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _get_servers_from_clc(self, server_ids, message):
"""
Gets a list of servers from the CLC API
"""
try:
return self.clc.v2.Servers(server_ids).servers
except CLCException as exception:
self.module.fail_json(msg=message + ': %s' % exception)
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
module = AnsibleModule(
argument_spec=ClcPublicIp._define_module_argument_spec(),
supports_check_mode=True
)
clc_public_ip = ClcPublicIp(module)
clc_public_ip.process_request()
from ansible.module_utils.basic import * # pylint: disable=W0614
if __name__ == '__main__':
main()

@ -0,0 +1,408 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_account
short_description: Manages account on Apache CloudStack based clouds.
description:
- Create, disable, lock, enable and remove accounts.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of account.
required: true
username:
description:
- Username of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
password:
description:
- Password of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
first_name:
description:
- First name of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
last_name:
description:
- Last name of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
email:
description:
- Email of the user to be created if account did not exist.
- Required on C(state=present).
required: false
default: null
timezone:
description:
- Timezone of the user to be created if account did not exist.
required: false
default: null
network_domain:
description:
- Network domain of the account.
required: false
default: null
account_type:
description:
- Type of the account.
required: false
default: 'user'
choices: [ 'user', 'root_admin', 'domain_admin' ]
domain:
description:
- Domain the account is related to.
required: false
default: 'ROOT'
state:
description:
- State of the account.
required: false
default: 'present'
choices: [ 'present', 'absent', 'enabled', 'disabled', 'locked' ]
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# create an account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
username: customer_xy
password: S3Cur3
last_name: Doe
first_name: John
email: john.doe@example.com
domain: CUSTOMERS
# Lock an existing account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: locked
# Disable an existing account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: disabled
# Enable an existing account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: enabled
# Remove an account in domain 'CUSTOMERS'
local_action:
module: cs_account
name: customer_xy
domain: CUSTOMERS
state: absent
'''
RETURN = '''
---
name:
description: Name of the account.
returned: success
type: string
sample: linus@example.com
account_type:
description: Type of the account.
returned: success
type: string
sample: user
account_state:
description: State of the account.
returned: success
type: string
sample: enabled
network_domain:
description: Network domain of the account.
returned: success
type: string
sample: example.local
domain:
description: Domain the account is related to.
returned: success
type: string
sample: ROOT
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackAccount(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.account = None
self.account_types = {
'user': 0,
'root_admin': 1,
'domain_admin': 2,
}
def get_account_type(self):
account_type = self.module.params.get('account_type')
return self.account_types[account_type]
def get_account(self):
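# Look up the account by name within the configured domain; the result is cached on self.account.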
if not self.account:
args = {}
args['listall'] = True
args['domainid'] = self.get_domain('id')
accounts = self.cs.listAccounts(**args)
if accounts:
account_name = self.module.params.get('name')
for a in accounts['account']:
if account_name in [ a['name'] ]:
self.account = a
break
return self.account
def enable_account(self):
account = self.get_account()
if not account:
self.module.fail_json(msg="Failed: account not present")
if account['state'].lower() != 'enabled':
self.result['changed'] = True
args = {}
args['id'] = account['id']
args['account'] = self.module.params.get('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
res = self.cs.enableAccount(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
account = res['account']
return account
def lock_account(self):
return self.lock_or_disable_account(lock=True)
def disable_account(self):
return self.lock_or_disable_account()
def lock_or_disable_account(self, lock=False):
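# CloudStack locks and disables accounts through the same disableAccount API call; the lock flag selects the behaviour.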
account = self.get_account()
if not account:
self.module.fail_json(msg="Failed: account not present")
# we need to enable the account to lock it.
if lock and account['state'].lower() == 'disabled':
account = self.enable_account()
if lock and account['state'].lower() != 'locked' \
or not lock and account['state'].lower() != 'disabled':
self.result['changed'] = True
args = {}
args['id'] = account['id']
args['account'] = self.module.params.get('name')
args['domainid'] = self.get_domain('id')
args['lock'] = lock
if not self.module.check_mode:
account = self.cs.disableAccount(**args)
if 'errortext' in account:
self.module.fail_json(msg="Failed: '%s'" % account['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
account = self._poll_job(account, 'account')
return account
def present_account(self):
missing_params = []
if not self.module.params.get('email'):
missing_params.append('email')
if not self.module.params.get('username'):
missing_params.append('username')
if not self.module.params.get('password'):
missing_params.append('password')
if not self.module.params.get('first_name'):
missing_params.append('first_name')
if not self.module.params.get('last_name'):
missing_params.append('last_name')
if missing_params:
self.module.fail_json(msg="missing required arguments: %s" % ','.join(missing_params))
account = self.get_account()
if not account:
self.result['changed'] = True
args = {}
args['account'] = self.module.params.get('name')
args['domainid'] = self.get_domain('id')
args['accounttype'] = self.get_account_type()
args['networkdomain'] = self.module.params.get('network_domain')
args['username'] = self.module.params.get('username')
args['password'] = self.module.params.get('password')
args['firstname'] = self.module.params.get('first_name')
args['lastname'] = self.module.params.get('last_name')
args['email'] = self.module.params.get('email')
args['timezone'] = self.module.params.get('timezone')
if not self.module.check_mode:
res = self.cs.createAccount(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
account = res['account']
return account
def absent_account(self):
account = self.get_account()
if account:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.deleteAccount(id=account['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'account')
return account
def get_result(self, account):
if account:
if 'name' in account:
self.result['name'] = account['name']
if 'accounttype' in account:
for key,value in self.account_types.items():
if value == account['accounttype']:
self.result['account_type'] = key
break
if 'state' in account:
self.result['account_state'] = account['state']
if 'domain' in account:
self.result['domain'] = account['domain']
if 'networkdomain' in account:
self.result['network_domain'] = account['networkdomain']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(choices=['present', 'absent', 'enabled', 'disabled', 'locked' ], default='present'),
account_type = dict(choices=['user', 'root_admin', 'domain_admin'], default='user'),
network_domain = dict(default=None),
domain = dict(default='ROOT'),
email = dict(default=None),
first_name = dict(default=None),
last_name = dict(default=None),
username = dict(default=None),
password = dict(default=None),
timezone = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_acc = AnsibleCloudStackAccount(module)
state = module.params.get('state')
if state in ['absent']:
account = acs_acc.absent_account()
elif state in ['enabled']:
account = acs_acc.enable_account()
elif state in ['disabled']:
account = acs_acc.disable_account()
elif state in ['locked']:
account = acs_acc.lock_account()
else:
account = acs_acc.present_account()
result = acs_acc.get_result(account)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,254 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_affinitygroup
short_description: Manages affinity groups on Apache CloudStack based clouds.
description:
- Create and remove affinity groups.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the affinity group.
required: true
affinty_type:
description:
- Type of the affinity group. If not specified, the first affinity type found is used.
required: false
default: null
description:
description:
- Description of the affinity group.
required: false
default: null
state:
description:
- State of the affinity group.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
domain:
description:
- Domain the affinity group is related to.
required: false
default: null
account:
description:
- Account the affinity group is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create an affinity group
- local_action:
module: cs_affinitygroup
name: haproxy
affinty_type: host anti-affinity
# Remove an affinity group
- local_action:
module: cs_affinitygroup
name: haproxy
state: absent
'''
RETURN = '''
---
name:
description: Name of affinity group.
returned: success
type: string
sample: app
description:
description: Description of affinity group.
returned: success
type: string
sample: application affinity group
affinity_type:
description: Type of affinity group.
returned: success
type: string
sample: host anti-affinity
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackAffinityGroup(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.affinity_group = None
def get_affinity_group(self):
if not self.affinity_group:
affinity_group = self.module.params.get('name')
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
affinity_groups = self.cs.listAffinityGroups(**args)
if affinity_groups:
for a in affinity_groups['affinitygroup']:
if affinity_group in [ a['name'], a['id'] ]:
self.affinity_group = a
break
return self.affinity_group
def get_affinity_type(self):
affinity_type = self.module.params.get('affinty_type')
affinity_types = self.cs.listAffinityGroupTypes()
if affinity_types:
if not affinity_type:
return affinity_types['affinityGroupType'][0]['type']
for a in affinity_types['affinityGroupType']:
if a['type'] == affinity_type:
return a['type']
self.module.fail_json(msg="affinity group type '%s' not found" % affinity_type)
def create_affinity_group(self):
affinity_group = self.get_affinity_group()
if not affinity_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['type'] = self.get_affinity_type()
args['description'] = self.module.params.get('description')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
res = self.cs.createAffinityGroup(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
affinity_group = self._poll_job(res, 'affinitygroup')
return affinity_group
def remove_affinity_group(self):
affinity_group = self.get_affinity_group()
if affinity_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
res = self.cs.deleteAffinityGroup(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'affinitygroup')
return affinity_group
def get_result(self, affinity_group):
if affinity_group:
if 'name' in affinity_group:
self.result['name'] = affinity_group['name']
if 'description' in affinity_group:
self.result['description'] = affinity_group['description']
if 'type' in affinity_group:
self.result['affinity_type'] = affinity_group['type']
if 'domain' in affinity_group:
self.result['domain'] = affinity_group['domain']
if 'account' in affinity_group:
self.result['account'] = affinity_group['account']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
affinty_type = dict(default=None),
description = dict(default=None),
state = dict(choices=['present', 'absent'], default='present'),
domain = dict(default=None),
account = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_ag = AnsibleCloudStackAffinityGroup(module)
state = module.params.get('state')
if state in ['absent']:
affinity_group = acs_ag.remove_affinity_group()
else:
affinity_group = acs_ag.create_affinity_group()
result = acs_ag.get_result(affinity_group)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,221 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_facts
short_description: Gather facts on instances of Apache CloudStack based clouds.
description:
- This module fetches data from the metadata API in CloudStack. The module must be called from within the instance itself.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
filter:
description:
- Filter for a specific fact.
required: false
default: null
choices:
- cloudstack_service_offering
- cloudstack_availability_zone
- cloudstack_public_hostname
- cloudstack_public_ipv4
- cloudstack_local_hostname
- cloudstack_local_ipv4
- cloudstack_instance_id
- cloudstack_user_data
requirements: [ 'yaml' ]
'''
EXAMPLES = '''
# Gather all facts on instances
- name: Gather cloudstack facts
cs_facts:
# Gather specific fact on instances
- name: Gather cloudstack facts
cs_facts: filter=cloudstack_instance_id
'''
RETURN = '''
---
cloudstack_availability_zone:
description: zone the instance is deployed in.
returned: success
type: string
sample: ch-gva-2
cloudstack_instance_id:
description: UUID of the instance.
returned: success
type: string
sample: ab4e80b0-3e7e-4936-bdc5-e334ba5b0139
cloudstack_local_hostname:
description: local hostname of the instance.
returned: success
type: string
sample: VM-ab4e80b0-3e7e-4936-bdc5-e334ba5b0139
cloudstack_local_ipv4:
description: local IPv4 of the instance.
returned: success
type: string
sample: 185.19.28.35
cloudstack_public_hostname:
description: public hostname of the instance.
returned: success
type: string
sample: VM-ab4e80b0-3e7e-4936-bdc5-e334ba5b0139
cloudstack_public_ipv4:
description: public IPv4 of the instance.
returned: success
type: string
sample: 185.19.28.35
cloudstack_service_offering:
description: service offering of the instance.
returned: success
type: string
sample: Micro 512mb 1cpu
cloudstack_user_data:
description: data of the instance provided by users.
returned: success
type: dict
sample: { "bla": "foo" }
'''
import os
try:
import yaml
has_lib_yaml = True
except ImportError:
has_lib_yaml = False
CS_METADATA_BASE_URL = "http://%s/latest/meta-data"
CS_USERDATA_BASE_URL = "http://%s/latest/user-data"
class CloudStackFacts(object):
def __init__(self):
self.facts = ansible_facts(module)
self.api_ip = None
self.fact_paths = {
'cloudstack_service_offering': 'service-offering',
'cloudstack_availability_zone': 'availability-zone',
'cloudstack_public_hostname': 'public-hostname',
'cloudstack_public_ipv4': 'public-ipv4',
'cloudstack_local_hostname': 'local-hostname',
'cloudstack_local_ipv4': 'local-ipv4',
'cloudstack_instance_id': 'instance-id'
}
def run(self):
result = {}
filter = module.params.get('filter')
if not filter:
for key,path in self.fact_paths.iteritems():
result[key] = self._fetch(CS_METADATA_BASE_URL + "/" + path)
result['cloudstack_user_data'] = self._get_user_data_json()
else:
if filter == 'cloudstack_user_data':
result['cloudstack_user_data'] = self._get_user_data_json()
elif filter in self.fact_paths:
result[filter] = self._fetch(CS_METADATA_BASE_URL + "/" + self.fact_paths[filter])
return result
def _get_user_data_json(self):
try:
# this data comes from users, we do what we can to parse it...
return yaml.load(self._fetch(CS_USERDATA_BASE_URL))
except:
return None
def _fetch(self, path):
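# Fetch the given path from the CloudStack metadata service, which is served by the DHCP server IP discovered from the lease file.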
api_ip = self._get_api_ip()
if not api_ip:
return None
api_url = path % api_ip
(response, info) = fetch_url(module, api_url, force=True)
if response:
data = response.read()
else:
data = None
return data
def _get_dhcp_lease_file(self):
"""Return the path of the lease file."""
default_iface = self.facts['default_ipv4']['interface']
dhcp_lease_file_locations = [
'/var/lib/dhcp/dhclient.%s.leases' % default_iface, # debian / ubuntu
'/var/lib/dhclient/dhclient-%s.leases' % default_iface, # centos 6
'/var/lib/dhclient/dhclient--%s.lease' % default_iface, # centos 7
'/var/db/dhclient.leases.%s' % default_iface, # openbsd
]
for file_path in dhcp_lease_file_locations:
if os.path.exists(file_path):
return file_path
module.fail_json(msg="Could not find dhclient leases file.")
def _get_api_ip(self):
"""Return the IP of the DHCP server."""
if not self.api_ip:
dhcp_lease_file = self._get_dhcp_lease_file()
for line in open(dhcp_lease_file):
if 'dhcp-server-identifier' in line:
# get IP of string "option dhcp-server-identifier 185.19.28.176;"
line = line.translate(None, ';')
self.api_ip = line.split()[2]
break
if not self.api_ip:
module.fail_json(msg="No dhcp-server-identifier found in leases file.")
return self.api_ip
def main():
global module
module = AnsibleModule(
argument_spec = dict(
filter = dict(default=None, choices=[
'cloudstack_service_offering',
'cloudstack_availability_zone',
'cloudstack_public_hostname',
'cloudstack_public_ipv4',
'cloudstack_local_hostname',
'cloudstack_local_ipv4',
'cloudstack_instance_id',
'cloudstack_user_data',
]),
),
supports_check_mode=False
)
if not has_lib_yaml:
module.fail_json(msg="missing python library: yaml")
cs_facts = CloudStackFacts().run()
cs_facts_result = dict(changed=False, ansible_facts=cs_facts)
module.exit_json(**cs_facts_result)
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
from ansible.module_utils.facts import *
main()

@ -19,28 +19,45 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>. # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = ''' DOCUMENTATION = '''
---
module: cs_firewall module: cs_firewall
short_description: Manages firewall rules on Apache CloudStack based clouds. short_description: Manages firewall rules on Apache CloudStack based clouds.
description: Creates and removes firewall rules. description:
- Creates and removes firewall rules.
version_added: '2.0' version_added: '2.0'
author: René Moser author: "René Moser (@resmo)"
options: options:
ip_address: ip_address:
description: description:
- Public IP address the rule is assigned to. - Public IP address the ingress rule is assigned to.
required: true - Required if C(type=ingress).
required: false
default: null
network:
description:
- Network the egress rule is related to.
- Required if C(type=egress).
required: false
default: null
state: state:
description: description:
- State of the firewall rule. - State of the firewall rule.
required: false required: false
default: 'present' default: 'present'
choices: [ 'present', 'absent' ] choices: [ 'present', 'absent' ]
type:
description:
- Type of the firewall rule.
required: false
default: 'ingress'
choices: [ 'ingress', 'egress' ]
protocol: protocol:
description: description:
- Protocol of the firewall rule. - Protocol of the firewall rule.
- C(all) is only available if C(type=egress)
required: false required: false
default: 'tcp' default: 'tcp'
choices: [ 'tcp', 'udp', 'icmp' ] choices: [ 'tcp', 'udp', 'icmp', 'all' ]
cidr: cidr:
description: description:
- CIDR (full notation) to be used for firewall rule. - CIDR (full notation) to be used for firewall rule.
@ -51,9 +68,10 @@ options:
- Start port for this rule. Considered if C(protocol=tcp) or C(protocol=udp). - Start port for this rule. Considered if C(protocol=tcp) or C(protocol=udp).
required: false required: false
default: null default: null
aliases: [ 'port' ]
end_port: end_port:
description: description:
- End port for this rule. Considered if C(protocol=tcp) or C(protocol=udp). - End port for this rule. Considered if C(protocol=tcp) or C(protocol=udp). If not specified, equal C(start_port).
required: false required: false
default: null default: null
icmp_type: icmp_type:
@ -66,36 +84,47 @@ options:
- Error code for this icmp message. Considered if C(protocol=icmp). - Error code for this icmp message. Considered if C(protocol=icmp).
required: false required: false
default: null default: null
domain:
description:
- Domain the firewall rule is related to.
required: false
default: null
account:
description:
- Account the firewall rule is related to.
required: false
default: null
project: project:
description: description:
- Name of the project. - Name of the project the firewall rule is related to.
required: false required: false
default: null default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
''' '''
EXAMPLES = ''' EXAMPLES = '''
---
# Allow inbound port 80/tcp from 1.2.3.4 to 4.3.2.1 # Allow inbound port 80/tcp from 1.2.3.4 to 4.3.2.1
- local_action: - local_action:
module: cs_firewall module: cs_firewall
ip_address: 4.3.2.1 ip_address: 4.3.2.1
start_port: 80 port: 80
end_port: 80
cidr: 1.2.3.4/32 cidr: 1.2.3.4/32
# Allow inbound tcp/udp port 53 to 4.3.2.1 # Allow inbound tcp/udp port 53 to 4.3.2.1
- local_action: - local_action:
module: cs_firewall module: cs_firewall
ip_address: 4.3.2.1 ip_address: 4.3.2.1
start_port: 53 port: 53
end_port: 53
protocol: '{{ item }}' protocol: '{{ item }}'
with_items: with_items:
- tcp - tcp
- udp - udp
# Ensure firewall rule is removed # Ensure firewall rule is removed
- local_action: - local_action:
module: cs_firewall module: cs_firewall
@ -104,6 +133,70 @@ EXAMPLES = '''
end_port: 8888 end_port: 8888
cidr: 17.0.0.0/8 cidr: 17.0.0.0/8
state: absent state: absent
# Allow all outbound traffic
- local_action:
module: cs_firewall
network: my_network
type: egress
protocol: all
# Allow only HTTP outbound traffic for an IP
- local_action:
module: cs_firewall
network: my_network
type: egress
port: 80
cidr: 10.101.1.20
'''
RETURN = '''
---
ip_address:
description: IP address of the rule if C(type=ingress)
returned: success
type: string
sample: 10.100.212.10
type:
description: Type of the rule.
returned: success
type: string
sample: ingress
cidr:
description: CIDR of the rule.
returned: success
type: string
sample: 0.0.0.0/0
protocol:
description: Protocol of the rule.
returned: success
type: string
sample: tcp
start_port:
description: Start port of the rule.
returned: success
type: int
sample: 80
end_port:
description: End port of the rule.
returned: success
type: int
sample: 80
icmp_code:
description: ICMP code of the rule.
returned: success
type: int
sample: 1
icmp_type:
description: ICMP type of the rule.
returned: success
type: int
sample: 1
network:
description: Name of the network if C(type=egress)
returned: success
type: string
sample: my_network
''' '''
try: try:
@ -120,38 +213,51 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
def __init__(self, module): def __init__(self, module):
AnsibleCloudStack.__init__(self, module) AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
self.firewall_rule = None self.firewall_rule = None
def get_firewall_rule(self): def get_firewall_rule(self):
if not self.firewall_rule: if not self.firewall_rule:
cidr = self.module.params.get('cidr') cidr = self.module.params.get('cidr')
protocol = self.module.params.get('protocol') protocol = self.module.params.get('protocol')
start_port = self.module.params.get('start_port') start_port = self.module.params.get('start_port')
end_port = self.module.params.get('end_port') end_port = self.get_or_fallback('end_port', 'start_port')
icmp_code = self.module.params.get('icmp_code') icmp_code = self.module.params.get('icmp_code')
icmp_type = self.module.params.get('icmp_type') icmp_type = self.module.params.get('icmp_type')
fw_type = self.module.params.get('type')
if protocol in ['tcp', 'udp'] and not (start_port and end_port): if protocol in ['tcp', 'udp'] and not (start_port and end_port):
self.module.fail_json(msg="no start_port or end_port set for protocol '%s'" % protocol) self.module.fail_json(msg="missing required argument for protocol '%s': start_port or end_port" % protocol)
if protocol == 'icmp' and not icmp_type: if protocol == 'icmp' and not icmp_type:
self.module.fail_json(msg="no icmp_type set") self.module.fail_json(msg="missing required argument for protocol 'icmp': icmp_type")
if protocol == 'all' and fw_type != 'egress':
self.module.fail_json(msg="protocol 'all' could only be used for type 'egress'" )
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
if fw_type == 'egress':
args['networkid'] = self.get_network(key='id')
if not args['networkid']:
self.module.fail_json(msg="missing required argument for type egress: network")
firewall_rules = self.cs.listEgressFirewallRules(**args)
else:
args['ipaddressid'] = self.get_ip_address('id')
if not args['ipaddressid']:
self.module.fail_json(msg="missing required argument for type ingress: ip_address")
firewall_rules = self.cs.listFirewallRules(**args)
args = {}
args['ipaddressid'] = self.get_ip_address_id()
args['projectid'] = self.get_project_id()
firewall_rules = self.cs.listFirewallRules(**args)
if firewall_rules and 'firewallrule' in firewall_rules: if firewall_rules and 'firewallrule' in firewall_rules:
for rule in firewall_rules['firewallrule']: for rule in firewall_rules['firewallrule']:
type_match = self._type_cidr_match(rule, cidr) type_match = self._type_cidr_match(rule, cidr)
protocol_match = self._tcp_udp_match(rule, protocol, start_port, end_port) \ protocol_match = self._tcp_udp_match(rule, protocol, start_port, end_port) \
or self._icmp_match(rule, protocol, icmp_code, icmp_type) or self._icmp_match(rule, protocol, icmp_code, icmp_type) \
or self._egress_all_match(rule, protocol, fw_type)
if type_match and protocol_match: if type_match and protocol_match:
self.firewall_rule = rule self.firewall_rule = rule
@ -166,6 +272,12 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
and end_port == int(rule['endport']) and end_port == int(rule['endport'])
def _egress_all_match(self, rule, protocol, fw_type):
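# Match the special catch-all rule: protocol 'all' carries no port or ICMP fields to compare and only exists for egress rules.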
return protocol in ['all'] \
and protocol == rule['protocol'] \
and fw_type == 'egress'
def _icmp_match(self, rule, protocol, icmp_code, icmp_type): def _icmp_match(self, rule, protocol, icmp_code, icmp_type):
return protocol == 'icmp' \ return protocol == 'icmp' \
and protocol == rule['protocol'] \ and protocol == rule['protocol'] \
@ -177,22 +289,58 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
return cidr == rule['cidrlist'] return cidr == rule['cidrlist']
def get_network(self, key=None, network=None):
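# Resolve the network (explicitly passed or taken from the module parameters) by display text, name or uuid within the account/domain/project/zone scope.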
if not network:
network = self.module.params.get('network')
if not network:
return None
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
networks = self.cs.listNetworks(**args)
if not networks:
self.module.fail_json(msg="No networks available")
for n in networks['network']:
if network in [ n['displaytext'], n['name'], n['id'] ]:
return self._get_by_key(key, n)
break
self.module.fail_json(msg="Network '%s' not found" % network)
def create_firewall_rule(self): def create_firewall_rule(self):
firewall_rule = self.get_firewall_rule() firewall_rule = self.get_firewall_rule()
if not firewall_rule: if not firewall_rule:
self.result['changed'] = True self.result['changed'] = True
args = {}
args['cidrlist'] = self.module.params.get('cidr')
args['protocol'] = self.module.params.get('protocol')
args['startport'] = self.module.params.get('start_port')
args['endport'] = self.module.params.get('end_port')
args['icmptype'] = self.module.params.get('icmp_type')
args['icmpcode'] = self.module.params.get('icmp_code')
args['ipaddressid'] = self.get_ip_address_id()
if not self.module.check_mode: args = {}
firewall_rule = self.cs.createFirewallRule(**args) args['cidrlist'] = self.module.params.get('cidr')
args['protocol'] = self.module.params.get('protocol')
args['startport'] = self.module.params.get('start_port')
args['endport'] = self.get_or_fallback('end_port', 'start_port')
args['icmptype'] = self.module.params.get('icmp_type')
args['icmpcode'] = self.module.params.get('icmp_code')
fw_type = self.module.params.get('type')
if not self.module.check_mode:
if fw_type == 'egress':
args['networkid'] = self.get_network(key='id')
res = self.cs.createEgressFirewallRule(**args)
else:
args['ipaddressid'] = self.get_ip_address('id')
res = self.cs.createFirewallRule(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
firewall_rule = self._poll_job(res, 'firewallrule')
return firewall_rule return firewall_rule
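Both the create and remove paths now honor the new C(poll_async) option by handing the API response to the shared _poll_job() helper from the cloudstack module_utils, which is not part of this diff. A minimal standalone sketch of such a polling loop, assuming the CloudStack queryAsyncJobResult call and a jobstatus of 0 meaning "still running" (client handle and key names are illustrative):

import time

def poll_job(cs, job, key, interval=2):
    # Asynchronous CloudStack calls such as createFirewallRule return a jobid;
    # poll until jobstatus becomes non-zero, then unwrap the requested resource.
    if 'jobid' not in job:
        return job
    while True:
        res = cs.queryAsyncJobResult(jobid=job['jobid'])
        if res['jobstatus'] != 0:
            if 'jobresult' in res and key in res['jobresult']:
                return res['jobresult'][key]
            return job
        time.sleep(interval)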
@ -200,42 +348,82 @@ class AnsibleCloudStackFirewall(AnsibleCloudStack):
firewall_rule = self.get_firewall_rule() firewall_rule = self.get_firewall_rule()
if firewall_rule: if firewall_rule:
self.result['changed'] = True self.result['changed'] = True
args = {}
args = {}
args['id'] = firewall_rule['id'] args['id'] = firewall_rule['id']
fw_type = self.module.params.get('type')
if not self.module.check_mode: if not self.module.check_mode:
res = self.cs.deleteFirewallRule(**args) if fw_type == 'egress':
res = self.cs.deleteEgressFirewallRule(**args)
else:
res = self.cs.deleteFirewallRule(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'firewallrule')
return firewall_rule return firewall_rule
def get_result(self, firewall_rule): def get_result(self, firewall_rule):
if firewall_rule:
self.result['type'] = self.module.params.get('type')
if 'cidrlist' in firewall_rule:
self.result['cidr'] = firewall_rule['cidrlist']
if 'startport' in firewall_rule:
self.result['start_port'] = int(firewall_rule['startport'])
if 'endport' in firewall_rule:
self.result['end_port'] = int(firewall_rule['endport'])
if 'protocol' in firewall_rule:
self.result['protocol'] = firewall_rule['protocol']
if 'ipaddress' in firewall_rule:
self.result['ip_address'] = firewall_rule['ipaddress']
if 'icmpcode' in firewall_rule:
self.result['icmp_code'] = int(firewall_rule['icmpcode'])
if 'icmptype' in firewall_rule:
self.result['icmp_type'] = int(firewall_rule['icmptype'])
if 'networkid' in firewall_rule:
self.result['network'] = self.get_network(key='displaytext', network=firewall_rule['networkid'])
return self.result return self.result
def main(): def main():
module = AnsibleModule( module = AnsibleModule(
argument_spec = dict( argument_spec = dict(
ip_address = dict(required=True, default=None), ip_address = dict(default=None),
network = dict(default=None),
cidr = dict(default='0.0.0.0/0'), cidr = dict(default='0.0.0.0/0'),
protocol = dict(choices=['tcp', 'udp', 'icmp'], default='tcp'), protocol = dict(choices=['tcp', 'udp', 'icmp', 'all'], default='tcp'),
type = dict(choices=['ingress', 'egress'], default='ingress'),
icmp_type = dict(type='int', default=None), icmp_type = dict(type='int', default=None),
icmp_code = dict(type='int', default=None), icmp_code = dict(type='int', default=None),
start_port = dict(type='int', default=None), start_port = dict(type='int', aliases=['port'], default=None),
end_port = dict(type='int', default=None), end_port = dict(type='int', default=None),
state = dict(choices=['present', 'absent'], default='present'), state = dict(choices=['present', 'absent'], default='present'),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None), project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None), api_key = dict(default=None),
api_secret = dict(default=None), api_secret = dict(default=None, no_log=True),
api_url = dict(default=None), api_url = dict(default=None),
api_http_method = dict(default='get'), api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_one_of = (
['ip_address', 'network'],
), ),
required_together = ( required_together = (
['start_port', 'end_port'], ['icmp_type', 'icmp_code'],
['api_key', 'api_secret', 'api_url'],
), ),
mutually_exclusive = ( mutually_exclusive = (
['icmp_type', 'start_port'], ['icmp_type', 'start_port'],
['icmp_type', 'end_port'], ['icmp_type', 'end_port'],
['ip_address', 'network'],
), ),
supports_check_mode=True supports_check_mode=True
) )
@ -261,4 +449,5 @@ def main():
# import module snippets # import module snippets
from ansible.module_utils.basic import * from ansible.module_utils.basic import *
main() if __name__ == '__main__':
main()
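For context, the module is a thin wrapper around the community 'cs' Python client imported in the try block above. A rough standalone illustration of the underlying call the ingress path wraps; the endpoint, credentials and ip address id below are placeholders, not values from this diff:

from cs import CloudStack

cs = CloudStack(endpoint='https://cloud.example.com/client/api',
                key='API_KEY', secret='API_SECRET')
rules = cs.listFirewallRules(ipaddressid='uuid-of-the-public-ip')
for rule in rules.get('firewallrule', []):
    # Each rule dict mirrors the fields the module copies into its RETURN data.
    print rule['protocol'], rule.get('startport'), rule['cidrlist']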

@ -0,0 +1,856 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_instance
short_description: Manages instances and virtual machines on Apache CloudStack based clouds.
description:
- Deploy, start, restart, stop and destroy instances.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Host name of the instance. C(name) can only contain ASCII letters.
required: true
display_name:
description:
- Custom display name of the instance.
required: false
default: null
group:
description:
- Group the new instance should be in.
required: false
default: null
state:
description:
- State of the instance.
required: false
default: 'present'
choices: [ 'deployed', 'started', 'stopped', 'restarted', 'destroyed', 'expunged', 'present', 'absent' ]
service_offering:
description:
- Name or id of the service offering of the new instance.
- If not set, first found service offering is used.
required: false
default: null
template:
description:
- Name or id of the template to be used for creating the new instance.
- Required when using C(state=present).
- Mutually exclusive with C(ISO) option.
required: false
default: null
iso:
description:
- Name or id of the ISO to be used for creating the new instance.
- Required when using C(state=present).
- Mutually exclusive with C(template) option.
required: false
default: null
hypervisor:
description:
- Name the hypervisor to be used for creating the new instance.
- Relevant when using C(state=present), but only considered if not set on ISO/template.
- If not set or found on ISO/template, first found hypervisor will be used.
required: false
default: null
choices: [ 'KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM' ]
keyboard:
description:
- Keyboard device type for the instance.
required: false
default: null
choices: [ 'de', 'de-ch', 'es', 'fi', 'fr', 'fr-be', 'fr-ch', 'is', 'it', 'jp', 'nl-be', 'no', 'pt', 'uk', 'us' ]
networks:
description:
- List of networks to use for the new instance.
required: false
default: []
aliases: [ 'network' ]
ip_address:
description:
- IPv4 address for the instance's default network during creation.
required: false
default: null
ip6_address:
description:
- IPv6 address for the instance's default network.
required: false
default: null
disk_offering:
description:
- Name of the disk offering to be used.
required: false
default: null
disk_size:
description:
- Disk size in GB. Required if deploying an instance from an ISO.
required: false
default: null
security_groups:
description:
- List of security groups to be applied to the instance.
required: false
default: []
aliases: [ 'security_group' ]
domain:
description:
- Domain the instance is related to.
required: false
default: null
account:
description:
- Account the instance is related to.
required: false
default: null
project:
description:
- Name of the project the instance is to be deployed in.
required: false
default: null
zone:
description:
- Name of the zone in which the instance should be deployed.
- If not set, default zone is used.
required: false
default: null
ssh_key:
description:
- Name of the SSH key to be deployed on the new instance.
required: false
default: null
affinity_groups:
description:
- Names of affinity groups to be applied to the new instance.
required: false
default: []
aliases: [ 'affinity_group' ]
user_data:
description:
- Optional data (ASCII) that can be sent to the instance upon a successful deployment.
- The data will be automatically base64 encoded.
- Consider switching to HTTP_POST by using C(CLOUDSTACK_METHOD=post) to increase the HTTP_GET size limit of 2KB to 32 KB.
required: false
default: null
force:
description:
- Force stop/start the instance if required to apply changes, otherwise a running instance will not be changed.
required: false
default: false
tags:
description:
- List of tags. Tags are a list of dictionaries having keys C(key) and C(value).
- "If you want to delete all tags, set a empty list e.g. C(tags: [])."
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create an instance from an ISO
# NOTE: Names of offerings and ISOs depend on the CloudStack configuration.
- local_action:
module: cs_instance
name: web-vm-1
iso: Linux Debian 7 64-bit
hypervisor: VMware
project: Integration
zone: ch-zrh-ix-01
service_offering: 1cpu_1gb
disk_offering: PerfPlus Storage
disk_size: 20
networks:
- Server Integration
- Sync Integration
- Storage Integration
# For changing a running instance, use the 'force' parameter
- local_action:
module: cs_instance
name: web-vm-1
display_name: web-vm-01.example.com
iso: Linux Debian 7 64-bit
service_offering: 2cpu_2gb
force: yes
# Create or update an instance on Exoscale's public cloud
- local_action:
module: cs_instance
name: web-vm-1
template: Linux Debian 7 64-bit
service_offering: Tiny
ssh_key: john@example.com
tags:
- { key: admin, value: john }
- { key: foo, value: bar }
# Ensure an instance is stopped
- local_action: cs_instance name=web-vm-1 state=stopped
# Ensure an instance is running
- local_action: cs_instance name=web-vm-1 state=started
# Remove an instance
- local_action: cs_instance name=web-vm-1 state=absent
'''
RETURN = '''
---
id:
description: ID of the instance.
returned: success
type: string
sample: 04589590-ac63-4ffc-93f5-b698b8ac38b6
name:
description: Name of the instance.
returned: success
type: string
sample: web-01
display_name:
description: Display name of the instance.
returned: success
type: string
sample: web-01
group:
description: Group name the instance is related to.
returned: success
type: string
sample: web
created:
description: Date the instance was created.
returned: success
type: string
sample: 2014-12-01T14:57:57+0100
password_enabled:
description: True if password setting is enabled.
returned: success
type: boolean
sample: true
password:
description: The password of the instance, if it exists.
returned: success
type: string
sample: Ge2oe7Do
ssh_key:
description: Name of SSH key deployed to instance.
returned: success
type: string
sample: key@work
domain:
description: Domain the instance is related to.
returned: success
type: string
sample: example domain
account:
description: Account the instance is related to.
returned: success
type: string
sample: example account
project:
description: Name of project the instance is related to.
returned: success
type: string
sample: Production
default_ip:
description: Default IP address of the instance.
returned: success
type: string
sample: 10.23.37.42
public_ip:
description: Public IP address associated with the instance via a static NAT rule.
returned: success
type: string
sample: 1.2.3.4
iso:
description: Name of ISO the instance was deployed with.
returned: success
type: string
sample: Debian-8-64bit
template:
description: Name of template the instance was deployed with.
returned: success
type: string
sample: Debian-8-64bit
service_offering:
description: Name of the service offering the instance has.
returned: success
type: string
sample: 2cpu_2gb
zone:
description: Name of zone the instance is in.
returned: success
type: string
sample: ch-gva-2
state:
description: State of the instance.
returned: success
type: string
sample: Running
security_groups:
description: Security groups the instance is in.
returned: success
type: list
sample: '[ "default" ]'
affinity_groups:
description: Affinity groups the instance is in.
returned: success
type: list
sample: '[ "webservers" ]'
tags:
description: List of resource tags associated with the instance.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
hypervisor:
description: Hypervisor related to this instance.
returned: success
type: string
sample: KVM
instance_name:
description: Internal name of the instance (ROOT admin only).
returned: success
type: string
sample: i-44-3992-VM
'''
import base64
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackInstance(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.instance = None
self.template = None
self.iso = None
def get_service_offering_id(self):
service_offering = self.module.params.get('service_offering')
service_offerings = self.cs.listServiceOfferings()
if service_offerings:
if not service_offering:
return service_offerings['serviceoffering'][0]['id']
for s in service_offerings['serviceoffering']:
if service_offering in [ s['name'], s['id'] ]:
return s['id']
self.module.fail_json(msg="Service offering '%s' not found" % service_offering)
def get_template_or_iso(self, key=None):
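# Return the requested template or ISO (the two are mutually exclusive); the lookup result is cached on self to avoid repeated API calls.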
template = self.module.params.get('template')
iso = self.module.params.get('iso')
if not template and not iso:
self.module.fail_json(msg="Template or ISO is required.")
if template and iso:
self.module.fail_json(msg="Template are ISO are mutually exclusive.")
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['zoneid'] = self.get_zone(key='id')
args['isrecursive'] = True
if template:
if self.template:
return self._get_by_key(key, self.template)
args['templatefilter'] = 'executable'
templates = self.cs.listTemplates(**args)
if templates:
for t in templates['template']:
if template in [ t['displaytext'], t['name'], t['id'] ]:
self.template = t
return self._get_by_key(key, self.template)
self.module.fail_json(msg="Template '%s' not found" % template)
elif iso:
if self.iso:
return self._get_by_key(key, self.iso)
args['isofilter'] = 'executable'
isos = self.cs.listIsos(**args)
if isos:
for i in isos['iso']:
if iso in [ i['displaytext'], i['name'], i['id'] ]:
self.iso = i
return self._get_by_key(key, self.iso)
self.module.fail_json(msg="ISO '%s' not found" % iso)
def get_disk_offering_id(self):
disk_offering = self.module.params.get('disk_offering')
if not disk_offering:
return None
disk_offerings = self.cs.listDiskOfferings()
if disk_offerings:
for d in disk_offerings['diskoffering']:
if disk_offering in [ d['displaytext'], d['name'], d['id'] ]:
return d['id']
self.module.fail_json(msg="Disk offering '%s' not found" % disk_offering)
def get_instance(self):
instance = self.instance
if not instance:
instance_name = self.module.params.get('name')
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
# Do not pass zoneid, as the instance name must be unique across zones.
instances = self.cs.listVirtualMachines(**args)
if instances:
for v in instances['virtualmachine']:
if instance_name in [ v['name'], v['displayname'], v['id'] ]:
self.instance = v
break
return self.instance
def get_network_ids(self):
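# Map the requested network names/display texts/uuids to a comma-separated string of network ids, failing if any of them cannot be found.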
network_names = self.module.params.get('networks')
if not network_names:
return None
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['zoneid'] = self.get_zone(key='id')
networks = self.cs.listNetworks(**args)
if not networks:
self.module.fail_json(msg="No networks available")
network_ids = []
network_displaytexts = []
for network_name in network_names:
for n in networks['network']:
if network_name in [ n['displaytext'], n['name'], n['id'] ]:
network_ids.append(n['id'])
network_displaytexts.append(n['name'])
break
if len(network_ids) != len(network_names):
self.module.fail_json(msg="Could not find all networks, networks list found: %s" % network_displaytexts)
return ','.join(network_ids)
def present_instance(self):
instance = self.get_instance()
if not instance:
instance = self.deploy_instance()
else:
instance = self.update_instance(instance)
# In check mode, we do not necessarily have an instance
if instance:
instance = self.ensure_tags(resource=instance, resource_type='UserVm')
return instance
def get_user_data(self):
user_data = self.module.params.get('user_data')
if user_data:
user_data = base64.b64encode(user_data)
return user_data
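The option documentation above notes that C(user_data) is base64 encoded automatically and that HTTP GET limits the encoded payload to roughly 2 KB. A small illustration of that encoding step; the cloud-config content is just an example, not taken from this diff:

import base64

user_data = "#cloud-config\npackage_update: true\n"
encoded = base64.b64encode(user_data)  # str in, str out on Python 2
# Stay well under the ~2 KB HTTP GET limit, or switch the API to POST
# (CLOUDSTACK_METHOD=post) to raise the limit to 32 KB.
assert len(encoded) < 2048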
def deploy_instance(self):
self.result['changed'] = True
args = {}
args['templateid'] = self.get_template_or_iso(key='id')
args['zoneid'] = self.get_zone(key='id')
args['serviceofferingid'] = self.get_service_offering_id()
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['diskofferingid'] = self.get_disk_offering_id()
args['networkids'] = self.get_network_ids()
args['userdata'] = self.get_user_data()
args['keyboard'] = self.module.params.get('keyboard')
args['ipaddress'] = self.module.params.get('ip_address')
args['ip6address'] = self.module.params.get('ip6_address')
args['name'] = self.module.params.get('name')
args['displayname'] = self.get_or_fallback('display_name', 'name')
args['group'] = self.module.params.get('group')
args['keypair'] = self.module.params.get('ssh_key')
args['size'] = self.module.params.get('disk_size')
args['securitygroupnames'] = ','.join(self.module.params.get('security_groups'))
args['affinitygroupnames'] = ','.join(self.module.params.get('affinity_groups'))
template_iso = self.get_template_or_iso()
if 'hypervisor' not in template_iso:
args['hypervisor'] = self.get_hypervisor()
instance = None
if not self.module.check_mode:
instance = self.cs.deployVirtualMachine(**args)
if 'errortext' in instance:
self.module.fail_json(msg="Failed: '%s'" % instance['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
instance = self._poll_job(instance, 'virtualmachine')
return instance
def update_instance(self, instance):
args_service_offering = {}
args_service_offering['id'] = instance['id']
args_service_offering['serviceofferingid'] = self.get_service_offering_id()
args_instance_update = {}
args_instance_update['id'] = instance['id']
args_instance_update['group'] = self.module.params.get('group')
args_instance_update['displayname'] = self.get_or_fallback('display_name', 'name')
args_instance_update['userdata'] = self.get_user_data()
args_instance_update['ostypeid'] = self.get_os_type(key='id')
args_ssh_key = {}
args_ssh_key['id'] = instance['id']
args_ssh_key['keypair'] = self.module.params.get('ssh_key')
args_ssh_key['projectid'] = self.get_project(key='id')
if self._has_changed(args_service_offering, instance) or \
self._has_changed(args_instance_update, instance) or \
self._has_changed(args_ssh_key, instance):
force = self.module.params.get('force')
instance_state = instance['state'].lower()
if instance_state == 'stopped' or force:
self.result['changed'] = True
if not self.module.check_mode:
# Ensure VM has stopped
instance = self.stop_instance()
instance = self._poll_job(instance, 'virtualmachine')
self.instance = instance
# Change service offering
if self._has_changed(args_service_offering, instance):
res = self.cs.changeServiceForVirtualMachine(**args_service_offering)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
instance = res['virtualmachine']
self.instance = instance
# Update VM
if self._has_changed(args_instance_update, instance):
res = self.cs.updateVirtualMachine(**args_instance_update)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
instance = res['virtualmachine']
self.instance = instance
# Reset SSH key
if self._has_changed(args_ssh_key, instance):
instance = self.cs.resetSSHKeyForVirtualMachine(**args_ssh_key)
if 'errortext' in instance:
self.module.fail_json(msg="Failed: '%s'" % instance['errortext'])
instance = self._poll_job(instance, 'virtualmachine')
self.instance = instance
# Start VM again if it was running before
if instance_state == 'running':
instance = self.start_instance()
return instance
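update_instance() only triggers a stop/apply/start cycle when the shared _has_changed() helper reports a difference between the desired arguments and the current instance; that helper lives in the cloudstack module_utils and is not shown in this diff. A hedged sketch of what such a comparison might look like, assuming unset (None) values are skipped and the remaining values are compared as strings:

def has_changed(want, current):
    # Return True as soon as any wanted, non-None value differs from the
    # value the API currently reports for the same key.
    for key, value in want.items():
        if value is None:
            continue
        if key not in current or str(current[key]) != str(value):
            return True
    return False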
def absent_instance(self):
instance = self.get_instance()
if instance:
if instance['state'].lower() not in ['expunging', 'destroying', 'destroyed']:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.destroyVirtualMachine(id=instance['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
instance = self._poll_job(res, 'virtualmachine')
return instance
def expunge_instance(self):
instance = self.get_instance()
if instance:
res = {}
if instance['state'].lower() in [ 'destroying', 'destroyed' ]:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.destroyVirtualMachine(id=instance['id'], expunge=True)
elif instance['state'].lower() not in [ 'expunging' ]:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.destroyVirtualMachine(id=instance['id'], expunge=True)
if res and 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'virtualmachine')
return instance
def stop_instance(self):
instance = self.get_instance()
if not instance:
self.module.fail_json(msg="Instance named '%s' not found" % self.module.params.get('name'))
if instance['state'].lower() in ['stopping', 'stopped']:
return instance
if instance['state'].lower() in ['starting', 'running']:
self.result['changed'] = True
if not self.module.check_mode:
instance = self.cs.stopVirtualMachine(id=instance['id'])
if 'errortext' in instance:
self.module.fail_json(msg="Failed: '%s'" % instance['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
instance = self._poll_job(instance, 'virtualmachine')
return instance
def start_instance(self):
instance = self.get_instance()
if not instance:
self.module.fail_json(msg="Instance named '%s' not found" % module.params.get('name'))
if instance['state'].lower() in ['starting', 'running']:
return instance
if instance['state'].lower() in ['stopped', 'stopping']:
self.result['changed'] = True
if not self.module.check_mode:
instance = self.cs.startVirtualMachine(id=instance['id'])
if 'errortext' in instance:
self.module.fail_json(msg="Failed: '%s'" % instance['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
instance = self._poll_job(instance, 'virtualmachine')
return instance
def restart_instance(self):
instance = self.get_instance()
if not instance:
module.fail_json(msg="Instance named '%s' not found" % self.module.params.get('name'))
if instance['state'].lower() in [ 'running', 'starting' ]:
self.result['changed'] = True
if not self.module.check_mode:
instance = self.cs.rebootVirtualMachine(id=instance['id'])
if 'errortext' in instance:
self.module.fail_json(msg="Failed: '%s'" % instance['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
instance = self._poll_job(instance, 'virtualmachine')
elif instance['state'].lower() in [ 'stopping', 'stopped' ]:
instance = self.start_instance()
return instance
def get_result(self, instance):
if instance:
if 'id' in instance:
self.result['id'] = instance['id']
if 'name' in instance:
self.result['name'] = instance['name']
if 'displayname' in instance:
self.result['display_name'] = instance['displayname']
if 'group' in instance:
self.result['group'] = instance['group']
if 'domain' in instance:
self.result['domain'] = instance['domain']
if 'account' in instance:
self.result['account'] = instance['account']
if 'project' in instance:
self.result['project'] = instance['project']
if 'hypervisor' in instance:
self.result['hypervisor'] = instance['hypervisor']
if 'instancename' in instance:
self.result['instance_name'] = instance['instancename']
if 'publicip' in instance:
self.result['public_ip'] = instance['publicip']
if 'passwordenabled' in instance:
self.result['password_enabled'] = instance['passwordenabled']
if 'password' in instance:
self.result['password'] = instance['password']
if 'serviceofferingname' in instance:
self.result['service_offering'] = instance['serviceofferingname']
if 'zonename' in instance:
self.result['zone'] = instance['zonename']
if 'templatename' in instance:
self.result['template'] = instance['templatename']
if 'isoname' in instance:
self.result['iso'] = instance['isoname']
if 'keypair' in instance:
self.result['ssh_key'] = instance['keypair']
if 'created' in instance:
self.result['created'] = instance['created']
if 'state' in instance:
self.result['state'] = instance['state']
if 'tags' in instance:
self.result['tags'] = []
for tag in instance['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
if 'securitygroup' in instance:
security_groups = []
for securitygroup in instance['securitygroup']:
security_groups.append(securitygroup['name'])
self.result['security_groups'] = security_groups
if 'affinitygroup' in instance:
affinity_groups = []
for affinitygroup in instance['affinitygroup']:
affinity_groups.append(affinitygroup['name'])
self.result['affinity_groups'] = affinity_groups
if 'nic' in instance:
for nic in instance['nic']:
if nic['isdefault']:
self.result['default_ip'] = nic['ipaddress']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
display_name = dict(default=None),
group = dict(default=None),
state = dict(choices=['present', 'deployed', 'started', 'stopped', 'restarted', 'absent', 'destroyed', 'expunged'], default='present'),
service_offering = dict(default=None),
template = dict(default=None),
iso = dict(default=None),
networks = dict(type='list', aliases=[ 'network' ], default=None),
ip_address = dict(default=None),
ip6_address = dict(default=None),
disk_offering = dict(default=None),
disk_size = dict(type='int', default=None),
keyboard = dict(choices=['de', 'de-ch', 'es', 'fi', 'fr', 'fr-be', 'fr-ch', 'is', 'it', 'jp', 'nl-be', 'no', 'pt', 'uk', 'us'], default=None),
hypervisor = dict(choices=['KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM'], default=None),
security_groups = dict(type='list', aliases=[ 'security_group' ], default=[]),
affinity_groups = dict(type='list', aliases=[ 'affinity_group' ], default=[]),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
user_data = dict(default=None),
zone = dict(default=None),
ssh_key = dict(default=None),
force = dict(choices=BOOLEANS, default=False),
tags = dict(type='list', aliases=[ 'tag' ], default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_instance = AnsibleCloudStackInstance(module)
state = module.params.get('state')
if state in ['absent', 'destroyed']:
instance = acs_instance.absent_instance()
elif state in ['expunged']:
instance = acs_instance.expunge_instance()
elif state in ['present', 'deployed']:
instance = acs_instance.present_instance()
elif state in ['stopped']:
instance = acs_instance.stop_instance()
elif state in ['started']:
instance = acs_instance.start_instance()
elif state in ['restarted']:
instance = acs_instance.restart_instance()
if instance and 'state' in instance and instance['state'].lower() == 'error':
module.fail_json(msg="Instance named '%s' in error state." % module.params.get('name'))
result = acs_instance.get_result(instance)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,231 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_instancegroup
short_description: Manages instance groups on Apache CloudStack based clouds.
description:
- Create and remove instance groups.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the instance group.
required: true
domain:
description:
- Domain the instance group is related to.
required: false
default: null
account:
description:
- Account the instance group is related to.
required: false
default: null
project:
description:
- Project the instance group is related to.
required: false
default: null
state:
description:
- State of the instance group.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create an instance group
- local_action:
module: cs_instancegroup
name: loadbalancers
# Remove an instance group
- local_action:
module: cs_instancegroup
name: loadbalancers
state: absent
'''
RETURN = '''
---
id:
description: ID of the instance group.
returned: success
type: string
sample: 04589590-ac63-4ffc-93f5-b698b8ac38b6
name:
description: Name of the instance group.
returned: success
type: string
sample: webservers
created:
description: Date when the instance group was created.
returned: success
type: string
sample: 2015-05-03T15:05:51+0200
domain:
description: Domain the instance group is related to.
returned: success
type: string
sample: example domain
account:
description: Account the instance group is related to.
returned: success
type: string
sample: example account
project:
description: Project the instance group is related to.
returned: success
type: string
sample: example project
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackInstanceGroup(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.instance_group = None
def get_instance_group(self):
if self.instance_group:
return self.instance_group
name = self.module.params.get('name')
args = {}
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
instance_groups = self.cs.listInstanceGroups(**args)
if instance_groups:
for g in instance_groups['instancegroup']:
if name in [ g['name'], g['id'] ]:
self.instance_group = g
break
return self.instance_group
def present_instance_group(self):
instance_group = self.get_instance_group()
if not instance_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
if not self.module.check_mode:
res = self.cs.createInstanceGroup(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
instance_group = res['instancegroup']
return instance_group
def absent_instance_group(self):
instance_group = self.get_instance_group()
if instance_group:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.deleteInstanceGroup(id=instance_group['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
return instance_group
def get_result(self, instance_group):
if instance_group:
if 'id' in instance_group:
self.result['id'] = instance_group['id']
if 'created' in instance_group:
self.result['created'] = instance_group['created']
if 'name' in instance_group:
self.result['name'] = instance_group['name']
if 'project' in instance_group:
self.result['project'] = instance_group['project']
if 'domain' in instance_group:
self.result['domain'] = instance_group['domain']
if 'account' in instance_group:
self.result['account'] = instance_group['account']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(default='present', choices=['present', 'absent']),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_ig = AnsibleCloudStackInstanceGroup(module)
state = module.params.get('state')
if state in ['absent']:
instance_group = acs_ig.absent_instance_group()
else:
instance_group = acs_ig.present_instance_group()
result = acs_ig.get_result(instance_group)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -22,9 +22,10 @@ DOCUMENTATION = '''
--- ---
module: cs_iso module: cs_iso
short_description: Manages ISOs images on Apache CloudStack based clouds. short_description: Manages ISOs images on Apache CloudStack based clouds.
description: Register and remove ISO images. description:
- Register and remove ISO images.
version_added: '2.0' version_added: '2.0'
author: René Moser author: "René Moser (@resmo)"
options: options:
name: name:
description: description:
@ -72,6 +73,16 @@ options:
- Register the ISO to be bootable. Only used if C(state) is present. - Register the ISO to be bootable. Only used if C(state) is present.
required: false required: false
default: true default: true
domain:
description:
- Domain the ISO is related to.
required: false
default: null
account:
description:
- Account the ISO is related to.
required: false
default: null
project: project:
description: description:
- Name of the project the ISO to be registered in. - Name of the project the ISO to be registered in.
@ -94,10 +105,10 @@ options:
required: false required: false
default: 'present' default: 'present'
choices: [ 'present', 'absent' ] choices: [ 'present', 'absent' ]
extends_documentation_fragment: cloudstack
''' '''
EXAMPLES = ''' EXAMPLES = '''
---
# Register an ISO if ISO name does not already exist. # Register an ISO if ISO name does not already exist.
- local_action: - local_action:
module: cs_iso module: cs_iso
@ -105,23 +116,20 @@ EXAMPLES = '''
url: http://mirror.switch.ch/ftp/mirror/debian-cd/current/amd64/iso-cd/debian-7.7.0-amd64-netinst.iso url: http://mirror.switch.ch/ftp/mirror/debian-cd/current/amd64/iso-cd/debian-7.7.0-amd64-netinst.iso
os_type: Debian GNU/Linux 7(64-bit) os_type: Debian GNU/Linux 7(64-bit)
# Register an ISO with given name if ISO md5 checksum does not already exist. # Register an ISO with given name if ISO md5 checksum does not already exist.
- local_action: - local_action:
module: cs_iso module: cs_iso
name: Debian 7 64-bit name: Debian 7 64-bit
url: http://mirror.switch.ch/ftp/mirror/debian-cd/current/amd64/iso-cd/debian-7.7.0-amd64-netinst.iso url: http://mirror.switch.ch/ftp/mirror/debian-cd/current/amd64/iso-cd/debian-7.7.0-amd64-netinst.iso
os_type: os_type: Debian GNU/Linux 7(64-bit)
checksum: 0b31bccccb048d20b551f70830bb7ad0 checksum: 0b31bccccb048d20b551f70830bb7ad0
# Remove an ISO by name # Remove an ISO by name
- local_action: - local_action:
module: cs_iso module: cs_iso
name: Debian 7 64-bit name: Debian 7 64-bit
state: absent state: absent
# Remove an ISO by checksum # Remove an ISO by checksum
- local_action: - local_action:
module: cs_iso module: cs_iso
@ -167,6 +175,21 @@ created:
returned: success returned: success
type: string type: string
sample: 2015-03-29T14:57:06+0200 sample: 2015-03-29T14:57:06+0200
domain:
description: Domain the ISO is related to.
returned: success
type: string
sample: example domain
account:
description: Account the ISO is related to.
returned: success
type: string
sample: example account
project:
description: Project the ISO is related to.
returned: success
type: string
sample: example project
''' '''
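The registration example above matches an existing ISO by its md5 checksum rather than by name. A small helper for computing that checksum locally, should you need to fill in the C(checksum) value; the path is hypothetical:

import hashlib

def md5sum(path, chunk_size=65536):
    # Stream the ISO in chunks so large images do not have to fit in memory.
    digest = hashlib.md5()
    with open(path, 'rb') as iso_file:
        for chunk in iter(lambda: iso_file.read(chunk_size), ''):
            digest.update(chunk)
    return digest.hexdigest()

print md5sum('/tmp/debian-7.7.0-amd64-netinst.iso')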
try: try:
@ -183,20 +206,26 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
def __init__(self, module): def __init__(self, module):
AnsibleCloudStack.__init__(self, module) AnsibleCloudStack.__init__(self, module)
self.result = {
'changed': False,
}
self.iso = None self.iso = None
def register_iso(self): def register_iso(self):
iso = self.get_iso() iso = self.get_iso()
if not iso: if not iso:
args = {}
args['zoneid'] = self.get_zone_id()
args['projectid'] = self.get_project_id()
args['bootable'] = self.module.params.get('bootable') args = {}
args['ostypeid'] = self.get_os_type_id() args['zoneid'] = self.get_zone('id')
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['bootable'] = self.module.params.get('bootable')
args['ostypeid'] = self.get_os_type('id')
args['name'] = self.module.params.get('name')
args['displaytext'] = self.module.params.get('name')
args['checksum'] = self.module.params.get('checksum')
args['isdynamicallyscalable'] = self.module.params.get('is_dynamically_scalable')
args['isfeatured'] = self.module.params.get('is_featured')
args['ispublic'] = self.module.params.get('is_public')
if args['bootable'] and not args['ostypeid']: if args['bootable'] and not args['ostypeid']:
self.module.fail_json(msg="OS type 'os_type' is requried if 'bootable=true'.") self.module.fail_json(msg="OS type 'os_type' is requried if 'bootable=true'.")
@ -204,13 +233,6 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
if not args['url']: if not args['url']:
self.module.fail_json(msg="URL is requried.") self.module.fail_json(msg="URL is requried.")
args['name'] = self.module.params.get('name')
args['displaytext'] = self.module.params.get('name')
args['checksum'] = self.module.params.get('checksum')
args['isdynamicallyscalable'] = self.module.params.get('is_dynamically_scalable')
args['isfeatured'] = self.module.params.get('is_featured')
args['ispublic'] = self.module.params.get('is_public')
self.result['changed'] = True self.result['changed'] = True
if not self.module.check_mode: if not self.module.check_mode:
res = self.cs.registerIso(**args) res = self.cs.registerIso(**args)
@ -220,11 +242,14 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
def get_iso(self): def get_iso(self):
if not self.iso: if not self.iso:
args = {}
args['isready'] = self.module.params.get('is_ready') args = {}
args['isofilter'] = self.module.params.get('iso_filter') args['isready'] = self.module.params.get('is_ready')
args['projectid'] = self.get_project_id() args['isofilter'] = self.module.params.get('iso_filter')
args['zoneid'] = self.get_zone_id() args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
# if checksum is set, we only look on that. # if checksum is set, we only look on that.
checksum = self.module.params.get('checksum') checksum = self.module.params.get('checksum')
@ -247,10 +272,12 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
iso = self.get_iso() iso = self.get_iso()
if iso: if iso:
self.result['changed'] = True self.result['changed'] = True
args = {}
args['id'] = iso['id'] args = {}
args['projectid'] = self.get_project_id() args['id'] = iso['id']
args['zoneid'] = self.get_zone_id() args['projectid'] = self.get_project('id')
args['zoneid'] = self.get_zone('id')
if not self.module.check_mode: if not self.module.check_mode:
res = self.cs.deleteIso(**args) res = self.cs.deleteIso(**args)
return iso return iso
@ -272,17 +299,25 @@ class AnsibleCloudStackIso(AnsibleCloudStack):
self.result['is_ready'] = iso['isready'] self.result['is_ready'] = iso['isready']
if 'created' in iso: if 'created' in iso:
self.result['created'] = iso['created'] self.result['created'] = iso['created']
if 'project' in iso:
self.result['project'] = iso['project']
if 'domain' in iso:
self.result['domain'] = iso['domain']
if 'account' in iso:
self.result['account'] = iso['account']
return self.result return self.result
def main(): def main():
module = AnsibleModule( module = AnsibleModule(
argument_spec = dict( argument_spec = dict(
name = dict(required=True, default=None), name = dict(required=True),
url = dict(default=None), url = dict(default=None),
os_type = dict(default=None), os_type = dict(default=None),
zone = dict(default=None), zone = dict(default=None),
iso_filter = dict(default='self', choices=[ 'featured', 'self', 'selfexecutable','sharedexecutable','executable', 'community' ]), iso_filter = dict(default='self', choices=[ 'featured', 'self', 'selfexecutable','sharedexecutable','executable', 'community' ]),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None), project = dict(default=None),
checksum = dict(default=None), checksum = dict(default=None),
is_ready = dict(choices=BOOLEANS, default=False), is_ready = dict(choices=BOOLEANS, default=False),
@ -291,9 +326,13 @@ def main():
is_dynamically_scalable = dict(choices=BOOLEANS, default=False), is_dynamically_scalable = dict(choices=BOOLEANS, default=False),
state = dict(choices=['present', 'absent'], default='present'), state = dict(choices=['present', 'absent'], default='present'),
api_key = dict(default=None), api_key = dict(default=None),
api_secret = dict(default=None), api_secret = dict(default=None, no_log=True),
api_url = dict(default=None), api_url = dict(default=None),
api_http_method = dict(default='get'), api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
), ),
supports_check_mode=True supports_check_mode=True
) )
@ -319,4 +358,5 @@ def main():
# import module snippets # import module snippets
from ansible.module_utils.basic import * from ansible.module_utils.basic import *
main() if __name__ == '__main__':
main()

@ -0,0 +1,628 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_network
short_description: Manages networks on Apache CloudStack based clouds.
description:
- Create, update, restart and delete networks.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name (case sensitive) of the network.
required: true
displaytext:
description:
- Display text of the network.
- If not specified, C(name) will be used as the display text.
required: false
default: null
network_offering:
description:
- Name of the offering for the network.
- Required if C(state=present).
required: false
default: null
start_ip:
description:
- The beginning IPv4 address of the network's IP range.
- Only considered on create.
required: false
default: null
end_ip:
description:
- The ending IPv4 address of the network's IP range.
- If not specified, value of C(start_ip) is used.
- Only considered on create.
required: false
default: null
gateway:
description:
- The gateway of the network.
- Required for shared networks and isolated networks when it belongs to VPC.
- Only considered on create.
required: false
default: null
netmask:
description:
- The netmask of the network.
- Required for shared networks and isolated networks when it belongs to VPC.
- Only considered on create.
required: false
default: null
start_ipv6:
description:
- The beginning IPv6 address of the network's IP range.
- Only considered on create.
required: false
default: null
end_ipv6:
description:
- The ending IPv6 address of the network's IP range.
- If not specified, value of C(start_ipv6) is used.
- Only considered on create.
required: false
default: null
cidr_ipv6:
description:
- CIDR of IPv6 network, must be at least /64.
- Only considered on create.
required: false
default: null
gateway_ipv6:
description:
- The gateway of the IPv6 network.
- Required for shared networks.
- Only considered on create.
required: false
default: null
vlan:
description:
- The ID or VID of the network.
required: false
default: null
vpc:
description:
- Name or ID of the VPC the network belongs to.
required: false
default: null
isolated_pvlan:
description:
- The isolated private vlan for this network.
required: false
default: null
clean_up:
description:
- Clean up old network elements.
- Only considered on C(state=restarted).
required: false
default: false
acl_type:
description:
- Access control type.
- Only considered on create.
required: false
default: account
choices: [ 'account', 'domain' ]
network_domain:
description:
- The network domain.
required: false
default: null
state:
description:
- State of the network.
required: false
default: present
choices: [ 'present', 'absent', 'restarted' ]
zone:
description:
- Name of the zone in which the network should be deployed.
- If not set, default zone is used.
required: false
default: null
project:
description:
- Name of the project the network is to be deployed in.
required: false
default: null
domain:
description:
- Domain the network is related to.
required: false
default: null
account:
description:
- Account the network is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# create a network
- local_action:
module: cs_network
name: my network
zone: gva-01
network_offering: DefaultIsolatedNetworkOfferingWithSourceNatService
network_domain: example.com
# update a network
- local_action:
module: cs_network
name: my network
displaytext: network of domain example.local
network_domain: example.local
# restart a network with clean up
- local_action:
module: cs_network
name: my network
clean_up: yes
state: restarted
# remove a network
- local_action:
module: cs_network
name: my network
state: absent
'''
RETURN = '''
---
id:
description: ID of the network.
returned: success
type: string
sample: 04589590-ac63-4ffc-93f5-b698b8ac38b6
name:
description: Name of the network.
returned: success
type: string
sample: web project
displaytext:
description: Display text of the network.
returned: success
type: string
sample: web project
dns1:
description: IP address of the 1st nameserver.
returned: success
type: string
sample: 1.2.3.4
dns2:
description: IP address of the 2nd nameserver.
returned: success
type: string
sample: 1.2.3.4
cidr:
description: IPv4 network CIDR.
returned: success
type: string
sample: 10.101.64.0/24
gateway:
description: IPv4 gateway.
returned: success
type: string
sample: 10.101.64.1
netmask:
description: IPv4 netmask.
returned: success
type: string
sample: 255.255.255.0
cidr_ipv6:
description: IPv6 network CIDR.
returned: success
type: string
sample: 2001:db8::/64
gateway_ipv6:
description: IPv6 gateway.
returned: success
type: string
sample: 2001:db8::1
state:
description: State of the network.
returned: success
type: string
sample: Implemented
zone:
description: Name of zone.
returned: success
type: string
sample: ch-gva-2
domain:
description: Domain the network is related to.
returned: success
type: string
sample: ROOT
account:
description: Account the network is related to.
returned: success
type: string
sample: example account
project:
description: Name of project.
returned: success
type: string
sample: Production
tags:
description: List of resource tags associated with the network.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
acl_type:
description: Access type of the network (Domain, Account).
returned: success
type: string
sample: Account
broadcast_domaintype:
description: Broadcast domain type of the network.
returned: success
type: string
sample: Vlan
type:
description: Type of the network.
returned: success
type: string
sample: Isolated
traffic_type:
description: Traffic type of the network.
returned: success
type: string
sample: Guest
state:
description: State of the network (Allocated, Implemented, Setup).
returned: success
type: string
sample: Allocated
is_persistent:
description: Whether the network is persistent or not.
returned: success
type: boolean
sample: false
network_domain:
description: The network domain
returned: success
type: string
sample: example.local
network_offering:
description: The network offering name.
returned: success
type: string
sample: DefaultIsolatedNetworkOfferingWithSourceNatService
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackNetwork(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.network = None
def get_vpc(self, key=None):
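# Resolve the VPC by name, display text or uuid; returns None when no vpc parameter was given.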
vpc = self.module.params.get('vpc')
if not vpc:
return None
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['zoneid'] = self.get_zone(key='id')
vpcs = self.cs.listVPCs(**args)
if vpcs:
for v in vpcs['vpc']:
if vpc in [ v['name'], v['displaytext'], v['id'] ]:
return self._get_by_key(key, v)
self.module.fail_json(msg="VPC '%s' not found" % vpc)
def get_network_offering(self, key=None):
network_offering = self.module.params.get('network_offering')
if not network_offering:
self.module.fail_json(msg="missing required arguments: network_offering")
args = {}
args['zoneid'] = self.get_zone(key='id')
network_offerings = self.cs.listNetworkOfferings(**args)
if network_offerings:
for no in network_offerings['networkoffering']:
if network_offering in [ no['name'], no['displaytext'], no['id'] ]:
return self._get_by_key(key, no)
self.module.fail_json(msg="Network offering '%s' not found" % network_offering)
def _get_args(self):
args = {}
args['name'] = self.module.params.get('name')
args['displaytext'] = self.get_or_fallback('displaytext', 'name')
args['networkdomain'] = self.module.params.get('network_domain')
args['networkofferingid'] = self.get_network_offering(key='id')
return args
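# Find the network by name, display text or id in the configured
# zone/project/account/domain and cache the result.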
def get_network(self):
if not self.network:
network = self.module.params.get('name')
args = {}
args['zoneid'] = self.get_zone(key='id')
args['projectid'] = self.get_project(key='id')
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
networks = self.cs.listNetworks(**args)
if networks:
for n in networks['network']:
if network in [ n['name'], n['displaytext'], n['id']]:
self.network = n
break
return self.network
def present_network(self):
network = self.get_network()
if not network:
network = self.create_network(network)
else:
network = self.update_network(network)
return network
def update_network(self, network):
args = self._get_args()
args['id'] = network['id']
if self._has_changed(args, network):
self.result['changed'] = True
if not self.module.check_mode:
network = self.cs.updateNetwork(**args)
if 'errortext' in network:
self.module.fail_json(msg="Failed: '%s'" % network['errortext'])
poll_async = self.module.params.get('poll_async')
if network and poll_async:
network = self._poll_job(network, 'network')
return network
def create_network(self, network):
self.result['changed'] = True
args = self._get_args()
args['acltype'] = self.module.params.get('acl_type')
args['zoneid'] = self.get_zone(key='id')
args['projectid'] = self.get_project(key='id')
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['startip'] = self.module.params.get('start_ip')
args['endip'] = self.get_or_fallback('end_ip', 'start_ip')
args['netmask'] = self.module.params.get('netmask')
args['gateway'] = self.module.params.get('gateway')
args['startipv6'] = self.module.params.get('start_ipv6')
args['endipv6'] = self.get_or_fallback('end_ipv6', 'start_ipv6')
args['ip6cidr'] = self.module.params.get('cidr_ipv6')
args['ip6gateway'] = self.module.params.get('gateway_ipv6')
args['vlan'] = self.module.params.get('vlan')
args['isolatedpvlan'] = self.module.params.get('isolated_pvlan')
args['subdomainaccess'] = self.module.params.get('subdomain_access')
args['vpcid'] = self.get_vpc(key='id')
if not self.module.check_mode:
res = self.cs.createNetwork(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
network = res['network']
return network
def restart_network(self):
network = self.get_network()
if not network:
self.module.fail_json(msg="No network named '%s' found." % self.module.params('name'))
# Restarting only available for these states
if network['state'].lower() in [ 'implemented', 'setup' ]:
self.result['changed'] = True
args = {}
args['id'] = network['id']
args['cleanup'] = self.module.params.get('clean_up')
if not self.module.check_mode:
network = self.cs.restartNetwork(**args)
if 'errortext' in network:
self.module.fail_json(msg="Failed: '%s'" % network['errortext'])
poll_async = self.module.params.get('poll_async')
if network and poll_async:
network = self._poll_job(network, 'network')
return network
def absent_network(self):
network = self.get_network()
if network:
self.result['changed'] = True
args = {}
args['id'] = network['id']
if not self.module.check_mode:
res = self.cs.deleteNetwork(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'network')
return network
def get_result(self, network):
if network:
if 'id' in network:
self.result['id'] = network['id']
if 'name' in network:
self.result['name'] = network['name']
if 'displaytext' in network:
self.result['displaytext'] = network['displaytext']
if 'dns1' in network:
self.result['dns1'] = network['dns1']
if 'dns2' in network:
self.result['dns2'] = network['dns2']
if 'cidr' in network:
self.result['cidr'] = network['cidr']
if 'broadcastdomaintype' in network:
self.result['broadcast_domaintype'] = network['broadcastdomaintype']
if 'netmask' in network:
self.result['netmask'] = network['netmask']
if 'gateway' in network:
self.result['gateway'] = network['gateway']
if 'ip6cidr' in network:
self.result['cidr_ipv6'] = network['ip6cidr']
if 'ip6gateway' in network:
self.result['gateway_ipv6'] = network['ip6gateway']
if 'state' in network:
self.result['state'] = network['state']
if 'type' in network:
self.result['type'] = network['type']
if 'traffictype' in network:
self.result['traffic_type'] = network['traffictype']
if 'zone' in network:
self.result['zone'] = network['zonename']
if 'domain' in network:
self.result['domain'] = network['domain']
if 'account' in network:
self.result['account'] = network['account']
if 'project' in network:
self.result['project'] = network['project']
if 'acltype' in network:
self.result['acl_type'] = network['acltype']
if 'networkdomain' in network:
self.result['network_domain'] = network['networkdomain']
if 'networkofferingname' in network:
self.result['network_offering'] = network['networkofferingname']
if 'ispersistent' in network:
self.result['is_persistent'] = network['ispersistent']
if 'tags' in network:
self.result['tags'] = []
for tag in network['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
displaytext = dict(default=None),
network_offering = dict(default=None),
zone = dict(default=None),
start_ip = dict(default=None),
end_ip = dict(default=None),
gateway = dict(default=None),
netmask = dict(default=None),
start_ipv6 = dict(default=None),
end_ipv6 = dict(default=None),
cidr_ipv6 = dict(default=None),
gateway_ipv6 = dict(default=None),
vlan = dict(default=None),
vpc = dict(default=None),
isolated_pvlan = dict(default=None),
clean_up = dict(type='bool', choices=BOOLEANS, default=False),
network_domain = dict(default=None),
state = dict(choices=['present', 'absent', 'restarted' ], default='present'),
acl_type = dict(choices=['account', 'domain'], default='account'),
project = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
poll_async = dict(type='bool', choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
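# if any option of a group is used, the remaining options of that group are required as well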
required_together = (
['api_key', 'api_secret', 'api_url'],
['start_ip', 'netmask', 'gateway'],
['start_ipv6', 'cidr_ipv6', 'gateway_ipv6'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_network = AnsibleCloudStackNetwork(module)
state = module.params.get('state')
if state in ['absent']:
network = acs_network.absent_network()
elif state in ['restarted']:
network = acs_network.restart_network()
else:
network = acs_network.present_network()
result = acs_network.get_result(network)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,423 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_portforward
short_description: Manages port forwarding rules on Apache CloudStack based clouds.
description:
- Create, update and remove port forwarding rules.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
ip_address:
description:
- Public IP address the rule is assigned to.
required: true
vm:
description:
- Name of the virtual machine the port forwarding rule is created for.
- Required if C(state=present).
required: false
default: null
state:
description:
- State of the port forwarding rule.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
protocol:
description:
- Protocol of the port forwarding rule.
required: false
default: 'tcp'
choices: [ 'tcp', 'udp' ]
public_port:
description:
- Start public port for this rule.
required: true
public_end_port:
description:
- End public port for this rule.
- If not specified, equal to C(public_port).
required: false
default: null
private_port:
description:
- Start private port for this rule.
required: true
private_end_port:
description:
- End private port for this rule.
- If not specified, equal to C(private_port).
required: false
default: null
open_firewall:
description:
- Whether a firewall rule for the public port should be created while creating the new port forwarding rule.
- Use M(cs_firewall) for managing firewall rules.
required: false
default: false
vm_guest_ip:
description:
- VM guest NIC secondary IP address for the port forwarding rule.
required: false
default: null
domain:
description:
- Domain the C(vm) is related to.
required: false
default: null
account:
description:
- Account the C(vm) is related to.
required: false
default: null
project:
description:
- Name of the project the C(vm) is located in.
required: false
default: null
zone:
description:
- Name of the zone the virtual machine is in.
- If not set, default zone is used.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# 1.2.3.4:80 -> web01:8080
- local_action:
module: cs_portforward
ip_address: 1.2.3.4
vm: web01
public_port: 80
private_port: 8080
# forward SSH and open firewall
- local_action:
module: cs_portforward
ip_address: '{{ public_ip }}'
vm: '{{ inventory_hostname }}'
public_port: '{{ ansible_ssh_port }}'
private_port: 22
open_firewall: true
# forward DNS traffic, but do not open firewall
- local_action:
module: cs_portforward
ip_address: 1.2.3.4
vm: '{{ inventory_hostname }}'
public_port: 53
private_port: 53
protocol: udp
open_firewall: true
# remove ssh port forwarding
- local_action:
module: cs_portforward
ip_address: 1.2.3.4
public_port: 22
private_port: 22
state: absent
'''
RETURN = '''
---
ip_address:
description: Public IP address.
returned: success
type: string
sample: 1.2.3.4
protocol:
description: Protocol.
returned: success
type: string
sample: tcp
private_port:
description: Start port on the virtual machine's IP address.
returned: success
type: int
sample: 80
private_end_port:
description: End port on the virtual machine's IP address.
returned: success
type: int
public_port:
description: Start port on the public IP address.
returned: success
type: int
sample: 80
public_end_port:
description: End port on the public IP address.
returned: success
type: int
sample: 80
tags:
description: Tags related to the port forwarding.
returned: success
type: list
sample: []
vm_name:
description: Name of the virtual machine.
returned: success
type: string
sample: web-01
vm_display_name:
description: Display name of the virtual machine.
returned: success
type: string
sample: web-01
vm_guest_ip:
description: IP of the virtual machine.
returned: success
type: string
sample: 10.101.65.152
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackPortforwarding(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.portforwarding_rule = None
self.vm_default_nic = None
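# Return the guest IP the rule should forward to: the primary IP of the VM's
# default NIC, or the given secondary IP if it is assigned to that NIC.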
def get_vm_guest_ip(self):
vm_guest_ip = self.module.params.get('vm_guest_ip')
default_nic = self.get_vm_default_nic()
if not vm_guest_ip:
return default_nic['ipaddress']
for secondary_ip in default_nic['secondaryip']:
if vm_guest_ip == secondary_ip['ipaddress']:
return vm_guest_ip
self.module.fail_json(msg="Secondary IP '%s' not assigned to VM" % vm_guest_ip)
def get_vm_default_nic(self):
if self.vm_default_nic:
return self.vm_default_nic
nics = self.cs.listNics(virtualmachineid=self.get_vm(key='id'))
if nics:
for n in nics['nic']:
if n['isdefault']:
self.vm_default_nic = n
return self.vm_default_nic
self.module.fail_json(msg="No default IP address of VM '%s' found" % self.module.params.get('vm'))
def get_portforwarding_rule(self):
if not self.portforwarding_rule:
protocol = self.module.params.get('protocol')
public_port = self.module.params.get('public_port')
public_end_port = self.get_or_fallback('public_end_port', 'public_port')
private_port = self.module.params.get('private_port')
private_end_port = self.get_or_fallback('private_end_port', 'private_port')
args = {}
args['ipaddressid'] = self.get_ip_address(key='id')
args['projectid'] = self.get_project(key='id')
portforwarding_rules = self.cs.listPortForwardingRules(**args)
if portforwarding_rules and 'portforwardingrule' in portforwarding_rules:
for rule in portforwarding_rules['portforwardingrule']:
if protocol == rule['protocol'] \
and public_port == int(rule['publicport']):
self.portforwarding_rule = rule
break
return self.portforwarding_rule
def present_portforwarding_rule(self):
portforwarding_rule = self.get_portforwarding_rule()
if portforwarding_rule:
portforwarding_rule = self.update_portforwarding_rule(portforwarding_rule)
else:
portforwarding_rule = self.create_portforwarding_rule()
return portforwarding_rule
def create_portforwarding_rule(self):
args = {}
args['protocol'] = self.module.params.get('protocol')
args['publicport'] = self.module.params.get('public_port')
args['publicendport'] = self.get_or_fallback('public_end_port', 'public_port')
args['privateport'] = self.module.params.get('private_port')
args['privateendport'] = self.get_or_fallback('private_end_port', 'private_port')
args['openfirewall'] = self.module.params.get('open_firewall')
args['vmguestip'] = self.get_vm_guest_ip()
args['ipaddressid'] = self.get_ip_address(key='id')
args['virtualmachineid'] = self.get_vm(key='id')
portforwarding_rule = None
self.result['changed'] = True
if not self.module.check_mode:
portforwarding_rule = self.cs.createPortForwardingRule(**args)
poll_async = self.module.params.get('poll_async')
if poll_async:
portforwarding_rule = self._poll_job(portforwarding_rule, 'portforwardingrule')
return portforwarding_rule
def update_portforwarding_rule(self, portforwarding_rule):
args = {}
args['protocol'] = self.module.params.get('protocol')
args['publicport'] = self.module.params.get('public_port')
args['publicendport'] = self.get_or_fallback('public_end_port', 'public_port')
args['privateport'] = self.module.params.get('private_port')
args['privateendport'] = self.get_or_fallback('private_end_port', 'private_port')
args['openfirewall'] = self.module.params.get('open_firewall')
args['vmguestip'] = self.get_vm_guest_ip()
args['ipaddressid'] = self.get_ip_address(key='id')
args['virtualmachineid'] = self.get_vm(key='id')
if self._has_changed(args, portforwarding_rule):
self.result['changed'] = True
if not self.module.check_mode:
# updatePortForwardingRule seems broken in CloudStack 4.2.1, so work around it
# by removing the old rule and creating a new one with the changed values.
# portforwarding_rule = self.cs.updatePortForwardingRule(**args)
self.absent_portforwarding_rule()
portforwarding_rule = self.cs.createPortForwardingRule(**args)
poll_async = self.module.params.get('poll_async')
if poll_async:
portforwarding_rule = self._poll_job(portforwarding_rule, 'portforwardingrule')
return portforwarding_rule
def absent_portforwarding_rule(self):
portforwarding_rule = self.get_portforwarding_rule()
if portforwarding_rule:
self.result['changed'] = True
args = {}
args['id'] = portforwarding_rule['id']
if not self.module.check_mode:
res = self.cs.deletePortForwardingRule(**args)
poll_async = self.module.params.get('poll_async')
if poll_async:
self._poll_job(res, 'portforwardingrule')
return portforwarding_rule
def get_result(self, portforwarding_rule):
if portforwarding_rule:
if 'id' in portforwarding_rule:
self.result['id'] = portforwarding_rule['id']
if 'virtualmachinedisplayname' in portforwarding_rule:
self.result['vm_display_name'] = portforwarding_rule['virtualmachinedisplayname']
if 'virtualmachinename' in portforwarding_rule:
self.result['vm_name'] = portforwarding_rule['virtualmachinename']
if 'ipaddress' in portforwarding_rule:
self.result['ip_address'] = portforwarding_rule['ipaddress']
if 'vmguestip' in portforwarding_rule:
self.result['vm_guest_ip'] = portforwarding_rule['vmguestip']
if 'publicport' in portforwarding_rule:
self.result['public_port'] = int(portforwarding_rule['publicport'])
if 'publicendport' in portforwarding_rule:
self.result['public_end_port'] = int(portforwarding_rule['publicendport'])
if 'privateport' in portforwarding_rule:
self.result['private_port'] = int(portforwarding_rule['privateport'])
if 'privateendport' in portforwarding_rule:
self.result['private_end_port'] = int(portforwarding_rule['privateendport'])
if 'protocol' in portforwarding_rule:
self.result['protocol'] = portforwarding_rule['protocol']
if 'tags' in portforwarding_rule:
self.result['tags'] = []
for tag in portforwarding_rule['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
ip_address = dict(required=True),
protocol= dict(choices=['tcp', 'udp'], default='tcp'),
public_port = dict(type='int', required=True),
public_end_port = dict(type='int', default=None),
private_port = dict(type='int', required=True),
private_end_port = dict(type='int', default=None),
state = dict(choices=['present', 'absent'], default='present'),
open_firewall = dict(choices=BOOLEANS, default=False),
vm_guest_ip = dict(default=None),
vm = dict(default=None),
zone = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_pf = AnsibleCloudStackPortforwarding(module)
state = module.params.get('state')
if state in ['absent']:
pf_rule = acs_pf.absent_portforwarding_rule()
else:
pf_rule = acs_pf.present_portforwarding_rule()
result = acs_pf.get_result(pf_rule)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,333 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_project
short_description: Manages projects on Apache CloudStack based clouds.
description:
- Create, update, suspend, activate and remove projects.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the project.
required: true
displaytext:
description:
- Displaytext of the project.
- If not specified, C(name) will be used as displaytext.
required: false
default: null
state:
description:
- State of the project.
required: false
default: 'present'
choices: [ 'present', 'absent', 'active', 'suspended' ]
domain:
description:
- Domain the project is related to.
required: false
default: null
account:
description:
- Account the project is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create a project
- local_action:
module: cs_project
name: web
# Rename a project
- local_action:
module: cs_project
name: web
displaytext: my web project
# Suspend an existing project
- local_action:
module: cs_project
name: web
state: suspended
# Activate an existing project
- local_action:
module: cs_project
name: web
state: active
# Remove a project
- local_action:
module: cs_project
name: web
state: absent
'''
RETURN = '''
---
id:
description: ID of the project.
returned: success
type: string
sample: 04589590-ac63-4ffc-93f5-b698b8ac38b6
name:
description: Name of the project.
returned: success
type: string
sample: web project
displaytext:
description: Display text of the project.
returned: success
type: string
sample: web project
state:
description: State of the project.
returned: success
type: string
sample: Active
domain:
description: Domain the project is related to.
returned: success
type: string
sample: example domain
account:
description: Account the project is related to.
returned: success
type: string
sample: example account
tags:
description: List of resource tags associated with the project.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackProject(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.project = None
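# Find the project by name or id within the given account/domain and cache it.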
def get_project(self):
if not self.project:
project = self.module.params.get('name')
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
projects = self.cs.listProjects(**args)
if projects:
for p in projects['project']:
if project.lower() in [ p['name'].lower(), p['id']]:
self.project = p
break
return self.project
def present_project(self):
project = self.get_project()
if not project:
project = self.create_project(project)
else:
project = self.update_project(project)
return project
def update_project(self, project):
args = {}
args['id'] = project['id']
args['displaytext'] = self.get_or_fallback('displaytext', 'name')
if self._has_changed(args, project):
self.result['changed'] = True
if not self.module.check_mode:
project = self.cs.updateProject(**args)
if 'errortext' in project:
self.module.fail_json(msg="Failed: '%s'" % project['errortext'])
poll_async = self.module.params.get('poll_async')
if project and poll_async:
project = self._poll_job(project, 'project')
return project
def create_project(self, project):
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['displaytext'] = self.get_or_fallback('displaytext', 'name')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
if not self.module.check_mode:
project = self.cs.createProject(**args)
if 'errortext' in project:
self.module.fail_json(msg="Failed: '%s'" % project['errortext'])
poll_async = self.module.params.get('poll_async')
if project and poll_async:
project = self._poll_job(project, 'project')
return project
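# Suspend or activate the project, depending on the requested state.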
def state_project(self, state=None):
project = self.get_project()
if not project:
self.module.fail_json(msg="No project named '%s' found." % self.module.params('name'))
if project['state'].lower() != state:
self.result['changed'] = True
args = {}
args['id'] = project['id']
if not self.module.check_mode:
if state == 'suspended':
project = self.cs.suspendProject(**args)
else:
project = self.cs.activateProject(**args)
if 'errortext' in project:
self.module.fail_json(msg="Failed: '%s'" % project['errortext'])
poll_async = self.module.params.get('poll_async')
if project and poll_async:
project = self._poll_job(project, 'project')
return project
def absent_project(self):
project = self.get_project()
if project:
self.result['changed'] = True
args = {}
args['id'] = project['id']
if not self.module.check_mode:
res = self.cs.deleteProject(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'project')
return project
def get_result(self, project):
if project:
if 'name' in project:
self.result['name'] = project['name']
if 'displaytext' in project:
self.result['displaytext'] = project['displaytext']
if 'account' in project:
self.result['account'] = project['account']
if 'domain' in project:
self.result['domain'] = project['domain']
if 'state' in project:
self.result['state'] = project['state']
if 'tags' in project:
self.result['tags'] = []
for tag in project['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
displaytext = dict(default=None),
state = dict(choices=['present', 'absent', 'active', 'suspended' ], default='present'),
domain = dict(default=None),
account = dict(default=None),
poll_async = dict(type='bool', choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_project = AnsibleCloudStackProject(module)
state = module.params.get('state')
if state in ['absent']:
project = acs_project.absent_project()
elif state in ['active', 'suspended']:
project = acs_project.state_project(state=state)
else:
project = acs_project.present_project()
result = acs_project.get_result(project)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,198 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_securitygroup
short_description: Manages security groups on Apache CloudStack based clouds.
description:
- Create and remove security groups.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the security group.
required: true
description:
description:
- Description of the security group.
required: false
default: null
state:
description:
- State of the security group.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
project:
description:
- Name of the project the security group will be created in.
required: false
default: null
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create a security group
- local_action:
module: cs_securitygroup
name: default
description: default security group
# Remove a security group
- local_action:
module: cs_securitygroup
name: default
state: absent
'''
RETURN = '''
---
name:
description: Name of security group.
returned: success
type: string
sample: app
description:
description: Description of security group.
returned: success
type: string
sample: application security group
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackSecurityGroup(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.security_group = None
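# Look up the security group by name, optionally within a project, and cache it.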
def get_security_group(self):
if not self.security_group:
sg_name = self.module.params.get('name')
args = {}
args['projectid'] = self.get_project('id')
sgs = self.cs.listSecurityGroups(**args)
if sgs:
for s in sgs['securitygroup']:
if s['name'] == sg_name:
self.security_group = s
break
return self.security_group
def create_security_group(self):
security_group = self.get_security_group()
if not security_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['projectid'] = self.get_project('id')
args['description'] = self.module.params.get('description')
if not self.module.check_mode:
res = self.cs.createSecurityGroup(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
security_group = res['securitygroup']
return security_group
def remove_security_group(self):
security_group = self.get_security_group()
if security_group:
self.result['changed'] = True
args = {}
args['name'] = self.module.params.get('name')
args['projectid'] = self.get_project('id')
if not self.module.check_mode:
res = self.cs.deleteSecurityGroup(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
return security_group
def get_result(self, security_group):
if security_group:
if 'name' in security_group:
self.result['name'] = security_group['name']
if 'description' in security_group:
self.result['description'] = security_group['description']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
description = dict(default=None),
state = dict(choices=['present', 'absent'], default='present'),
project = dict(default=None),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_sg = AnsibleCloudStackSecurityGroup(module)
state = module.params.get('state')
if state in ['absent']:
sg = acs_sg.remove_security_group()
else:
sg = acs_sg.create_security_group()
result = acs_sg.get_result(sg)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,431 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_securitygroup_rule
short_description: Manages security group rules on Apache CloudStack based clouds.
description:
- Add and remove security group rules.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
security_group:
description:
- Name of the security group the rule is related to. The security group must already exist.
required: true
state:
description:
- State of the security group rule.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
protocol:
description:
- Protocol of the security group rule.
required: false
default: 'tcp'
choices: [ 'tcp', 'udp', 'icmp', 'ah', 'esp', 'gre' ]
type:
description:
- Ingress or egress security group rule.
required: false
default: 'ingress'
choices: [ 'ingress', 'egress' ]
cidr:
description:
- CIDR (full notation) to be used for security group rule.
required: false
default: '0.0.0.0/0'
user_security_group:
description:
- Security group this rule is based on.
required: false
default: null
start_port:
description:
- Start port for this rule. Required if C(protocol=tcp) or C(protocol=udp).
required: false
default: null
aliases: [ 'port' ]
end_port:
description:
- End port for this rule. Only considered if C(protocol=tcp) or C(protocol=udp); defaults to C(start_port) if not set.
required: false
default: null
icmp_type:
description:
- Type of the icmp message being sent. Required if C(protocol=icmp).
required: false
default: null
icmp_code:
description:
- Error code for this icmp message. Required if C(protocol=icmp).
required: false
default: null
project:
description:
- Name of the project the security group is located in.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
---
# Allow inbound port 80/tcp from 1.2.3.4 added to security group 'default'
- local_action:
module: cs_securitygroup_rule
security_group: default
port: 80
cidr: 1.2.3.4/32
# Allow tcp/udp outbound added to security group 'default'
- local_action:
module: cs_securitygroup_rule
security_group: default
type: egress
start_port: 1
end_port: 65535
protocol: '{{ item }}'
with_items:
- tcp
- udp
# Allow inbound icmp from 0.0.0.0/0 added to security group 'default'
- local_action:
module: cs_securitygroup_rule
security_group: default
protocol: icmp
icmp_code: -1
icmp_type: -1
# Remove rule inbound port 80/tcp from 0.0.0.0/0 from security group 'default'
- local_action:
module: cs_securitygroup_rule
security_group: default
port: 80
state: absent
# Allow inbound port 80/tcp from security group web added to security group 'default'
- local_action:
module: cs_securitygroup_rule
security_group: default
port: 80
user_security_group: web
'''
RETURN = '''
---
security_group:
description: security group of the rule.
returned: success
type: string
sample: default
type:
description: type of the rule.
returned: success
type: string
sample: ingress
cidr:
description: CIDR of the rule.
returned: success and cidr is defined
type: string
sample: 0.0.0.0/0
user_security_group:
description: user security group of the rule.
returned: success and user_security_group is defined
type: string
sample: default
protocol:
description: protocol of the rule.
returned: success
type: string
sample: tcp
start_port:
description: start port of the rule.
returned: success
type: int
sample: 80
end_port:
description: end port of the rule.
returned: success
type: int
sample: 80
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackSecurityGroupRule(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
def _tcp_udp_match(self, rule, protocol, start_port, end_port):
return protocol in ['tcp', 'udp'] \
and protocol == rule['protocol'] \
and start_port == int(rule['startport']) \
and end_port == int(rule['endport'])
def _icmp_match(self, rule, protocol, icmp_code, icmp_type):
return protocol == 'icmp' \
and protocol == rule['protocol'] \
and icmp_code == int(rule['icmpcode']) \
and icmp_type == int(rule['icmptype'])
def _ah_esp_gre_match(self, rule, protocol):
return protocol in ['ah', 'esp', 'gre'] \
and protocol == rule['protocol']
def _type_security_group_match(self, rule, security_group_name):
return security_group_name \
and 'securitygroupname' in rule \
and security_group_name == rule['securitygroupname']
def _type_cidr_match(self, rule, cidr):
return 'cidr' in rule \
and cidr == rule['cidr']
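# A rule matches an existing one if both the source part (CIDR or user
# security group) and the protocol specific part (ports for tcp/udp,
# type/code for icmp, protocol alone for ah/esp/gre) are equal.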
def _get_rule(self, rules):
user_security_group_name = self.module.params.get('user_security_group')
cidr = self.module.params.get('cidr')
protocol = self.module.params.get('protocol')
start_port = self.module.params.get('start_port')
end_port = self.get_or_fallback('end_port', 'start_port')
icmp_code = self.module.params.get('icmp_code')
icmp_type = self.module.params.get('icmp_type')
if protocol in ['tcp', 'udp'] and not (start_port and end_port):
self.module.fail_json(msg="no start_port or end_port set for protocol '%s'" % protocol)
if protocol == 'icmp' and not (icmp_type and icmp_code):
self.module.fail_json(msg="no icmp_type or icmp_code set for protocol '%s'" % protocol)
for rule in rules:
if user_security_group_name:
type_match = self._type_security_group_match(rule, user_security_group_name)
else:
type_match = self._type_cidr_match(rule, cidr)
protocol_match = ( self._tcp_udp_match(rule, protocol, start_port, end_port) \
or self._icmp_match(rule, protocol, icmp_code, icmp_type) \
or self._ah_esp_gre_match(rule, protocol)
)
if type_match and protocol_match:
return rule
return None
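# Resolve a security group by name; used for the group the rule belongs to
# as well as for an optional user_security_group.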
def get_security_group(self, security_group_name=None):
if not security_group_name:
security_group_name = self.module.params.get('security_group')
args = {}
args['securitygroupname'] = security_group_name
args['projectid'] = self.get_project('id')
sgs = self.cs.listSecurityGroups(**args)
if not sgs or 'securitygroup' not in sgs:
self.module.fail_json(msg="security group '%s' not found" % security_group_name)
return sgs['securitygroup'][0]
def add_rule(self):
security_group = self.get_security_group()
args = {}
user_security_group_name = self.module.params.get('user_security_group')
# the user_security_group and cidr are mutually_exclusive, but cidr is defaulted to 0.0.0.0/0.
# that is why we ignore if we have a user_security_group.
if user_security_group_name:
args['usersecuritygrouplist'] = []
user_security_group = self.get_security_group(user_security_group_name)
args['usersecuritygrouplist'].append({
'group': user_security_group['name'],
'account': user_security_group['account'],
})
else:
args['cidrlist'] = self.module.params.get('cidr')
args['protocol'] = self.module.params.get('protocol')
args['startport'] = self.module.params.get('start_port')
args['endport'] = self.get_or_fallback('end_port', 'start_port')
args['icmptype'] = self.module.params.get('icmp_type')
args['icmpcode'] = self.module.params.get('icmp_code')
args['projectid'] = self.get_project('id')
args['securitygroupid'] = security_group['id']
rule = None
res = None
sg_type = self.module.params.get('type')
if sg_type == 'ingress':
rule = self._get_rule(security_group['ingressrule'])
if not rule:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.authorizeSecurityGroupIngress(**args)
elif sg_type == 'egress':
rule = self._get_rule(security_group['egressrule'])
if not rule:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.authorizeSecurityGroupEgress(**args)
if res and 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
security_group = self._poll_job(res, 'securitygroup')
key = sg_type + "rule" # ingressrule / egressrule
if key in security_group:
rule = security_group[key][0]
return rule
def remove_rule(self):
security_group = self.get_security_group()
rule = None
res = None
sg_type = self.module.params.get('type')
if sg_type == 'ingress':
rule = self._get_rule(security_group['ingressrule'])
if rule:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.revokeSecurityGroupIngress(id=rule['ruleid'])
elif sg_type == 'egress':
rule = self._get_rule(security_group['egressrule'])
if rule:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.revokeSecurityGroupEgress(id=rule['ruleid'])
if res and 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'securitygroup')
return rule
def get_result(self, security_group_rule):
self.result['type'] = self.module.params.get('type')
self.result['security_group'] = self.module.params.get('security_group')
if security_group_rule:
rule = security_group_rule
if 'securitygroupname' in rule:
self.result['user_security_group'] = rule['securitygroupname']
if 'cidr' in rule:
self.result['cidr'] = rule['cidr']
if 'protocol' in rule:
self.result['protocol'] = rule['protocol']
if 'startport' in rule:
self.result['start_port'] = rule['startport']
if 'endport' in rule:
self.result['end_port'] = rule['endport']
if 'icmpcode' in rule:
self.result['icmp_code'] = rule['icmpcode']
if 'icmptype' in rule:
self.result['icmp_type'] = rule['icmptype']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
security_group = dict(required=True),
type = dict(choices=['ingress', 'egress'], default='ingress'),
cidr = dict(default='0.0.0.0/0'),
user_security_group = dict(default=None),
protocol = dict(choices=['tcp', 'udp', 'icmp', 'ah', 'esp', 'gre'], default='tcp'),
icmp_type = dict(type='int', default=None),
icmp_code = dict(type='int', default=None),
start_port = dict(type='int', default=None, aliases=['port']),
end_port = dict(type='int', default=None),
state = dict(choices=['present', 'absent'], default='present'),
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['icmp_type', 'icmp_code'],
['api_key', 'api_secret', 'api_url'],
),
mutually_exclusive = (
['icmp_type', 'start_port'],
['icmp_type', 'end_port'],
['icmp_code', 'start_port'],
['icmp_code', 'end_port'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_sg_rule = AnsibleCloudStackSecurityGroupRule(module)
state = module.params.get('state')
if state in ['absent']:
sg_rule = acs_sg_rule.remove_rule()
else:
sg_rule = acs_sg_rule.add_rule()
result = acs_sg_rule.get_result(sg_rule)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -23,15 +23,26 @@ DOCUMENTATION = '''
module: cs_sshkeypair
short_description: Manages SSH keys on Apache CloudStack based clouds.
description:
- Create, register and remove SSH keys.
- If no key was found and no public key was provided, a new SSH
private/public key pair will be created and the private key will be returned.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of public key.
required: true
domain:
description:
- Domain the public key is related to.
required: false
default: null
account:
description:
- Account the public key is related to.
required: false
default: null
project:
description:
- Name of the project the public key to be registered in.
@ -48,10 +59,10 @@ options:
- String of the public key.
required: false
default: null
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# create a new private / public key pair:
- local_action: cs_sshkeypair name=linus@example.com
register: key
@ -102,18 +113,16 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.ssh_key = None
def register_ssh_key(self, public_key):
ssh_key = self.get_ssh_key()
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
res = None
if not ssh_key:
@ -141,9 +150,11 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
ssh_key = self.get_ssh_key()
if not ssh_key:
self.result['changed'] = True
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
if not self.module.check_mode:
res = self.cs.createSSHKeyPair(**args)
ssh_key = res['keypair']
@ -154,9 +165,11 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
ssh_key = self.get_ssh_key()
if ssh_key:
self.result['changed'] = True
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
if not self.module.check_mode:
res = self.cs.deleteSSHKeyPair(**args)
return ssh_key
@ -164,9 +177,11 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
def get_ssh_key(self):
if not self.ssh_key:
args = {}
args['domainid'] = self.get_domain('id')
args['account'] = self.get_account('name')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
ssh_keys = self.cs.listSSHKeyPairs(**args)
if ssh_keys and 'sshkeypair' in ssh_keys:
@ -178,10 +193,8 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
if ssh_key:
if 'fingerprint' in ssh_key:
self.result['fingerprint'] = ssh_key['fingerprint']
if 'name' in ssh_key:
self.result['name'] = ssh_key['name']
if 'privatekey' in ssh_key:
self.result['private_key'] = ssh_key['privatekey']
return self.result
@ -195,14 +208,20 @@ class AnsibleCloudStackSshKey(AnsibleCloudStack):
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
public_key = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
state = dict(choices=['present', 'absent'], default='present'),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
@ -234,4 +253,5 @@ def main():
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,316 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_staticnat
short_description: Manages static NATs on Apache CloudStack based clouds.
description:
- Create, update and remove static NATs.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
ip_address:
description:
- Public IP address the static NAT is assigned to.
required: true
vm:
description:
- Name of the virtual machine the static NAT is created for.
- Required if C(state=present).
required: false
default: null
vm_guest_ip:
description:
- VM guest NIC secondary IP address for the static NAT.
required: false
default: null
state:
description:
- State of the static NAT.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
domain:
description:
- Domain the static NAT is related to.
required: false
default: null
account:
description:
- Account the static NAT is related to.
required: false
default: null
project:
description:
- Name of the project the static NAT is related to.
required: false
default: null
zone:
description:
- Name of the zone the virtual machine is in.
- If not set, default zone is used.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# create a static NAT: 1.2.3.4 -> web01
- local_action:
module: cs_staticnat
ip_address: 1.2.3.4
vm: web01
# remove a static NAT
- local_action:
module: cs_staticnat
ip_address: 1.2.3.4
state: absent
'''
RETURN = '''
---
ip_address:
description: Public IP address.
returned: success
type: string
sample: 1.2.3.4
vm_name:
description: Name of the virtual machine.
returned: success
type: string
sample: web-01
vm_display_name:
description: Display name of the virtual machine.
returned: success
type: string
sample: web-01
vm_guest_ip:
description: IP of the virtual machine.
returned: success
type: string
sample: 10.101.65.152
zone:
description: Name of zone the static NAT is related to.
returned: success
type: string
sample: ch-gva-2
project:
description: Name of project the static NAT is related to.
returned: success
type: string
sample: Production
account:
description: Account the static NAT is related to.
returned: success
type: string
sample: example account
domain:
description: Domain the static NAT is related to.
returned: success
type: string
sample: example domain
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackStaticNat(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
self.vm_default_nic = None
# TODO: move it to cloudstack utils, also used in cs_portforward
def get_vm_guest_ip(self):
vm_guest_ip = self.module.params.get('vm_guest_ip')
default_nic = self.get_vm_default_nic()
if not vm_guest_ip:
return default_nic['ipaddress']
for secondary_ip in default_nic['secondaryip']:
if vm_guest_ip == secondary_ip['ipaddress']:
return vm_guest_ip
self.module.fail_json(msg="Secondary IP '%s' not assigned to VM" % vm_guest_ip)
# TODO: move it to cloudstack utils, also used in cs_portforward
def get_vm_default_nic(self):
if self.vm_default_nic:
return self.vm_default_nic
nics = self.cs.listNics(virtualmachineid=self.get_vm(key='id'))
if nics:
for n in nics['nic']:
if n['isdefault']:
self.vm_default_nic = n
return self.vm_default_nic
self.module.fail_json(msg="No default IP address of VM '%s' found" % self.module.params.get('vm'))
def create_static_nat(self, ip_address):
self.result['changed'] = True
args = {}
args['virtualmachineid'] = self.get_vm(key='id')
args['ipaddressid'] = ip_address['id']
args['vmguestip'] = self.get_vm_guest_ip()
if not self.module.check_mode:
res = self.cs.enableStaticNat(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
# reset ip address and query new values
self.ip_address = None
ip_address = self.get_ip_address()
return ip_address
def update_static_nat(self, ip_address):
args = {}
args['virtualmachineid'] = self.get_vm(key='id')
args['ipaddressid'] = ip_address['id']
args['vmguestip'] = self.get_vm_guest_ip()
# make an alias, so we can use _has_changed()
ip_address['vmguestip'] = ip_address['vmipaddress']
if self._has_changed(args, ip_address):
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.disableStaticNat(ipaddressid=ip_address['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
res = self._poll_job(res, 'staticnat')
res = self.cs.enableStaticNat(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
# reset ip address and query new values
self.ip_address = None
ip_address = self.get_ip_address()
return ip_address
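# Enable static NAT if the address is not yet static NATed, otherwise
# reconcile the target VM and guest IP.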
def present_static_nat(self):
ip_address = self.get_ip_address()
if not ip_address['isstaticnat']:
ip_address = self.create_static_nat(ip_address)
else:
ip_address = self.update_static_nat(ip_address)
return ip_address
def absent_static_nat(self):
ip_address = self.get_ip_address()
if ip_address['isstaticnat']:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.disableStaticNat(ipaddressid=ip_address['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'staticnat')
return ip_address
def get_result(self, ip_address):
if ip_address:
if 'zonename' in ip_address:
self.result['zone'] = ip_address['zonename']
if 'domain' in ip_address:
self.result['domain'] = ip_address['domain']
if 'account' in ip_address:
self.result['account'] = ip_address['account']
if 'project' in ip_address:
self.result['project'] = ip_address['project']
if 'virtualmachinedisplayname' in ip_address:
self.result['vm_display_name'] = ip_address['virtualmachinedisplayname']
if 'virtualmachinename' in ip_address:
self.result['vm'] = ip_address['virtualmachinename']
if 'vmipaddress' in ip_address:
self.result['vm_guest_ip'] = ip_address['vmipaddress']
if 'ipaddress' in ip_address:
self.result['ip_address'] = ip_address['ipaddress']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
ip_address = dict(required=True),
vm = dict(default=None),
vm_guest_ip = dict(default=None),
state = dict(choices=['present', 'absent'], default='present'),
zone = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_static_nat = AnsibleCloudStackStaticNat(module)
state = module.params.get('state')
if state in ['absent']:
ip_address = acs_static_nat.absent_static_nat()
else:
ip_address = acs_static_nat.present_static_nat()
result = acs_static_nat.get_result(ip_address)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
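
For reference, a minimal standalone sketch of the API flow this module drives through the cs library. The endpoint, keys, public IP and VM id below are placeholders, and the listPublicIpAddresses lookup is an assumption about what get_ip_address() does in the shared cloudstack utils; only enableStaticNat is taken directly from create_static_nat() above.

from cs import CloudStack

cs = CloudStack(endpoint='https://cloud.example.com/client/api',
                key='API_KEY', secret='API_SECRET')

# Look up the public IP, then enable static NAT on it when it is not
# already enabled, mirroring create_static_nat() above.
res = cs.listPublicIpAddresses(ipaddress='203.0.113.10')
ip = res['publicipaddress'][0]
if not ip['isstaticnat']:
    cs.enableStaticNat(ipaddressid=ip['id'], virtualmachineid='VM_UUID')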

@ -0,0 +1,631 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_template
short_description: Manages templates on Apache CloudStack based clouds.
description:
    - Register a template from a URL, create a template from a ROOT volume of a stopped VM or its snapshot, and delete templates.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the template.
required: true
url:
description:
      - URL where the template is hosted.
- Mutually exclusive with C(vm).
required: false
default: null
vm:
description:
      - Name of the VM the template will be created from, using its ROOT volume or alternatively a snapshot.
- VM must be in stopped state if created from its volume.
- Mutually exclusive with C(url).
required: false
default: null
snapshot:
description:
- Name of the snapshot, created from the VM ROOT volume, the template will be created from.
- C(vm) is required together with this argument.
required: false
default: null
os_type:
description:
- OS type that best represents the OS of this template.
required: false
default: null
checksum:
description:
- The MD5 checksum value of this template.
- If set, we search by checksum instead of name.
required: false
default: false
is_ready:
description:
- This flag is used for searching existing templates.
      - If set to C(true), it will only list templates that are ready for deployment, e.g. successfully downloaded and installed.
      - It is recommended to set it to C(false).
required: false
default: false
is_public:
description:
- Register the template to be publicly available to all users.
- Only used if C(state) is present.
required: false
default: false
is_featured:
description:
- Register the template to be featured.
- Only used if C(state) is present.
required: false
default: false
is_dynamically_scalable:
description:
- Register the template having XS/VMWare tools installed in order to support dynamic scaling of VM CPU/memory.
- Only used if C(state) is present.
required: false
default: false
project:
description:
      - Name of the project the template is to be registered in.
required: false
default: null
zone:
description:
- Name of the zone you wish the template to be registered or deleted from.
      - If not specified, the first found zone will be used.
required: false
default: null
template_filter:
description:
- Name of the filter used to search for the template.
required: false
default: 'self'
choices: [ 'featured', 'self', 'selfexecutable', 'sharedexecutable', 'executable', 'community' ]
hypervisor:
description:
      - Name of the hypervisor to be used for creating the new template.
- Relevant when using C(state=present).
required: false
default: none
choices: [ 'KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM' ]
requires_hvm:
description:
- true if this template requires HVM.
required: false
default: false
password_enabled:
description:
- True if the template supports the password reset feature.
required: false
default: false
template_tag:
description:
- the tag for this template.
required: false
default: null
sshkey_enabled:
description:
- True if the template supports the sshkey upload feature.
required: false
default: false
is_routing:
description:
      - True if the template type is routing, i.e. if the template is used to deploy a router.
- Only considered if C(url) is used.
required: false
default: false
format:
description:
- The format for the template.
- Relevant when using C(state=present).
required: false
default: null
choices: [ 'QCOW2', 'RAW', 'VHD', 'OVA' ]
is_extractable:
description:
- True if the template or its derivatives are extractable.
required: false
default: false
details:
description:
- Template details in key/value pairs.
required: false
default: null
bits:
description:
- 32 or 64 bits support.
required: false
default: '64'
displaytext:
description:
- the display text of the template.
    required: false
default: null
state:
description:
- State of the template.
required: false
default: 'present'
choices: [ 'present', 'absent' ]
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Register a systemvm template
- local_action:
module: cs_template
name: systemvm-4.5
url: "http://packages.shapeblue.com/systemvmtemplate/4.5/systemvm64template-4.5-vmware.ova"
hypervisor: VMware
format: OVA
zone: tokio-ix
os_type: Debian GNU/Linux 7(64-bit)
is_routing: yes
# Create a template from a stopped virtual machine's volume
- local_action:
module: cs_template
name: debian-base-template
vm: debian-base-vm
os_type: Debian GNU/Linux 7(64-bit)
zone: tokio-ix
password_enabled: yes
is_public: yes
# Create a template from a virtual machine's root volume snapshot
- local_action:
module: cs_template
name: debian-base-template
vm: debian-base-vm
snapshot: ROOT-233_2015061509114
os_type: Debian GNU/Linux 7(64-bit)
zone: tokio-ix
password_enabled: yes
is_public: yes
# Remove a template
- local_action:
module: cs_template
name: systemvm-4.2
state: absent
'''
RETURN = '''
---
name:
description: Name of the template.
returned: success
type: string
sample: Debian 7 64-bit
displaytext:
description: Displaytext of the template.
returned: success
type: string
sample: Debian 7.7 64-bit minimal 2015-03-19
checksum:
description: MD5 checksum of the template.
returned: success
type: string
sample: 0b31bccccb048d20b551f70830bb7ad0
status:
description: Status of the template.
returned: success
type: string
sample: Download Complete
is_ready:
description: True if the template is ready to be deployed from.
returned: success
type: boolean
sample: true
is_public:
description: True if the template is public.
returned: success
type: boolean
sample: true
is_featured:
description: True if the template is featured.
returned: success
type: boolean
sample: true
is_extractable:
description: True if the template is extractable.
returned: success
type: boolean
sample: true
format:
description: Format of the template.
returned: success
type: string
sample: OVA
os_type:
  description: Type of the OS.
returned: success
type: string
sample: CentOS 6.5 (64-bit)
password_enabled:
description: True if the reset password feature is enabled, false otherwise.
returned: success
type: boolean
sample: false
sshkey_enabled:
description: true if template is sshkey enabled, false otherwise.
returned: success
type: boolean
sample: false
cross_zones:
description: true if the template is managed across all zones, false otherwise.
returned: success
type: boolean
sample: false
template_type:
description: Type of the template.
returned: success
type: string
sample: USER
created:
  description: Date the template was registered.
returned: success
type: string
sample: 2015-03-29T14:57:06+0200
template_tag:
description: Template tag related to this template.
returned: success
type: string
sample: special
hypervisor:
description: Hypervisor related to this template.
returned: success
type: string
sample: VMware
tags:
description: List of resource tags associated with the template.
returned: success
type: dict
sample: '[ { "key": "foo", "value": "bar" } ]'
zone:
description: Name of zone the template is registered in.
returned: success
type: string
sample: zuerich
domain:
description: Domain the template is related to.
returned: success
type: string
sample: example domain
account:
description: Account the template is related to.
returned: success
type: string
sample: example account
project:
description: Name of project the template is related to.
returned: success
type: string
sample: Production
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackTemplate(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
def _get_args(self):
args = {}
args['name'] = self.module.params.get('name')
args['displaytext'] = self.module.params.get('displaytext')
args['bits'] = self.module.params.get('bits')
args['isdynamicallyscalable'] = self.module.params.get('is_dynamically_scalable')
args['isextractable'] = self.module.params.get('is_extractable')
args['isfeatured'] = self.module.params.get('is_featured')
args['ispublic'] = self.module.params.get('is_public')
args['passwordenabled'] = self.module.params.get('password_enabled')
args['requireshvm'] = self.module.params.get('requires_hvm')
args['templatetag'] = self.module.params.get('template_tag')
args['ostypeid'] = self.get_os_type(key='id')
if not args['ostypeid']:
self.module.fail_json(msg="Missing required arguments: os_type")
if not args['displaytext']:
args['displaytext'] = self.module.params.get('name')
return args
def get_root_volume(self, key=None):
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['virtualmachineid'] = self.get_vm(key='id')
args['type'] = "ROOT"
volumes = self.cs.listVolumes(**args)
if volumes:
return self._get_by_key(key, volumes['volume'][0])
self.module.fail_json(msg="Root volume for '%s' not found" % self.get_vm('name'))
def get_snapshot(self, key=None):
snapshot = self.module.params.get('snapshot')
if not snapshot:
return None
args = {}
args['account'] = self.get_account(key='name')
args['domainid'] = self.get_domain(key='id')
args['projectid'] = self.get_project(key='id')
args['volumeid'] = self.get_root_volume('id')
snapshots = self.cs.listSnapshots(**args)
if snapshots:
for s in snapshots['snapshot']:
if snapshot in [ s['name'], s['id'] ]:
return self._get_by_key(key, s)
self.module.fail_json(msg="Snapshot '%s' not found" % snapshot)
def create_template(self):
template = self.get_template()
if not template:
self.result['changed'] = True
args = self._get_args()
snapshot_id = self.get_snapshot(key='id')
if snapshot_id:
args['snapshotid'] = snapshot_id
else:
args['volumeid'] = self.get_root_volume('id')
if not self.module.check_mode:
template = self.cs.createTemplate(**args)
if 'errortext' in template:
self.module.fail_json(msg="Failed: '%s'" % template['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
template = self._poll_job(template, 'template')
return template
def register_template(self):
template = self.get_template()
if not template:
self.result['changed'] = True
args = self._get_args()
args['url'] = self.module.params.get('url')
args['format'] = self.module.params.get('format')
args['checksum'] = self.module.params.get('checksum')
args['isextractable'] = self.module.params.get('is_extractable')
args['isrouting'] = self.module.params.get('is_routing')
args['sshkeyenabled'] = self.module.params.get('sshkey_enabled')
args['hypervisor'] = self.get_hypervisor()
args['zoneid'] = self.get_zone(key='id')
args['domainid'] = self.get_domain(key='id')
args['account'] = self.get_account(key='name')
args['projectid'] = self.get_project(key='id')
if not self.module.check_mode:
res = self.cs.registerTemplate(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
template = res['template']
return template
def get_template(self):
args = {}
args['isready'] = self.module.params.get('is_ready')
args['templatefilter'] = self.module.params.get('template_filter')
args['zoneid'] = self.get_zone(key='id')
args['domainid'] = self.get_domain(key='id')
args['account'] = self.get_account(key='name')
args['projectid'] = self.get_project(key='id')
        # if checksum is set, we only search by the checksum.
checksum = self.module.params.get('checksum')
if not checksum:
args['name'] = self.module.params.get('name')
templates = self.cs.listTemplates(**args)
if templates:
            # if checksum is set, we only search by the checksum.
if not checksum:
return templates['template'][0]
else:
for i in templates['template']:
if i['checksum'] == checksum:
return i
return None
def remove_template(self):
template = self.get_template()
if template:
self.result['changed'] = True
args = {}
args['id'] = template['id']
args['zoneid'] = self.get_zone(key='id')
if not self.module.check_mode:
res = self.cs.deleteTemplate(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if poll_async:
res = self._poll_job(res, 'template')
return template
def get_result(self, template):
if template:
if 'displaytext' in template:
self.result['displaytext'] = template['displaytext']
if 'name' in template:
self.result['name'] = template['name']
if 'hypervisor' in template:
self.result['hypervisor'] = template['hypervisor']
if 'zonename' in template:
self.result['zone'] = template['zonename']
if 'checksum' in template:
self.result['checksum'] = template['checksum']
if 'format' in template:
self.result['format'] = template['format']
if 'isready' in template:
self.result['is_ready'] = template['isready']
if 'ispublic' in template:
self.result['is_public'] = template['ispublic']
if 'isfeatured' in template:
self.result['is_featured'] = template['isfeatured']
if 'isextractable' in template:
self.result['is_extractable'] = template['isextractable']
# and yes! it is really camelCase!
if 'crossZones' in template:
self.result['cross_zones'] = template['crossZones']
if 'ostypename' in template:
self.result['os_type'] = template['ostypename']
if 'templatetype' in template:
self.result['template_type'] = template['templatetype']
if 'passwordenabled' in template:
self.result['password_enabled'] = template['passwordenabled']
if 'sshkeyenabled' in template:
self.result['sshkey_enabled'] = template['sshkeyenabled']
if 'status' in template:
self.result['status'] = template['status']
if 'created' in template:
self.result['created'] = template['created']
if 'templatetag' in template:
self.result['template_tag'] = template['templatetag']
if 'tags' in template:
self.result['tags'] = []
for tag in template['tags']:
result_tag = {}
result_tag['key'] = tag['key']
result_tag['value'] = tag['value']
self.result['tags'].append(result_tag)
if 'domain' in template:
self.result['domain'] = template['domain']
if 'account' in template:
self.result['account'] = template['account']
if 'project' in template:
self.result['project'] = template['project']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
displaytext = dict(default=None),
url = dict(default=None),
vm = dict(default=None),
snapshot = dict(default=None),
os_type = dict(default=None),
is_ready = dict(type='bool', choices=BOOLEANS, default=False),
is_public = dict(type='bool', choices=BOOLEANS, default=True),
is_featured = dict(type='bool', choices=BOOLEANS, default=False),
is_dynamically_scalable = dict(type='bool', choices=BOOLEANS, default=False),
is_extractable = dict(type='bool', choices=BOOLEANS, default=False),
is_routing = dict(type='bool', choices=BOOLEANS, default=False),
checksum = dict(default=None),
template_filter = dict(default='self', choices=['featured', 'self', 'selfexecutable', 'sharedexecutable', 'executable', 'community']),
hypervisor = dict(choices=['KVM', 'VMware', 'BareMetal', 'XenServer', 'LXC', 'HyperV', 'UCS', 'OVM'], default=None),
requires_hvm = dict(type='bool', choices=BOOLEANS, default=False),
password_enabled = dict(type='bool', choices=BOOLEANS, default=False),
template_tag = dict(default=None),
sshkey_enabled = dict(type='bool', choices=BOOLEANS, default=False),
format = dict(choices=['QCOW2', 'RAW', 'VHD', 'OVA'], default=None),
details = dict(default=None),
bits = dict(type='int', choices=[ 32, 64 ], default=64),
state = dict(choices=['present', 'absent'], default='present'),
zone = dict(default=None),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(type='bool', choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
mutually_exclusive = (
['url', 'vm'],
),
required_together = (
['api_key', 'api_secret', 'api_url'],
['format', 'url', 'hypervisor'],
),
required_one_of = (
['url', 'vm'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_tpl = AnsibleCloudStackTemplate(module)
state = module.params.get('state')
if state in ['absent']:
tpl = acs_tpl.remove_template()
else:
url = module.params.get('url')
if url:
tpl = acs_tpl.register_template()
else:
tpl = acs_tpl.create_template()
result = acs_tpl.get_result(tpl)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
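
For reference, the checksum-versus-name selection done in get_template() above, restated as one small standalone function; the name pick_template and its templates argument (the 'template' list from listTemplates()) are illustrative only.

def pick_template(templates, checksum=None):
    """Return the first template, or the one whose MD5 checksum matches."""
    if not templates:
        return None
    if not checksum:
        # without a checksum the list was already filtered by name
        return templates[0]
    for template in templates:
        if template['checksum'] == checksum:
            return template
    return None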

@ -0,0 +1,325 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2015, René Moser <mail@renemoser.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: cs_vmsnapshot
short_description: Manages VM snapshots on Apache CloudStack based clouds.
description:
- Create, remove and revert VM from snapshots.
version_added: '2.0'
author: "René Moser (@resmo)"
options:
name:
description:
- Unique Name of the snapshot. In CloudStack terms C(displayname).
required: true
aliases: ['displayname']
vm:
description:
- Name of the virtual machine.
required: true
description:
description:
- Description of the snapshot.
required: false
default: null
snapshot_memory:
description:
      - Snapshot the memory of the VM if set to true.
required: false
default: false
zone:
description:
      - Name of the zone the VM is in. If not set, the default zone is used.
required: false
default: null
project:
description:
- Name of the project the VM is assigned to.
required: false
default: null
state:
description:
- State of the snapshot.
required: false
default: 'present'
choices: [ 'present', 'absent', 'revert' ]
domain:
description:
- Domain the VM snapshot is related to.
required: false
default: null
account:
description:
- Account the VM snapshot is related to.
required: false
default: null
poll_async:
description:
- Poll async jobs until job has finished.
required: false
default: true
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
# Create a VM snapshot of disk and memory before an upgrade
- local_action:
module: cs_vmsnapshot
name: Snapshot before upgrade
vm: web-01
snapshot_memory: yes
# Revert a VM to a snapshot after a failed upgrade
- local_action:
module: cs_vmsnapshot
name: Snapshot before upgrade
vm: web-01
state: revert
# Remove a VM snapshot after successful upgrade
- local_action:
module: cs_vmsnapshot
name: Snapshot before upgrade
vm: web-01
state: absent
'''
RETURN = '''
---
name:
description: Name of the snapshot.
returned: success
type: string
sample: snapshot before update
displayname:
description: displayname of the snapshot.
returned: success
type: string
sample: snapshot before update
created:
description: date of the snapshot.
returned: success
type: string
sample: 2015-03-29T14:57:06+0200
current:
description: true if snapshot is current
returned: success
type: boolean
sample: True
state:
description: state of the vm snapshot
returned: success
type: string
sample: Allocated
type:
description: type of vm snapshot
returned: success
type: string
sample: DiskAndMemory
description:
  description: Description of the VM snapshot.
returned: success
type: string
sample: snapshot brought to you by Ansible
domain:
  description: Domain the VM snapshot is related to.
returned: success
type: string
sample: example domain
account:
description: Account the vm snapshot is related to.
returned: success
type: string
sample: example account
project:
description: Name of project the vm snapshot is related to.
returned: success
type: string
sample: Production
'''
try:
from cs import CloudStack, CloudStackException, read_config
has_lib_cs = True
except ImportError:
has_lib_cs = False
# import cloudstack common
from ansible.module_utils.cloudstack import *
class AnsibleCloudStackVmSnapshot(AnsibleCloudStack):
def __init__(self, module):
AnsibleCloudStack.__init__(self, module)
def get_snapshot(self):
args = {}
args['virtualmachineid'] = self.get_vm('id')
args['account'] = self.get_account('name')
args['domainid'] = self.get_domain('id')
args['projectid'] = self.get_project('id')
args['name'] = self.module.params.get('name')
snapshots = self.cs.listVMSnapshot(**args)
if snapshots:
return snapshots['vmSnapshot'][0]
return None
def create_snapshot(self):
snapshot = self.get_snapshot()
if not snapshot:
self.result['changed'] = True
args = {}
args['virtualmachineid'] = self.get_vm('id')
args['name'] = self.module.params.get('name')
args['description'] = self.module.params.get('description')
args['snapshotmemory'] = self.module.params.get('snapshot_memory')
if not self.module.check_mode:
res = self.cs.createVMSnapshot(**args)
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
snapshot = self._poll_job(res, 'vmsnapshot')
return snapshot
def remove_snapshot(self):
snapshot = self.get_snapshot()
if snapshot:
self.result['changed'] = True
if not self.module.check_mode:
res = self.cs.deleteVMSnapshot(vmsnapshotid=snapshot['id'])
if 'errortext' in res:
self.module.fail_json(msg="Failed: '%s'" % res['errortext'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'vmsnapshot')
return snapshot
def revert_vm_to_snapshot(self):
snapshot = self.get_snapshot()
if snapshot:
self.result['changed'] = True
if snapshot['state'] != "Ready":
self.module.fail_json(msg="snapshot state is '%s', not ready, could not revert VM" % snapshot['state'])
if not self.module.check_mode:
res = self.cs.revertToVMSnapshot(vmsnapshotid=snapshot['id'])
poll_async = self.module.params.get('poll_async')
if res and poll_async:
res = self._poll_job(res, 'vmsnapshot')
return snapshot
self.module.fail_json(msg="snapshot not found, could not revert VM")
def get_result(self, snapshot):
if snapshot:
if 'displayname' in snapshot:
self.result['displayname'] = snapshot['displayname']
if 'created' in snapshot:
self.result['created'] = snapshot['created']
if 'current' in snapshot:
self.result['current'] = snapshot['current']
if 'state' in snapshot:
self.result['state'] = snapshot['state']
if 'type' in snapshot:
self.result['type'] = snapshot['type']
if 'name' in snapshot:
self.result['name'] = snapshot['name']
if 'description' in snapshot:
self.result['description'] = snapshot['description']
if 'domain' in snapshot:
self.result['domain'] = snapshot['domain']
if 'account' in snapshot:
self.result['account'] = snapshot['account']
if 'project' in snapshot:
self.result['project'] = snapshot['project']
return self.result
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True, aliases=['displayname']),
vm = dict(required=True),
description = dict(default=None),
zone = dict(default=None),
snapshot_memory = dict(choices=BOOLEANS, default=False),
state = dict(choices=['present', 'absent', 'revert'], default='present'),
domain = dict(default=None),
account = dict(default=None),
project = dict(default=None),
poll_async = dict(choices=BOOLEANS, default=True),
api_key = dict(default=None),
api_secret = dict(default=None, no_log=True),
api_url = dict(default=None),
api_http_method = dict(choices=['get', 'post'], default='get'),
api_timeout = dict(type='int', default=10),
),
required_together = (
['api_key', 'api_secret', 'api_url'],
),
supports_check_mode=True
)
if not has_lib_cs:
module.fail_json(msg="python library cs required: pip install cs")
try:
acs_vmsnapshot = AnsibleCloudStackVmSnapshot(module)
state = module.params.get('state')
if state in ['revert']:
snapshot = acs_vmsnapshot.revert_vm_to_snapshot()
elif state in ['absent']:
snapshot = acs_vmsnapshot.remove_snapshot()
else:
snapshot = acs_vmsnapshot.create_snapshot()
result = acs_vmsnapshot.get_result(snapshot)
except CloudStackException, e:
module.fail_json(msg='CloudStackException: %s' % str(e))
module.exit_json(**result)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
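
For reference, a minimal sketch of the check-mode pattern the CloudStack classes above share: the result is marked changed, but the API call is skipped when Ansible runs with --check. The helper name and its arguments are illustrative, not part of the module.

def remove_snapshot_if_present(module, cs, result, snapshot):
    # Mark the task as changed, but only call the API outside of check mode,
    # mirroring remove_snapshot() above.
    if snapshot:
        result['changed'] = True
        if not module.check_mode:
            res = cs.deleteVMSnapshot(vmsnapshotid=snapshot['id'])
            if 'errortext' in res:
                module.fail_json(msg="Failed: '%s'" % res['errortext'])
    return snapshot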

@ -78,8 +78,10 @@ options:
default: null default: null
aliases: [] aliases: []
requirements: [ "libcloud" ] requirements:
author: Peter Tan <ptan@google.com> - "python >= 2.6"
- "apache-libcloud"
author: "Peter Tan (@tanpeter)"
''' '''
EXAMPLES = ''' EXAMPLES = '''

@ -0,0 +1,230 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>
DOCUMENTATION = '''
---
module: gce_tag
version_added: "2.0"
short_description: add or remove tag(s) to/from GCE instance
description:
- This module can add or remove tags U(https://cloud.google.com/compute/docs/instances/#tags)
      to/from a GCE instance.
options:
instance_name:
description:
      - The name of the GCE instance to add or remove tags from.
required: true
default: null
aliases: []
tags:
description:
- comma-separated list of tags to add or remove
required: true
default: null
aliases: []
state:
description:
- desired state of the tags
required: false
default: "present"
choices: ["present", "absent"]
aliases: []
zone:
description:
      - The zone the instance is located in.
required: false
default: "us-central1-a"
aliases: []
service_account_email:
description:
- service account email
required: false
default: null
aliases: []
pem_file:
description:
- path to the pem file associated with the service account email
required: false
default: null
aliases: []
project_id:
description:
- your GCE project ID
required: false
default: null
aliases: []
requirements:
- "python >= 2.6"
- "apache-libcloud"
author: "Do Hoang Khiem (dohoangkhiem@gmail.com)"
'''
EXAMPLES = '''
# Add tags 'http-server', 'https-server', 'staging' to instance name 'staging-server' in zone us-central1-a.
- gce_tag:
instance_name: staging-server
tags: http-server,https-server,staging
zone: us-central1-a
state: present
# Remove tags 'foo', 'bar' from instance 'test-server' in default zone (us-central1-a)
- gce_tag:
instance_name: test-server
tags: foo,bar
state: absent
'''
try:
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
from libcloud.common.google import GoogleBaseError, QuotaExceededError, \
ResourceExistsError, ResourceNotFoundError, InvalidRequestError
_ = Provider.GCE
HAS_LIBCLOUD = True
except ImportError:
HAS_LIBCLOUD = False
def add_tags(gce, module, instance_name, tags):
"""Add tags to instance."""
zone = module.params.get('zone')
if not instance_name:
module.fail_json(msg='Must supply instance_name', changed=False)
if not tags:
module.fail_json(msg='Must supply tags', changed=False)
tags = [x.lower() for x in tags]
try:
node = gce.ex_get_node(instance_name, zone=zone)
except ResourceNotFoundError:
module.fail_json(msg='Instance %s not found in zone %s' % (instance_name, zone), changed=False)
except GoogleBaseError, e:
module.fail_json(msg=str(e), changed=False)
node_tags = node.extra['tags']
changed = False
tags_changed = []
for t in tags:
if t not in node_tags:
changed = True
node_tags.append(t)
tags_changed.append(t)
if not changed:
return False, None
try:
gce.ex_set_node_tags(node, node_tags)
return True, tags_changed
except (GoogleBaseError, InvalidRequestError) as e:
module.fail_json(msg=str(e), changed=False)
def remove_tags(gce, module, instance_name, tags):
"""Remove tags from instance."""
zone = module.params.get('zone')
if not instance_name:
module.fail_json(msg='Must supply instance_name', changed=False)
if not tags:
module.fail_json(msg='Must supply tags', changed=False)
tags = [x.lower() for x in tags]
try:
node = gce.ex_get_node(instance_name, zone=zone)
except ResourceNotFoundError:
module.fail_json(msg='Instance %s not found in zone %s' % (instance_name, zone), changed=False)
except GoogleBaseError, e:
module.fail_json(msg=str(e), changed=False)
node_tags = node.extra['tags']
changed = False
tags_changed = []
for t in tags:
if t in node_tags:
node_tags.remove(t)
changed = True
tags_changed.append(t)
if not changed:
return False, None
try:
gce.ex_set_node_tags(node, node_tags)
return True, tags_changed
except (GoogleBaseError, InvalidRequestError) as e:
module.fail_json(msg=str(e), changed=False)
def main():
module = AnsibleModule(
argument_spec=dict(
instance_name=dict(required=True),
tags=dict(type='list'),
state=dict(default='present', choices=['present', 'absent']),
zone=dict(default='us-central1-a'),
service_account_email=dict(),
pem_file=dict(),
project_id=dict(),
)
)
if not HAS_LIBCLOUD:
module.fail_json(msg='libcloud with GCE support is required.')
instance_name = module.params.get('instance_name')
state = module.params.get('state')
tags = module.params.get('tags')
zone = module.params.get('zone')
changed = False
if not zone:
module.fail_json(msg='Must specify "zone"', changed=False)
if not tags:
module.fail_json(msg='Must specify "tags"', changed=False)
gce = gce_connect(module)
# add tags to instance.
if state == 'present':
changed, tags_changed = add_tags(gce, module, instance_name, tags)
# remove tags from instance
if state == 'absent':
changed, tags_changed = remove_tags(gce, module, instance_name, tags)
module.exit_json(changed=changed, instance_name=instance_name, tags=tags_changed, zone=zone)
sys.exit(0)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.gce import *
if __name__ == '__main__':
main()
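
For reference, the tag bookkeeping done by add_tags() and remove_tags() above, factored into one pure helper so the changed/tags_changed calculation is easy to follow; merge_tags is an illustrative name, not part of the module.

def merge_tags(node_tags, tags, state='present'):
    """Return (changed, tags_changed, new_tags) for the requested state."""
    new_tags = list(node_tags)
    tags_changed = []
    for tag in (t.lower() for t in tags):
        if state == 'present' and tag not in new_tags:
            new_tags.append(tag)
            tags_changed.append(tag)
        elif state == 'absent' and tag in new_tags:
            new_tags.remove(tag)
            tags_changed.append(tag)
    return bool(tags_changed), tags_changed, new_tags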

@ -26,7 +26,7 @@ short_description: Manage LXC Containers
version_added: 1.8.0 version_added: 1.8.0
description: description:
- Management of LXC containers - Management of LXC containers
author: Kevin Carter author: "Kevin Carter (@cloudnull)"
options: options:
name: name:
description: description:
@ -38,6 +38,7 @@ options:
- lvm - lvm
- loop - loop
- btrfs - btrfs
- overlayfs
description: description:
- Backend storage type for the container. - Backend storage type for the container.
required: false required: false
@ -112,6 +113,24 @@ options:
- Set the log level for a container where *container_log* was set. - Set the log level for a container where *container_log* was set.
required: false required: false
default: INFO default: INFO
clone_name:
version_added: "2.0"
description:
    - Name of the new cloned container. This is only used when state is
      clone.
required: false
default: false
clone_snapshot:
version_added: "2.0"
required: false
choices:
- true
- false
description:
    - Create a snapshot of a container when cloning. This is not supported
by all container storage backends. Enabling this may fail if the
backing store does not support snapshots.
default: false
archive: archive:
choices: choices:
- true - true
@ -142,14 +161,21 @@ options:
- absent - absent
- frozen - frozen
description: description:
- Start a container right after it's created. - Define the state of a container. If you clone a container using
      `clone_name` the newly cloned container is created in a stopped state.
The running container will be stopped while the clone operation is
happening and upon completion of the clone the original container
state will be restored.
required: false required: false
default: started default: started
container_config: container_config:
description: description:
- list of 'key=value' options to use when configuring a container. - list of 'key=value' options to use when configuring a container.
required: false required: false
requirements: ['lxc >= 1.0', 'python2-lxc >= 0.1'] requirements:
- 'lxc >= 1.0 # OS package'
- 'python >= 2.6 # OS Package'
- 'lxc-python2 >= 0.1 # PIP Package from https://github.com/lxc/python2-lxc'
notes: notes:
- Containers must have a unique name. If you attempt to create a container - Containers must have a unique name. If you attempt to create a container
with a name that already exists in the users namespace the module will with a name that already exists in the users namespace the module will
@ -169,7 +195,8 @@ notes:
creating the archive. creating the archive.
- If your distro does not have a package for "python2-lxc", which is a - If your distro does not have a package for "python2-lxc", which is a
requirement for this module, it can be installed from source at requirement for this module, it can be installed from source at
"https://github.com/lxc/python2-lxc" "https://github.com/lxc/python2-lxc" or installed via pip using the package
name lxc-python2.
""" """
EXAMPLES = """ EXAMPLES = """
@ -203,6 +230,7 @@ EXAMPLES = """
- name: Create filesystem container - name: Create filesystem container
lxc_container: lxc_container:
name: test-container-config name: test-container-config
backing_store: dir
container_log: true container_log: true
template: ubuntu template: ubuntu
state: started state: started
@ -216,7 +244,7 @@ EXAMPLES = """
# Create an lvm container, run a complex command in it, add additional # Create an lvm container, run a complex command in it, add additional
# configuration to it, create an archive of it, and finally leave the container # configuration to it, create an archive of it, and finally leave the container
# in a frozen state. The container archive will be compressed using bzip2 # in a frozen state. The container archive will be compressed using bzip2
- name: Create an lvm container - name: Create a frozen lvm container
lxc_container: lxc_container:
name: test-container-lvm name: test-container-lvm
container_log: true container_log: true
@ -241,14 +269,6 @@ EXAMPLES = """
- name: Debug info on container "test-container-lvm" - name: Debug info on container "test-container-lvm"
debug: var=lvm_container_info debug: var=lvm_container_info
- name: Get information on a given container.
lxc_container:
name: test-container-config
register: config_container_info
- name: debug info on container "test-container"
debug: var=config_container_info
- name: Run a command in a container and ensure its in a "stopped" state. - name: Run a command in a container and ensure its in a "stopped" state.
lxc_container: lxc_container:
name: test-container-started name: test-container-started
@ -263,19 +283,19 @@ EXAMPLES = """
container_command: | container_command: |
echo 'hello world.' | tee /opt/frozen echo 'hello world.' | tee /opt/frozen
- name: Start a container. - name: Start a container
lxc_container: lxc_container:
name: test-container-stopped name: test-container-stopped
state: started state: started
- name: Run a command in a container and then restart it. - name: Run a command in a container and then restart it
lxc_container: lxc_container:
name: test-container-started name: test-container-started
state: restarted state: restarted
container_command: | container_command: |
echo 'hello world.' | tee /opt/restarted echo 'hello world.' | tee /opt/restarted
- name: Run a complex command within a "running" container. - name: Run a complex command within a "running" container
lxc_container: lxc_container:
name: test-container-started name: test-container-started
container_command: | container_command: |
@ -295,7 +315,53 @@ EXAMPLES = """
archive: true archive: true
archive_path: /opt/archives archive_path: /opt/archives
- name: Destroy a container. # Create a container using overlayfs, create an archive of it, create a
# snapshot clone of the container and finally leave the container
# in a frozen state. The container archive will be compressed using gzip.
- name: Create an overlayfs container archive and clone it
lxc_container:
name: test-container-overlayfs
container_log: true
template: ubuntu
state: started
backing_store: overlayfs
template_options: --release trusty
clone_snapshot: true
clone_name: test-container-overlayfs-clone-snapshot
archive: true
archive_compression: gzip
register: clone_container_info
- name: debug info on container "test-container"
debug: var=clone_container_info
- name: Clone a container using snapshot
lxc_container:
name: test-container-overlayfs-clone-snapshot
backing_store: overlayfs
clone_name: test-container-overlayfs-clone-snapshot2
clone_snapshot: true
- name: Create a new container and clone it
lxc_container:
name: test-container-new-archive
backing_store: dir
clone_name: test-container-new-archive-clone
- name: Archive and clone a container then destroy it
lxc_container:
name: test-container-new-archive
state: absent
clone_name: test-container-new-archive-destroyed-clone
archive: true
archive_compression: gzip
- name: Start a cloned container.
lxc_container:
name: test-container-new-archive-destroyed-clone
state: started
- name: Destroy a container
lxc_container: lxc_container:
name: "{{ item }}" name: "{{ item }}"
state: absent state: absent
@ -305,15 +371,22 @@ EXAMPLES = """
- test-container-frozen - test-container-frozen
- test-container-lvm - test-container-lvm
- test-container-config - test-container-config
- test-container-overlayfs
- test-container-overlayfs-clone
- test-container-overlayfs-clone-snapshot
- test-container-overlayfs-clone-snapshot2
- test-container-new-archive
- test-container-new-archive-clone
- test-container-new-archive-destroyed-clone
""" """
try: try:
import lxc import lxc
except ImportError: except ImportError:
msg = 'The lxc module is not importable. Check the requirements.' HAS_LXC = False
print("failed=True msg='%s'" % msg) else:
raise SystemExit(msg) HAS_LXC = True
# LXC_COMPRESSION_MAP is a map of available compression types when creating # LXC_COMPRESSION_MAP is a map of available compression types when creating
@ -351,6 +424,15 @@ LXC_COMMAND_MAP = {
'directory': '--dir', 'directory': '--dir',
'zfs_root': '--zfsroot' 'zfs_root': '--zfsroot'
} }
},
'clone': {
'variables': {
'backing_store': '--backingstore',
'lxc_path': '--lxcpath',
'fs_size': '--fssize',
'name': '--orig',
'clone_name': '--new'
}
} }
} }
@ -369,6 +451,9 @@ LXC_BACKING_STORE = {
], ],
'loop': [ 'loop': [
'lv_name', 'vg_name', 'thinpool', 'zfs_root' 'lv_name', 'vg_name', 'thinpool', 'zfs_root'
],
'overlayfs': [
'lv_name', 'vg_name', 'fs_type', 'fs_size', 'thinpool', 'zfs_root'
] ]
} }
@ -388,7 +473,8 @@ LXC_ANSIBLE_STATES = {
'stopped': '_stopped', 'stopped': '_stopped',
'restarted': '_restarted', 'restarted': '_restarted',
'absent': '_destroyed', 'absent': '_destroyed',
'frozen': '_frozen' 'frozen': '_frozen',
'clone': '_clone'
} }
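
For reference, a self-contained sketch of how a state-to-method map like LXC_ANSIBLE_STATES is typically consumed; the getattr() dispatch itself is not shown in this hunk, so treat it as an assumption rather than a quote of the module.

class _Demo(object):
    def _clone(self):
        return 'would run the clone workflow'

states = {'clone': '_clone'}                # excerpt of the map above
print(getattr(_Demo(), states['clone'])())  # resolves and calls _clone()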
@ -439,18 +525,16 @@ def create_script(command):
f.close() f.close()
# Ensure the script is executable. # Ensure the script is executable.
os.chmod(script_file, 0755) os.chmod(script_file, 1755)
# Get temporary directory. # Get temporary directory.
tempdir = tempfile.gettempdir() tempdir = tempfile.gettempdir()
# Output log file. # Output log file.
stdout = path.join(tempdir, 'lxc-attach-script.log') stdout_file = open(path.join(tempdir, 'lxc-attach-script.log'), 'ab')
stdout_file = open(stdout, 'ab')
# Error log file. # Error log file.
stderr = path.join(tempdir, 'lxc-attach-script.err') stderr_file = open(path.join(tempdir, 'lxc-attach-script.err'), 'ab')
stderr_file = open(stderr, 'ab')
# Execute the script command. # Execute the script command.
try: try:
@ -482,6 +566,7 @@ class LxcContainerManagement(object):
self.container_name = self.module.params['name'] self.container_name = self.module.params['name']
self.container = self.get_container_bind() self.container = self.get_container_bind()
self.archive_info = None self.archive_info = None
self.clone_info = None
def get_container_bind(self): def get_container_bind(self):
return lxc.Container(name=self.container_name) return lxc.Container(name=self.container_name)
@ -502,15 +587,15 @@ class LxcContainerManagement(object):
return num return num
@staticmethod @staticmethod
def _container_exists(name): def _container_exists(container_name):
"""Check if a container exists. """Check if a container exists.
:param name: Name of the container. :param container_name: Name of the container.
:type: ``str`` :type: ``str``
:returns: True or False if the container is found. :returns: True or False if the container is found.
        :rtype: ``bool`` :rtype: ``bool``
""" """
if [i for i in lxc.list_containers() if i == name]: if [i for i in lxc.list_containers() if i == container_name]:
return True return True
else: else:
return False return False
@ -543,6 +628,7 @@ class LxcContainerManagement(object):
""" """
# Remove incompatible storage backend options. # Remove incompatible storage backend options.
variables = variables.copy()
for v in LXC_BACKING_STORE[self.module.params['backing_store']]: for v in LXC_BACKING_STORE[self.module.params['backing_store']]:
variables.pop(v, None) variables.pop(v, None)
@ -624,7 +710,7 @@ class LxcContainerManagement(object):
for option_line in container_config: for option_line in container_config:
# Look for key in config # Look for key in config
if option_line.startswith(key): if option_line.startswith(key):
_, _value = option_line.split('=') _, _value = option_line.split('=', 1)
config_value = ' '.join(_value.split()) config_value = ' '.join(_value.split())
line_index = container_config.index(option_line) line_index = container_config.index(option_line)
# If the sanitized values don't match replace them # If the sanitized values don't match replace them
@ -655,6 +741,68 @@ class LxcContainerManagement(object):
self._container_startup() self._container_startup()
self.container.freeze() self.container.freeze()
def _container_create_clone(self):
"""Clone a new LXC container from an existing container.
This method will clone an existing container to a new container using
the `clone_name` variable as the new container name. The method will
create a container if the container `name` does not exist.
Note that cloning a container will ensure that the original container
is "stopped" before the clone can be done. Because this operation can
require a state change the method will return the original container
to its prior state upon completion of the clone.
Once the clone is complete the new container will be left in a stopped
state.
"""
# Ensure that the state of the original container is stopped
container_state = self._get_state()
if container_state != 'stopped':
self.state_change = True
self.container.stop()
build_command = [
self.module.get_bin_path('lxc-clone', True),
]
build_command = self._add_variables(
variables_dict=self._get_vars(
variables=LXC_COMMAND_MAP['clone']['variables']
),
build_command=build_command
)
        # Pass --snapshot when a snapshot clone was requested.
if self.module.params.get('clone_snapshot') in BOOLEANS_TRUE:
build_command.append('--snapshot')
        # Check for backing_store == overlayfs; if so, force the use of snapshot.
        # If overlayfs is used and snapshot is unset the clone command will
        # fail with an unsupported type.
elif self.module.params.get('backing_store') == 'overlayfs':
build_command.append('--snapshot')
rc, return_data, err = self._run_command(build_command)
if rc != 0:
message = "Failed executing lxc-clone."
self.failure(
err=err, rc=rc, msg=message, command=' '.join(
build_command
)
)
else:
self.state_change = True
# Restore the original state of the origin container if it was
# not in a stopped state.
if container_state == 'running':
self.container.start()
elif container_state == 'frozen':
self.container.start()
self.container.freeze()
return True
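        # Illustrative note (assumed values, not part of the module): with the
        # LXC_COMMAND_MAP['clone'] variables above, a snapshot clone of an
        # overlayfs-backed container 'web01' resolves to roughly:
        #
        #   lxc-clone --orig web01 --new web01-clone --backingstore overlayfs --snapshot
        #
        # The --snapshot flag is appended automatically for overlayfs because
        # a plain copy clone is not supported by that backing store.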
def _create(self): def _create(self):
"""Create a new LXC container. """Create a new LXC container.
@ -709,9 +857,9 @@ class LxcContainerManagement(object):
rc, return_data, err = self._run_command(build_command) rc, return_data, err = self._run_command(build_command)
if rc != 0: if rc != 0:
msg = "Failed executing lxc-create." message = "Failed executing lxc-create."
self.failure( self.failure(
err=err, rc=rc, msg=msg, command=' '.join(build_command) err=err, rc=rc, msg=message, command=' '.join(build_command)
) )
else: else:
self.state_change = True self.state_change = True
@ -751,7 +899,7 @@ class LxcContainerManagement(object):
:rtype: ``str`` :rtype: ``str``
""" """
if self._container_exists(name=self.container_name): if self._container_exists(container_name=self.container_name):
return str(self.container.state).lower() return str(self.container.state).lower()
else: else:
return str('absent') return str('absent')
@ -794,7 +942,7 @@ class LxcContainerManagement(object):
rc=1, rc=1,
msg='The container [ %s ] failed to start. Check to lxc is' msg='The container [ %s ] failed to start. Check to lxc is'
' available and that the container is in a functional' ' available and that the container is in a functional'
' state.' ' state.' % self.container_name
) )
def _check_archive(self): def _check_archive(self):
@ -808,6 +956,23 @@ class LxcContainerManagement(object):
'archive': self._container_create_tar() 'archive': self._container_create_tar()
} }
def _check_clone(self):
"""Create a compressed archive of a container.
This will store archive_info in as self.archive_info
"""
clone_name = self.module.params.get('clone_name')
if clone_name:
if not self._container_exists(container_name=clone_name):
self.clone_info = {
'cloned': self._container_create_clone()
}
else:
self.clone_info = {
'cloned': False
}
def _destroyed(self, timeout=60): def _destroyed(self, timeout=60):
"""Ensure a container is destroyed. """Ensure a container is destroyed.
@ -816,12 +981,15 @@ class LxcContainerManagement(object):
""" """
for _ in xrange(timeout): for _ in xrange(timeout):
if not self._container_exists(name=self.container_name): if not self._container_exists(container_name=self.container_name):
break break
# Check if the container needs to have an archive created. # Check if the container needs to have an archive created.
self._check_archive() self._check_archive()
# Check if the container is to be cloned
self._check_clone()
if self._get_state() != 'stopped': if self._get_state() != 'stopped':
self.state_change = True self.state_change = True
self.container.stop() self.container.stop()
@ -852,7 +1020,7 @@ class LxcContainerManagement(object):
""" """
self.check_count(count=count, method='frozen') self.check_count(count=count, method='frozen')
if self._container_exists(name=self.container_name): if self._container_exists(container_name=self.container_name):
self._execute_command() self._execute_command()
# Perform any configuration updates # Perform any configuration updates
@ -871,6 +1039,9 @@ class LxcContainerManagement(object):
# Check if the container needs to have an archive created. # Check if the container needs to have an archive created.
self._check_archive() self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else: else:
self._create() self._create()
count += 1 count += 1
@ -886,7 +1057,7 @@ class LxcContainerManagement(object):
""" """
self.check_count(count=count, method='restart') self.check_count(count=count, method='restart')
if self._container_exists(name=self.container_name): if self._container_exists(container_name=self.container_name):
self._execute_command() self._execute_command()
# Perform any configuration updates # Perform any configuration updates
@ -896,8 +1067,14 @@ class LxcContainerManagement(object):
self.container.stop() self.container.stop()
self.state_change = True self.state_change = True
# Run container startup
self._container_startup()
# Check if the container needs to have an archive created. # Check if the container needs to have an archive created.
self._check_archive() self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else: else:
self._create() self._create()
count += 1 count += 1
@ -913,7 +1090,7 @@ class LxcContainerManagement(object):
""" """
self.check_count(count=count, method='stop') self.check_count(count=count, method='stop')
if self._container_exists(name=self.container_name): if self._container_exists(container_name=self.container_name):
self._execute_command() self._execute_command()
# Perform any configuration updates # Perform any configuration updates
@ -925,6 +1102,9 @@ class LxcContainerManagement(object):
# Check if the container needs to have an archive created. # Check if the container needs to have an archive created.
self._check_archive() self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else: else:
self._create() self._create()
count += 1 count += 1
@ -940,7 +1120,7 @@ class LxcContainerManagement(object):
""" """
self.check_count(count=count, method='start') self.check_count(count=count, method='start')
if self._container_exists(name=self.container_name): if self._container_exists(container_name=self.container_name):
container_state = self._get_state() container_state = self._get_state()
if container_state == 'running': if container_state == 'running':
pass pass
@ -965,6 +1145,9 @@ class LxcContainerManagement(object):
# Check if the container needs to have an archive created. # Check if the container needs to have an archive created.
self._check_archive() self._check_archive()
# Check if the container is to be cloned
self._check_clone()
else: else:
self._create() self._create()
count += 1 count += 1
@ -1007,18 +1190,18 @@ class LxcContainerManagement(object):
all_lvms = [i.split() for i in stdout.splitlines()][1:] all_lvms = [i.split() for i in stdout.splitlines()][1:]
return [lv_entry[0] for lv_entry in all_lvms if lv_entry[1] == vg] return [lv_entry[0] for lv_entry in all_lvms if lv_entry[1] == vg]
def _get_vg_free_pe(self, name): def _get_vg_free_pe(self, vg_name):
"""Return the available size of a given VG. """Return the available size of a given VG.
:param name: Name of volume. :param vg_name: Name of volume.
:type name: ``str`` :type vg_name: ``str``
:returns: size and measurement of an LV :returns: size and measurement of an LV
:type: ``tuple`` :type: ``tuple``
""" """
build_command = [ build_command = [
'vgdisplay', 'vgdisplay',
name, vg_name,
'--units', '--units',
'g' 'g'
] ]
@ -1027,7 +1210,7 @@ class LxcContainerManagement(object):
self.failure( self.failure(
err=err, err=err,
rc=rc, rc=rc,
msg='failed to read vg %s' % name, msg='failed to read vg %s' % vg_name,
command=' '.join(build_command) command=' '.join(build_command)
) )
@ -1036,17 +1219,17 @@ class LxcContainerManagement(object):
_free_pe = free_pe[0].split() _free_pe = free_pe[0].split()
return float(_free_pe[-2]), _free_pe[-1] return float(_free_pe[-2]), _free_pe[-1]
def _get_lv_size(self, name): def _get_lv_size(self, lv_name):
"""Return the available size of a given LV. """Return the available size of a given LV.
:param name: Name of volume. :param lv_name: Name of volume.
:type name: ``str`` :type lv_name: ``str``
:returns: size and measurement of an LV :returns: size and measurement of an LV
:type: ``tuple`` :type: ``tuple``
""" """
vg = self._get_lxc_vg() vg = self._get_lxc_vg()
lv = os.path.join(vg, name) lv = os.path.join(vg, lv_name)
build_command = [ build_command = [
'lvdisplay', 'lvdisplay',
lv, lv,
@ -1080,7 +1263,7 @@ class LxcContainerManagement(object):
""" """
vg = self._get_lxc_vg() vg = self._get_lxc_vg()
free_space, messurement = self._get_vg_free_pe(name=vg) free_space, messurement = self._get_vg_free_pe(vg_name=vg)
if free_space < float(snapshot_size_gb): if free_space < float(snapshot_size_gb):
message = ( message = (
@ -1183,25 +1366,25 @@ class LxcContainerManagement(object):
return archive_name return archive_name
def _lvm_lv_remove(self, name): def _lvm_lv_remove(self, lv_name):
"""Remove an LV. """Remove an LV.
:param name: The name of the logical volume :param lv_name: The name of the logical volume
:type name: ``str`` :type lv_name: ``str``
""" """
vg = self._get_lxc_vg() vg = self._get_lxc_vg()
build_command = [ build_command = [
self.module.get_bin_path('lvremove', True), self.module.get_bin_path('lvremove', True),
"-f", "-f",
"%s/%s" % (vg, name), "%s/%s" % (vg, lv_name),
] ]
rc, stdout, err = self._run_command(build_command) rc, stdout, err = self._run_command(build_command)
if rc != 0: if rc != 0:
self.failure( self.failure(
err=err, err=err,
rc=rc, rc=rc,
msg='Failed to remove LVM LV %s/%s' % (vg, name), msg='Failed to remove LVM LV %s/%s' % (vg, lv_name),
command=' '.join(build_command) command=' '.join(build_command)
) )
@ -1213,31 +1396,71 @@ class LxcContainerManagement(object):
:param temp_dir: path to the temporary local working directory :param temp_dir: path to the temporary local working directory
:type temp_dir: ``str`` :type temp_dir: ``str``
""" """
# This loop is created to support overlayfs archives. This should
# squash all of the layers into a single archive.
fs_paths = container_path.split(':')
if 'overlayfs' in fs_paths:
fs_paths.pop(fs_paths.index('overlayfs'))
for fs_path in fs_paths:
# Set the path to the container data
fs_path = os.path.dirname(fs_path)
# Run the sync command
build_command = [
self.module.get_bin_path('rsync', True),
'-aHAX',
fs_path,
temp_dir
]
rc, stdout, err = self._run_command(
build_command,
unsafe_shell=True
)
if rc != 0:
self.failure(
err=err,
rc=rc,
msg='failed to perform archive',
command=' '.join(build_command)
)
def _unmount(self, mount_point):
"""Unmount a file system.
:param mount_point: path on the file system that is mounted.
:type mount_point: ``str``
"""
build_command = [ build_command = [
self.module.get_bin_path('rsync', True), self.module.get_bin_path('umount', True),
'-aHAX', mount_point,
container_path,
temp_dir
] ]
rc, stdout, err = self._run_command(build_command, unsafe_shell=True) rc, stdout, err = self._run_command(build_command)
if rc != 0: if rc != 0:
self.failure( self.failure(
err=err, err=err,
rc=rc, rc=rc,
msg='failed to perform archive', msg='failed to unmount [ %s ]' % mount_point,
command=' '.join(build_command) command=' '.join(build_command)
) )
def _unmount(self, mount_point): def _overlayfs_mount(self, lowerdir, upperdir, mount_point):
"""Unmount a file system. """mount an lv.
:param lowerdir: name/path of the lower directory
:type lowerdir: ``str``
:param upperdir: name/path of the upper directory
:type upperdir: ``str``
:param mount_point: path on the file system that is mounted. :param mount_point: path on the file system that is mounted.
:type mount_point: ``str`` :type mount_point: ``str``
""" """
build_command = [ build_command = [
self.module.get_bin_path('umount', True), self.module.get_bin_path('mount', True),
'-t overlayfs',
'-o lowerdir=%s,upperdir=%s' % (lowerdir, upperdir),
'overlayfs',
mount_point, mount_point,
] ]
rc, stdout, err = self._run_command(build_command) rc, stdout, err = self._run_command(build_command)
@ -1245,8 +1468,8 @@ class LxcContainerManagement(object):
self.failure( self.failure(
err=err, err=err,
rc=rc, rc=rc,
msg='failed to unmount [ %s ]' % mount_point, msg='failed to mount overlayfs:%s:%s to %s -- Command: %s'
command=' '.join(build_command) % (lowerdir, upperdir, mount_point, build_command)
) )
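For reference, a small sketch (hypothetical paths) of the mount command that _overlayfs_mount assembles; the binary path stands in for whatever get_bin_path('mount') resolves to.

lowerdir = '/var/lib/lxc/base/rootfs'
upperdir = '/var/lib/lxc/web01/delta0'
mount_point = '/tmp/lxc-archive/rootfs'

build_command = [
    '/bin/mount',  # stand-in for module.get_bin_path('mount', True)
    '-t overlayfs',
    '-o lowerdir=%s,upperdir=%s' % (lowerdir, upperdir),
    'overlayfs',
    mount_point,
]
print(' '.join(build_command))
# /bin/mount -t overlayfs -o lowerdir=/var/lib/lxc/base/rootfs,upperdir=/var/lib/lxc/web01/delta0 overlayfs /tmp/lxc-archive/rootfs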
def _container_create_tar(self): def _container_create_tar(self):
@ -1275,13 +1498,15 @@ class LxcContainerManagement(object):
# Test if the containers rootfs is a block device # Test if the containers rootfs is a block device
block_backed = lxc_rootfs.startswith(os.path.join(os.sep, 'dev')) block_backed = lxc_rootfs.startswith(os.path.join(os.sep, 'dev'))
# Test if the container is using overlayfs
overlayfs_backed = lxc_rootfs.startswith('overlayfs')
mount_point = os.path.join(work_dir, 'rootfs') mount_point = os.path.join(work_dir, 'rootfs')
# Set the snapshot name if needed # Set the snapshot name if needed
snapshot_name = '%s_lxc_snapshot' % self.container_name snapshot_name = '%s_lxc_snapshot' % self.container_name
# Set the path to the container data
container_path = os.path.dirname(lxc_rootfs)
container_state = self._get_state() container_state = self._get_state()
try: try:
# Ensure the original container is stopped or frozen # Ensure the original container is stopped or frozen
@ -1292,7 +1517,7 @@ class LxcContainerManagement(object):
self.container.stop() self.container.stop()
# Sync the container data from the container_path to work_dir # Sync the container data from the container_path to work_dir
self._rsync_data(container_path, temp_dir) self._rsync_data(lxc_rootfs, temp_dir)
if block_backed: if block_backed:
if snapshot_name not in self._lvm_lv_list(): if snapshot_name not in self._lvm_lv_list():
@ -1301,7 +1526,7 @@ class LxcContainerManagement(object):
# Take snapshot # Take snapshot
size, measurement = self._get_lv_size( size, measurement = self._get_lv_size(
name=self.container_name lv_name=self.container_name
) )
self._lvm_snapshot_create( self._lvm_snapshot_create(
source_lv=self.container_name, source_lv=self.container_name,
@ -1322,25 +1547,33 @@ class LxcContainerManagement(object):
' up old snapshot of containers before continuing.' ' up old snapshot of containers before continuing.'
% snapshot_name % snapshot_name
) )
elif overlayfs_backed:
# Restore original state of container lowerdir, upperdir = lxc_rootfs.split(':')[1:]
if container_state == 'running': self._overlayfs_mount(
if self._get_state() == 'frozen': lowerdir=lowerdir,
self.container.unfreeze() upperdir=upperdir,
else: mount_point=mount_point
self.container.start() )
# Set the state as changed and set a new fact # Set the state as changed and set a new fact
self.state_change = True self.state_change = True
return self._create_tar(source_dir=work_dir) return self._create_tar(source_dir=work_dir)
finally: finally:
if block_backed: if block_backed or overlayfs_backed:
# unmount snapshot # unmount snapshot
self._unmount(mount_point) self._unmount(mount_point)
if block_backed:
# Remove snapshot # Remove snapshot
self._lvm_lv_remove(snapshot_name) self._lvm_lv_remove(snapshot_name)
# Restore original state of container
if container_state == 'running':
if self._get_state() == 'frozen':
self.container.unfreeze()
else:
self.container.start()
# Remove tmpdir # Remove tmpdir
shutil.rmtree(temp_dir) shutil.rmtree(temp_dir)
@ -1374,6 +1607,9 @@ class LxcContainerManagement(object):
if self.archive_info: if self.archive_info:
outcome.update(self.archive_info) outcome.update(self.archive_info)
if self.clone_info:
outcome.update(self.clone_info)
self.module.exit_json( self.module.exit_json(
changed=self.state_change, changed=self.state_change,
lxc_container=outcome lxc_container=outcome
@ -1450,6 +1686,14 @@ def main():
choices=[n for i in LXC_LOGGING_LEVELS.values() for n in i], choices=[n for i in LXC_LOGGING_LEVELS.values() for n in i],
default='INFO' default='INFO'
), ),
clone_name=dict(
type='str',
required=False
),
clone_snapshot=dict(
choices=BOOLEANS,
default='false'
),
archive=dict( archive=dict(
choices=BOOLEANS, choices=BOOLEANS,
default='false' default='false'
@ -1466,6 +1710,11 @@ def main():
supports_check_mode=False, supports_check_mode=False,
) )
if not HAS_LXC:
module.fail_json(
msg='The `lxc` module is not importable. Check the requirements.'
)
lv_name = module.params.get('lv_name') lv_name = module.params.get('lv_name')
if not lv_name: if not lv_name:
module.params['lv_name'] = module.params.get('name') module.params['lv_name'] = module.params.get('name')
@ -1477,4 +1726,3 @@ def main():
# import module bits # import module bits
from ansible.module_utils.basic import * from ansible.module_utils.basic import *
main() main()

@ -20,7 +20,7 @@
DOCUMENTATION = ''' DOCUMENTATION = '''
--- ---
module: ovirt module: ovirt
author: Vincent Van der Kussen author: "Vincent Van der Kussen (@vincentvdk)"
short_description: oVirt/RHEV platform management short_description: oVirt/RHEV platform management
description: description:
- allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the oVirt/RHEV platform - allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the oVirt/RHEV platform
@ -152,7 +152,9 @@ options:
aliases: [] aliases: []
choices: ['present', 'absent', 'shutdown', 'started', 'restarted'] choices: ['present', 'absent', 'shutdown', 'started', 'restarted']
requirements: [ "ovirt-engine-sdk" ] requirements:
- "python >= 2.6"
- "ovirt-engine-sdk-python"
''' '''
EXAMPLES = ''' EXAMPLES = '''
# Basic example provisioning from image. # Basic example provisioning from image.

@ -0,0 +1,433 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: proxmox
short_description: management of instances in Proxmox VE cluster
description:
- allows you to create/delete/stop instances in Proxmox VE cluster
version_added: "2.0"
options:
api_host:
description:
- the host of the Proxmox VE cluster
required: true
api_user:
description:
- the user to authenticate with
required: true
api_password:
description:
- the password to authenticate with
- you can use PROXMOX_PASSWORD environment variable
default: null
required: false
vmid:
description:
- the instance id
default: null
required: true
validate_certs:
description:
- enable / disable https certificate verification
default: false
required: false
type: boolean
node:
description:
- Proxmox VE node on which the new VM will be created
- required only for C(state=present)
- for other states it will be autodiscovered
default: null
required: false
password:
description:
- the instance root password
- required only for C(state=present)
default: null
required: false
hostname:
description:
- the instance hostname
- required only for C(state=present)
default: null
required: false
ostemplate:
description:
- the template to use when creating the VM
- required only for C(state=present)
default: null
required: false
disk:
description:
- hard disk size in GB for instance
default: 3
required: false
cpus:
description:
- number of CPUs allocated to the instance
default: 1
required: false
memory:
description:
- memory size in MB for instance
default: 512
required: false
swap:
description:
- swap memory size in MB for instance
default: 0
required: false
netif:
description:
- specifies network interfaces for the container
default: null
required: false
type: string
ip_address:
description:
- specifies the address the container will be assigned
default: null
required: false
type: string
onboot:
description:
- specifies whether a VM will be started during system bootup
default: false
required: false
type: boolean
storage:
description:
- target storage
default: 'local'
required: false
type: string
cpuunits:
description:
- CPU weight for a VM
default: 1000
required: false
type: integer
nameserver:
description:
- sets DNS server IP address for a container
default: null
required: false
type: string
searchdomain:
description:
- sets DNS search domain for a container
default: null
required: false
type: string
timeout:
description:
- timeout for operations
default: 30
required: false
type: integer
force:
description:
- force the requested operation
- can be used only with states C(present), C(stopped), C(restarted)
- with C(state=present) the force option allows an existing container to be overwritten
- with states C(stopped), C(restarted) it allows the instance to be force stopped
default: false
required: false
type: boolean
state:
description:
- Indicate desired state of the instance
choices: ['present', 'started', 'absent', 'stopped', 'restarted']
default: present
notes:
- Requires the proxmoxer and requests modules on the host. These modules can be installed with pip.
requirements: [ "proxmoxer", "requests" ]
author: "Sergei Antipov @UnderGreen"
'''
EXAMPLES = '''
# Create new container with minimal options
- proxmox: vmid=100 node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' password='123456' hostname='example.org' ostemplate='local:vztmpl/ubuntu-14.04-x86_64.tar.gz'
# Create new container with minimal options and force (it will overwrite an existing container)
- proxmox: vmid=100 node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' password='123456' hostname='example.org' ostemplate='local:vztmpl/ubuntu-14.04-x86_64.tar.gz' force=yes
# Create new container with minimal options using the PROXMOX_PASSWORD environment variable (export it beforehand)
- proxmox: vmid=100 node='uk-mc02' api_user='root@pam' api_host='node1' password='123456' hostname='example.org' ostemplate='local:vztmpl/ubuntu-14.04-x86_64.tar.gz'
# Start container
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=started
# Stop container
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=stopped
# Stop container with force
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' force=yes state=stopped
# Restart container (a stopped or mounted container cannot be restarted)
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=restarted
# Remove container
- proxmox: vmid=100 api_user='root@pam' api_password='1q2w3e' api_host='node1' state=absent
'''
import os
import time
try:
from proxmoxer import ProxmoxAPI
HAS_PROXMOXER = True
except ImportError:
HAS_PROXMOXER = False
def get_instance(proxmox, vmid):
return [ vm for vm in proxmox.cluster.resources.get(type='vm') if vm['vmid'] == int(vmid) ]
def content_check(proxmox, node, ostemplate, storage):
return [ True for cnt in proxmox.nodes(node).storage(storage).content.get() if cnt['volid'] == ostemplate ]
def node_check(proxmox, node):
return [ True for nd in proxmox.nodes.get() if nd['node'] == node ]
def create_instance(module, proxmox, vmid, node, disk, storage, cpus, memory, swap, timeout, **kwargs):
proxmox_node = proxmox.nodes(node)
taskid = proxmox_node.openvz.create(vmid=vmid, storage=storage, memory=memory, swap=swap,
cpus=cpus, disk=disk, **kwargs)
while timeout:
if ( proxmox_node.tasks(taskid).status.get()['status'] == 'stopped'
and proxmox_node.tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for creating VM. Last line in task before timeout: %s'
% proxmox_node.tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def start_instance(module, proxmox, vm, vmid, timeout):
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.start.post()
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for starting VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def stop_instance(module, proxmox, vm, vmid, timeout, force):
if force:
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.shutdown.post(forceStop=1)
else:
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.shutdown.post()
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for stopping VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def umount_instance(module, proxmox, vm, vmid, timeout):
taskid = proxmox.nodes(vm[0]['node']).openvz(vmid).status.umount.post()
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for unmounting VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
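The four helpers above repeat the same poll-until-stopped loop. A hedged refactor sketch of that shared pattern follows; the helper name and signature are hypothetical and not part of the module.

import time

def wait_for_task(module, proxmox_node, taskid, timeout, action):
    """Poll a Proxmox task until it stops with exitstatus OK, or time out."""
    while timeout:
        status = proxmox_node.tasks(taskid).status.get()
        if status['status'] == 'stopped' and status['exitstatus'] == 'OK':
            return True
        timeout = timeout - 1
        if timeout == 0:
            module.fail_json(msg='Reached timeout while waiting for %s. Last line in task before timeout: %s'
                                 % (action, proxmox_node.tasks(taskid).log.get()[:1]))
        time.sleep(1)
    return False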
def main():
module = AnsibleModule(
argument_spec = dict(
api_host = dict(required=True),
api_user = dict(required=True),
api_password = dict(no_log=True),
vmid = dict(required=True),
validate_certs = dict(type='bool', choices=BOOLEANS, default='no'),
node = dict(),
password = dict(no_log=True),
hostname = dict(),
ostemplate = dict(),
disk = dict(type='int', default=3),
cpus = dict(type='int', default=1),
memory = dict(type='int', default=512),
swap = dict(type='int', default=0),
netif = dict(),
ip_address = dict(),
onboot = dict(type='bool', choices=BOOLEANS, default='no'),
storage = dict(default='local'),
cpuunits = dict(type='int', default=1000),
nameserver = dict(),
searchdomain = dict(),
timeout = dict(type='int', default=30),
force = dict(type='bool', choices=BOOLEANS, default='no'),
state = dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted']),
)
)
if not HAS_PROXMOXER:
module.fail_json(msg='proxmoxer required for this module')
state = module.params['state']
api_user = module.params['api_user']
api_host = module.params['api_host']
api_password = module.params['api_password']
vmid = module.params['vmid']
validate_certs = module.params['validate_certs']
node = module.params['node']
disk = module.params['disk']
cpus = module.params['cpus']
memory = module.params['memory']
swap = module.params['swap']
storage = module.params['storage']
timeout = module.params['timeout']
# If password not set get it from PROXMOX_PASSWORD env
if not api_password:
try:
api_password = os.environ['PROXMOX_PASSWORD']
except KeyError, e:
module.fail_json(msg='You should set api_password param or use PROXMOX_PASSWORD environment variable')
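An equivalent sketch of this fallback using os.environ.get instead of try/except; it would slot into main() in place of the block above with the same behaviour.

api_password = module.params['api_password'] or os.environ.get('PROXMOX_PASSWORD')
if not api_password:
    module.fail_json(msg='You should set api_password param or use PROXMOX_PASSWORD environment variable')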
try:
proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs)
except Exception, e:
module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)
if state == 'present':
try:
if get_instance(proxmox, vmid) and not module.params['force']:
module.exit_json(changed=False, msg="VM with vmid = %s is already exists" % vmid)
elif not (node and module.params['hostname'] and module.params['password'] and module.params['ostemplate']):
module.fail_json(msg='node, hostname, password and ostemplate are mandatory for creating vm')
elif not node_check(proxmox, node):
module.fail_json(msg="node '%s' not exists in cluster" % node)
elif not content_check(proxmox, node, module.params['ostemplate'], storage):
module.fail_json(msg="ostemplate '%s' not exists on node %s and storage %s"
% (module.params['ostemplate'], node, storage))
create_instance(module, proxmox, vmid, node, disk, storage, cpus, memory, swap, timeout,
password = module.params['password'],
hostname = module.params['hostname'],
ostemplate = module.params['ostemplate'],
netif = module.params['netif'],
ip_address = module.params['ip_address'],
onboot = int(module.params['onboot']),
cpuunits = module.params['cpuunits'],
nameserver = module.params['nameserver'],
searchdomain = module.params['searchdomain'],
force = int(module.params['force']))
module.exit_json(changed=True, msg="deployed VM %s from template %s" % (vmid, module.params['ostemplate']))
except Exception, e:
module.fail_json(msg="creation of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'started':
try:
vm = get_instance(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is already running" % vmid)
if start_instance(module, proxmox, vm, vmid, timeout):
module.exit_json(changed=True, msg="VM %s started" % vmid)
except Exception, e:
module.fail_json(msg="starting of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'stopped':
try:
vm = get_instance(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'mounted':
if module.params['force']:
if umount_instance(module, proxmox, vm, vmid, timeout):
module.exit_json(changed=True, msg="VM %s is shutting down" % vmid)
else:
module.exit_json(changed=False, msg=("VM %s is already shutdown, but mounted. "
"You can use force option to umount it.") % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'stopped':
module.exit_json(changed=False, msg="VM %s is already shutdown" % vmid)
if stop_instance(module, proxmox, vm, vmid, timeout, force = module.params['force']):
module.exit_json(changed=True, msg="VM %s is shutting down" % vmid)
except Exception, e:
module.fail_json(msg="stopping of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'restarted':
try:
vm = get_instance(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if ( proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'stopped'
or proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'mounted' ):
module.exit_json(changed=False, msg="VM %s is not running" % vmid)
if ( stop_instance(module, proxmox, vm, vmid, timeout, force = module.params['force']) and
start_instance(module, proxmox, vm, vmid, timeout) ):
module.exit_json(changed=True, msg="VM %s is restarted" % vmid)
except Exception, e:
module.fail_json(msg="restarting of VM %s failed with exception: %s" % ( vmid, e ))
elif state == 'absent':
try:
vm = get_instance(proxmox, vmid)
if not vm:
module.exit_json(changed=False, msg="VM %s does not exist" % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is running. Stop it before deletion." % vmid)
if proxmox.nodes(vm[0]['node']).openvz(vmid).status.current.get()['status'] == 'mounted':
module.exit_json(changed=False, msg="VM %s is mounted. Stop it with force option before deletion." % vmid)
taskid = proxmox.nodes(vm[0]['node']).openvz.delete(vmid)
while timeout:
if ( proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped'
and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK' ):
module.exit_json(changed=True, msg="VM %s removed" % vmid)
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for removing VM. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
except Exception, e:
module.fail_json(msg="deletion of VM %s failed with exception: %s" % ( vmid, e ))
# import module snippets
from ansible.module_utils.basic import *
main()

@ -0,0 +1,232 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: proxmox_template
short_description: management of OS templates in Proxmox VE cluster
description:
- allows you to upload/delete templates in Proxmox VE cluster
version_added: "2.0"
options:
api_host:
description:
- the host of the Proxmox VE cluster
required: true
api_user:
description:
- the user to authenticate with
required: true
api_password:
description:
- the password to authenticate with
- you can use PROXMOX_PASSWORD environment variable
default: null
required: false
validate_certs:
description:
- enable / disable https certificate verification
default: false
required: false
type: boolean
node:
description:
- Proxmox VE node on which to operate with the template
default: null
required: true
src:
description:
- path to the template file to upload
- required only for C(state=present)
default: null
required: false
aliases: ['path']
template:
description:
- the template name
- required only for C(state=absent)
default: null
required: false
content_type:
description:
- content type
- required only for C(state=present)
default: 'vztmpl'
required: false
choices: ['vztmpl', 'iso']
storage:
description:
- target storage
default: 'local'
required: false
type: string
timeout:
description:
- timeout for operations
default: 30
required: false
type: integer
force:
description:
- can be used only with C(state=present); an existing template will be overwritten
default: false
required: false
type: boolean
state:
description:
- Indicate desired state of the template
choices: ['present', 'absent']
default: present
notes:
- Requires the proxmoxer and requests modules on the host. These modules can be installed with pip.
requirements: [ "proxmoxer", "requests" ]
author: "Sergei Antipov @UnderGreen"
'''
EXAMPLES = '''
# Upload new openvz template with minimal options
- proxmox_template: node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' src='~/ubuntu-14.04-x86_64.tar.gz'
# Upload new openvz template with minimal options using the PROXMOX_PASSWORD environment variable (export it beforehand)
- proxmox_template: node='uk-mc02' api_user='root@pam' api_host='node1' src='~/ubuntu-14.04-x86_64.tar.gz'
# Upload new openvz template with all options and force overwrite
- proxmox_template: node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' storage='local' content_type='vztmpl' src='~/ubuntu-14.04-x86_64.tar.gz' force=yes
# Delete template with minimal options
- proxmox_template: node='uk-mc02' api_user='root@pam' api_password='1q2w3e' api_host='node1' template='ubuntu-14.04-x86_64.tar.gz' state=absent
'''
import os
import time
try:
from proxmoxer import ProxmoxAPI
HAS_PROXMOXER = True
except ImportError:
HAS_PROXMOXER = False
def get_template(proxmox, node, storage, content_type, template):
return [ True for tmpl in proxmox.nodes(node).storage(storage).content.get()
if tmpl['volid'] == '%s:%s/%s' % (storage, content_type, template) ]
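A sketch of the volid format that get_template compares against, using the template name from the examples above:

storage, content_type, template = 'local', 'vztmpl', 'ubuntu-14.04-x86_64.tar.gz'
volid = '%s:%s/%s' % (storage, content_type, template)
print(volid)  # local:vztmpl/ubuntu-14.04-x86_64.tar.gz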
def upload_template(module, proxmox, api_host, node, storage, content_type, realpath, timeout):
taskid = proxmox.nodes(node).storage(storage).upload.post(content=content_type, filename=open(realpath))
while timeout:
task_status = proxmox.nodes(api_host.split('.')[0]).tasks(taskid).status.get()
if task_status['status'] == 'stopped' and task_status['exitstatus'] == 'OK':
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for uploading template. Last line in task before timeout: %s'
% proxmox.nodes(node).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def delete_template(module, proxmox, node, storage, content_type, template, timeout):
volid = '%s:%s/%s' % (storage, content_type, template)
proxmox.nodes(node).storage(storage).content.delete(volid)
while timeout:
if not get_template(proxmox, node, storage, content_type, template):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for deleting template.')
time.sleep(1)
return False
def main():
module = AnsibleModule(
argument_spec = dict(
api_host = dict(required=True),
api_user = dict(required=True),
api_password = dict(no_log=True),
validate_certs = dict(type='bool', choices=BOOLEANS, default='no'),
node = dict(),
src = dict(),
template = dict(),
content_type = dict(default='vztmpl', choices=['vztmpl','iso']),
storage = dict(default='local'),
timeout = dict(type='int', default=30),
force = dict(type='bool', choices=BOOLEANS, default='no'),
state = dict(default='present', choices=['present', 'absent']),
)
)
if not HAS_PROXMOXER:
module.fail_json(msg='proxmoxer required for this module')
state = module.params['state']
api_user = module.params['api_user']
api_host = module.params['api_host']
api_password = module.params['api_password']
validate_certs = module.params['validate_certs']
node = module.params['node']
storage = module.params['storage']
timeout = module.params['timeout']
# If password not set get it from PROXMOX_PASSWORD env
if not api_password:
try:
api_password = os.environ['PROXMOX_PASSWORD']
except KeyError, e:
module.fail_json(msg='You should set api_password param or use PROXMOX_PASSWORD environment variable')
try:
proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs)
except Exception, e:
module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)
if state == 'present':
try:
content_type = module.params['content_type']
src = module.params['src']
from ansible import utils
realpath = utils.path_dwim(None, src)
template = os.path.basename(realpath)
if get_template(proxmox, node, storage, content_type, template) and not module.params['force']:
module.exit_json(changed=False, msg='template with volid=%s:%s/%s already exists' % (storage, content_type, template))
elif not src:
module.fail_json(msg='src param with the path to the template file is mandatory')
elif not (os.path.exists(realpath) and os.path.isfile(realpath)):
module.fail_json(msg='template file on path %s does not exist' % realpath)
if upload_template(module, proxmox, api_host, node, storage, content_type, realpath, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s uploaded' % (storage, content_type, template))
except Exception, e:
module.fail_json(msg="uploading of template %s failed with exception: %s" % ( template, e ))
elif state == 'absent':
try:
content_type = module.params['content_type']
template = module.params['template']
if not template:
module.fail_json(msg='template param is mandatory')
elif not get_template(proxmox, node, storage, content_type, template):
module.exit_json(changed=False, msg='template with volid=%s:%s/%s is already deleted' % (storage, content_type, template))
if delete_template(module, proxmox, node, storage, content_type, template, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s deleted' % (storage, content_type, template))
except Exception, e:
module.fail_json(msg="deleting of template %s failed with exception: %s" % ( template, e ))
# import module snippets
from ansible.module_utils.basic import *
main()

@ -55,8 +55,13 @@ options:
- XML document used with the define command - XML document used with the define command
required: false required: false
default: null default: null
requirements: [ "libvirt" ] requirements:
author: Michael DeHaan, Seth Vidal - "python >= 2.6"
- "libvirt-python"
author:
- "Ansible Core Team"
- "Michael DeHaan"
- "Seth Vidal"
''' '''
EXAMPLES = ''' EXAMPLES = '''
@ -88,8 +93,9 @@ import sys
try: try:
import libvirt import libvirt
except ImportError: except ImportError:
print "failed=True msg='libvirt python module unavailable'" HAS_VIRT = False
sys.exit(1) else:
HAS_VIRT = True
ALL_COMMANDS = [] ALL_COMMANDS = []
VM_COMMANDS = ['create','status', 'start', 'stop', 'pause', 'unpause', VM_COMMANDS = ['create','status', 'start', 'stop', 'pause', 'unpause',
@ -476,6 +482,11 @@ def main():
xml = dict(), xml = dict(),
)) ))
if not HAS_VIRT:
module.fail_json(
msg='The `libvirt` module is not importable. Check the requirements.'
)
rc = VIRT_SUCCESS rc = VIRT_SUCCESS
try: try:
rc, result = core(module) rc, result = core(module)

@ -0,0 +1,637 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: profitbricks
short_description: Create, destroy, start, stop, and reboot a ProfitBricks virtual machine.
description:
- Create, destroy, update, start, stop, and reboot a ProfitBricks virtual machine. When the virtual machine is created it can optionally wait for it to be 'running' before returning. This module has a dependency on profitbricks >= 1.0.0
version_added: "2.0"
options:
auto_increment:
description:
- Whether or not to increment a single number in the name for created virtual machines.
default: yes
choices: ["yes", "no"]
name:
description:
- The name of the virtual machine.
required: true
image:
description:
- The system image ID for creating the virtual machine, e.g. a3eae284-a2fe-11e4-b187-5f1f641608c8.
required: true
datacenter:
description:
- The Datacenter to provision this virtual machine.
required: false
default: null
cores:
description:
- The number of CPU cores to allocate to the virtual machine.
required: false
default: 2
ram:
description:
- The amount of memory to allocate to the virtual machine.
required: false
default: 2048
volume_size:
description:
- The size in GB of the boot volume.
required: false
default: 10
bus:
description:
- The bus type for the volume.
required: false
default: VIRTIO
choices: [ "IDE", "VIRTIO"]
instance_ids:
description:
- list of instance ids, currently only used when state='absent' to remove instances.
required: false
count:
description:
- The number of virtual machines to create.
required: false
default: 1
location:
description:
- The datacenter location. Use only if you want to create the Datacenter or else this value is ignored.
required: false
default: us/las
choices: [ "us/las", "us/lasdev", "de/fra", "de/fkb" ]
assign_public_ip:
description:
- This will assign the machine to the public LAN. If no LAN exists with public Internet access it is created.
required: false
default: false
lan:
description:
- The ID of the LAN you wish to add the servers to.
required: false
default: 1
subscription_user:
description:
- The ProfitBricks username. Overrides the PB_SUBSCRIPTION_ID environment variable.
required: false
default: null
subscription_password:
description:
- The ProfitBricks password. Overrides the PB_PASSWORD environment variable.
required: false
default: null
wait:
description:
- wait for the instance to be in state 'running' before returning
required: false
default: "yes"
choices: [ "yes", "no" ]
wait_timeout:
description:
- how long before wait gives up, in seconds
default: 600
remove_boot_volume:
description:
- remove the bootVolume of the virtual machine you're destroying.
required: false
default: "yes"
choices: ["yes", "no"]
state:
description:
- create or terminate instances
required: false
default: 'present'
choices: [ "running", "stopped", "absent", "present" ]
requirements:
- "profitbricks"
- "python >= 2.6"
author: Matt Baldwin (baldwin@stackpointcloud.com)
'''
EXAMPLES = '''
# Note: These examples do not set authentication details; see the ProfitBricks documentation for details.
# Provisioning example. This will create three servers and enumerate their names.
- profitbricks:
datacenter: Tardis One
name: web%02d.stackpointcloud.com
cores: 4
ram: 2048
volume_size: 50
image: a3eae284-a2fe-11e4-b187-5f1f641608c8
location: us/las
count: 3
assign_public_ip: true
# Removing Virtual machines
- profitbricks:
datacenter: Tardis One
instance_ids:
- 'web001.stackpointcloud.com'
- 'web002.stackpointcloud.com'
- 'web003.stackpointcloud.com'
wait_timeout: 500
state: absent
# Starting Virtual Machines.
- profitbricks:
datacenter: Tardis One
instance_ids:
- 'web001.stackpointcloud.com'
- 'web002.stackpointcloud.com'
- 'web003.stackpointcloud.com'
wait_timeout: 500
state: running
# Stopping Virtual Machines
- profitbricks:
datacenter: Tardis One
instance_ids:
- 'web001.stackpointcloud.com'
- 'web002.stackpointcloud.com'
- 'web003.stackpointcloud.com'
wait_timeout: 500
state: stopped
'''
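A small sketch of how auto_increment expands a printf-style name (as in the provisioning example above) into per-machine names; a count of 3 is assumed.

count = 3
name = 'web%02d.stackpointcloud.com'
try:
    name % 0
except TypeError:
    # Plain names without a format specifier get a counter appended.
    name = '%s%%d' % name
names = [name % n for n in range(1, count + 1)]
print(names)  # ['web01.stackpointcloud.com', 'web02.stackpointcloud.com', 'web03.stackpointcloud.com']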
import re
import uuid
import time
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService, Volume, Server, Datacenter, NIC, LAN
except ImportError:
HAS_PB_SDK = False
LOCATIONS = ['us/las',
'de/fra',
'de/fkb',
'us/lasdev']
uuid_match = re.compile(
'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
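For reference, a sketch showing how uuid_match distinguishes datacenter and instance UUIDs from human-readable names further down:

import re

uuid_match = re.compile(r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)

print(bool(uuid_match.match('a3eae284-a2fe-11e4-b187-5f1f641608c8')))  # True
print(bool(uuid_match.match('Tardis One')))                            # False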
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise: return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
raise Exception(
'Request ' + msg + ' "' + str(
promise['requestId']) + '" failed to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def _create_machine(module, profitbricks, datacenter, name):
image = module.params.get('image')
cores = module.params.get('cores')
ram = module.params.get('ram')
volume_size = module.params.get('volume_size')
bus = module.params.get('bus')
lan = module.params.get('lan')
assign_public_ip = module.params.get('assign_public_ip')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
location = module.params.get('location')
image = module.params.get('image')
assign_public_ip = module.boolean(module.params.get('assign_public_ip'))
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
try:
# Generate name, but grab first 10 chars so we don't
# screw up the uuid match routine.
v = Volume(
name=str(uuid.uuid4()).replace('-','')[:10],
size=volume_size,
image=image,
bus=bus)
volume_response = profitbricks.create_volume(
datacenter_id=datacenter, volume=v)
# We're forced to wait on the volume creation since
# server create relies upon this existing.
_wait_for_completion(profitbricks, volume_response,
wait_timeout, "create_volume")
except Exception as e:
module.fail_json(msg="failed to create the new volume: %s" % str(e))
if assign_public_ip:
public_found = False
lans = profitbricks.list_lans(datacenter)
for lan in lans['items']:
if lan['properties']['public']:
public_found = True
lan = lan['id']
if not public_found:
i = LAN(
name='public',
public=True)
lan_response = profitbricks.create_lan(datacenter, i)
lan = lan_response['id']
_wait_for_completion(profitbricks, lan_response,
wait_timeout, "_create_machine")
try:
n = NIC(
lan=int(lan)
)
nics = [n]
s = Server(
name=name,
ram=ram,
cores=cores,
nics=nics,
boot_volume_id=volume_response['id']
)
server_response = profitbricks.create_server(
datacenter_id=datacenter, server=s)
if wait:
_wait_for_completion(profitbricks, server_response,
wait_timeout, "create_virtual_machine")
return (server_response)
except Exception as e:
module.fail_json(msg="failed to create the new server: %s" % str(e))
def _remove_machine(module, profitbricks, datacenter, name):
remove_boot_volume = module.params.get('remove_boot_volume')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
changed = False
# User provided the actual UUID instead of the name.
try:
if remove_boot_volume:
# Collect information needed for later.
server = profitbricks.get_server(datacenter, name)
volume_id = server['properties']['bootVolume']['href'].split('/')[7]
server_response = profitbricks.delete_server(datacenter, name)
changed = True
except Exception as e:
module.fail_json(msg="failed to terminate the virtual server: %s" % str(e))
# Remove the bootVolume
if remove_boot_volume:
try:
volume_response = profitbricks.delete_volume(datacenter, volume_id)
except Exception as e:
module.fail_json(msg="failed to remove the virtual server's bootvolume: %s" % str(e))
return changed
def _startstop_machine(module, profitbricks, datacenter, name):
state = module.params.get('state')
try:
if state == 'running':
profitbricks.start_server(datacenter, name)
else:
profitbricks.stop_server(datacenter, name)
return True
except Exception as e:
module.fail_json(msg="failed to start or stop the virtual machine %s: %s" % (name, str(e)))
def _create_datacenter(module, profitbricks):
datacenter = module.params.get('datacenter')
location = module.params.get('location')
wait_timeout = module.params.get('wait_timeout')
i = Datacenter(
name=datacenter,
location=location
)
try:
datacenter_response = profitbricks.create_datacenter(datacenter=i)
_wait_for_completion(profitbricks, datacenter_response,
wait_timeout, "_create_datacenter")
return datacenter_response
except Exception as e:
module.fail_json(msg="failed to create the new server(s): %s" % str(e))
def create_virtual_machine(module, profitbricks):
"""
Create new virtual machine
module : AnsibleModule object
profitbricks: authenticated profitbricks object
Returns:
True if a new virtual machine was created, false otherwise
"""
datacenter = module.params.get('datacenter')
name = module.params.get('name')
auto_increment = module.params.get('auto_increment')
count = module.params.get('count')
lan = module.params.get('lan')
wait_timeout = module.params.get('wait_timeout')
failed = True
datacenter_found = False
virtual_machines = []
virtual_machine_ids = []
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
datacenter_found = True
break
if not datacenter_found:
datacenter_response = _create_datacenter(module, profitbricks)
datacenter = datacenter_response['id']
_wait_for_completion(profitbricks, datacenter_response,
wait_timeout, "create_virtual_machine")
if auto_increment:
numbers = set()
count_offset = 1
try:
name % 0
except TypeError, e:
if e.message.startswith('not all'):
name = '%s%%d' % name
else:
module.fail_json(msg=e.message)
number_range = xrange(count_offset,count_offset + count + len(numbers))
available_numbers = list(set(number_range).difference(numbers))
names = []
numbers_to_use = available_numbers[:count]
for number in numbers_to_use:
names.append(name % number)
else:
names = [name] * count
for name in names:
create_response = _create_machine(module, profitbricks, str(datacenter), name)
nics = profitbricks.list_nics(datacenter,create_response['id'])
for n in nics['items']:
if lan == n['properties']['lan']:
create_response.update({ 'public_ip': n['properties']['ips'][0] })
virtual_machines.append(create_response)
failed = False
results = {
'failed': failed,
'machines': virtual_machines,
'action': 'create',
'instance_ids': {
'instances': [i['id'] for i in virtual_machines],
}
}
return results
def remove_virtual_machine(module, profitbricks):
"""
Removes a virtual machine.
This will remove the virtual machine along with the bootVolume.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Not yet supported: handle deletion of attached data disks.
Returns:
True if a new virtual server was deleted, false otherwise
"""
if not isinstance(module.params.get('instance_ids'), list) or len(module.params.get('instance_ids')) < 1:
module.fail_json(msg='instance_ids should be a list of virtual machine ids or names, aborting')
datacenter = module.params.get('datacenter')
instance_ids = module.params.get('instance_ids')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
for n in instance_ids:
if(uuid_match.match(n)):
_remove_machine(module, profitbricks, datacenter, n)
else:
servers = profitbricks.list_servers(datacenter)
for s in servers['items']:
if n == s['properties']['name']:
server_id = s['id']
_remove_machine(module, profitbricks, datacenter, server_id)
def startstop_machine(module, profitbricks, state):
"""
Starts or Stops a virtual machine.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True when the servers process the action successfully, false otherwise.
"""
if not isinstance(module.params.get('instance_ids'), list) or len(module.params.get('instance_ids')) < 1:
module.fail_json(msg='instance_ids should be a list of virtual machine ids or names, aborting')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
changed = False
datacenter = module.params.get('datacenter')
instance_ids = module.params.get('instance_ids')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
for n in instance_ids:
if(uuid_match.match(n)):
_startstop_machine(module, profitbricks, datacenter, n)
changed = True
else:
servers = profitbricks.list_servers(datacenter)
for s in servers['items']:
if n == s['properties']['name']:
server_id = s['id']
_startstop_machine(module, profitbricks, datacenter, server_id)
changed = True
if wait:
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
matched_instances = []
for res in profitbricks.list_servers(datacenter)['items']:
if state == 'running':
if res['properties']['vmState'].lower() == state:
matched_instances.append(res)
elif state == 'stopped':
if res['properties']['vmState'].lower() == 'shutoff':
matched_instances.append(res)
if len(matched_instances) < len(instance_ids):
time.sleep(5)
else:
break
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg = "wait for virtual machine state timeout on %s" % time.asctime())
return (changed)
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(),
name=dict(),
image=dict(),
cores=dict(default=2),
ram=dict(default=2048),
volume_size=dict(default=10),
bus=dict(default='VIRTIO'),
lan=dict(default=1),
count=dict(default=1),
auto_increment=dict(type='bool', default=True),
instance_ids=dict(),
subscription_user=dict(),
subscription_password=dict(),
location=dict(choices=LOCATIONS, default='us/las'),
assign_public_ip=dict(type='bool', default=False),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
remove_boot_volume=dict(type='bool', default=True),
state=dict(default='present'),
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required ' +
'for removing machines.')
try:
(changed) = remove_virtual_machine(module, profitbricks)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set instance state: %s' % str(e))
elif state in ('running', 'stopped'):
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required for ' +
'running or stopping machines.')
try:
(changed) = startstop_machine(module, profitbricks, state)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set instance state: %s' % str(e))
elif state == 'present':
if not module.params.get('name'):
module.fail_json(msg='name parameter is required for new instance')
if not module.params.get('image'):
module.fail_json(msg='image parameter is required for new instance')
if not module.params.get('subscription_user'):
module.fail_json(msg='subscription_user parameter is ' +
'required for new instance')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is ' +
'required for new instance')
try:
(machine_dict_array) = create_virtual_machine(module, profitbricks)
module.exit_json(**machine_dict_array)
except Exception as e:
module.fail_json(msg='failed to set instance state: %s' % str(e))
from ansible.module_utils.basic import *
main()

@ -0,0 +1,269 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION='''
module: rax_clb_ssl
short_description: Manage SSL termination for a Rackspace Cloud Load Balancer.
description:
- Set up, reconfigure, or remove SSL termination for an existing load balancer.
version_added: "2.0"
options:
loadbalancer:
description:
- Name or ID of the load balancer on which to manage SSL termination.
required: true
state:
description:
- If set to "present", SSL termination will be added to this load balancer.
- If "absent", SSL termination will be removed instead.
choices:
- present
- absent
default: present
enabled:
description:
- If set to "false", temporarily disable SSL termination without discarding
- existing credentials.
default: true
private_key:
description:
- The private SSL key as a string in PEM format.
certificate:
description:
- The public SSL certificates as a string in PEM format.
intermediate_certificate:
description:
- One or more intermediate certificate authorities as a string in PEM
- format, concatenated into a single string.
secure_port:
description:
- The port to listen for secure traffic.
default: 443
secure_traffic_only:
description:
- If "true", the load balancer will *only* accept secure traffic.
default: false
https_redirect:
description:
- If "true", the load balancer will redirect HTTP traffic to HTTPS.
- Requires "secure_traffic_only" to be true. Incurs an implicit wait if SSL
- termination is also applied or removed.
wait:
description:
- Wait for the balancer to be in state "running" before returning.
default: false
wait_timeout:
description:
- How long before "wait" gives up, in seconds.
default: 300
author: Ash Wilson
extends_documentation_fragment: rackspace
'''
EXAMPLES = '''
- name: Enable SSL termination on a load balancer
rax_clb_ssl:
loadbalancer: the_loadbalancer
state: present
private_key: "{{ lookup('file', 'credentials/server.key' ) }}"
certificate: "{{ lookup('file', 'credentials/server.crt' ) }}"
intermediate_certificate: "{{ lookup('file', 'credentials/trust-chain.crt') }}"
secure_traffic_only: true
wait: true
- name: Disable SSL termination
rax_clb_ssl:
loadbalancer: "{{ registered_lb.balancer.id }}"
state: absent
wait: true
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def cloud_load_balancer_ssl(module, loadbalancer, state, enabled, private_key,
certificate, intermediate_certificate, secure_port,
secure_traffic_only, https_redirect,
wait, wait_timeout):
# Validate arguments.
if state == 'present':
if not private_key:
module.fail_json(msg="private_key must be provided.")
else:
private_key = private_key.strip()
if not certificate:
module.fail_json(msg="certificate must be provided.")
else:
certificate = certificate.strip()
attempts = wait_timeout / 5
# Locate the load balancer.
balancer = rax_find_loadbalancer(module, pyrax, loadbalancer)
existing_ssl = balancer.get_ssl_termination()
changed = False
if state == 'present':
# Apply or reconfigure SSL termination on the load balancer.
ssl_attrs = dict(
securePort=secure_port,
privatekey=private_key,
certificate=certificate,
intermediateCertificate=intermediate_certificate,
enabled=enabled,
secureTrafficOnly=secure_traffic_only
)
needs_change = False
if existing_ssl:
for ssl_attr, value in ssl_attrs.iteritems():
if ssl_attr == 'privatekey':
# The private key is not included in get_ssl_termination's
# output (as it shouldn't be). Also, if you're changing the
# private key, you'll also be changing the certificate,
# so we don't lose anything by not checking it.
continue
if value is not None and existing_ssl.get(ssl_attr) != value:
# module.fail_json(msg='Unnecessary change', attr=ssl_attr, value=value, existing=existing_ssl.get(ssl_attr))
needs_change = True
else:
needs_change = True
if needs_change:
try:
balancer.add_ssl_termination(**ssl_attrs)
except pyrax.exceptions.PyraxException, e:
module.fail_json(msg='%s' % e.message)
changed = True
elif state == 'absent':
# Remove SSL termination if it's already configured.
if existing_ssl:
try:
balancer.delete_ssl_termination()
except pyrax.exceptions.PyraxException, e:
module.fail_json(msg='%s' % e.message)
changed = True
if https_redirect is not None and balancer.httpsRedirect != https_redirect:
if changed:
# This wait is unavoidable because load balancers are immutable
# while the SSL termination changes above are being applied.
pyrax.utils.wait_for_build(balancer, interval=5, attempts=attempts)
try:
balancer.update(httpsRedirect=https_redirect)
except pyrax.exceptions.PyraxException, e:
module.fail_json(msg='%s' % e.message)
changed = True
if changed and wait:
pyrax.utils.wait_for_build(balancer, interval=5, attempts=attempts)
balancer.get()
new_ssl_termination = balancer.get_ssl_termination()
# Intentionally omit the private key from the module output, so you don't
# accidentally echo it with `ansible-playbook -v` or `debug`, and the
# certificate, which is just long. Convert other attributes to snake_case
# and include https_redirect at the top-level.
if new_ssl_termination:
new_ssl = dict(
enabled=new_ssl_termination['enabled'],
secure_port=new_ssl_termination['securePort'],
secure_traffic_only=new_ssl_termination['secureTrafficOnly']
)
else:
new_ssl = None
result = dict(
changed=changed,
https_redirect=balancer.httpsRedirect,
ssl_termination=new_ssl,
balancer=rax_to_dict(balancer, 'clb')
)
success = True
if balancer.status == 'ERROR':
result['msg'] = '%s failed to build' % balancer.id
success = False
elif wait and balancer.status not in ('ACTIVE', 'ERROR'):
result['msg'] = 'Timeout waiting on %s' % balancer.id
success = False
if success:
module.exit_json(**result)
else:
module.fail_json(**result)
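A sketch (hypothetical values) of the trimmed, snake_cased result built from get_ssl_termination() output, as described in the comment above; the private key and full certificate are deliberately omitted.

new_ssl_termination = {
    'enabled': True,
    'securePort': 443,
    'secureTrafficOnly': False,
    'certificate': '-----BEGIN CERTIFICATE-----...',
}
new_ssl = dict(
    enabled=new_ssl_termination['enabled'],
    secure_port=new_ssl_termination['securePort'],
    secure_traffic_only=new_ssl_termination['secureTrafficOnly'],
)
print(new_ssl)  # {'enabled': True, 'secure_port': 443, 'secure_traffic_only': False}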
def main():
argument_spec = rax_argument_spec()
argument_spec.update(dict(
loadbalancer=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
enabled=dict(type='bool', default=True),
private_key=dict(),
certificate=dict(),
intermediate_certificate=dict(),
secure_port=dict(type='int', default=443),
secure_traffic_only=dict(type='bool', default=False),
https_redirect=dict(type='bool'),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300)
))
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module.')
loadbalancer = module.params.get('loadbalancer')
state = module.params.get('state')
enabled = module.boolean(module.params.get('enabled'))
private_key = module.params.get('private_key')
certificate = module.params.get('certificate')
intermediate_certificate = module.params.get('intermediate_certificate')
secure_port = module.params.get('secure_port')
secure_traffic_only = module.boolean(module.params.get('secure_traffic_only'))
https_redirect = module.boolean(module.params.get('https_redirect'))
wait = module.boolean(module.params.get('wait'))
wait_timeout = module.params.get('wait_timeout')
setup_rax_module(module, pyrax)
cloud_load_balancer_ssl(
module, loadbalancer, state, enabled, private_key, certificate,
intermediate_certificate, secure_port, secure_traffic_only,
https_redirect, wait, wait_timeout
)
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
main()

@ -0,0 +1,227 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_alarm
short_description: Create or delete a Rackspace Cloud Monitoring alarm.
description:
- Create or delete a Rackspace Cloud Monitoring alarm that associates an
existing rax_mon_entity, rax_mon_check, and rax_mon_notification_plan with
criteria that specify what conditions will trigger which levels of
notifications. Rackspace monitoring module flow | rax_mon_entity ->
rax_mon_check -> rax_mon_notification -> rax_mon_notification_plan ->
*rax_mon_alarm*
version_added: "2.0"
options:
state:
description:
- Ensure that the alarm with this C(label) exists or does not exist.
choices: [ "present", "absent" ]
required: false
default: present
label:
description:
- Friendly name for this alarm, used to achieve idempotence. Must be a String
between 1 and 255 characters long.
required: true
entity_id:
description:
- ID of the entity this alarm is attached to. May be acquired by registering
the value of a rax_mon_entity task.
required: true
check_id:
description:
- ID of the check that should be alerted on. May be acquired by registering
the value of a rax_mon_check task.
required: true
notification_plan_id:
description:
- ID of the notification plan to trigger if this alarm fires. May be acquired
by registering the value of a rax_mon_notification_plan task.
required: true
criteria:
description:
- Alarm DSL that describes alerting conditions and their output states. Must
be between 1 and 16384 characters long. See
http://docs.rackspace.com/cm/api/v1.0/cm-devguide/content/alerts-language.html
for a reference on the alerting language.
disabled:
description:
- If yes, create this alarm, but leave it in an inactive state. Defaults to
no.
choices: [ "yes", "no" ]
metadata:
description:
- Arbitrary key/value pairs to accompany the alarm. Must be a hash of String
keys and values between 1 and 255 characters long.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Alarm example
gather_facts: False
hosts: local
connection: local
tasks:
- name: Ensure that a specific alarm exists.
rax_mon_alarm:
credentials: ~/.rax_pub
state: present
label: uhoh
entity_id: "{{ the_entity['entity']['id'] }}"
check_id: "{{ the_check['check']['id'] }}"
notification_plan_id: "{{ defcon1['notification_plan']['id'] }}"
criteria: >
if (rate(metric['average']) > 10) {
return new AlarmStatus(WARNING);
}
return new AlarmStatus(OK);
register: the_alarm
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def alarm(module, state, label, entity_id, check_id, notification_plan_id, criteria,
disabled, metadata):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
if criteria and (len(criteria) < 1 or len(criteria) > 16384):
module.fail_json(msg='criteria must be between 1 and 16384 characters long')
# Coerce attributes.
changed = False
alarm = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = [a for a in cm.list_alarms(entity_id) if a.label == label]
if existing:
alarm = existing[0]
if state == 'present':
should_create = False
should_update = False
should_delete = False
if len(existing) > 1:
module.fail_json(msg='%s existing alarms have the label %s.' %
(len(existing), label))
if alarm:
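# A different check or notification plan cannot be applied in place, so the
# alarm is deleted and recreated; criteria, disabled and metadata changes are
# applied with an in-place update.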
if check_id != alarm.check_id or notification_plan_id != alarm.notification_plan_id:
should_delete = should_create = True
should_update = (disabled and disabled != alarm.disabled) or \
(metadata and metadata != alarm.metadata) or \
(criteria and criteria != alarm.criteria)
if should_update and not should_delete:
cm.update_alarm(entity=entity_id, alarm=alarm,
criteria=criteria, disabled=disabled,
label=label, metadata=metadata)
changed = True
if should_delete:
alarm.delete()
changed = True
else:
should_create = True
if should_create:
alarm = cm.create_alarm(entity=entity_id, check=check_id,
notification_plan=notification_plan_id,
criteria=criteria, disabled=disabled, label=label,
metadata=metadata)
changed = True
else:
for a in existing:
a.delete()
changed = True
if alarm:
alarm_dict = {
"id": alarm.id,
"label": alarm.label,
"check_id": alarm.check_id,
"notification_plan_id": alarm.notification_plan_id,
"criteria": alarm.criteria,
"disabled": alarm.disabled,
"metadata": alarm.metadata
}
module.exit_json(changed=changed, alarm=alarm_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
entity_id=dict(required=True),
check_id=dict(required=True),
notification_plan_id=dict(required=True),
criteria=dict(),
disabled=dict(type='bool', default=False),
metadata=dict(type='dict')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
entity_id = module.params.get('entity_id')
check_id = module.params.get('check_id')
notification_plan_id = module.params.get('notification_plan_id')
criteria = module.params.get('criteria')
disabled = module.boolean(module.params.get('disabled'))
metadata = module.params.get('metadata')
setup_rax_module(module, pyrax)
alarm(module, state, label, entity_id, check_id, notification_plan_id,
criteria, disabled, metadata)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,313 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_check
short_description: Create or delete a Rackspace Cloud Monitoring check for an
existing entity.
description:
- Create or delete a Rackspace Cloud Monitoring check associated with an
existing rax_mon_entity. A check is a specific test or measurement that is
performed, possibly from different monitoring zones, on the systems you
monitor. Rackspace monitoring module flow | rax_mon_entity ->
*rax_mon_check* -> rax_mon_notification -> rax_mon_notification_plan ->
rax_mon_alarm
version_added: "2.0"
options:
state:
description:
- Ensure that a check with this C(label) exists or does not exist.
choices: ["present", "absent"]
entity_id:
description:
- ID of the rax_mon_entity to target with this check.
required: true
label:
description:
- Defines a label for this check, between 1 and 64 characters long.
required: true
check_type:
description:
- The type of check to create. C(remote.) checks may be created on any
rax_mon_entity. C(agent.) checks may only be created on rax_mon_entities
that have a non-null C(agent_id).
choices:
- remote.dns
- remote.ftp-banner
- remote.http
- remote.imap-banner
- remote.mssql-banner
- remote.mysql-banner
- remote.ping
- remote.pop3-banner
- remote.postgresql-banner
- remote.smtp-banner
- remote.smtp
- remote.ssh
- remote.tcp
- remote.telnet-banner
- agent.filesystem
- agent.memory
- agent.load_average
- agent.cpu
- agent.disk
- agent.network
- agent.plugin
required: true
monitoring_zones_poll:
description:
- Comma-separated list of the names of the monitoring zones the check should
run from. Available monitoring zones include mzdfw, mzhkg, mziad, mzlon,
mzord and mzsyd. Required for remote.* checks; prohibited for agent.* checks.
target_hostname:
description:
- One of `target_hostname` and `target_alias` is required for remote.* checks,
but prohibited for agent.* checks. The hostname this check should target.
Must be a valid IPv4, IPv6, or FQDN.
target_alias:
description:
- One of `target_alias` and `target_hostname` is required for remote.* checks,
but prohibited for agent.* checks. Use the corresponding key in the entity's
`ip_addresses` hash to resolve an IP address to target.
details:
description:
- Additional details specific to the check type. Must be a hash of strings
between 1 and 255 characters long, or an array or object containing 0 to
256 items.
disabled:
description:
- If "yes", ensure the check is created, but don't actually use it yet.
choices: [ "yes", "no" ]
metadata:
description:
- Hash of arbitrary key-value pairs to accompany this check if it fires.
Keys and values must be strings between 1 and 255 characters long.
period:
description:
- The number of seconds between each time the check is performed. Must be
greater than the minimum period set on your account.
timeout:
description:
- The number of seconds this check will wait when attempting to collect
results. Must be less than the period.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Create a monitoring check
gather_facts: False
hosts: local
connection: local
tasks:
- name: Associate a check with an existing entity.
rax_mon_check:
credentials: ~/.rax_pub
state: present
entity_id: "{{ the_entity['entity']['id'] }}"
label: the_check
check_type: remote.ping
monitoring_zones_poll: mziad,mzord,mzdfw
details:
count: 10
meta:
hurf: durf
register: the_check
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def cloud_check(module, state, entity_id, label, check_type,
monitoring_zones_poll, target_hostname, target_alias, details,
disabled, metadata, period, timeout):
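# Idempotently manage a check on the entity: existing checks are matched by
# label. A change to any *specified* details key forces a delete and recreate,
# while label, target, zone, period, timeout, disabled and metadata changes are
# applied with an in-place update.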
# Coerce attributes.
if monitoring_zones_poll and not isinstance(monitoring_zones_poll, list):
monitoring_zones_poll = [monitoring_zones_poll]
if period:
period = int(period)
if timeout:
timeout = int(timeout)
changed = False
check = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
entity = cm.get_entity(entity_id)
if not entity:
module.fail_json(msg='Failed to instantiate entity. "%s" may not be'
' a valid entity id.' % entity_id)
existing = [e for e in entity.list_checks() if e.label == label]
if existing:
check = existing[0]
if state == 'present':
if len(existing) > 1:
module.fail_json(msg='%s existing checks have a label of %s.' %
(len(existing), label))
should_delete = False
should_create = False
should_update = False
if check:
# Details may include keys set to default values that are not
# included in the initial creation.
#
# Only force a recreation of the check if one of the *specified*
# keys is missing or has a different value.
if details:
for (key, value) in details.iteritems():
if key not in check.details:
should_delete = should_create = True
elif value != check.details[key]:
should_delete = should_create = True
should_update = label != check.label or \
(target_hostname and target_hostname != check.target_hostname) or \
(target_alias and target_alias != check.target_alias) or \
(disabled != check.disabled) or \
(metadata and metadata != check.metadata) or \
(period and period != check.period) or \
(timeout and timeout != check.timeout) or \
(monitoring_zones_poll and monitoring_zones_poll != check.monitoring_zones_poll)
if should_update and not should_delete:
check.update(label=label,
disabled=disabled,
metadata=metadata,
monitoring_zones_poll=monitoring_zones_poll,
timeout=timeout,
period=period,
target_alias=target_alias,
target_hostname=target_hostname)
changed = True
else:
# The check doesn't exist yet.
should_create = True
if should_delete:
check.delete()
if should_create:
check = cm.create_check(entity,
label=label,
check_type=check_type,
target_hostname=target_hostname,
target_alias=target_alias,
monitoring_zones_poll=monitoring_zones_poll,
details=details,
disabled=disabled,
metadata=metadata,
period=period,
timeout=timeout)
changed = True
elif state == 'absent':
if check:
check.delete()
changed = True
else:
module.fail_json(msg='state must be either present or absent.')
if check:
check_dict = {
"id": check.id,
"label": check.label,
"type": check.type,
"target_hostname": check.target_hostname,
"target_alias": check.target_alias,
"monitoring_zones_poll": check.monitoring_zones_poll,
"details": check.details,
"disabled": check.disabled,
"metadata": check.metadata,
"period": check.period,
"timeout": check.timeout
}
module.exit_json(changed=changed, check=check_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
entity_id=dict(required=True),
label=dict(required=True),
check_type=dict(required=True),
monitoring_zones_poll=dict(),
target_hostname=dict(),
target_alias=dict(),
details=dict(type='dict', default={}),
disabled=dict(type='bool', default=False),
metadata=dict(type='dict', default={}),
period=dict(type='int'),
timeout=dict(type='int'),
state=dict(default='present', choices=['present', 'absent'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
entity_id = module.params.get('entity_id')
label = module.params.get('label')
check_type = module.params.get('check_type')
monitoring_zones_poll = module.params.get('monitoring_zones_poll')
target_hostname = module.params.get('target_hostname')
target_alias = module.params.get('target_alias')
details = module.params.get('details')
disabled = module.boolean(module.params.get('disabled'))
metadata = module.params.get('metadata')
period = module.params.get('period')
timeout = module.params.get('timeout')
state = module.params.get('state')
setup_rax_module(module, pyrax)
cloud_check(module, state, entity_id, label, check_type,
monitoring_zones_poll, target_hostname, target_alias, details,
disabled, metadata, period, timeout)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,192 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_entity
short_description: Create or delete a Rackspace Cloud Monitoring entity
description:
- Create or delete a Rackspace Cloud Monitoring entity, which represents a device
to monitor. Entities associate checks and alarms with a target system and
provide a convenient, centralized place to store IP addresses. Rackspace
monitoring module flow | *rax_mon_entity* -> rax_mon_check ->
rax_mon_notification -> rax_mon_notification_plan -> rax_mon_alarm
version_added: "2.0"
options:
label:
description:
- Defines a name for this entity. Must be a non-empty string between 1 and
255 characters long.
required: true
state:
description:
- Ensure that an entity with this C(name) exists or does not exist.
choices: ["present", "absent"]
agent_id:
description:
- Rackspace monitoring agent on the target device to which this entity is
bound. Necessary to collect C(agent.) rax_mon_checks against this entity.
named_ip_addresses:
description:
- Hash of IP addresses that may be referenced by name by rax_mon_checks
added to this entity. Must be a dictionary with keys that are names
between 1 and 64 characters long, and values that are valid IPv4 or IPv6
addresses.
metadata:
description:
- Hash of arbitrary C(name), C(value) pairs that are passed to associated
rax_mon_alarms. Names and values must all be between 1 and 255 characters
long.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Entity example
gather_facts: False
hosts: local
connection: local
tasks:
- name: Ensure an entity exists
rax_mon_entity:
credentials: ~/.rax_pub
state: present
label: my_entity
named_ip_addresses:
web_box: 192.168.0.10
db_box: 192.168.0.11
meta:
hurf: durf
register: the_entity
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def cloud_monitoring(module, state, label, agent_id, named_ip_addresses,
metadata):
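# Entities are matched by label. A change to the named IP addresses forces a
# delete and recreate, while agent_id and metadata changes are applied with an
# in-place update.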
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for entity in cm.list_entities():
if label == entity.label:
existing.append(entity)
entity = None
if existing:
entity = existing[0]
if state == 'present':
should_update = False
should_delete = False
should_create = False
if len(existing) > 1:
module.fail_json(msg='%s existing entities have the label %s.' %
(len(existing), label))
if entity:
if named_ip_addresses and named_ip_addresses != entity.ip_addresses:
should_delete = should_create = True
# Change an existing Entity, unless there's nothing to do.
should_update = agent_id and agent_id != entity.agent_id or \
(metadata and metadata != entity.metadata)
if should_update and not should_delete:
entity.update(agent_id, metadata)
changed = True
if should_delete:
entity.delete()
else:
should_create = True
if should_create:
# Create a new Entity.
entity = cm.create_entity(label=label, agent=agent_id,
ip_addresses=named_ip_addresses,
metadata=metadata)
changed = True
else:
# Delete the existing Entities.
for e in existing:
e.delete()
changed = True
if entity:
entity_dict = {
"id": entity.id,
"name": entity.name,
"agent_id": entity.agent_id,
}
module.exit_json(changed=changed, entity=entity_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
agent_id=dict(),
named_ip_addresses=dict(type='dict', default={}),
metadata=dict(type='dict', default={})
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
agent_id = module.params.get('agent_id')
named_ip_addresses = module.params.get('named_ip_addresses')
metadata = module.params.get('metadata')
setup_rax_module(module, pyrax)
cloud_monitoring(module, state, label, agent_id, named_ip_addresses, metadata)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,176 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_notification
short_description: Create or delete a Rackspace Cloud Monitoring notification.
description:
- Create or delete a Rackspace Cloud Monitoring notification that specifies a
channel that can be used to communicate alarms, such as email, webhooks, or
PagerDuty. Rackspace monitoring module flow | rax_mon_entity -> rax_mon_check ->
*rax_mon_notification* -> rax_mon_notification_plan -> rax_mon_alarm
version_added: "2.0"
options:
state:
description:
- Ensure that the notification with this C(label) exists or does not exist.
choices: ['present', 'absent']
label:
description:
- Defines a friendly name for this notification. String between 1 and 255
characters long.
required: true
notification_type:
description:
- A supported notification type.
choices: ["webhook", "email", "pagerduty"]
required: true
details:
description:
- Dictionary of key-value pairs used to initialize the notification.
Required keys and meanings vary with notification type. See
http://docs.rackspace.com/cm/api/v1.0/cm-devguide/content/
service-notification-types-crud.html for details.
required: true
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Monitoring notification example
gather_facts: False
hosts: local
connection: local
tasks:
- name: Email me when something goes wrong.
rax_mon_notification:
credentials: ~/.rax_pub
label: omg
notification_type: email
details:
address: me@mailhost.com
register: the_notification
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def notification(module, state, label, notification_type, details):
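# Notifications are matched by label. A different notification_type forces a
# delete and recreate, while changed details trigger an in-place update.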
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
notification = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for n in cm.list_notifications():
if n.label == label:
existing.append(n)
if existing:
notification = existing[0]
if state == 'present':
should_update = False
should_delete = False
should_create = False
if len(existing) > 1:
module.fail_json(msg='%s existing notifications are labelled %s.' %
(len(existing), label))
if notification:
should_delete = (notification_type != notification.type)
should_update = (details != notification.details)
if should_update and not should_delete:
notification.update(details=details)
changed = True
if should_delete:
notification.delete()
else:
should_create = True
if should_create:
notification = cm.create_notification(notification_type,
label=label, details=details)
changed = True
else:
for n in existing:
n.delete()
changed = True
if notification:
notification_dict = {
"id": notification.id,
"type": notification.type,
"label": notification.label,
"details": notification.details
}
module.exit_json(changed=changed, notification=notification_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
notification_type=dict(required=True, choices=['webhook', 'email', 'pagerduty']),
details=dict(required=True, type='dict')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
notification_type = module.params.get('notification_type')
details = module.params.get('details')
setup_rax_module(module, pyrax)
notification(module, state, label, notification_type, details)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,181 @@
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a DOCUMENTATION stub specific to this module, it extends
# a documentation fragment located in ansible.utils.module_docs_fragments
DOCUMENTATION = '''
---
module: rax_mon_notification_plan
short_description: Create or delete a Rackspace Cloud Monitoring notification
plan.
description:
- Create or delete a Rackspace Cloud Monitoring notification plan by
associating existing rax_mon_notifications with severity levels. Rackspace
monitoring module flow | rax_mon_entity -> rax_mon_check ->
rax_mon_notification -> *rax_mon_notification_plan* -> rax_mon_alarm
version_added: "2.0"
options:
state:
description:
- Ensure that the notification plan with this C(label) exists or does not
exist.
choices: ['present', 'absent']
label:
description:
- Defines a friendly name for this notification plan. String between 1 and
255 characters long.
required: true
critical_state:
description:
- Notification list to use when the alarm state is CRITICAL. Must be an
array of valid rax_mon_notification ids.
warning_state:
description:
- Notification list to use when the alarm state is WARNING. Must be an array
of valid rax_mon_notification ids.
ok_state:
description:
- Notification list to use when the alarm state is OK. Must be an array of
valid rax_mon_notification ids.
author: Ash Wilson
extends_documentation_fragment: rackspace.openstack
'''
EXAMPLES = '''
- name: Example notification plan
gather_facts: False
hosts: local
connection: local
tasks:
- name: Establish who gets called when.
rax_mon_notification_plan:
credentials: ~/.rax_pub
state: present
label: defcon1
critical_state:
- "{{ everyone['notification']['id'] }}"
warning_state:
- "{{ opsfloor['notification']['id'] }}"
register: defcon1
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
def notification_plan(module, state, label, critical_state, warning_state, ok_state):
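# Notification plans are matched by label. Any change to the critical, warning
# or ok notification lists deletes and recreates the plan; plans are not
# updated in place here.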
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
notification_plan = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for n in cm.list_notification_plans():
if n.label == label:
existing.append(n)
if existing:
notification_plan = existing[0]
if state == 'present':
should_create = False
should_delete = False
if len(existing) > 1:
module.fail_json(msg='%s notification plans are labelled %s.' %
(len(existing), label))
if notification_plan:
should_delete = (critical_state and critical_state != notification_plan.critical_state) or \
(warning_state and warning_state != notification_plan.warning_state) or \
(ok_state and ok_state != notification_plan.ok_state)
if should_delete:
notification_plan.delete()
should_create = True
else:
should_create = True
if should_create:
notification_plan = cm.create_notification_plan(label=label,
critical_state=critical_state,
warning_state=warning_state,
ok_state=ok_state)
changed = True
else:
for np in existing:
np.delete()
changed = True
if notification_plan:
notification_plan_dict = {
"id": notification_plan.id,
"critical_state": notification_plan.critical_state,
"warning_state": notification_plan.warning_state,
"ok_state": notification_plan.ok_state,
"metadata": notification_plan.metadata
}
module.exit_json(changed=changed, notification_plan=notification_plan_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
critical_state=dict(type='list'),
warning_state=dict(type='list'),
ok_state=dict(type='list')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
critical_state = module.params.get('critical_state')
warning_state = module.params.get('warning_state')
ok_state = module.params.get('ok_state')
setup_rax_module(module, pyrax)
notification_plan(module, state, label, critical_state, warning_state, ok_state)
# Import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.rax import *
# Invoke the module.
main()

@ -0,0 +1,725 @@
#!/usr/bin/python
# Copyright (c) 2015 Ansible, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: vca_vapp
short_description: create, terminate, start or stop a vm in vca
description:
- Creates or terminates vca vms.
version_added: "2.0"
options:
username:
description:
- The vca username or email address; if not set, the environment variable VCA_USER is checked for the username.
required: false
default: None
password:
description:
- The vca password; if not set, the environment variable VCA_PASS is checked for the password.
required: false
default: None
org:
description:
- The org to login to for creating vapp, mostly set when the service_type is vdc.
required: false
default: None
service_id:
description:
- The service id in a vchs environment to be used for creating the vapp
required: false
default: None
host:
description:
- The authentication host to be used when service type is vcd.
required: false
default: None
api_version:
description:
- The api version to be used with the vca
required: false
default: "5.7"
service_type:
description:
- The type of service we are authenticating against
required: false
default: vca
choices: [ "vca", "vchs", "vcd" ]
state:
description:
- if the object should be added or removed
required: false
default: present
choices: [ "present", "absent" ]
catalog_name:
description:
- The catalog from which the vm template is used.
required: false
default: "Public Catalog"
script:
description:
- The path to a script that gets injected into the vm during creation.
required: false
default: None
template_name:
description:
- The template name from which the vm should be created.
required: True
network_name:
description:
- The network name to which the vm should be attached.
required: false
default: None
network_ip:
description:
- The ip address that should be assigned to vm when the ip assignment type is static
required: false
default: None
network_mode:
description:
- The network mode in which the ip should be allocated.
required: false
default: pool
choices: [ "pool", "dhcp", 'static' ]
instance_id:
description:
- The instance id of the region in vca flavour where the vm should be created
required: false
default: None
wait:
description:
- Whether the module should wait when the operation is poweroff or poweron; waiting is recommended so the reported state is accurate.
required: false
default: True
wait_timeout:
description:
- The wait timeout when wait is set to true
required: false
default: 250
vdc_name:
description:
- The name of the vdc where the vm should be created.
required: false
default: None
vm_name:
description:
- The name of the vm to be created; the vapp is given the same name as the vm.
required: false
default: 'default_ansible_vm1'
vm_cpus:
description:
- The number of cpus to be added to the vm
required: false
default: None
vm_memory:
description:
- The amount of memory to be added to vm in megabytes
required: false
default: None
verify_certs:
description:
- Whether the SSL certificates of the authentication host should be verified
required: false
default: True
admin_password:
description:
- The password to be set for admin
required: false
default: None
operation:
description:
- The operation to be done on the vm
required: false
default: poweroff
choices: [ 'shutdown', 'poweroff', 'poweron', 'reboot', 'reset', 'suspend' ]
'''
EXAMPLES = '''
#Create a vm in a vca environment. The username and password are not set here because they are read from the environment
- hosts: localhost
connection: local
tasks:
- vca_vapp:
operation: poweroff
instance_id: 'b15ff1e5-1024-4f55-889f-ea0209726282'
vdc_name: 'benz_ansible'
vm_name: benz
vm_cpus: 2
vm_memory: 1024
network_mode: pool
template_name: "CentOS63-32BIT"
admin_password: "Password!123"
network_name: "default-routed-network"
#Create a vm in a vchs environment.
- hosts: localhost
connection: local
tasks:
- vca_vapp:
operation: poweron
service_id: '9-69'
vdc_name: 'Marketing'
service_type: 'vchs'
vm_name: benz
vm_cpus: 1
script: "/tmp/configure_vm.sh"
catalog_name: "Marketing-Catalog"
template_name: "Marketing-Ubuntu-1204x64"
vm_memory: 512
network_name: "M49-default-isolated"
#Create a vm in a vcd environment
- hosts: localhost
connection: local
tasks:
- vca_vapp:
operation: poweron
org: IT20
host: "mycloud.vmware.net"
api_version: "5.5"
service_type: vcd
vdc_name: 'IT20 Data Center (Beta)'
vm_name: benz
vm_cpus: 1
catalog_name: "OS Templates"
template_name: "CentOS 6.5 64Bit CLI"
network_mode: pool
'''
import time, json, xmltodict
HAS_PYVCLOUD = False
try:
from pyvcloud.vcloudair import VCA
HAS_PYVCLOUD = True
except ImportError:
pass
SERVICE_MAP = {'vca': 'ondemand', 'vchs': 'subscription', 'vcd': 'vcd'}
LOGIN_HOST = {}
LOGIN_HOST['vca'] = 'vca.vmware.com'
LOGIN_HOST['vchs'] = 'vchs.vmware.com'
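# Attributes compared between the requested configuration and the existing vm
# in vm_exists() to decide whether a reconfiguration is needed.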
VM_COMPARE_KEYS = ['admin_password', 'status', 'cpus', 'memory_mb']
def vm_state(val=None):
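# Translate the numeric vCloud VM status code into a human-readable power state.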
if val == 8:
return "Power_Off"
elif val == 4:
return "Power_On"
else:
return "Unknown Status"
def serialize_instances(instance_list):
instances = []
for i in instance_list:
instances.append(dict(apiUrl=i['apiUrl'], instance_id=i['id']))
return instances
def get_catalogs(vca):
catalogs = vca.get_catalogs()
results = []
for catalog in catalogs:
if catalog.CatalogItems and catalog.CatalogItems.CatalogItem:
for item in catalog.CatalogItems.CatalogItem:
results.append([catalog.name, item.name])
else:
results.append([catalog.name, ''])
return results
def vca_login(module=None):
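# Log in according to the service type: 'vca' (on-demand) logs in and then into
# the target instance, 'vchs' (subscription) logs in and then into the
# service/org, and 'vcd' authenticates directly against the given host and org.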
service_type = module.params.get('service_type')
username = module.params.get('username')
password = module.params.get('password')
instance = module.params.get('instance_id')
org = module.params.get('org')
service = module.params.get('service_id')
vdc_name = module.params.get('vdc_name')
version = module.params.get('api_version')
verify = module.params.get('verify_certs')
if not vdc_name:
if service_type == 'vchs':
vdc_name = module.params.get('service_id')
if not org:
if service_type == 'vchs':
if vdc_name:
org = vdc_name
else:
org = service
if service_type == 'vcd':
host = module.params.get('host')
else:
host = LOGIN_HOST[service_type]
if not username:
if 'VCA_USER' in os.environ:
username = os.environ['VCA_USER']
if not password:
if 'VCA_PASS' in os.environ:
password = os.environ['VCA_PASS']
if not username or not password:
module.fail_json(msg = "Either the username or password is not set, please check")
if service_type == 'vchs':
version = '5.6'
if service_type == 'vcd':
if not version:
version = '5.6'
vca = VCA(host=host, username=username, service_type=SERVICE_MAP[service_type], version=version, verify=verify)
if service_type == 'vca':
if not vca.login(password=password):
module.fail_json(msg = "Login Failed: Please check username or password", error=vca.response.content)
if not vca.login_to_instance(password=password, instance=instance, token=None, org_url=None):
s_json = serialize_instances(vca.instances)
module.fail_json(msg = "Login to Instance failed: Seems like instance_id provided is wrong .. Please check",\
valid_instances=s_json)
if not vca.login_to_instance(instance=instance, password=None, token=vca.vcloud_session.token,
org_url=vca.vcloud_session.org_url):
module.fail_json(msg = "Error logging into org for the instance", error=vca.response.content)
return vca
if service_type == 'vchs':
if not vca.login(password=password):
module.fail_json(msg = "Login Failed: Please check username or password", error=vca.response.content)
if not vca.login(token=vca.token):
module.fail_json(msg = "Failed to get the token", error=vca.response.content)
if not vca.login_to_org(service, org):
module.fail_json(msg = "Failed to login to org, Please check the orgname", error=vca.response.content)
return vca
if service_type == 'vcd':
if not vca.login(password=password, org=org):
module.fail_json(msg = "Login Failed: Please check username or password or host parameters")
if not vca.login(password=password, org=org):
module.fail_json(msg = "Failed to get the token", error=vca.response.content)
if not vca.login(token=vca.token, org=org, org_url=vca.vcloud_session.org_url):
module.fail_json(msg = "Failed to login to org", error=vca.response.content)
return vca
def set_vm_state(module=None, vca=None, state=None):
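# When wait is enabled, poll the vApp until the named vm reports the requested
# power state or wait_timeout seconds elapse.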
wait = module.params.get('wait')
wait_tmout = module.params.get('wait_timeout')
vm_name = module.params.get('vm_name')
vdc_name = module.params.get('vdc_name')
vapp_name = module.params.get('vm_name')
service_type = module.params.get('service_type')
service_id = module.params.get('service_id')
if service_type == 'vchs' and not vdc_name:
vdc_name = service_id
vdc = vca.get_vdc(vdc_name)
if wait:
tmout = time.time() + wait_tmout
while tmout > time.time():
vapp = vca.get_vapp(vdc, vapp_name)
vms = filter(lambda vm: vm['name'] == vm_name, vapp.get_vms_details())
vm = vms[0]
if vm['status'] == state:
return True
time.sleep(5)
module.fail_json(msg="Timeut waiting for the vms state to change")
return True
def vm_details(vdc=None, vapp=None, vca=None):
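# Collect the vm's detail record and its network information from the vApp so
# they can be returned to the playbook.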
table = []
networks = []
vm_name = vapp
vdc1 = vca.get_vdc(vdc)
if not vdc1:
module.fail_json(msg = "Error getting the vdc, Please check the vdc name")
vap = vca.get_vapp(vdc1, vapp)
if vap:
vms = filter(lambda vm: vm['name'] == vm_name, vap.get_vms_details())
networks = vap.get_vms_network_info()
if len(networks[0]) >= 1:
table.append(dict(vm_info=vms[0], network_info=networks[0][0]))
else:
table.append(dict(vm_info=vms[0], network_info=networks[0]))
return table
def vapp_attach_net(module=None, vca=None, vapp=None):
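# Attach the requested network to the vApp: existing vm and vApp network
# connections are dropped first, then the named network is connected and the
# vms are wired to it using the chosen IP allocation mode.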
network_name = module.params.get('network_name')
service_type = module.params.get('service_type')
vdc_name = module.params.get('vdc_name')
mode = module.params.get('network_mode')
if mode.upper() == 'STATIC':
network_ip = module.params.get('network_ip')
else:
network_ip = None
if not vdc_name:
if service_type == 'vchs':
vdc_name = module.params.get('service_id')
nets = filter(lambda n: n.name == network_name, vca.get_networks(vdc_name))
if len(nets) <= 1:
net_task = vapp.disconnect_vms()
if not net_task:
module.fail_json(msg="Failure in detattaching vms from vnetworks", error=vapp.response.content)
if not vca.block_until_completed(net_task):
module.fail_json(msg="Failure in waiting for detaching vms from vnetworks", error=vapp.response.content)
net_task = vapp.disconnect_from_networks()
if not net_task:
module.fail_json(msg="Failure in detattaching network from vapp", error=vapp.response.content)
if not vca.block_until_completed(net_task):
module.fail_json(msg="Failure in waiting for detaching network from vapp", error=vapp.response.content)
if not network_name:
return True
net_task = vapp.connect_to_network(nets[0].name, nets[0].href)
if not net_task:
module.fail_json(msg="Failure in attaching network to vapp", error=vapp.response.content)
if not vca.block_until_completed(net_task):
module.fail_json(msg="Failure in waiting for attching network to vapp", error=vapp.response.content)
net_task = vapp.connect_vms(nets[0].name, connection_index=0, ip_allocation_mode=mode.upper(), ip_address=network_ip )
if not net_task:
module.fail_json(msg="Failure in attaching network to vm", error=vapp.response.content)
if not vca.block_until_completed(net_task):
module.fail_json(msg="Failure in waiting for attaching network to vm", error=vapp.response.content)
return True
nets = []
for i in vca.get_networks(vdc_name):
nets.append(i.name)
module.fail_json(msg="Seems like network_name is not found in the vdc, please check Available networks as above", Available_networks=nets)
def create_vm(vca=None, module=None):
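# Build the vm from scratch: instantiate the vApp from the template, rename the
# vm and its guest hostname, attach the network, adjust cpus, memory and the
# admin password, inject an optional customization script, then apply the
# requested power operation and report the resulting vm details.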
vm_name = module.params.get('vm_name')
operation = module.params.get('operation')
vm_cpus = module.params.get('vm_cpus')
vm_memory = module.params.get('vm_memory')
catalog_name = module.params.get('catalog_name')
template_name = module.params.get('template_name')
vdc_name = module.params.get('vdc_name')
network_name = module.params.get('network_name')
service_type = module.params.get('service_type')
admin_pass = module.params.get('admin_password')
script = module.params.get('script')
vapp_name = vm_name
if not vdc_name:
if service_type == 'vchs':
vdc_name = module.params.get('service_id')
task = vca.create_vapp(vdc_name, vapp_name, template_name, catalog_name, vm_name=None)
if not task:
catalogs = get_catalogs(vca)
module.fail_json(msg="Error in Creating VM, Please check catalog or template, Available catalogs and templates are as above or check the error field", catalogs=catalogs, errors=vca.response.content)
if not vca.block_until_completed(task):
module.fail_json(msg = "Error in waiting for VM Creation, Please check logs", errors=vca.response.content)
vdc = vca.get_vdc(vdc_name)
if not vdc:
module.fail_json(msg = "Error getting the vdc, Please check the vdc name", errors=vca.response.content)
vapp = vca.get_vapp(vdc, vapp_name)
task = vapp.modify_vm_name(1, vm_name)
if not task:
module.fail_json(msg="Error in setting the vm_name to vapp_name", errors=vca.response.content)
if not vca.block_until_completed(task):
module.fail_json(msg = "Error in waiting for VM Renaming, Please check logs", errors=vca.response.content)
vapp = vca.get_vapp(vdc, vapp_name)
task = vapp.customize_guest_os(vm_name, computer_name=vm_name)
if not task:
module.fail_json(msg="Error in setting the computer_name to vm_name", errors=vca.response.content)
if not vca.block_until_completed(task):
module.fail_json(msg = "Error in waiting for Computer Renaming, Please check logs", errors=vca.response.content)
if network_name:
vapp = vca.get_vapp(vdc, vapp_name)
if not vapp_attach_net(module, vca, vapp):
module.fail_json(msg= "Attaching network to VM fails", errors=vca.response.content)
if vm_cpus:
vapp = vca.get_vapp(vdc, vapp_name)
task = vapp.modify_vm_cpu(vm_name, vm_cpus)
if not task:
module.fail_json(msg="Error adding cpu", error=vapp.resonse.contents)
if not vca.block_until_completed(task):
module.fail_json(msg="Failure in waiting for modifying cpu", error=vapp.response.content)
if vm_memory:
vapp = vca.get_vapp(vdc, vapp_name)
task = vapp.modify_vm_memory(vm_name, vm_memory)
if not task:
module.fail_json(msg="Error adding memory", error=vapp.resonse.contents)
if not vca.block_until_completed(task):
module.fail_json(msg="Failure in waiting for modifying memory", error=vapp.response.content)
if admin_pass:
vapp = vca.get_vapp(vdc, vapp_name)
task = vapp.customize_guest_os(vm_name, customization_script=None,
computer_name=None, admin_password=admin_pass,
reset_password_required=False)
if not task:
module.fail_json(msg="Error adding admin password", error=vapp.resonse.contents)
if not vca.block_until_completed(task):
module.fail_json(msg = "Error in waiting for resettng admin pass, Please check logs", errors=vapp.response.content)
if script:
vapp = vca.get_vapp(vdc, vapp_name)
if os.path.exists(os.path.expanduser(script)):
file_contents = open(script, 'r')
task = vapp.customize_guest_os(vm_name, customization_script=file_contents.read())
if not task:
module.fail_json(msg="Error adding customization script", error=vapp.resonse.contents)
if not vca.block_until_completed(task):
module.fail_json(msg = "Error in waiting for customization script, please check logs", errors=vapp.response.content)
task = vapp.force_customization(vm_name, power_on=False )
if not task:
module.fail_json(msg="Error adding customization script", error=vapp.resonse.contents)
if not vca.block_until_completed(task):
module.fail_json(msg = "Error in waiting for customization script, please check logs", errors=vapp.response.content)
else:
module.fail_json(msg = "The file specified in script paramter is not avaialable or accessible")
vapp = vca.get_vapp(vdc, vapp_name)
if operation == 'poweron':
vapp.poweron()
set_vm_state(module, vca, state='Powered on')
elif operation == 'poweroff':
vapp.poweroff()
elif operation == 'reboot':
vapp.reboot()
elif operation == 'reset':
vapp.reset()
elif operation == 'suspend':
vapp.suspend()
elif operation == 'shutdown':
vapp.shutdown()
details = vm_details(vdc_name, vapp_name, vca)
module.exit_json(changed=True, msg="VM created", vm_details=details[0])
def vapp_reconfigure(module=None, diff=None, vm=None, vca=None, vapp=None, vdc_name=None):
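# Apply only the attributes listed in 'diff' to the existing vm (powering it
# off first when required), then report whether anything changed.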
flag = False
vapp_name = module.params.get('vm_name')
vm_name = module.params.get('vm_name')
cpus = module.params.get('vm_cpus')
memory = module.params.get('vm_memory')
admin_pass = module.params.get('admin_password')
if 'status' in diff:
operation = module.params.get('operation')
if operation == 'poweroff':
vapp.poweroff()
set_vm_state(module, vca, state='Powered off')
flag = True
if 'network' in diff:
vapp_attach_net(module, vca, vapp)
flag = True
if 'cpus' in diff:
task = vapp.modify_vm_cpu(vm_name, cpus)
if not vca.block_until_completed(task):
module.fail_json(msg="Failure in waiting for modifying cpu, might be vm is powered on and doesnt support hotplugging", error=vapp.response.content)
flag = True
if 'memory_mb' in diff:
task = vapp.modify_vm_memory(vm_name, memory)
if not vca.block_until_completed(task):
module.fail_json(msg="Failure in waiting for modifying memory, might be vm is powered on and doesnt support hotplugging", error=vapp.response.content)
flag = True
if 'admin_password' in diff:
task = vapp.customize_guest_os(vm_name, customization_script=None,
computer_name=None, admin_password=admin_pass,
reset_password_required=False)
if not task:
module.fail_json(msg="Error adding admin password", error=vapp.resonse.contents)
if not vca.block_until_completed(task):
module.fail_json(msg = "Error in waiting for resettng admin pass, Please check logs", errors=vapp.response.content)
flag = True
if 'status' in diff:
operation = module.params.get('operation')
if operation == 'poweron':
vapp.poweron()
set_vm_state(module, vca, state='Powered on')
elif operation == 'reboot':
vapp.reboot()
elif operation == 'reset':
vapp.reset()
elif operation == 'suspend':
vapp.suspend()
elif operation == 'shutdown':
vapp.shutdown()
flag = True
details = vm_details(vdc_name, vapp_name, vca)
if flag:
module.exit_json(changed=True, msg="VM reconfigured", vm_details=details[0])
module.exit_json(changed=False, msg="VM exists as per configuration",\
vm_details=details[0])
def vm_exists(module=None, vapp=None, vca=None, vdc_name=None):
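# Compare the requested vm (name, cpus, memory, admin password, power state and
# network) with what already exists in the vApp: return False when the vm is
# missing, True when nothing differs, and otherwise hand the differences to
# vapp_reconfigure().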
vm_name = module.params.get('vm_name')
operation = module.params.get('operation')
vm_cpus = module.params.get('vm_cpus')
vm_memory = module.params.get('vm_memory')
network_name = module.params.get('network_name')
admin_pass = module.params.get('admin_password')
d_vm = {}
d_vm['name'] = vm_name
d_vm['cpus'] = vm_cpus
d_vm['memory_mb'] = vm_memory
d_vm['admin_password'] = admin_pass
if operation == 'poweron':
d_vm['status'] = 'Powered on'
elif operation == 'poweroff':
d_vm['status'] = 'Powered off'
else:
d_vm['status'] = 'operate'
vms = filter(lambda vm: vm['name'] == vm_name, vapp.get_vms_details())
if len(vms) > 1:
module.fail_json(msg = "The vapp seems to have more than one vm with same name,\
currently we only support a single vm deployment")
elif len(vms) == 0:
return False
else:
vm = vms[0]
diff = []
for i in VM_COMPARE_KEYS:
if not d_vm[i]:
continue
if vm[i] != d_vm[i]:
diff.append(i)
if len(diff) == 1 and 'status' in diff:
vapp_reconfigure(module, diff, vm, vca, vapp, vdc_name)
networks = vapp.get_vms_network_info()
if not network_name and len(networks) >=1:
if len(networks[0]) >= 1:
if networks[0][0]['network_name'] != 'none':
diff.append('network')
if not network_name:
if len(diff) == 0:
return True
if not networks[0] and network_name:
diff.append('network')
if networks[0]:
if len(networks[0]) >= 1:
if networks[0][0]['network_name'] != network_name:
diff.append('network')
if vm['status'] != 'Powered off':
if operation != 'poweroff' and len(diff) > 0:
module.fail_json(msg="To change any properties of a vm, The vm should be in Powered Off state")
if len(diff) == 0:
return True
else:
vapp_reconfigure(module, diff, vm, vca, vapp, vdc_name)
def main():
module = AnsibleModule(
argument_spec=dict(
username = dict(default=None),
password = dict(default=None),
org = dict(default=None),
service_id = dict(default=None),
script = dict(default=None),
host = dict(default=None),
api_version = dict(default='5.7'),
service_type = dict(default='vca', choices=['vchs', 'vca', 'vcd']),
state = dict(default='present', choices = ['present', 'absent']),
catalog_name = dict(default="Public Catalog"),
template_name = dict(default=None, required=True),
network_name = dict(default=None),
network_ip = dict(default=None),
network_mode = dict(default='pool', choices=['dhcp', 'static', 'pool']),
instance_id = dict(default=None),
wait = dict(default=True, type='bool'),
wait_timeout = dict(default=250, type='int'),
vdc_name = dict(default=None),
vm_name = dict(default='default_ansible_vm1'),
vm_cpus = dict(default=None, type='int'),
verify_certs = dict(default=True, type='bool'),
vm_memory = dict(default=None, type='int'),
admin_password = dict(default=None),
operation = dict(default='poweroff', choices=['shutdown', 'poweroff', 'poweron', 'reboot', 'reset', 'suspend'])
)
)
vdc_name = module.params.get('vdc_name')
vm_name = module.params.get('vm_name')
org = module.params.get('org')
service = module.params.get('service_id')
state = module.params.get('state')
service_type = module.params.get('service_type')
host = module.params.get('host')
instance_id = module.params.get('instance_id')
network_mode = module.params.get('network_mode')
network_ip = module.params.get('network_ip')
vapp_name = vm_name
if not HAS_PYVCLOUD:
module.fail_json(msg="python module pyvcloud is needed for this module")
if network_mode.upper() == 'STATIC':
if not network_ip:
module.fail_json(msg="if network_mode is STATIC, network_ip is mandatory")
if service_type == 'vca':
if not instance_id:
module.fail_json(msg="When service type is vca the instance_id parameter is mandatory")
if not vdc_name:
module.fail_json(msg="When service type is vca the vdc_name parameter is mandatory")
if service_type == 'vchs':
if not service:
module.fail_json(msg="When service type vchs the service_id parameter is mandatory")
if not org:
org = service
if not vdc_name:
vdc_name = service
if service_type == 'vcd':
if not host:
module.fail_json(msg="When service type is vcd host parameter is mandatory")
vca = vca_login(module)
vdc = vca.get_vdc(vdc_name)
if not vdc:
module.fail_json(msg = "Error getting the vdc, Please check the vdc name")
vapp = vca.get_vapp(vdc, vapp_name)
if vapp:
if state == 'absent':
task = vca.delete_vapp(vdc_name, vapp_name)
if not vca.block_until_completed(task):
module.fail_json(msg="failure in deleting vapp")
module.exit_json(changed=True, msg="Vapp deleted")
if vm_exists(module, vapp, vca, vdc_name ):
details = vm_details(vdc_name, vapp_name, vca)
module.exit_json(changed=False, msg="vapp exists", vm_details=details[0])
else:
create_vm(vca, module)
if state == 'absent':
module.exit_json(changed=False, msg="Vapp does not exist")
create_vm(vca, module)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,176 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Joseph Callen <jcallen () csc.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: vmware_datacenter
short_description: Manage VMware vSphere Datacenters
description:
- Manage VMware vSphere Datacenters
version_added: 2.0
author: "Joseph Callen (@jcpowermac)"
notes:
- Tested on vSphere 5.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
hostname:
description:
- The hostname or IP address of the vSphere vCenter API server
required: True
username:
description:
- The username of the vSphere vCenter
required: True
aliases: ['user', 'admin']
password:
description:
- The password of the vSphere vCenter
required: True
aliases: ['pass', 'pwd']
datacenter_name:
description:
- The name of the datacenter to be managed.
required: True
state:
description:
- If the datacenter should be present or absent
choices: ['present', 'absent']
required: True
'''
EXAMPLES = '''
# Example vmware_datacenter command from Ansible Playbooks
- name: Create Datacenter
local_action: >
vmware_datacenter
hostname="{{ ansible_ssh_host }}" username=root password=vmware
datacenter_name="datacenter"
'''
try:
from pyVmomi import vim, vmodl
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
def state_create_datacenter(module):
datacenter_name = module.params['datacenter_name']
content = module.params['content']
changed = True
datacenter = None
folder = content.rootFolder
try:
if not module.check_mode:
datacenter = folder.CreateDatacenter(name=datacenter_name)
module.exit_json(changed=changed, result=str(datacenter))
except vim.fault.DuplicateName:
module.fail_json(msg="A datacenter with the name %s already exists" % datacenter_name)
except vim.fault.InvalidName:
module.fail_json(msg="%s is an invalid name for a cluster" % datacenter_name)
except vmodl.fault.NotSupported:
# This should never happen
module.fail_json(msg="Trying to create a datacenter on an incorrect folder object")
except vmodl.RuntimeFault as runtime_fault:
module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
module.fail_json(msg=method_fault.msg)
def check_datacenter_state(module):
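# Connect to vCenter and look the datacenter up by name, returning 'present' or
# 'absent'; the API content and the datacenter object are cached in
# module.params for the state handlers.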
datacenter_name = module.params['datacenter_name']
try:
content = connect_to_api(module)
datacenter = find_datacenter_by_name(content, datacenter_name)
module.params['content'] = content
if datacenter is None:
return 'absent'
else:
module.params['datacenter'] = datacenter
return 'present'
except vmodl.RuntimeFault as runtime_fault:
module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
module.fail_json(msg=method_fault.msg)
def state_destroy_datacenter(module):
datacenter = module.params['datacenter']
changed = True
result = None
try:
if not module.check_mode:
task = datacenter.Destroy_Task()
changed, result = wait_for_task(task)
module.exit_json(changed=changed, result=result)
except vim.fault.VimFault as vim_fault:
module.fail_json(msg=vim_fault.msg)
except vmodl.RuntimeFault as runtime_fault:
module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
module.fail_json(msg=method_fault.msg)
def state_exit_unchanged(module):
module.exit_json(changed=False)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
dict(
datacenter_name=dict(required=True, type='str'),
state=dict(required=True, choices=['present', 'absent'], type='str'),
)
)
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
if not HAS_PYVMOMI:
module.fail_json(msg='pyvmomi is required for this module')
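# Dispatch table: the outer key is the desired state, the inner key is the
# current state, and the value is the handler that reconciles the two.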
datacenter_states = {
'absent': {
'present': state_destroy_datacenter,
'absent': state_exit_unchanged,
},
'present': {
'present': state_exit_unchanged,
'absent': state_create_datacenter,
}
}
desired_state = module.params['state']
current_state = check_datacenter_state(module)
datacenter_states[desired_state][current_state](module)
from ansible.module_utils.basic import *
from ansible.module_utils.vmware import *
if __name__ == '__main__':
main()

@ -0,0 +1,155 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Dag Wieers <dag@wieers.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: vsphere_copy
short_description: Copy a file to a vCenter datastore
description: Upload files to a vCenter datastore
version_added: 2.0
author: Dag Wieers (@dagwieers) <dag@wieers.com>
options:
host:
description:
- The vCenter server on which the datastore is available.
required: true
login:
description:
- The login name to authenticate on the vCenter server.
required: true
password:
description:
- The password to authenticate on the vCenter server.
required: true
src:
description:
- The file to push to vCenter
required: true
datacenter:
description:
- The datacenter on the vCenter server that holds the datastore.
required: true
datastore:
description:
- The datastore on the vCenter server to push files to.
required: true
path:
description:
- The destination path on the datastore on the vCenter server where the file will be placed.
required: true
notes:
- "This module ought to be run from a system that can access vCenter directly and has the file to transfer.
It can be the normal remote target or you can change it either by using C(transport: local) or using C(delegate_to)."
- Tested on vSphere 5.5
'''
EXAMPLES = '''
- vsphere_copy: host=vhost login=vuser password=vpass src=/some/local/file datacenter='DC1 Someplace' datastore=datastore1 path=some/remote/file
transport: local
- vsphere_copy: host=vhost login=vuser password=vpass src=/other/local/file datacenter='DC2 Someplace' datastore=datastore2 path=other/remote/file
delegate_to: other_system
'''
import atexit
import base64
import httplib
import urllib
import mmap
import errno
import socket
def vmware_path(datastore, datacenter, path):
''' Constructs a URL path that VSphere accepts reliably '''
path = "/folder/%s" % path.lstrip("/")
# Due to a software bug in vSphere, it fails to handle ampersand in datacenter names
# The solution is to do what vSphere does (when browsing) and double-encode ampersands, maybe others ?
datacenter = datacenter.replace('&', '%26')
if not path.startswith("/"):
path = "/" + path
params = dict( dsName = datastore )
if datacenter:
params["dcPath"] = datacenter
params = urllib.urlencode(params)
return "%s?%s" % (path, params)
def main():
module = AnsibleModule(
argument_spec = dict(
host = dict(required=True, aliases=[ 'hostname' ]),
login = dict(required=True, aliases=[ 'username' ]),
password = dict(required=True),
src = dict(required=True, aliases=[ 'name' ]),
datacenter = dict(required=True),
datastore = dict(required=True),
dest = dict(required=True, aliases=[ 'path' ]),
),
# Implementing check-mode using HEAD is impossible, since size/date is not 100% reliable
supports_check_mode = False,
)
host = module.params.get('host')
login = module.params.get('login')
password = module.params.get('password')
src = module.params.get('src')
datacenter = module.params.get('datacenter')
datastore = module.params.get('datastore')
dest = module.params.get('dest')
fd = open(src, "rb")
atexit.register(fd.close)
data = mmap.mmap(fd.fileno(), 0, access=mmap.ACCESS_READ)
atexit.register(data.close)
conn = httplib.HTTPSConnection(host)
atexit.register(conn.close)
remote_path = vmware_path(datastore, datacenter, dest)
auth = base64.encodestring('%s:%s' % (login, password)).rstrip()
headers = {
"Content-Type": "application/octet-stream",
"Content-Length": str(len(data)),
"Authorization": "Basic %s" % auth,
}
# URL is only used in JSON output (helps troubleshooting)
url = 'https://%s%s' % (host, remote_path)
try:
conn.request("PUT", remote_path, body=data, headers=headers)
except socket.error, e:
if isinstance(e.args, tuple) and e[0] == errno.ECONNRESET:
# VSphere resets connection if the file is in use and cannot be replaced
module.fail_json(msg='Failed to upload, image probably in use', status=e[0], reason=str(e), url=url)
else:
module.fail_json(msg=str(e), status=e[0], reason=str(e), url=url)
resp = conn.getresponse()
if resp.status in range(200, 300):
module.exit_json(changed=True, status=resp.status, reason=resp.reason, url=url)
else:
module.fail_json(msg='Failed to upload', status=resp.status, reason=resp.reason, length=resp.length, version=resp.version, headers=resp.getheaders(), chunked=resp.chunked, url=url)
# Import module snippets
from ansible.module_utils.basic import *
main()

@ -0,0 +1,197 @@
#!/usr/bin/python
#
# Create a Webfaction application using Ansible and the Webfaction API
#
# Valid application types can be found by looking here:
# http://docs.webfaction.com/xmlrpc-api/apps.html#application-types
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser 2015, with contributions gratefully acknowledged from:
# * Andy Baker
# * Federico Tarantini
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_app
short_description: Add or remove applications on a Webfaction host
description:
- Add or remove applications on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the application
required: true
state:
description:
- Whether the application should exist
required: false
choices: ['present', 'absent']
default: "present"
type:
description:
- The type of application to create. See the Webfaction docs at http://docs.webfaction.com/xmlrpc-api/apps.html for a list.
required: true
autostart:
description:
- Whether the app should restart with an autostart.cgi script
required: false
default: "no"
extra_info:
description:
- Any extra parameters required by the app
required: false
default: null
port_open:
description:
- Whether the port should be opened
required: false
default: false
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
machine:
description:
- The machine name to use (optional for accounts with only one machine)
required: false
'''
EXAMPLES = '''
- name: Create a test app
webfaction_app:
name="my_wsgi_app1"
state=present
type=mod_wsgi35-python27
login_name={{webfaction_user}}
login_password={{webfaction_passwd}}
machine={{webfaction_machine}}
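# Illustrative only - removing the same app (type is still required by the module):
- name: Remove the test app
webfaction_app:
name="my_wsgi_app1"
state=absent
type=mod_wsgi35-python27
login_name={{webfaction_user}}
login_password={{webfaction_passwd}}
machine={{webfaction_machine}}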
'''
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
type = dict(required=True),
autostart = dict(required=False, choices=BOOLEANS, default=False),
extra_info = dict(required=False, default=""),
port_open = dict(required=False, choices=BOOLEANS, default=False),
login_name = dict(required=True),
login_password = dict(required=True),
machine = dict(required=False, default=False),
),
supports_check_mode=True
)
app_name = module.params['name']
app_type = module.params['type']
app_state = module.params['state']
if module.params['machine']:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password'],
module.params['machine']
)
else:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
app_list = webfaction.list_apps(session_id)
app_map = dict([(i['name'], i) for i in app_list])
existing_app = app_map.get(app_name)
result = {}
# Here's where the real stuff happens
if app_state == 'present':
# Does an app with this name already exist?
if existing_app:
if existing_app['type'] != app_type:
module.fail_json(msg="App already exists with different type. Please fix by hand.")
# If it exists with the right type, we don't change it
# Should check other parameters.
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, create the app
result.update(
webfaction.create_app(
session_id, app_name, app_type,
module.boolean(module.params['autostart']),
module.params['extra_info'],
module.boolean(module.params['port_open'])
)
)
elif app_state == 'absent':
# If the app's already not there, nothing changed.
if not existing_app:
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, delete the app
result.update(
webfaction.delete_app(session_id, app_name)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(app_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,200 @@
#!/usr/bin/python
#
# Create a webfaction database using Ansible and the Webfaction API
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser 2015, with contributions gratefully acknowledged from:
# * Andy Baker
# * Federico Tarantini
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_db
short_description: Add or remove a database on Webfaction
description:
- Add or remove a database on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the database
required: true
state:
description:
- Whether the database should exist
required: false
choices: ['present', 'absent']
default: "present"
type:
description:
- The type of database to create.
required: true
choices: ['mysql', 'postgresql']
password:
description:
- The password for the new database user.
required: false
default: None
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
machine:
description:
- The machine name to use (optional for accounts with only one machine)
required: false
'''
EXAMPLES = '''
# This will also create a default DB user with the same
# name as the database, and the specified password.
- name: Create a database
webfaction_db:
name: "{{webfaction_user}}_db1"
password: mytestsql
type: mysql
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
machine: "{{webfaction_machine}}"
# Note that, for symmetry's sake, deleting a database using
# 'state: absent' will also delete the matching user.
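# Illustrative only - the corresponding removal task
# (as noted above, this also deletes the matching default user):
- name: Delete the database
webfaction_db:
name: "{{webfaction_user}}_db1"
type: mysql
state: absent
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
machine: "{{webfaction_machine}}"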
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
type = dict(required=True),
password = dict(required=False, default=None),
login_name = dict(required=True),
login_password = dict(required=True),
machine = dict(required=False, default=False),
),
supports_check_mode=True
)
db_name = module.params['name']
db_state = module.params['state']
db_type = module.params['type']
db_passwd = module.params['password']
if module.params['machine']:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password'],
module.params['machine']
)
else:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
db_list = webfaction.list_dbs(session_id)
db_map = dict([(i['name'], i) for i in db_list])
existing_db = db_map.get(db_name)
user_list = webfaction.list_db_users(session_id)
user_map = dict([(i['username'], i) for i in user_list])
existing_user = user_map.get(db_name)
result = {}
# Here's where the real stuff happens
if db_state == 'present':
# Does a database with this name already exist?
if existing_db:
# Yes, but of a different type - fail
if existing_db['db_type'] != db_type:
module.fail_json(msg="Database already exists but is a different type. Please fix by hand.")
# If it exists with the right type, we don't change anything.
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, create the db
# and default user.
result.update(
webfaction.create_db(
session_id, db_name, db_type, db_passwd
)
)
elif db_state == 'absent':
# If this isn't a dry run...
if not module.check_mode:
if not (existing_db or existing_user):
module.exit_json(changed = False,)
if existing_db:
# Delete the db if it exists
result.update(
webfaction.delete_db(session_id, db_name, db_type)
)
if existing_user:
# Delete the default db user if it exists
result.update(
webfaction.delete_db_user(session_id, db_name, db_type)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(db_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,171 @@
#!/usr/bin/python
#
# Create Webfaction domains and subdomains using Ansible and the Webfaction API
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_domain
short_description: Add or remove domains and subdomains on Webfaction
description:
- Add or remove domains or subdomains on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- If you are I(deleting) domains by using C(state=absent), then note that if you specify subdomains, just those particular subdomains will be deleted. If you don't specify subdomains, the domain will be deleted.
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the domain
required: true
state:
description:
- Whether the domain should exist
required: false
choices: ['present', 'absent']
default: "present"
subdomains:
description:
- Any subdomains to create.
required: false
default: null
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
- name: Create a test domain
webfaction_domain:
name: mydomain.com
state: present
subdomains:
- www
- blog
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
- name: Delete test domain and any subdomains
webfaction_domain:
name: mydomain.com
state: absent
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
subdomains = dict(required=False, default=[]),
login_name = dict(required=True),
login_password = dict(required=True),
),
supports_check_mode=True
)
domain_name = module.params['name']
domain_state = module.params['state']
domain_subdomains = module.params['subdomains']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
domain_list = webfaction.list_domains(session_id)
domain_map = dict([(i['domain'], i) for i in domain_list])
existing_domain = domain_map.get(domain_name)
result = {}
# Here's where the real stuff happens
if domain_state == 'present':
# Does a domain with this name already exist?
if existing_domain:
if set(existing_domain['subdomains']) >= set(domain_subdomains):
# If it exists with the right subdomains, we don't change anything.
module.exit_json(
changed = False,
)
positional_args = [session_id, domain_name] + domain_subdomains
if not module.check_mode:
# If this isn't a dry run, create the domain
result.update(
webfaction.create_domain(
*positional_args
)
)
elif domain_state == 'absent':
# If the domain's already not there, nothing changed.
if not existing_domain:
module.exit_json(
changed = False,
)
positional_args = [session_id, domain_name] + domain_subdomains
if not module.check_mode:
# If this isn't a dry run, delete the domain
result.update(
webfaction.delete_domain(*positional_args)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(domain_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,139 @@
#!/usr/bin/python
#
# Create webfaction mailbox using Ansible and the Webfaction API
#
# ------------------------------------------
# (c) Quentin Stafford-Fraser and Andy Baker 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_mailbox
short_description: Add or remove mailboxes on Webfaction
description:
- Add or remove mailboxes on a Webfaction account. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
mailbox_name:
description:
- The name of the mailbox
required: true
mailbox_password:
description:
- The password for the mailbox
required: true
default: null
state:
description:
- Whether the mailbox should exist
required: false
choices: ['present', 'absent']
default: "present"
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
- name: Create a mailbox
webfaction_mailbox:
mailbox_name="mybox"
mailbox_password="myboxpw"
state=present
login_name={{webfaction_user}}
login_password={{webfaction_passwd}}
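# Illustrative only - removing the same mailbox (mailbox_password is still required by the module):
- name: Remove a mailbox
webfaction_mailbox:
mailbox_name="mybox"
mailbox_password="myboxpw"
state=absent
login_name={{webfaction_user}}
login_password={{webfaction_passwd}}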
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec=dict(
mailbox_name=dict(required=True),
mailbox_password=dict(required=True),
state=dict(required=False, choices=['present', 'absent'], default='present'),
login_name=dict(required=True),
login_password=dict(required=True),
),
supports_check_mode=True
)
mailbox_name = module.params['mailbox_name']
site_state = module.params['state']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
mailbox_list = webfaction.list_mailboxes(session_id)
existing_mailbox = mailbox_name in mailbox_list
result = {}
# Here's where the real stuff happens
if site_state == 'present':
# Does a mailbox with this name already exist?
if existing_mailbox:
module.exit_json(changed=False,)
positional_args = [session_id, mailbox_name]
if not module.check_mode:
# If this isn't a dry run, create the mailbox
result.update(webfaction.create_mailbox(*positional_args))
elif site_state == 'absent':
# If the mailbox is already not there, nothing changed.
if not existing_mailbox:
module.exit_json(changed=False)
if not module.check_mode:
# If this isn't a dry run, delete the mailbox
result.update(webfaction.delete_mailbox(session_id, mailbox_name))
else:
module.fail_json(msg="Unknown state specified: {}".format(site_state))
module.exit_json(changed=True, result=result)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,208 @@
#!/usr/bin/python
#
# Create Webfaction website using Ansible and the Webfaction API
#
# ------------------------------------------
#
# (c) Quentin Stafford-Fraser 2015
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: webfaction_site
short_description: Add or remove a website on a Webfaction host
description:
- Add or remove a website on a Webfaction host. Further documentation at http://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- Sadly, you I(do) need to know your webfaction hostname for the C(host) parameter. But at least, unlike the API, you don't need to know the IP address - you can use a DNS name.
- If a site of the same name exists in the account but on a different host, the operation will exit.
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
name:
description:
- The name of the website
required: true
state:
description:
- Whether the website should exist
required: false
choices: ['present', 'absent']
default: "present"
host:
description:
- The webfaction host on which the site should be created.
required: true
https:
description:
- Whether or not to use HTTPS
required: false
choices: BOOLEANS
default: 'false'
site_apps:
description:
- A mapping of URLs to apps
required: false
subdomains:
description:
- A list of subdomains associated with this site.
required: false
default: null
login_name:
description:
- The webfaction account to use
required: true
login_password:
description:
- The webfaction password to use
required: true
'''
EXAMPLES = '''
- name: create website
webfaction_site:
name: testsite1
state: present
host: myhost.webfaction.com
subdomains:
- 'testsite1.my_domain.org'
site_apps:
- ['testapp1', '/']
https: no
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
'''
import socket
import xmlrpclib
webfaction = xmlrpclib.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True),
state = dict(required=False, choices=['present', 'absent'], default='present'),
# You can specify an IP address or hostname.
host = dict(required=True),
https = dict(required=False, choices=BOOLEANS, default=False),
subdomains = dict(required=False, default=[]),
site_apps = dict(required=False, default=[]),
login_name = dict(required=True),
login_password = dict(required=True),
),
supports_check_mode=True
)
site_name = module.params['name']
site_state = module.params['state']
site_host = module.params['host']
site_ip = socket.gethostbyname(site_host)
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
site_list = webfaction.list_websites(session_id)
site_map = dict([(i['name'], i) for i in site_list])
existing_site = site_map.get(site_name)
result = {}
# Here's where the real stuff happens
if site_state == 'present':
# Does a site with this name already exist?
if existing_site:
# If yes, but it's on a different IP address, then fail.
# If we wanted to allow relocation, we could add a 'relocate=true' option
# which would get the existing IP address, delete the site there, and create it
# at the new address. A bit dangerous, perhaps, so for now we'll require manual
# deletion if it's on another host.
if existing_site['ip'] != site_ip:
module.fail_json(msg="Website already exists with a different IP address. Please fix by hand.")
# If it's on this host and the key parameters are the same, nothing needs to be done.
if (existing_site['https'] == module.boolean(module.params['https'])) and \
(set(existing_site['subdomains']) == set(module.params['subdomains'])) and \
(dict(existing_site['website_apps']) == dict(module.params['site_apps'])):
module.exit_json(
changed = False
)
positional_args = [
session_id, site_name, site_ip,
module.boolean(module.params['https']),
module.params['subdomains'],
]
for a in module.params['site_apps']:
positional_args.append( (a[0], a[1]) )
if not module.check_mode:
# If this isn't a dry run, create or modify the site
result.update(
webfaction.create_website(
*positional_args
) if not existing_site else webfaction.update_website(
*positional_args
)
)
elif site_state == 'absent':
# If the site's already not there, nothing changed.
if not existing_site:
module.exit_json(
changed = False,
)
if not module.check_mode:
# If this isn't a dry run, delete the site
result.update(
webfaction.delete_website(session_id, site_name, site_ip)
)
else:
module.fail_json(msg="Unknown state specified: {}".format(site_state))
module.exit_json(
changed = True,
result = result
)
from ansible.module_utils.basic import *
main()

@ -0,0 +1,506 @@
#!/usr/bin/python
#
# (c) 2015, Steve Gargan <steve.gargan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
module: consul
short_description: "Add, modify & delete services within a consul cluster."
description:
- Registers services and checks for an agent with a consul cluster.
A service is some process running on the agent node that should be advertised by
consul's discovery mechanism. It may optionally supply a check definition,
a periodic service test to notify the consul cluster of the service's health.
- "Checks may also be registered per node e.g. disk usage, or cpu usage and
report the health of the entire node to the cluster.
Service level checks do not require a check name or id as these are derived
by Consul from the Service name and id respectively by appending 'service:'.
Node level checks require a check_name and optionally a check_id."
- Currently, there is no complete way to retrieve the script, interval or ttl
metadata for a registered check. Without this metadata it is not possible to
tell if the data supplied with ansible represents a change to a check. As a
result this module does not attempt to determine changes and will always
report that a change occurred. An api method is planned to supply this metadata so at that
stage change management will be added.
- "See http://consul.io for more details."
requirements:
- "python >= 2.6"
- python-consul
- requests
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
state:
description:
- register or deregister the consul service, defaults to present
required: false
default: present
choices: ['present', 'absent']
service_name:
description:
- Unique name for the service on a node, must be unique per node,
required if registering a service. May be omitted if registering
a node level check
required: false
service_id:
description:
- the ID for the service, must be unique per node, defaults to the
service name if the service name is supplied
required: false
default: service_name if supplied
host:
description:
- host of the consul agent defaults to localhost
required: false
default: localhost
port:
description:
- the port on which the consul agent is running
required: false
default: 8500
notes:
description:
- Notes to attach to check when registering it.
required: false
default: None
service_port:
description:
- the port on which the service is listening. Required for
registration of a service, i.e. if service_name or service_id is set
required: false
tags:
description:
- a list of tags that will be attached to the service registration.
required: false
default: None
script:
description:
- the script/command that will be run periodically to check the health
of the service. Scripts require an interval and vice versa
required: false
default: None
interval:
description:
- the interval at which the service check will be run. This is a number
with a unit suffix, e.g. 15s or 1m; valid units are 'ns', 'us', 'ms',
's', 'm' and 'h', and a suffix must be supplied. Required if the script
param is specified.
required: false
default: None
check_id:
description:
- an ID for the service check, defaults to the check name, ignored if
part of a service definition.
required: false
default: None
check_name:
description:
- a name for the service check, defaults to the check id. required if
standalone, ignored if part of service definition.
required: false
default: None
ttl:
description:
- checks can be registered with a ttl instead of a script and interval
this means that the service will check in with the agent before the
ttl expires. If it doesn't the check will be considered failed.
Required if registering a check and the script and interval are missing.
Like the interval, this is a number with a unit suffix, e.g. 15s or 1m;
valid units are 'ns', 'us', 'ms', 's', 'm' and 'h', and a suffix must
be supplied.
required: false
default: None
token:
description:
- the token key identifying an ACL rule set. May be required to register services.
required: false
default: None
"""
EXAMPLES = '''
- name: register nginx service with the local consul agent
consul:
service_name: nginx
service_port: 80
- name: register nginx service with curl check
consul:
service_name: nginx
service_port: 80
script: "curl http://localhost"
interval: 60s
- name: register nginx with some service tags
consul:
service_name: nginx
service_port: 80
tags:
- prod
- webservers
- name: remove nginx service
consul:
service_name: nginx
state: absent
- name: create a node level check to test disk usage
consul:
check_name: Disk usage
check_id: disk_usage
script: "/opt/disk_usage.py"
interval: 5m
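# Illustrative only - a ttl based check; something must report back to the local
# agent before the ttl expires or the check is considered failed:
- name: register an app heartbeat check with a ttl
consul:
check_name: app_heartbeat
check_id: app_heartbeat
ttl: 30s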
'''
import sys
try:
import json
except ImportError:
import simplejson as json
try:
import consul
from requests.exceptions import ConnectionError
python_consul_installed = True
except ImportError, e:
python_consul_installed = False
def register_with_consul(module):
state = module.params.get('state')
if state == 'present':
add(module)
else:
remove(module)
def add(module):
''' adds a service or a check depending on supplied configuration'''
check = parse_check(module)
service = parse_service(module)
if not service and not check:
module.fail_json(msg='a name and port are required to register a service')
if service:
if check:
service.add_check(check)
add_service(module, service)
elif check:
add_check(module, check)
def remove(module):
''' removes a service or a check '''
service_id = module.params.get('service_id') or module.params.get('service_name')
check_id = module.params.get('check_id') or module.params.get('check_name')
if not (service_id or check_id):
module.fail_json(msg='services and checks are removed by id or name.'\
' please supply a service id/name or a check id/name')
if service_id:
remove_service(module, service_id)
else:
remove_check(module, check_id)
def add_check(module, check):
''' registers a check with the given agent. currently there is no way
to retrieve the full metadata of an existing check through the consul api.
Without this we can't compare to the supplied check and so we must assume
a change. '''
if not check.name:
module.fail_json(msg='a check name is required for a node level check,'\
' one not attached to a service')
consul_api = get_consul_api(module)
check.register(consul_api)
module.exit_json(changed=True,
check_id=check.check_id,
check_name=check.name,
script=check.script,
interval=check.interval,
ttl=check.ttl)
def remove_check(module, check_id):
''' removes a check using its id '''
consul_api = get_consul_api(module)
if check_id in consul_api.agent.checks():
consul_api.agent.check.deregister(check_id)
module.exit_json(changed=True, id=check_id)
module.exit_json(changed=False, id=check_id)
def add_service(module, service):
''' registers a service with the current agent '''
result = service
changed = False
consul_api = get_consul_api(module)
existing = get_service_by_id(consul_api, service.id)
# there is no way to retrieve the details of checks so if a check is present
# in the service it must be reregistered
if service.has_checks() or not existing or not existing == service:
service.register(consul_api)
# check that it registered correctly
registered = get_service_by_id(consul_api, service.id)
if registered:
result = registered
changed = True
module.exit_json(changed=changed,
service_id=result.id,
service_name=result.name,
service_port=result.port,
checks=map(lambda x: x.to_dict(), service.checks),
tags=result.tags)
def remove_service(module, service_id):
''' deregister a service from the given agent using its service id '''
consul_api = get_consul_api(module)
service = get_service_by_id(consul_api, service_id)
if service:
consul_api.agent.service.deregister(service_id)
module.exit_json(changed=True, id=service_id)
module.exit_json(changed=False, id=service_id)
def get_consul_api(module, token=None):
return consul.Consul(host=module.params.get('host'),
port=module.params.get('port'),
token=module.params.get('token'))
def get_service_by_id(consul_api, service_id):
''' iterate the registered services and find one with the given id '''
for name, service in consul_api.agent.services().iteritems():
if service['ID'] == service_id:
return ConsulService(loaded=service)
def parse_check(module):
if module.params.get('script') and module.params.get('ttl'):
module.fail_json(
msg='checks are either script or ttl driven, supplying both does'\
' not make sense')
if module.params.get('check_id') or module.params.get('script') or module.params.get('ttl'):
return ConsulCheck(
module.params.get('check_id'),
module.params.get('check_name'),
module.params.get('check_node'),
module.params.get('check_host'),
module.params.get('script'),
module.params.get('interval'),
module.params.get('ttl'),
module.params.get('notes')
)
def parse_service(module):
if module.params.get('service_name') and module.params.get('service_port'):
return ConsulService(
module.params.get('service_id'),
module.params.get('service_name'),
module.params.get('service_port'),
module.params.get('tags'),
)
elif module.params.get('service_name') and not module.params.get('service_port'):
module.fail_json(
msg="service_name supplied but no service_port, a port is required"\
" to configure a service. Did you configure the 'port' "\
"argument meaning 'service_port'?")
class ConsulService():
def __init__(self, service_id=None, name=None, port=-1,
tags=None, loaded=None):
self.id = self.name = name
if service_id:
self.id = service_id
self.port = port
self.tags = tags
self.checks = []
if loaded:
self.id = loaded['ID']
self.name = loaded['Service']
self.port = loaded['Port']
self.tags = loaded['Tags']
def register(self, consul_api):
if len(self.checks) > 0:
check = self.checks[0]
consul_api.agent.service.register(
self.name,
service_id=self.id,
port=self.port,
tags=self.tags,
script=check.script,
interval=check.interval,
ttl=check.ttl)
else:
consul_api.agent.service.register(
self.name,
service_id=self.id,
port=self.port,
tags=self.tags)
def add_check(self, check):
self.checks.append(check)
def has_checks(self):
return len(self.checks) > 0
def __eq__(self, other):
return (isinstance(other, self.__class__)
and self.id == other.id
and self.name == other.name
and self.port == other.port
and self.tags == other.tags)
def __ne__(self, other):
return not self.__eq__(other)
def to_dict(self):
data = {'id': self.id, "name": self.name}
if self.port:
data['port'] = self.port
if self.tags and len(self.tags) > 0:
data['tags'] = self.tags
if len(self.checks) > 0:
data['check'] = self.checks[0].to_dict()
return data
class ConsulCheck():
def __init__(self, check_id, name, node=None, host='localhost',
script=None, interval=None, ttl=None, notes=None):
self.check_id = self.name = name
if check_id:
self.check_id = check_id
self.script = script
self.interval = self.validate_duration('interval', interval)
self.ttl = self.validate_duration('ttl', ttl)
self.notes = notes
self.node = node
self.host = host
def validate_duration(self, name, duration):
if duration:
duration_units = ['ns', 'us', 'ms', 's', 'm', 'h']
if not any((duration.endswith(suffix) for suffix in duration_units)):
raise Exception('Invalid %s %s you must specify units (%s)' %
(name, duration, ', '.join(duration_units)))
return duration
def register(self, consul_api):
consul_api.agent.check.register(self.name, check_id=self.check_id,
script=self.script,
interval=self.interval,
ttl=self.ttl, notes=self.notes)
def __eq__(self, other):
return (isinstance(other, self.__class__)
and self.check_id == other.check_id
and self.name == other.name
and self.script == other.script
and self.interval == other.interval)
def __ne__(self, other):
return not self.__eq__(other)
def to_dict(self):
data = {}
self._add(data, 'id', attr='check_id')
self._add(data, 'name')
self._add(data, 'script')
self._add(data, 'node')
self._add(data, 'notes')
self._add(data, 'host')
self._add(data, 'interval')
self._add(data, 'ttl')
return data
def _add(self, data, key, attr=None):
try:
if attr is None:
attr = key
data[key] = getattr(self, attr)
except:
pass
def test_dependencies(module):
if not python_consul_installed:
module.fail_json(msg="python-consul required for this module. "\
"see http://python-consul.readthedocs.org/en/latest/#installation")
def main():
module = AnsibleModule(
argument_spec=dict(
host=dict(default='localhost'),
port=dict(default=8500, type='int'),
check_id=dict(required=False),
check_name=dict(required=False),
check_node=dict(required=False),
check_host=dict(required=False),
notes=dict(required=False),
script=dict(required=False),
service_id=dict(required=False),
service_name=dict(required=False),
service_port=dict(required=False, type='int'),
state=dict(default='present', choices=['present', 'absent']),
interval=dict(required=False, type='str'),
ttl=dict(required=False, type='str'),
tags=dict(required=False, type='list'),
token=dict(required=False)
),
supports_check_mode=False,
)
test_dependencies(module)
try:
register_with_consul(module)
except ConnectionError, e:
module.fail_json(msg='Could not connect to consul agent at %s:%s, error was %s' % (
module.params.get('host'), module.params.get('port'), str(e)))
except Exception, e:
module.fail_json(msg=str(e))
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,321 @@
#!/usr/bin/python
#
# (c) 2015, Steve Gargan <steve.gargan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
module: consul_acl
short_description: "manipulate consul acl keys and rules"
description:
- allows the addition, modification and deletion of ACL keys and associated
rules in a consul cluster via the agent. For more details on using and
configuring ACLs, see https://www.consul.io/docs/internals/acl.html.
requirements:
- "python >= 2.6"
- python-consul
- pyhcl
- requests
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
mgmt_token:
description:
- a management token is required to manipulate the acl lists
state:
description:
- whether the ACL pair should be present or absent, defaults to present
required: false
choices: ['present', 'absent']
type:
description:
- the type of token that should be created, either management or
client, defaults to client
choices: ['client', 'management']
name:
description:
- the name that should be associated with the acl key, this is opaque
to Consul
required: false
token:
description:
- the token key identifying an ACL rule set. If generated by consul
this will be a UUID.
required: false
rules:
description:
- a list of the rules that should be associated with a given key/token.
required: false
host:
description:
- host of the consul agent defaults to localhost
required: false
default: localhost
port:
description:
- the port on which the consul agent is running
required: false
default: 8500
"""
EXAMPLES = '''
- name: create an acl token with rules
consul_acl:
mgmt_token: 'some_management_acl'
host: 'consul1.mycluster.io'
name: 'Foo access'
rules:
- key: 'foo'
policy: read
- key: 'private/foo'
policy: deny
- name: remove a token
consul_acl:
mgmt_token: 'some_management_acl'
host: 'consul1.mycluster.io'
token: '172bd5c8-9fe9-11e4-b1b0-3c15c2c9fd5e'
state: absent
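# Illustrative only - supplying an existing token together with rules
# updates the rule set attached to that token:
- name: update the rules associated with an existing acl token
consul_acl:
mgmt_token: 'some_management_acl'
host: 'consul1.mycluster.io'
name: 'Foo access'
token: '172bd5c8-9fe9-11e4-b1b0-3c15c2c9fd5e'
rules:
- key: 'foo'
policy: write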
'''
import sys
try:
import consul
from requests.exceptions import ConnectionError
python_consul_installed = True
except ImportError, e:
python_consul_installed = False
try:
import hcl
pyhcl_installed = True
except ImportError:
pyhcl_installed = False
def execute(module):
state = module.params.get('state')
if state == 'present':
update_acl(module)
else:
remove_acl(module)
def update_acl(module):
rules = module.params.get('rules')
state = module.params.get('state')
token = module.params.get('token')
token_type = module.params.get('token_type')
mgmt = module.params.get('mgmt_token')
name = module.params.get('name')
consul = get_consul_api(module, mgmt)
changed = False
try:
if token:
existing_rules = load_rules_for_token(module, consul, token)
supplied_rules = yml_to_rules(module, rules)
changed = not existing_rules == supplied_rules
if changed:
token = consul.acl.update(
token,
name=name,
type=token_type,
rules=supplied_rules.to_hcl())
else:
try:
rules = yml_to_rules(module, rules)
if rules.are_rules():
rules = rules.to_json()
else:
rules = None
token = consul.acl.create(
name=name, type=token_type, rules=rules)
changed = True
except Exception, e:
module.fail_json(
msg="No token returned, check your managment key and that \
the host is in the acl datacenter %s" % e)
except Exception, e:
module.fail_json(msg="Could not create/update acl %s" % e)
module.exit_json(changed=changed,
token=token,
rules=rules,
name=name,
type=token_type)
def remove_acl(module):
state = module.params.get('state')
token = module.params.get('token')
mgmt = module.params.get('mgmt_token')
consul = get_consul_api(module, token=mgmt)
changed = token and consul.acl.info(token)
if changed:
token = consul.acl.destroy(token)
module.exit_json(changed=changed, token=token)
def load_rules_for_token(module, consul_api, token):
try:
rules = Rules()
info = consul_api.acl.info(token)
if info and info['Rules']:
rule_set = to_ascii(info['Rules'])
for rule in hcl.loads(rule_set).values():
for key, policy in rule.iteritems():
rules.add_rule(Rule(key, policy['policy']))
return rules
except Exception, e:
module.fail_json(
msg="Could not load rule list from retrieved rule data %s, %s" % (
token, e))
def to_ascii(unicode_string):
if isinstance(unicode_string, unicode):
return unicode_string.encode('ascii', 'ignore')
return unicode_string
def yml_to_rules(module, yml_rules):
rules = Rules()
if yml_rules:
for rule in yml_rules:
if not ('key' in rule and 'policy' in rule):
module.fail_json(msg="a rule requires a key and a policy.")
rules.add_rule(Rule(rule['key'], rule['policy']))
return rules
template = '''key "%s" {
policy = "%s"
}'''
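# For illustration: a Rules object holding Rule('foo', 'read') renders via to_hcl()
# roughly as:
#   key "foo" {
#   policy = "read"
#   }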
class Rules:
def __init__(self):
self.rules = {}
def add_rule(self, rule):
self.rules[rule.key] = rule
def are_rules(self):
return len(self.rules) > 0
def to_json(self):
rules = {}
for key, rule in self.rules.iteritems():
rules[key] = {'policy': rule.policy}
return json.dumps({'keys': rules})
def to_hcl(self):
rules = ""
for key, rule in self.rules.iteritems():
rules += template % (key, rule.policy)
return to_ascii(rules)
def __eq__(self, other):
if not (other and isinstance(other, self.__class__)
and len(other.rules) == len(self.rules)):
return False
for name, other_rule in other.rules.iteritems():
if not name in self.rules:
return False
rule = self.rules[name]
if not (rule and rule == other_rule):
return False
return True
def __str__(self):
return self.to_hcl()
class Rule:
def __init__(self, key, policy):
self.key = key
self.policy = policy
def __eq__(self, other):
return (isinstance(other, self.__class__)
and self.key == other.key
and self.policy == other.policy)
def __hash__(self):
return hash(self.key) ^ hash(self.policy)
def __str__(self):
return '%s %s' % (self.key, self.policy)
def get_consul_api(module, token=None):
if not token:
token = module.params.get('token')
return consul.Consul(host=module.params.get('host'),
port=module.params.get('port'),
token=token)
def test_dependencies(module):
if not python_consul_installed:
module.fail_json(msg="python-consul required for this module. "\
"see http://python-consul.readthedocs.org/en/latest/#installation")
if not pyhcl_installed:
module.fail_json( msg="pyhcl required for this module."\
" see https://pypi.python.org/pypi/pyhcl")
def main():
argument_spec = dict(
mgmt_token=dict(required=True),
host=dict(default='localhost'),
name=dict(required=False),
port=dict(default=8500, type='int'),
rules=dict(default=None, required=False, type='list'),
state=dict(default='present', choices=['present', 'absent']),
token=dict(required=False),
token_type=dict(
required=False, choices=['client', 'management'], default='client')
)
module = AnsibleModule(argument_spec, supports_check_mode=False)
test_dependencies(module)
try:
execute(module)
except ConnectionError, e:
module.fail_json(msg='Could not connect to consul agent at %s:%s, error was %s' % (
module.params.get('host'), module.params.get('port'), str(e)))
except Exception, e:
module.fail_json(msg=str(e))
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,264 @@
#!/usr/bin/python
#
# (c) 2015, Steve Gargan <steve.gargan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
module: consul_kv
short_description: Manipulate entries in the key/value store of a consul cluster.
description:
- Allows the addition, modification and deletion of key/value entries in a
consul cluster via the agent. The entire contents of the record, including
the indices, flags and session are returned as 'value'.
- If the key represents a prefix, entries under that prefix can be handled by
setting 'recurse'. Note that when a value is removed, the existing value, if
any, is returned as part of the results.
- "See http://www.consul.io/docs/agent/http.html#kv for more details."
requirements:
- "python >= 2.6"
- python-consul
- requests
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
state:
description:
- the action to take with the supplied key and value. If the state is
'present', the key contents will be set to the value supplied,
'changed' will be set to true only if the value was different to the
current contents. The state 'absent' will remove the key/value pair,
again 'changed' will be set to true only if the key actually existed
prior to the removal. An attempt can be made to obtain or free the
lock associated with a key/value pair with the states 'acquire' or
'release' respectively. A valid session must be supplied to make the
attempt; changed will be true if the attempt is successful, false
otherwise.
required: false
choices: ['present', 'absent', 'acquire', 'release']
default: present
key:
description:
- the key at which the value should be stored.
required: true
value:
description:
- the value to be associated with the given key; required if state
is 'present'
required: true
recurse:
description:
- if the key represents a prefix, each entry with the prefix can be
retrieved by setting this to true.
required: false
default: false
session:
description:
- the session that should be used to acquire or release a lock
associated with a key/value pair
required: false
default: None
token:
description:
- the token key identifying an ACL rule set that controls access to
the key value pair
required: false
default: None
cas:
description:
- used when acquiring a lock with a session. If the cas is 0, then
Consul will only put the key if it does not already exist. If the
cas value is non-zero, then the key is only set if the index matches
the ModifyIndex of that key.
required: false
default: None
flags:
description:
- opaque integer value that can be passed when setting a value.
required: false
default: None
host:
description:
- host of the consul agent defaults to localhost
required: false
default: localhost
port:
description:
- the port on which the consul agent is running
required: false
default: 8500
"""
EXAMPLES = '''
- name: add or update the value associated with a key in the key/value store
consul_kv:
key: somekey
value: somevalue
- name: remove a key from the store
consul_kv:
key: somekey
state: absent
- name: add a node to an arbitrary group via consul inventory (see consul.ini)
consul_kv:
key: ansible/groups/dc1/somenode
value: 'top_secret'
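# Illustrative only - a valid session id (for example from consul_session) is assumed:
- name: acquire a lock on a key/value pair using a session
consul_kv:
key: somekey
value: somevalue
session: "{{ session_id }}"
state: acquire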
'''
import sys
try:
import json
except ImportError:
import simplejson as json
try:
import consul
from requests.exceptions import ConnectionError
python_consul_installed = True
except ImportError, e:
python_consul_installed = False
def execute(module):
state = module.params.get('state')
if state == 'acquire' or state == 'release':
lock(module, state)
elif state == 'present':
add_value(module)
else:
remove_value(module)
def lock(module, state):
consul_api = get_consul_api(module)
session = module.params.get('session')
key = module.params.get('key')
value = module.params.get('value')
if not session:
module.fail_json(
msg='%s of lock for %s requested but no session supplied' %
(state, key))
if state == 'acquire':
successful = consul_api.kv.put(key, value,
cas=module.params.get('cas'),
acquire=session,
flags=module.params.get('flags'))
else:
successful = consul_api.kv.put(key, value,
cas=module.params.get('cas'),
release=session,
flags=module.params.get('flags'))
module.exit_json(changed=successful,
key=key)
def add_value(module):
consul_api = get_consul_api(module)
key = module.params.get('key')
value = module.params.get('value')
index, existing = consul_api.kv.get(key)
changed = not existing or (existing and existing['Value'] != value)
if changed and not module.check_mode:
changed = consul_api.kv.put(key, value,
cas=module.params.get('cas'),
flags=module.params.get('flags'))
stored = None
if module.params.get('retrieve'):
index, stored = consul_api.kv.get(key)
module.exit_json(changed=changed,
index=index,
key=key,
data=stored)
def remove_value(module):
''' remove the value associated with the given key. if the recurse parameter
is set then any key prefixed with the given key will be removed. '''
consul_api = get_consul_api(module)
key = module.params.get('key')
value = module.params.get('value')
index, existing = consul_api.kv.get(
key, recurse=module.params.get('recurse'))
changed = existing is not None
if changed and not module.check_mode:
consul_api.kv.delete(key, module.params.get('recurse'))
module.exit_json(changed=changed,
index=index,
key=key,
data=existing)
def get_consul_api(module, token=None):
return consul.Consul(host=module.params.get('host'),
port=module.params.get('port'),
token=module.params.get('token'))
def test_dependencies(module):
if not python_consul_installed:
module.fail_json(msg="python-consul required for this module. "\
"see http://python-consul.readthedocs.org/en/latest/#installation")
def main():
argument_spec = dict(
cas=dict(required=False),
flags=dict(required=False),
key=dict(required=True),
host=dict(default='localhost'),
port=dict(default=8500, type='int'),
recurse=dict(required=False, type='bool'),
retrieve=dict(required=False, default=True),
state=dict(default='present', choices=['present', 'absent', 'acquire', 'release']),
token=dict(required=False, default='anonymous'),
value=dict(required=False)
)
module = AnsibleModule(argument_spec, supports_check_mode=False)
test_dependencies(module)
try:
execute(module)
except ConnectionError, e:
module.fail_json(msg='Could not connect to consul agent at %s:%s, error was %s' % (
module.params.get('host'), module.params.get('port'), str(e)))
except Exception, e:
module.fail_json(msg=str(e))
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,269 @@
#!/usr/bin/python
#
# (c) 2015, Steve Gargan <steve.gargan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
module: consul_session
short_description: "manipulate consul sessions"
description:
- allows the addition, modification and deletion of sessions in a consul
cluster. These sessions can then be used in conjunction with key value pairs
to implement distributed locks. In depth documentation for working with
sessions can be found here http://www.consul.io/docs/internals/sessions.html
requirements:
- "python >= 2.6"
- python-consul
- requests
version_added: "2.0"
author: "Steve Gargan (@sgargan)"
options:
state:
description:
- whether the session should be present i.e. created if it doesn't
exist, or absent, removed if present. If created, the ID for the
session is returned in the output. If absent, the name or ID is
required to remove the session. Info for a single session, all the
sessions for a node or all available sessions can be retrieved by
specifying info, node or list for the state; for node or info, the
node name or session id is required as parameter.
required: false
choices: ['present', 'absent', 'info', 'node', 'list']
default: present
name:
description:
- the name that should be associated with the session. This is opaque
to Consul and not required.
required: false
default: None
delay:
description:
- the optional lock delay that can be attached to the session when it
            is created. Locks for invalidated sessions are blocked from being
acquired until this delay has expired. Valid units for delays
include 'ns', 'us', 'ms', 's', 'm', 'h'
default: 15s
required: false
node:
description:
          - the name of the node with which the session will be associated;
by default this is the name of the agent.
required: false
default: None
datacenter:
description:
- name of the datacenter in which the session exists or should be
created.
required: false
default: None
checks:
description:
- a list of checks that will be used to verify the session health. If
all the checks fail, the session will be invalidated and any locks
            associated with the session will be released and can be acquired once
the associated lock delay has expired.
required: false
default: None
host:
description:
          - host of the consul agent, defaults to localhost
required: false
default: localhost
port:
description:
- the port on which the consul agent is running
required: false
default: 8500
"""
EXAMPLES = '''
- name: register basic session with consul
consul_session:
name: session1
- name: register a session with an existing check
consul_session:
name: session_with_check
checks:
- existing_check_name
- name: register a session with lock_delay
consul_session:
name: session_with_delay
delay: 20s
- name: retrieve info about session by id
consul_session: id=session_id state=info
- name: retrieve active sessions
consul_session: state=list
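# Removing a session needs the id returned at registration time; session_id
# below is a placeholder.
- name: remove a session by id
  consul_session: id={{ session_id }} state=absent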
'''
import sys
try:
import consul
from requests.exceptions import ConnectionError
python_consul_installed = True
except ImportError, e:
python_consul_installed = False
def execute(module):
state = module.params.get('state')
if state in ['info', 'list', 'node']:
lookup_sessions(module)
elif state == 'present':
update_session(module)
else:
remove_session(module)
def lookup_sessions(module):
datacenter = module.params.get('datacenter')
state = module.params.get('state')
consul = get_consul_api(module)
try:
if state == 'list':
sessions_list = consul.session.list(dc=datacenter)
#ditch the index, this can be grabbed from the results
if sessions_list and sessions_list[1]:
sessions_list = sessions_list[1]
module.exit_json(changed=True,
sessions=sessions_list)
elif state == 'node':
node = module.params.get('node')
if not node:
module.fail_json(
msg="node name is required to retrieve sessions for node")
sessions = consul.session.node(node, dc=datacenter)
module.exit_json(changed=True,
node=node,
sessions=sessions)
elif state == 'info':
session_id = module.params.get('id')
if not session_id:
module.fail_json(
                    msg="session_id is required to retrieve individual session info")
session_by_id = consul.session.info(session_id, dc=datacenter)
module.exit_json(changed=True,
session_id=session_id,
sessions=session_by_id)
except Exception, e:
module.fail_json(msg="Could not retrieve session info %s" % e)
def update_session(module):
name = module.params.get('name')
session_id = module.params.get('id')
delay = module.params.get('delay')
checks = module.params.get('checks')
datacenter = module.params.get('datacenter')
node = module.params.get('node')
consul = get_consul_api(module)
changed = True
try:
session = consul.session.create(
name=name,
node=node,
lock_delay=validate_duration('delay', delay),
dc=datacenter,
checks=checks
)
module.exit_json(changed=True,
session_id=session,
name=name,
delay=delay,
checks=checks,
node=node)
except Exception, e:
module.fail_json(msg="Could not create/update session %s" % e)
def remove_session(module):
session_id = module.params.get('id')
if not session_id:
module.fail_json(msg="""A session id must be supplied in order to
remove a session.""")
consul = get_consul_api(module)
changed = False
try:
session = consul.session.destroy(session_id)
module.exit_json(changed=True,
session_id=session_id)
except Exception, e:
module.fail_json(msg="Could not remove session with id '%s' %s" % (
session_id, e))
def validate_duration(name, duration):
if duration:
duration_units = ['ns', 'us', 'ms', 's', 'm', 'h']
if not any((duration.endswith(suffix) for suffix in duration_units)):
raise Exception('Invalid %s %s you must specify units (%s)' %
(name, duration, ', '.join(duration_units)))
return duration
def get_consul_api(module):
return consul.Consul(host=module.params.get('host'),
port=module.params.get('port'))
def test_dependencies(module):
if not python_consul_installed:
module.fail_json(msg="python-consul required for this module. "\
"see http://python-consul.readthedocs.org/en/latest/#installation")
def main():
argument_spec = dict(
checks=dict(default=None, required=False, type='list'),
delay=dict(required=False,type='str', default='15s'),
host=dict(default='localhost'),
port=dict(default=8500, type='int'),
id=dict(required=False),
name=dict(required=False),
node=dict(required=False),
state=dict(default='present',
choices=['present', 'absent', 'info', 'node', 'list'])
)
module = AnsibleModule(argument_spec, supports_check_mode=False)
test_dependencies(module)
try:
execute(module)
except ConnectionError, e:
module.fail_json(msg='Could not connect to consul agent at %s:%s, error was %s' % (
module.params.get('host'), module.params.get('port'), str(e)))
except Exception, e:
module.fail_json(msg=str(e))
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -0,0 +1,177 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import datetime
try:
import pexpect
HAS_PEXPECT = True
except ImportError:
HAS_PEXPECT = False
DOCUMENTATION = '''
---
module: expect
version_added: 2.0
short_description: Executes a command and responds to prompts
description:
- The M(expect) module executes a command and responds to prompts
- The given command will be executed on all selected nodes. It will not be
processed through the shell, so variables like C($HOME) and operations
like C("<"), C(">"), C("|"), and C("&") will not work
options:
command:
description:
      - the command to run.
required: true
creates:
description:
- a filename, when it already exists, this step will B(not) be run.
required: false
removes:
description:
- a filename, when it does not exist, this step will B(not) be run.
required: false
chdir:
description:
- cd into this directory before running the command
required: false
responses:
description:
- Mapping of expected string and string to respond with
required: true
timeout:
description:
- Amount of time in seconds to wait for the expected strings
default: 30
echo:
description:
- Whether or not to echo out your response strings
default: false
requirements:
- python >= 2.6
- pexpect >= 3.3
notes:
- If you want to run a command through the shell (say you are using C(<),
C(>), C(|), etc), you must specify a shell in the command such as
C(/bin/bash -c "/path/to/something | grep else")
author: "Matt Martz (@sivel)"
'''
EXAMPLES = '''
- expect:
command: passwd username
responses:
(?i)password: "MySekretPa$$word"
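# Illustrative only: several prompts answered in one run, with a longer timeout
# and echoed responses; the command and prompt patterns are placeholders.
- expect:
    command: /opt/example/interactive_installer
    timeout: 120
    echo: yes
    responses:
      'Accept the license.*': 'yes'
      'Installation directory.*': '/opt/example'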
'''
def main():
module = AnsibleModule(
argument_spec=dict(
command=dict(required=True),
chdir=dict(),
creates=dict(),
removes=dict(),
responses=dict(type='dict', required=True),
timeout=dict(type='int', default=30),
echo=dict(type='bool', default=False),
)
)
if not HAS_PEXPECT:
module.fail_json(msg='The pexpect python module is required')
chdir = module.params['chdir']
args = module.params['command']
creates = module.params['creates']
removes = module.params['removes']
responses = module.params['responses']
timeout = module.params['timeout']
echo = module.params['echo']
events = dict()
for key, value in responses.iteritems():
events[key.decode()] = u'%s\n' % value.rstrip('\n').decode()
if args.strip() == '':
module.fail_json(rc=256, msg="no command given")
if chdir:
chdir = os.path.abspath(os.path.expanduser(chdir))
os.chdir(chdir)
if creates:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of command executions.
v = os.path.expanduser(creates)
if os.path.exists(v):
module.exit_json(
cmd=args,
stdout="skipped, since %s exists" % v,
changed=False,
stderr=False,
rc=0
)
if removes:
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of command executions.
v = os.path.expanduser(removes)
if not os.path.exists(v):
module.exit_json(
cmd=args,
stdout="skipped, since %s does not exist" % v,
changed=False,
stderr=False,
rc=0
)
startd = datetime.datetime.now()
try:
out, rc = pexpect.runu(args, timeout=timeout, withexitstatus=True,
events=events, cwd=chdir, echo=echo)
except pexpect.ExceptionPexpect, e:
module.fail_json(msg='%s' % e)
endd = datetime.datetime.now()
delta = endd - startd
if out is None:
out = ''
module.exit_json(
cmd=args,
stdout=out.rstrip('\r\n'),
rc=rc,
start=str(startd),
end=str(endd),
delta=str(delta),
changed=True,
)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -87,11 +87,19 @@ options:
required: false required: false
default: present default: present
choices: [ "present", "absent" ] choices: [ "present", "absent" ]
update_password:
required: false
default: always
choices: ['always', 'on_create']
version_added: "2.1"
description:
- C(always) will update passwords if they differ. C(on_create) will only set the password for newly created users.
notes: notes:
- Requires the pymongo Python package on the remote host, version 2.4.2+. This - Requires the pymongo Python package on the remote host, version 2.4.2+. This
can be installed using pip or the OS package manager. @see http://api.mongodb.org/python/current/installation.html can be installed using pip or the OS package manager. @see http://api.mongodb.org/python/current/installation.html
requirements: [ "pymongo" ] requirements: [ "pymongo" ]
author: Elliott Foster author: "Elliott Foster (@elliotttf)"
''' '''
EXAMPLES = ''' EXAMPLES = '''
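# A hedged sketch of the new update_password option: with on_create the password
# is only set when the user is first created; database, user and password below
# are placeholders.
- mongodb_user: database=burgers name=bob password=12345 update_password=on_create state=present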
@ -134,7 +142,15 @@ else:
# MongoDB module specific support methods. # MongoDB module specific support methods.
# #
def user_find(client, user):
for mongo_user in client["admin"].system.users.find():
if mongo_user['user'] == user:
return mongo_user
return False
def user_add(module, client, db_name, user, password, roles): def user_add(module, client, db_name, user, password, roles):
    #pymongo's user_add is a _create_or_update_user so we won't know if it was changed or updated
#without reproducing a lot of the logic in database.py of pymongo
db = client[db_name] db = client[db_name]
if roles is None: if roles is None:
db.add_user(user, password, False) db.add_user(user, password, False)
@ -147,9 +163,13 @@ def user_add(module, client, db_name, user, password, roles):
err_msg = err_msg + ' (Note: you must be on mongodb 2.4+ and pymongo 2.5+ to use the roles param)' err_msg = err_msg + ' (Note: you must be on mongodb 2.4+ and pymongo 2.5+ to use the roles param)'
module.fail_json(msg=err_msg) module.fail_json(msg=err_msg)
def user_remove(client, db_name, user): def user_remove(module, client, db_name, user):
db = client[db_name] exists = user_find(client, user)
db.remove_user(user) if exists:
db = client[db_name]
db.remove_user(user)
else:
module.exit_json(changed=False, user=user)
def load_mongocnf(): def load_mongocnf():
config = ConfigParser.RawConfigParser() config = ConfigParser.RawConfigParser()
@ -184,6 +204,7 @@ def main():
ssl=dict(default=False), ssl=dict(default=False),
roles=dict(default=None, type='list'), roles=dict(default=None, type='list'),
state=dict(default='present', choices=['absent', 'present']), state=dict(default='present', choices=['absent', 'present']),
update_password=dict(default="always", choices=["always", "on_create"]),
) )
) )
@ -201,39 +222,38 @@ def main():
ssl = module.params['ssl'] ssl = module.params['ssl']
roles = module.params['roles'] roles = module.params['roles']
state = module.params['state'] state = module.params['state']
update_password = module.params['update_password']
try: try:
if replica_set: if replica_set:
client = MongoClient(login_host, int(login_port), replicaset=replica_set, ssl=ssl) client = MongoClient(login_host, int(login_port), replicaset=replica_set, ssl=ssl)
else: else:
client = MongoClient(login_host, int(login_port), ssl=ssl) client = MongoClient(login_host, int(login_port), ssl=ssl)
# try to authenticate as a target user to check if it already exists
try:
client[db_name].authenticate(user, password)
if state == 'present':
module.exit_json(changed=False, user=user)
except OperationFailure:
if state == 'absent':
module.exit_json(changed=False, user=user)
if login_user is None and login_password is None: if login_user is None and login_password is None:
mongocnf_creds = load_mongocnf() mongocnf_creds = load_mongocnf()
if mongocnf_creds is not False: if mongocnf_creds is not False:
login_user = mongocnf_creds['user'] login_user = mongocnf_creds['user']
login_password = mongocnf_creds['password'] login_password = mongocnf_creds['password']
elif login_password is None and login_user is not None: elif login_password is None or login_user is None:
module.fail_json(msg='when supplying login arguments, both login_user and login_password must be provided') module.fail_json(msg='when supplying login arguments, both login_user and login_password must be provided')
if login_user is not None and login_password is not None: if login_user is not None and login_password is not None:
client.admin.authenticate(login_user, login_password) client.admin.authenticate(login_user, login_password)
elif LooseVersion(PyMongoVersion) >= LooseVersion('3.0'):
if db_name != "admin":
module.fail_json(msg='The localhost login exception only allows the first admin account to be created')
#else: this has to be the first admin user added
except ConnectionFailure, e: except ConnectionFailure, e:
module.fail_json(msg='unable to connect to database: %s' % str(e)) module.fail_json(msg='unable to connect to database: %s' % str(e))
if state == 'present': if state == 'present':
if password is None: if password is None and update_password == 'always':
module.fail_json(msg='password parameter required when adding a user') module.fail_json(msg='password parameter required when adding a user unless update_password is set to on_create')
if update_password != 'always' and user_find(client, user):
password = None
try: try:
user_add(module, client, db_name, user, password, roles) user_add(module, client, db_name, user, password, roles)
@ -242,7 +262,7 @@ def main():
elif state == 'absent': elif state == 'absent':
try: try:
user_remove(client, db_name, user) user_remove(module, client, db_name, user)
except OperationFailure, e: except OperationFailure, e:
module.fail_json(msg='Unable to remove user: %s' % str(e)) module.fail_json(msg='Unable to remove user: %s' % str(e))

@ -98,7 +98,7 @@ notes:
this needs to be in the redis.conf in the masterauth variable this needs to be in the redis.conf in the masterauth variable
requirements: [ redis ] requirements: [ redis ]
author: Xabier Larrakoetxea author: "Xabier Larrakoetxea (@slok)"
''' '''
EXAMPLES = ''' EXAMPLES = '''

@ -26,6 +26,9 @@ description:
- This module can be used to join nodes to a cluster, check - This module can be used to join nodes to a cluster, check
the status of the cluster. the status of the cluster.
version_added: "1.2" version_added: "1.2"
author:
- "James Martin (@jsmartin)"
- "Drew Kerrigan (@drewkerrigan)"
options: options:
command: command:
description: description:
@ -94,7 +97,6 @@ EXAMPLES = '''
- riak: wait_for_service=kv - riak: wait_for_service=kv
''' '''
import urllib2
import time import time
import socket import socket
import sys import sys
@ -251,5 +253,5 @@ def main():
# import module snippets # import module snippets
from ansible.module_utils.basic import * from ansible.module_utils.basic import *
from ansible.module_utils.urls import * from ansible.module_utils.urls import *
if __name__ == '__main__':
main() main()

@ -30,6 +30,7 @@ short_description: Manage MySQL replication
description: description:
- Manages MySQL server replication, slave, master status get and change master host. - Manages MySQL server replication, slave, master status get and change master host.
version_added: "1.3" version_added: "1.3"
author: "Balazs Pocze (@banyek)"
options: options:
mode: mode:
description: description:
@ -93,7 +94,7 @@ options:
master_ssl: master_ssl:
description: description:
- same as mysql variable - same as mysql variable
possible values: 0,1 choices: [ 0, 1 ]
master_ssl_ca: master_ssl_ca:
description: description:
- same as mysql variable - same as mysql variable
@ -109,7 +110,12 @@ options:
master_ssl_cipher: master_ssl_cipher:
description: description:
- same as mysql variable - same as mysql variable
master_auto_position:
description:
            - whether the host uses GTID based replication or not
required: false
default: null
version_added: "2.0"
''' '''
EXAMPLES = ''' EXAMPLES = '''
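# Illustrative use of the new master_auto_position flag: with GTID-based
# replication, CHANGE MASTER can auto-position instead of naming a log file and
# position; host and credentials are placeholders.
- mysql_replication: mode=changemaster master_host=192.0.2.1 master_user=repl master_password=secret master_auto_position=yes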
@ -242,6 +248,7 @@ def main():
login_port=dict(default=3306, type='int'), login_port=dict(default=3306, type='int'),
login_unix_socket=dict(default=None), login_unix_socket=dict(default=None),
mode=dict(default="getslave", choices=["getmaster", "getslave", "changemaster", "stopslave", "startslave"]), mode=dict(default="getslave", choices=["getmaster", "getslave", "changemaster", "stopslave", "startslave"]),
master_auto_position=dict(default=False, type='bool'),
master_host=dict(default=None), master_host=dict(default=None),
master_user=dict(default=None), master_user=dict(default=None),
master_password=dict(default=None), master_password=dict(default=None),
@ -279,6 +286,7 @@ def main():
master_ssl_cert = module.params["master_ssl_cert"] master_ssl_cert = module.params["master_ssl_cert"]
master_ssl_key = module.params["master_ssl_key"] master_ssl_key = module.params["master_ssl_key"]
master_ssl_cipher = module.params["master_ssl_cipher"] master_ssl_cipher = module.params["master_ssl_cipher"]
master_auto_position = module.params["master_auto_position"]
if not mysqldb_found: if not mysqldb_found:
module.fail_json(msg="the python mysqldb module is required") module.fail_json(msg="the python mysqldb module is required")
@ -376,6 +384,8 @@ def main():
if master_ssl_cipher: if master_ssl_cipher:
chm.append("MASTER_SSL_CIPHER=%(master_ssl_cipher)s") chm.append("MASTER_SSL_CIPHER=%(master_ssl_cipher)s")
chm_params['master_ssl_cipher'] = master_ssl_cipher chm_params['master_ssl_cipher'] = master_ssl_cipher
if master_auto_position:
chm.append("MASTER_AUTO_POSITION = 1")
changemaster(cursor, chm, chm_params) changemaster(cursor, chm, chm_params)
module.exit_json(changed=True) module.exit_json(changed=True)
elif mode in "startslave": elif mode in "startslave":

@ -65,7 +65,7 @@ notes:
- This module uses I(psycopg2), a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed on - This module uses I(psycopg2), a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed on
the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the C(postgresql), C(libpq-dev), and C(python-psycopg2) packages on the remote host before using this module. the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the C(postgresql), C(libpq-dev), and C(python-psycopg2) packages on the remote host before using this module.
requirements: [ psycopg2 ] requirements: [ psycopg2 ]
author: Daniel Schep author: "Daniel Schep (@dschep)"
''' '''
EXAMPLES = ''' EXAMPLES = '''

@ -95,7 +95,7 @@ notes:
systems, install the postgresql, libpq-dev, and python-psycopg2 packages systems, install the postgresql, libpq-dev, and python-psycopg2 packages
on the remote host before using this module. on the remote host before using this module.
requirements: [ psycopg2 ] requirements: [ psycopg2 ]
author: Jens Depuydt author: "Jens Depuydt (@jensdepuydt)"
''' '''
EXAMPLES = ''' EXAMPLES = '''

@ -67,7 +67,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16) and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini). to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ] requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek author: "Dariusz Owczarek (@dareko)"
""" """
EXAMPLES = """ EXAMPLES = """

@ -59,7 +59,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16) and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini). to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ] requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek author: "Dariusz Owczarek (@dareko)"
""" """
EXAMPLES = """ EXAMPLES = """

@ -75,7 +75,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16) and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini). to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ] requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek author: "Dariusz Owczarek (@dareko)"
""" """
EXAMPLES = """ EXAMPLES = """

@ -91,7 +91,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16) and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini). to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ] requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek author: "Dariusz Owczarek (@dareko)"
""" """
EXAMPLES = """ EXAMPLES = """

@ -107,7 +107,7 @@ notes:
and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16) and both C(ErrorMessagesPath = /opt/vertica/lib64) and C(DriverManagerEncoding = UTF-16)
to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini). to be added to the C(Driver) section of either C(/etc/vertica.ini) or C($HOME/.vertica.ini).
requirements: [ 'unixODBC', 'pyodbc' ] requirements: [ 'unixODBC', 'pyodbc' ]
author: Dariusz Owczarek author: "Dariusz Owczarek (@dareko)"
""" """
EXAMPLES = """ EXAMPLES = """
@ -233,7 +233,10 @@ def present(user_facts, cursor, user, profile, resource_pool,
changed = False changed = False
query_fragments = ["alter user {0}".format(user)] query_fragments = ["alter user {0}".format(user)]
if locked is not None and locked != (user_facts[user_key]['locked'] == 'True'): if locked is not None and locked != (user_facts[user_key]['locked'] == 'True'):
state = 'lock' if locked else 'unlock' if locked:
state = 'lock'
else:
state = 'unlock'
query_fragments.append("account {0}".format(state)) query_fragments.append("account {0}".format(state))
changed = True changed = True
if password and password != user_facts[user_key]['password']: if password and password != user_facts[user_key]['password']:

@ -22,7 +22,9 @@
DOCUMENTATION = ''' DOCUMENTATION = '''
--- ---
module: patch module: patch
author: Luis Alberto Perez Lazaro, Jakub Jirutka author:
- "Jakub Jirutka (@jirutka)"
- "Luis Alberto Perez Lazaro (@luisperlaz)"
version_added: 1.9 version_added: 1.9
description: description:
- Apply patch files using the GNU patch tool. - Apply patch files using the GNU patch tool.
@ -63,6 +65,20 @@ options:
required: false required: false
type: "int" type: "int"
default: "0" default: "0"
backup:
version_added: "2.0"
description:
- passes --backup --version-control=numbered to patch,
producing numbered backup copies
binary:
version_added: "2.0"
description:
- Setting to true will disable patch's heuristic for transforming CRLF
line endings into LF. Line endings of src and dest must match. If set to
False, patch will replace CRLF in src files on POSIX.
required: false
type: "bool"
default: "False"
note: note:
- This module requires GNU I(patch) utility to be installed on the remote host. - This module requires GNU I(patch) utility to be installed on the remote host.
''' '''
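# A hedged usage sketch for the new backup and binary options: keep numbered
# backup copies and leave CRLF line endings untouched; the file paths are
# placeholders.
- patch: src=files/fix-config.patch dest=/etc/app/config.ini backup=yes binary=yes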
@ -88,10 +104,12 @@ class PatchError(Exception):
pass pass
def is_already_applied(patch_func, patch_file, basedir, dest_file=None, strip=0): def is_already_applied(patch_func, patch_file, basedir, dest_file=None, binary=False, strip=0):
opts = ['--quiet', '--reverse', '--forward', '--dry-run', opts = ['--quiet', '--reverse', '--forward', '--dry-run',
"--strip=%s" % strip, "--directory='%s'" % basedir, "--strip=%s" % strip, "--directory='%s'" % basedir,
"--input='%s'" % patch_file] "--input='%s'" % patch_file]
if binary:
opts.append('--binary')
if dest_file: if dest_file:
opts.append("'%s'" % dest_file) opts.append("'%s'" % dest_file)
@ -99,18 +117,22 @@ def is_already_applied(patch_func, patch_file, basedir, dest_file=None, strip=0)
return rc == 0 return rc == 0
def apply_patch(patch_func, patch_file, basedir, dest_file=None, strip=0, dry_run=False): def apply_patch(patch_func, patch_file, basedir, dest_file=None, binary=False, strip=0, dry_run=False, backup=False):
opts = ['--quiet', '--forward', '--batch', '--reject-file=-', opts = ['--quiet', '--forward', '--batch', '--reject-file=-',
"--strip=%s" % strip, "--directory='%s'" % basedir, "--strip=%s" % strip, "--directory='%s'" % basedir,
"--input='%s'" % patch_file] "--input='%s'" % patch_file]
if dry_run: if dry_run:
opts.append('--dry-run') opts.append('--dry-run')
if binary:
opts.append('--binary')
if dest_file: if dest_file:
opts.append("'%s'" % dest_file) opts.append("'%s'" % dest_file)
if backup:
opts.append('--backup --version-control=numbered')
(rc, out, err) = patch_func(opts) (rc, out, err) = patch_func(opts)
if rc != 0: if rc != 0:
msg = out if not err else err msg = err or out
raise PatchError(msg) raise PatchError(msg)
@ -122,6 +144,10 @@ def main():
'basedir': {}, 'basedir': {},
'strip': {'default': 0, 'type': 'int'}, 'strip': {'default': 0, 'type': 'int'},
'remote_src': {'default': False, 'type': 'bool'}, 'remote_src': {'default': False, 'type': 'bool'},
# NB: for 'backup' parameter, semantics is slightly different from standard
# since patch will create numbered copies, not strftime("%Y-%m-%d@%H:%M:%S~")
'backup': {'default': False, 'type': 'bool'},
'binary': {'default': False, 'type': 'bool'},
}, },
required_one_of=[['dest', 'basedir']], required_one_of=[['dest', 'basedir']],
supports_check_mode=True supports_check_mode=True
@ -152,10 +178,10 @@ def main():
p.src = os.path.abspath(p.src) p.src = os.path.abspath(p.src)
changed = False changed = False
if not is_already_applied(patch_func, p.src, p.basedir, dest_file=p.dest, strip=p.strip): if not is_already_applied(patch_func, p.src, p.basedir, dest_file=p.dest, binary=p.binary, strip=p.strip):
try: try:
apply_patch(patch_func, p.src, p.basedir, dest_file=p.dest, strip=p.strip, apply_patch( patch_func, p.src, p.basedir, dest_file=p.dest, binary=p.binary, strip=p.strip,
dry_run=module.check_mode) dry_run=module.check_mode, backup=p.backup )
changed = True changed = True
except PatchError, e: except PatchError, e:
module.fail_json(msg=str(e)) module.fail_json(msg=str(e))

@ -0,0 +1,214 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Manuel Sousa <manuel.sousa@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: rabbitmq_binding
author: "Manuel Sousa (@manuel-sousa)"
version_added: "2.0"
short_description: This module manages rabbitMQ bindings
description:
- This module uses rabbitMQ Rest API to create/delete bindings
requirements: [ python requests ]
options:
state:
description:
- Whether the exchange should be present or absent
            - Only present implemented at the moment
choices: [ "present", "absent" ]
required: false
default: present
name:
description:
- source exchange to create binding on
required: true
aliases: [ "src", "source" ]
login_user:
description:
- rabbitMQ user for connection
required: false
default: guest
login_password:
description:
- rabbitMQ password for connection
required: false
        default: guest
login_host:
description:
- rabbitMQ host for connection
required: false
default: localhost
login_port:
description:
- rabbitMQ management api port
required: false
default: 15672
vhost:
description:
- rabbitMQ virtual host
- default vhost is /
required: false
default: "/"
destination:
description:
- destination exchange or queue for the binding
required: true
aliases: [ "dst", "dest" ]
destination_type:
description:
- Either queue or exchange
required: true
choices: [ "queue", "exchange" ]
aliases: [ "type", "dest_type" ]
routing_key:
description:
- routing key for the binding
- default is #
required: false
default: "#"
arguments:
description:
- extra arguments for exchange. If defined this argument is a key/value dictionary
required: false
default: {}
'''
EXAMPLES = '''
# Bind myQueue to directExchange with routing key info
- rabbitmq_binding: name=directExchange destination=myQueue type=queue routing_key=info
# Bind directExchange to topicExchange with routing key *.info
- rabbitmq_binding: name=topicExchange destination=topicExchange type=exchange routing_key="*.info"
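# Illustrative: bind a queue to a headers exchange via the arguments dictionary;
# the exchange, queue and header values are placeholders.
- rabbitmq_binding:
    name: headersExchange
    destination: myQueue
    destination_type: queue
    arguments:
      x-match: all
      format: pdf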
'''
import requests
import urllib
import json
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(default='present', choices=['present', 'absent'], type='str'),
name = dict(required=True, aliases=[ "src", "source" ], type='str'),
login_user = dict(default='guest', type='str'),
login_password = dict(default='guest', type='str', no_log=True),
login_host = dict(default='localhost', type='str'),
login_port = dict(default='15672', type='str'),
vhost = dict(default='/', type='str'),
destination = dict(required=True, aliases=[ "dst", "dest"], type='str'),
destination_type = dict(required=True, aliases=[ "type", "dest_type"], choices=[ "queue", "exchange" ],type='str'),
routing_key = dict(default='#', type='str'),
arguments = dict(default=dict(), type='dict')
),
supports_check_mode = True
)
if module.params['destination_type'] == "queue":
dest_type="q"
else:
dest_type="e"
url = "http://%s:%s/api/bindings/%s/e/%s/%s/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name'],
dest_type,
module.params['destination'],
urllib.quote(module.params['routing_key'],'')
)
    # Check if binding already exists
r = requests.get( url, auth=(module.params['login_user'],module.params['login_password']))
if r.status_code==200:
binding_exists = True
response = r.json()
elif r.status_code==404:
binding_exists = False
response = r.text
else:
module.fail_json(
            msg = "Invalid response from RESTAPI when trying to check if binding exists",
details = r.text
)
if module.params['state']=='present':
change_required = not binding_exists
else:
change_required = binding_exists
# Exit if check_mode
if module.check_mode:
module.exit_json(
changed= change_required,
name = module.params['name'],
details = response,
arguments = module.params['arguments']
)
# Do changes
if change_required:
if module.params['state'] == 'present':
url = "http://%s:%s/api/bindings/%s/e/%s/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name'],
dest_type,
module.params['destination']
)
r = requests.post(
url,
auth = (module.params['login_user'],module.params['login_password']),
headers = { "content-type": "application/json"},
data = json.dumps({
"routing_key": module.params['routing_key'],
"arguments": module.params['arguments']
})
)
elif module.params['state'] == 'absent':
r = requests.delete( url, auth = (module.params['login_user'],module.params['login_password']))
if r.status_code == 204 or r.status_code == 201:
module.exit_json(
changed = True,
name = module.params['name'],
destination = module.params['destination']
)
else:
module.fail_json(
            msg = "Error creating binding",
status = r.status_code,
details = r.text
)
else:
module.exit_json(
changed = False,
name = module.params['name']
)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -0,0 +1,218 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Manuel Sousa <manuel.sousa@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: rabbitmq_exchange
author: "Manuel Sousa (@manuel-sousa)"
version_added: "2.0"
short_description: This module manages rabbitMQ exchanges
description:
- This module uses rabbitMQ Rest API to create/delete exchanges
requirements: [ python requests ]
options:
name:
description:
- Name of the exchange to create
required: true
state:
description:
- Whether the exchange should be present or absent
            - Only present implemented at the moment
choices: [ "present", "absent" ]
required: false
default: present
login_user:
description:
- rabbitMQ user for connection
required: false
default: guest
login_password:
description:
- rabbitMQ password for connection
required: false
        default: guest
login_host:
description:
- rabbitMQ host for connection
required: false
default: localhost
login_port:
description:
- rabbitMQ management api port
required: false
default: 15672
vhost:
description:
- rabbitMQ virtual host
required: false
default: "/"
durable:
description:
- whether exchange is durable or not
required: false
choices: [ "yes", "no" ]
default: yes
exchange_type:
description:
- type for the exchange
required: false
choices: [ "fanout", "direct", "headers", "topic" ]
aliases: [ "type" ]
default: direct
auto_delete:
description:
            - if the exchange should delete itself after all queues/exchanges have been unbound from it
required: false
choices: [ "yes", "no" ]
default: no
internal:
description:
- exchange is available only for other exchanges
required: false
choices: [ "yes", "no" ]
default: no
arguments:
description:
- extra arguments for exchange. If defined this argument is a key/value dictionary
required: false
default: {}
'''
EXAMPLES = '''
# Create direct exchange
- rabbitmq_exchange: name=directExchange
# Create topic exchange on vhost
- rabbitmq_exchange: name=topicExchange type=topic vhost=myVhost
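# Illustrative: a non-durable fanout exchange that removes itself once nothing
# is bound to it; the name is a placeholder.
- rabbitmq_exchange: name=fanoutExchange type=fanout durable=no auto_delete=yes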
'''
import requests
import urllib
import json
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(default='present', choices=['present', 'absent'], type='str'),
name = dict(required=True, type='str'),
login_user = dict(default='guest', type='str'),
login_password = dict(default='guest', type='str', no_log=True),
login_host = dict(default='localhost', type='str'),
login_port = dict(default='15672', type='str'),
vhost = dict(default='/', type='str'),
durable = dict(default=True, choices=BOOLEANS, type='bool'),
auto_delete = dict(default=False, choices=BOOLEANS, type='bool'),
internal = dict(default=False, choices=BOOLEANS, type='bool'),
exchange_type = dict(default='direct', aliases=['type'], type='str'),
arguments = dict(default=dict(), type='dict')
),
supports_check_mode = True
)
url = "http://%s:%s/api/exchanges/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name']
)
# Check if exchange already exists
r = requests.get( url, auth=(module.params['login_user'],module.params['login_password']))
if r.status_code==200:
exchange_exists = True
response = r.json()
elif r.status_code==404:
exchange_exists = False
response = r.text
else:
module.fail_json(
msg = "Invalid response from RESTAPI when trying to check if exchange exists",
details = r.text
)
if module.params['state']=='present':
change_required = not exchange_exists
else:
change_required = exchange_exists
# Check if attributes change on existing exchange
if not change_required and r.status_code==200 and module.params['state'] == 'present':
if not (
response['durable'] == module.params['durable'] and
response['auto_delete'] == module.params['auto_delete'] and
response['internal'] == module.params['internal'] and
response['type'] == module.params['exchange_type']
):
module.fail_json(
msg = "RabbitMQ RESTAPI doesn't support attribute changes for existing exchanges"
)
# Exit if check_mode
if module.check_mode:
module.exit_json(
changed= change_required,
name = module.params['name'],
details = response,
arguments = module.params['arguments']
)
# Do changes
if change_required:
if module.params['state'] == 'present':
r = requests.put(
url,
auth = (module.params['login_user'],module.params['login_password']),
headers = { "content-type": "application/json"},
data = json.dumps({
"durable": module.params['durable'],
"auto_delete": module.params['auto_delete'],
"internal": module.params['internal'],
"type": module.params['exchange_type'],
"arguments": module.params['arguments']
})
)
elif module.params['state'] == 'absent':
r = requests.delete( url, auth = (module.params['login_user'],module.params['login_password']))
if r.status_code == 204:
module.exit_json(
changed = True,
name = module.params['name']
)
else:
module.fail_json(
msg = "Error creating exchange",
status = r.status_code,
details = r.text
)
else:
module.exit_json(
changed = False,
name = module.params['name']
)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -25,7 +25,7 @@ short_description: Adds or removes parameters to RabbitMQ
description: description:
- Manage dynamic, cluster-wide parameters for RabbitMQ - Manage dynamic, cluster-wide parameters for RabbitMQ
version_added: "1.1" version_added: "1.1"
author: Chris Hoffman author: '"Chris Hoffman (@chrishoffman)"'
options: options:
component: component:
description: description:

@ -25,7 +25,7 @@ short_description: Adds or removes plugins to RabbitMQ
description: description:
- Enables or disables RabbitMQ plugins - Enables or disables RabbitMQ plugins
version_added: "1.1" version_added: "1.1"
author: Chris Hoffman author: '"Chris Hoffman (@chrishoffman)"'
options: options:
names: names:
description: description:

@ -26,7 +26,7 @@ short_description: Manage the state of policies in RabbitMQ.
description: description:
- Manage the state of a virtual host in RabbitMQ. - Manage the state of a virtual host in RabbitMQ.
version_added: "1.5" version_added: "1.5"
author: John Dewey author: "John Dewey (@retr0h)"
options: options:
name: name:
description: description:

@ -0,0 +1,263 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Manuel Sousa <manuel.sousa@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: rabbitmq_queue
author: "Manuel Sousa (@manuel-sousa)"
version_added: "2.0"
short_description: This module manages rabbitMQ queues
description:
- This module uses rabbitMQ Rest API to create/delete queues
requirements: [ python requests ]
options:
name:
description:
- Name of the queue to create
required: true
state:
description:
- Whether the queue should be present or absent
            - Only present implemented at the moment
choices: [ "present", "absent" ]
required: false
default: present
login_user:
description:
- rabbitMQ user for connection
required: false
default: guest
login_password:
description:
- rabbitMQ password for connection
required: false
        default: guest
login_host:
description:
- rabbitMQ host for connection
required: false
default: localhost
login_port:
description:
- rabbitMQ management api port
required: false
default: 15672
vhost:
description:
- rabbitMQ virtual host
required: false
default: "/"
durable:
description:
- whether queue is durable or not
required: false
choices: [ "yes", "no" ]
default: yes
auto_delete:
description:
            - if the queue should delete itself after all consumers have unsubscribed from it
required: false
choices: [ "yes", "no" ]
default: no
message_ttl:
description:
- How long a message can live in queue before it is discarded (milliseconds)
required: False
default: forever
auto_expires:
description:
- How long a queue can be unused before it is automatically deleted (milliseconds)
required: false
default: forever
max_length:
description:
- How many messages can the queue contain before it starts rejecting
required: false
default: no limit
dead_letter_exchange:
description:
- Optional name of an exchange to which messages will be republished if they
- are rejected or expire
required: false
default: None
dead_letter_routing_key:
description:
- Optional replacement routing key to use when a message is dead-lettered.
- Original routing key will be used if unset
required: false
default: None
arguments:
description:
- extra arguments for queue. If defined this argument is a key/value dictionary
required: false
default: {}
'''
EXAMPLES = '''
# Create a queue
- rabbitmq_queue: name=myQueue
# Create a queue on remote host
- rabbitmq_queue: name=myRemoteQueue login_user=user login_password=secret login_host=remote.example.org
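# Illustrative: a queue with a one minute message TTL and dead-lettering; the
# dead letter exchange is a placeholder and must already exist.
- rabbitmq_queue:
    name: myTransientQueue
    message_ttl: 60000
    dead_letter_exchange: deadLetters
    dead_letter_routing_key: expired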
'''
import requests
import urllib
import json
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(default='present', choices=['present', 'absent'], type='str'),
name = dict(required=True, type='str'),
login_user = dict(default='guest', type='str'),
login_password = dict(default='guest', type='str', no_log=True),
login_host = dict(default='localhost', type='str'),
login_port = dict(default='15672', type='str'),
vhost = dict(default='/', type='str'),
durable = dict(default=True, choices=BOOLEANS, type='bool'),
auto_delete = dict(default=False, choices=BOOLEANS, type='bool'),
message_ttl = dict(default=None, type='int'),
auto_expires = dict(default=None, type='int'),
max_length = dict(default=None, type='int'),
dead_letter_exchange = dict(default=None, type='str'),
dead_letter_routing_key = dict(default=None, type='str'),
arguments = dict(default=dict(), type='dict')
),
supports_check_mode = True
)
url = "http://%s:%s/api/queues/%s/%s" % (
module.params['login_host'],
module.params['login_port'],
urllib.quote(module.params['vhost'],''),
module.params['name']
)
# Check if queue already exists
r = requests.get( url, auth=(module.params['login_user'],module.params['login_password']))
if r.status_code==200:
queue_exists = True
response = r.json()
elif r.status_code==404:
queue_exists = False
response = r.text
else:
module.fail_json(
msg = "Invalid response from RESTAPI when trying to check if queue exists",
details = r.text
)
if module.params['state']=='present':
change_required = not queue_exists
else:
change_required = queue_exists
# Check if attributes change on existing queue
if not change_required and r.status_code==200 and module.params['state'] == 'present':
if not (
response['durable'] == module.params['durable'] and
response['auto_delete'] == module.params['auto_delete'] and
(
( 'x-message-ttl' in response['arguments'] and response['arguments']['x-message-ttl'] == module.params['message_ttl'] ) or
( 'x-message-ttl' not in response['arguments'] and module.params['message_ttl'] is None )
) and
(
( 'x-expires' in response['arguments'] and response['arguments']['x-expires'] == module.params['auto_expires'] ) or
( 'x-expires' not in response['arguments'] and module.params['auto_expires'] is None )
) and
(
( 'x-max-length' in response['arguments'] and response['arguments']['x-max-length'] == module.params['max_length'] ) or
( 'x-max-length' not in response['arguments'] and module.params['max_length'] is None )
) and
(
( 'x-dead-letter-exchange' in response['arguments'] and response['arguments']['x-dead-letter-exchange'] == module.params['dead_letter_exchange'] ) or
( 'x-dead-letter-exchange' not in response['arguments'] and module.params['dead_letter_exchange'] is None )
) and
(
( 'x-dead-letter-routing-key' in response['arguments'] and response['arguments']['x-dead-letter-routing-key'] == module.params['dead_letter_routing_key'] ) or
( 'x-dead-letter-routing-key' not in response['arguments'] and module.params['dead_letter_routing_key'] is None )
)
):
module.fail_json(
msg = "RabbitMQ RESTAPI doesn't support attribute changes for existing queues",
)
# Copy parameters to arguments as used by RabbitMQ
for k,v in {
'message_ttl': 'x-message-ttl',
'auto_expires': 'x-expires',
'max_length': 'x-max-length',
'dead_letter_exchange': 'x-dead-letter-exchange',
'dead_letter_routing_key': 'x-dead-letter-routing-key'
}.items():
if module.params[k]:
module.params['arguments'][v] = module.params[k]
# Exit if check_mode
if module.check_mode:
module.exit_json(
changed= change_required,
name = module.params['name'],
details = response,
arguments = module.params['arguments']
)
# Do changes
if change_required:
if module.params['state'] == 'present':
r = requests.put(
url,
auth = (module.params['login_user'],module.params['login_password']),
headers = { "content-type": "application/json"},
data = json.dumps({
"durable": module.params['durable'],
"auto_delete": module.params['auto_delete'],
"arguments": module.params['arguments']
})
)
elif module.params['state'] == 'absent':
r = requests.delete( url, auth = (module.params['login_user'],module.params['login_password']))
if r.status_code == 204:
module.exit_json(
changed = True,
name = module.params['name']
)
else:
module.fail_json(
msg = "Error creating queue",
status = r.status_code,
details = r.text
)
else:
module.exit_json(
changed = False,
name = module.params['name']
)
# import module snippets
from ansible.module_utils.basic import *
main()

@ -25,7 +25,7 @@ short_description: Adds or removes users to RabbitMQ
description: description:
- Add or remove users to RabbitMQ and assign permissions - Add or remove users to RabbitMQ and assign permissions
version_added: "1.1" version_added: "1.1"
author: Chris Hoffman author: '"Chris Hoffman (@chrishoffman)"'
options: options:
user: user:
description: description:

@ -26,7 +26,7 @@ short_description: Manage the state of a virtual host in RabbitMQ
description: description:
- Manage the state of a virtual host in RabbitMQ - Manage the state of a virtual host in RabbitMQ
version_added: "1.1" version_added: "1.1"
author: Chris Hoffman author: '"Chris Hoffman (@chrishoffman)"'
options: options:
name: name:
description: description:

@ -22,7 +22,7 @@ DOCUMENTATION = '''
--- ---
module: airbrake_deployment module: airbrake_deployment
version_added: "1.2" version_added: "1.2"
author: Bruce Pennypacker author: "Bruce Pennypacker (@bpennypacker)"
short_description: Notify airbrake about app deployments short_description: Notify airbrake about app deployments
description: description:
- Notify airbrake about app deployments (see http://help.airbrake.io/kb/api-2/deploy-tracking) - Notify airbrake about app deployments (see http://help.airbrake.io/kb/api-2/deploy-tracking)
@ -61,8 +61,7 @@ options:
default: 'yes' default: 'yes'
choices: ['yes', 'no'] choices: ['yes', 'no']
# informational: requirements for nodes requirements: []
requirements: [ urllib, urllib2 ]
''' '''
EXAMPLES = ''' EXAMPLES = '''
@ -72,6 +71,8 @@ EXAMPLES = '''
revision=4.2 revision=4.2
''' '''
import urllib
# =========================================== # ===========================================
# Module execution. # Module execution.
# #

@ -3,7 +3,7 @@
DOCUMENTATION = ''' DOCUMENTATION = '''
--- ---
module: bigpanda module: bigpanda
author: BigPanda author: "Hagai Kariti (@hkariti)"
short_description: Notify BigPanda about deployments short_description: Notify BigPanda about deployments
version_added: "1.8" version_added: "1.8"
description: description:
@ -59,7 +59,7 @@ options:
choices: ['yes', 'no'] choices: ['yes', 'no']
# informational: requirements for nodes # informational: requirements for nodes
requirements: [ urllib, urllib2 ] requirements: [ ]
''' '''
EXAMPLES = ''' EXAMPLES = '''
@ -162,11 +162,11 @@ def main():
module.exit_json(changed=True, **deployment) module.exit_json(changed=True, **deployment)
else: else:
module.fail_json(msg=json.dumps(info)) module.fail_json(msg=json.dumps(info))
except Exception as e: except Exception, e:
module.fail_json(msg=str(e)) module.fail_json(msg=str(e))
# import module snippets # import module snippets
from ansible.module_utils.basic import * from ansible.module_utils.basic import *
from ansible.module_utils.urls import * from ansible.module_utils.urls import *
if __name__ == '__main__':
main() main()

@ -34,11 +34,10 @@ short_description: Manage boundary meters
description: description:
- This module manages boundary meters - This module manages boundary meters
version_added: "1.3" version_added: "1.3"
author: curtis@serverascode.com author: "curtis (@ccollicutt)"
requirements: requirements:
- Boundary API access - Boundary API access
- bprobe is required to send data, but not to register a meter - bprobe is required to send data, but not to register a meter
- Python urllib2
options: options:
name: name:
description: description:
@ -213,7 +212,7 @@ def download_request(module, name, apiid, apikey, cert_type):
cert_file = open(cert_file_path, 'w') cert_file = open(cert_file_path, 'w')
cert_file.write(body) cert_file.write(body)
cert_file.close cert_file.close
os.chmod(cert_file_path, 0o600) os.chmod(cert_file_path, 0600)
except: except:
module.fail_json("Could not write to certificate file") module.fail_json("Could not write to certificate file")
@ -252,5 +251,6 @@ def main():
# import module snippets # import module snippets
from ansible.module_utils.basic import * from ansible.module_utils.basic import *
from ansible.module_utils.urls import * from ansible.module_utils.urls import *
main() if __name__ == '__main__':
main()

@ -0,0 +1,132 @@
#!/usr/bin/python
# (c) 2014-2015, Epic Games, Inc.
import requests
import time
import json
DOCUMENTATION = '''
---
module: circonus_annotation
short_description: create an annotation in circonus
description:
    - Create an annotation event with a given category, title and description. Optionally a start time, end time or duration can be provided
author: "Nick Harring (@NickatEpic)"
version_added: 2.0
requirements:
- urllib3
- requests
- time
options:
api_key:
description:
- Circonus API key
required: true
category:
description:
- Annotation Category
required: true
description:
description:
- Description of annotation
required: true
title:
description:
- Title of annotation
required: true
start:
description:
- Unix timestamp of event start, defaults to now
required: false
stop:
description:
- Unix timestamp of event end, defaults to now + duration
required: false
duration:
description:
- Duration in seconds of annotation, defaults to 0
required: false
'''
EXAMPLES = '''
# Create a simple annotation event with a source, defaults to start and end time of now
- circonus_annotation:
api_key: XXXXXXXXXXXXXXXXX
title: 'App Config Change'
description: 'This is a detailed description of the config change'
category: 'This category groups like annotations'
# Create an annotation with a duration of 5 minutes and a default start time of now
- circonus_annotation:
api_key: XXXXXXXXXXXXXXXXX
title: 'App Config Change'
description: 'This is a detailed description of the config change'
category: 'This category groups like annotations'
duration: 300
# Create an annotation with an explicit start and stop time
- circonus_annotation:
    api_key: XXXXXXXXXXXXXXXXX
    title: 'App Config Change'
    description: 'This is a detailed description of the config change'
    category: 'This category groups like annotations'
    start: 1395940006
    stop: 1395954407
'''
def post_annotation(annotation, api_key):
''' Takes annotation dict and api_key string'''
base_url = 'https://api.circonus.com/v2'
    annotate_post_endpoint = '/annotation'
    resp = requests.post(base_url + annotate_post_endpoint,
headers=build_headers(api_key), data=json.dumps(annotation))
resp.raise_for_status()
return resp
def create_annotation(module):
''' Takes ansible module object '''
annotation = {}
if module.params['duration'] != None:
duration = module.params['duration']
else:
duration = 0
if module.params['start'] != None:
start = module.params['start']
else:
start = int(time.time())
if module.params['stop'] != None:
stop = module.params['stop']
else:
stop = int(time.time()) + duration
annotation['start'] = int(start)
annotation['stop'] = int(stop)
annotation['category'] = module.params['category']
annotation['description'] = module.params['description']
annotation['title'] = module.params['title']
return annotation
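# Illustration only (hypothetical parameter values) of the defaulting above:
# with params = {'duration': 300, 'start': None, 'stop': None} the returned
# annotation has start == int(time.time()) and stop == start + 300; with no
# duration either, both start and stop default to "now".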
def build_headers(api_token):
'''Takes api token, returns headers with it included.'''
headers = {'X-Circonus-App-Name': 'ansible',
'Host': 'api.circonus.com', 'X-Circonus-Auth-Token': api_token,
'Accept': 'application/json'}
return headers
def main():
'''Main function, dispatches logic'''
module = AnsibleModule(
argument_spec=dict(
start=dict(required=False, type='int'),
stop=dict(required=False, type='int'),
category=dict(required=True),
title=dict(required=True),
description=dict(required=True),
duration=dict(required=False, type='int'),
api_key=dict(required=True)
)
)
annotation = create_annotation(module)
try:
resp = post_annotation(annotation, module.params['api_key'])
except requests.exceptions.RequestException, err_str:
module.fail_json(msg='Request Failed', reason=err_str)
module.exit_json(changed=True, annotation=resp.json())
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

@ -14,9 +14,9 @@ description:
- "Allows to post events to DataDog (www.datadoghq.com) service." - "Allows to post events to DataDog (www.datadoghq.com) service."
- "Uses http://docs.datadoghq.com/api/#events API." - "Uses http://docs.datadoghq.com/api/#events API."
version_added: "1.3" version_added: "1.3"
author: Artūras 'arturaz' Šlajus <x11@arturaz.net> author: "Artūras `arturaz` Šlajus (@arturaz)"
notes: [] notes: []
requirements: [urllib2] requirements: []
options: options:
api_key: api_key:
description: ["Your DataDog API key."] description: ["Your DataDog API key."]
@ -71,7 +71,7 @@ datadog_event: title="Testing from ansible" text="Test!" priority="low"
# Post an event with several tags # Post an event with several tags
datadog_event: title="Testing from ansible" text="Test!" datadog_event: title="Testing from ansible" text="Test!"
api_key="6873258723457823548234234234" api_key="6873258723457823548234234234"
tags=aa,bb,cc tags=aa,bb,#host:{{ inventory_hostname }}
''' '''
import socket import socket
@ -86,7 +86,7 @@ def main():
priority=dict( priority=dict(
required=False, default='normal', choices=['normal', 'low'] required=False, default='normal', choices=['normal', 'low']
), ),
tags=dict(required=False, default=None), tags=dict(required=False, default=None, type='list'),
alert_type=dict( alert_type=dict(
required=False, default='info', required=False, default='info',
choices=['error', 'warning', 'info', 'success'] choices=['error', 'warning', 'info', 'success']
@ -116,7 +116,7 @@ def post_event(module):
if module.params['date_happened'] != None: if module.params['date_happened'] != None:
body['date_happened'] = module.params['date_happened'] body['date_happened'] = module.params['date_happened']
if module.params['tags'] != None: if module.params['tags'] != None:
body['tags'] = module.params['tags'].split(",") body['tags'] = module.params['tags']
if module.params['aggregation_key'] != None: if module.params['aggregation_key'] != None:
body['aggregation_key'] = module.params['aggregation_key'] body['aggregation_key'] = module.params['aggregation_key']
if module.params['source_type_name'] != None: if module.params['source_type_name'] != None:
@ -139,5 +139,5 @@ def post_event(module):
# import module snippets # import module snippets
from ansible.module_utils.basic import * from ansible.module_utils.basic import *
from ansible.module_utils.urls import * from ansible.module_utils.urls import *
if __name__ == '__main__':
main() main()
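With tags declared as type='list' above, Ansible hands the module a Python list whether the playbook gives a YAML list or a comma-separated string, so post_event() no longer splits the string itself. A minimal sketch of the effect (params is a stand-in for module.params, not real module code):

params = {'tags': ['aa', 'bb', '#host:web01']}  # what Ansible delivers for tags=aa,bb,#host:web01
body = {}
if params['tags'] is not None:
    body['tags'] = params['tags']  # passed through; no .split(",") needed
print(body)                        # {'tags': ['aa', 'bb', '#host:web01']}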

@ -0,0 +1,283 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Sebastian Kornehl <sebastian.kornehl@asideas.de>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# import module snippets
# Import Datadog
try:
from datadog import initialize, api
HAS_DATADOG = True
except:
HAS_DATADOG = False
DOCUMENTATION = '''
---
module: datadog_monitor
short_description: Manages Datadog monitors
description:
- "Manages monitors within Datadog"
- "Options like described on http://docs.datadoghq.com/api/"
version_added: "2.0"
author: "Sebastian Kornehl (@skornehl)"
notes: []
requirements: [datadog]
options:
api_key:
description: ["Your DataDog API key."]
required: true
app_key:
description: ["Your DataDog app key."]
required: true
state:
description: ["The designated state of the monitor."]
required: true
choices: ['present', 'absent', 'mute', 'unmute']
type:
description: ["The type of the monitor."]
required: false
default: null
choices: ['metric alert', 'service check']
query:
description: ["he monitor query to notify on with syntax varying depending on what type of monitor you are creating."]
required: false
default: null
name:
description: ["The name of the alert."]
required: true
message:
description: ["A message to include with notifications for this monitor. Email notifications can be sent to specific users by using the same '@username' notation as events."]
required: false
default: null
silenced:
description: ["Dictionary of scopes to timestamps or None. Each scope will be muted until the given POSIX timestamp or forever if the value is None. "]
required: false
default: ""
notify_no_data:
description: ["A boolean indicating whether this monitor will notify when data stops reporting.."]
required: false
default: False
no_data_timeframe:
description: ["The number of minutes before a monitor will notify when data stops reporting. Must be at least 2x the monitor timeframe for metric alerts or 2 minutes for service checks."]
required: false
default: 2x timeframe for metric, 2 minutes for service
timeout_h:
description: ["The number of hours of the monitor not reporting data before it will automatically resolve from a triggered state."]
required: false
default: null
renotify_interval:
description: ["The number of minutes after the last notification before a monitor will re-notify on the current status. It will only re-notify if it's not resolved."]
required: false
default: null
escalation_message:
description: ["A message to include with a re-notification. Supports the '@username' notification we allow elsewhere. Not applicable if renotify_interval is None"]
required: false
default: null
notify_audit:
description: ["A boolean indicating whether tagged users will be notified on changes to this monitor."]
required: false
default: False
thresholds:
description: ["A dictionary of thresholds by status. Because service checks can have multiple thresholds, we don't define them directly in the query."]
required: false
default: {'ok': 1, 'critical': 1, 'warning': 1}
'''
EXAMPLES = '''
# Create a metric monitor
datadog_monitor:
type: "metric alert"
name: "Test monitor"
state: "present"
query: "datadog.agent.up".over("host:host1").last(2).count_by_status()"
message: "Some message."
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
# Deletes a monitor
datadog_monitor:
name: "Test monitor"
state: "absent"
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
# Mutes a monitor
datadog_monitor:
name: "Test monitor"
state: "mute"
silenced: '{"*":None}'
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
# Unmutes a monitor
datadog_monitor:
name: "Test monitor"
state: "unmute"
api_key: "9775a026f1ca7d1c6c5af9d94d9595a4"
app_key: "87ce4a24b5553d2e482ea8a8500e71b8ad4554ff"
'''
def main():
module = AnsibleModule(
argument_spec=dict(
api_key=dict(required=True),
app_key=dict(required=True),
state=dict(required=True, choices=['present', 'absent', 'mute', 'unmute']),
type=dict(required=False, choices=['metric alert', 'service check']),
name=dict(required=True),
query=dict(required=False),
message=dict(required=False, default=None),
silenced=dict(required=False, default=None, type='dict'),
notify_no_data=dict(required=False, default=False, choices=BOOLEANS),
no_data_timeframe=dict(required=False, default=None),
timeout_h=dict(required=False, default=None),
renotify_interval=dict(required=False, default=None),
escalation_message=dict(required=False, default=None),
notify_audit=dict(required=False, default=False, choices=BOOLEANS),
thresholds=dict(required=False, type='dict', default={'ok': 1, 'critical': 1, 'warning': 1}),
)
)
# Prepare Datadog
if not HAS_DATADOG:
module.fail_json(msg='datadogpy required for this module')
options = {
'api_key': module.params['api_key'],
'app_key': module.params['app_key']
}
initialize(**options)
if module.params['state'] == 'present':
install_monitor(module)
elif module.params['state'] == 'absent':
delete_monitor(module)
elif module.params['state'] == 'mute':
mute_monitor(module)
elif module.params['state'] == 'unmute':
unmute_monitor(module)
def _get_monitor(module):
for monitor in api.Monitor.get_all():
if monitor['name'] == module.params['name']:
return monitor
return {}
def _post_monitor(module, options):
try:
msg = api.Monitor.create(type=module.params['type'], query=module.params['query'],
name=module.params['name'], message=module.params['message'],
options=options)
if 'errors' in msg:
module.fail_json(msg=str(msg['errors']))
else:
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def _equal_dicts(a, b, ignore_keys):
ka = set(a).difference(ignore_keys)
kb = set(b).difference(ignore_keys)
return ka == kb and all(a[k] == b[k] for k in ka)
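# Worked example (made-up monitor dicts) of the comparison above: keys that
# Datadog rewrites on every read are ignored, so an otherwise unchanged
# monitor makes _update_monitor() report changed=False.
#   a = {'id': 1, 'query': 'q', 'creator': 'alice'}
#   b = {'id': 1, 'query': 'q', 'creator': 'bob'}
#   _equal_dicts(a, b, ['creator'])  ->  True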
def _update_monitor(module, monitor, options):
try:
msg = api.Monitor.update(id=monitor['id'], query=module.params['query'],
name=module.params['name'], message=module.params['message'],
options=options)
if 'errors' in msg:
module.fail_json(msg=str(msg['errors']))
elif _equal_dicts(msg, monitor, ['creator', 'overall_state']):
module.exit_json(changed=False, msg=msg)
else:
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def install_monitor(module):
options = {
"silenced": module.params['silenced'],
"notify_no_data": module.boolean(module.params['notify_no_data']),
"no_data_timeframe": module.params['no_data_timeframe'],
"timeout_h": module.params['timeout_h'],
"renotify_interval": module.params['renotify_interval'],
"escalation_message": module.params['escalation_message'],
"notify_audit": module.boolean(module.params['notify_audit']),
}
if module.params['type'] == "service check":
options["thresholds"] = module.params['thresholds']
monitor = _get_monitor(module)
if not monitor:
_post_monitor(module, options)
else:
_update_monitor(module, monitor, options)
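# Flow summary of install_monitor() above (no new behaviour): the monitor is
# looked up by name via _get_monitor(); if nothing matches a new monitor is
# created, otherwise the existing one is updated and _equal_dicts() decides
# whether a change is reported.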
def delete_monitor(module):
monitor = _get_monitor(module)
if not monitor:
module.exit_json(changed=False)
try:
msg = api.Monitor.delete(monitor['id'])
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def mute_monitor(module):
monitor = _get_monitor(module)
if not monitor:
module.fail_json(msg="Monitor %s not found!" % module.params['name'])
elif monitor['options']['silenced']:
module.fail_json(msg="Monitor is already muted. Datadog does not allow to modify muted alerts, consider unmuting it first.")
elif (module.params['silenced'] is not None
and len(set(monitor['options']['silenced']) - set(module.params['silenced'])) == 0):
module.exit_json(changed=False)
try:
if module.params['silenced'] is None or module.params['silenced'] == "":
msg = api.Monitor.mute(id=monitor['id'])
else:
msg = api.Monitor.mute(id=monitor['id'], silenced=module.params['silenced'])
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
def unmute_monitor(module):
monitor = _get_monitor(module)
if not monitor:
module.fail_json(msg="Monitor %s not found!" % module.params['name'])
elif not monitor['options']['silenced']:
module.exit_json(changed=False)
try:
msg = api.Monitor.unmute(monitor['id'])
module.exit_json(changed=True, msg=msg)
except Exception, e:
module.fail_json(msg=str(e))
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
if __name__ == '__main__':
main()

@ -29,9 +29,8 @@ short_description: create an annotation in librato
description: description:
- Create an annotation event on the given annotation stream :name. If the annotation stream does not exist, it will be created automatically - Create an annotation event on the given annotation stream :name. If the annotation stream does not exist, it will be created automatically
version_added: "1.6" version_added: "1.6"
author: Seth Edwards author: "Seth Edwards (@sedward)"
requirements: requirements:
- urllib2
- base64 - base64
options: options:
user: user:
@ -107,11 +106,7 @@ EXAMPLES = '''
''' '''
try: import urllib2
import urllib2
HAS_URLLIB2 = True
except ImportError:
HAS_URLLIB2 = False
def post_annotation(module): def post_annotation(module):
user = module.params['user'] user = module.params['user']
@ -138,11 +133,11 @@ def post_annotation(module):
headers = {} headers = {}
headers['Content-Type'] = 'application/json' headers['Content-Type'] = 'application/json'
headers['Authorization'] = b"Basic " + base64.b64encode(user + b":" + api_key).strip() headers['Authorization'] = "Basic " + base64.b64encode(user + ":" + api_key).strip()
req = urllib2.Request(url, json_body, headers) req = urllib2.Request(url, json_body, headers)
try: try:
response = urllib2.urlopen(req) response = urllib2.urlopen(req)
except urllib2.HTTPError as e: except urllib2.HTTPError, e:
module.fail_json(msg="Request Failed", reason=e.reason) module.fail_json(msg="Request Failed", reason=e.reason)
response = response.read() response = response.read()
module.exit_json(changed=True, annotation=response) module.exit_json(changed=True, annotation=response)
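The authorization header above drops the b"" byte-string prefixes, matching the plain Python 2 string handling used by the rest of the module. A throwaway sketch (dummy credentials, Python 2 semantics as in the module) of the value it produces:

import base64

user, api_key = 'user@example.com', 'secret'  # dummy values
header = "Basic " + base64.b64encode(user + ":" + api_key).strip()
print(header)  # Basic dXNlckBleGFtcGxlLmNvbTpzZWNyZXQ=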

@ -19,8 +19,8 @@
DOCUMENTATION = ''' DOCUMENTATION = '''
--- ---
module: logentries module: logentries
author: Ivan Vanderbyl author: "Ivan Vanderbyl (@ivanvanderbyl)"
short_description: Module for tracking logs via logentries.com short_description: Module for tracking logs via logentries.com
description: description:
- Sends logs to LogEntries in realtime - Sends logs to LogEntries in realtime
version_added: "1.6" version_added: "1.6"

@ -39,7 +39,7 @@ options:
default: null default: null
choices: [ "present", "started", "stopped", "restarted", "monitored", "unmonitored", "reloaded" ] choices: [ "present", "started", "stopped", "restarted", "monitored", "unmonitored", "reloaded" ]
requirements: [ ] requirements: [ ]
author: Darryl Stoflet author: "Darryl Stoflet (@dstoflet)"
''' '''
EXAMPLES = ''' EXAMPLES = '''
@ -77,7 +77,7 @@ def main():
# Process 'name' Running - restart pending # Process 'name' Running - restart pending
parts = line.split() parts = line.split()
if len(parts) > 2 and parts[0].lower() == 'process' and parts[1] == "'%s'" % name: if len(parts) > 2 and parts[0].lower() == 'process' and parts[1] == "'%s'" % name:
return ' '.join(parts[2:]) return ' '.join(parts[2:]).lower()
else: else:
return '' return ''
@ -86,7 +86,8 @@ def main():
module.run_command('%s %s %s' % (MONIT, command, name), check_rc=True) module.run_command('%s %s %s' % (MONIT, command, name), check_rc=True)
return status() return status()
present = status() != '' process_status = status()
present = process_status != ''
if not present and not state == 'present': if not present and not state == 'present':
module.fail_json(msg='%s process not presently configured with monit' % name, name=name, state=state) module.fail_json(msg='%s process not presently configured with monit' % name, name=name, state=state)
@ -102,7 +103,7 @@ def main():
module.exit_json(changed=True, name=name, state=state) module.exit_json(changed=True, name=name, state=state)
module.exit_json(changed=False, name=name, state=state) module.exit_json(changed=False, name=name, state=state)
running = 'running' in status() running = 'running' in process_status
if running and state in ['started', 'monitored']: if running and state in ['started', 'monitored']:
module.exit_json(changed=False, name=name, state=state) module.exit_json(changed=False, name=name, state=state)
@ -119,7 +120,7 @@ def main():
if module.check_mode: if module.check_mode:
module.exit_json(changed=True) module.exit_json(changed=True)
status = run_command('unmonitor') status = run_command('unmonitor')
if status in ['not monitored']: if status in ['not monitored'] or 'unmonitor pending' in status:
module.exit_json(changed=True, name=name, state=state) module.exit_json(changed=True, name=name, state=state)
module.fail_json(msg='%s process not unmonitored' % name, status=status) module.fail_json(msg='%s process not unmonitored' % name, status=status)
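status() now lowercases the text after the process name, so later comparisons such as 'running' in process_status and the new 'unmonitor pending' check are case-insensitive. A small sketch of the parsing, using an assumed monit summary line:

line = "Process 'nginx' Running - restart pending"  # assumed sample output
name = 'nginx'
parts = line.split()
if len(parts) > 2 and parts[0].lower() == 'process' and parts[1] == "'%s'" % name:
    print(' '.join(parts[2:]).lower())  # -> running - restart pending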

@ -9,7 +9,7 @@
# Tim Bielawa <tbielawa@redhat.com> # Tim Bielawa <tbielawa@redhat.com>
# #
# This software may be freely redistributed under the terms of the GNU # This software may be freely redistributed under the terms of the GNU
# general public license version 2. # general public license version 2 or any later version.
# #
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
@ -30,10 +30,12 @@ options:
action: action:
description: description:
- Action to take. - Action to take.
- servicegroup options were added in 2.0.
required: true required: true
default: null default: null
choices: [ "downtime", "enable_alerts", "disable_alerts", "silence", "unsilence", choices: [ "downtime", "enable_alerts", "disable_alerts", "silence", "unsilence",
"silence_nagios", "unsilence_nagios", "command" ] "silence_nagios", "unsilence_nagios", "command", "servicegroup_service_downtime",
"servicegroup_host_downtime" ]
host: host:
description: description:
- Host to operate on in Nagios. - Host to operate on in Nagios.
@ -51,6 +53,12 @@ options:
Only usable with the C(downtime) action. Only usable with the C(downtime) action.
required: false required: false
default: Ansible default: Ansible
comment:
version_added: "2.0"
description:
- Comment for C(downtime) action.
required: false
default: Scheduling downtime
minutes: minutes:
description: description:
- Minutes to schedule downtime for. - Minutes to schedule downtime for.
@ -65,6 +73,11 @@ options:
aliases: [ "service" ] aliases: [ "service" ]
required: true required: true
default: null default: null
servicegroup:
version_added: "2.0"
description:
- The servicegroup we want to set downtimes/alerts for.
B(Required) option when using the C(servicegroup_service_downtime) and C(servicegroup_host_downtime) actions.
command: command:
description: description:
- The raw command to send to nagios, which - The raw command to send to nagios, which
@ -73,7 +86,7 @@ options:
required: true required: true
default: null default: null
author: Tim Bielawa author: "Tim Bielawa (@tbielawa)"
requirements: [ "Nagios" ] requirements: [ "Nagios" ]
''' '''
@ -84,12 +97,22 @@ EXAMPLES = '''
# schedule an hour of HOST downtime # schedule an hour of HOST downtime
- nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }} - nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}
# schedule an hour of HOST downtime, with a comment describing the reason
- nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}
comment='This host needs disciplining'
# schedule downtime for ALL services on HOST # schedule downtime for ALL services on HOST
- nagios: action=downtime minutes=45 service=all host={{ inventory_hostname }} - nagios: action=downtime minutes=45 service=all host={{ inventory_hostname }}
# schedule downtime for a few services # schedule downtime for a few services
- nagios: action=downtime services=frob,foobar,qeuz host={{ inventory_hostname }} - nagios: action=downtime services=frob,foobar,qeuz host={{ inventory_hostname }}
# set 30 minutes downtime for all services in servicegroup foo
- nagios: action=servicegroup_service_downtime minutes=30 servicegroup=foo host={{ inventory_hostname }}
# set 30 minutes downtime for all hosts in servicegroup foo
- nagios: action=servicegroup_host_downtime minutes=30 servicegroup=foo host={{ inventory_hostname }}
# enable SMART disk alerts # enable SMART disk alerts
- nagios: action=enable_alerts service=smart host={{ inventory_hostname }} - nagios: action=enable_alerts service=smart host={{ inventory_hostname }}
@ -169,13 +192,18 @@ def main():
'silence_nagios', 'silence_nagios',
'unsilence_nagios', 'unsilence_nagios',
'command', 'command',
'servicegroup_host_downtime',
'servicegroup_service_downtime',
] ]
module = AnsibleModule( module = AnsibleModule(
argument_spec=dict( argument_spec=dict(
action=dict(required=True, default=None, choices=ACTION_CHOICES), action=dict(required=True, default=None, choices=ACTION_CHOICES),
author=dict(default='Ansible'), author=dict(default='Ansible'),
comment=dict(default='Scheduling downtime'),
host=dict(required=False, default=None), host=dict(required=False, default=None),
servicegroup=dict(required=False, default=None),
minutes=dict(default=30), minutes=dict(default=30),
cmdfile=dict(default=which_cmdfile()), cmdfile=dict(default=which_cmdfile()),
services=dict(default=None, aliases=['service']), services=dict(default=None, aliases=['service']),
@ -185,11 +213,12 @@ def main():
action = module.params['action'] action = module.params['action']
host = module.params['host'] host = module.params['host']
servicegroup = module.params['servicegroup']
minutes = module.params['minutes'] minutes = module.params['minutes']
services = module.params['services'] services = module.params['services']
cmdfile = module.params['cmdfile'] cmdfile = module.params['cmdfile']
command = module.params['command'] command = module.params['command']
################################################################## ##################################################################
# Required args per action: # Required args per action:
# downtime = (minutes, service, host) # downtime = (minutes, service, host)
@ -217,6 +246,20 @@ def main():
except Exception: except Exception:
module.fail_json(msg='invalid entry for minutes') module.fail_json(msg='invalid entry for minutes')
######################################################################
if action in ['servicegroup_service_downtime', 'servicegroup_host_downtime']:
# Make sure there's an actual servicegroup selected
if not servicegroup:
module.fail_json(msg='no servicegroup selected to set downtime for')
# Make sure minutes is a number
try:
m = int(minutes)
if not isinstance(m, types.IntType):
module.fail_json(msg='minutes must be a number')
except Exception:
module.fail_json(msg='invalid entry for minutes')
################################################################## ##################################################################
if action in ['enable_alerts', 'disable_alerts']: if action in ['enable_alerts', 'disable_alerts']:
if not services: if not services:
@ -258,7 +301,9 @@ class Nagios(object):
self.module = module self.module = module
self.action = kwargs['action'] self.action = kwargs['action']
self.author = kwargs['author'] self.author = kwargs['author']
self.comment = kwargs['comment']
self.host = kwargs['host'] self.host = kwargs['host']
self.servicegroup = kwargs['servicegroup']
self.minutes = int(kwargs['minutes']) self.minutes = int(kwargs['minutes'])
self.cmdfile = kwargs['cmdfile'] self.cmdfile = kwargs['cmdfile']
self.command = kwargs['command'] self.command = kwargs['command']
@ -293,7 +338,7 @@ class Nagios(object):
cmdfile=self.cmdfile) cmdfile=self.cmdfile)
def _fmt_dt_str(self, cmd, host, duration, author=None, def _fmt_dt_str(self, cmd, host, duration, author=None,
comment="Scheduling downtime", start=None, comment=None, start=None,
svc=None, fixed=1, trigger=0): svc=None, fixed=1, trigger=0):
""" """
Format an external-command downtime string. Format an external-command downtime string.
@ -326,6 +371,9 @@ class Nagios(object):
if not author: if not author:
author = self.author author = self.author
if not comment:
comment = self.comment
if svc is not None: if svc is not None:
dt_args = [svc, str(start), str(end), str(fixed), str(trigger), dt_args = [svc, str(start), str(end), str(fixed), str(trigger),
str(duration_s), author, comment] str(duration_s), author, comment]
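# Note on the signature change above: the hard-coded "Scheduling downtime"
# default moves out of _fmt_dt_str(); when a caller passes no comment the
# method falls back to self.comment, i.e. the new module-level comment
# parameter (default 'Scheduling downtime'), so playbooks can supply their
# own downtime comment.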
@ -356,7 +404,7 @@ class Nagios(object):
notif_str = "[%s] %s" % (entry_time, cmd) notif_str = "[%s] %s" % (entry_time, cmd)
if host is not None: if host is not None:
notif_str += ";%s" % host notif_str += ";%s" % host
if svc is not None: if svc is not None:
notif_str += ";%s" % svc notif_str += ";%s" % svc
@ -796,42 +844,42 @@ class Nagios(object):
return return_str_list return return_str_list
else: else:
return "Fail: could not write to the command file" return "Fail: could not write to the command file"
def silence_nagios(self): def silence_nagios(self):
""" """
This command is used to disable notifications for all hosts and services This command is used to disable notifications for all hosts and services
in nagios. in nagios.
This is a 'SHUT UP, NAGIOS' command This is a 'SHUT UP, NAGIOS' command
""" """
cmd = 'DISABLE_NOTIFICATIONS' cmd = 'DISABLE_NOTIFICATIONS'
self._write_command(self._fmt_notif_str(cmd)) self._write_command(self._fmt_notif_str(cmd))
def unsilence_nagios(self): def unsilence_nagios(self):
""" """
This command is used to enable notifications for all hosts and services This command is used to enable notifications for all hosts and services
in nagios. in nagios.
This is a 'OK, NAGIOS, GO'' command This is a 'OK, NAGIOS, GO'' command
""" """
cmd = 'ENABLE_NOTIFICATIONS' cmd = 'ENABLE_NOTIFICATIONS'
self._write_command(self._fmt_notif_str(cmd)) self._write_command(self._fmt_notif_str(cmd))
def nagios_cmd(self, cmd): def nagios_cmd(self, cmd):
""" """
This sends an arbitrary command to nagios This sends an arbitrary command to nagios
It prepends the submitted time and appends a \n It prepends the submitted time and appends a \n
You just have to provide the properly formatted command You just have to provide the properly formatted command
""" """
pre = '[%s]' % int(time.time()) pre = '[%s]' % int(time.time())
post = '\n' post = '\n'
cmdstr = '%s %s %s' % (pre, cmd, post) cmdstr = '%s %s %s' % (pre, cmd, post)
self._write_command(cmdstr) self._write_command(cmdstr)
def act(self): def act(self):
""" """
Figure out what you want to do from ansible, and then do the Figure out what you want to do from ansible, and then do the
@ -847,6 +895,12 @@ class Nagios(object):
self.schedule_svc_downtime(self.host, self.schedule_svc_downtime(self.host,
services=self.services, services=self.services,
minutes=self.minutes) minutes=self.minutes)
elif self.action == "servicegroup_host_downtime":
if self.servicegroup:
self.schedule_servicegroup_host_downtime(servicegroup=self.servicegroup, minutes=self.minutes)
elif self.action == "servicegroup_service_downtime":
if self.servicegroup:
self.schedule_servicegroup_svc_downtime(servicegroup=self.servicegroup, minutes=self.minutes)
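# Dispatch note (illustrative values only): action=servicegroup_host_downtime
# servicegroup=foo minutes=30 ends up here and calls
# schedule_servicegroup_host_downtime(servicegroup='foo', minutes=30), which
# formats the corresponding downtime external command and writes it to the
# Nagios command file; the service variant does the same for services.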
# toggle the host AND service alerts # toggle the host AND service alerts
elif self.action == 'silence': elif self.action == 'silence':
@ -871,13 +925,13 @@ class Nagios(object):
services=self.services) services=self.services)
elif self.action == 'silence_nagios': elif self.action == 'silence_nagios':
self.silence_nagios() self.silence_nagios()
elif self.action == 'unsilence_nagios': elif self.action == 'unsilence_nagios':
self.unsilence_nagios() self.unsilence_nagios()
elif self.action == 'command': elif self.action == 'command':
self.nagios_cmd(self.command) self.nagios_cmd(self.command)
# wtf? # wtf?
else: else:
self.module.fail_json(msg="unknown action specified: '%s'" % \ self.module.fail_json(msg="unknown action specified: '%s'" % \

@ -22,14 +22,14 @@ DOCUMENTATION = '''
--- ---
module: newrelic_deployment module: newrelic_deployment
version_added: "1.2" version_added: "1.2"
author: Matt Coddington author: "Matt Coddington (@mcodd)"
short_description: Notify newrelic about app deployments short_description: Notify newrelic about app deployments
description: description:
- Notify newrelic about app deployments (see http://newrelic.github.io/newrelic_api/NewRelicApi/Deployment.html) - Notify newrelic about app deployments (see https://docs.newrelic.com/docs/apm/new-relic-apm/maintenance/deployment-notifications#api)
options: options:
token: token:
description: description:
- API token. - API token, to place in the x-api-key header.
required: true required: true
app_name: app_name:
description: description:
@ -72,8 +72,7 @@ options:
choices: ['yes', 'no'] choices: ['yes', 'no']
version_added: 1.5.1 version_added: 1.5.1
# informational: requirements for nodes requirements: []
requirements: [ urllib, urllib2 ]
''' '''
EXAMPLES = ''' EXAMPLES = '''
@ -83,6 +82,8 @@ EXAMPLES = '''
revision=1.0 revision=1.0
''' '''
import urllib
# =========================================== # ===========================================
# Module execution. # Module execution.
# #
@ -102,6 +103,7 @@ def main():
environment=dict(required=False), environment=dict(required=False),
validate_certs = dict(default='yes', type='bool'), validate_certs = dict(default='yes', type='bool'),
), ),
required_one_of=[['app_name', 'application_id']],
supports_check_mode=True supports_check_mode=True
) )
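The required_one_of constraint added above makes AnsibleModule itself reject the task unless at least one of app_name or application_id is supplied. A hand-rolled sketch of the equivalent check it replaces (illustration only, not how the module does it):

params = {'app_name': None, 'application_id': None}  # hypothetical inputs
if not (params['app_name'] or params['application_id']):
    raise SystemExit("one of the following is required: app_name, application_id")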

@ -7,7 +7,11 @@ short_description: Create PagerDuty maintenance windows
description: description:
- This module will let you create PagerDuty maintenance windows - This module will let you create PagerDuty maintenance windows
version_added: "1.2" version_added: "1.2"
author: Justin Johns author:
- "Andrew Newdigate (@suprememoocow)"
- "Dylan Silva (@thaumos)"
- "Justin Johns"
- "Bruce Pennypacker"
requirements: requirements:
- PagerDuty API access - PagerDuty API access
options: options:
@ -16,7 +20,7 @@ options:
- Create a maintenance window or get a list of ongoing windows. - Create a maintenance window or get a list of ongoing windows.
required: true required: true
default: null default: null
choices: [ "running", "started", "ongoing" ] choices: [ "running", "started", "ongoing", "absent" ]
aliases: [] aliases: []
name: name:
description: description:
@ -58,11 +62,11 @@ options:
version_added: '1.8' version_added: '1.8'
service: service:
description: description:
- PagerDuty service ID. - A comma-separated list of PagerDuty service IDs.
required: false required: false
default: null default: null
choices: [] choices: []
aliases: [] aliases: [ services ]
hours: hours:
description: description:
- Length of maintenance window in hours. - Length of maintenance window in hours.
@ -93,9 +97,6 @@ options:
default: 'yes' default: 'yes'
choices: ['yes', 'no'] choices: ['yes', 'no']
version_added: 1.5.1 version_added: 1.5.1
notes:
- This module does not yet have support to end maintenance windows.
''' '''
EXAMPLES=''' EXAMPLES='''
@ -129,6 +130,14 @@ EXAMPLES='''
service=FOO123 service=FOO123
hours=4 hours=4
desc=deployment desc=deployment
register: pd_window
# Delete the previous maintenance window
- pagerduty: name=companyabc
user=example@example.com
passwd=password123
state=absent
service={{ pd_window.result.maintenance_window.id }}
''' '''
import datetime import datetime
@ -149,7 +158,12 @@ def ongoing(module, name, user, passwd, token):
if info['status'] != 200: if info['status'] != 200:
module.fail_json(msg="failed to lookup the ongoing window: %s" % info['msg']) module.fail_json(msg="failed to lookup the ongoing window: %s" % info['msg'])
return False, response.read() try:
json_out = json.loads(response.read())
except:
json_out = ""
return False, json_out, False
def create(module, name, user, passwd, token, requester_id, service, hours, minutes, desc): def create(module, name, user, passwd, token, requester_id, service, hours, minutes, desc):
@ -163,7 +177,8 @@ def create(module, name, user, passwd, token, requester_id, service, hours, minu
'Authorization': auth_header(user, passwd, token), 'Authorization': auth_header(user, passwd, token),
'Content-Type' : 'application/json', 'Content-Type' : 'application/json',
} }
request_data = {'maintenance_window': {'start_time': start, 'end_time': end, 'description': desc, 'service_ids': [service]}} request_data = {'maintenance_window': {'start_time': start, 'end_time': end, 'description': desc, 'service_ids': service}}
if requester_id: if requester_id:
request_data['requester_id'] = requester_id request_data['requester_id'] = requester_id
else: else:
@ -175,19 +190,50 @@ def create(module, name, user, passwd, token, requester_id, service, hours, minu
if info['status'] != 200: if info['status'] != 200:
module.fail_json(msg="failed to create the window: %s" % info['msg']) module.fail_json(msg="failed to create the window: %s" % info['msg'])
return False, response.read() try:
json_out = json.loads(response.read())
except:
json_out = ""
return False, json_out, True
def absent(module, name, user, passwd, token, requester_id, service):
url = "https://" + name + ".pagerduty.com/api/v1/maintenance_windows/" + service[0]
headers = {
'Authorization': auth_header(user, passwd, token),
'Content-Type' : 'application/json',
}
request_data = {}
if requester_id:
request_data['requester_id'] = requester_id
else:
if token:
module.fail_json(msg="requester_id is required when using a token")
data = json.dumps(request_data)
response, info = fetch_url(module, url, data=data, headers=headers, method='DELETE')
if info['status'] != 200:
module.fail_json(msg="failed to delete the window: %s" % info['msg'])
try:
json_out = json.loads(response.read())
except:
json_out = ""
return False, json_out, True
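# Note on the delete path above (IDs are illustrative): state=absent reuses
# the service parameter to carry the maintenance window ID, so a playbook
# that registered the create result can pass
#   service={{ pd_window.result.maintenance_window.id }}
# and absent() sends a DELETE to
#   https://<name>.pagerduty.com/api/v1/maintenance_windows/<window-id>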
def main(): def main():
module = AnsibleModule( module = AnsibleModule(
argument_spec=dict( argument_spec=dict(
state=dict(required=True, choices=['running', 'started', 'ongoing']), state=dict(required=True, choices=['running', 'started', 'ongoing', 'absent']),
name=dict(required=True), name=dict(required=True),
user=dict(required=False), user=dict(required=False),
passwd=dict(required=False), passwd=dict(required=False),
token=dict(required=False), token=dict(required=False),
service=dict(required=False), service=dict(required=False, type='list', aliases=["services"]),
requester_id=dict(required=False), requester_id=dict(required=False),
hours=dict(default='1', required=False), hours=dict(default='1', required=False),
minutes=dict(default='0', required=False), minutes=dict(default='0', required=False),
@ -214,15 +260,21 @@ def main():
if state == "running" or state == "started": if state == "running" or state == "started":
if not service: if not service:
module.fail_json(msg="service not specified") module.fail_json(msg="service not specified")
(rc, out) = create(module, name, user, passwd, token, requester_id, service, hours, minutes, desc) (rc, out, changed) = create(module, name, user, passwd, token, requester_id, service, hours, minutes, desc)
if rc == 0:
changed=True
if state == "ongoing": if state == "ongoing":
(rc, out) = ongoing(module, name, user, passwd, token) (rc, out, changed) = ongoing(module, name, user, passwd, token)
if state == "absent":
(rc, out, changed) = absent(module, name, user, passwd, token, requester_id, service)
if rc != 0: if rc != 0:
module.fail_json(msg="failed", result=out) module.fail_json(msg="failed", result=out)
module.exit_json(msg="success", result=out)
module.exit_json(msg="success", result=out, changed=changed)
# import module snippets # import module snippets
from ansible.module_utils.basic import * from ansible.module_utils.basic import *

@ -7,7 +7,9 @@ short_description: Pause/unpause Pingdom alerts
description: description:
- This module will let you pause/unpause Pingdom alerts - This module will let you pause/unpause Pingdom alerts
version_added: "1.2" version_added: "1.2"
author: Justin Johns author:
- "Dylan Silva (@thaumos)"
- "Justin Johns"
requirements: requirements:
- "This pingdom python library: https://github.com/mbabineau/pingdom-python" - "This pingdom python library: https://github.com/mbabineau/pingdom-python"
options: options:
@ -111,7 +113,7 @@ def main():
) )
if not HAS_PINGDOM: if not HAS_PINGDOM:
module.fail_json(msg="Missing requried pingdom module (check docs)") module.fail_json(msg="Missing required pingdom module (check docs)")
checkid = module.params['checkid'] checkid = module.params['checkid']
state = module.params['state'] state = module.params['state']
