Add GCE guide and retool it a bit to show the add_host interactions; improvements/upgrades are welcome.

Had to shoot the recently merged nova_group module in the head temporarily, as it contained a dict comprehension (Python 2.7+ only), which means it can't work on all the platforms we support
and was also breaking docs builds on CentOS.  Will engage with the list about that shortly.
pull/6493/merge
Michael DeHaan 11 years ago
parent cfe0465934
commit 8082f74798

@ -17,7 +17,6 @@ New Modules:
* system: locale_gen
* cloud: digital_ocean_domain
* cloud: digital_ocean_sshkey
* cloud: nova_group (security groups)
* cloud: nova_fip (floating IPs)
* cloud: rax_identity
* cloud: ec2_asg (configure autoscaling groups)

@ -1,20 +1,23 @@
Google Cloud Platform Guide
===========================
.. _gce_intro:
Introduction
------------
.. note:: This section of the documentation is under construction. We are in the process of adding more examples about all of the GCE modules and how they work together. Upgrades via github pull requests are welcome!
Ansible contains modules for managing Google Compute Engine resources, including creating instances, controlling network access, working with persistent disks, and managing
load balancers. Additionally, there is an inventory plugin that can automatically pull all of your GCE instances into Ansible dynamic inventory, and create groups by tag and other properties.

The GCE modules all require the apache-libcloud module, which you can install from pip:
.. code-block:: bash
$ pip install apache-libcloud
.. note:: If you're using Ansible on Mac OS X, libcloud also needs to access a CA cert chain. You'll need to download one (you can get one `here <http://curl.haxx.se/docs/caextract.html>`_).
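One way to wire that up from a shell, assuming your SSL stack honors the standard ``SSL_CERT_FILE`` environment variable (the exact download URL is on the page linked above; the one below is illustrative):

.. code-block:: bash

    $ curl -O http://curl.haxx.se/ca/cacert.pem
    $ export SSL_CERT_FILE=$(pwd)/cacert.pem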
Credentials
-----------
@ -25,16 +28,15 @@ To work with the GCE modules, you'll first need to get some credentials. You can
$ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem
There are two different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning and configuration actions:

* by providing them to the modules directly
* by populating a ``secrets.py`` file

(The inventory script reads its credentials from a separate ``gce.ini`` file, covered in the dynamic inventory section below.)
Calling Modules By Passing Credentials
``````````````````````````````````````
For the GCE modules you can specify the credentials as arguments:
* ``service_account_email``: email associated with the project
* ``pem_file``: path to the pem file
@ -43,21 +45,32 @@ For the GCE modules you can specify the credentials as argument:
For example, to create a new instance using the cloud module, you can use the following configuration:
.. code-block:: yaml
- name: Create instance(s)
hosts: localhost
connection: local
gather_facts: no
vars:
service_account_email: unique-id@developer.gserviceaccount.com
pem_file: /path/to/project.pem
project_id: project-id
machine_type: n1-standard-1
image: debian-7
tasks:
- name: Launch instances
gce:
instance_names: dev
machine_type: "{{ machine_type }}"
image: "{{ image }}"
service_account_email: "{{ service_account_email }}"
pem_file: "{{ pem_file }}"
project_id: "{{ project_id }}"
Calling Modules with secrets.py
```````````````````````````````
Create a file ``secrets.py`` that looks like the following, and put it in a folder that is on your ``$PYTHONPATH``:
@ -66,22 +79,26 @@ Create a file ``secrets.py`` looking like following, and put it in some folder w
GCE_PARAMS = ('i...@project.googleusercontent.com', '/path/to/project.pem')
GCE_KEYWORD_PARAMS = {'project': 'project-name'}
Now the modules can be used as above, but the account information can be omitted.
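For example, with ``secrets.py`` in place, the launch task from the previous section can drop the credential arguments entirely (a minimal sketch -- it assumes the same ``hosts: localhost`` / ``connection: local`` play header as above, and the instance name is just illustrative):

.. code-block:: yaml

    tasks:
      - name: Launch instances
        gce:
          instance_names: dev
          machine_type: n1-standard-1
          image: debian-7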
GCE Dynamic Inventory
---------------------
The best way to interact with your hosts is to use the gce inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.
To use the GCE dynamic inventory script, copy ``gce.py`` from ``plugins/inventory`` in the ansible checkout into your inventory directory and make it executable, and populate the ``gce.ini`` file that ships alongside it. You can specify credentials for ``gce.py`` using the ``GCE_INI_PATH`` environment variable -- the default is to look for ``gce.ini`` in the same directory as the inventory script.
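A sketch of the kind of settings ``gce.ini`` holds -- the key names below are assumptions to verify against the template that ships in ``plugins/inventory``, and the values are placeholders::

    [gce]
    gce_service_account_email_address = unique-id@developer.gserviceaccount.com
    gce_service_account_pem_file_path = /path/to/project.pem
    gce_project_id = project-id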
Let's test our inventory script to see if it can talk to Google Cloud:

.. code-block:: bash

    $ ./gce.py --list
@ -92,11 +109,11 @@ Let's test our inventory script to see if it can talk to Google Cloud.
"x.x.x.x"
],
You should see output describing the hosts you have, if any, running in Google Compute Engine.

As with all dynamic inventory plugins in Ansible, you can configure the inventory path in ansible.cfg. The recommended way to use the inventory is to create an ``inventory`` directory, and place both the ``gce.py`` script and a file containing ``localhost`` in it. This allows cloud inventory to be used alongside local inventory (such as a physical datacenter) or machines running in different providers.

Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead of an individual file will cause ansible to evaluate each file in that directory for inventory.
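A minimal sketch of that setup (paths are illustrative; adjust them to your ansible checkout):

.. code-block:: bash

    $ mkdir inventory
    $ cp /path/to/ansible/plugins/inventory/gce.py inventory/
    $ chmod +x inventory/gce.py
    $ echo "localhost" > inventory/local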
Let's once again use our inventory script to see if it can talk to Google Cloud:
.. code-block:: bash
@ -107,12 +124,12 @@ Let's test our inventory script to see if it can talk to Google Cloud:
"x.x.x.x"
],
The output should be similar to the previous command. If you want less output and just want to check for SSH connectivity, use ``-m ping`` instead.
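For example, assuming the ``inventory/`` directory layout described above:

.. code-block:: bash

    $ ansible all -i inventory/ -m ping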
Use Cases
---------
For the following use case, let's use this small shell script as a wrapper.
.. code-block:: bash
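
    #!/bin/bash
    # Sketch only: the guide's actual wrapper is not shown in this hunk. The idea is
    # to point libcloud at a CA cert bundle (see the Mac OS X note above), skip host
    # key checking for freshly created instances, and run the given playbook against
    # the dynamic inventory directory. Paths and names here are illustrative.
    PLAYBOOK="$1"
    if [ -z "$PLAYBOOK" ]; then
        echo "Usage: $0 <playbook.yml>" >&2
        exit 1
    fi

    export SSL_CERT_FILE=$(pwd)/cacert.pem
    export ANSIBLE_HOST_KEY_CHECKING=False

    ansible-playbook -v -i inventory/ "$PLAYBOOK"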
@ -146,41 +163,83 @@ A playbook would look like this:
- name: Create instance(s)
hosts: localhost
gather_facts: no
connection: local
vars:
machine_type: n1-standard-1 # default
image: debian-7
service_account_email: unique-id@developer.gserviceaccount.com
pem_file: /path/to/project.pem
project_id: project-id
tasks:
- name: Launch instances
gce:
instance_names: dev
machine_type: "{{ machine_type }}"
image: "{{ image }}"
service_account_email: "{{ service_account_email }}"
pem_file: "{{ pem_file }}"
project_id: "{{ project_id }}"
tags: webserver
register: gce
- name: Wait for SSH to come up
wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=60
with_items: gce.instance_data
- name: Add new instances to an in-memory inventory group
  add_host: hostname={{ item.public_ip }} groupname=new_instances
  with_items: gce.instance_data
- name: Manage new instances
hosts: new_instances
connection: ssh
roles:
- base_configuration
- production_server
Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines
in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point.
Configuring instances in a group
````````````````````````````````
All of the created instances in GCE are grouped by tag. Since this is a cloud, it's probably best to ignore hostnames and just focus on group management.
Normally we'd also use roles here, but the following example is a simple one. Here we will also use the "gce_net" module to open up access to port 80 on
these nodes.
The variables in the 'vars' section could also be kept in a 'vars_files' file or something encrypted with Ansible-vault, if you so choose. This is just
a basic example of what is possible::
- name: Setup web servers
hosts: tag_webserver
gather_facts: no
vars:
machine_type: n1-standard-1 # default
image: debian-7
service_account_email: unique-id@developer.gserviceaccount.com
pem_file: /path/to/project.pem
project_id: project-id
tasks:
- name: Install lighttpd
apt: pkg=lighttpd state=installed
sudo: True
- name: Allow HTTP
local_action: gce_net
args:
fwname: "all-http"
name: "default"
allowed: "tcp:80"
state: "present"
service_account_email: "{{ service_account_email }}"
pem_file: "{{ pem_file }}"
project_id: "{{ project_id }}"
With this example we install a web server (lighttpd) on the new instances and ensure that port 80 is open for incoming connections.

By pointing your browser to the IP of the server, you should see a page welcoming you.
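As mentioned above, the credential variables don't have to live in the play itself; here is a minimal sketch of pulling them from a ``vars_files`` entry instead (the filename ``gce_vars.yml`` is just an example, and the file could be encrypted with ansible-vault):

.. code-block:: yaml

    - name: Setup web servers
      hosts: tag_webserver
      gather_facts: no
      vars_files:
        - gce_vars.yml   # defines service_account_email, pem_file, project_id, etc.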
Upgrades to this documentation are welcome, hit the github link at the top right of this page if you would like to make additions!

@ -8,6 +8,7 @@ This section is new and evolving. The idea here is explore particular use cases
guide_aws
guide_rax
guide_gce
guide_vagrant
guide_rolling_upgrade

@ -1,343 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013, John Dewey <john@dewey.ws>
#
# This module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software. If not, see <http://www.gnu.org/licenses/>.
import locale
import os
import sys
import six
try:
from novaclient.openstack.common import uuidutils
from novaclient.openstack.common import strutils
from novaclient.v1_1 import client
from novaclient.v1_1 import security_groups
from novaclient.v1_1 import security_group_rules
from novaclient import exceptions
except ImportError:
print("failed=True msg='novaclient is required for this module to work'")
DOCUMENTATION = '''
---
module: nova_group
version_added: "1.6"
short_description: Maintain nova security groups.
description:
- Manage nova security groups using the python-novaclient library.
options:
login_username:
description:
- Login username to authenticate to keystone. If not set then the value of the OS_USERNAME environment variable is used.
required: false
default: None
login_password:
description:
- Password of login user. If not set then the value of the OS_PASSWORD environment variable is used.
required: false
default: None
login_tenant_name:
description:
- The tenant name of the login user. If not set then the value of the OS_TENANT_NAME environment variable is used.
required: false
default: None
auth_url:
description:
- The keystone url for authentication. If not set then the value of the OS_AUTH_URL environment variable is used.
required: false
default: None
region_name:
description:
- Name of the region.
required: false
default: None
name:
description:
- Name of the security group.
required: true
description:
description:
- Description of the security group.
required: true
rules:
description:
- List of firewall rules to enforce in this group (see example).
Must specify either an IPv4 'cidr' address or 'group' UUID.
required: true
state:
description:
- Indicate desired state of the resource.
choices: ['present', 'absent']
required: false
default: 'present'
requirements: ["novaclient"]
'''
EXAMPLES = '''
- name: create example group and rules
local_action:
module: nova_group
name: example
description: an example nova group
rules:
- ip_protocol: tcp
from_port: 80
to_port: 80
cidr: 0.0.0.0/0
- ip_protocol: tcp
from_port: 3306
to_port: 3306
group: "{{ group_uuid }}"
- ip_protocol: icmp
from_port: -1
to_port: -1
cidr: 0.0.0.0/0
- name: delete rule from example group
local_action:
module: nova_group
name: example
description: an example nova group
rules:
- ip_protocol: tcp
from_port: 80
to_port: 80
cidr: 0.0.0.0/0
- ip_protocol: icmp
from_port: -1
to_port: -1
cidr: 0.0.0.0/0
state: absent
'''
class NovaGroup(object):
def __init__(self, client):
self._sg = security_groups.SecurityGroupManager(client)
# Taken from novaclient/v1_1/shell.py.
def _get_secgroup(self, secgroup):
# Check secgroup is an UUID
if uuidutils.is_uuid_like(strutils.safe_encode(secgroup)):
try:
sg = self._sg.get(secgroup)
return sg
except exceptions.NotFound:
return False
# Check secgroup as a name
for s in self._sg.list():
encoding = (locale.getpreferredencoding() or
sys.stdin.encoding or
'UTF-8')
if not six.PY3:
s.name = s.name.encode(encoding)
if secgroup == s.name:
return s
return False
class SecurityGroup(NovaGroup):
def __init__(self, client, module):
super(SecurityGroup, self).__init__(client)
self._module = module
self._name = module.params.get('name')
self._description = module.params.get('description')
def get(self):
return self._get_secgroup(self._name)
def create(self):
return self._sg.create(self._name, self._description)
def delete(self):
return self._sg.delete(self._name)
class SecurityGroupRule(NovaGroup):
def __init__(self, client, module):
super(SecurityGroupRule, self).__init__(client)
self._module = module
self._name = module.params.get('name')
self._rules = module.params.get('rules')
self._validate_rules()
self._sgr = security_group_rules.SecurityGroupRuleManager(client)
self._secgroup = self._get_secgroup(self._name)
self._current_rules = self._lookup_dict(self._secgroup.rules)
def _concat_security_group_rule(self, rule):
"""
Normalize the given rule into a string in the format of:
protocol-from_port-to_port-group
The `group` needs a bit of massaging.
1. If an empty dict -- return None.
2. If a dict -- lookup group UUID (novaclient only returns the name).
3. Return `group` from rules dict.
:param rule: A novaclient SecurityGroupRule object.
"""
group = rule.get('group')
# Oddly, novaclient occasionally returns None as {}.
if group is not None and not any(group):
group = None
elif type(group) == dict:
g = group.get('name')
group = self._get_secgroup(g)
r = "%s-%s-%s-%s" % (rule.get('ip_protocol'),
rule.get('from_port'),
rule.get('to_port'),
group)
return r
def _lookup_dict(self, rules):
"""
Populate a dict with current rules.
:param rule: A novaclient SecurityGroupRule object.
"""
return {self._concat_security_group_rule(rule): rule for rule in rules}
def _get_rule(self, rule):
"""
Return rule when found and False when not.
:param rule: A novaclient SecurityGroupRule object.
"""
r = self._concat_security_group_rule(rule)
if r in self._current_rules:
return self._current_rules[r]
def _validate_rules(self):
for rule in self._rules:
if 'group' in rule and 'cidr' in rule:
self._module.fail_json(msg="Specify group OR cidr")
def create(self):
changed = False
filtered = [rule for rule in self._rules
if rule.get('state') != 'absent']
for rule in filtered:
if not self._get_rule(rule):
if 'cidr' in rule:
self._sgr.create(self._secgroup.id,
rule.get('ip_protocol'),
rule.get('from_port'),
rule.get('to_port'),
cidr=rule.get('cidr'))
changed = True
if 'group' in rule:
self._sgr.create(self._secgroup.id,
rule.get('ip_protocol'),
rule.get('from_port'),
rule.get('to_port'),
group_id=rule.get('group'))
changed = True
return changed
def delete(self):
changed = False
filtered = [rule for rule in self._rules
if rule.get('state') == 'absent']
for rule in filtered:
r = self._get_rule(rule)
if r:
self._sgr.delete(r.get('id'))
changed = True
return changed
def update(self):
changed = False
if self.create():
changed = True
if self.delete():
changed = True
return changed
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
description=dict(required=True),
rules=dict(),
login_username=dict(),
login_password=dict(no_log=True),
login_tenant_name=dict(),
auth_url= dict(),
region_name=dict(default=None),
state = dict(default='present', choices=['present', 'absent']),
),
supports_check_mode=False,
)
login_username = module.params.get('login_username')
login_password = module.params.get('login_password')
login_tenant_name = module.params.get('login_tenant_name')
auth_url = module.params.get('auth_url')
# allow stackrc environment variables to be used if ansible vars aren't set
if not login_username and 'OS_USERNAME' in os.environ:
login_username = os.environ['OS_USERNAME']
if not login_password and 'OS_PASSWORD' in os.environ:
login_password = os.environ['OS_PASSWORD']
if not login_tenant_name and 'OS_TENANT_NAME' in os.environ:
login_tenant_name = os.environ['OS_TENANT_NAME']
if not auth_url and 'OS_AUTH_URL' in os.environ:
auth_url = os.environ['OS_AUTH_URL']
nova = client.Client(login_username,
login_password,
login_tenant_name,
auth_url,
service_type='compute')
try:
nova.authenticate()
except exceptions.Unauthorized as e:
module.fail_json(msg="Invalid OpenStack Nova credentials.: %s" % e.message)
except exceptions.AuthorizationFailure as e:
module.fail_json(msg="Unable to authorize user: %s" % e.message)
rules = module.params.get('rules')
state = module.params.get('state')
security_group = SecurityGroup(nova, module)
changed = False
group_id = None
group = security_group.get()
if group:
group_id = group.id
if state == 'absent':
security_group.delete()
changed = True
elif state == 'present':
group = security_group.create()
changed = True
group_id = group.id
if rules is not None:
security_group_rules = SecurityGroupRule(nova, module)
if security_group_rules.update():
changed = True
module.exit_json(changed=changed, group_id=group_id)
# import module snippets
from ansible.module_utils.basic import *
main()