forwarded docker_extra_args to latest upstream/origin/devel

pull/13425/head
Thomas Steinbach 9 years ago
commit cd2c140f69

@@ -0,0 +1,50 @@
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
- Feature Idea
- Documentation Report
##### ANSIBLE VERSION
```
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
<!--- Paste example playbooks or commands between quotes -->
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
<!--- Paste verbatim command output between quotes -->
```

@@ -0,0 +1,24 @@
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Pull Request
- New Module Pull Request
- Bugfix Pull Request
- Docs Pull Request
##### ANSIBLE VERSION
```
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
##### SUMMARY
<!--- Describe the change, including rationale and design decisions -->
<!---
If you are fixing an existing issue, please include "Fixes #nnn" in your
commit message and your description; but you should still explain what
the change does.
-->
```
<!-- Paste verbatim command output here, e.g. before and after your change -->
```

.gitignore

@@ -31,6 +31,7 @@ docs/man/man3/*
 *.sublime-workspace
 # docsite stuff...
 docsite/rst/modules_by_category.rst
+docsite/rst/playbooks_directives.rst
 docsite/rst/list_of_*.rst
 docsite/rst/*_module.rst
 docsite/*.html
@@ -47,6 +48,8 @@ deb-build
 *.swo
 credentials.yml
 # test output
+*.retry
+*.out
 .coverage
 .tox
 results.xml

@@ -1,16 +1,25 @@
-sudo: false
+dist: trusty
+sudo: required
+services:
+  - docker
 language: python
 matrix:
   include:
-    - env: TOXENV=py24 INTEGRATION=no
-    - env: TOXENV=py26 INTEGRATION=yes
+    - env: TARGET=sanity TOXENV=py24
+    - env: TARGET=sanity TOXENV=py26
       python: 2.6
-    - env: TOXENV=py27 INTEGRATION=yes
+    - env: TARGET=sanity TOXENV=py27
       python: 2.7
-    - env: TOXENV=py34 INTEGRATION=no
+    - env: TARGET=sanity TOXENV=py34
       python: 3.4
-    - env: TOXENV=py35 INTEGRATION=no
+    - env: TARGET=sanity TOXENV=py35
       python: 3.5
+    - env: TARGET=centos6
+    - env: TARGET=centos7 TARGET_OPTIONS="--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro"
+    - env: TARGET=fedora23 TARGET_OPTIONS="--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro"
+    - env: TARGET=fedora-rawhide TARGET_OPTIONS="--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro"
+    - env: TARGET=ubuntu1204
+    - env: TARGET=ubuntu1404
 addons:
   apt:
     sources:
@@ -18,15 +27,16 @@ addons:
     packages:
     - python2.4
 install:
-  - pip install tox PyYAML Jinja2 sphinx
+  - pip install tox coveralls
 script:
-  # urllib2's defaults are not secure enough for us
-  - ./test/code-smell/replace-urlopen.sh .
-  - ./test/code-smell/use-compat-six.sh lib
-  - ./test/code-smell/boilerplate.sh
-  - if test x"$TOXENV" != x'py24' ; then tox ; fi
-  - if test x"$TOXENV" = x'py24' ; then python2.4 -V && python2.4 -m compileall -fq -x 'module_utils/(a10|rax|openstack|ec2|gce).py' lib/ansible/module_utils ; fi
-  #- make -C docsite all
-  - if test x"$INTEGRATION" = x'yes' ; then source ./hacking/env-setup && cd test/integration/ && make parsing && make test_var_precedence && make unicode ; fi
+  - ./test/utils/run_tests.sh
 after_success:
   - coveralls
+notifications:
+  irc:
+    channels:
+      - "chat.freenode.net#ansible-notices"
+    on_success: change
+    on_failure: always
+    skip_join: true
+    nick: ansibletravis

File diff suppressed because it is too large

@@ -1,27 +1,29 @@
-Welcome To Ansible GitHub
-=========================
+# WELCOME TO ANSIBLE GITHUB

 Hi! Nice to see you here!

-If you'd like to ask a question
-===============================
+## QUESTIONS ?

-Please see [this web page ](http://docs.ansible.com/community.html) for community information, which includes pointers on how to ask questions on the [mailing lists](http://docs.ansible.com/community.html#mailing-list-information) and IRC.
+Please see the [community page](http://docs.ansible.com/community.html) for information on how to ask questions on the [mailing lists](http://docs.ansible.com/community.html#mailing-list-information) and IRC.

-The github issue tracker is not the best place for questions for various reasons, but both IRC and the mailing list are very helpful places for those things, and that page has the pointers to those.
+The GitHub issue tracker is not the best place for questions for various reasons, but both IRC and the mailing list are very helpful places for those things, as the community page explains best.

-If you'd like to contribute code
-================================
+## CONTRIBUTING ?

-Please see [this web page](http://docs.ansible.com/community.html) for information about the contribution process. Important license agreement information is also included on that page.
+Please see the [community page](http://docs.ansible.com/community.html) for information regarding the contribution process. Important license agreement information is also included on that page.

-If you'd like to file a bug
-===========================
+## BUG TO REPORT ?

-I'd also read the community page above, but in particular, make sure you copy [this issue template](https://github.com/ansible/ansible/blob/devel/ISSUE_TEMPLATE.md) into your ticket description. We have a friendly neighborhood bot that will remind you if you forget :) This template helps us organize tickets faster and prevents asking some repeated questions, so it's very helpful to us and we appreciate your help with it.
+First and foremost, also check the [community page](http://docs.ansible.com/community.html).

-Also please make sure you are testing on the latest released version of Ansible or the development branch.
+You can report bugs or make enhancement requests at the [Ansible GitHub issue page](http://github.com/ansible/ansible/issues/new) by filling out the issue template that will be presented.
+
+Also please make sure you are testing on the latest released version of Ansible or the development branch. You can find the latest releases and development branch at:
+
+- https://github.com/ansible/ansible/releases
+- https://github.com/ansible/ansible/archive/devel.tar.gz

 Thanks!

@@ -1,39 +0,0 @@
##### Issue Type:
Can you help us out in labelling this by telling us what kind of ticket this is? You can say:
- Bug Report
- Feature Idea
- Feature Pull Request
- New Module Pull Request
- Bugfix Pull Request
- Documentation Report
- Docs Pull Request
##### Ansible Version:
Let us know what version of Ansible you are using. Please supply the verbatim output from running "ansible --version". If you're filing a ticket on a version of Ansible which is not the latest, we'd greatly appreciate it if you could retest on the latest version first. We don't expect you to test against the development branch most of the time, but we may ask for that if you have cycles. Thanks!
##### Ansible Configuration:
What have you changed about your Ansible installation? What configuration settings have you changed/added/removed? Compare your /etc/ansible/ansible.cfg against a clean version from GitHub and let us know what's different.
##### Environment:
What OS are you running Ansible from and what OS are you managing? Examples include RHEL 5/6, CentOS 5/6, Ubuntu 12.04/13.10, *BSD, Solaris. If this is a generic feature request or it doesn't apply, just say "N/A". Not all tickets may be about operating system related things and we understand that.
##### Summary:
Please summarize your request in this space. You will earn bonus points for being succinct, but please add enough detail so we can understand the request. Thanks!
##### Steps To Reproduce:
If this is a bug ticket, please enter the steps you use to reproduce the problem in the space below. If this is a feature request, please enter the steps you would use to use the feature. If an example playbook is useful, please include a short reproducer inline, indented by four spaces. If a longer one is necessary, linking to one uploaded to gist.github.com would be great. Much appreciated!
##### Expected Results:
Please enter your expected results in this space. When running the steps supplied above in the previous section, what did you expect to happen? If showing example output, please indent your output by four spaces so it will render correctly in GitHub's viewer thingy.
##### Actual Results:
Please enter your actual results in this space. When running the steps supplied above, what actually happened? If you are showing example output, please indent your output by four spaces so it will render correctly in GitHub. Thanks again!

@@ -4,12 +4,14 @@ prune ticket_stubs
 prune packaging
 prune test
 prune hacking
-include README.md packaging/rpm/ansible.spec COPYING
+include README.md COPYING
 include examples/hosts
 include examples/ansible.cfg
 include lib/ansible/module_utils/powershell.ps1
 recursive-include lib/ansible/modules *
+recursive-include lib/ansible/galaxy/data *
 recursive-include docs *
+recursive-include packaging *
 include Makefile
 include VERSION
 include MANIFEST.in

@@ -44,7 +44,7 @@ GIT_HASH := $(shell git log -n 1 --format="%h")
 GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD | sed 's/[-_.\/]//g')
 GITINFO = .$(GIT_HASH).$(GIT_BRANCH)
 else
-GITINFO = ''
+GITINFO = ""
 endif

 ifeq ($(shell echo $(OS) | egrep -c 'Darwin|FreeBSD|OpenBSD'),1)
@@ -167,6 +167,9 @@ install:
 sdist: clean docs
 	$(PYTHON) setup.py sdist

+sdist_upload: clean docs
+	$(PYTHON) setup.py sdist upload 2>&1 |tee upload.log
+
 rpmcommon: $(MANPAGES) sdist
 	@mkdir -p rpm-build
 	@cp dist/*.gz rpm-build/

@@ -55,3 +55,4 @@ Ansible was created by [Michael DeHaan](https://github.com/mpdehaan) (michael.de
 Ansible is sponsored by [Ansible, Inc](http://ansible.com)

@@ -4,11 +4,13 @@ Ansible Releases at a Glance
 Active Development
 ++++++++++++++++++

-2.0 "Over the Hills and Far Away" - in progress
+2.1 "TBD" - in progress

 Released
 ++++++++

+2.0.1 "Over the Hills and Far Away"  02-24-2016
+2.0.0 "Over the Hills and Far Away"  01-12-2016
 1.9.4 "Dancing In the Streets"       10-09-2015
 1.9.3 "Dancing In the Streets"       09-03-2015
 1.9.2 "Dancing In the Streets"       06-24-2015

@@ -0,0 +1,98 @@
Roadmap For Ansible by Red Hat
==============================
This document is now the location for published Ansible Core roadmaps.
The roadmap will be updated by version. Based on team and community feedback, an initial roadmap will be published for a major or minor version (2.0, 2.1). Subminor versions will generally not have roadmaps published.
This is the first time Ansible has published this and asked for feedback in this manner. So feedback on the roadmap and the new process is quite welcome. The team is aiming for further transparency and better inclusion of both community desires and submissions.
These roadmaps are the team's *best guess* roadmaps based on the Ansible team's experience and are also based on requests and feedback from the community. There are things that may not make it on due to time constraints, lack of community maintainers, etc. And there may be things that got missed, so each roadmap is published both as an idea of what is upcoming in Ansible, and as a medium for seeking further feedback from the community. Here are the good places for you to submit feedback:
* Ansible's google-group: ansible-devel
* Ansible Fest conferences.
* IRC freenode channel: #ansible-devel (questions can get lost in heavy conversation here, so use caution).
2.1 Roadmap, Targeted for the End of April
==========================================
## Windows, General
* Figuring out privilege escalation (runas w/ username/password)
* Implement kerberos encryption over http
* pywinrm conversion to requests (Some mess here on pywinrm/requests. will need docs etc.)
* NTLM support
## Modules
* Windows
* Finish cleaning up tests and support for post-beta release
* Strict mode cleanup (one module in core)
* Domain user/group management
* Finish win\_host and win\_rm in the domain/workgroup modules.
* Close 2 existing PRs (These were deemed insufficient)
* Replicate python module API in PS/C# (deprecate hodgepodge of stuff from module_utils/powershell.ps1)
* Network
* Cisco modules (ios, iosxr, nxos, iosxe)
* Arista modules (eos)
* Juniper modules (junos)
* OpenSwitch
* Cumulus
* Dell (os10) - At risk
* Netconf shared module
* Hooks for supporting Tower credentials
* VMware (This one is a little at risk due to staffing. We're investigating some community maintainers and shifting some people at Ansible around, but it is a VERY high priority).
* vsphere\_guest brought to parity with other vmware modules (vs Viasat and 'whereismyjetpack' provided modules)
* VMware modules moved to official pyvmomi bindings
* VMware inventory script updates for pyvmomi, adding tagging support
* Azure (Notes: This is on hold until Microsoft swaps out the code generator on the Azure Python SDK, which may introduce breaking changes. We have basic modules working against all of these resources at this time. Could ship it against current SDK, but may break. Or should the version be pinned?)
* Minimal Azure coverage using new ARM api
* Resource Group
* Virtual Network
* Subnet
* Public IP
* Network Interface
* Storage Account
* Security Group
* Virtual Machine
* Update of inventory script to use new API, adding tagging support
* Docker:
* Start Docker module refactor
* Update to match current docker CLI capabilities
* Docker exec support
* Upgrade other cloud modules or work with community maintainers to upgrade. (In order)
* AWS (Community maintainers)
* Openstack (Community maintainers)
* Google (Google/Community)
* Digital Ocean (Community)
* Ziploader:
* Write code to create the zipfile that gets passed across the wire to be run on the remote python
* Port most of the functionality in module\_utils to be usable in ziploader instead
* Port a few essential modules to use ziploader instead of module-replacer as proof of concept
* New modules will be able to use ziploader. Old modules will need to be ported in future releases (Some modules will not need porting but others will)
* Better testing of modules, caching of modules client-side (have not yet arrived at an architecture for this that we like), better code sharing between ansible/ansible and modules
* ziploader is a helpful building block for: python3 porting (high priority), better code sharing between modules (medium priority)
* ziploader is a good idea before: enabling users to have custom module_utils directories
* Expand module diff support (already in progress in devel)
* Framework done. Need to add to modules, test etc.
* Coordinate with community to update their modules
* Things being kicked down the road that we said we'd do
* NOT remerging core with ansible/ansible this release cycle
* Community stuff
* Define the process/ETA for reviewing PRs from community
* Publish better docs and how-tos for submitting code/features/fixes

@@ -1 +1 @@
-2.1
+2.1.0

File diff suppressed because it is too large

@@ -40,6 +40,7 @@ from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
 from ansible.utils.display import Display
 from ansible.utils.unicode import to_unicode
+
 ########################################
 ### OUTPUT OF LAST RESORT ###
 class LastResort(object):
@@ -60,6 +61,7 @@ if __name__ == '__main__':
     try:
         display = Display()
+        display.debug("starting run")

         sub = None
         try:
@@ -107,7 +109,7 @@ if __name__ == '__main__':
         have_cli_options = cli is not None and cli.options is not None
         display.error("Unexpected Exception: %s" % to_unicode(e), wrap_text=False)
         if not have_cli_options or have_cli_options and cli.options.verbosity > 2:
-            display.display("the full traceback was:\n\n%s" % traceback.format_exc())
+            display.display(u"the full traceback was:\n\n%s" % to_unicode(traceback.format_exc()))
         else:
             display.display("to see the full traceback, use -vvv")
         sys.exit(250)

@@ -109,11 +109,11 @@ class CloudStackInventory(object):
             project_id = self.get_project_id(options.project)

         if options.host:
-            data = self.get_host(options.host)
+            data = self.get_host(options.host, project_id)
             print(json.dumps(data, indent=2))

         elif options.list:
-            data = self.get_list()
+            data = self.get_list(project_id)
             print(json.dumps(data, indent=2))

         else:
             print("usage: --list | --host <hostname> [--project <project>]",

@@ -26,3 +26,9 @@ cache_max_age = 300
 # Use the private network IP address instead of the public when available.
 #
 use_private_network = False
+
+# Pass variables to every group, e.g.:
+#
+#   group_variables = { 'ansible_user': 'root' }
+#
+group_variables = {}
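
For context on how a dict-valued INI setting like `group_variables` can be consumed, here is a minimal, hypothetical sketch (the file name and section mirror the ones above; this is not part of the commit). `ast.literal_eval` accepts only Python literals, so a config value cannot execute code the way `eval` would:

```
import ast
import ConfigParser  # Python 2, as used by these inventory scripts

config = ConfigParser.SafeConfigParser()
config.read('digital_ocean.ini')  # hypothetical path

group_variables = {}
if config.has_option('digital_ocean', 'group_variables'):
    # Only literals (dicts, lists, strings, numbers) are accepted.
    group_variables = ast.literal_eval(
        config.get('digital_ocean', 'group_variables'))

print(group_variables)  # e.g. {'ansible_user': 'root'}
```

The digital_ocean.py change below does exactly this before attaching the dict as `vars` on every group it emits.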

@@ -137,6 +137,7 @@ import re
 import argparse
 from time import time
 import ConfigParser
+import ast

 try:
     import json
@@ -168,6 +169,7 @@ class DigitalOceanInventory(object):
         self.cache_path = '.'
         self.cache_max_age = 0
         self.use_private_network = False
+        self.group_variables = {}

         # Read settings, environment variables, and CLI arguments
         self.read_settings()
@@ -261,6 +263,10 @@ or environment variables (DO_API_TOKEN)''')
         if config.has_option('digital_ocean', 'use_private_network'):
             self.use_private_network = config.get('digital_ocean', 'use_private_network')

+        # Group variables
+        if config.has_option('digital_ocean', 'group_variables'):
+            self.group_variables = ast.literal_eval(config.get('digital_ocean', 'group_variables'))
+
     def read_environment(self):
         ''' Reads the settings from environment variables '''
         # Setup credentials
@@ -359,22 +365,24 @@ or environment variables (DO_API_TOKEN)''')
         else:
             dest = droplet['ip_address']

-        self.inventory[droplet['id']] = [dest]
-        self.push(self.inventory, droplet['name'], dest)
-        self.push(self.inventory, 'region_' + droplet['region']['slug'], dest)
-        self.push(self.inventory, 'image_' + str(droplet['image']['id']), dest)
-        self.push(self.inventory, 'size_' + droplet['size']['slug'], dest)
+        dest = { 'hosts': [ dest ], 'vars': self.group_variables }
+
+        self.inventory[droplet['id']] = dest
+        self.inventory[droplet['name']] = dest
+        self.inventory['region_' + droplet['region']['slug']] = dest
+        self.inventory['image_' + str(droplet['image']['id'])] = dest
+        self.inventory['size_' + droplet['size']['slug']] = dest

         image_slug = droplet['image']['slug']
         if image_slug:
-            self.push(self.inventory, 'image_' + self.to_safe(image_slug), dest)
+            self.inventory['image_' + self.to_safe(image_slug)] = dest
         else:
             image_name = droplet['image']['name']
             if image_name:
-                self.push(self.inventory, 'image_' + self.to_safe(image_name), dest)
+                self.inventory['image_' + self.to_safe(image_name)] = dest

-        self.push(self.inventory, 'distro_' + self.to_safe(droplet['image']['distribution']), dest)
-        self.push(self.inventory, 'status_' + droplet['status'], dest)
+        self.inventory['distro_' + self.to_safe(droplet['image']['distribution'])] = dest
+        self.inventory['status_' + droplet['status']] = dest

     def load_droplet_variables_for_host(self):

@@ -29,17 +29,32 @@ regions_exclude = us-gov-west-1,cn-north-1
 # in the event of a collision.
 destination_variable = public_dns_name

+# This allows you to override the inventory_name with an ec2 variable, instead
+# of using the destination_variable above. Addressing (aka ansible_ssh_host)
+# will still use destination_variable. Tags should be written as 'tag_TAGNAME'.
+#hostname_variable = tag_Name
+
 # For server inside a VPC, using DNS names may not make sense. When an instance
 # has 'subnet_id' set, this variable is used. If the subnet is public, setting
 # this to 'ip_address' will return the public IP address. For instances in a
 # private subnet, this should be set to 'private_ip_address', and Ansible must
 # be run from within EC2. The key of an EC2 tag may optionally be used; however
 # the boto instance variables hold precedence in the event of a collision.
 # WARNING: - instances that are in the private vpc, _without_ public ip address
 # will not be listed in the inventory until You set:
-# vpc_destination_variable = 'private_ip_address'
+# vpc_destination_variable = private_ip_address
 vpc_destination_variable = ip_address

+# The following two settings allow flexible ansible host naming based on a
+# python format string and a comma-separated list of ec2 tags. Note that:
+#
+# 1) If the tags referenced are not present for some instances, empty strings
+#    will be substituted in the format string.
+# 2) This overrides both destination_variable and vpc_destination_variable.
+#
+#destination_format = {0}.{1}.example.com
+#destination_format_tags = Name,environment
+
 # To tag instances on EC2 with the resource records that point to them from
 # Route53, uncomment and set 'route53' to True.
 route53 = False
@@ -144,7 +159,7 @@ group_by_elasticache_replication_group = True
 # You can use wildcards in filter values also. Below will list instances which
 # tag Name value matches webservers1*
 # (ex. webservers15, webservers1a, webservers123 etc)
 # instance_filters = tag:Name=webservers1*

 # A boto configuration profile may be used to separate out credentials
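
To make the `destination_format` mechanics concrete, a small standalone sketch (tag values invented for illustration); it mirrors the `.format()` call added to ec2.py further down, including the empty-string substitution for missing tags:

```
destination_format = '{0}.{1}.example.com'
destination_format_tags = ['Name', 'environment']

tags = {'Name': 'web1'}  # hypothetical instance tags; 'environment' is unset

dest = destination_format.format(
    *[tags.get(tag, '') for tag in destination_format_tags])
print(dest)  # -> 'web1..example.com'
```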

@@ -237,6 +237,19 @@ class Ec2Inventory(object):
         self.destination_variable = config.get('ec2', 'destination_variable')
         self.vpc_destination_variable = config.get('ec2', 'vpc_destination_variable')

+        if config.has_option('ec2', 'hostname_variable'):
+            self.hostname_variable = config.get('ec2', 'hostname_variable')
+        else:
+            self.hostname_variable = None
+
+        if config.has_option('ec2', 'destination_format') and \
+           config.has_option('ec2', 'destination_format_tags'):
+            self.destination_format = config.get('ec2', 'destination_format')
+            self.destination_format_tags = config.get('ec2', 'destination_format_tags').split(',')
+        else:
+            self.destination_format = None
+            self.destination_format_tags = None
+
         # Route53
         self.route53_enabled = config.getboolean('ec2', 'route53')
         self.route53_excluded_zones = []
@@ -318,8 +331,14 @@ class Ec2Inventory(object):
         if not os.path.exists(cache_dir):
             os.makedirs(cache_dir)

-        self.cache_path_cache = cache_dir + "/ansible-ec2.cache"
-        self.cache_path_index = cache_dir + "/ansible-ec2.index"
+        cache_name = 'ansible-ec2'
+        aws_profile = lambda: (self.boto_profile or
+                               os.environ.get('AWS_PROFILE') or
+                               os.environ.get('AWS_ACCESS_KEY_ID'))
+        if aws_profile():
+            cache_name = '%s-%s' % (cache_name, aws_profile())
+        self.cache_path_cache = cache_dir + "/%s.cache" % cache_name
+        self.cache_path_index = cache_dir + "/%s.index" % cache_name
         self.cache_max_age = config.getint('ec2', 'cache_max_age')

         if config.has_option('ec2', 'expand_csv_tags'):
@@ -388,7 +407,10 @@ class Ec2Inventory(object):
         # Instance filters (see boto and EC2 API docs). Ignore invalid filters.
         self.ec2_instance_filters = defaultdict(list)
         if config.has_option('ec2', 'instance_filters'):
-            for instance_filter in config.get('ec2', 'instance_filters', '').split(','):
+
+            filters = [f for f in config.get('ec2', 'instance_filters').split(',') if f]
+
+            for instance_filter in filters:
                 instance_filter = instance_filter.strip()
                 if not instance_filter or '=' not in instance_filter:
                     continue
@@ -407,7 +429,7 @@ class Ec2Inventory(object):
                            help='Get all the variables about a specific instance')
         parser.add_argument('--refresh-cache', action='store_true', default=False,
                            help='Force refresh of cache by making API requests to EC2 (default: False - use cache files)')
-        parser.add_argument('--boto-profile', action='store',
+        parser.add_argument('--profile', '--boto-profile', action='store', dest='boto_profile',
                            help='Use boto profile for connections to EC2')
         self.args = parser.parse_args()
@@ -491,9 +513,14 @@ class Ec2Inventory(object):
         try:
             conn = self.connect_to_aws(rds, region)
             if conn:
-                instances = conn.get_all_dbinstances()
-                for instance in instances:
-                    self.add_rds_instance(instance, region)
+                marker = None
+                while True:
+                    instances = conn.get_all_dbinstances(marker=marker)
+                    marker = instances.marker
+                    for instance in instances:
+                        self.add_rds_instance(instance, region)
+                    if not marker:
+                        break
         except boto.exception.BotoServerError as e:
             error = e.reason
@@ -511,7 +538,7 @@ class Ec2Inventory(object):
         # that's why we need to call describe directly (it would be called by
         # the shorthand method anyway...)
         try:
-            conn = elasticache.connect_to_region(region)
+            conn = self.connect_to_aws(elasticache, region)
             if conn:
                 # show_cache_node_info = True
                 # because we also want nodes' information
@@ -547,7 +574,7 @@ class Ec2Inventory(object):
         # that's why we need to call describe directly (it would be called by
         # the shorthand method anyway...)
         try:
-            conn = elasticache.connect_to_region(region)
+            conn = self.connect_to_aws(elasticache, region)
             if conn:
                 response = conn.describe_replication_groups()
@@ -615,7 +642,9 @@ class Ec2Inventory(object):
             return

         # Select the best destination address
-        if instance.subnet_id:
+        if self.destination_format and self.destination_format_tags:
+            dest = self.destination_format.format(*[ getattr(instance, 'tags').get(tag, '') for tag in self.destination_format_tags ])
+        elif instance.subnet_id:
             dest = getattr(instance, self.vpc_destination_variable, None)
             if dest is None:
                 dest = getattr(instance, 'tags').get(self.vpc_destination_variable, None)
@@ -628,32 +657,46 @@ class Ec2Inventory(object):
             # Skip instances we cannot address (e.g. private VPC subnet)
             return

+        # Set the inventory name
+        hostname = None
+        if self.hostname_variable:
+            if self.hostname_variable.startswith('tag_'):
+                hostname = instance.tags.get(self.hostname_variable[4:], None)
+            else:
+                hostname = getattr(instance, self.hostname_variable)
+
+        # If we can't get a nice hostname, use the destination address
+        if not hostname:
+            hostname = dest
+        hostname = self.to_safe(hostname).lower()
+
         # if we only want to include hosts that match a pattern, skip those that don't
-        if self.pattern_include and not self.pattern_include.match(dest):
+        if self.pattern_include and not self.pattern_include.match(hostname):
             return

         # if we need to exclude hosts that match a pattern, skip those
-        if self.pattern_exclude and self.pattern_exclude.match(dest):
+        if self.pattern_exclude and self.pattern_exclude.match(hostname):
             return

         # Add to index
-        self.index[dest] = [region, instance.id]
+        self.index[hostname] = [region, instance.id]

         # Inventory: Group by instance ID (always a group of 1)
         if self.group_by_instance_id:
-            self.inventory[instance.id] = [dest]
+            self.inventory[instance.id] = [hostname]
             if self.nested_groups:
                 self.push_group(self.inventory, 'instances', instance.id)

         # Inventory: Group by region
         if self.group_by_region:
-            self.push(self.inventory, region, dest)
+            self.push(self.inventory, region, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'regions', region)

         # Inventory: Group by availability zone
         if self.group_by_availability_zone:
-            self.push(self.inventory, instance.placement, dest)
+            self.push(self.inventory, instance.placement, hostname)
             if self.nested_groups:
                 if self.group_by_region:
                     self.push_group(self.inventory, region, instance.placement)
@@ -662,28 +705,28 @@
         # Inventory: Group by Amazon Machine Image (AMI) ID
         if self.group_by_ami_id:
             ami_id = self.to_safe(instance.image_id)
-            self.push(self.inventory, ami_id, dest)
+            self.push(self.inventory, ami_id, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'images', ami_id)

         # Inventory: Group by instance type
         if self.group_by_instance_type:
             type_name = self.to_safe('type_' + instance.instance_type)
-            self.push(self.inventory, type_name, dest)
+            self.push(self.inventory, type_name, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'types', type_name)

         # Inventory: Group by key pair
         if self.group_by_key_pair and instance.key_name:
             key_name = self.to_safe('key_' + instance.key_name)
-            self.push(self.inventory, key_name, dest)
+            self.push(self.inventory, key_name, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'keys', key_name)

         # Inventory: Group by VPC
         if self.group_by_vpc_id and instance.vpc_id:
             vpc_id_name = self.to_safe('vpc_id_' + instance.vpc_id)
-            self.push(self.inventory, vpc_id_name, dest)
+            self.push(self.inventory, vpc_id_name, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'vpcs', vpc_id_name)
@@ -692,7 +735,7 @@ class Ec2Inventory(object):
         try:
             for group in instance.groups:
                 key = self.to_safe("security_group_" + group.name)
-                self.push(self.inventory, key, dest)
+                self.push(self.inventory, key, hostname)
                 if self.nested_groups:
                     self.push_group(self.inventory, 'security_groups', key)
         except AttributeError:
@@ -712,7 +755,7 @@ class Ec2Inventory(object):
                     key = self.to_safe("tag_" + k + "=" + v)
                 else:
                     key = self.to_safe("tag_" + k)
-                self.push(self.inventory, key, dest)
+                self.push(self.inventory, key, hostname)
                 if self.nested_groups:
                     self.push_group(self.inventory, 'tags', self.to_safe("tag_" + k))
                     if v:
@@ -722,20 +765,21 @@ class Ec2Inventory(object):
         if self.route53_enabled and self.group_by_route53_names:
             route53_names = self.get_instance_route53_names(instance)
             for name in route53_names:
-                self.push(self.inventory, name, dest)
+                self.push(self.inventory, name, hostname)
                 if self.nested_groups:
                     self.push_group(self.inventory, 'route53', name)

         # Global Tag: instances without tags
         if self.group_by_tag_none and len(instance.tags) == 0:
-            self.push(self.inventory, 'tag_none', dest)
+            self.push(self.inventory, 'tag_none', hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'tags', 'tag_none')

         # Global Tag: tag all EC2 instances
-        self.push(self.inventory, 'ec2', dest)
+        self.push(self.inventory, 'ec2', hostname)

-        self.inventory["_meta"]["hostvars"][dest] = self.get_host_info_dict_from_instance(instance)
+        self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance)
+        self.inventory["_meta"]["hostvars"][hostname]['ansible_ssh_host'] = dest

     def add_rds_instance(self, instance, region):
@@ -753,24 +797,38 @@ class Ec2Inventory(object):
             # Skip instances we cannot address (e.g. private VPC subnet)
             return

+        # Set the inventory name
+        hostname = None
+        if self.hostname_variable:
+            if self.hostname_variable.startswith('tag_'):
+                hostname = instance.tags.get(self.hostname_variable[4:], None)
+            else:
+                hostname = getattr(instance, self.hostname_variable)
+
+        # If we can't get a nice hostname, use the destination address
+        if not hostname:
+            hostname = dest
+        hostname = self.to_safe(hostname).lower()
+
         # Add to index
-        self.index[dest] = [region, instance.id]
+        self.index[hostname] = [region, instance.id]

         # Inventory: Group by instance ID (always a group of 1)
         if self.group_by_instance_id:
-            self.inventory[instance.id] = [dest]
+            self.inventory[instance.id] = [hostname]
             if self.nested_groups:
                 self.push_group(self.inventory, 'instances', instance.id)

         # Inventory: Group by region
         if self.group_by_region:
-            self.push(self.inventory, region, dest)
+            self.push(self.inventory, region, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'regions', region)

         # Inventory: Group by availability zone
         if self.group_by_availability_zone:
-            self.push(self.inventory, instance.availability_zone, dest)
+            self.push(self.inventory, instance.availability_zone, hostname)
             if self.nested_groups:
                 if self.group_by_region:
                     self.push_group(self.inventory, region, instance.availability_zone)
@@ -779,14 +837,14 @@ class Ec2Inventory(object):
         # Inventory: Group by instance type
         if self.group_by_instance_type:
             type_name = self.to_safe('type_' + instance.instance_class)
-            self.push(self.inventory, type_name, dest)
+            self.push(self.inventory, type_name, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'types', type_name)

         # Inventory: Group by VPC
         if self.group_by_vpc_id and instance.subnet_group and instance.subnet_group.vpc_id:
             vpc_id_name = self.to_safe('vpc_id_' + instance.subnet_group.vpc_id)
-            self.push(self.inventory, vpc_id_name, dest)
+            self.push(self.inventory, vpc_id_name, hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'vpcs', vpc_id_name)
@@ -795,7 +853,7 @@ class Ec2Inventory(object):
         try:
             if instance.security_group:
                 key = self.to_safe("security_group_" + instance.security_group.name)
-                self.push(self.inventory, key, dest)
+                self.push(self.inventory, key, hostname)
                 if self.nested_groups:
                     self.push_group(self.inventory, 'security_groups', key)
@@ -806,20 +864,21 @@ class Ec2Inventory(object):
         # Inventory: Group by engine
         if self.group_by_rds_engine:
-            self.push(self.inventory, self.to_safe("rds_" + instance.engine), dest)
+            self.push(self.inventory, self.to_safe("rds_" + instance.engine), hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'rds_engines', self.to_safe("rds_" + instance.engine))

         # Inventory: Group by parameter group
         if self.group_by_rds_parameter_group:
-            self.push(self.inventory, self.to_safe("rds_parameter_group_" + instance.parameter_group.name), dest)
+            self.push(self.inventory, self.to_safe("rds_parameter_group_" + instance.parameter_group.name), hostname)
             if self.nested_groups:
                 self.push_group(self.inventory, 'rds_parameter_groups', self.to_safe("rds_parameter_group_" + instance.parameter_group.name))

         # Global Tag: all RDS instances
-        self.push(self.inventory, 'rds', dest)
+        self.push(self.inventory, 'rds', hostname)

-        self.inventory["_meta"]["hostvars"][dest] = self.get_host_info_dict_from_instance(instance)
+        self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance)
+        self.inventory["_meta"]["hostvars"][hostname]['ansible_ssh_host'] = dest

     def add_elasticache_cluster(self, cluster, region):
         ''' Adds an ElastiCache cluster to the inventory and index, as long as
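
The hostname-selection fallback added above (tag or instance attribute, else the destination address) can be read in isolation; a self-contained sketch with a stand-in instance object (all names hypothetical, and `to_safe` only approximates the real helper):

```
import re

class FakeInstance(object):
    # Hypothetical stand-in for a boto instance object.
    tags = {'Name': 'Db-Primary'}
    private_ip_address = '10.0.0.5'

def to_safe(word):
    # Rough approximation of ec2.py's to_safe(): strip characters that
    # would be awkward in an Ansible group or host name.
    return re.sub(r'[^A-Za-z0-9\-]', '_', word)

def pick_hostname(instance, hostname_variable, dest):
    hostname = None
    if hostname_variable:
        if hostname_variable.startswith('tag_'):
            hostname = instance.tags.get(hostname_variable[4:], None)
        else:
            hostname = getattr(instance, hostname_variable)
    # If we can't get a nice hostname, use the destination address
    if not hostname:
        hostname = dest
    return to_safe(hostname).lower()

print(pick_hostname(FakeInstance(), 'tag_Name', '10.0.0.5'))  # db-primary
```

Note that addressing still uses `dest`: the diff stores it as `ansible_ssh_host` under `_meta.hostvars`, so a friendly inventory name never breaks connectivity.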

@@ -90,6 +90,9 @@ import os
 import argparse
 import ConfigParser

+import logging
+logging.getLogger('libcloud.common.google').addHandler(logging.NullHandler())
+
 try:
     import json
 except ImportError:

@@ -27,11 +27,11 @@ result['all'] = {}
 pipe = Popen(['virsh', '-q', '-c', 'lxc:///', 'list', '--name', '--all'], stdout=PIPE, universal_newlines=True)
 result['all']['hosts'] = [x[:-1] for x in pipe.stdout.readlines()]
 result['all']['vars'] = {}
-result['all']['vars']['ansible_connection'] = 'lxc'
+result['all']['vars']['ansible_connection'] = 'libvirt_lxc'

 if len(sys.argv) == 2 and sys.argv[1] == '--list':
     print(json.dumps(result))
 elif len(sys.argv) == 3 and sys.argv[1] == '--host':
-    print(json.dumps({'ansible_connection': 'lxc'}))
+    print(json.dumps({'ansible_connection': 'libvirt_lxc'}))
 else:
     print("Need an argument, either --list or --host <host>")

@@ -280,6 +280,11 @@ class LinodeInventory(object):
             node_vars["datacenter_city"] = self.get_datacenter_city(node)
             node_vars["public_ip"] = [addr.address for addr in node.ipaddresses if addr.is_public][0]

+            # Set the SSH host information, so these inventory items can be used if
+            # their labels aren't FQDNs
+            node_vars['ansible_ssh_host'] = node_vars["public_ip"]
+            node_vars['ansible_host'] = node_vars["public_ip"]
+
             private_ips = [addr.address for addr in node.ipaddresses if not addr.is_public]

             if private_ips:

@@ -32,6 +32,13 @@
 # all of them and present them as one contiguous inventory.
 #
 # See the adjacent openstack.yml file for an example config file
+#
+# There are two ansible inventory specific options that can be set in
+# the inventory section.
+# expand_hostvars controls whether or not the inventory will make extra API
+#                 calls to fill out additional information about each server
+# use_hostnames changes the behavior from registering every host with its UUID
+#               and making a group of its hostname to only doing this if the
+#               hostname in question has more than one server

 import argparse
 import collections
@@ -51,7 +58,7 @@ import shade.inventory
 CONFIG_FILES = ['/etc/ansible/openstack.yaml']

-def get_groups_from_server(server_vars):
+def get_groups_from_server(server_vars, namegroup=True):
     groups = []

     region = server_vars['region']
@@ -76,7 +83,8 @@ def get_groups_from_server(server_vars):
         groups.append(extra_group)

     groups.append('instance-%s' % server_vars['id'])
-    groups.append(server_vars['name'])
+    if namegroup:
+        groups.append(server_vars['name'])

     for key in ('flavor', 'image'):
         if 'name' in server_vars[key]:
@@ -94,9 +102,9 @@ def get_groups_from_server(server_vars):
     return groups

-def get_host_groups(inventory):
+def get_host_groups(inventory, refresh=False):
     (cache_file, cache_expiration_time) = get_cache_settings()
-    if is_cache_stale(cache_file, cache_expiration_time):
+    if is_cache_stale(cache_file, cache_expiration_time, refresh=refresh):
         groups = to_json(get_host_groups_from_cloud(inventory))
         open(cache_file, 'w').write(groups)
     else:
@@ -104,26 +112,54 @@ def get_host_groups(inventory):
     return groups

+def append_hostvars(hostvars, groups, key, server, namegroup=False):
+    hostvars[key] = dict(
+        ansible_ssh_host=server['interface_ip'],
+        openstack=server)
+    for group in get_groups_from_server(server, namegroup=namegroup):
+        groups[group].append(key)
+
+
 def get_host_groups_from_cloud(inventory):
     groups = collections.defaultdict(list)
+    firstpass = collections.defaultdict(list)
     hostvars = {}
-    for server in inventory.list_hosts():
+    list_args = {}
+    if hasattr(inventory, 'extra_config'):
+        use_hostnames = inventory.extra_config['use_hostnames']
+        list_args['expand'] = inventory.extra_config['expand_hostvars']
+    else:
+        use_hostnames = False
+
+    for server in inventory.list_hosts(**list_args):

         if 'interface_ip' not in server:
             continue
-        for group in get_groups_from_server(server):
-            groups[group].append(server['id'])
-        hostvars[server['id']] = dict(
-            ansible_ssh_host=server['interface_ip'],
-            openstack=server,
-        )
+        firstpass[server['name']].append(server)
+    for name, servers in firstpass.items():
+        if len(servers) == 1 and use_hostnames:
+            append_hostvars(hostvars, groups, name, servers[0])
+        else:
+            server_ids = set()
+            # Trap for duplicate results
+            for server in servers:
+                server_ids.add(server['id'])
+            if len(server_ids) == 1 and use_hostnames:
+                append_hostvars(hostvars, groups, name, servers[0])
+            else:
+                for server in servers:
+                    append_hostvars(
+                        hostvars, groups, server['id'], server,
+                        namegroup=True)
     groups['_meta'] = {'hostvars': hostvars}
     return groups

-def is_cache_stale(cache_file, cache_expiration_time):
+def is_cache_stale(cache_file, cache_expiration_time, refresh=False):
     ''' Determines if cache file has expired, or if it is still valid '''
-    if os.path.isfile(cache_file):
+    if refresh:
+        return True
+    if os.path.isfile(cache_file) and os.path.getsize(cache_file) > 0:
         mod_time = os.path.getmtime(cache_file)
         current_time = time.time()
         if (mod_time + cache_expiration_time) > current_time:
@@ -169,14 +205,24 @@ def main():
     try:
         config_files = os_client_config.config.CONFIG_FILES + CONFIG_FILES
         shade.simple_logging(debug=args.debug)
-        inventory = shade.inventory.OpenStackInventory(
+        inventory_args = dict(
             refresh=args.refresh,
             config_files=config_files,
             private=args.private,
         )
+        if hasattr(shade.inventory.OpenStackInventory, 'extra_config'):
+            inventory_args.update(dict(
+                config_key='ansible',
+                config_defaults={
+                    'use_hostnames': False,
+                    'expand_hostvars': True,
+                }
+            ))
+        inventory = shade.inventory.OpenStackInventory(**inventory_args)
+
         if args.list:
-            output = get_host_groups(inventory)
+            output = get_host_groups(inventory, refresh=args.refresh)
         elif args.host:
             output = to_json(inventory.get_host(args.host))
         print(output)

@@ -26,3 +26,6 @@ clouds:
     username: stack
     password: stack
     project_name: stack
+ansible:
+  use_hostnames: True
+  expand_hostvars: False
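
The first-pass/second-pass grouping that `use_hostnames` switches on can be exercised without shade at all; a minimal sketch (server dicts invented): unique names register under the hostname, duplicates fall back to UUID keys:

```
import collections

def group_servers(servers, use_hostnames=False):
    # First pass: bucket servers by name, as the inventory script does.
    firstpass = collections.defaultdict(list)
    for server in servers:
        firstpass[server['name']].append(server)
    hostvars = {}
    for name, dupes in firstpass.items():
        if len(dupes) == 1 and use_hostnames:
            hostvars[name] = dupes[0]
        else:
            for server in dupes:
                hostvars[server['id']] = server
    return hostvars

servers = [{'name': 'web', 'id': 'uuid-1'}, {'name': 'web', 'id': 'uuid-2'}]
print(sorted(group_servers(servers, use_hostnames=True)))  # ['uuid-1', 'uuid-2']
```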

@@ -172,9 +172,9 @@ class OVirtInventory(object):
         # If the appropriate environment variables are set, they override
         # other configuration; process those into our args and kwargs.
-        kwargs['url'] = os.environ.get('OVIRT_URL')
-        kwargs['username'] = os.environ.get('OVIRT_EMAIL')
-        kwargs['password'] = os.environ.get('OVIRT_PASS')
+        kwargs['url'] = os.environ.get('OVIRT_URL', kwargs['url'])
+        kwargs['username'] = next(val for val in [os.environ.get('OVIRT_EMAIL'), os.environ.get('OVIRT_USERNAME'), kwargs['username']] if val is not None)
+        kwargs['password'] = next(val for val in [os.environ.get('OVIRT_PASS'), os.environ.get('OVIRT_PASSWORD'), kwargs['password']] if val is not None)

         # Retrieve and return the ovirt driver.
         return API(insecure=True, **kwargs)
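
The `next(val for val in [...] if val is not None)` idiom above gives a clean precedence chain: the first explicitly-set value wins, and unset environment variables (which `os.environ.get` returns as `None`) fall through to the config value. A standalone illustration with invented values:

```
import os

kwargs = {'username': 'config-user'}

# First non-None candidate wins; unset env vars fall through.
username = next(val for val in [os.environ.get('OVIRT_EMAIL'),
                                os.environ.get('OVIRT_USERNAME'),
                                kwargs['username']]
                if val is not None)
print(username)  # 'config-user' unless one of the variables is set
```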

@@ -15,6 +15,16 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

+# Updated 2016 by Matt Harris <matthaeus.harris@gmail.com>
+#
+# Added support for Proxmox VE 4.x
+# Added support for using the Notes field of a VM to define groups and variables:
+# A well-formatted JSON object in the Notes field will be added to the _meta
+# section for that VM. In addition, the "groups" key of this JSON object may be
+# used to specify group membership:
+#
+# { "groups": ["utility", "databases"], "a": false, "b": true }
+
 import urllib
 try:
     import json
@@ -32,29 +42,29 @@ class ProxmoxNodeList(list):
     def get_names(self):
         return [node['node'] for node in self]

-class ProxmoxQemu(dict):
+class ProxmoxVM(dict):
     def get_variables(self):
         variables = {}
         for key, value in iteritems(self):
             variables['proxmox_' + key] = value
         return variables

-class ProxmoxQemuList(list):
+class ProxmoxVMList(list):
     def __init__(self, data=[]):
         for item in data:
-            self.append(ProxmoxQemu(item))
+            self.append(ProxmoxVM(item))

     def get_names(self):
-        return [qemu['name'] for qemu in self if qemu['template'] != 1]
+        return [vm['name'] for vm in self if vm['template'] != 1]

     def get_by_name(self, name):
-        results = [qemu for qemu in self if qemu['name'] == name]
+        results = [vm for vm in self if vm['name'] == name]
         return results[0] if len(results) > 0 else None

     def get_variables(self):
         variables = {}
-        for qemu in self:
-            variables[qemu['name']] = qemu.get_variables()
+        for vm in self:
+            variables[vm['name']] = vm.get_variables()
         return variables
@@ -105,8 +115,23 @@ class ProxmoxAPI(object):
     def nodes(self):
         return ProxmoxNodeList(self.get('api2/json/nodes'))

+    def vms_by_type(self, node, type):
+        return ProxmoxVMList(self.get('api2/json/nodes/{}/{}'.format(node, type)))
+
+    def vm_description_by_type(self, node, vm, type):
+        return self.get('api2/json/nodes/{}/{}/{}/config'.format(node, type, vm))
+
     def node_qemu(self, node):
-        return ProxmoxQemuList(self.get('api2/json/nodes/{}/qemu'.format(node)))
+        return self.vms_by_type(node, 'qemu')
+
+    def node_qemu_description(self, node, vm):
+        return self.vm_description_by_type(node, vm, 'qemu')
+
+    def node_lxc(self, node):
+        return self.vms_by_type(node, 'lxc')
+
+    def node_lxc_description(self, node, vm):
+        return self.vm_description_by_type(node, vm, 'lxc')
+
     def pools(self):
         return ProxmoxPoolList(self.get('api2/json/pools'))
@@ -131,6 +156,40 @@ def main_list(options):
         qemu_list = proxmox_api.node_qemu(node)
         results['all']['hosts'] += qemu_list.get_names()
         results['_meta']['hostvars'].update(qemu_list.get_variables())
+        lxc_list = proxmox_api.node_lxc(node)
+        results['all']['hosts'] += lxc_list.get_names()
+        results['_meta']['hostvars'].update(lxc_list.get_variables())
+
+        for vm in results['_meta']['hostvars']:
+            vmid = results['_meta']['hostvars'][vm]['proxmox_vmid']
+            try:
+                type = results['_meta']['hostvars'][vm]['proxmox_type']
+            except KeyError:
+                type = 'qemu'
+            try:
+                description = proxmox_api.vm_description_by_type(node, vmid, type)['description']
+            except KeyError:
+                description = None
+
+            try:
+                metadata = json.loads(description)
+            except TypeError:
+                metadata = {}
+            except ValueError:
+                metadata = {
+                    'notes': description
+                }
+
+            if 'groups' in metadata:
+                # print metadata
+                for group in metadata['groups']:
+                    if group not in results:
+                        results[group] = {
+                            'hosts': []
+                        }
+                    results[group]['hosts'] += [vm]
+
+            results['_meta']['hostvars'][vm].update(metadata)

     # pools
     for pool in proxmox_api.pools().get_names():
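
The Notes-field handling above reduces to one fallback chain, shown here as a self-contained sketch (function name hypothetical): valid JSON is used as-is, plain text is kept under a `notes` key, and a missing description yields no metadata:

```
import json

def metadata_from_notes(description):
    try:
        return json.loads(description)
    except TypeError:   # description is None (no Notes field)
        return {}
    except ValueError:  # description is free-form text, not JSON
        return {'notes': description}

print(metadata_from_notes('{ "groups": ["utility"], "a": false }'))
print(metadata_from_notes('just a human note'))
print(metadata_from_notes(None))
```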

@@ -0,0 +1,80 @@
#!/usr/bin/python
import json
import requests
import os
import argparse
import types
RACKHD_URL = 'http://localhost:8080'
class RackhdInventory(object):
def __init__(self, nodeids):
self._inventory = {}
for nodeid in nodeids:
self._load_inventory_data(nodeid)
inventory = {}
for nodeid,info in self._inventory.iteritems():
inventory[nodeid]= (self._format_output(nodeid, info))
print(json.dumps(inventory))
def _load_inventory_data(self, nodeid):
info = {}
info['ohai'] = RACKHD_URL + '/api/common/nodes/{0}/catalogs/ohai'.format(nodeid )
info['lookup'] = RACKHD_URL + '/api/common/lookups/?q={0}'.format(nodeid)
results = {}
for key,url in info.iteritems():
r = requests.get( url, verify=False)
results[key] = r.text
self._inventory[nodeid] = results
def _format_output(self, nodeid, info):
try:
node_info = json.loads(info['lookup'])
ipaddress = ''
if len(node_info) > 0:
ipaddress = node_info[0]['ipAddress']
output = { 'hosts':[ipaddress],'vars':{}}
for key,result in info.iteritems():
output['vars'][key] = json.loads(result)
output['vars']['ansible_ssh_user'] = 'monorail'
except KeyError:
pass
return output
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--host')
parser.add_argument('--list', action='store_true')
return parser.parse_args()
try:
#check if rackhd url(ie:10.1.1.45:8080) is specified in the environment
RACKHD_URL = 'http://' + str(os.environ['RACKHD_URL'])
except:
#use default values
pass
# Use the nodeid specified in the environment to limit the data returned
# or return data for all available nodes
nodeids = []
if (parse_args().host):
try:
nodeids += parse_args().host.split(',')
RackhdInventory(nodeids)
except:
pass
if (parse_args().list):
try:
url = RACKHD_URL + '/api/common/nodes'
r = requests.get( url, verify=False)
data = json.loads(r.text)
for entry in data:
if entry['type'] == 'compute':
nodeids.append(entry['id'])
RackhdInventory(nodeids)
except:
pass

@@ -55,3 +55,12 @@
# will be ignored, and 4 will be used. Accepts a comma separated list,
# the first found wins.
# access_ip_version = 4
# Environment Variable: RAX_CACHE_MAX_AGE
# Default: 600
#
# A configuration value that changes the behavior of the inventory cache.
# Inventory listings performed while the cache is younger than this many
# seconds will be served from the cache instead of making a full request
# for all inventory. Setting this value to 0 will force a full request.
# cache_max_age = 600
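#
# Example (illustrative): treat cached inventory as fresh for five minutes.
# cache_max_age = 300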

@@ -355,9 +355,12 @@ def get_cache_file_path(regions):

def _list(regions, refresh_cache=True):
    cache_max_age = int(get_config(p, 'rax', 'cache_max_age',
                                   'RAX_CACHE_MAX_AGE', 600))

    if (not os.path.exists(get_cache_file_path(regions)) or
            refresh_cache or
            (time() - os.stat(get_cache_file_path(regions))[-1]) > cache_max_age):
        # Cache file doesn't exist, is older than cache_max_age, or refresh requested
        _list_into_cache(regions)

@@ -0,0 +1,77 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# (c) 2014, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Script to be used with vault_password_file or --vault-password-file
# to retrieve the vault password via your OSes native keyring application
#
# This script requires the ``keyring`` python module
#
# Add a [vault] section to your ansible.cfg file,
# the only option is 'username'. Example:
#
# [vault]
# username = 'ansible_vault'
#
# Additionally, it would be a good idea to configure vault_password_file in
# ansible.cfg
#
# [defaults]
# ...
# vault_password_file = /path/to/vault-keyring.py
# ...
#
# To set your password: python /path/to/vault-keyring.py set
#
# If you choose to not configure the path to vault_password_file in ansible.cfg
# your ansible-playbook command may look like:
#
# ansible-playbook --vault-password-file=/path/to/vault-keyring.py site.yml
import sys
import getpass
import keyring

import ansible.constants as C

def main():
    parser = C.load_config_file()
    try:
        username = parser.get('vault', 'username')
    except:
        sys.stderr.write('No [vault] section configured\n')
        sys.exit(1)

    if len(sys.argv) == 2 and sys.argv[1] == 'set':
        password = getpass.getpass()
        confirm = getpass.getpass('Confirm password: ')
        if password == confirm:
            keyring.set_password('ansible', username, password)
        else:
            sys.stderr.write('Passwords do not match\n')
            sys.exit(1)
    else:
        sys.stdout.write('%s\n' % keyring.get_password('ansible', username))

    sys.exit(0)

if __name__ == '__main__':
    main()

@@ -12,7 +12,7 @@ ansible-galaxy - manage roles using galaxy.ansible.com
SYNOPSIS
--------
ansible-galaxy [delete|import|info|init|install|list|login|remove|search|setup] [--help] [options] ...
DESCRIPTION
@@ -20,7 +20,7 @@ DESCRIPTION
*Ansible Galaxy* is a shared repository for Ansible roles.
The ansible-galaxy command can be used to manage these roles,
or for creating a skeleton framework for roles you'd like to upload to Galaxy.
COMMON OPTIONS
--------------
@@ -29,7 +29,6 @@ COMMON OPTIONS
Show a help message related to the given sub-command.
INSTALL
-------
@@ -145,6 +144,204 @@ The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
SEARCH
------
The *search* sub-command returns a filtered list of roles found on the remote
server.
USAGE
~~~~~
$ ansible-galaxy search [options] [searchterm1 searchterm2]
OPTIONS
~~~~~~~
*--galaxy-tags*::
Provide a comma separated list of Galaxy Tags on which to filter.
*--platforms*::
Provide a comma separated list of Platforms on which to filter.
*--author*::
Specify the username of a Galaxy contributor on which to filter.
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
INFO
----
The *info* sub-command shows detailed information for a specific role.
Details returned about the role include information from the local copy
as well as information from galaxy.ansible.com.
USAGE
~~~~~
$ ansible-galaxy info [options] role_name[, version]
OPTIONS
~~~~~~~
*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::
The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
LOGIN
-----
The *login* sub-command is used to authenticate with galaxy.ansible.com.
Authentication is required to use the import, delete and setup commands.
It will authenticate the user, retrieve a token from Galaxy, and store it
in the user's home directory.
USAGE
~~~~~
$ ansible-galaxy login [options]
The *login* sub-command prompts for a *GitHub* username and password. It does
NOT send your password to Galaxy. It actually authenticates with GitHub and
creates a personal access token. It then sends the personal access token to
Galaxy, which in turn verifies that you are you and returns a Galaxy access
token. After authentication completes the *GitHub* personal access token is
destroyed.
If you do not wish to use your GitHub password, or if you have two-factor
authentication enabled with GitHub, use the *--github-token* option to pass a
personal access token that you create. Log into GitHub, go to Settings and
click on Personal Access Token to create a token.
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
*--github-token*::
Authenticate using a *GitHub* personal access token rather than a password.
IMPORT
------
Import a role from *GitHub* to galaxy.ansible.com. Requires that the user first
authenticate with galaxy.ansible.com using the *login* subcommand.
USAGE
~~~~~
$ ansible-galaxy import [options] github_user github_repo
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
*--branch*::
Provide a specific branch to import. When a branch is not specified the
branch found in meta/main.yml is used. If no branch is specified in
meta/main.yml, the repo's default branch (usually master) is used.
DELETE
------
The *delete* sub-command will delete a role from galaxy.ansible.com. Requires
that the user first authenticate with galaxy.ansible.com using the *login* subcommand.
USAGE
~~~~~
$ ansible-galaxy delete [options] github_user github_repo
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
SETUP
-----
The *setup* sub-command creates an integration point for *Travis CI*, enabling
galaxy.ansible.com to receive notifications from *Travis* on build completion.
Requires that the user first authenticate with galaxy.ansible.com using the *login*
subcommand.
USAGE
~~~~~
$ ansible-galaxy setup [options] source github_user github_repo secret
* Use *travis* as the source value. In the future additional source values may
be added.
* Provide your *Travis* user token as the secret. The token is not stored by
galaxy.ansible.com. A hash is created using github_user, github_repo
and your token. The hash value is what actually gets stored.
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
*--list*::
Show your configured integrations. Provides the ID of each integration,
which can be used with the remove option.
*--remove*::
Remove a specific integration. Provide the ID of the integration to
be removed.
AUTHOR
------

@@ -34,7 +34,12 @@ The names of one or more YAML format files to run as ansible playbooks.
OPTIONS
-------
*-b*, *--become*::
Use privilege escalation (specific one depends on become_method),
this does not imply prompting for passwords.
*-K*, *--ask-become-pass*::
Ask for privilege escalation password.
@@ -47,7 +52,7 @@ For example, using ssh and not having a key-based authentication with ssh-agent
Prompt for su password, used with --su (deprecated, use become).
*--ask-sudo-pass*::
Prompt for the password to use with --sudo, if any (deprecated, use become).
@@ -96,12 +101,12 @@ Show help page and exit
*-i* 'PATH', *--inventory=*'PATH'::
The 'PATH' to the inventory, which defaults to '/etc/ansible/hosts'.
Alternatively, you can use a comma-separated list of hosts or a single host with a trailing comma 'host,'.
*-l* 'SUBSET', *--limit=*'SUBSET'::
Further limits the selected host/group patterns.
You can prefix it with '~' to indicate that the pattern is a regex.
*--list-hosts*::
@@ -125,10 +130,6 @@ environment variable.
Use this file to authenticate the connection
*--start-at-task=*'START_AT'::
Start the playbook at the task matching this name.
@@ -169,7 +170,7 @@ Add the specified arguments to any ssh command-line.
*-U* 'SUDO_USERNAME', *--sudo-user=*'SUDO_USERNAME'::
Sudo to 'SUDO_USERNAME', default is root (deprecated, use become).
*--skip-tags=*'SKIP_TAGS'::
@@ -204,6 +205,24 @@ up to three times for more output.
Show program's version number and exit.
EXIT STATUS
-----------
*0* -- OK or no hosts matched
*1* -- Error
*2* -- One or more hosts failed
*3* -- One or more hosts were unreachable
*4* -- Parser error
*5* -- Bad or incomplete options
*99* -- User interrupted execution
*250* -- Unexpected error
ENVIRONMENT
-----------

@@ -54,7 +54,12 @@ OPTIONS
Adds the hostkey for the repo URL if not already added.
*-b*, *--become*::
Use privilege escalation (specific one depends on become_method),
this does not imply prompting for passwords.
*-K*, *--ask-become-pass*::
Ask for privilege escalation password.
@@ -67,7 +72,7 @@ For example, using ssh and not having a key-based authentication with ssh-agent
Prompt for su password, used with --su (deprecated, use become).
*--ask-sudo-pass*::
Prompt for the password to use with --sudo, if any (deprecated, use become).
@@ -95,6 +100,10 @@ Force running of playbook even if unable to update playbook repository. This
can be useful, for example, to enforce run-time state when a network
connection may not always be up or possible.
*--full*::
Do a full clone of the repository. By default ansible-pull will do a shallow clone based on the last revision.
*-h*, *--help*::
Show the help message and exit.

@@ -37,7 +37,12 @@ OPTIONS
The 'ARGUMENTS' to pass to the module.
*-b*, *--become*::
Use privilege escalation (specific one depends on become_method),
this does not imply prompting for passwords.
*-K*, *--ask-become-pass*::
Ask for privilege escalation password.
@@ -50,7 +55,7 @@ For example, using ssh and not having a key-based authentication with ssh-agent
Prompt for su password, used with --su (deprecated, use become).
*--ask-sudo-pass*::
Prompt for the password to use with --sudo, if any (deprecated, use become).
@@ -104,7 +109,7 @@ Alternatively you can use a comma separated list of hosts or single host with tr
*-l* 'SUBSET', *--limit=*'SUBSET'::
Further limits the selected host/group patterns.
You can prefix it with '~' to indicate that the pattern is a regex.
*--list-hosts*::

@@ -0,0 +1,150 @@
# Auto Install Ansible roles
*Author*: Will Thames <@willthames>
*Date*: 19/02/2016
## Motivation
To use the latest (or even a specific) version of a playbook with the
appropriate roles, the following steps are typically required:
```
git pull upstream branch
ansible-galaxy install -r path/to/rolesfile.yml -p path/to/rolesdir -f
ansible-playbook run-the-playbook.yml
```
### Problems
- The most likely step in this process to be forgotten is the middle step. While we can improve processes and documentation to try to ensure that this step is not skipped, we can improve ansible-playbook so that the step is not required.
- Ansible-galaxy does not sufficiently handle versioning.
- There is not a consistent format for specifying a role in a playbook or a dependent role in meta/main.yml.
## Approaches
### Approach 1: Specify rolesfile and rolesdir in playbook
Provide new `rolesdir` and `rolesfile` keywords:
```
- hosts: application-env
  become: True
  rolesfile: path/to/rolesfile.yml
  rolesdir: path/to/rolesdir
  roles:
    - roleA
    - { role: roleB, tags: role_roleB }
```
Running ansible-playbook against such a playbook would cause the roles listed in
`rolesfile` to be installed in `rolesdir`.
Add new configuration options to allow a default rolesfile, a default rolesdir, and
whether or not to auto-update roles (defaulting to False).
#### Advantages
- Existing mechanism for roles management is maintained
- Playbooks are not polluted with roles 'meta' information (version, source)
#### Disadvantages
- Adds two new keywords
- Adds three new configuration variables for defaults
### Approach 2: Allow rolesfile inclusion
Allow the `roles` section to include a roles file:
```
- hosts: application-env
  become: True
  roles:
    - include: path/to/rolesfile.yml
```
Running this playbook would cause the roles to be updated from the included
roles file.
This would also be functionally equivalent to specifying the roles file
content within the playbook:
```
- hosts: application-env
  become: True
  roles:
    - src: https://git.example.com/roleA.git
      scm: git
      version: 0.1
    - src: https://git.example.com/roleB.git
      scm: git
      version: 0.3
      tags: role_roleB
```
#### Advantages
- The existing rolesfile mechanism is maintained
- Uses familiar inclusion mechanism
#### Disadvantages
- Separate playbooks would need separate rolesfiles. For example, a provision
playbook and upgrade playbook would likely have some overlap - currently
you can use the same rolesfile with ansible-galaxy so that the same
roles are available but only a subset of roles is used by the smaller
playbook.
- The roles file would need to be able to include playbook features such
as role tagging.
- New configuration defaults would likely still be required (and possibly
an override keyword for rolesdir and role auto update)
### Approach 3:
*Author*: chouseknecht<@chouseknecht>
*Date*: 24/02/2016
This is a combination of ideas taken from IRC, the ansible development group, and conversations at the recent contributor summit. It also incorporates most of the ideas from Approach 1 (above) with two notable exceptions: 1) it eliminates maintaining a roles file (or what we think of today as requirements.yml); and 2) it does not include the definition of rolesdir in the playbook.
Here's the approach:
- Share the role install logic between ansible-playbook and ansible-galaxy so that ansible-playbook can resolve and install missing roles at playbook run time simply by evaluating the playbook.
- Ansible-galaxy installs or preloads roles also by examining a playbook.
- Deprecate support for requirements.yml (the two points above make it unnecessary).
- Make ansible-playbook auto-downloading of roles configurable in ansible.cfg. In certain circumstances it may be desirable to disable auto-download.
- Provide one format for specifying a role whether in a playbook or in meta/main.yml. Suggested format:
```
{
    'scm': 'git',
    'src': 'http://git.example.com/repos/repo.git',
    'version': 'v1.0',
    'name': 'repo'
}
```
- For roles installed from Galaxy, Galaxy should provide some measure of security against version change. Galaxy should track the commit related to a version. If the role owner changes historical versions (today tags) and thus changes the commit hash, the affected version would become un-installable.
- Refactor the install process to encompass the following :
- Idempotency - If a role version is already installed, don't attempt to install it again. If symlinks are present (see below), don't break or remove them.
- Provide a --force option that overrides idempotency.
- Install roles via tree-ish references, not just tags or commits (PR exists for this).
- Support a whitelist of role sources. Galaxy should not be automatically assumed to be part of the whitelist.
- Continue to be recursive, allowing roles to have dependencies specified in meta/main.yml.
- Continue to install roles in the roles_path.
- Use a symlink approach to managing role versions in the roles_path (a sketch of the link flip follows the example). Example:
```
roles/
    briancoca.oracle_java7.v1.0
    briancoca.oracle_java7.v2.2
    briancoca.oracle_java7.qs3ih6x
    briancoca.oracle_java7 => briancoca.oracle_java7.qs3ih6x
```
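A minimal sketch of the link flip this layout implies, assuming a local `roles/` directory; the paths and version names are illustrative, not part of the proposal:
```
# Hypothetical sketch: repoint the bare role name at one installed version.
import os

roles_path = 'roles'
role = 'briancoca.oracle_java7'
version = 'v2.2'

link = os.path.join(roles_path, role)
target = '{0}.{1}'.format(role, version)  # relative target inside roles/

if os.path.islink(link):
    os.unlink(link)       # drop the old version link; versioned dirs stay
os.symlink(target, link)  # roles/briancoca.oracle_java7 -> ...v2.2
```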
## Conclusion
Feedback is requested to improve any of the above approaches, or provide further approaches to solve this problem.

@@ -0,0 +1,487 @@
# Docker_Container Module Proposal
## Purpose and Scope:
The purpose of docker_container is to manage the lifecycle of a container. The module will provide a mechanism for
moving the container between absent, present, stopped and started states. It will focus purely on managing container
state. The intention of the narrow focus is to make understanding and using the module clear and keep maintenance
and testing as easy as possible.
Docker_container will manage a container using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
how other cloud modules operate.
The container world is moving rapidly, so the goal is to create a suite of docker modules that keep pace, with docker_container
leading the way. If this project is successful, it will naturally deprecate the existing docker module.
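For context, a rough sketch of the docker-py plumbing such a module would sit on (docker-py 1.x style; the socket URL, API version, and container details here are illustrative assumptions, not part of this proposal):
```
from docker import Client

# Connection details would come from the shared utility module described above.
client = Client(base_url='unix://var/run/docker.sock', version='1.14')

# Match an existing container by name, as the module's state logic would.
matches = [c for c in client.containers(all=True)
           if '/myredis' in (c.get('Names') or [])]

if not matches:
    # "started" with no match: create the container, then start it.
    container = client.create_container(image='redis', name='myredis')
    client.start(container)
```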
## Parameters:
Docker_container will accept the parameters listed below. An attempt has been made to represent all the options available to
docker's create, kill, pause, run, rm, start, stop and update commands.
Parameters for connecting to the API are not listed here. They are included in the common utility module mentioned above.
```
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
default: null
capabilities:
description:
- List of capabilities to add to the container.
default: null
command:
description:
- Command or list of commands to execute in the container when it starts.
default: null
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period
default: 0
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota
default: 0
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
default: null
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1)
default: null
cpu_shares:
description:
- CPU shares (relative weight).
default: null
detach:
description:
- Enable detached mode to leave the container running in background.
If disabled, fail unless the process exits cleanly.
default: true
devices:
description:
- List of host device bindings to add to the container. Each binding is a mapping expressed
in the format: <path_on_host>:<path_in_container>:<cgroup_permissions>
default: null
dns_servers:
description:
- List of custom DNS servers.
default: null
dns_search_domains:
description:
- List of custom DNS search domains.
default: null
env:
description:
- Dictionary of key,value pairs.
default: null
entrypoint:
description:
- String or list of commands that overwrite the default ENTRYPOINT of the image.
default: null
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary. The hostname will be added to the
container's /etc/hosts file.
default: null
exposed_ports:
description:
- List of additional container ports to expose for port mappings or links.
If the port is already exposed using EXPOSE in a Dockerfile, it does not
need to be exposed again.
default: null
aliases:
- exposed
force_kill:
description:
- Use with absent, present, started and stopped states to use the kill command rather
than the stop command.
default: false
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
default: null
hostname:
description:
- Container hostname.
default: null
image:
description:
- Container image used to create and match containers.
required: true
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
default: false
ipc_mode:
description:
- Set the IPC mode for the container. Can be one of
'container:<name|id>' to reuse another container's IPC namespace
or 'host' to use the host's IPC namespace within the container.
default: null
keep_volumes:
description:
- Retain volumes associated with a removed container.
default: false
kill_signal:
description:
- Override default signal used to kill a running container.
default: null
kernel_memory:
description:
- Kernel memory limit (format: <number>[<unit>]). Number is a positive integer.
Unit can be one of b, k, m, or g. Minimum is 4M.
default: 0
labels:
description:
- Dictionary of key value pairs.
default: null
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias)
default: null
log_driver:
description:
- Specify the logging driver.
choices:
- json-file
- syslog
- journald
- gelf
- fluentd
- awslogs
- splunk
default: json-file
log_options:
description:
- Dictionary of options specific to the chosen log_driver. See https://docs.docker.com/engine/admin/logging/overview/
for details.
required: false
default: null
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33)
default: null
memory:
description:
- Memory limit (format: <number>[<unit>]). Number is a positive integer.
Unit can be one of b, k, m, or g
default: 0
memory_reservation:
description:
- Memory soft limit (format: <number>[<unit>]). Number is a positive integer.
Unit can be one of b, k, m, or g
default: 0
memory_swap:
description:
- Total memory limit (memory + swap, format:<number>[<unit>]).
Number is a positive integer. Unit can be one of b, k, m, or g.
default: 0
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
default: 0
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container, name may be a name or a long or short container ID.
required: true
network_mode:
description:
- Connect the container to a network.
choices:
- bridge
- container:<name|id>
- host
- none
default: null
networks:
description:
- Dictionary of networks to which the container will be connected. The dictionary must have a name key (the name of the network).
Optional keys include: aliases (a list of container aliases), and links (a list of links in the format C(container_name:alias)).
default: null
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
default: false
paused:
description:
- Use with the started state to pause running processes inside the container.
default: false
pid_mode:
description:
- Set the PID namespace mode for the container. Currently only supports 'host'.
default: null
privileged:
description:
- Give extended privileges to the container.
default: false
published_ports:
description:
- List of ports to publish from the container to the host.
- Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface.
- Container ports must be exposed either in the Dockerfile or via the C(expose) option.
- A value of ALL will publish all exposed container ports to random host ports, ignoring
any other mappings.
aliases:
- ports
read_only:
description:
- Mount the container's root file system as read-only.
default: false
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
default: false
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
default: false
restart_policy:
description:
- Container restart policy.
choices:
- on-failure
- always
default: on-failure
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
default: 0
shm_size:
description:
- Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes).
- Omitting the unit defaults to bytes. If you omit the size entirely, the system uses `64m`.
default: null
security_opts:
description:
- List of security options in the form of C("label:user:User")
default: null
state:
description:
- "absent" - A container matching the specified name will be stopped and removed. Use force_kill to kill the container
rather than stopping it. Use keep_volumes to retain volumes associated with the removed container.
- "present" - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config. Use recreate to force the re-creation of the matching container. Use force_kill to kill the
container rather than stopping it. Use keep_volumes to retain volumes associated with a removed container.
- "started" - Asserts there is a running container matching the name and any provided configuration. If no container
matches the name, a container will be created and started. If a container matching the name is found but the
configuration does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed
and a new container will be created with the requested configuration and started. Use recreate to always re-create a
matching container, even if it is running. Use restart to force a matching container to be stopped and restarted. Use
force_kill to kill a container rather than stopping it. Use keep_volumes to retain volumes associated with a removed
container.
- "stopped" - a container matching the specified name will be stopped. Use force_kill to kill a container rather than
stopping it.
required: false
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
default: null
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending SIGKILL.
required: false
trust_image_content:
description:
- If true, skip image verification.
default: false
tty:
description:
- Allocate a pseudo-TTY.
default: false
ulimits:
description:
- List of ulimit options. A ulimit is specified as C(nofile:262144:262144)
default: null
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- Can be [ user | user:group | uid | uid:gid | user:gid | uid:group ]
default: null
uts:
description:
- Set the UTS namespace mode for the container.
default: null
volumes:
description:
- List of volumes to mount within the container.
- 'Use docker CLI-style syntax: C(/host:/container[:mode])'
- You can specify a read mode for the mount with either C(ro) or C(rw).
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or
private label for the volume.
default: null
volume_driver:
description:
- The container's volume driver.
default: none
volumes_from:
description:
- List of container names or Ids to get volumes from.
default: null
```
## Examples:
```
- name: Create a data container
  docker_container:
    name: mydata
    image: busybox
    volumes:
      - /data

- name: Re-create a redis container
  docker_container:
    name: myredis
    image: redis
    command: redis-server --appendonly yes
    state: present
    recreate: yes
    exposed_ports:
      - 6379
    volumes_from:
      - mydata

- name: Restart a container
  docker_container:
    name: myapplication
    image: someuser/appimage
    state: started
    restart: yes
    links:
      - "myredis:aliasedredis"
    devices:
      - "/dev/sda:/dev/xvda:rwm"
    ports:
      - "8080:9000"
      - "127.0.0.1:8081:9001/udp"
    env:
      SECRET_KEY: ssssh

- name: Container present
  docker_container:
    name: mycontainer
    state: present
    recreate: yes
    force_kill: yes
    image: someplace/image
    command: echo "I'm here!"

- name: Start 4 load-balanced containers
  docker_container:
    name: "container{{ item }}"
    state: started
    recreate: yes
    image: someuser/anotherappimage
    command: sleep 1d
  with_sequence: count=4

- name: Remove container
  docker_container:
    name: ohno
    state: absent

- name: Syslogging output
  docker_container:
    name: myservice
    state: started
    log_driver: syslog
    log_options:
      syslog-address: tcp://my-syslog-server:514
      syslog-facility: daemon
      syslog-tag: myservice
```
## Returns:
The JSON object returned by the module will include a *results* object providing `docker inspect` output for the affected container.
```
{
changed: True,
failed: False,
rc: 0
results: {
< the results of `docker inspect` >
}
}
```

@@ -0,0 +1,159 @@
# Docker_Files Modules Proposal
## Purpose and Scope
The purpose of docker_files is to provide for retrieving a file or folder from a container's file system,
inserting a file or folder into a container, exporting a container's entire filesystem as a tar archive, or
retrieving a list of changed files from a container's file system.
Docker_files will manage a container using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
how other cloud modules operate.
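For reference, docker-py already exposes the filesystem-diff primitive the diff option would build on; a minimal sketch (the container name and connection details are assumptions):
```
from docker import Client

client = Client(base_url='unix://var/run/docker.sock')

# Client.diff() yields entries like {'Path': '/etc/mtab', 'Kind': 1};
# Kind 0/1/2 correspond to changed/added/deleted.
kind_map = {0: 'C', 1: 'A', 2: 'D'}
changes = [{'state': kind_map[e['Kind']], 'path': e['Path']}
           for e in client.diff('mycontainer1')]
print(changes)
```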
## Parameters
Docker_files accepts the parameters listed below. API connection parameters will be part of a shared utility module
as mentioned above.
```
diff:
description:
- Provide a list of container names or IDs. For each container a list of changed files and directories found on the
container's file system will be returned. Diff is mutually exclusive with all other options except event_type.
Use event_type to choose which events to include in the output.
default: null
export:
description:
- Provide a container name or ID. The container's file system will be exported to a tar archive. Use dest
to provide a path for the archive on the local file system. If the output file already exists, it will not be
overwritten. Use the force option to overwrite an existing archive.
default: null
dest:
description:
- Destination path of copied files. If the destination is a container file system, precede the path with a
container name or ID + ':'. For example, C(mycontainer:/path/to/file.txt). If the destination path does not
exist, it will be created. If the destination path exists on the local filesystem, it will not be overwritten.
Use the force option to overwrite existing files on the local filesystem.
default: null
force:
description:
- Overwrite existing files on the local filesystem.
default: false
follow_link:
description:
- Follow symbolic links in the src path. If src is local and the file is a symbolic link, the symbolic link
itself, not the target, is copied by default. To copy the link target and not the link, set follow_link to true.
default: false
event_type:
description:
- Select the specific event type to list in the diff output.
choices:
- all
- add
- delete
- change
default: all
src:
description:
- The source path of file(s) to be copied. If source files are found on the container's file system, precede the
path with the container name or ID + ':'. For example, C(mycontainer:/path/to/files).
default: null
```
## Examples
```
- name: Copy files from the local file system to a container's file system
  docker_files:
    src: /tmp/rpm
    dest: mycontainer:/tmp
    follow_link: yes

- name: Copy files from the container to the local filesystem and overwrite existing files
  docker_files:
    src: container1:/var/lib/data
    dest: /tmp/container1/data
    force: yes

- name: Export container filesystem
  docker_files:
    export: container1
    dest: /tmp/container1.tar
    force: yes

- name: List all differences for multiple containers
  docker_files:
    diff:
      - mycontainer1
      - mycontainer2

- name: Include changed files only in diff output
  docker_files:
    diff:
      - mycontainer1
    event_type: change
```
## Returns
Returned from diff:
```
{
changed: false,
failed: false,
rc: 0,
results: {
mycontainer1: [
{ state: 'C', path: '/dev' },
{ state: 'A', path: '/dev/kmsg' },
{ state: 'C', path: '/etc' },
{ state: 'A', path: '/etc/mtab' }
],
mycontainer2: [
{ state: 'C', path: '/foo' },
{ state: 'A', path: '/foo/bar.txt' }
]
}
}
```
Returned when copying files:
```
{
changed: true,
failed: false,
rc: 0,
results: {
src: /tmp/rpms,
dest: mycontainer:/tmp
files_copied: [
'file1.txt',
'file2.jpg'
]
}
}
```
Returned when exporting a container filesystem:
```
{
changed: true,
failed: false,
rc: 0,
results: {
src: container_name,
dest: local/path/archive_name.tar
}
}
```

@@ -0,0 +1,47 @@
# Docker_Image_Facts Module Proposal
## Purpose and Scope
The purpose of docker_image_facts is to inspect docker images.
Docker_image_facts will use docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
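Underneath, this is essentially a thin wrapper around docker-py's image inspection; a minimal sketch (the connection details are assumptions):
```
from docker import Client

client = Client(base_url='unix://var/run/docker.sock')

# inspect_image() returns the same dict that `docker inspect` prints.
facts = client.inspect_image('myimage:v1')
print(facts['Id'], facts.get('RepoTags'))
```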
## Parameters
Docker_image_facts will support the parameters listed below. API connection parameters will be part of a shared
utility module as mentioned above.
```
name:
description:
- An image name or list of image names. The image name can include a tag using the format C(name:tag).
default: null
```
## Examples
```
- name: Inspect all images
  docker_image_facts:
  register: image_facts

- name: Inspect a single image
  docker_image_facts:
    name: myimage:v1
  register: myimage_v1_facts
```
## Returns
```
{
changed: False
failed: False
rc: 0
result: [ < inspection output > ]
}
```

@@ -0,0 +1,207 @@
# Docker_Image Module Proposal
## Purpose and Scope
The purpose is to update the existing docker_image module. The updates include expanding the module's capabilities to
match the build, load, pull, push, rmi, and save docker commands and adding support for remote registries.
Docker_image will manage images using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_image will support the parameters listed below. API connection parameters will be part of a shared utility
module as mentioned above.
```
archive_path:
description:
- Save image to the provided path. Use with state present to always save the image to a tar archive. If
intermediate directories in the path do not exist, they will be created. If a matching
archive already exists, it will be overwritten.
default: null
config_path:
description:
- Path to a custom docker config file. Docker-py defaults to using ~/.docker/config.json.
cgroup_parent:
description:
- Optional parent cgroup for build containers.
default: null
cpu_shares:
description:
- CPU shares for build containers. Integer value.
default: 0
cpuset_cpus:
description:
- CPUs in which to allow build container execution C(1,3) or C(1-3).
default: null
dockerfile:
description:
- Name of dockerfile to use when building an image.
default: Dockerfile
email:
description:
- The email for the registry account. Provide with username and password when credentials are not encoded
in docker configuration file or when encoded credentials should be updated.
default: null
nolog: true
force:
description:
- Use with absent state to un-tag and remove all images matching the specified name. Use with present state to
force a pull or rebuild of the image.
default: false
load_path:
description:
- Use with state present to load a previously saved image. Provide the full path to the image archive file.
default: null
memory:
description:
- Build container limit. Memory limit specified as a positive integer for number of bytes.
memswap:
description:
- Build container limit. Total memory (memory + swap). Specify as a positive integer for number of bytes or
-1 to disable swap.
default: null
name:
description:
- Image name or ID.
required: true
nocache:
description:
- Do not use cache when building an image.
default: false
password:
description:
- Password used when connecting to the registry. Provide with username and email when credentials are not encoded
in docker configuration file or when encoded credentials should be updated.
default: null
nolog: true
path:
description:
- Path to Dockerfile and context from which to build an image.
default: null
push:
description:
- Use with state present to always push an image to the registry.
default: false
registry:
description:
- URL of the registry. If not provided, defaults to Docker Hub.
default: null
rm:
description:
- Remove intermediate containers after build.
default: true
tag:
description:
- Image tags. When pulling or pushing, set to 'all' to include all tags.
default: latest
url:
description:
- The location of a Git repository. The repository acts as the context when building an image.
- Mutually exclusive with path.
username:
description:
- Username used when connecting to the registry. Provide with password and email when credentials are not encoded
in docker configuration file or when encoded credentials should be updated.
default: null
nolog: true
state:
description:
- "absent" - if image exists, unconditionally remove it. Use the force option to un-tag and remove all images
matching the provided name.
- "present" - check if image is present with the provided tag. If the image is not present or the force option
is used, the image will either be pulled from the registry, built or loaded from an archive. To build the image,
provide a path or url to the context and Dockerfile. To load an image, use load_path to provide a path to
an archive file. If no path, url or load_path is provided, the image will be pulled. Use the registry
parameters to control the registry from which the image is pulled.
required: false
default: present
choices:
- absent
- present
http_timeout:
description:
- Timeout for HTTP requests during the image build operation. Provide a positive integer value for the number of
seconds.
default: null
```
## Examples
```
- name: build image
  docker_image:
    path: "/path/to/build/dir"
    name: "my_app"
    tags:
      - v1.0
      - mybuild

- name: force pull an image and all tags
  docker_image:
    name: "my/app"
    force: yes
    tags: all

- name: untag and remove image
  docker_image:
    name: "my/app"
    state: absent
    force: yes

- name: push an image to Docker Hub with all tags
  docker_image:
    name: my_image
    push: yes
    tags: all

- name: pull image from a private registry
  docker_image:
    name: centos
    registry: https://private_registry:8080
```
## Returns
```
{
changed: True
failed: False
rc: 0
action: built | pulled | loaded | removed | none
msg: < text confirming the action that was taken >
results: {
< output from docker inspect for the affected image >
}
}
```

@@ -0,0 +1,48 @@
# Docker_Network_Facts Module Proposal
## Purpose and Scope
Docker_network_facts will inspect networks.
Docker_network_facts will use docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_network_facts will accept the parameters listed below. API connection parameters will be part of a shared
utility module as mentioned above.
```
name:
description:
- Network name or list of network names.
default: null
```
## Examples
```
- name: Inspect all networks
  docker_network_facts:
  register: network_facts

- name: Inspect a specific network and format the output
  docker_network_facts:
    name: web_app
  register: web_app_facts
```
## Returns
```
{
changed: False
failed: False
rc: 0
results: [ < inspection output > ]
}
```

@@ -0,0 +1,130 @@
# Docker_Network Module Proposal
## Purpose and Scope:
The purpose of Docker_network is to create networks, connect containers to networks, disconnect containers from
networks, and delete networks.
Docker network will manage networks using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
how other cloud modules operate.
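For reference, a minimal sketch of the docker-py calls this would map onto (these network methods appeared around docker-py 1.5/1.7; the names and connection details are illustrative assumptions):
```
from docker import Client

client = Client(base_url='unix://var/run/docker.sock')

# Create the network if it does not exist, then reconcile membership.
if not client.networks(names=['network_one']):
    client.create_network('network_one', driver='bridge')

client.connect_container_to_network('containera', 'network_one')
client.disconnect_container_from_network('containerb', 'network_one')
```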
## Parameters:
Docker_network will accept the parameters listed below. Parameters related to connecting to the API will be handled in
a shared utility module, as mentioned above.
```
connected:
description:
- List of container names or container IDs to connect to a network.
default: null
driver:
description:
- Specify the type of network. Docker provides bridge and overlay drivers, but 3rd party drivers can also be used.
default: bridge
driver_options:
description:
- Dictionary of network settings. Consult docker docs for valid options and values.
default: null
force:
description:
- With state 'absent' forces disconnecting all containers from the network prior to deleting the network. With
state 'present' will disconnect all containers, delete the network and re-create the network.
default: false
incremental:
description:
- By default the connected list is canonical, meaning containers not on the list are removed from the network.
Use incremental to leave existing containers connected.
default: false
ipam_driver:
description:
- Specify an IPAM driver.
default: null
ipam_options:
description:
- Dictionary of IPAM options.
default: null
network_name:
description:
- Name of the network to operate on.
default: null
required: true
state:
description:
- "absent" deletes the network. If a network has connected containers, it cannot be deleted. Use the force option
to disconnect all containers and delete the network.
- "present" creates the network, if it does not already exist with the specified parameters, and connects the list
of containers provided via the connected parameter. Containers not on the list will be disconnected. An empty
list will leave no containers connected to the network. Use the incremental option to leave existing containers
connected. Use the force option to force re-creation of the network.
default: present
choices:
- absent
- present
```
## Examples:
```
- name: Create a network
  docker_network:
    name: network_one

- name: Remove all but selected list of containers
  docker_network:
    name: network_one
    connected:
      - containera
      - containerb
      - containerc

- name: Remove a single container
  docker_network:
    name: network_one
    connected: "{{ fulllist|difference(['containera']) }}"

- name: Add a container to a network, leaving existing containers connected
  docker_network:
    name: network_one
    connected:
      - containerc
    incremental: yes

- name: Create a network with options (Not sure if 'ip_range' is correct key name)
  docker_network:
    name: network_two
    options:
      subnet: '172.3.26.0/16'
      gateway: 172.3.26.1
      ip_range: '192.168.1.0/24'

- name: Delete a network, disconnecting all containers
  docker_network:
    name: network_one
    state: absent
    force: yes
```
## Returns:
```
{
changed: True,
failed: false
rc: 0
action: created | removed | none
results: {
< results from docker inspect for the affected network >
}
}
```

@@ -0,0 +1,48 @@
# Docker_Volume_Facts Module Proposal
## Purpose and Scope
Docker_volume_facts will inspect volumes.
Docker_volume_facts will use docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_volume_facts will accept the parameters listed below. API connection parameters will be part of a shared
utility module as mentioned above.
```
name:
description:
- Volume name or list of volume names.
default: null
```
## Examples
```
- name: Inspect all volumes
  docker_volume_facts:
  register: volume_facts

- name: Inspect a specific volume
  docker_volume_facts:
    name: data
  register: data_vol_facts
```
## Returns
```
{
changed: False
failed: False
rc: 0
results: [ < output from volume inspection > ]
}
```

@@ -0,0 +1,82 @@
# Docker_Volume Modules Proposal
## Purpose and Scope
The purpose of docker_volume is to manage volumes.
Docker_volume will manage volumes using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_volume accepts the parameters listed below. Parameters for connecting to the API are not listed here, as they
will be part of the shared module mentioned above.
```
driver:
description:
- Volume driver.
default: local
force:
description:
- Use with state 'present' to force removal and re-creation of an existing volume. This will not remove and
re-create the volume if it is already in use.
name:
description:
- Name of the volume.
required: true
default: null
options:
description:
- Dictionary of driver specific options. The local driver does not currently support
any options.
default: null
state:
description:
- "absent" removes a volume. A volume cannot be removed if it is in use.
- "present" create a volume with the specified name, if the volume does not already exist. Use the force
option to remove and re-create a volume. Even with the force option a volume cannot be removed and re-created if
it is in use.
default: present
choices:
- absent
- present
```
## Examples
```
- name: Create a volume
  docker_volume:
    name: data

- name: Remove a volume
  docker_volume:
    name: data
    state: absent

- name: Re-create an existing volume
  docker_volume:
    name: data
    state: present
    force: yes
```
## Returns
```
{
changed: true,
failed: false,
rc: 0,
action: removed | created | none
results: {
< show the result of docker inspect of an affected volume >
}
}
```

@@ -0,0 +1,110 @@
# Proposal: Proposals - have a process and documentation
*Author*: Robyn Bergeron <@robynbergeron>
*Date*: 04/03/2016
- Status: New
- Proposal type: community development process
- Targeted Release: Forever, until we improve it more at a later date.
- PR for Comments: https://github.com/ansible/ansible/pull/14802#
- Estimated time to implement: 2 weeks at most
Comments on this proposal prior to acceptance are accepted in the comments section of the pull request linked above.
## Motivation
Define light process for how proposals are created and accepted, and document the process permanently in community.html somewhere.
The following suggested process was created with the following ideas in mind:
- Transparency: notifications, decisions made in public meetings, etc. helps people to know what is going on.
- Avoid proliferation of multiple comments in multiple places; keep everything in the PR.
- Action is being taken: Knowing when and where decisions are made, and knowing who is the final authority, gives people the sense that things are moving.
- Ensure that new features or enhancements are added to the roadmap and release notes.
### Problems
Proposals are confusing. Should I write one? Where do I put it? Why can't I find any documentation about this? Who approves things? This is why we should have a light and unbureaucratic process.
## Solution proposal
This proposal has multiple parts:
- Proposed process for submitting / accepting proposals
- Suggested proposal template
Once the process and template are approved, a PR will be submitted for documenting the process permanently in documentation, as well as a PR to ansible/docs/proposals for the proposal template.
### Proposed Process
1: PROPOSAL CREATION
- Person making the proposal creates the proposal document in ansible/proposals via PR, following the proposal template.
- Person making the proposal creates an issue in ansible/proposals for that proposal.
- Author of proposal PR updates the proposal with link to the created issue #.
- Notify the community that this proposal exists.
- Author notifies ansible-devel mailing list for transparency, providing link to issue.
- Author includes commentary indicating that comments should *not* be in response to this email, but rather, community members should add comments or feedback in the issue.
- PRs may be made to the proposal, and can be merged or not at the submitter's discretion, and should be discussed/linked in the issue.
2: KEEP THE PROPOSAL MOVING TOWARDS A DECISION.
- Create tags in the ansible/proposals repo to indicate the progress of the various proposal issues, e.g.: Discussion, Ready for meeting, Approved. (Can be used in conjunction with a board on waffle.io to show this, kanban style.)
- Proposals use public meetings as a mechanism to keep them moving.
- All proposals are decided on in a public meeting by a combination of folks with commit access to Ansible and any interested parties / users, as well as the author of the proposal. Time for approvals will be a portion of the overall schedule; proposals will be reviewed in the order received and may occasionally be deferred to the next meeting. If we are overwhelmed, a separate meeting may be scheduled.
(Note: ample feedback in the comments of the proposal issue should allow folks to come to broad consensus one way or another in the meeting rather rapidly, generally without an actual counted vote. However, the decision should be made *in the meeting*, so as to avoid any questions around whether or not the approval of one Ansible maintainer / committer reflects the opinions or decision of everyone.)
- *New* proposals are explicitly added to the public IRC meeting agenda for each week by the meeting organizer for acknowledgement of ongoing discussion and existence, and/or easy approval/rejection. (Either via a separate issue somewhere tracking any meeting items, or by adding a “meeting” label to the PR.)
- Existing new, not-yet-approved proposals are reviewed weekly by the meeting organizer to check for slow-moving/stalled proposals, or for flags from the proposal owner indicating that they'd like to have it addressed in the week's meeting.
3: PROPOSAL APPROVED
- Amendments needed to the proposal after IRC discussion should be made immediately.
- The proposal status should be changed to Approved / In Progress in the document.
- The proposal should be moved from /ansible/proposals to a roadmap folder (or similar).
- The proposal issue comments should be updated with a note by the meeting organizer that the proposal has been accepted, and further commentary should be in the PRs implementing the code itself.
- Proposals can also be PENDING or NEEDS INFO (waiting on something), or DECLINED.
4: CODE IN PROGRESS
- Approved proposals should be periodically checked for progress, especially if tied to a release and/or is noted as release blocking.
- PRs implementing the proposal are recommended to link to the original proposal PR or document for context.
5: CODE COMPLETE
- Proposal document, which should be in docs/roadmap, should have their status updated to COMPLETE.
- The release notes file for the targeted release should be updated with a small note regarding the feature or enhancement; completed proposals for community processes should have a follow-up mail sent to the mailing list providing information and links to the new process.
- Hooray! Buy your friend a tasty beverage of their choosing.
### Suggested Proposal Template Outline
Following the .md convention, a proposal template should go in the docs/proposals repository. This is a suggested outline; the template will provide more guidance / context and will be submitted as a PR upon approval of this proposal.
Please note that, in line with the above guidance that some processes will require fine-tuning over time, the suggested template outline below, as well as the final template submitted to the docs/proposals repo, has wiggle room in terms of description, and what makes sense may vary from one proposal to another. The expectation is that people will simply do what seems right, and over time we'll figure out what works best; in the meantime, guidance is nice.
#### TEMPLATE OUTLINE
- Proposal Title
- Author (w/github ID linked)
- Date:
- Status: New, Approved, Pending, Complete
- Proposal type: Feature / enhancement / community development process
- Targeted Release:
- PR for comments:
- Estimated time to implement:
Comments on this proposal prior to acceptance are accepted in the comments of the PR linked above.
- Motivation / Problems solved:
- Proposed Solution: (what you're doing, and why; keeping this loose for now.)
Other suggested things to include:
- Dependencies / requirements:
- Testing:
- Documentation:
## Dependencies / requirements
- Approval of this proposed process is needed to create the actual documentation of the process.
- Weekly, public IRC meetings (which should probably be documented with regard to time / day of week / etc. in the contributor documentation) of the Ansible development community.
- Creation of appropriate labels in GitHub (or defining some other mechanism to gather items for a weekly meeting agenda, such as a separate issue in GitHub that links to the PRs.)
- Coming to an agreement regarding “what qualifies as a feature or enhancement that requires a proposal, vs. just submitting a PR with code.” It could simply be that if the change is large or very complicated, our recommendation is always to file a proposal to ensure (a) transparency and (b) that a contributor doesn't waste their time on something that ultimately can't be merged at this time.
- Nice to have: Any new proposal PR landing in ansible/proposals is automatically merged and an email automatically notifies the mailing list of the existence and location of the proposal & related issue # for comments.
## Testing
Testing of this proposal will literally be via submitting this proposal through the proposed proposal process. If it fails miserably, we'll know it needs fine-tuning or needs to go in the garbage can.
## Documentation:
- Documentation of the process, including “what is a feature or enhancement vs. just a regular PR,” along with the steps shown above, will be added to the Ansible documentation in .rst format via PR. The documentation should also provide guidance on the standard wording of the email notifying ansible-devel list that the proposal exists and is ready for review in the issue comments.
- A proposal template should also be created in the ansible/proposals repo directory.

@ -0,0 +1,205 @@
# Publish / Subscribe for Handlers
*Author*: René Moser <@resmo>
*Date*: 07/03/2016
## Motivation
In some use cases, a publish/subscribe kind of event to run a handler is more convenient, e.g. restarting services after replacing SSL certs.
However, Ansible does not yet provide a built-in way to handle it.
### Problem
If your SSL cert changes, you usually have to reload/restart services to use the new certificate.
However, if you have an SSL role or a generic SSL play, you usually don't want to add service-specific handlers to it.
Instead, it would be much more convenient to use a publish/subscribe kind of paradigm in the roles where the services are configured.
The way we currently implement it:
we use notify to set a fact, and later (in different plays) we act on that fact, again using notify.
~~~yaml
---
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
notify: publish ssl cert change
handlers:
- name: publish ssl cert change
set_fact:
ssl_cert_changed: true
- hosts: localhost
gather_facts: no
tasks:
- name: subscribe for ssl cert change
shell: echo cert changed
notify: service restart one
when: ssl_cert_changed is defined and ssl_cert_changed
handlers:
- name: service restart one
shell: echo service one restarted
- hosts: localhost
gather_facts: no
tasks:
- name: subscribe for ssl cert change
shell: echo cert changed
when: ssl_cert_changed is defined and ssl_cert_changed
notify: service restart two
handlers:
- name: service restart two
shell: echo service two restarted
~~~
However, this looks like a workaround for something that Ansible should provide in a much cleaner way.
## Approaches
### Approach 1:
Provide new `subscribe` keyword on handlers:
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
- hosts: localhost
gather_facts: no
handlers:
- name: service restart one
shell: echo service one restarted
subscribe: copy an ssl cert
- hosts: localhost
gather_facts: no
handlers:
- name: service restart two
shell: echo service two restarted
subscribe: copy an ssl cert
~~~
### Approach 2:
Provide new `subscribe` on handlers and `publish` keywords in tasks:
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
publish: yes
- hosts: localhost
gather_facts: no
handlers:
- name: service restart one
shell: echo service one restarted
subscribe: copy an ssl cert
- hosts: localhost
gather_facts: no
handlers:
- name: service restart two
shell: echo service two restarted
subscribe: copy an ssl cert
~~~
### Approach 3:
Provide new `subscribe` module:
A subscribe module could consume the result of a task by name; optionally, the value to react on could be specified (default: `changed`)
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
notify: service restart one
handlers:
- name: service restart one
shell: echo service one restarted
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
react_on: changed
notify: service restart two
handlers:
- name: service restart two
shell: echo service two restarted
~~~
### Approach 4:
Provide new `subscribe` module (same as Approach 3) and `publish` keyword:
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
publish: yes
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
notify: service restart one
handlers:
- name: service restart one
shell: echo service one restarted
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
notify: service restart two
handlers:
- name: service restart two
shell: echo service two restarted
~~~
### Clarifications about role dependencies and publish
When using service roles, where the subscription handlers live in the service roles and the publish task (e.g. the cert change) is defined in a dependency role (the SSL role), only the first service role running the "cert change" task as a dependency will trigger the publish.
In any other service role in the playbook that has the SSL role as a dependency, the task won't report `changed` anymore.
Therefore a message, once published, should not be overwritten or "unpublished" by the same task running again in a later role in the playbook.
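For illustration, a minimal sketch of the layout described above, with hypothetical role names; both service roles depend on the same SSL role, but only the first one to run its publish task will see it report `changed`:
~~~yaml
# roles/service_one/meta/main.yml (hypothetical; service_two looks the same)
dependencies:
  - role: ssl   # the role containing the "copy an ssl cert" publish task
~~~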
## Conclusion
Feedback is requested to improve any of the above approaches, or provide further approaches to solve this problem.

@ -0,0 +1,77 @@
# Proposal: Re-run handlers cli option
*Author*: René Moser <@resmo>
*Date*: 07/03/2016
- Status: New
## Motivation
The most annoying thing users face when using Ansible in production is having to run handlers manually after a task failed with handlers already notified.
### Problems
Handler notifications get lost when a task fails, and Ansible offers no help to catch up on the notified handlers in the next playbook run.
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: simple task
shell: echo foo
notify: get msg out
- name: this task fails
fail: msg="something went wrong"
handlers:
- name: get msg out
shell: echo handler run
~~~
Result:
~~~
$ ansible-playbook test.yml
PLAY ***************************************************************************
TASK [simple task] *************************************************************
changed: [localhost]
TASK [this task fails] ********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "something went wrong"}
NO MORE HOSTS LEFT *************************************************************
RUNNING HANDLER [get msg out] **************************************************
to retry, use: --limit @test.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
~~~
## Solution proposal
Similar to retry, Ansible should provide a way to manually invoke a list of handlers in addition to the notified handlers in the plays:
~~~
$ ansible-playbook test.yml --notify-handlers <handler>,<handler>,<handler>
$ ansible-playbook test.yml --notify-handlers @test.handlers
~~~
Example:
~~~
$ ansible-playbook test.yml --notify-handlers "get msg out"
~~~
The stdout of a failed play should provide an example of how to run the notified handlers in the next run:
~~~
...
RUNNING HANDLER [get msg out] **************************************************
to retry, use: --limit @test.retry --notify-handlers @test.handlers
~~~

@ -0,0 +1,34 @@
# Rename always_run to ignore_checkmode
*Author*: René Moser <@resmo>
*Date*: 02/03/2016
## Motivation
The task argument `always_run` is misleading.
Ansible is known for being readable by users without deep knowledge of creating playbooks, but they do not understand
what `always_run` does at first glance.
### Problems
The following looks scary if you have no idea what `always_run` does:
```
- shell: dangerous_cleanup.sh
when: cleanup == "yes"
always_run: yes
```
You have a conditional, but also a word that says `always`; the two conflict in the reader's understanding.
## Solution Proposal
Deprecate `always_run` by renaming it to `ignore_checkmode`:
```
- shell: dangerous_cleanup.sh
when: cleanup == "yes"
ignore_checkmode: yes
```

@ -1,10 +1,11 @@
#!/usr/bin/make #!/usr/bin/make
SITELIB = $(shell python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()") SITELIB = $(shell python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")
FORMATTER=../hacking/module_formatter.py FORMATTER=../hacking/module_formatter.py
DUMPER=../hacking/dump_playbook_attributes.py
all: clean docs all: clean docs
docs: clean modules staticmin docs: clean directives modules staticmin
./build-site.py ./build-site.py
-(cp *.ico htmlout/) -(cp *.ico htmlout/)
-(cp *.jpg htmlout/) -(cp *.jpg htmlout/)
@ -20,6 +21,8 @@ viewdocs: clean staticmin
htmldocs: staticmin htmldocs: staticmin
./build-site.py rst ./build-site.py rst
webdocs: htmldocs
clean: clean:
-rm -rf htmlout -rm -rf htmlout
-rm -f .buildinfo -rm -f .buildinfo
@ -39,8 +42,11 @@ clean:
.PHONEY: docs clean .PHONEY: docs clean
directives: $(FORMATTER) ../hacking/templates/rst.j2
PYTHONPATH=../lib $(DUMPER) --template-dir=../hacking/templates --output-dir=rst/
modules: $(FORMATTER) ../hacking/templates/rst.j2 modules: $(FORMATTER) ../hacking/templates/rst.j2
PYTHONPATH=../lib $(FORMATTER) -t rst --template-dir=../hacking/templates --module-dir=../lib/ansible/modules -o rst/ PYTHONPATH=../lib $(FORMATTER) -t rst --template-dir=../hacking/templates --module-dir=../lib/ansible/modules -o rst/
staticmin: staticmin:
cat _themes/srtd/static/css/theme.css | sed -e 's/^[ \t]*//g; s/[ \t]*$$//g; s/\([:{;,]\) /\1/g; s/ {/{/g; s/\/\*.*\*\///g; /^$$/d' | sed -e :a -e '$$!N; s/\n\(.\)/\1/; ta' > _themes/srtd/static/css/theme.min.css cat _themes/srtd/static/css/theme.css | sed -e 's/^[ ]*//g; s/[ ]*$$//g; s/\([:{;,]\) /\1/g; s/ {/{/g; s/\/\*.*\*\///g; /^$$/d' | sed -e :a -e '$$!N; s/\n\(.\)/\1/; ta' > _themes/srtd/static/css/theme.min.css

@ -12,8 +12,17 @@
<hr/> <hr/>
<script type="text/javascript">
(function(w,d,t,u,n,s,e){w['SwiftypeObject']=n;w[n]=w[n]||function(){
(w[n].q=w[n].q||[]).push(arguments);};s=d.createElement(t);
e=d.getElementsByTagName(t)[0];s.async=1;s.src=u;e.parentNode.insertBefore(s,e);
})(window,document,'script','//s.swiftypecdn.com/install/v2/st.js','_st');
_st('install','yABGvz2N8PwcwBxyfzUc','2.0.0');
</script>
<p> <p>
&copy; Copyright 2015 <a href="http://ansible.com">Ansible, Inc.</a>. &copy; Copyright 2016 <a href="http://ansible.com">Ansible, Inc.</a>.
{%- if last_updated %} {%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %} {% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}

@ -150,11 +150,6 @@
</a> </a>
</div> </div>
<div class="wy-side-nav-search" style="background-color:#5bbdbf;height=80px;margin:'auto auto auto auto'">
<!-- <a href="{{ pathto(master_doc) }}" class="icon icon-home"> {{ project }}</a> -->
{% include "searchbox.html" %}
</div>
<div id="menu-id" class="wy-menu wy-menu-vertical" data-spy="affix"> <div id="menu-id" class="wy-menu wy-menu-vertical" data-spy="affix">
{% set toctree = toctree(maxdepth=2, collapse=False) %} {% set toctree = toctree(maxdepth=2, collapse=False) %}
{% if toctree %} {% if toctree %}
@ -166,7 +161,7 @@
<!-- changeable widget --> <!-- changeable widget -->
<center> <center>
<br/> <br/>
<a href="http://www.ansible.com/tower?utm_source=docs"> <a href="http://www.ansible.com/docs-left?utm_source=docs">
<img style="border-width:0px;" src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-left-rail.png" /> <img style="border-width:0px;" src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-left-rail.png" />
</a> </a>
</center> </center>
@ -189,15 +184,17 @@
<div class="wy-nav-content"> <div class="wy-nav-content">
<div class="rst-content"> <div class="rst-content">
<!-- Tower ads --> <!-- Banner ads -->
<a class="DocSiteBanner" href="http://www.ansible.com/tower?utm_source=docs"> <div class="DocSiteBanner">
<div class="DocSiteBanner-imgWrapper"> <a class="DocSiteBanner-imgWrapper"
<img src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-top-left.png"> href="http://www.ansible.com/docs-top?utm_source=docs">
</div> <img src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-top-left.png">
<div class="DocSiteBanner-imgWrapper"> </a>
<img src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-top-right.png"> <a class="DocSiteBanner-imgWrapper"
</div> href="http://www.ansible.com/docs-top?utm_source=docs">
</a> <img src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-top-right.png">
</a>
</div>
{% include "breadcrumbs.html" %} {% include "breadcrumbs.html" %}
<div id="page-content"> <div id="page-content">

@ -1,205 +0,0 @@
{#
basic/layout.html
~~~~~~~~~~~~~~~~~
Master layout template for Sphinx themes.
:copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
#}
{%- block doctype -%}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
{%- endblock %}
{%- set reldelim1 = reldelim1 is not defined and ' &raquo;' or reldelim1 %}
{%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %}
{%- set render_sidebar = (not embedded) and (not theme_nosidebar|tobool) and
(sidebars != []) %}
{%- set url_root = pathto('', 1) %}
{# XXX necessary? #}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- macro relbar() %}
<div class="related">
<h3>{{ _('Navigation') }}</h3>
<ul>
{%- for rellink in rellinks %}
<li class="right" {% if loop.first %}style="margin-right: 10px"{% endif %}>
<a href="{{ pathto(rellink[0]) }}" title="{{ rellink[1]|striptags|e }}"
{{ accesskey(rellink[2]) }}>{{ rellink[3] }}</a>
{%- if not loop.first %}{{ reldelim2 }}{% endif %}</li>
{%- endfor %}
{%- block rootrellink %}
<li><a href="{{ pathto(master_doc) }}">{{ shorttitle|e }}</a>{{ reldelim1 }}</li>
{%- endblock %}
{%- for parent in parents %}
<li><a href="{{ parent.link|e }}" {% if loop.last %}{{ accesskey("U") }}{% endif %}>{{ parent.title }}</a>{{ reldelim1 }}</li>
{%- endfor %}
{%- block relbaritems %} {% endblock %}
</ul>
</div>
{%- endmacro %}
{%- macro sidebar() %}
{%- if render_sidebar %}
<div class="sphinxsidebar">
<div class="sphinxsidebarwrapper">
{%- block sidebarlogo %}
{%- if logo %}
<p class="logo"><a href="{{ pathto(master_doc) }}">
<img class="logo" src="{{ pathto('_static/' + logo, 1) }}" alt="Logo"/>
</a></p>
{%- endif %}
{%- endblock %}
{%- if sidebars != None %}
{#- new style sidebar: explicitly include/exclude templates #}
{%- for sidebartemplate in sidebars %}
{%- include sidebartemplate %}
{%- endfor %}
{%- else %}
{#- old style sidebars: using blocks -- should be deprecated #}
{%- block sidebartoc %}
{%- include "localtoc.html" %}
{%- endblock %}
{%- block sidebarrel %}
{%- include "relations.html" %}
{%- endblock %}
{%- block sidebarsourcelink %}
{%- include "sourcelink.html" %}
{%- endblock %}
{%- if customsidebar %}
{%- include customsidebar %}
{%- endif %}
{%- block sidebarsearch %}
{%- include "searchbox.html" %}
{%- endblock %}
{%- endif %}
</div>
</div>
{%- endif %}
{%- endmacro %}
{%- macro script() %}
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '{{ url_root }}',
VERSION: '{{ release|e }}',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '{{ '' if no_search_suffix else file_suffix }}',
HAS_SOURCE: {{ has_source|lower }}
};
</script>
{%- for scriptfile in script_files %}
<script type="text/javascript" src="{{ pathto(scriptfile, 1) }}"></script>
{%- endfor %}
{%- endmacro %}
{%- macro css() %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
<link rel="stylesheet" href="{{ pathto('_static/pygments.css', 1) }}" type="text/css" />
{%- for cssfile in css_files %}
<link rel="stylesheet" href="{{ pathto(cssfile, 1) }}" type="text/css" />
{%- endfor %}
{%- endmacro %}
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset={{ encoding }}" />
{{ metatags }}
{%- block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{%- endblock %}
{{ css() }}
{%- if not embedded %}
{{ script() }}
{%- if use_opensearch %}
<link rel="search" type="application/opensearchdescription+xml"
title="{% trans docstitle=docstitle|e %}Search within {{ docstitle }}{% endtrans %}"
href="{{ pathto('_static/opensearch.xml', 1) }}"/>
{%- endif %}
{%- if favicon %}
<link rel="shortcut icon" href="{{ pathto('_static/' + favicon, 1) }}"/>
{%- endif %}
{%- endif %}
{%- block linktags %}
{%- if hasdoc('about') %}
<link rel="author" title="{{ _('About these documents') }}" href="{{ pathto('about') }}" />
{%- endif %}
{%- if hasdoc('genindex') %}
<link rel="index" title="{{ _('Index') }}" href="{{ pathto('genindex') }}" />
{%- endif %}
{%- if hasdoc('search') %}
<link rel="search" title="{{ _('Search') }}" href="{{ pathto('search') }}" />
{%- endif %}
{%- if hasdoc('copyright') %}
<link rel="copyright" title="{{ _('Copyright') }}" href="{{ pathto('copyright') }}" />
{%- endif %}
<link rel="top" title="{{ docstitle|e }}" href="{{ pathto('index') }}" />
{%- if parents %}
<link rel="up" title="{{ parents[-1].title|striptags|e }}" href="{{ parents[-1].link|e }}" />
{%- endif %}
{%- if next %}
<link rel="next" title="{{ next.title|striptags|e }}" href="{{ next.link|e }}" />
{%- endif %}
{%- if prev %}
<link rel="prev" title="{{ prev.title|striptags|e }}" href="{{ prev.link|e }}" />
{%- endif %}
{%- endblock %}
{%- block extrahead %} {% endblock %}
</head>
<body>
{%- block header %}{% endblock %}
{%- block relbar1 %}{{ relbar() }}{% endblock %}
{%- block content %}
{%- block sidebar1 %} {# possible location for sidebar #} {% endblock %}
<div class="document">
{%- block document %}
<div class="documentwrapper">
{%- if render_sidebar %}
<div class="bodywrapper">
{%- endif %}
<div class="body">
{% block body %} {% endblock %}
</div>
{%- if render_sidebar %}
</div>
{%- endif %}
</div>
{%- endblock %}
{%- block sidebar2 %}{{ sidebar() }}{% endblock %}
<div class="clearer"></div>
</div>
{%- endblock %}
{%- block relbar2 %}{{ relbar() }}{% endblock %}
{%- block footer %}
<div class="footer">
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; <a href="{{ path }}">Copyright</a> {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; Copyright {{ copyright }}.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_sphinx %}
{% trans sphinx_version=sphinx_version|e %}Created using <a href="http://sphinx-doc.org/">Sphinx</a> {{ sphinx_version }}.{% endtrans %}
{%- endif %}
</div>
<p>asdf asdf asdf asdf 22</p>
{%- endblock %}
</body>
</html>

@ -1,61 +0,0 @@
<!-- <form class="wy-form" action="{{ pathto('search') }}" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form> -->
<script>
(function() {
var cx = '006019874985968165468:eu5pbnxp4po';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
<form id="search-form-id" action="">
<input type="text" name="query" id="search-box-id" />
<a class="search-reset-start" id="search-reset"><i class="fa fa-times"></i></a>
<a class="search-reset-start" id="search-start"><i class="fa fa-search"></i></a>
</form>
<script type="text/javascript" src="http://www.google.com/cse/brand?form=search-form-id&inputbox=search-box-id"></script>
<script>
function executeQuery() {
var input = document.getElementById('search-box-id');
var element = google.search.cse.element.getElement('searchresults-only0');
element.resultsUrl = '/htmlout/search.html'
if (input.value == '') {
element.clearAllResults();
$('#page-content, .rst-footer-buttons, #search-start').show();
$('#search-results, #search-reset').hide();
} else {
$('#page-content, .rst-footer-buttons, #search-start').hide();
$('#search-results, #search-reset').show();
element.execute(input.value);
}
return false;
}
$('#search-reset').hide();
$('#search-box-id').css('background-position', '1em center');
$('#search-box-id').on('blur', function() {
$('#search-box-id').css('background-position', '1em center');
});
$('#search-start').click(function(e) { executeQuery(); });
$('#search-reset').click(function(e) { $('#search-box-id').val(''); executeQuery(); });
$('#search-form-id').submit(function(e) {
console.log('submitting!');
executeQuery();
e.preventDefault();
});
</script>

@ -4723,33 +4723,16 @@ span[id*='MathJax-Span'] {
padding: 0.4045em 1.618em; padding: 0.4045em 1.618em;
} }
.DocSiteBanner { .DocSiteBanner {
width: 100%;
display: flex; display: flex;
display: -webkit-flex; display: -webkit-flex;
justify-content: center;
-webkit-justify-content: center;
flex-wrap: wrap; flex-wrap: wrap;
-webkit-flex-wrap: wrap; -webkit-flex-wrap: wrap;
justify-content: space-between;
-webkit-justify-content: space-between;
background-color: #ff5850;
margin-bottom: 25px; margin-bottom: 25px;
} }
.DocSiteBanner-imgWrapper { .DocSiteBanner-imgWrapper {
max-width: 100%; max-width: 100%;
} }
@media screen and (max-width: 1403px) {
.DocSiteBanner {
width: 100%;
display: flex;
display: -webkit-flex;
flex-wrap: wrap;
-webkit-flex-wrap: wrap;
justify-content: center;
-webkit-justify-content: center;
background-color: #fff;
margin-bottom: 25px;
}
}

@ -15,6 +15,7 @@
# #
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>. # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import print_function
__docformat__ = 'restructuredtext' __docformat__ = 'restructuredtext'
@ -24,9 +25,9 @@ import traceback
try: try:
from sphinx.application import Sphinx from sphinx.application import Sphinx
except ImportError: except ImportError:
print "#################################" print("#################################")
print "Dependency missing: Python Sphinx" print("Dependency missing: Python Sphinx")
print "#################################" print("#################################")
sys.exit(1) sys.exit(1)
import os import os
@ -40,7 +41,7 @@ class SphinxBuilder(object):
""" """
Run the DocCommand. Run the DocCommand.
""" """
print "Creating html documentation ..." print("Creating html documentation ...")
try: try:
buildername = 'html' buildername = 'html'
@ -69,10 +70,10 @@ class SphinxBuilder(object):
app.builder.build_all() app.builder.build_all()
except ImportError, ie: except ImportError:
traceback.print_exc() traceback.print_exc()
except Exception, ex: except Exception as ex:
print >> sys.stderr, "FAIL! exiting ... (%s)" % ex print("FAIL! exiting ... (%s)" % ex, file=sys.stderr)
def build_docs(self): def build_docs(self):
self.app.builder.build_all() self.app.builder.build_all()
@ -83,9 +84,9 @@ def build_rst_docs():
if __name__ == '__main__': if __name__ == '__main__':
if '-h' in sys.argv or '--help' in sys.argv: if '-h' in sys.argv or '--help' in sys.argv:
print "This script builds the html documentation from rst/asciidoc sources.\n" print("This script builds the html documentation from rst/asciidoc sources.\n")
print " Run 'make docs' to build everything." print(" Run 'make docs' to build everything.")
print " Run 'make viewdocs' to build and then preview in a web browser." print(" Run 'make viewdocs' to build and then preview in a web browser.")
sys.exit(0) sys.exit(0)
build_rst_docs() build_rst_docs()
@ -93,4 +94,4 @@ if __name__ == '__main__':
if "view" in sys.argv: if "view" in sys.argv:
import webbrowser import webbrowser
if not webbrowser.open('htmlout/index.html'): if not webbrowser.open('htmlout/index.html'):
print >> sys.stderr, "Could not open on your webbrowser." print("Could not open on your webbrowser.", file=sys.stderr)

@ -100,7 +100,7 @@ exclude_patterns = ['modules']
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = 'sphinx'
highlight_language = 'YAML' highlight_language = 'yaml'
# Options for HTML output # Options for HTML output

@ -37,17 +37,34 @@ All members of a list are lines beginning at the same indentation level starting
A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space):: A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space)::
# An employee record # An employee record
- martin: martin:
name: Martin D'vloper name: Martin D'vloper
job: Developer job: Developer
skill: Elite skill: Elite
More complicated data structures are possible, such as lists of dictionaries, dictionaries whose values are lists or a mix of both::
# Employee records
- martin:
name: Martin D'vloper
job: Developer
skills:
- python
- perl
- pascal
- tabitha:
name: Tabitha Bitumen
job: Developer
skills:
- lisp
- fortran
- erlang
Dictionaries and lists can also be represented in an abbreviated form if you really want to:: Dictionaries and lists can also be represented in an abbreviated form if you really want to::
--- ---
employees: martin: {name: Martin D'vloper, job: Developer, skill: Elite}
- martin: {name: Martin D'vloper, job: Developer, skill: Elite} fruits: ['Apple', 'Orange', 'Strawberry', 'Mango']
fruits: ['Apple', 'Orange', 'Strawberry', 'Mango']
.. _truthiness: .. _truthiness:
@ -59,6 +76,20 @@ Ansible doesn't really use these too much, but you can also specify a boolean va
likes_emacs: TRUE likes_emacs: TRUE
uses_cvs: false uses_cvs: false
Values can span multiple lines using ``|`` or ``>``. Spanning multiple lines using a ``|`` will include the newlines. Using a ``>`` will ignore newlines; it's used to make what would otherwise be a very long line easier to read and edit.
In either case the indentation will be ignored.
Examples are::
include_newlines: |
exactly as you see
will appear these three
lines of poetry
ignore_newlines: >
this is really a
single line of text
despite appearances
Let's combine what we learned so far in an arbitrary YAML example. Let's combine what we learned so far in an arbitrary YAML example.
This really has nothing to do with Ansible, but will give you a feel for the format:: This really has nothing to do with Ansible, but will give you a feel for the format::
@ -75,9 +106,13 @@ This really has nothing to do with Ansible, but will give you a feel for the for
- Strawberry - Strawberry
- Mango - Mango
languages: languages:
ruby: Elite perl: Elite
python: Elite python: Elite
dotnet: Lame pascal: Lame
education: |
4 GCSEs
3 A-Levels
BSc in the Internet of Things
That's all you really need to know about YAML to start writing `Ansible` playbooks. That's all you really need to know about YAML to start writing `Ansible` playbooks.
@ -116,6 +151,8 @@ In these cases just use quotes::
YAML Lint (online) helps you debug YAML syntax if you are having problems YAML Lint (online) helps you debug YAML syntax if you are having problems
`Github examples directory <https://github.com/ansible/ansible-examples>`_ `Github examples directory <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the github project source Complete playbook files from the github project source
`Wikipedia YAML syntax reference <https://en.wikipedia.org/wiki/YAML>`_
A good guide to YAML syntax
`Mailing List <http://groups.google.com/group/ansible-project>`_ `Mailing List <http://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups Questions? Help? Ideas? Stop by the list on Google Groups
`irc.freenode.net <http://irc.freenode.net>`_ `irc.freenode.net <http://irc.freenode.net>`_

@ -1,5 +1,5 @@
Ansible Privilege Escalation Become (Privilege Escalation)
++++++++++++++++++++++++++++ +++++++++++++++++++++++++++++
Ansible can use existing privilege escalation systems to allow a user to execute tasks as another. Ansible can use existing privilege escalation systems to allow a user to execute tasks as another.
@ -7,49 +7,55 @@ Ansible can use existing privilege escalation systems to allow a user to execute
Become Become
`````` ``````
Before 1.9 Ansible mostly allowed the use of sudo and a limited use of su to allow a login/remote user to become a different user Ansible allows you to 'become' another user, different from the user that logged into the machine (remote user). This is done using existing
and execute tasks, create resources with the 2nd user's permissions. As of 1.9 'become' supersedes the old sudo/su, while still privilege escalation tools, which you probably already use or have configured, like 'sudo', 'su', 'pfexec', 'doas', 'pbrun' and others.
being backwards compatible. This new system also makes it easier to add other privilege escalation tools like pbrun (Powerbroker),
pfexec and others.
New directives .. note:: Before 1.9 Ansible mostly allowed the use of `sudo` and a limited use of `su` to allow a login/remote user to become a different user
-------------- and execute tasks, create resources with the 2nd user's permissions. As of 1.9 `become` supersedes the old sudo/su, while still being backwards compatible.
This new system also makes it easier to add other privilege escalation tools like `pbrun` (Powerbroker), `pfexec` and others.
.. note:: Setting any var or directive has no implications for the values of the other related directives, i.e. setting become_user does not set become.
Directives
-----------
These can be set from play to task level, but are overridden by connection variables as they can be host specific.
become become
equivalent to adding 'sudo:' or 'su:' to a play or task, set to 'true'/'yes' to activate privilege escalation set to 'true'/'yes' to activate privilege escalation.
become_user become_user
equivalent to adding 'sudo_user:' or 'su_user:' to a play or task, set to user with desired privileges set to user with desired privileges, the user you 'become', NOT the user you login as. Does NOT imply `become: yes`, to allow it to be set at host level.
become_method become_method
at play or task level overrides the default method set in ansible.cfg, set to 'sudo'/'su'/'pbrun'/'pfexec'/'doas' at play or task level overrides the default method set in ansible.cfg, set to 'sudo'/'su'/'pbrun'/'pfexec'/'doas'
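As a hedged illustration (the service and user names below are made up), these directives could be combined in a play like this::

    - hosts: webservers
      become: yes               # escalate for every task in this play
      become_user: root
      tasks:
        - name: restart a service as root
          service: name=nginx state=restarted

        - name: run one command as another unprivileged user instead
          command: whoami
          become: yes
          become_method: su     # override the default set in ansible.cfg
          become_user: postgres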
New ansible\_ variables Connection variables
----------------------- --------------------
Each allows you to set an option per group and/or host Each allows you to set an option per group and/or host; these are normally defined in inventory but can be used as normal variables.
ansible_become ansible_become
equivalent to ansible_sudo or ansible_su, allows to force privilege escalation equivalent of the become directive, decides if privilege escalation is used or not.
ansible_become_method ansible_become_method
allows to set privilege escalation method allows to set privilege escalation method
ansible_become_user ansible_become_user
equivalent to ansible_sudo_user or ansible_su_user, allows to set the user you become through privilege escalation allows to set the user you become through privilege escalation, does not imply `ansible_become: True`
ansible_become_pass ansible_become_pass
equivalent to ansible_sudo_pass or ansible_su_pass, allows you to set the privilege escalation password allows you to set the privilege escalation password
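A minimal sketch of the same settings as inventory-level connection variables, e.g. in a host_vars file (the host name and values are hypothetical)::

    # host_vars/db1.example.com.yml
    ansible_become: true
    ansible_become_method: sudo
    ansible_become_user: postgres
    # ansible_become_pass is better supplied via --ask-become-pass or a vault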
New command line options New command line options
------------------------ ------------------------
--ask-become-pass --ask-become-pass, -K
ask for privilege escalation password ask for privilege escalation password, does not imply become will be used
--become,-b --become, -b
run operations with become (no password implied) run operations with become (no password implied)
--become-method=BECOME_METHOD --become-method=BECOME_METHOD
@ -57,26 +63,115 @@ New command line options
valid choices: [ sudo | su | pbrun | pfexec | doas ] valid choices: [ sudo | su | pbrun | pfexec | doas ]
--become-user=BECOME_USER --become-user=BECOME_USER
run operations as this user (default=root) run operations as this user (default=root), does not imply --become/-b
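For example, a hedged command-line sketch combining these options (the playbook name is a placeholder)::

    $ ansible-playbook site.yml --become --become-method=sudo --become-user=root --ask-become-pass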
sudo and su still work!
-----------------------
Old playbooks will not need to be changed, even though they are deprecated, sudo and su directives will continue to work though it
is recommended to move to become as they may be retired at one point. You cannot mix directives on the same object though, Ansible
will complain if you try to.
Become will default to using the old sudo/su configs and variables if they exist, but will override them if you specify any of the
new ones.
.. note:: Privilege escalation methods must also be supported by the connection plugin used, most will warn if they do not, some will just ignore it as they always run as root (jail, chroot, etc).
.. note:: Methods cannot be chained, you cannot use 'sudo /bin/su -' to become a user, you need to have privileges to run the command as that user in sudo or be able to su directly to it (the same for pbrun, pfexec or other supported methods). For those from pre-1.9, sudo and su still work!
------------------------------------------------
For those using old playbooks: they will not need to be changed; even though the sudo and su directives, variables and options
are deprecated, they will continue to work. It is recommended to move to become, as they may be retired at some point.
You cannot mix directives on the same object (become and sudo) though, Ansible will complain if you try to.
Become will default to using the old sudo/su configs and variables if they exist, but will override them if you specify any of the new ones.
Limitations
-----------
Although privilege escalation is mostly intuitive, there are a few limitations
on how it works. Users should be aware of these to avoid surprises.
Becoming an Unprivileged User
=============================
Ansible 2.0.x and below has a limitation with regards to becoming an
unprivileged user that can be a security risk if users are not aware of it.
Ansible modules are executed on the remote machine by first substituting the
parameters into the module file, then copying the file to the remote machine,
and finally executing it there.
Everything is fine if the module file is executed without using ``become``,
when the ``become_user`` is root, or when the connection to the remote machine
is made as root. In these cases the module file is created with permissions
that only allow reading by the user and root.
The problem occurs when the ``become_user`` is an unprivileged user. Ansible
2.0.x and below make the module file world readable in this case as the module
file is written as the user that Ansible connects as but the file needs to
be readable by the user Ansible is set to ``become``.
.. note:: In Ansible 2.1, this window is further narrowed: If the connection
is made as a privileged user (root) then Ansible 2.1 and above will use
chown to set the file's owner to the unprivileged user being switched to.
This means both the user making the connection and the user being switched
to via ``become`` must be unprivileged in order to trigger this problem.
If any of the parameters passed to the module are sensitive in nature then
those pieces of data are located in a world readable module file for the
duration of the Ansible module execution. Once the module is done executing
Ansible will delete the temporary file. If you trust the client machines then
there's no problem here. If you do not trust the client machines then this is
a potential danger.
Ways to resolve this include:
* Use :ref:`pipelining`. When pipelining is enabled, Ansible doesn't save the
module to a temporary file on the client. Instead it pipes the module to
the remote python interpreter's stdin. Pipelining does not work for
non-python modules.
* (Available in Ansible 2.1) Install filesystem acl support on the managed
host. If the temporary directory on the remote host is mounted with
filesystem acls enabled and the :command:`setfacl` tool is in the remote
``PATH`` then Ansible will use filesystem acls to share the module file with
the second unprivileged user instead of having to make the file readable by
everyone.
* Don't perform an action on the remote machine by becoming an unprivileged
user. Temporary files are protected by UNIX file permissions when you
``become`` root or do not use ``become``. In Ansible 2.1 and above, UNIX
file permissions are also secure if you make the connection to the managed
machine as root and then use ``become`` to an unprivileged account.
.. versionchanged:: 2.1
In addition to the additional means of doing this securely, Ansible 2.1 also
makes it harder to unknowingly do this insecurely. Whereas in Ansible 2.0.x
and below, Ansible will silently allow the insecure behaviour if it was unable
to find another way to share the files with the unprivileged user, in Ansible
2.1 and above Ansible defaults to issuing an error if it can't do this
securely. If you can't make any of the changes above to resolve the problem
and you decide that the machine you're running on is secure enough for the
modules you want to run there to be world readable you can turn on
``allow_world_readable_tmpfiles`` in the :file:`ansible.cfg` file. Setting
``allow_world_readable_tmpfiles`` will change this from an error into
a warning and allow the task to run as it did prior to 2.1.
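A minimal ansible.cfg sketch of the two settings mentioned above; treat it as illustrative, not as a recommendation to enable the world-readable fallback::

    [ssh_connection]
    pipelining = True    # stream modules over stdin instead of temp files

    [defaults]
    # last resort: turns the 2.1 error back into a warning (see caveats above)
    allow_world_readable_tmpfiles = True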
Connection Plugin Support
=========================
Privilege escalation methods must also be supported by the connection plugin
used. Most connection plugins will warn if they do not support become. Some
will just ignore it as they always run as root (jail, chroot, etc).
Only one method may be enabled per host
=======================================
Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user,
you need to have privileges to run the command as that user in sudo or be able
to su directly to it (the same for pbrun, pfexec or other supported methods).
Can't limit escalation to certain commands
==========================================
Privilege escalation permissions have to be general. Ansible does not always
use a specific command to do something but runs modules (code) from
a temporary file name which changes every time. If you have '/sbin/service'
or '/bin/chmod' as the allowed commands this will fail with ansible as those
paths won't match with the temporary file that ansible creates to run the
module.
.. note:: Privilege escalation permissions have to be general, Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. So if you have '/sbin/service' or '/bin/chmod' as the allowed commands this will fail with ansible.
.. seealso:: .. seealso::

@ -0,0 +1,82 @@
Committers Guidelines (for people with commit rights to Ansible on GitHub)
``````````````````````````````````````````````````````````````````````````
These are the guidelines for people with commit access to Ansible. Committers are essentially acting as members of the Ansible Core team, although not necessarily as an employee of Ansible and Red Hat. Please read the guidelines before you commit.
These guidelines apply to everyone. At the same time, this ISN'T a process document. So just use good judgement. You've been given commit access because we trust your judgement.
That said, use the trust wisely.
If you abuse the trust and break components and builds, etc., the trust level falls and you may be asked not to commit or you may lose access to do so.
Features, High Level Design, and Roadmap
========================================
As a core team member, you are an integral part of the team that develops the roadmap. Please be engaged, and push for the features and fixes that you want to see. Also keep in mind that Red Hat, as a company, will commit to certain features, fixes, APIs, etc. for various releases. Red Hat, the company, and the Ansible team must get these committed features (etc.) completed and released as scheduled. Obligations to users, the community, and customers must come first. Because of these commitments, a feature you want to develop yourself may not get into a release if it impacts a lot of other parts within Ansible.
Any other new features and changes to high level design should go through the proposal process (TBD), to ensure the community and core team have had a chance to review the idea and approve it. The core team has sole responsibility for merging new features based on proposals.
Our Workflow on GitHub
======================
As a committer, you may already know this, but our workflow forms a lot of our team policies. Please ensure you're aware of the following workflow steps; a rough command-line sketch follows the list:
* Fork the repository upon which you want to do some work to your own personal repository
* Work on the specific branch upon which you need to commit
* Create a Pull Request back to the Ansible repository and tag the people you would like to review; assign someone as the primary “owner” of your request
* Adjust code as necessary based on the comments provided
* Ask someone on the Core Team to do a final review and merge
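As a rough command-line sketch of those steps (the fork and branch names are placeholders)::

    $ git clone git@github.com:<your-github-id>/ansible.git
    $ cd ansible
    $ git checkout -b my-feature-branch
    # ... commit your work ...
    $ git push origin my-feature-branch
    # then open a Pull Request against ansible/ansible and tag your reviewers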
Addendum to workflow for Committers:
------------------------------------
The Core Team is aware that this can be a difficult process at times. Sometimes, the team breaks the rules: Direct commits, merging their own PRs. This section is a set of guidelines. If you're changing a comma in a doc, or making a very minor change, you can use your best judgement. This is another trust thing. The process is critical for any major change, but for little things or getting something done quickly, use your best judgement and make sure people on the team are aware of your work.
Roles on Core
=============
* Core Committers: Fine to do PRs for most things, but we should have a timebox. Hanging PRs may merge on the judgement of these devs.
* Module Owners: Module Owners own specific modules and have indirect commit access via the current module PR mechanisms.
General Rules
=============
Individuals with direct commit access to ansible/ansible (+core, + extras) are entrusted with powers that allow them to do a broad variety of things--probably more than we can write down. Rather than rules, treat these as general *guidelines*; individuals with this power are expected to use their best judgement.
* Don't commit directly.
* Don't omit tests. PRs with tests are looked at with more priority than PRs without tests that should have them included. While not all changes require tests, be sure to add them for bug fixes or functionality changes.
* Don't forget the docs! If your PR is a new feature or a change to behavior, make sure you've updated all associated documentation or have notified the right people to do so. It also helps to add the version of Core against which this documentation is compatible (to avoid confusion with stable versus devel docs, for backwards compatibility, etc.).
* Don't merge your own PRs. Someone else should have a chance to review and approve the PR merge. If you are a Core Committer, you have a small amount of leeway here for very minor changes.
* Consider backwards compatibility (don't break existing playbooks).
* Don't forget about alternate environments. Consider the alternatives--yes, people have bad environments, but they are the ones who need us the most.
* Don't drag your community team members down. Always discuss the technical merits, but you should never address the person's limitations (you can later go for beers and call them idiots, but not in IRC/Github/etc.).
* Don't forget about the maintenance burden. Some things are really cool to have, but they might not be worth shoehorning in if the maintenance burden is too great.
* Don't break playbooks. Always keep backwards compatibility in mind.
* Don't forget to keep it simple. Complexity breeds all kinds of problems.
* Don't forget to be active. Committers who have no activity on the project (through merges, triage, commits, etc.) will have their permissions suspended.
Committers are expected to continue to follow the same community and contribution guidelines followed by the rest of the Ansible community.
People
======
Individuals who've been asked to become a part of this group have generally been contributing in significant ways to the Ansible community for some time. Should they agree, they are requested to add their names and GitHub IDs to this file, in the section below, via a pull request. Doing so indicates that these individuals agree to act in the ways that their fellow committers trust that they will act.
* James Cammarata
* Brian Coca
* Matt Davis
* Toshio Kuratomi
* Jason McKerr
* Robyn Bergeron
* Greg DeKoenigsberg
* Monty Taylor
* Matt Martz
* Nate Case
* James Tanner
* Peter Sprygata
* Abhijit Menon-Sen
* Michael Scherer
* René Moser
* David Shrewsbury
* Sandra Wills
* Graham Mainwaring
* Jon Davila

@ -227,6 +227,12 @@ which is our official conference series.
To subscribe to a group from a non-google account, you can send an email to the subscription address requesting the subscription. For example: ansible-devel+subscribe@googlegroups.com To subscribe to a group from a non-google account, you can send an email to the subscription address requesting the subscription. For example: ansible-devel+subscribe@googlegroups.com
IRC Meetings
------------
The Ansible community holds regular IRC meetings on various topics, and anyone who is interested is invited to
participate. For more information about Ansible meetings, consult the [Ansible community meeting page](https://github.com/ansible/community/blob/master/MEETINGS.md).
Release Numbering Release Numbering
----------------- -----------------
@ -251,7 +257,12 @@ channel or the general project mailing list.
IRC Channel IRC Channel
----------- -----------
Ansible has an IRC channel #ansible on irc.freenode.net. Ansible has several IRC channels on Freenode (irc.freenode.net):
- #ansible - For general use questions and support.
- #ansible-devel - For discussions on developer topics and code related to features/bugs.
- #ansible-meeting - For public community meetings. We will generally announce these on one or more of the above mailing lists.
- #ansible-notices - Mostly bot output from things like Github, etc.
Notes on Priority Flags Notes on Priority Flags
----------------------- -----------------------

@ -11,6 +11,7 @@ Learn how to build modules of your own in any language, and also how to extend A
developing_modules developing_modules
developing_plugins developing_plugins
developing_test_pr developing_test_pr
developing_releases
Developers will also likely be interested in the fully-discoverable in :doc:`tower`. It's great for embedding Ansible in all manner of applications. Developers will also likely be interested in the fully-discoverable in :doc:`tower`. It's great for embedding Ansible in all manner of applications.

@ -3,10 +3,17 @@ Python API
.. contents:: Topics .. contents:: Topics
Please note that while we make this API available, it is not intended for direct consumption; it is here
to support the Ansible command line tools. We try not to make breaking changes, but we reserve the
right to do so at any time if it makes sense for the Ansible toolset.
The following documentation is provided for those that still want to use the API directly, but be mindful this is not something the Ansible team supports.
There are several interesting ways to use Ansible from an API perspective. You can use There are several interesting ways to use Ansible from an API perspective. You can use
the Ansible python API to control nodes, you can extend Ansible to respond to various python events, you can the Ansible python API to control nodes, you can extend Ansible to respond to various python events, you can
write various plugins, and you can plug in inventory data from external data sources. This document write various plugins, and you can plug in inventory data from external data sources. This document
covers the Runner and Playbook API at a basic level. covers the execution and Playbook API at a basic level.
If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously, If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously,
or have access control and logging demands, take a look at :doc:`tower` or have access control and logging demands, take a look at :doc:`tower`
@ -17,8 +24,10 @@ This chapter discusses the Python API.
.. _python_api: .. _python_api:
The Python API is very powerful, and is how the ansible CLI and ansible-playbook The Python API is very powerful, and is how all the ansible CLI tools are implemented.
are implemented. In version 2.0 the core ansible got rewritten and the API was mostly rewritten. In version 2.0 the core ansible got rewritten and the API was mostly rewritten.
.. note:: Ansible relies on forking processes, as such the API is not thread safe.
.. _python_api_20: .. _python_api_20:
@ -37,11 +46,11 @@ In 2.0 things get a bit more complicated to start, but you end up with much more
from ansible.playbook.play import Play from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager from ansible.executor.task_queue_manager import TaskQueueManager
Options = namedtuple('Options', ['connection','module_path', 'forks', 'remote_user', 'private_key_file', 'ssh_common_args', 'ssh_extra_args', 'sftp_extra_args', 'scp_extra_args', 'become', 'become_method', 'become_user', 'verbosity', 'check']) Options = namedtuple('Options', ['connection', 'module_path', 'forks', 'become', 'become_method', 'become_user', 'check'])
# initialize needed objects # initialize needed objects
variable_manager = VariableManager() variable_manager = VariableManager()
loader = DataLoader() loader = DataLoader()
options = Options(connection='local', module_path='/path/to/mymodules', forks=100, remote_user=None, private_key_file=None, ssh_common_args=None, ssh_extra_args=None, sftp_extra_args=None, scp_extra_args=None, become=None, become_method=None, become_user=None, verbosity=None, check=False) options = Options(connection='local', module_path='/path/to/mymodules', forks=100, become=None, become_method=None, become_user=None, check=False)
passwords = dict(vault_pass='secret') passwords = dict(vault_pass='secret')
# create inventory and pass to var manager # create inventory and pass to var manager
@ -53,7 +62,10 @@ In 2.0 things get a bit more complicated to start, but you end up with much more
name = "Ansible Play", name = "Ansible Play",
hosts = 'localhost', hosts = 'localhost',
gather_facts = 'no', gather_facts = 'no',
tasks = [ dict(action=dict(module='debug', args=(msg='Hello Galaxy!'))) ] tasks = [
dict(action=dict(module='shell', args='ls'), register='shell_out'),
dict(action=dict(module='debug', args=dict(msg='{{shell_out.stdout}}')))
]
) )
play = Play().load(play_source, variable_manager=variable_manager, loader=loader) play = Play().load(play_source, variable_manager=variable_manager, loader=loader)

@ -71,6 +71,9 @@ There's a useful test script in the source checkout for ansible::
source ansible/hacking/env-setup source ansible/hacking/env-setup
chmod +x ansible/hacking/test-module chmod +x ansible/hacking/test-module
For instructions on setting up ansible from source, please see
:doc:`intro_installation`.
Let's run the script you just wrote with that:: Let's run the script you just wrote with that::
ansible/hacking/test-module -m ./timetest.py ansible/hacking/test-module -m ./timetest.py
@ -191,7 +194,7 @@ a lot shorter than this::
Let's test that module:: Let's test that module::
ansible/hacking/test-module -m ./time -a "time=\"March 14 12:23\"" ansible/hacking/test-module -m ./timetest.py -a "time=\"March 14 12:23\""
This should return something like:: This should return something like::
@ -247,7 +250,7 @@ And instantiating the module class like::
argument_spec = dict( argument_spec = dict(
state = dict(default='present', choices=['present', 'absent']), state = dict(default='present', choices=['present', 'absent']),
name = dict(required=True), name = dict(required=True),
enabled = dict(required=True, choices=BOOLEANS), enabled = dict(required=True, type='bool'),
something = dict(aliases=['whatever']) something = dict(aliases=['whatever'])
) )
) )
@ -264,7 +267,7 @@ And failures are just as simple (where 'msg' is a required parameter to explain
module.fail_json(msg="Something fatal happened") module.fail_json(msg="Something fatal happened")
There are also other useful functions in the module class, such as module.sha1(path). See There are also other useful functions in the module class, such as module.sha1(path). See
lib/ansible/module_common.py in the source checkout for implementation details. lib/ansible/module_utils/basic.py in the source checkout for implementation details.
Again, modules developed this way are best tested with the hacking/test-module script in the git Again, modules developed this way are best tested with the hacking/test-module script in the git
source checkout. Because of the magic involved, this is really the only way the scripts source checkout. Because of the magic involved, this is really the only way the scripts
@ -335,7 +338,7 @@ and guidelines:
* If you have a company module that returns facts specific to your installations, a good name for this module is `site_facts`. * If you have a company module that returns facts specific to your installations, a good name for this module is `site_facts`.
* Modules accepting boolean status should generally accept 'yes', 'no', 'true', 'false', or anything else a user may likely throw at them. The AnsibleModule common code supports this with "choices=BOOLEANS" and a module.boolean(value) casting function. * Modules accepting boolean status should generally accept 'yes', 'no', 'true', 'false', or anything else a user may likely throw at them. The AnsibleModule common code supports this with "type='bool'".
* Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the module file, and have the module raise JSON error messages when the import fails. * Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the module file, and have the module raise JSON error messages when the import fails.
@ -347,7 +350,7 @@ and guidelines:
* In the event of failure, a key of 'failed' should be included, along with a string explanation in 'msg'. Modules that raise tracebacks (stacktraces) are generally considered 'poor' modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the 'failed' element will be included for you automatically when you call 'fail_json'.
* Return codes from modules are actually not significant, but continue on with 0=success and non-zero=failure for reasons of future proofing.
* As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form.
@ -479,9 +482,10 @@ Module checklist
````````````````
* The shebang should always be #!/usr/bin/python, this allows ansible_python_interpreter to work
* Modules must be written to support Python 2.4. If this is not possible, the required minimum Python version and the rationale should be explained in the requirements section in DOCUMENTATION.
* Documentation: Make sure it exists
* `required` should always be present, be it true or false
* If `required` is false you need to document `default`, even if the default is 'null' (which is the default if no parameter is supplied). Make sure the default parameter in docs matches the default parameter in code (see the sketch after this list).
* `default` is not needed for `required: true`
* Remove unnecessary doc like `aliases: []` or `choices: []`
* The version is not a float number and should be set to the current development version
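As a hedged sketch of what these checklist items look like in practice, here is a hypothetical DOCUMENTATION block (the module and option names are made up)::

    DOCUMENTATION = '''
    ---
    module: site_facts
    short_description: Returns facts specific to our installations
    version_added: "2.1"
    options:
      name:
        description:
          - Name of the thing to manage.
        required: true
      state:
        description:
          - Desired state of the thing.
        required: false
        default: present
        choices: ['present', 'absent']
    '''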
@ -538,24 +542,34 @@ Windows modules checklist
#!powershell
then::
<GPL header>
then::
# WANT_JSON
# POWERSHELL_COMMON
then, to parse all arguments into a variable modules generally use::
$params = Parse-Args $args
* Arguments:
* Try to use state present and state absent like other modules
* You need to check that all your mandatory args are present. You can do this using the builtin Get-AnsibleParam function.
* Required arguments::
$package = Get-AnsibleParam -obj $params -name name -failifempty $true
* Required arguments with name validation::
$state = Get-AnsibleParam -obj $params -name "State" -ValidateSet "Present","Absent" -resultobj $resultobj -failifempty $true
* Optional arguments with name validation::
$state = Get-AnsibleParam -obj $params -name "State" -default "Present" -ValidateSet "Present","Absent"
* If "FailIfEmpty" is true, the resultobj parameter is used to specify the object returned to fail-json. You can also override the default message using $emptyattributefailmessage (for missing required attributes) and $ValidateSetErrorMessage (for attribute validation errors)
* Look at existing modules for more examples of argument checking.
@ -586,7 +600,7 @@ Starting in 1.8 you can deprecate modules by renaming them with a preceding _, i
_old_cloud.py. This will keep the module available but hide it from the primary docs and listing.
You can also rename modules and keep an alias to the old name by using a symlink that starts with _.
This example allows the stat module to be called with fileinfo, making the following examples equivalent::
EXAMPLES = '''
ln -s stat.py _fileinfo.py

@ -112,6 +112,7 @@ to /usr/share/ansible/plugins, in a subfolder for each plugin type::
* connection_plugins
* filter_plugins
* vars_plugins
* strategy_plugins
To change this path, edit the ansible configuration file.

@ -0,0 +1,48 @@
Releases
========
.. contents:: Topics
:local:
.. _schedule:
Release Schedule
````````````````
Ansible is on a 'flexible' 4-month release schedule; sometimes this can be extended if there is a major change that requires a longer cycle (e.g. the 2.0 core rewrite).
Currently modules get released at the same time as the main Ansible repo, even though they are separated into ansible-modules-core and ansible-modules-extras.
The major features and bugs fixed in a release should be reflected in the CHANGELOG.md; minor ones will be in the commit history (FIXME: add git example to list).
When a fix/feature gets added to the `devel` branch it will be part of the next release; some bugfixes can be backported to previous releases and might be part of a minor point release if it is deemed necessary.
Sometimes an RC can be extended by a few days if a bugfix makes a change that can have far-reaching consequences, so users have enough time to find any new issues that may stem from this.
.. _methods:
Release methods
````````````````
Ansible normally goes through a 'release candidate' process, issuing an RC1 for a release; if no major bugs are discovered in it after 5 business days we'll get a final release.
Otherwise fixes will be applied and an RC2 will be provided for testing; if no bugs appear after 2 days, the final release will be made. This last step is iterated, incrementing the candidate number, as major bugs are found.
.. _freezing:
Release feature freeze
``````````````````````
During the release candidate process, the focus will be on bugfixes that affect the RC; new features will be delayed while we try to produce a final version. Some bugfixes that are minor or don't affect the RC will also be postponed until after the release is finalized.
.. seealso::
:doc:`developing_api`
Python API to Playbooks and Ad Hoc Task Execution
:doc:`developing_modules`
How to develop modules
:doc:`developing_plugins`
How to develop plugins
`Ansible Tower <http://ansible.com/ansible-tower>`_
REST API endpoint and GUI for Ansible, syncs with dynamic inventory
`Development Mailing List <http://groups.google.com/group/ansible-devel>`_
Mailing list for development topics
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel

@ -81,27 +81,34 @@ and destination repositories. It will look something like this::
Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name
.. note::
It is important that the PR request target be ansible:devel, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by ansible staff.
The username and branch at the end are the important parts, which will be turned into git commands as follows::
git checkout -b testing_PRXXXX devel
git pull https://github.com/someuser/ansible.git feature_branch_name
The first command creates and switches to a new branch named testing_PRXXXX, where the XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the devel branch. The second command pulls the new code from the user's feature branch into the newly created branch.
.. note::
If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.
.. note::
Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of `devel`. If the source looks like `someuser:devel`, make sure there is only one commit listed on the pull request.

Finding a Pull Request for Ansible Modules
++++++++++++++++++++++++++++++++++++++++++
Ansible modules are in separate repositories, which are managed as Git submodules. Here's a step-by-step process for checking out a PR for an Ansible extras module, for instance:
1. git clone https://github.com/ansible/ansible.git
2. cd ansible
3. git submodule init
4. git submodule update --recursive [ fetches the submodules ]
5. cd lib/ansible/modules/extras
6. git fetch origin pull/1234/head:pr/1234 [ fetches the specific PR ]
7. git checkout pr/1234 [ do your testing here ]
8. cd /path/to/ansible/clone
9. git submodule update --recursive
For Those About To Test, We Salute You
++++++++++++++++++++++++++++++++++++++
@ -145,7 +152,7 @@ Once the files are in place, you can run the provided playbook (if there is one)
ansible-playbook -vvv playbook_name.yml
If there's no playbook, you may have to copy and paste playbook snippets or run an ad-hoc command that was pasted in.
Our issue template also includes sections for "Expected Output" and "Actual Output", which should be used to gauge the output from the provided examples.

@ -15,6 +15,7 @@ Setting environment variables can be done with the `environment` keyword. It can
PATH: "{{ ansible_env.PATH }}:/thingy/bin"
SOME: value
.. note:: Starting in 2.0.1 the setup task from gather_facts also inherits the environment directive from the play; you might need to use the `|default` filter to avoid errors if setting this at play level (see the sketch below).
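A minimal sketch of that pattern, assuming a play-level variable named ``proxy_env`` (the variable name and proxy URL are illustrative)::

    - hosts: all
      vars:
        proxy_env:
          http_proxy: http://proxy.example.com:8080
      environment: "{{ proxy_env | default({}) }}"
      tasks:
        - shell: echo $http_proxy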
How do I handle different machines needing different user accounts or ports to log in with? How do I handle different machines needing different user accounts or ports to log in with?
@ -38,7 +39,7 @@ You can also dictate the connection type to be used, if you want::
foo.example.com
bar.example.com
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables.
.. _use_ssh:
@ -174,7 +175,9 @@ How do I loop over a list of hosts in a group, inside of a template?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the "$groups" dictionary in your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ host }}
@ -184,7 +187,7 @@ If you need to access facts about these hosts, for instance, the IP address of e
- hosts: db_servers
  tasks:
    - debug: msg="doesn't matter what you do, just that they were talked to previously."

Then you can use the facts inside your template, like this::
@ -304,8 +307,6 @@ How do I keep secret data in my playbook?
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :doc:`playbooks_vault`.
In Ansible 1.8 and later, if you have a task that you don't want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful::
- name: secret task
@ -320,8 +321,33 @@ The no_log attribute can also apply to an entire play::
no_log: True
Though this will make the play somewhat difficult to debug. It's recommended that this be applied to single tasks only, once a playbook is completed.
.. _when_to_use_brackets:
.. _dynamic_variables:
.. _interpolate_variables:
When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A steadfast rule is 'always use {{ }} except when `when:`'.
Conditionals are always run through Jinja2 in order to resolve the expression,
so `when:`, `failed_when:` and `changed_when:` are always templated and you should avoid adding `{{}}`.
In most other cases you should always use the brackets, even if previously you could use variables without them (as in `with_` clauses), since this made it hard to distinguish between an undefined variable and a string.
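A short sketch of the rule (the variable and host names are made up)::

    - hosts: app_servers
      vars:
        app_path: "{{ base_path }}/bin"   # brackets required here
      tasks:
        - debug: msg="{{ app_path }}"     # and here
          when: base_path is defined      # but not in the conditional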
Another rule is 'moustaches don't stack'. We often see this::
{{ somevar_{{other_var}} }}
The above DOES NOT WORK; if you need to use a dynamic variable, use the hostvars or vars dictionary as appropriate::
{{ hostvars[inventory_hostname]['somevar_' + other_var] }}
.. _i_dont_see_my_question:
I don't see my question here
++++++++++++++++++++++++++++

@ -1,55 +1,60 @@
Ansible Galaxy
++++++++++++++
"Ansible Galaxy" can either refer to a website for sharing and downloading Ansible roles, or a command line tool for managing and creating roles.
.. contents:: Topics
The Website
```````````
The website `Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, and sharing community developed Ansible roles. Downloading roles from Galaxy is a great way to jumpstart your automation projects.
Access the Galaxy web site using GitHub OAuth, and to install roles use the 'ansible-galaxy' command line tool included in Ansible 1.4.2 and later.
Read the "About" page on the Galaxy site for more information.
The ansible-galaxy command line tool
````````````````````````````````````
The ansible-galaxy command has many different sub-commands for managing roles both locally and at `galaxy.ansible.com <https://galaxy.ansible.com>`_.
.. note::
The search, login, import, delete, and setup commands in the Ansible 2.0 version of ansible-galaxy require access to the
2.0 Beta release of the Galaxy web site available at `https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_.
Use the ``--server`` option to access the beta site. For example::
$ ansible-galaxy search --server https://galaxy-qa.ansible.com mysql --author geerlingguy
Additionally, you can define a server in ansible.cfg::
[galaxy]
server=https://galaxy-qa.ansible.com
Installing Roles
----------------
The most obvious use of the ansible-galaxy command is downloading roles from `the Ansible Galaxy website <https://galaxy.ansible.com>`_::
$ ansible-galaxy install username.rolename
.. _galaxy_cli_roles_path:
roles_path
==========
You can specify a particular directory where you want the downloaded roles to be placed::
$ ansible-galaxy install username.role -p ~/Code/ansible_roles/
This can be useful if you have a master folder that contains ansible galaxy roles shared across several projects. The default is the roles_path configured in your ansible.cfg file (/etc/ansible/roles if not configured).
Installing Multiple Roles From A File
=====================================
To install multiple roles, the ansible-galaxy CLI can be fed a requirements file. All versions of ansible allow the following syntax for installing roles from the Ansible Galaxy website::
$ ansible-galaxy install -r requirements.txt
Where the requirements.txt looks like::
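    user1.role1,v1.0.0
    user2.role2

(The entries above are an illustrative sketch; the general form is ``username.rolename[,version]``.)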
@ -64,7 +69,7 @@ To request specific versions (tags) of a role, use this syntax in the roles file
Available versions will be listed on the Ansible Galaxy webpage for that role.
Advanced Control over Role Requirements Files
=============================================
For more advanced control over where to download roles from, including support for remote repositories, Ansible 1.8 and later support a new YAML format for the role requirements file, which must end in a 'yml' extension. It works like this::
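    # from galaxy (role name illustrative)
    - src: username.rolename

(A minimal sketch; fuller examples with versions and multiple sources follow.)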
@ -80,10 +85,6 @@ And here's an example showing some specific version downloads from multiple sour
# from GitHub
- src: https://github.com/bennojoy/nginx
# from GitHub, overriding the name and specifying a specific tag
- src: https://github.com/bennojoy/nginx
  version: master
@ -105,7 +106,6 @@ And here's an example showing some specific version downloads from multiple sour
- src: git@gitlab.company.com:mygroup/ansible-base.git
  scm: git
  version: 0.1.0
As you can see in the above, there are a large number of controls available to customize where roles can be pulled from, and what to save roles as.
@ -121,3 +121,283 @@ Roles pulled from galaxy work as with other SCM sourced roles above. To download
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
Building Role Scaffolding
-------------------------
Use the init command to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires::
$ ansible-galaxy init rolename
The above will create the following directory structure in the current working directory:
::
    README.md
    .travis.yml
    defaults/
        main.yml
    files/
    handlers/
        main.yml
    meta/
        main.yml
    templates/
    tests/
        inventory
        test.yml
    vars/
        main.yml
.. note::
.travis.yml and tests/ are new in Ansible 2.0
If a directory matching the name of the role already exists in the current working directory, the init command will result in an error. To ignore the error use the --force option. Force will create the above subdirectories and files, replacing anything that matches.
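For example::

    $ ansible-galaxy init --force rolename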
Search for Roles
----------------
The search command provides for querying the Galaxy database, allowing for searching by tags, platforms, author and multiple keywords. For example:
::
$ ansible-galaxy search elasticsearch --author geerlingguy
The search command will return a list of the first 1000 results matching your search:
::
Found 2 roles matching your search:
Name Description
---- -----------
geerlingguy.elasticsearch Elasticsearch for Linux.
geerlingguy.elasticsearch-curator Elasticsearch curator for Linux.
.. note::
The format of results pictured here is new in Ansible 2.0.
Get More Information About a Role
---------------------------------
Use the info command to view more detail about a specific role:
::
$ ansible-galaxy info username.role_name
This returns everything found in Galaxy for the role:
::
Role: username.rolename
description: Installs and configures a thing, a distributed, highly available NoSQL thing.
active: True
commit: c01947b7bc89ebc0b8a2e298b87ab416aed9dd57
commit_message: Adding travis
commit_url: https://github.com/username/repo_name/commit/c01947b7bc89ebc0b8a2e298b87ab
company: My Company, Inc.
created: 2015-12-08T14:17:52.773Z
download_count: 1
forks_count: 0
github_branch:
github_repo: repo_name
github_user: username
id: 6381
is_valid: True
issue_tracker_url:
license: Apache
min_ansible_version: 1.4
modified: 2015-12-08T18:43:49.085Z
namespace: username
open_issues_count: 0
path: /Users/username/projects/roles
scm: None
src: username.repo_name
stargazers_count: 0
travis_status_url: https://travis-ci.org/username/repo_name.svg?branch=master
version:
watchers_count: 1
List Installed Roles
--------------------
The list command shows the name and version of each role installed in roles_path.
::
$ ansible-galaxy list
- chouseknecht.role-install_mongod, master
- chouseknecht.test-role-1, v1.0.2
- chrismeyersfsu.role-iptables, master
- chrismeyersfsu.role-required_vars, master
Remove an Installed Role
------------------------
The remove command will delete a role from roles_path:
::
$ ansible-galaxy remove username.rolename
Authenticate with Galaxy
------------------------
To use the import, delete and setup commands, authentication with Galaxy is required. The login command will authenticate the user, retrieve a token from Galaxy, and store it in the user's home directory.
::
$ ansible-galaxy login
We need your Github login to identify you.
This information will not be sent to Galaxy, only to api.github.com.
The password will not be displayed.
Use --github-token if you do not want to enter your password.
Github Username: dsmith
Password for dsmith:
Succesfully logged into Galaxy as dsmith
As depicted above, the login command prompts for a GitHub username and password. It does NOT send your password to Galaxy. It actually authenticates with GitHub and creates a personal access token. It then sends the personal access token to Galaxy, which in turn verifies that you are you and returns a Galaxy access token. After authentication completes the GitHub personal access token is destroyed.
If you do not wish to use your GitHub password, or if you have two-factor authentication enabled with GitHub, use the --github-token option to pass a personal access token that you create. Log into GitHub, go to Settings and click on Personal Access Token to create a token.
.. note::
The login command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Import a Role
-------------
Roles can be imported using ansible-galaxy. The import command expects that the user previously authenticated with Galaxy using the login command.
Import any GitHub repo you have access to:
::
$ ansible-galaxy import github_user github_repo
By default the command will wait for the role to be imported by Galaxy, displaying the results as the import progresses:
::
Successfully submitted import request 41
Starting import 41: role_name=myrole repo=githubuser/ansible-role-repo ref=
Retrieving Github repo githubuser/ansible-role-repo
Accessing branch: master
Parsing and validating meta/main.yml
Parsing galaxy_tags
Parsing platforms
Adding dependencies
Parsing and validating README.md
Adding repo tags as role versions
Import completed
Status SUCCESS : warnings=0 errors=0
Use the --branch option to import a specific branch. If not specified, the default branch for the repo will be used.
If the --no-wait option is present, the command will not wait for results. Results of the most recent import for any of your roles are available on the Galaxy web site under My Imports.
.. note::
The import command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Delete a Role
-------------
Remove a role from the Galaxy web site using the delete command. You can delete any role that you have access to in GitHub. The delete command expects that the user previously authenticated with Galaxy using the login command.
::
$ ansible-galaxy delete github_user github_repo
This only removes the role from Galaxy. It does not impact the actual GitHub repo.
.. note::
The delete command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Setup Travis Integrations
--------------------------
Using the setup command you can enable notifications from `travis <http://travis-ci.org>`_. The setup command expects that the user previously authenticated with Galaxy using the login command.
::
$ ansible-galaxy setup travis github_user github_repo xxxtravistokenxxx
Added integration for travis github_user/github_repo
The setup command requires your Travis token. The Travis token is not stored in Galaxy. It is used along with the GitHub username and repo to create a hash as described in `the Travis documentation <https://docs.travis-ci.com/user/notifications/>`_. The calculated hash is stored in Galaxy and used to verify notifications received from Travis.
The setup command enables Galaxy to respond to notifications. Follow the `Travis getting started guide <https://docs.travis-ci.com/user/getting-started/>`_ to enable the Travis build process for the role repository.
When you create your .travis.yml file add the following to cause Travis to notify Galaxy when a build completes:
::
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/
.. note::
The setup command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
List Travis Integrations
========================
Use the --list option to display your Travis integrations:
::
$ ansible-galaxy setup --list
ID Source Repo
---------- ---------- ----------
2 travis github_user/github_repo
1 travis github_user/github_repo
Remove Travis Integrations
==========================
Use the --remove option to disable and remove a Travis integration:
::
$ ansible-galaxy setup --remove ID
Provide the ID of the integration you want disabled. Use the --list option to get the ID.

@ -71,8 +71,7 @@ Facts
Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered by Ansible when running plays by executing the internal 'setup' module on the remote nodes. You never have to call the setup module explicitly, it just runs, but it can be disabled to save time if it is not needed or you can tell ansible to collect only a subset of the full facts via the `gather_subset:` option. For the convenience of users who are switching from other configuration management systems, the fact module will also pull in facts from the 'ohai' and 'facter' tools if they are installed, which are fact libraries from Chef and Puppet, respectively. (These may also be disabled via `gather_subset:`)
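For example, a sketch assuming the play-level `gather_subset` keyword added alongside this option in 2.1::

    - hosts: webservers
      gather_subset: network
      tasks:
        - debug: var=ansible_default_ipv4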
Filter Plugin
+++++++++++++
@ -399,7 +398,7 @@ An optional conditional statement attached to a task that is used to determine i
Van Halen
+++++++++
For no particular reason, other than the fact that Michael really likes them, all Ansible 0.x and 1.x releases are codenamed after Van Halen songs. There is no preference given to David Lee Roth vs. Sammy Hagar-era songs, and instrumentals are also allowed. It is unlikely that there will ever be a Jump release, but a Van Halen III codename release is possible. You never know.
Vars (Variables)
++++++++++++++++

@ -23,6 +23,10 @@ You'll need this Python module installed on the execution host, usually your wor
.. note:: cs also includes a command line interface for ad-hoc interaction with the CloudStack API, e.g. ``$ cs listVirtualMachines state=Running``.
Limitations and Known Issues
````````````````````````````
VPC support is not yet fully implemented and tested. The community is working on the VPC integration.
Credentials File
````````````````
You can pass credentials and the endpoint of your cloud as module arguments, however in most cases it is far less work to store your credentials in the cloudstack.ini file.
@ -192,9 +196,9 @@ In the above play we defined 3 tasks and use the group ``cloud-vm`` as target to
In the first task, we ensure we have a running VM created with the Debian template. If the VM is already created but stopped, it would just start it. If you like to change the offering on an existing VM, you must add ``force: yes`` to the task, which would stop the VM, change the offering and start the VM again.
In the second task we ensure the ports are opened if we give a public IP to the VM.
In the third task we add static NAT to the VMs having a public IP defined.
.. Note:: The public IP addresses must have been acquired in advance, also see ``cs_ip_address``

@ -16,7 +16,7 @@ We believe simplicity is relevant to all sizes of environments, so we design for
Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized--it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
This documentation covers the current released version of Ansible (2.0.1) and also some development version features (2.1). For recent features, we note in each section the version of Ansible where the feature was added.
Ansible, Inc. releases a new major release of Ansible approximately every two months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup. However, the community around new modules and plugins being developed and contributed moves very quickly, typically adding 20 or so new modules in each release.
@ -40,4 +40,5 @@ Ansible, Inc. releases a new major release of Ansible approximately every two mo
faq
glossary
YAMLSyntax
porting_guide_2.0

@ -11,12 +11,11 @@ ad hoc tasks.
What's an ad-hoc command?
An ad-hoc command is something that you might type in to do something really quick, but don't want to save for later.
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language -- ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.
Generally speaking, the true power of Ansible lies in playbooks.
Why would you use ad-hoc tasks versus playbooks?
@ -25,7 +24,7 @@ For instance, if you wanted to power off all of your lab for Christmas vacation,
you could execute a quick one-liner in Ansible without writing a playbook.
For configuration management and deployments, though, you'll want to pick up on using '/usr/bin/ansible-playbook' -- the concepts you will learn here will port over directly to the playbook language.
(See :doc:`playbooks` for more information about those)
@ -60,25 +59,24 @@ behavior, pass in "-u username". If you want to run commands as a different use
$ ansible atlanta -a "/usr/bin/foo" -u username
Often you'll not want to just do things from your user account. If you want to run commands through privilege escalation::
$ ansible atlanta -a "/usr/bin/foo" -u username --become [--ask-become-pass]
Use ``--ask-become-pass`` (``-K``) if you are not using a passwordless privilege escalation method (sudo/su/pfexec/doas/etc). This will interactively prompt you for the password to use.
Use of a passwordless setup makes things easier to automate, but it's not required.
It is also possible to become a user other than root using ``--become-user``::
$ ansible atlanta -a "/usr/bin/foo" -u username --become-user otheruser [--ask-become-pass]
.. note::
Rarely, some users have security rules where they constrain their sudo/pbrun/doas environment to running specific command paths only.
This does not work with ansible's no-bootstrapping philosophy and hundreds of different modules.
If doing this, use Ansible from a special account that does not have this constraint.
One way of doing this without sharing access to unauthorized users would be gating Ansible with :doc:`tower`, which can hold on to an SSH credential and let members of certain organizations use it on their behalf without having direct access.
@ -88,7 +86,7 @@ The ``-f 10`` in the above specifies the usage of 10 simultaneous
processes to use. You can also set this in :doc:`intro_configuration` to avoid setting it again. The default is actually 5, which is really small and conservative. You are probably going to want to talk to a lot more simultaneous hosts so feel free to crank this up. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will take a little longer. Feel free to push this value as high as your system can handle!
You can also select what Ansible "module" you want to run. Normally commands also take a ``-m`` for module name, but the default module name is 'command', so we didn't need to
@ -112,7 +110,7 @@ For example, using double rather than single quotes in the above example would
evaluate the variable on the box you were on.
So far we've been demoing simple command execution, but most Ansible modules usually do not work like simple scripts. They make the remote system look like a desired state, and run the commands necessary to get it there. This is commonly referred to as 'idempotence', and is a core design goal of Ansible.
However, we also recognize that running arbitrary commands is equally important, so Ansible easily supports both.
@ -170,7 +168,7 @@ Ensure a package is not installed::
Ansible has modules for managing packages under many platforms. If your package manager does not have a module available for it, you can install packages using the command module or (better!) contribute a module for other package managers. Stop by the mailing list for info/details.
.. _users_and_groups:
@ -249,7 +247,7 @@ very quickly. After the time limit (in seconds) runs out (``-B``), the process o
the remote nodes will be terminated.
Typically you'll only be backgrounding long-running shell commands or software upgrades. Backgrounding the copy module does not do a background file transfer. :doc:`Playbooks <playbooks>` also support polling, and have a simplified syntax for this.
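For example, a hedged sketch of a fire-and-forget background command (the command itself is illustrative)::

    $ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"

Here ``-B 3600`` caps the runtime at an hour and ``-P 0`` disables polling.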
.. _checking_facts:

@ -1,4 +1,4 @@
BSD Support
===========
.. contents:: Topics
@ -8,80 +8,73 @@ BSD support
Working with BSD
````````````````
Ansible manages Linux/Unix machines using SSH by default. BSD machines are no exception, however this document covers some of the differences you may encounter with Ansible when working with BSD variants.
Typically, Ansible will try to default to using OpenSSH as a connection method. This is suitable when using SSH keys to authenticate, but when using SSH passwords, Ansible relies on sshpass. Most versions of sshpass do not deal particularly well with BSD login prompts, so when using SSH passwords against BSD machines, it is recommended to change the transport method to paramiko. You can do this in ansible.cfg globally or you can set it as an inventory/group/host variable. For example::
[freebsd]
mybsdhost1 ansible_connection=paramiko
Ansible is agentless by default, however certain software is required on the target machines. Using Python 2.4 on the agents requires an additional py-simplejson package/library to be installed, however this library is already included in Python 2.5 and above.
Operating without Python is possible with the ``raw`` module. Although this module can be used to bootstrap Ansible and install Python on BSD variants (see below), it is very limited and the use of Python is required to make full use of Ansible's features.
.. _bootstrap_bsd:
Bootstrapping BSD
`````````````````
As mentioned above, you can bootstrap Ansible with the ``raw`` module and remotely install Python on targets. The following example installs Python 2.7 which includes the json library required for full functionality of Ansible.
On your control machine you can simply execute the following for most versions of FreeBSD::
ansible -m raw -a "pkg install -y python27" mybsdhost1
Once this is done you can now use other Ansible modules apart from the ``raw`` module.
.. note::
This example used FreeBSD's pkg tool; you should be able to substitute the appropriate package tool for your BSD, and the package name may also differ. Refer to the package list or documentation of the BSD variant you are using for the exact Python package name you intend to install.
.. _python_location:
Setting the Python interpreter
``````````````````````````````
To support a variety of Unix/Linux operating systems and distributions, Ansible cannot always rely on the existing environment or ``env`` variables to locate the correct Python binary. By default, modules point at ``/usr/bin/python`` as this is the most common location. On BSD variants, this path may differ, so it is advised to inform Ansible of the binary's location, through the ``ansible_python_interpreter`` inventory variable. For example::
[freebsd:vars]
ansible_python_interpreter=/usr/local/bin/python2.7
If you use additional plugins beyond those bundled with Ansible, you can set similar variables for ``bash``, ``perl`` or ``ruby``, depending on how the plugin is written. For example::
[freebsd:vars]
ansible_python_interpreter=/usr/local/bin/python
ansible_perl_interpreter=/usr/bin/perl5
Which modules are available?
````````````````````````````
The majority of the core Ansible modules are written for a combination of Linux/Unix machines and other generic services, so most should function well on the BSDs with the obvious exception of those that are aimed at Linux-only technologies (such as LVG).
Using BSD as the control machine
````````````````````````````````
Using BSD as the control machine is as simple as installing the Ansible package for your BSD variant or by following the ``pip`` or 'from source' instructions.
.. _bsd_facts:
BSD Facts
`````````
Ansible gathers facts from the BSDs in a similar manner to Linux machines, but since the data, names and structures can vary for network, disks and other devices, one should expect the output to be slightly different yet still familiar to a BSD administrator.
.. _bsd_contributions:
BSD Efforts and Contributions
`````````````````````````````
BSD support is important to us at Ansible. Even though the majority of our contributors use and target Linux we have an active BSD community and strive to be as BSD friendly as possible.
Please feel free to report any issues or incompatibilities you discover with BSD; pull requests with an included fix are also welcome!
.. seealso::

@ -60,7 +60,7 @@ General defaults
In the [defaults] section of ansible.cfg, the following settings are tunable:
.. _cfg_action_plugins:
action_plugins
==============
@ -228,6 +228,34 @@ Allows disabling of deprecating warnings in ansible-playbook output::
Deprecation warnings indicate usage of legacy features that are slated for removal in a future release of Ansible.
.. _display_args_to_stdout:
display_args_to_stdout
======================
.. versionadded:: 2.1.0
By default, ansible-playbook will print a header for each task that is run to
stdout. These headers will contain the ``name:`` field from the task if you
specified one. If you didn't then ansible-playbook uses the task's action to
help you tell which task is presently running. Sometimes you run many of the
same action and so you want more information about the task to differentiate
it from others of the same action. If you set this variable to ``True`` in
the config then ansible-playbook will also include the task's arguments in the
header.
This setting defaults to ``False`` because there is a chance that you have
sensitive values in your parameters and do not want those to be printed to
stdout::
display_args_to_stdout=False
If you set this to ``True`` you should be sure that you have secured your
environment's stdout (no one can shoulder surf your screen and you aren't
saving stdout to an insecure file) or made sure that all of your playbooks
explicitly added the ``no_log: True`` parameter to tasks which have sensitive
values. See :ref:`keep_secret_data` for more information.
.. _display_skipped_hosts:
display_skipped_hosts
@ -261,6 +289,8 @@ This indicates the command to use to spawn a shell under a sudo environment. Us
executable = /bin/bash
Starting in version 2.1 this can be overridden by the inventory var ``ansible_shell_executable``.
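For example, a single host can be pointed at a different shell from an INI inventory (hostname illustrative)::

    host1.example.com ansible_shell_executable=/bin/bash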
.. _filter_plugins:

filter_plugins
==============
@ -325,6 +355,32 @@ This option can be useful for those wishing to save fact gathering time. Both 's
gathering = smart
.. versionadded:: 2.1
You can specify a subset of gathered facts using the following option::
gather_subset = all
:all: gather all subsets (the default)
:network: gather network facts
:hardware: gather hardware facts (longest facts to retrieve)
:virtual: gather facts about virtual machines hosted on the machine
:ohai: gather facts from ohai
:facter: gather facts from facter
You can combine them using a comma-separated list (e.g. network,virtual,facter).
You can also disable specific subsets by prepending with a `!` like this::
# Don't gather hardware facts, facts from chef's ohai or puppet's facter
gather_subset = !hardware,!ohai,!facter
A set of basic facts is always collected no matter which additional subsets
are selected. If you want to collect the minimal amount of facts, use
`!all`::
gather_subset = !all
hash_behaviour
==============
@ -339,7 +395,7 @@ official examples repos do not use this setting::
The valid values are either 'replace' (the default) or 'merge'.

.. versionadded:: 2.0

If you want to merge hashes without changing the global settings, use
the `combine` filter described in :doc:`playbooks_filters`.
@ -557,7 +613,7 @@ The directory will be created if it does not already exist.
roles_path
==========

.. versionadded:: 1.4

The roles path indicates additional directories beyond the 'roles/' subdirectory of a playbook project to search to find Ansible
roles. For instance, if there was a source control repository of common roles and a different repository of playbooks, you might
@ -572,6 +628,37 @@ Additional paths can be provided separated by colon characters, in the same way
Roles will be first searched for in the playbook directory. Should a role not be found, it will indicate all the possible paths
that were searched.
.. _cfg_squash_actions:
squash_actions
==============
.. versionadded:: 2.0
Ansible can optimise actions that call modules that support list parameters when using with\_ looping.
Instead of calling the module once for each item, the module is called once with the full list.
The default value lists only certain package managers, but the setting can be used with any module::
squash_actions = apk,apt,dnf,package,pacman,pkgng,yum,zypper
Currently, this is only supported for modules that have a name parameter, and only when the item is the
only thing being passed to the parameter.
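As an illustration, a loop like the following sketch is collapsed into a single ``yum`` invocation (one transaction) when ``yum`` is listed in ``squash_actions``, since the item is the only thing passed to the ``name`` parameter::

    - name: install several packages in one transaction
      yum: name={{ item }} state=present
      with_items:
        - httpd
        - mod_ssl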
.. _cfg_strategy_plugins:
strategy_plugins
==================
Strategy plugins allow users to change the way in which Ansible runs tasks on targeted hosts.
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations::
strategy_plugins = ~/.ansible/plugins/strategy_plugins/:/usr/share/ansible_plugins/strategy_plugins
Most users will not need to use this feature. See :doc:`developing_plugins` for more details.
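A play then selects one of the available strategies by name; ``free`` ships with Ansible 2.0 and is used here purely as an illustration::

    - hosts: all
      strategy: free
      tasks:
        - command: /bin/true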
.. _sudo_exe:

sudo_exe
========
@ -587,11 +674,12 @@ the sudo implementation is matching CLI flags with the standard sudo::
sudo_flags
==========

Additional flags to pass to sudo when engaging sudo support. The default is '-H -S -n' which sets the HOME environment
variable, prompts for passwords via STDIN, and avoids prompting the user for input of any kind. Note that '-n' will conflict
with using password-less sudo auth, such as pam_ssh_agent_auth. In some situations you may wish to add or remove flags, but
in general most users will not need to change this setting::

sudo_flags=-H -S -n
.. _sudo_user:
@ -739,6 +827,17 @@ instead. Setting it to False will improve performance and is recommended when h
record_host_keys=True
.. _paramiko_proxy_command:
proxy_command
=============
.. versionadded:: 2.1
Use an OpenSSH-like ProxyCommand for proxying all Paramiko SSH connections through a bastion or jump host. Requires a minimum of Paramiko version 1.9.0. On Enterprise Linux 6 this is provided by ``python-paramiko1.10`` in the EPEL repository::
proxy_command = ssh -W "%h:%p" bastion
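In ansible.cfg this would sit alongside the other paramiko settings above (a sketch; ``bastion`` is an illustrative SSH host alias, and ``[paramiko_connection]`` is assumed to be the section name)::

    [paramiko_connection]
    proxy_command = ssh -W "%h:%p" bastion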
.. _openssh_settings:

OpenSSH Specific Settings
-------------------------
@ -897,3 +996,30 @@ The normal behaviour is for operations to copy the existing context or use the u
The default list is: nfs,vboxsf,fuse,ramfs::

special_context_filesystems = nfs,vboxsf,fuse,ramfs,myspecialfs
libvirt_lxc_noseclabel
======================
.. versionadded:: 2.1
This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux.
The default behavior is no::
libvirt_lxc_noseclabel = True
Galaxy Settings
---------------
The following options can be set in the [galaxy] section of ansible.cfg:
server
======
Override the default Galaxy server value of https://galaxy.ansible.com. Useful if you have a hosted version of the Galaxy web app or want to point to the testing site https://galaxy-qa.ansible.com. It does not work against private, hosted repos, which Galaxy can use for fetching and installing roles.
ignore_certs
============
If set to *yes*, ansible-galaxy will not validate TLS certificates. Handy for testing against a server with a self-signed certificate.
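Put together, a [galaxy] section might look like this (values illustrative)::

    [galaxy]
    server = https://galaxy-qa.ansible.com
    ignore_certs = yes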

@ -111,9 +111,8 @@ If you use boto profiles to manage multiple AWS accounts, you can pass ``--profi
aws_access_key_id = <prod access key>
aws_secret_access_key = <prod secret key>
You can then run ``ec2.py --profile prod`` to get the inventory for the prod account, though this option is not supported by ``ansible-playbook``.
Instead, use the ``AWS_PROFILE`` environment variable - e.g. ``AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml``
Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ``ec2.ini`` and list only the regions you are interested in. There are other config options in ``ec2.ini`` including cache control, and destination variables.
@ -207,6 +206,77 @@ explicitly clear the cache, you can run the ec2.py script with the ``--refresh-c
# ./ec2.py --refresh-cache
.. _openstack_example:
Example: OpenStack External Inventory Script
````````````````````````````````````````````
If you use an OpenStack based cloud, instead of manually maintaining your own inventory file, you can use the openstack.py dynamic inventory to pull information about your compute instances directly from OpenStack.
You can download the latest version of the OpenStack inventory script at: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
You can use the inventory script explicitly (by passing the `-i openstack.py` argument to Ansible) or implicitly (by placing the script at `/etc/ansible/hosts`).
Explicit use of inventory script
++++++++++++++++++++++++++++++++
Download the latest version of the OpenStack dynamic inventory script and make it executable::
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
chmod +x openstack.py
Source an OpenStack RC file::
source openstack.rc
.. note::
An OpenStack RC file contains the environment variables required by the client tools to establish a connection with the cloud provider, such as the authentication URL, user name, password and region name. For more information on how to download, create or source an OpenStack RC file, please refer to http://docs.openstack.org/cli-reference/content/cli_openrc.html.
You can confirm the file has been successfully sourced by running a simple command, such as `nova list`, and ensuring it returns no errors.
.. note::
The OpenStack command line clients are required to run the `nova list` command. For more information on how to install them, please refer to http://docs.openstack.org/cli-reference/content/install_clients.html.
You can test the OpenStack dynamic inventory script manually to confirm it is working as expected::
./openstack.py --list
After a few moments you should see some JSON output with information about your compute instances.
Once you confirm the dynamic inventory script is working as expected, you can tell Ansible to use the `openstack.py` script as an inventory file, as illustrated below::
ansible -i openstack.py all -m ping
Implicit use of inventory script
++++++++++++++++++++++++++++++++
Download the latest version of the OpenStack dynamic inventory script, make it executable and copy it to `/etc/ansible/hosts`::
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
chmod +x openstack.py
sudo cp openstack.py /etc/ansible/hosts
Download the sample configuration file, modify it to suit your needs and copy it to `/etc/ansible/openstack.yml`::
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.yml
vi openstack.yml
sudo cp openstack.yml /etc/ansible/
You can test the OpenStack dynamic inventory script manually to confirm it is working as expected::
/etc/ansible/hosts --list
After a few moments you should see some JSON output with information about your compute instances.
Refresh the cache
+++++++++++++++++
Note that the OpenStack dynamic inventory script will cache results to avoid repeated API calls. To explicitly clear the cache, you can run the openstack.py (or hosts) script with the ``--refresh`` parameter::
./openstack.py --refresh
.. _other_inventory_scripts:

Other inventory scripts
```````````````````````
@ -220,7 +290,8 @@ In addition to Cobbler and EC2, inventory scripts are also available for::
   Linode
   OpenShift
   OpenStack Nova
   Ovirt
   SpaceWalk
   Vagrant (not to be confused with the provisioner in vagrant, which is preferred)
   Zabbix
@ -232,13 +303,21 @@ to include it in the project.
.. _using_multiple_sources:

Using Inventory Directories and Multiple Inventory Sources
``````````````````````````````````````````````````````````

If the location given to -i in Ansible is a directory (or as so configured in ansible.cfg), Ansible can use multiple inventory sources
at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same ansible run. Instant
hybrid cloud!
In an inventory directory, executable files will be treated as dynamic inventory sources and most other files as static sources. Files which end with any of the following will be ignored::
~, .orig, .bak, .ini, .retry, .pyc, .pyo
You can replace this list with your own selection by configuring an ``inventory_ignore_extensions`` list in ansible.cfg, or setting the ANSIBLE_INVENTORY_IGNORE environment variable. The value in either case should be a comma-separated list of patterns, as shown above.
Any ``group_vars`` and ``host_vars`` subdirectories in an inventory directory will be interpreted as expected, making inventory directories a powerful way to organize different sets of configurations.
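An illustrative layout (all file names hypothetical) might look like::

    inventory/
        hosts.static          # static source
        ec2.py                # executable, treated as a dynamic source
        ec2.ini               # ignored as an inventory source (ends in .ini)
        group_vars/
            all
        host_vars/
            web01.example.com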
.. _static_groups_of_dynamic:

Static Groups of Dynamic Groups
```````````````````````````````

@ -33,7 +33,7 @@ In releases up to and including Ansible 1.2, the default was strictly paramiko.
Occasionally you'll encounter a device that doesn't support SFTP. This is rare, but should it occur, you can switch to SCP mode in :doc:`intro_configuration`.

When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged but password authentication can also be used where needed by supplying the option ``--ask-pass``. If using sudo features and when sudo requires a password, also supply ``--ask-become-pass`` (previously ``--ask-sudo-pass`` which has been deprecated).

While it may be common sense, it is worth sharing: Any management system benefits from being run near the machines being managed. If you are running Ansible in a cloud, consider running it from a machine inside that cloud. In most cases this will work better than on the open Internet.

@ -27,12 +27,11 @@ What Version To Pick?
`````````````````````

Because it runs so easily from source and does not require any installation of software on remote
machines, many users will actually track the development version.
Ansible's release cycles are usually about four months long. Due to this short release cycle,
minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch.
Major bugs will still have maintenance releases when needed, though these are infrequent.
If you wish to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.
@ -53,9 +52,13 @@ This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.
.. note::

   As of 2.0 ansible uses a few more file handles to manage its forks; OS X has a very low setting, so if you want to use 15 or more forks
   you'll need to raise the ulimit, like so: ``sudo launchctl limit maxfiles unlimited``. Do the same any time you see a "Too many open files" error.
.. warning::
   Please note that some modules and plugins have additional requirements; for modules these need to be satisfied on the 'target' machine and should be listed in the module-specific docs.
.. _managed_node_requirements:

Managed Node Requirements
`````````````````````````
@ -198,7 +201,7 @@ Fedora users can install Ansible directly, though if you are using RHEL or CentO
    # install the epel-release RPM if needed on CentOS, RHEL, or Scientific Linux
    $ sudo yum install ansible
You can also build an RPM yourself. From the root of a checkout or tarball, use the ``make rpm`` command to build an RPM you can distribute and install. Make sure you have ``rpm-build``, ``make``, ``asciidoc``, ``git``, ``python-setuptools`` and ``python2-devel`` installed.
.. code-block:: bash
@ -223,6 +226,7 @@ To configure the PPA on your machine and install ansible run these commands:
    $ sudo apt-get update
    $ sudo apt-get install ansible
.. note:: For the older version 1.9, use this PPA instead: ``ppa:ansible/ansible-1.9``.
.. note:: On older Ubuntu distributions, "software-properties-common" is called "python-software-properties".
Debian/Ubuntu packages can also be built from the source checkout, run:

@ -186,7 +186,7 @@ available to them. This can be very useful to keep your variables organized when
file starts to be too big, or when you want to use :doc:`Ansible Vault<playbooks_vault>` on a part of a group's
variables. Note that this only works on Ansible 1.4 or later.

Tip: In Ansible 1.2 or later the group_vars/ and host_vars/ directories can exist in
the playbook directory OR the inventory directory. If both paths exist, variables in the playbook
directory will override variables set in the inventory directory.
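For example, with the following sketch (paths hypothetical), the playbook copy of the ``webservers`` group file wins for any variable defined in both places::

    inventory/
        hosts
        group_vars/
            webservers        # lower precedence
    playbook-dir/
        site.yml
        group_vars/
            webservers        # overrides the inventory copy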
@ -200,64 +200,74 @@ List of Behavioral Inventory Parameters
As alluded to above, setting the following variables controls how ansible interacts with remote hosts.
Host connection:

ansible_connection
  Connection type to the host. This can be the name of any of ansible's connection plugins. Common connection types are local, smart, ssh or paramiko. The default is smart.

.. include:: ansible_ssh_changes_note.rst

SSH connection:

ansible_host
  The name of the host to connect to, if different from the alias you wish to give to it.
ansible_port
  The ssh port number, if not 22
ansible_user
  The default ssh user name to use.
ansible_ssh_pass
  The ssh password to use (this is insecure, we strongly recommend using :option:`--ask-pass` or SSH keys)
ansible_ssh_private_key_file
  Private key file used by ssh. Useful if using multiple keys and you don't want to use SSH agent.
ansible_ssh_common_args
  This setting is always appended to the default command line for :command:`sftp`, :command:`scp`,
  and :command:`ssh`. Useful to configure a ``ProxyCommand`` for a certain host (or
  group).
ansible_sftp_extra_args
  This setting is always appended to the default :command:`sftp` command line.
ansible_scp_extra_args
  This setting is always appended to the default :command:`scp` command line.
ansible_ssh_extra_args
  This setting is always appended to the default :command:`ssh` command line.
ansible_ssh_pipelining
  Determines whether or not to use SSH pipelining. This can override the ``pipelining`` setting in :file:`ansible.cfg`.
Privilege escalation (see :doc:`Ansible Privilege Escalation<become>` for further details):

ansible_become
  Equivalent to ``ansible_sudo`` or ``ansible_su``, allows to force privilege escalation
ansible_become_method
  Allows to set privilege escalation method
ansible_become_user
  Equivalent to ``ansible_sudo_user`` or ``ansible_su_user``, allows to set the user you become through privilege escalation
ansible_become_pass
  Equivalent to ``ansible_sudo_pass`` or ``ansible_su_pass``, allows you to set the privilege escalation password
Remote host environment parameters:

ansible_shell_type
  The shell type of the target system. You should not use this setting unless you have set the ``ansible_shell_executable`` to a non-Bourne (sh) compatible shell.
  By default commands are formatted using ``sh``-style syntax.
  Setting this to ``csh`` or ``fish`` will cause commands executed on target systems to follow those shells' syntax instead.
ansible_python_interpreter
  The target host python path. This is useful for systems with more
  than one Python or not located at :command:`/usr/bin/python` such as \*BSD, or where :command:`/usr/bin/python`
  is not a 2.X series Python. We do not use the :command:`/usr/bin/env` mechanism as that requires the remote user's
  path to be set right and also assumes the :program:`python` executable is named python, where the executable might
  be named something like :program:`python2.6`.
ansible_*_interpreter
  Works for anything such as ruby or perl and works just like ``ansible_python_interpreter``.
  This replaces shebang of modules which will run on that host.
.. versionadded:: 2.1

ansible_shell_executable
  This sets the shell the ansible controller will use on the target machine,
  overrides ``executable`` in :file:`ansible.cfg` which defaults to
  :command:`/bin/sh`. You should really only change it if it is not possible
  to use :command:`/bin/sh` (i.e. :command:`/bin/sh` is not installed on the target
  machine or cannot be run from sudo).
Examples from a host file::

@ -8,7 +8,7 @@ Windows Support
Windows: How Does It Work
`````````````````````````

As you may have already read, Ansible manages Linux/Unix machines using SSH by default.

Starting in version 1.7, Ansible also contains support for managing Windows machines. This uses
native PowerShell remoting, rather than SSH.
@ -28,10 +28,12 @@ On a Linux control machine::
    pip install "pywinrm>=0.1.1"
.. note:: On distributions with multiple python versions, use pip2 or pip2.x, where x matches the python minor version Ansible is running under.
Active Directory Support
++++++++++++++++++++++++
If you wish to connect to domain accounts published through Active Directory (as opposed to local accounts created on the remote host), you will need to install the "python-kerberos" module on the Ansible control host (and the MIT krb5 libraries it depends on). The Ansible control host also requires a properly configured computer account in Active Directory.
Installing python-kerberos dependencies
---------------------------------------
@ -40,22 +42,22 @@ Installing python-kerberos dependencies
    # Via Yum
    yum -y install python-devel krb5-devel krb5-libs krb5-workstation

    # Via Apt (Ubuntu)
    sudo apt-get install python-dev libkrb5-dev

    # Via Portage (Gentoo)
    emerge -av app-crypt/mit-krb5
    emerge -av dev-python/setuptools

    # Via pkg (FreeBSD)
    sudo pkg install security/krb5

    # Via OpenCSW (Solaris)
    pkgadd -d http://get.opencsw.org/now
    /opt/csw/bin/pkgutil -U
    /opt/csw/bin/pkgutil -y -i libkrb5_3

    # Via Pacman (Arch Linux)
    pacman -S krb5
@ -131,7 +133,9 @@ To test this, ping the windows host you want to control by name then use the ip
If you get different hostnames back than the name you originally pinged, speak to your active directory administrator and get them to check that DNS Scavenging is enabled and that DNS and DHCP are updating each other.
Ensure that the Ansible controller has a properly configured computer account in the domain.

Check your Ansible controller's clock is synchronised with your domain controller. Kerberos is time sensitive and a little clock drift can cause tickets not to be granted.
Check you are using the real fully qualified domain name for the domain. Sometimes domains are commonly known to users by aliases. To check this run:
@ -165,6 +169,10 @@ In group_vars/windows.yml, define the following inventory variables::
    ansible_password: SecretPasswordGoesHere
    ansible_port: 5986
    ansible_connection: winrm
    # The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
    ansible_winrm_server_cert_validation: ignore
Note on the older style variables (``ansible_ssh_*``): ``ansible_ssh_password`` does not exist; the correct variable is ``ansible_ssh_pass``.
Although Ansible is mostly an SSH-oriented system, Windows management will not happen over SSH (`yet <http://blogs.msdn.com/b/powershell/archive/2015/06/03/looking-forward-microsoft-support-for-secure-shell-ssh.aspx>`_).
@ -189,6 +197,7 @@ Since 2.0, the following custom inventory variables are also supported for addit
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint. Ansible uses ``/wsman`` by default.
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos authentication. If the username contains ``@``, Ansible will use the part of the username after ``@`` by default.
* ``ansible_winrm_transport``: Specify one or more transports as a comma-separated list. By default, Ansible will use ``kerberos,plaintext`` if the ``kerberos`` module is installed and a realm is defined, otherwise ``plaintext``.
* ``ansible_winrm_server_cert_validation``: Specify the server certificate validation mode (``ignore`` or ``validate``). Ansible defaults to ``validate`` on Python 2.7.9 and higher, which will result in certificate validation errors against the Windows self-signed certificates. Unless verifiable certificates have been configured on the WinRM listeners, this should be set to ``ignore``.
* ``ansible_winrm_*``: Any additional keyword arguments supported by ``winrm.Protocol`` may be provided.
.. _windows_system_prep:
@ -198,18 +207,23 @@ Windows System Prep
In order for Ansible to manage your Windows machines, you will have to enable and configure PowerShell remoting.
To automate setup of WinRM, you can run `this PowerShell script <https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1>`_ on the remote machine.
The example script accepts a few arguments which Admins may choose to use to modify the default setup slightly, which might be appropriate in some cases.

Pass the -CertValidityDays option to customize the expiration date of the generated certificate::

    powershell.exe -File ConfigureRemotingForAnsible.ps1 -CertValidityDays 100

Pass the -SkipNetworkProfileCheck switch to configure winrm to listen on PUBLIC zone interfaces. (Without this option, the script will fail if any network interface on the device is in the PUBLIC zone)::

    powershell.exe -File ConfigureRemotingForAnsible.ps1 -SkipNetworkProfileCheck
.. note::
   On Windows 7 and Server 2008 R2 machines, due to a bug in Windows
   Management Framework 3.0, it may be necessary to install this
   hotfix http://support.microsoft.com/kb/2842230 to avoid receiving
   out of memory and stack overflow exceptions. Newly-installed Server 2008
   R2 systems which are not fully up to date with windows updates are known
   to have this issue.

   Windows 8.1 and Server 2012 R2 are not affected by this issue as they
   come with Windows Management Framework 4.0.
@ -221,15 +235,15 @@ Getting to PowerShell 3.0 or higher
PowerShell 3.0 or higher is needed for most provided Ansible modules for Windows, and is also required to run the above setup script. Note that PowerShell 3.0 is only supported on Windows 7 SP1, Windows Server 2008 SP1, and later releases of Windows.
Looking at an Ansible checkout, copy the `examples/scripts/upgrade_to_ps3.ps1 <https://github.com/cchurch/ansible/blob/devel/examples/scripts/upgrade_to_ps3.ps1>`_ script onto the remote host and run a PowerShell console as an administrator. You will now be running PowerShell 3 and can try connectivity again using the win_ping technique referenced above.
.. _what_windows_modules_are_available:

What modules are available
``````````````````````````
Most of the Ansible modules in core Ansible are written for a combination of Linux/Unix machines and arbitrary web services, though there are various
Windows modules as listed in the `"windows" subcategory of the Ansible module index <http://docs.ansible.com/list_of_windows_modules.html>`_.
Browse this index to see what is available.
@ -248,13 +262,10 @@ Note there are a few other Ansible modules that don't start with "win" that also
Developers: Supported modules and how it works
``````````````````````````````````````````````
Developing Ansible modules is covered in a `later section of the documentation <http://docs.ansible.com/developing_modules.html>`_, with a focus on Linux/Unix.
What if you want to write Windows modules for Ansible though?

For Windows, Ansible modules are implemented in PowerShell. Skim those Linux/Unix module development chapters before proceeding. Windows modules in the core and extras repo live in a "windows/" subdir. Custom modules can go directly into the Ansible "library/" directories or those added in ansible.cfg. Documentation lives in a `.py` file with the same name. For example, if a module is named "win_ping", there will be embedded documentation in the "win_ping.py" file, and the actual PowerShell code will live in a "win_ping.ps1" file. Take a look at the sources and this will make more sense.
Modules (ps1 files) should start as follows::
@ -317,6 +328,14 @@ Running individual commands uses the 'raw' module, as opposed to the shell or co
      register: ipconfig

    - debug: var=ipconfig
Running common DOS commands like 'del', 'move', or 'copy' is unlikely to work on a remote Windows Server using PowerShell, but they can work by prefacing the commands with "CMD /C" and enclosing the command in double quotes as in this example::
    - name: another raw module example
      hosts: windows
      tasks:
        - name: Move file on remote Windows Server from one location to another
          raw: CMD /C "MOVE /Y C:\teststuff\myfile.conf C:\builds\smtp.conf"
And for a final example, here's how to use the win_stat module to test for file existence. Note that the data returned by the win_stat module is slightly different than what is provided by the Linux equivalent::
    - name: test stat module
@ -351,12 +370,10 @@ form of new modules, tweaks to existing modules, documentation, or something els
   :doc:`developing_modules`
       How to write modules
   :doc:`playbooks`
       Learning Ansible's configuration management language
   `List of Windows Modules <http://docs.ansible.com/list_of_windows_modules.html>`_
       Windows specific module list, all implemented in PowerShell
   `Mailing List <http://groups.google.com/group/ansible-project>`_
       Questions? Help? Ideas? Stop by the list on Google Groups
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel

@ -0,0 +1,59 @@
Advanced Syntax
===============
.. contents:: Topics
This page describes advanced YAML syntax that enables you to have more control over the data placed in YAML files used by Ansible.
.. _yaml_tags_and_python_types:
YAML tags and Python types
``````````````````````````
The documentation covered here is an extension of the documentation that can be found in the `PyYAML Documentation <http://pyyaml.org/wiki/PyYAMLDocumentation#YAMLtagsandPythontypes>`_
.. _unsafe_strings:
Unsafe or Raw Strings
~~~~~~~~~~~~~~~~~~~~~
As of Ansible 2.0, there is an internal data type for declaring variable values as "unsafe". This means that the data held within the variable's value should be treated as unsafe, preventing unsafe character substitution and information disclosure.
Jinja2 contains functionality for escaping, or telling Jinja2 to not template data, by means of functionality such as ``{% raw %} ... {% endraw %}``; however, the ``!unsafe`` type uses a more comprehensive implementation to ensure that the value is never templated.
Using YAML tags, you can also mark a value as "unsafe" by using the ``!unsafe`` tag such as::
    ---
    my_unsafe_variable: !unsafe 'this variable has {{ characters that should not be treated as a jinja2 template'
In a playbook, this may look like::
    ---
    - hosts: all
      vars:
        my_unsafe_variable: !unsafe 'unsafe value'
      tasks:
        ...
For complex variables such as hashes or arrays, ``!unsafe`` should be used on the individual elements such as::
    ---
    my_unsafe_array:
        - !unsafe 'unsafe element'
        - 'safe element'

    my_unsafe_hash:
        unsafe_key: !unsafe 'unsafe value'
.. seealso::

   :doc:`playbooks_variables`
       All about variables
   `User Mailing List <http://groups.google.com/group/ansible-project>`_
       Have a question? Stop by the google group!
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel

@ -254,8 +254,8 @@ What about just my webservers in Boston?::
What about just the first 10, and then the next 10?::

    ansible-playbook -i production webservers.yml --limit boston[1-10]
    ansible-playbook -i production webservers.yml --limit boston[11-20]
And of course just basic ad-hoc stuff is also possible::

@ -27,8 +27,9 @@ to the tasks.
     become_user: root
In the example above, each of the 3 tasks will be executed after appending the `when` condition from the block
and evaluating it in the task's context. They also inherit the privilege escalation directives, enabling "become to root"
for all the enclosed tasks.
.. _block_error_handling:

@ -47,7 +47,7 @@ decide to do something conditionally based on success or failure::
    - command: /bin/something
      when: result|failed
    - command: /bin/something_else
      when: result|succeeded
    - command: /bin/still/something_else
      when: result|skipped

@ -130,6 +130,29 @@ Here is an example::
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync
will need to ask for a passphrase.
.. _delegate_facts:
Delegated facts
```````````````
.. versionadded:: 2.0
By default, any facts gathered by a delegated task are assigned to the `inventory_hostname` (the current host) instead of the host which actually produced the facts (the delegated-to host).
In 2.0, the directive `delegate_facts` may be set to `True` to assign the task's gathered facts to the delegated host instead of the current one::
    - hosts: app_servers
      tasks:
        - name: gather facts from db servers
          setup:
          delegate_to: "{{item}}"
          delegate_facts: True
          with_items: "{{groups['dbservers']}}"
The above will gather facts for the machines in the dbservers group and assign the facts to those machines and not to app_servers.
This way you can look up `hostvars['dbhost1']['default_ipv4_addresses'][0]` even though dbservers were not part of the play, or were left out by using `--limit`.
.. _run_once:

Run Once
````````
@ -159,13 +182,18 @@ This can be optionally paired with "delegate_to" to specify an individual host t
      delegate_to: web01.example.org

When "run_once" is not used with "delegate_to" it will execute on the first host, as defined by inventory,
in the group(s) of hosts targeted by the play - e.g. webservers[0] if the play targeted "hosts: webservers".

This approach is similar to applying a conditional to a task such as::
    - command: /opt/application/upgrade_db.py
      when: inventory_hostname == webservers[0]
.. note::
     When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch.
     If it's crucial that the task is run only once regardless of "serial" mode, use
     the :code:`inventory_hostname == my_group_name[0]` construct.
.. _local_playbooks:

Local Playbooks
```````````````

@ -31,7 +31,7 @@ The environment can also be stored in a variable, and accessed like so::
    tasks:
        - apt: name=cobbler state=installed
          environment: "{{proxy_env}}"
You can also use it at a playbook level::

@ -58,12 +58,17 @@ The following tasks are illustrative of how filters can be used with conditional
    - debug: msg="it changed"
      when: result|changed

    - debug: msg="it succeeded in Ansible >= 2.1"
      when: result|succeeded

    - debug: msg="it succeeded"
      when: result|success

    - debug: msg="it was skipped"
      when: result|skipped
.. note:: From 2.1, you can also use ``success``, ``failure``, ``change``, and ``skip`` so the grammar matches, for those that want to be strict about it.
.. _forcing_variables_to_be_defined:

Forcing Variables To Be Defined
```````````````````````````````
@ -357,7 +362,7 @@ setting in `ansible.cfg`.
Extracting values from containers
---------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of
values from a container (hash or array)::
@ -547,7 +552,7 @@ To match strings against a regex, use the "match" or "search" filter::
To replace text in a string with regex, use the "regex_replace" filter::

    # convert "ansible" to "able"
    {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}

    # convert "foobar" to "bar"
@ -559,11 +564,13 @@ To replace text in a string with regex, use the "regex_replace" filter::
.. note:: Prior to ansible 2.0, if "regex_replace" filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments),
   then you needed to escape backreferences (e.g. ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a regex, use the "regex_escape" filter::

    # convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
    {{ '^f.*o(.*)$' | regex_escape() }}
To make use of one attribute from each item in a list of complex variables, use the "map" filter (see the `Jinja2 map() docs`_ for more)::

    # get a comma-separated list of the mount points (e.g. "/,/mnt/stuff") on a host

@ -283,6 +283,37 @@ If needed, you can extract subnet and prefix information from 'host/prefix' valu
    # {{ host_prefix | ipaddr('host/prefix') | ipaddr('prefix') }}
    [64, 24]
Converting subnet masks to CIDR notation
----------------------------------------
Given a subnet in the form of network address and subnet mask, it can be converted into CIDR notation using ``ipaddr()``. This can be useful for converting Ansible facts gathered about network configuration from subnet masks into CIDR format::
    ansible_default_ipv4: {
        address: "192.168.0.11",
        alias: "eth0",
        broadcast: "192.168.0.255",
        gateway: "192.168.0.1",
        interface: "eth0",
        macaddress: "fa:16:3e:c4:bd:89",
        mtu: 1500,
        netmask: "255.255.255.0",
        network: "192.168.0.0",
        type: "ether"
    }
First concatenate network and netmask::
    net_mask = "{{ ansible_default_ipv4.network }}/{{ ansible_default_ipv4.netmask }}"
    '192.168.0.0/255.255.255.0'
This result can be canonicalised with ``ipaddr()`` to produce a subnet in CIDR format::
    # {{ net_mask | ipaddr('prefix') }}
    '24'

    # {{ net_mask | ipaddr('net') }}
    '192.168.0.0/24'
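The two steps can also be combined into a single expression (a sketch reusing the facts above)::

    # {{ (ansible_default_ipv4.network + '/' + ansible_default_ipv4.netmask) | ipaddr('net') }}
    '192.168.0.0/24'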
IP address conversion
---------------------

@ -41,7 +41,7 @@ Each playbook is composed of one or more 'plays' in a list.
The goal of a play is to map a group of hosts to some well defined roles, represented by
things ansible calls tasks. At a basic level, a task is nothing more than a call
to an ansible module (see :doc:`modules`).
By composing a playbook of multiple 'plays', it is possible to
orchestrate multi-machine deployments, running certain steps on all
@ -180,9 +180,9 @@ Support for running things as another user is also available (see :doc:`become`)
    ---
    - hosts: webservers
      remote_user: yourname
      become: yes

You can also use become on a particular task instead of the whole play::
    ---
    - hosts: webservers
@ -382,10 +382,11 @@ Handlers are best used to restart services and trigger reboots. You probably
won't need them for much else.
.. note::
   * Notify handlers are always run in the same order they are defined, `not` in the order listed in the notify-statement.
   * Handler names live in a global namespace.
   * If two handler tasks have the same name, only one will run.
     `* <https://github.com/ansible/ansible/issues/4943>`_
   * You cannot notify a handler that is defined inside of an include
Roles are described later on, but it's worthwhile to point out that:

@ -25,6 +25,7 @@ The file lookup is the most basic lookup type.
Contents can be read off the filesystem as follows::
    ---
    - hosts: all
      vars:
        contents: "{{ lookup('file', '/etc/foo.txt') }}"
@ -240,6 +241,112 @@ If you're not using 2.0 yet, you can do something similar with the credstash too
debug: msg="Poor man's credstash lookup! {{ lookup('pipe', 'credstash -r us-west-1 get my-other-password') }}"
.. _dns_lookup:
The DNS Lookup (dig)
````````````````````
.. versionadded:: 1.9.0
.. warning:: This lookup depends on the `dnspython <http://www.dnspython.org/>`_
library.
The ``dig`` lookup runs queries against DNS servers to retrieve DNS records for
a specific name (*FQDN* - fully qualified domain name). It is possible to lookup any DNS record in this manner.
There are a couple of different syntaxes that can be used to specify what record
should be retrieved, and for which name. It is also possible to explicitly
specify the DNS server(s) to use for lookups.
In its simplest form, the ``dig`` lookup plugin can be used to retrieve an IPv4
address (DNS ``A`` record) associated with *FQDN*:
.. note:: If you need to obtain the ``AAAA`` record (IPv6 address), you must
specify the record type explicitly. Syntax for specifying the record
type is described below.
.. note:: The trailing dot in most of the examples listed is purely optional,
but is specified for completeness/correctness sake.
::
- debug: msg="The IPv4 address for example.com. is {{ lookup('dig', 'example.com.')}}"
In addition to the (default) ``A`` record, it is also possible to specify a different
record type that should be queried. This can be done by either passing-in
additional parameter of format ``qtype=TYPE`` to the ``dig`` lookup, or by
appending ``/TYPE`` to the *FQDN* being queried. For example::
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com.', 'qtype=TXT') }}"
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com./TXT') }}"
If multiple values are associated with the requested record, the results will be
returned as a comma-separated list. In such cases you may want to pass option
``wantlist=True`` to the plugin, which will result in the record values being
returned as a list over which you can iterate later on::
- debug: msg="One of the MX records for gmail.com. is {{ item }}"
with_items: "{{ lookup('dig', 'gmail.com./MX', wantlist=True) }}"
In case of reverse DNS lookups (``PTR`` records), you can also use a convenience
syntax of format ``IP_ADDRESS/PTR``. The following three lines would produce the
same output::
    - debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8/PTR') }}"
    - debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa./PTR') }}"
    - debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa.', 'qtype=PTR') }}"
By default, the lookup will rely on system-wide configured DNS servers for
performing the query. It is also possible to explicitly specify DNS servers to
query, using the ``@DNS_SERVER_1,DNS_SERVER_2,...,DNS_SERVER_N`` notation. This
needs to be passed in as an additional parameter to the lookup. For example::
    - debug: msg="Querying 8.8.8.8 for IPv4 address for example.com. produces {{ lookup('dig', 'example.com', '@8.8.8.8') }}"
In some cases the DNS records may hold a more complex data structure, or it may
be useful to obtain the results in the form of a dictionary for further
processing. The ``dig`` lookup supports parsing of a number of such records,
with the result being returned as a dictionary. This way it is possible to
easily access such nested data. This return format can be requested by
passing the ``flat=0`` option to the lookup. For example::
    - debug: msg="XMPP service for gmail.com. is available at {{ item.target }} on port {{ item.port }}"
      with_items: "{{ lookup('dig', '_xmpp-server._tcp.gmail.com./SRV', 'flat=0', wantlist=True) }}"
Take note that due to the way Ansible lookups work, you must pass the
``wantlist=True`` argument to the lookup, otherwise Ansible will report errors.

Currently, dictionary results are supported for the following records:

.. note:: *ALL* is not a record per se; the listed fields are merely available
          for any record results you retrieve in the form of a dictionary.
========== =============================================================================
Record Fields
---------- -----------------------------------------------------------------------------
*ALL* owner, ttl, type
A address
AAAA address
CNAME target
DNAME target
DLV algorithm, digest_type, key_tag, digest
DNSKEY flags, algorithm, protocol, key
DS algorithm, digest_type, key_tag, digest
HINFO cpu, os
LOC latitude, longitude, altitude, size, horizontal_precision, vertical_precision
MX preference, exchange
NAPTR order, preference, flags, service, regexp, replacement
NS target
NSEC3PARAM algorithm, flags, iterations, salt
PTR target
RP mbox, txt
SOA mname, rname, serial, refresh, retry, expire, minimum
SPF strings
SRV priority, weight, port, target
SSHFP algorithm, fp_type, fingerprint
TLSA usage, selector, mtype, cert
TXT strings
========== =============================================================================
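For instance, the *ALL* fields can be combined with the record-specific ones when processing a dictionary result; a sketch building on the SRV example above::

    - debug: msg="{{ item.owner }} (TTL {{ item.ttl }}) points at {{ item.target }}"
      with_items: "{{ lookup('dig', '_xmpp-server._tcp.gmail.com./SRV', 'flat=0', wantlist=True) }}"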
.. _more_lookups:

More Lookups
@ -96,7 +96,7 @@ And you want to print every user's name and phone number. You can loop through
Looping over Files
``````````````````

``with_file`` iterates over the content of a list of files; `item` will be set to the content of each file in sequence. It can be used like this::
    ---
    - hosts: all
@ -204,7 +204,7 @@ It might happen like so::
    - authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'"
      with_subelements:
        - "{{ users }}"
        - authorized
Given the mysql hosts and privs subkey lists, you can also iterate over a list in a nested subkey::
@ -212,7 +212,7 @@ Given the mysql hosts and privs subkey lists, you can also iterate over a list i
    - name: Setup MySQL users
      mysql_user: name={{ item.0.name }} password={{ item.0.mysql.password }} host={{ item.1 }} priv={{ item.0.mysql.privs | join('/') }}
      with_subelements:
        - "{{ users }}"
        - mysql.hosts
Subelements walks a list of hashes (aka dictionaries) and then traverses a list with a given (nested sub-)key inside of those
@ -536,11 +536,11 @@ There is also a specific lookup plugin ``inventory_hostname`` that can be used l
    # show all the hosts in the inventory
    - debug: msg={{ item }}
      with_inventory_hostnames: all

    # show all the hosts matching the pattern, ie all but the group www
    - debug: msg={{ item }}
      with_inventory_hostnames: all:!www
More information on the patterns can be found on :doc:`intro_patterns`
@ -550,8 +550,8 @@ Loops and Includes
``````````````````

In 2.0 you are able to use `with_` loops and task includes (but not playbook includes); this adds the ability to loop over the set of tasks in one shot.
There are a couple of things that you need to keep in mind: an included task that has its own `with_` loop will overwrite the value of the special `item` variable.
So if you want access to both the include's `item` and the current task's `item` you should use `set_fact` to create an alias to the outer one::
    - include: test.yml
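A fuller sketch of the aliasing pattern (the contents of test.yml shown here are illustrative)::

    # inside test.yml
    - set_fact:
        outer_item: "{{ item }}"

    - debug: msg="outer item={{ outer_item }} inner item={{ item }}"
      with_items:
        - 1
        - 2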
@ -132,7 +132,7 @@ Note that you cannot do variable substitution when including one playbook
inside another.
.. note::
    You cannot conditionally pass the location to an include file,
    like you can with 'vars_files'. If you find yourself needing to do
    this, consider how you can restructure your playbook to be more
    class/role oriented. This is to say you cannot use a 'fact' to
@ -191,11 +191,8 @@ This designates the following behaviors, for each role 'x':
- If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play
- If roles/x/vars/main.yml exists, variables listed therein will be added to the play
- If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles (1.3 and later)
- Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely
In Ansible 1.4 and later you can configure a roles_path to search for roles. Use this to check all of your common roles out to one location, and share
them easily between multiple playbook projects. See :doc:`intro_configuration` for details about how to set this up in ansible.cfg.
@ -216,8 +213,8 @@ Also, should you wish to parameterize roles, by adding variables, you can do so,
    - hosts: webservers
      roles:
        - common
        - { role: foo_app_instance, dir: '/opt/a', app_port: 5000 }
        - { role: foo_app_instance, dir: '/opt/b', app_port: 5001 }
While it's probably not something you should do often, you can also conditionally apply roles like so::
@ -230,7 +227,7 @@ While it's probably not something you should do often, you can also conditionall
This works by applying the conditional to every task in the role. Conditionals are covered later on in
the documentation.

Finally, you may wish to assign tags to the roles you specify. You can do so inline::
    ---
@ -287,7 +284,7 @@ a list of roles and parameters to insert before the specified role, such as the
    ---
    dependencies:
      - { role: common, some_parameter: 3 }
      - { role: apache, apache_port: 80 }
      - { role: postgres, dbname: blarg, other_parameter: 12 }
Role dependencies can also be specified as a full path, just like top level roles::
@ -14,8 +14,10 @@ and adopt these only if they seem relevant or useful to your environment.
   playbooks_delegation
   playbooks_environment
   playbooks_error_handling
   playbooks_advanced_syntax
   playbooks_lookups
   playbooks_prompts
   playbooks_tags
   playbooks_vault
   playbooks_startnstep
   playbooks_directives
@ -36,7 +36,8 @@ You may also apply tags to roles::
And you may also tag basic include statements::

    - include: foo.yml
      tags: [web,foo]
Both of these apply the specified tags to every task inside the included
file or role, so that these tasks can be selectively run when the playbook
@ -495,6 +495,24 @@ Here is an example of what that might look like::
In this pattern however, you could also write a fact module as well, and may wish to consider this as an option.
.. _ansible_version:

Ansible version
```````````````

.. versionadded:: 1.8

To adapt playbook behavior to a specific version of Ansible, the variable ``ansible_version`` is available, with the following
structure::

    "ansible_version": {
        "full": "2.0.0.2",
        "major": 2,
        "minor": 0,
        "revision": 0,
        "string": "2.0.0.2"
    }
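A playbook can branch on this structure; a minimal sketch::

    - debug: msg="Running under Ansible 2.x"
      when: ansible_version.major >= 2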
.. _fact_caching:

Fact Caching
@ -586,7 +604,7 @@ in Ansible. Effectively registered variables are just like facts.
.. _accessing_complex_variable_data:

Accessing Complex Variable Data
````````````````````````````````

We already talked about facts a little higher up in the documentation.
@ -730,6 +748,9 @@ As of Ansible 1.2, you can also pass in extra vars as quoted JSON, like so::
The ``key=value`` form is obviously simpler, but it's there if you need it!
.. note:: Values passed in using the ``key=value`` syntax are interpreted as strings.
          Use the JSON format if you need to pass in anything that shouldn't be a string (Booleans, integers, floats, lists etc).
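          For example (``site.yml`` and the variable name here are illustrative)::

              # "{{ fast_mode }}" arrives as the string "True" here:
              ansible-playbook site.yml --extra-vars "fast_mode=True"

              # ...but as a real boolean here:
              ansible-playbook site.yml --extra-vars '{"fast_mode": true}'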
As of Ansible 1.3, extra vars can be loaded from a JSON file with the ``@`` syntax::

    --extra-vars "@some_file.json"
@ -758,19 +779,20 @@ If multiple variables of the same name are defined in different places, they get
.. include:: ansible_ssh_changes_note.rst
In 1.x, the precedence is as follows (with the last listed variables winning prioritization):

  * "role defaults", which lose in priority to everything and are the most easily overridden
  * variables defined in inventory
  * facts discovered about a system
  * "most everything else" (command line switches, vars in play, included vars, role vars, etc.)
  * connection variables (``ansible_user``, etc.)
  * extra vars (``-e`` in the command line) always win

.. note::

    In versions prior to 1.5.4, facts discovered about a system were in the "most everything else" category above.
In 2.x, we have made the order of precedence more specific (with the last listed variables winning prioritization):

  * role defaults [1]_
  * inventory vars [2]_
@ -787,16 +809,16 @@ In 2.x we have made the order of precedence more specific (last listed wins):
  * role and include vars
  * block vars (only for tasks in block)
  * task vars (only for the task)
  * extra vars (always win precedence)
Basically, anything that goes into "role defaults" (the defaults folder inside the role) is the most malleable and easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in the namespace. The idea to follow here is that the more explicit you get in scope, the more precedence it takes, with command line ``-e`` extra vars always winning. Host and/or inventory variables can win over role defaults, but not explicit includes like the vars directory or an ``include_vars`` task.
.. rubric:: Footnotes

.. [1] Tasks in each role will see their own role's defaults. Tasks defined outside of a role will see the last role's defaults.
.. [2] Variables defined in an inventory file or provided by dynamic inventory.

.. note:: Within any section, redefining a var will overwrite the previous instance.
          If multiple groups have the same variable, the last one loaded wins.
          If you define a variable twice in a play's vars: section, the 2nd one wins.
.. note:: The previous describes the default config `hash_behaviour=replace`; switch to 'merge' to only partially overwrite.
@ -815,7 +837,7 @@ but they behave like other variables, so if you really want to override the remo
.. _variable_scopes:

Variable Scopes
````````````````
Ansible has 3 main scopes:
@ -932,6 +954,11 @@ how all of these things can work together.
.. _ansible-examples: https://github.com/ansible/ansible-examples
.. _builtin filters: http://jinja.pocoo.org/docs/templates/#builtin-filters
Advanced Syntax
```````````````
For information about advanced YAML syntax used to declare variables and have more control over the data placed in YAML files used by Ansible, see :doc:`playbooks_advanced_syntax`.
.. seealso::

   :doc:`playbooks`
@ -104,6 +104,9 @@ Alternatively, passwords can be specified with a file or a script, the script ve
The password should be a string stored as a single line in the file.
.. note::
   You can also set the ``ANSIBLE_VAULT_PASSWORD_FILE`` environment variable, e.g. ``ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt``, and Ansible will automatically search for the password in that file.
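   For example (``site.yml`` is an illustrative playbook name)::

       ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt ansible-playbook site.yml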
If you are using a script instead of a flat file, ensure that it is marked as executable, and that the password is printed to standard output. If your script needs to prompt for data, prompts can be sent to standard error.

This is something you may wish to do if using Ansible from a continuous integration system like Jenkins.
@ -0,0 +1,375 @@
Porting Guide
=============
Playbook
--------
* Backslash escapes: when specifying parameters in Jinja2 expressions in YAML
  dicts, backslashes sometimes needed to be escaped twice. This has been fixed
  in 2.0.x so that escaping once works. The following example shows how
  playbooks must be modified::
    # Syntax in 1.9.x
    - debug:
        msg: "{{ 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') }}"

    # Syntax in 2.0.x
    - debug:
        msg: "{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}"

    # Output:
    "msg": "test1 1\\3"
To make an escaped string that will work on all versions you have two options::

    - debug: msg="{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}"

This uses key=value escaping, which has not changed. The other option is to check for the Ansible version::

    "{{ (ansible_version|version_compare('ge', '2.0'))|ternary( 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') , 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') ) }}"
* Trailing newlines: when a string with a trailing newline was specified in the
  playbook via the YAML dict format, the trailing newline was stripped. When
  specified in key=value format, the trailing newlines were kept. In v2, both
  methods of specifying the string will keep the trailing newlines. If you
  relied on the trailing newline being stripped, you can change your playbook
  using the following as an example::
    # Syntax in 1.9.x
    vars:
      message: >
        Testing
        some things
    tasks:
    - debug:
        msg: "{{ message }}"

    # Syntax in 2.0.x
    vars:
      old_message: >
        Testing
        some things
      message: "{{ old_message[:-1] }}"
    tasks:
    - debug:
        msg: "{{ message }}"

    # Output
    "msg": "Testing some things"
* When specifying complex args as a variable, the variable must use the full Jinja2
  variable syntax (```{{var_name}}```) - bare variable names are no longer accepted there.
  In fact, even specifying args with variables has been deprecated, and will not be
  allowed in future versions::
    ---
    - hosts: localhost
      connection: local
      gather_facts: false
      vars:
        my_dirs:
          - { path: /tmp/3a, state: directory, mode: 0755 }
          - { path: /tmp/3b, state: directory, mode: 0700 }
      tasks:
        - file:
          args: "{{item}}" # <- args here uses the full variable syntax
          with_items: "{{my_dirs}}"
* Porting task includes:

  * Task includes are more dynamic. Corner-case formats that were not supposed to work now do not, as expected.
  * Variables defined in the YAML dict format: https://github.com/ansible/ansible/issues/13324
* Templating (variables in playbooks and template lookups) has improved with regard to keeping the original type instead of turning everything into a string.
  If you need the old behavior, quote the value to pass it around as a string.
* Empty variables and variables set to null in YAML are no longer converted to empty strings. They will retain the value of `None`.
  You can override the `null_representation` setting to an empty string in your config file, or by setting the `ANSIBLE_NULL_REPRESENTATION` environment variable.
* Extras callbacks must be whitelisted in ansible.cfg. Copying them is no longer necessary, but whitelisting in ansible.cfg is required.
* dnf module has been rewritten. Some minor changes in behavior may be observed.
* win_updates has been rewritten and works as expected now.
* From 2.0.1 onwards, the implicit setup task from gather_facts now correctly inherits everything from the play, but this might cause issues for those setting
  `environment` at the play level and depending on `ansible_env` existing. Previously this was ignored but now it might issue an 'Undefined' error.
Deprecated
----------
While all items listed here will show a deprecation warning message, they still work as they did in 1.9.x. Please note that they will be removed in 2.2 (Ansible always waits two major releases to remove a deprecated feature).
* Bare variables in `with_` loops should instead use the "{{var}}" syntax, which helps eliminate ambiguity.
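  For example (``mylist`` is an illustrative variable name)::

      # Deprecated:
      - debug: msg={{ item }}
        with_items: mylist

      # Use instead:
      - debug: msg={{ item }}
        with_items: "{{ mylist }}"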
* The ansible-galaxy text format requirements file. Users should use the YAML format for requirements instead.
* Undefined variables within a `with_` loop's list currently do not interrupt the loop, but they do issue a warning; in the future, they will issue an error.
* Using dictionary variables to set all task parameters is unsafe and will be removed in a future version. For example::

      - hosts: localhost
        gather_facts: no
        vars:
          debug_params:
            msg: "hello there"
        tasks:

          # These are both deprecated:
          - debug: "{{debug_params}}"
          - debug:
            args: "{{debug_params}}"

          # Use this instead:
          - debug:
              msg: "{{debug_params['msg']}}"
* Host patterns should use a comma (,) or colon (:) instead of a semicolon (;) to separate hosts/groups in the pattern.
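  For example (group names are illustrative)::

      # Deprecated:
      - hosts: webservers;dbservers

      # Use instead:
      - hosts: webservers:dbservers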
* Ranges specified in host patterns should use the [x:y] syntax, instead of [x-y].
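  For example::

      # Deprecated:
      - hosts: www[01-50]

      # Use instead:
      - hosts: www[01:50]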
* Playbooks using privilege escalation should always use "become*" options rather than the old su*/sudo* options.
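  For example, a minimal sketch::

      # Deprecated:
      - hosts: webservers
        sudo: yes
        sudo_user: root

      # Use instead:
      - hosts: webservers
        become: yes
        become_user: root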
* The "short form" for vars_prompt is no longer supported.
  For example::

      vars_prompt:
        variable_name: "Prompt string"
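  The equivalent supported long form spells the prompt out as a list entry::

      vars_prompt:
        - name: variable_name
          prompt: "Prompt string"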
* Specifying variables at the top level of a task include statement is no longer supported. For example::

      - include: foo.yml
        a: 1

  Should now be::

      - include: foo.yml
        vars:
          a: 1
* Setting any_errors_fatal on a task is no longer supported. This should be set at the play level only.
* Bare variables in the `environment` dictionary (for plays/tasks/etc.) are no longer supported. Variables specified there should use the full variable syntax: {{foo}}.
* Tags (or any directive) should no longer be specified with other parameters in a task include. Instead, they should be specified as an option on the task.
  For example::

      - include: foo.yml tags=a,b,c

  Should be::

      - include: foo.yml
        tags: [a, b, c]
* The first_available_file option on tasks has been deprecated. Users should use the with_first_found option or lookup (first_found, …) plugin.
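  As a sketch of the replacement (file names and destination here are illustrative)::

      - copy: src={{ item }} dest=/etc/myapp.conf
        with_first_found:
          - "{{ ansible_distribution }}.conf"
          - default.conf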
Other caveats
-------------
Here are some corner cases encountered when updating; these are mostly caused by the more stringent parser validation and the capture of errors that were previously ignored.
* Bad variable composition::

      with_items: myvar_{{rest_of_name}}

  This worked 'by accident' as the errors were retemplated and ended up resolving the variable; it was never intended as valid syntax and now properly returns an error. Use the following instead::

      with_items: "{{vars['myvar_' + rest_of_name]}}"

  Or `hostvars[inventory_hostname]['myvar_' + rest_of_name]` if appropriate.
* Misspelled directives::

      - task: dostuf
        becom: yes

  The task always ran without using privilege escalation (for that you need `become`), but the misspelled directive was also silently ignored, so the play 'ran' even though it should not have. Now this is a parsing error.
* Duplicate directives::

      - task: dostuf
        when: True
        when: False

  The first `when` was ignored and only the 2nd one was used; the play ran without warning that it was ignoring one of the directives. Now this produces a parsing error.
* Conflating variables and directives::

      - role: {name=rosy, port=435 }

      # in tasks/main.yml
      - wait_for: port={{port}}

  The `port` variable is reserved as a play/task directive for overriding the connection port. In previous versions this got conflated with a variable named `port` and was usable
  later in the play; this created issues if a host tried to reconnect or was using a non-caching connection. Now it will be correctly identified as a directive and the `port` variable
  will appear as undefined. This forces the use of non-conflicting names and removes ambiguity when adding settings and variables to a role invocation.
* Bare operations on `with_`::

      with_items: var1 + var2

  An issue with the 'bare variable' feature, which was supposed to only template a single variable without the need of braces ({{ }}), would in some versions of Ansible template full expressions.
  Now you need to use proper templating and braces for all expressions everywhere except conditionals (`when`)::

      with_items: "{{var1 + var2}}"

  The bare feature itself is deprecated as an undefined variable is indistinguishable from a string, which makes it difficult to display a proper error.
Porting plugins
===============
In ansible-1.9.x, you would generally copy an existing plugin to create a new one. Simply implementing the methods and attributes that the caller of the plugin expected made it a plugin of that type. In ansible-2.0, most plugins are implemented by subclassing a base class for each plugin type. This way the custom plugin does not need to contain methods which are not customized.
Lookup plugins
--------------
* lookup plugins ; import version
Connection plugins
------------------
* connection plugins
Action plugins
--------------
* action plugins
Callback plugins
----------------
Although Ansible 2.0 provides a new callback API, the old one continues to work
for most callback plugins. However, if your callback plugin makes use of
:attr:`self.playbook`, :attr:`self.play`, or :attr:`self.task`, then you will
have to store the values for these yourself, as Ansible no longer automatically
populates the callback with them. Here's a short snippet that shows you how::
    import os

    from ansible.plugins.callback import CallbackBase

    class CallbackModule(CallbackBase):
        def __init__(self):
            self.playbook = None
            self.playbook_name = None
            self.play = None
            self.task = None

        def v2_playbook_on_start(self, playbook):
            self.playbook = playbook
            self.playbook_name = os.path.basename(self.playbook._filename)

        def v2_playbook_on_play_start(self, play):
            self.play = play

        def v2_playbook_on_task_start(self, task, is_conditional):
            self.task = task

        def v2_on_any(self, *args, **kwargs):
            self._display.display('%s: %s: %s' % (self.playbook_name,
                self.play.name, self.task))
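To try such a plugin, place it in a directory configured as a callback plugins path; for example, in ansible.cfg (the path is illustrative)::

    [defaults]
    callback_plugins = ./callback_plugins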
Hybrid plugins
==============
In specific cases you may want a plugin that supports both ansible-1.9.x *and* ansible-2.0. Much like porting plugins from v1 to v2, you need to understand how plugins work in each version and support both requirements; it may mean playing tricks on Ansible.

Since the ansible-2.0 plugin system is more advanced, it is easier to adapt your plugin to provide similar pieces (subclasses, methods) for ansible-1.9.x as ansible-2.0 expects. This way your code will look a lot cleaner.
You may find the following tips useful:
* Check whether the ansible-2.0 class(es) are available, and if they are missing (ansible-1.9.x), mimic them with the needed methods (e.g. ``__init__``)
* When ansible-2.0 python modules are imported, and they fail (ansible-1.9.x), catch the ``ImportError`` exception and perform the equivalent imports for ansible-1.9.x, with possible translations (e.g. importing specific methods).
* Use the existence of these methods as a qualifier to what version of Ansible you are running. So rather than using version checks, you can do capability checks instead. (See examples below.)
* Document each if-then-else case and for which specific version each block is needed. This will help others understand how they have to adapt their plugins, but it will also help you remove the older ansible-1.9.x support when it is deprecated.
* When doing plugin development, it is very useful to have the ``warning()`` method during development, but it is also important to emit warnings for dead ends (cases that you expect should never be triggered) or corner cases (e.g. cases where you expect misconfigurations).
* It helps to look at other plugins in ansible-1.9.x and ansible-2.0 to understand how the API works and what modules, classes and methods are available.
Lookup plugins
--------------
As a simple example we are going to make a hybrid ``fileglob`` lookup plugin. The ``fileglob`` lookup plugin is pretty simple to understand::
    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type

    import os
    import glob

    try:
        # ansible-2.0
        from ansible.plugins.lookup import LookupBase
    except ImportError:
        # ansible-1.9.x
        class LookupBase(object):
            def __init__(self, basedir=None, runner=None, **kwargs):
                self.runner = runner
                self.basedir = self.runner.basedir

            def get_basedir(self, variables):
                return self.basedir

    try:
        # ansible-1.9.x
        from ansible.utils import (listify_lookup_plugin_terms, path_dwim, warning)
    except ImportError:
        # ansible-2.0
        from __main__ import display
        warning = display.warning

    class LookupModule(LookupBase):

        # For ansible-1.9.x, we added inject=None as a valid argument
        def run(self, terms, inject=None, variables=None, **kwargs):

            # ansible-2.0, but we made this work for ansible-1.9.x too!
            basedir = self.get_basedir(variables)

            # ansible-1.9.x
            if 'listify_lookup_plugin_terms' in globals():
                terms = listify_lookup_plugin_terms(terms, basedir, inject)

            ret = []
            for term in terms:
                term_file = os.path.basename(term)

                # For ansible-1.9.x, we imported path_dwim() from ansible.utils
                if 'path_dwim' in globals():
                    # ansible-1.9.x
                    dwimmed_path = path_dwim(basedir, os.path.dirname(term))
                else:
                    # ansible-2.0
                    dwimmed_path = self._loader.path_dwim_relative(basedir, 'files', os.path.dirname(term))

                globbed = glob.glob(os.path.join(dwimmed_path, term_file))
                ret.extend(g for g in globbed if os.path.isfile(g))

            return ret
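Saved as ``lookup_plugins/fileglob.py`` next to a playbook (an illustrative location searched by default), the hybrid plugin can then be exercised the same way under both versions::

    - debug: msg={{ item }}
      with_fileglob:
        - "/etc/ansible/*.cfg"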
.. note:: In the above example we did not use the ``warning()`` method as we had no direct use for it in the final version. However, we left this code in so people can use it during development/porting.
Connection plugins
------------------
* connection plugins
Action plugins
--------------
* action plugins
Callback plugins
----------------
* callback plugins
Porting custom scripts
======================
Custom scripts that used the ``ansible.runner.Runner`` API in 1.x have to be ported for 2.x. Please refer to:
https://github.com/ansible/ansible/blob/devel/docsite/rst/developing_api.rst
@ -3,7 +3,7 @@ Quickstart Video
We've recorded a short video that shows how to get started with Ansible that you may like to use alongside the documentation.

The `quickstart video <http://www.ansible.com/videos>`_ is about 30 minutes long and will show you some of the basics about your
first steps with Ansible.

Enjoy, and be sure to visit the rest of the documentation to learn more.
@ -1,7 +1,7 @@
# config file for ansible -- http://ansible.com/
# ==============================================

# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
@ -14,7 +14,6 @@
#inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
#remote_tmp = $HOME/.ansible/tmp
#forks = 5
#poll_interval = 15
#sudo_user = root
@ -32,6 +31,18 @@
# explicit - do not gather by default, must say gather_facts: True
#gathering = implicit
# by default retrieve all facts subsets
# all - gather all subsets
# network - gather min and network facts
# hardware - gather hardware facts (longest facts to retrieve)
# virtual - gather min and virtual facts
# facter - import facts from facter
# ohai - import facts from ohai
# You can combine them using comma (ex: network,virtual)
# You can negate them using ! (ex: !hardware,!facter,!ohai)
# A minimal set of facts is always gathered.
#gather_subset = all
# additional paths to search for roles in, colon separated
#roles_path = /etc/ansible/roles
@ -43,6 +54,13 @@
# enable additional callbacks
#callback_whitelist = timer, mail
# Determine whether includes in tasks and handlers are "static" by
# default. As of 2.0, includes are dynamic by default. Setting these
# values to True will make includes behave more like they did in the
# 1.x versions.
#task_includes_static = True
#handler_includes_static = True
# change this for alternative sudo implementations
#sudo_exe = sudo
@ -82,7 +100,7 @@
# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file
@ -94,12 +112,22 @@
#ansible_managed = Ansible managed: {file} on {host}

# by default, ansible-playbook will display "Skipping [host]" if it determines a task
# should not be run on a host. Set this to "False" if you don't want to see these "Skipping"
# messages. NOTE: the task header will still be shown regardless of whether or not the
# task is skipped.
#display_skipped_hosts = True
# by default, if a task in a playbook does not include a name: field then
# ansible-playbook will construct a header that includes the task's action but
# not the task's args. This is a security feature because ansible cannot know
# if the *module* considers an argument to be no_log at the time that the
# header is printed. If your environment doesn't have a problem securing
# stdout from ansible-playbook (or you have manually specified no_log in your
# playbook on all of the tasks where you have secret information) then you can
# safely set this to True to get more informative messages.
#display_args_to_stdout = False

# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False
@ -118,7 +146,7 @@
# (as of 1.8), Ansible can optionally warn when usage of the shell and
# command module appear to be simplified by using a default Ansible module
# instead. These warnings can be silenced by adjusting the following
# setting or adding warn=yes or warn=no to the end of the command line
# parameter string. This will for example suggest using the git module
# instead of shelling out to the git command.
# command_warnings = False
@ -132,15 +160,16 @@
#vars_plugins = /usr/share/ansible/plugins/vars
#filter_plugins = /usr/share/ansible/plugins/filter
#test_plugins = /usr/share/ansible/plugins/test
#strategy_plugins = /usr/share/ansible/plugins/strategy
# by default callbacks are not loaded for /bin/ansible, enable this if you
# want, for example, a notification or logging callback to also apply to
# /bin/ansible runs
#bin_ansible_callbacks = False

# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1

# set which cowsay stencil you'd like to use by default. When set to 'random',
@ -177,18 +206,36 @@
#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry
# squash actions
# Ansible can optimise actions that call modules with list parameters
# when looping. Instead of calling the module once per with_ item, the
# module is called once with all items at once. Currently this only works
# under limited circumstances, and only with parameters named 'name'.
#squash_actions = apk,apt,dnf,package,pacman,pkgng,yum,zypper
# prevents logging of task data, off by default
#no_log = False

# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
#no_target_syslog = False
# controls whether Ansible will raise an error or warning if a task has no
# choice but to create world readable temporary files to execute a module on
# the remote machine. This option is False by default for security. Users may
# turn this on to have behaviour more like Ansible prior to 2.1.x. See
# https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
# for more secure ways to fix this than enabling this option.
#allow_world_readable_tmpfiles = False
# controls the compression level of variables sent to
# worker processes. At the default of 0, no compression
# is used. This value must be an integer from 0 to 9.
#var_compression_level = 9
# This controls the cutoff point (in bytes) on --diff for files
# set to 0 for unlimited (RAM may suffer!).
#max_diff_size = 1048576
[privilege_escalation]
#become=True
#become_method=sudo
@ -209,32 +256,32 @@
[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it
#ssh_args = -o ControlMaster=auto -o ControlPersist=60s

# The path to use for the ControlPath sockets. This defaults to
# "%(directory)s/ansible-ssh-%%h-%%p-%%r", however on some systems with
# very long hostnames or very long path names (caused by long user names or
# deeply nested home directories) this can exceed the character limit on
# file socket names (108 characters for most platforms). In that case, you
# may wish to shorten the string below.
#
# Example:
# control_path = %(directory)s/%%h-%%r
#control_path = %(directory)s/ansible-ssh-%%h-%%p-%%r

# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
# performance improvement when enabled, however when using "sudo:" you must
# first disable 'requiretty' in /etc/sudoers
#
# By default, this option is disabled to preserve compatibility with
# sudoers configurations that have requiretty (the default on many distros).
#
#pipelining = False

# if True, make ansible use scp if the connection type is ssh
# (default is sftp)
#scp_if_ssh = True
@ -250,7 +297,7 @@
# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
#accelerate_daemon_timeout = 30

# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
@ -263,3 +310,21 @@
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse,ramfs
# Set this to yes to allow libvirt_lxc connections to work without SELinux.
#libvirt_lxc_noseclabel = yes
[colors]
#highlight = white
#verbose = blue
#warn = bright purple
#error = red
#debug = dark gray
#deprecate = purple
#skip = cyan
#unreachable = red
#ok = green
#changed = yellow
#diff_add = green
#diff_remove = red
#diff_lines = cyan
@ -10,35 +10,35 @@
# Ex 1: Ungrouped hosts, specify before any group headers.

## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10

# Ex 2: A collection of hosts belonging to the 'webservers' group

## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110

# If you have multiple hosts following a pattern you can specify
# them like this:

## www[001:006].example.com

# Ex 3: A collection of database servers in the 'dbservers' group

## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57

# Here's another example of host ranges, this time there are no
# leading 0s:

## db-[99:101]-node.example.com