Merge remote-tracking branch 'upstream/devel' into devel

Conflicts:
	library/files/stat

Resolved: Using theirs
pull/10224/head
Tongliang Liu
commit 2f3dff95f5

.gitmodules

@ -0,0 +1,16 @@
[submodule "lib/ansible/modules/core"]
path = lib/ansible/modules/core
url = https://github.com/ansible/ansible-modules-core.git
branch = devel
[submodule "lib/ansible/modules/extras"]
path = lib/ansible/modules/extras
url = https://github.com/ansible/ansible-modules-extras.git
branch = devel
[submodule "v2/ansible/modules/core"]
path = v2/ansible/modules/core
url = https://github.com/ansible/ansible-modules-core.git
branch = devel
[submodule "v2/ansible/modules/extras"]
path = v2/ansible/modules/extras
url = https://github.com/ansible/ansible-modules-extras.git
branch = devel
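
As the README change later in this commit notes, a fresh checkout now needs "git submodule update --init --recursive" to populate these four paths. A quick sketch for inspecting what that fetch will cover, assuming it runs from a checkout containing the .gitmodules above:

    import subprocess

    # 'git config --file <file> --list' flattens the INI-style sections above
    # into submodule.<name>.{path,url,branch}=<value> lines
    out = subprocess.check_output(['git', 'config', '--file', '.gitmodules', '--list'])
    for line in out.splitlines():
        print line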

@ -1,7 +1,39 @@
Ansible Changes By Release
==========================
## 1.8 "You Really Got Me" - Active Development
## 1.9 "Dancing In the Street" - ACTIVE DEVELOPMENT
in progress, details pending
* Add a clone parameter to git module that allows you to get information about a remote repo even if it doesn't exist locally.
* Safety changes: several modules have force parameters that defaulted to true.
These have been changed to default to false so as not to accidentally lose
work. Playbooks that depended on the former behaviour simply need to add
force=True to the task that needs it. Affected modules (a sketch of the new default follows this list):
* bzr: When local modifications exist in a checkout, the bzr module used to
default to removing the modifications on any operation. Now the module
will not remove the modifications unless force=yes is specified.
Operations that depend on a clean working tree may fail unless force=yes is
added.
* git: When local modifications exist in a checkout, the git module will now
fail unless force is explicitly specified. Specifying force will allow the
module to revert and overwrite local modifications to make git actions
succeed.
* hg: When local modifications exist in a checkout, the hg module used to
default to removing the modifications on any operation. Now the module
will not remove the modifications unless force=yes is specified.
* subversion: When updating a checkout with local modifications, you now need
to add force so the module will revert the modifications before updating.
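A minimal sketch of the pattern these modules now follow; has_local_mods is a hypothetical stand-in for each module's real working-tree check:

    def has_local_mods(dest):
        # hypothetical stand-in for the module's real check
        return True

    force = False  # the new default; these modules previously acted as force=True

    if has_local_mods('/srv/checkout') and not force:
        raise SystemExit("local modifications exist in checkout, refusing to discard them (add force=yes)")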
## 1.8.1 "You Really Got Me" - Nov 26, 2014
* Various bug fixes in postgresql and mysql modules.
* Fixed a bug related to lookup plugins used within roles not finding files based on the relative paths to the roles files/ directory.
* Fixed a bug related to vars specified in plays being templated too early, resulting in incorrect variable interpolation.
* Fixed a bug related to git submodules in bare repos.
## 1.8 "You Really Got Me" - Nov 25, 2014
Major changes:
@ -16,6 +48,10 @@ Major changes:
* command_warnings feature will warn about when usage of the shell/command module can be simplified to use core modules - this can be enabled in ansible.cfg
* new omit value can be used to leave off a parameter when not set, like so module_name: a=1 b={{ c | default(omit) }}, would not pass value for b (not even an empty value) if c was not set.
* developers: 'baby JSON' in module responses, originally intended for writing modules in bash, is removed as a feature to simplify logic, script module remains available for running bash scripts.
* async jobs started in "fire & forget" mode can now be checked on at a later time.
* added ability to subcategorize modules for docs.ansible.com
* added ability for shipped modules to have aliases with symlinks
* added ability to deprecate older modules by starting with "_" and including "deprecated: message why" in module docs
New Modules:
@ -31,6 +67,9 @@ New Modules:
Some other notable changes:
* added the ability to set "instance filters" in the ec2.ini to limit results from the inventory plugin.
* upgrades for various variable precedence items and parsing related items
* added a new "follow" parameter to the file and copy modules, which allows actions to be taken on the target of a symlink rather than the symlink itself.
* if a module should ever traceback, it will return a standard error, catchable by ignore_errors, versus an 'unreachable'
* ec2_lc: added support for multiple new parameters like kernel_id, ramdisk_id and ebs_optimized.
* ec2_elb_lb: added support for the connection_draining_timeout and cross_az_load_balancing options.
@ -53,10 +92,42 @@ Some other notable changes:
* various parser improvements
* produce a friendly error message if the SSH key is too permissive
* ec2_ami_search: support for SSD and IOPS provisioned EBS images
* can set ansible_sudo_exe as an inventory variable which allows specifying
a different sudo (or equivalent) command
* git module: Submodule handling has changed. Previously if you used the
``recursive`` parameter to handle submodules, ansible would track the
submodule upstream's head revision. This has been changed to checkout the
version of the submodule specified in the superproject's git repository.
This is in line with what git submodule update does. If you want the old
behaviour use the new module parameter track_submodules=yes
* Checksumming of transferred files has been made more portable and now uses
the sha1 algorithm instead of md5 to be compatible with FIPS-140 (a sketch follows this list).
- As a small side effect, the fetch module no longer returns a useful value
in remote_md5. If you need a replacement, switch to using remote_checksum
which returns the sha1sum of the remote file.
* ansible-doc CLI tool contains various improvements for working with different terminals
And various other bug fixes and improvements ...
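A sketch of the kind of checksum helper this implies (illustrative, not the exact utils code):

    import hashlib

    def secure_hash(path, blocksize=65536):
        # sha1 rather than md5, so the hash is usable on FIPS-140 enabled hosts;
        # read in blocks so large files are not loaded into memory at once
        digest = hashlib.sha1()
        with open(path, 'rb') as f:
            block = f.read(blocksize)
            while block:
                digest.update(block)
                block = f.read(blocksize)
        return digest.hexdigest()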
## 1.7.2 "Summer Nights" - Sep 24, 2014
- Fixes a bug in accelerate mode which caused a traceback when trying to use that connection method.
- Fixes a bug in vault where the password file option was not being used correctly internally.
- Improved multi-line parsing when using YAML literal blocks (using > or |).
- Fixed a bug with the file module and the creation of relative symlinks.
- Fixed a bug where checkmode was not being honored during the templating of files.
- Other various bug fixes.
## 1.7.1 "Summer Nights" - Aug 14, 2014
- Security fix to disallow specifying 'args:' as a string, which could allow the insertion of extra module parameters through variables.
- Performance enhancements related to previous security fixes, which could cause slowness when modules returned very large JSON results. This specifically impacted the unarchive module frequently, which returns the details of all unarchived files in the result.
- Docker module bug fixes:
* Fixed support for specifying rw/ro bind modes for volumes
* Fixed support for allowing the tag in the image parameter
- Various other bug fixes
## 1.7 "Summer Nights" - Aug 06, 2014
Major new features:

@ -1,11 +1,11 @@
include README.md packaging/rpm/ansible.spec COPYING
include examples/hosts
include examples/ansible.cfg
graft examples/playbooks
include packaging/distutils/setup.py
include lib/ansible/module_utils/powershell.ps1
recursive-include lib/ansible/modules *
recursive-include docs *
recursive-include library *
include Makefile
include VERSION
include MANIFEST.in
prune lib/ansible/modules/core/.git
prune lib/ansible/modules/extras/.git

@ -10,7 +10,7 @@
# make deb-src -------------- produce a DEB source
# make deb ------------------ produce a DEB
# make docs ----------------- rebuild the manpages (results are checked in)
# make tests ---------------- run the tests
# make tests ---------------- run the tests (see test/README.md for requirements)
# make pyflakes, make pep8 -- source code checks
########################################################
@ -86,12 +86,20 @@ MOCK_CFG ?=
NOSETESTS ?= nosetests
NOSETESTS3 ?= nosetests-3.3
########################################################
all: clean python
tests:
PYTHONPATH=./lib ANSIBLE_LIBRARY=./library $(NOSETESTS) -d -w test/units -v
PYTHONPATH=./lib $(NOSETESTS) -d -w test/units -v # Could do: --with-coverage --cover-package=ansible
newtests:
PYTHONPATH=./v2:./lib $(NOSETESTS) -d -w v2/test -v --with-coverage --cover-package=ansible --cover-branches
newtests-py3:
PYTHONPATH=./v2:./lib $(NOSETESTS3) -d -w v2/test -v --with-coverage --cover-package=ansible --cover-branches
authors:
sh hacking/authors.sh
@ -114,7 +122,7 @@ pep8:
@echo "# Running PEP8 Compliance Tests"
@echo "#############################################"
-pep8 -r --ignore=E501,E221,W291,W391,E302,E251,E203,W293,E231,E303,E201,E225,E261,E241 lib/ bin/
-pep8 -r --ignore=E501,E221,W291,W391,E302,E251,E203,W293,E231,E303,E201,E225,E261,E241 --filename "*" library/
# -pep8 -r --ignore=E501,E221,W291,W391,E302,E251,E203,W293,E231,E303,E201,E225,E261,E241 --filename "*" library/
pyflakes:
pyflakes lib/ansible/*.py lib/ansible/*/*.py bin/*

@ -4,22 +4,25 @@
Ansible
=======
Ansible is a radically simple configuration-management, application deployment, task-execution, and multinode orchestration engine.
Ansible is a radically simple IT automation system. It handles configuration-management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration - including trivializing things like zero downtime rolling updates with load balancers.
Read the documentation and more at http://ansible.com/
Many users run straight from the development branch (it's generally fine to do so), but you might also wish to consume a release. You can find
instructions [here](http://docs.ansible.com/intro_getting_started.html) for a variety of platforms. If you want a tarball of the last release, go to [releases.ansible.com](http://releases.ansible.com/ansible) and you can also install with pip.
Many users run straight from the development branch (it's generally fine to do so), but you might also wish to consume a release.
You can find instructions [here](http://docs.ansible.com/intro_getting_started.html) for a variety of platforms. If you decide to go with the development branch, be sure to run "git submodule update --init --recursive" after doing a checkout.
If you want to download a tarball of a release, go to [releases.ansible.com](http://releases.ansible.com/ansible), though most users use yum (using the EPEL instructions linked above), apt (using the PPA instructions linked above), or "pip install ansible".
Design Principles
=================
* Have a dead simple setup process and a minimal learning curve
* Be super fast & parallel by default
* Require no server or client daemons; use existing SSHd
* Use a language that is both machine and human friendly
* Manage machines very quickly and in parallel
* Avoid custom-agents and additional open ports, be agentless by leveraging the existing SSH daemon
* Describe infrastructure in a language that is both machine and human friendly
* Focus on security and easy auditability/review/rewriting of content
* Manage remote machines instantly, without bootstrapping
* Manage new remote machines instantly, without bootstrapping any software
* Allow module development in any dynamic language, not just Python
* Be usable as non-root
* Be the easiest IT automation system to use, ever.
@ -27,8 +30,11 @@ Design Principles
Get Involved
============
* Read [Contributing.md](https://github.com/ansible/ansible/blob/devel/CONTRIBUTING.md) for all kinds of ways to contribute to and interact with the project, including mailing list information and how to submit bug reports and code to Ansible.
* Read [Community Information](http://docs.ansible.com/community.html) for all kinds of ways to contribute to and interact with the project, including mailing list information and how to submit bug reports and code to Ansible.
* All code submissions are done through pull requests. Take care to make sure no merge commits are in the submission, and use "git rebase" vs "git merge" for this reason. If submitting a large code change (other than modules), it's probably a good idea to join ansible-devel and talk about what you would like to do or add first and to avoid duplicate efforts. This not only helps everyone know what's going on, it also helps save time and effort if we decide some changes are needed.
* Users list: [ansible-project](http://groups.google.com/group/ansible-project)
* Development list: [ansible-devel](http://groups.google.com/group/ansible-devel)
* Announcement list: [ansible-announce](http://groups.google.com/group/ansible-announce) - read only
* irc.freenode.net: #ansible
Branch Info
@ -36,14 +42,14 @@ Branch Info
* Releases are named after Van Halen songs.
* The devel branch corresponds to the release actively under development.
* As of 1.8, modules are kept in different repos, you'll want to follow [core](https://github.com/ansible/ansible-modules-core) and [extras](https://github.com/ansible/ansible-modules-extras)
* Various release-X.Y branches exist for previous releases.
* We'd love to have your contributions, read "CONTRIBUTING.md" for process notes.
* We'd love to have your contributions, read [Community Information](http://docs.ansible.com/community.html) for notes on how to get started.
Author
======
Authors
=======
Ansible was created by Michael DeHaan (michael@ansible.com) and has contributions from over
800 users (and growing). Thanks everyone!
Ansible was created by [Michael DeHaan](https://github.com/mpdehaan) (michael.dehaan/gmail/com) and has contributions from over 900 users (and growing). Thanks everyone!
[Ansible, Inc](http://ansible.com)
Ansible is sponsored by [Ansible, Inc](http://ansible.com)

@ -4,11 +4,14 @@ Ansible Releases at a Glance
Active Development
++++++++++++++++++
1.8 "You Really Got Me" ---- FALL 2014
1.9 "Dancing In the Street" - in progress
Released
++++++++
1.8.1 "You Really Got Me" -- 11-26-2014
1.7.2 "Summer Nights" -------- 09-24-2014
1.7.1 "Summer Nights" -------- 08-14-2014
1.7 "Summer Nights" -------- 08-06-2014
1.6.10 "The Cradle Will Rock" - 07-25-2014
1.6.9 "The Cradle Will Rock" - 07-24-2014

@ -1 +1 @@
1.8
1.9

@ -19,6 +19,17 @@
########################################################
__requires__ = ['ansible']
try:
import pkg_resources
except Exception:
# Use pkg_resources to find the correct versions of libraries and set
# sys.path appropriately when there are multiversion installs. But we
# have code that better expresses the errors in the places where the code
# is actually used (the deps are optional for many code paths) so we don't
# want to fail here.
pass
import os
import sys
@ -90,26 +101,6 @@ class Cli(object):
pattern = args[0]
"""
inventory_manager = inventory.Inventory(options.inventory)
if options.subset:
inventory_manager.subset(options.subset)
hosts = inventory_manager.list_hosts(pattern)
if len(hosts) == 0:
callbacks.display("No hosts matched", stderr=True)
sys.exit(0)
if options.listhosts:
for host in hosts:
callbacks.display(' %s' % host)
sys.exit(0)
if ((options.module_name == 'command' or options.module_name == 'shell')
and not options.module_args):
callbacks.display("No argument passed to %s module" % options.module_name, color='red', stderr=True)
sys.exit(1)
"""
sshpass = None
sudopass = None
su_pass = None
@ -129,6 +120,8 @@ class Cli(object):
if not options.ask_vault_pass and options.vault_password_file:
vault_pass = utils.read_vault_file(options.vault_password_file)
extra_vars = utils.parse_extra_vars(options.extra_vars, vault_pass)
inventory_manager = inventory.Inventory(options.inventory, vault_password=vault_pass)
if options.subset:
inventory_manager.subset(options.subset)
@ -177,7 +170,8 @@ class Cli(object):
su=options.su,
su_pass=su_pass,
su_user=options.su_user,
vault_pass=vault_pass
vault_pass=vault_pass,
extra_vars=extra_vars,
)
if options.seconds:

@ -25,6 +25,10 @@ import re
import optparse
import datetime
import subprocess
import fcntl
import termios
import struct
from ansible import utils
from ansible.utils import module_docs
import ansible.constants as C
@ -33,7 +37,8 @@ import traceback
MODULEDIR = C.DEFAULT_MODULE_PATH
BLACKLIST_EXTS = ('.swp', '.bak', '~', '.rpm')
BLACKLIST_EXTS = ('.pyc', '.swp', '.bak', '~', '.rpm')
IGNORE_FILES = [ "COPYING", "CONTRIBUTING", "LICENSE", "README", "VERSION"]
_ITALIC = re.compile(r"I\(([^)]+)\)")
_BOLD = re.compile(r"B\(([^)]+)\)")
@ -70,7 +75,7 @@ def pager(text):
pager_print(text)
else:
pager_pipe(text, os.environ['PAGER'])
elif hasattr(os, 'system') and os.system('(less) 2> /dev/null') == 0:
elif subprocess.call('(less --version) 2> /dev/null', shell = True) == 0:
pager_pipe(text, 'less')
else:
pager_print(text)
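Probing "less --version" yields a reliable zero exit status whenever less is installed, where bare "(less)" with no input does not; the same detection as a standalone sketch:

    import subprocess

    # exit status 0 means a usable less binary is on PATH
    have_less = subprocess.call('(less --version) 2> /dev/null', shell=True) == 0
    print have_less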
@ -94,7 +99,7 @@ def get_man_text(doc):
desc = " ".join(doc['description'])
text.append("%s\n" % textwrap.fill(tty_ify(desc), initial_indent=" ", subsequent_indent=" "))
if 'option_keys' in doc and len(doc['option_keys']) > 0:
text.append("Options (= is mandatory):\n")
@ -164,7 +169,15 @@ def get_snippet_text(doc):
return "\n".join(text)
def get_module_list_text(module_list):
tty_size = 0
if os.isatty(0):
tty_size = struct.unpack('HHHH',
fcntl.ioctl(0, termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0)))[1]
columns = max(60, tty_size)
displace = max(len(x) for x in module_list)
linelimit = columns - displace - 5
text = []
deprecated = []
for module in sorted(set(module_list)):
if module in module_docs.BLACKLIST_MODULES:
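The fcntl/termios calls above are the standard Unix way to ask the kernel for the window size of a tty; as a standalone sketch:

    import fcntl, os, struct, termios

    def tty_columns(default=60):
        # TIOCGWINSZ fills a struct winsize: rows, cols, xpixel, ypixel
        if not os.isatty(0):
            return default
        winsize = fcntl.ioctl(0, termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0))
        rows, cols, _, _ = struct.unpack('HHHH', winsize)
        return max(default, cols)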
@ -181,15 +194,45 @@ def get_module_list_text(module_list):
try:
doc, plainexamples = module_docs.get_docstring(filename)
desc = tty_ify(doc.get('short_description', '?'))
if len(desc) > 55:
desc = desc + '...'
text.append("%-20s %-60.60s" % (module, desc))
desc = tty_ify(doc.get('short_description', '?')).strip()
if len(desc) > linelimit:
desc = desc[:linelimit] + '...'
if module.startswith('_'): # Handle deprecated
deprecated.append("%-*s %-*.*s" % (displace, module[1:], linelimit, len(desc), desc))
else:
text.append("%-*s %-*.*s" % (displace, module, linelimit, len(desc), desc))
except:
traceback.print_exc()
sys.stderr.write("ERROR: module %s has a documentation error formatting or is missing documentation\n" % module)
if len(deprecated) > 0:
text.append("\nDEPRECATED:")
text.extend(deprecated)
return "\n".join(text)
def find_modules(path, module_list):
    if os.path.isdir(path):
        for module in os.listdir(path):
            if module.startswith('.'):
                continue
            elif os.path.isdir(os.path.join(path, module)):
                find_modules(os.path.join(path, module), module_list)
            elif any(module.endswith(x) for x in BLACKLIST_EXTS):
                continue
            elif module.startswith('__'):
                continue
            elif module in IGNORE_FILES:
                continue
            elif module.startswith('_'):
                fullpath = '/'.join([path, module])
                if os.path.islink(fullpath):  # avoids aliases
                    continue

            module = os.path.splitext(module)[0]  # removes the extension
            module_list.append(module)
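Hypothetical usage of the walk above, printing the deduplicated module names:

    module_list = []
    find_modules('/usr/share/ansible', module_list)  # hypothetical module dir
    print '\n'.join(sorted(set(module_list)))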
def main():
p = optparse.OptionParser(
@ -222,23 +265,18 @@ def main():
utils.plugins.module_finder.add_directory(i)
if options.list_dir:
# list all modules
# list modules
paths = utils.plugins.module_finder._get_paths()
module_list = []
for path in paths:
# os.system("ls -C %s" % (path))
if os.path.isdir(path):
for module in os.listdir(path):
if any(module.endswith(x) for x in BLACKLIST_EXTS):
continue
module_list.append(module)
find_modules(path, module_list)
pager(get_module_list_text(module_list))
sys.exit()
if len(args) == 0:
p.print_help()
def print_paths(finder):
''' Returns a string suitable for printing of the search path '''
@ -248,14 +286,13 @@ def main():
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
text = ''
for module in args:
filename = utils.plugins.module_finder.find_plugin(module)
if filename is None:
sys.stderr.write("module %s not found in %s\n" % (module,
print_paths(utils.plugins.module_finder)))
sys.stderr.write("module %s not found in %s\n" % (module, print_paths(utils.plugins.module_finder)))
continue
if any(filename.endswith(x) for x in BLACKLIST_EXTS):

@ -48,6 +48,9 @@ galaxy_info:
author: {{ author }}
description: {{description}}
company: {{ company }}
# If the issue tracker for your role is not on github, uncomment the
# next line and provide a value
# issue_tracker_url: {{ issue_tracker_url }}
# Some suggested licenses:
# - BSD (default)
# - MIT
@ -135,6 +138,7 @@ An optional section for the role authors to include contact information, or a we
#-------------------------------------------------------------------------------------
VALID_ACTIONS = ("init", "info", "install", "list", "remove")
SKIP_INFO_KEYS = ("platforms","readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url" )
def get_action(args):
"""
@ -237,6 +241,7 @@ def exit_without_ignore(options, rc=1):
print '- you can use --ignore-errors to skip failed roles.'
sys.exit(rc)
#-------------------------------------------------------------------------------------
# Galaxy API functions
#-------------------------------------------------------------------------------------
@ -257,7 +262,7 @@ def api_get_config(api_server):
except:
return None
def api_lookup_role_by_name(api_server, role_name):
def api_lookup_role_by_name(api_server, role_name, notify=True):
"""
Uses the Galaxy API to do a lookup on the role owner/name.
"""
@ -268,7 +273,8 @@ def api_lookup_role_by_name(api_server, role_name):
parts = role_name.split(".")
user_name = ".".join(parts[0:-1])
role_name = parts[-1]
print "- downloading role '%s', owned by %s" % (role_name, user_name)
if notify:
print "- downloading role '%s', owned by %s" % (role_name, user_name)
except:
parser.print_help()
print "- invalid role name (%s). Specify role as format: username.rolename" % role_name
@ -377,7 +383,7 @@ def scm_archive_role(scm, role_url, role_version, role_name):
print " in directory %s" % tempdir
return False
shutil.rmtree(tempdir)
shutil.rmtree(tempdir, ignore_errors=True)
return temp_file.name
@ -640,7 +646,7 @@ def execute_init(args, options, parser):
categories = []
if not offline:
categories = api_get_list(api_server, "categories") or []
# group the list of platforms from the api based
# on their names, with the release field being
# appended to a list of versions
@ -653,6 +659,7 @@ def execute_init(args, options, parser):
author = 'your name',
company = 'your company (optional)',
license = 'license (GPLv2, CC-BY, etc)',
issue_tracker_url = 'http://example.com/issue/tracker',
min_ansible_version = '1.2',
platforms = platform_groups,
categories = categories,
@ -676,7 +683,56 @@ def execute_info(args, options, parser):
from the galaxy API.
"""
pass
if len(args) == 0:
# the user needs to specify a role
parser.print_help()
print "- you must specify a user/role name"
sys.exit(1)
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
api_config = api_get_config(api_server)
roles_path = get_opt(options, "roles_path")
for role in args:
role_info = {}
install_info = get_galaxy_install_info(role, options)
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = api_lookup_role_by_name(api_server, role, False)
if remote_data:
role_info.update(remote_data)
metadata = get_role_metadata(role, options)
if metadata:
role_info.update(metadata)
role_spec = ansible.utils.role_spec_parse(role)
if role_spec:
role_info.update(role_spec)
if role_info:
print "- %s:" % (role)
for k in sorted(role_info.keys()):
if k in SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
print "\t%s: " % (k)
for key in sorted(role_info[k].keys()):
if key in SKIP_INFO_KEYS:
continue
print "\t\t%s: %s" % (key, role_info[k][key])
else:
print "\t%s: %s" % (k, role_info[k])
else:
print "- the role %s was not found" % role
def execute_install(args, options, parser):
"""
@ -687,30 +743,23 @@ def execute_install(args, options, parser):
"""
role_file = get_opt(options, "role_file", None)
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
no_deps = get_opt(options, "no_deps", False)
roles_path = get_opt(options, "roles_path")
if len(args) == 0 and not role_file:
if len(args) == 0 and role_file is None:
# the user needs to specify one of either --role-file
# or specify a single user/role name
parser.print_help()
print "- you must specify a user/role name or a roles file"
sys.exit()
elif len(args) == 1 and role_file:
elif len(args) == 1 and not role_file is None:
# using a role file is mutually exclusive of specifying
# the role name on the command line
parser.print_help()
print "- please specify a user/role name, or a roles file, but not both"
sys.exit(1)
# error checking to ensure the specified roles path exists and is a directory
if not os.path.exists(roles_path):
print "- the specified role path %s does not exist" % roles_path
sys.exit(1)
elif not os.path.isdir(roles_path):
print "- the specified role path %s is not a directory" % roles_path
sys.exit(1)
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
no_deps = get_opt(options, "no_deps", False)
roles_path = get_opt(options, "roles_path")
roles_done = []
if role_file:
@ -759,10 +808,11 @@ def execute_install(args, options, parser):
role_data = api_lookup_role_by_name(api_server, role_src)
if not role_data:
print "- sorry, %s was not found on %s." % (role_src, api_server)
exit_without_ignore(options)
continue
role_versions = api_fetch_role_related(api_server, 'versions', role_data['id'])
if "version" not in role:
if "version" not in role or role['version'] == '':
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
# are no versions in the list, we'll grab the head
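A sketch of the version pick the comment above describes, presumably via distutils' LooseVersion (illustrative values):

    from distutils.version import LooseVersion

    versions = ['1.9', '1.10', '1.2.3']  # hypothetical tag names from the API
    latest = sorted(versions, key=LooseVersion)[-1]
    print "- installing version %s" % latest  # picks 1.10, not 1.9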
@ -787,7 +837,8 @@ def execute_install(args, options, parser):
if tmp_file:
installed = install_role(role.get("name"), role.get("version"), tmp_file, options)
# we're done with the temp file, clean it up
os.unlink(tmp_file)
if tmp_file != role_src:
os.unlink(tmp_file)
# install dependencies, if we want them
if not no_deps and installed:
if not role_data:
@ -809,8 +860,6 @@ def execute_install(args, options, parser):
else:
print '- dependency %s is already installed, skipping.' % dep["name"]
if not tmp_file or not installed:
if tmp_file and installed:
os.unlink(tmp_file)
print "- %s was NOT installed successfully." % role.get("name")
exit_without_ignore(options)
sys.exit(0)

@ -18,8 +18,16 @@
#######################################################
#__requires__ = ['ansible']
#import pkg_resources
__requires__ = ['ansible']
try:
import pkg_resources
except Exception:
# Use pkg_resources to find the correct versions of libraries and set
# sys.path appropriately when there are multiversion installs. But we
# have code that better expresses the errors in the places where the code
# is actually used (the deps are optional for many code paths) so we don't
# want to fail here.
pass
import sys
import os
@ -75,8 +83,6 @@ def main(args):
)
#parser.add_option('--vault-password', dest="vault_password",
# help="password for vault encrypted files")
parser.add_option('-e', '--extra-vars', dest="extra_vars", action="append",
help="set additional variables as key=value or YAML/JSON", default=[])
parser.add_option('-t', '--tags', dest='tags', default='all',
help="only run plays and tasks tagged with these values")
parser.add_option('--skip-tags', dest='skip_tags',
@ -134,17 +140,7 @@ def main(args):
if not options.ask_vault_pass and options.vault_password_file:
vault_pass = utils.read_vault_file(options.vault_password_file)
extra_vars = {}
for extra_vars_opt in options.extra_vars:
if extra_vars_opt.startswith("@"):
# Argument is a YAML file (JSON is a subset of YAML)
extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml_from_file(extra_vars_opt[1:], vault_password=vault_pass))
elif extra_vars_opt and extra_vars_opt[0] in '[{':
# Arguments as YAML
extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml(extra_vars_opt))
else:
# Arguments as Key-value
extra_vars = utils.combine_vars(extra_vars, utils.parse_kv(extra_vars_opt))
extra_vars = utils.parse_extra_vars(options.extra_vars, vault_pass)
only_tags = options.tags.split(",")
skip_tags = options.skip_tags
@ -158,9 +154,23 @@ def main(args):
raise errors.AnsibleError("the playbook: %s does not appear to be a file" % playbook)
inventory = ansible.inventory.Inventory(options.inventory, vault_password=vault_pass)
inventory.subset(options.subset)
# Note: slightly wrong, this is written so that implicit localhost
# (which is not returned in list_hosts()) is taken into account for
# warning if inventory is empty. But it can't be taken into account for
# checking if limit doesn't match any hosts. Instead we don't worry about
# limit if only implicit localhost was in inventory to start with.
#
# Fix this in v2
no_hosts = False
if len(inventory.list_hosts()) == 0:
raise errors.AnsibleError("provided hosts list is empty")
# Empty inventory
utils.warning("provided hosts list is empty, only localhost is available")
no_hosts = True
inventory.subset(options.subset)
if len(inventory.list_hosts()) == 0 and no_hosts is False:
# Invalid limit
raise errors.AnsibleError("Specified --limit does not match any hosts")
# run all playbooks specified on the command line
for playbook in args:
@ -276,7 +286,7 @@ def main(args):
retries = failed_hosts + unreachable_hosts
if len(retries) > 0:
if C.RETRY_FILES_ENABLED and len(retries) > 0:
filename = pb.generate_retry_inventory(retries)
if filename:
display(" to retry, use: --limit @%s\n" % filename)

@ -40,7 +40,6 @@
import os
import shutil
import subprocess
import sys
import datetime
import socket
@ -135,6 +134,12 @@ def main(args):
help="vault password file")
parser.add_option('-K', '--ask-sudo-pass', default=False, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password')
parser.add_option('-t', '--tags', dest='tags', default=False,
help='only run plays and tasks tagged with these values')
parser.add_option('--accept-host-key', default=False, dest='accept_host_key', action='store_true',
help='adds the hostkey for the repo url if not already added')
parser.add_option('--key-file', dest='key_file',
help="Pass '-i <key_file>' to the SSH arguments used by git.")
options, args = parser.parse_args(args)
hostname = socket.getfqdn()
@ -149,7 +154,7 @@ def main(args):
return 1
now = datetime.datetime.now()
print >>sys.stderr, now.strftime("Starting ansible-pull at %F %T")
print now.strftime("Starting ansible-pull at %F %T")
# Attempt to use the inventory passed in as an argument
# It might not yet have been downloaded so use localhost if not
@ -168,6 +173,15 @@ def main(args):
if options.checkout:
repo_opts += ' version=%s' % options.checkout
# Only git module is supported
if options.module_name == DEFAULT_REPO_TYPE:
if options.accept_host_key:
repo_opts += ' accept_hostkey=yes'
if options.key_file:
repo_opts += ' key_file=%s' % options.key_file
path = utils.plugins.module_finder.find_plugin(options.module_name)
if path is None:
sys.stderr.write("module '%s' not found.\n" % options.module_name)
@ -175,6 +189,8 @@ def main(args):
cmd = 'ansible localhost -i "%s" %s -m %s -a "%s"' % (
inv_opts, base_opts, options.module_name, repo_opts
)
for ev in options.extra_vars:
cmd += ' -e "%s"' % ev
if options.sleep:
try:
@ -192,7 +208,7 @@ def main(args):
if rc != 0:
if options.force:
print "Unable to update repository. Continuing with (forced) run of playbook."
print >>sys.stderr, "Unable to update repository. Continuing with (forced) run of playbook."
else:
return rc
elif options.ifchanged and '"changed": true' not in out:
@ -214,6 +230,8 @@ def main(args):
cmd += ' -e "%s"' % ev
if options.ask_sudo_pass:
cmd += ' -K'
if options.tags:
cmd += ' -t "%s"' % options.tags
os.chdir(options.dest)
# RUN THE PLAYBOOK COMMAND

@ -15,13 +15,19 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# ansible-pull is a script that runs ansible in local mode
# after checking out a playbooks directory from source repo. There is an
# example playbook to bootstrap this script in the examples/ dir which
# installs ansible and sets it up to run on cron.
#__requires__ = ['ansible']
#import pkg_resources
# ansible-vault is a script that encrypts/decrypts YAML files. See
# http://docs.ansible.com/playbooks_vault.html for more details.
__requires__ = ['ansible']
try:
import pkg_resources
except Exception:
# Use pkg_resources to find the correct versions of libraries and set
# sys.path appropriately when there are multiversion installs. But we
# have code that better expresses the errors in the places where the code
# is actually used (the deps are optional for many code paths) so we don't
# want to fail here.
pass
import os
import sys

@ -1,13 +1,13 @@
'\" t
.\" Title: ansible-doc
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/26/2014
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.7
.\" Source: Ansible 1.9
.\" Language: English
.\"
.TH "ANSIBLE\-DOC" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
.TH "ANSIBLE\-DOC" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -64,9 +64,3 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\-playbook\fR(1), \fBansible\fR(1), \fBansible\-pull\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE

@ -1,13 +1,13 @@
'\" t
.\" Title: ansible-galaxy
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/26/2014
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.7
.\" Source: Ansible 1.9
.\" Language: English
.\"
.TH "ANSIBLE\-GALAXY" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
.TH "ANSIBLE\-GALAXY" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -149,6 +149,11 @@ Force overwriting an existing role\&.
.RS 4
The path in which the skeleton role will be created\&. The default is the current working directory\&.
.RE
.PP
\fB\-\-offline\fR
.RS 4
Don\(cqt query the galaxy API when creating roles
.RE
.SH "LIST"
.sp
The \fBlist\fR sub\-command is used to show what roles are currently installed\&. You can specify a role name, and if installed only that role will be shown\&.

@ -122,6 +122,10 @@ Force overwriting an existing role.
The path in which the skeleton role will be created. The default is the current
working directory.
*--offline*::
Don't query the galaxy API when creating roles
LIST
----

@ -1,13 +1,13 @@
'\" t
.\" Title: ansible-playbook
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/26/2014
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.7
.\" Source: Ansible 1.9
.\" Language: English
.\"
.TH "ANSIBLE\-PLAYBOOK" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
.TH "ANSIBLE\-PLAYBOOK" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -176,9 +176,3 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE

@ -1,13 +1,13 @@
'\" t
.\" Title: ansible
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/26/2014
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.7
.\" Source: Ansible 1.9
.\" Language: English
.\"
.TH "ANSIBLE" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
.TH "ANSIBLE" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -31,7 +31,7 @@
ansible-pull \- set up a remote copy of ansible on each managed node
.SH "SYNOPSIS"
.sp
ansible \-d DEST \-U URL [options] [ <filename\&.yml> ]
ansible\-pull \-d DEST \-U URL [options] [ <filename\&.yml> ]
.SH "DESCRIPTION"
.sp
\fBAnsible\fR is an extra\-simple tool/framework/API for doing \*(Aqremote things\*(Aq over SSH\&.
@ -104,9 +104,3 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\fR(1), \fBansible\-playbook\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE

@ -12,7 +12,7 @@ ansible-pull - set up a remote copy of ansible on each managed node
SYNOPSIS
--------
ansible -d DEST -U URL [options] [ <filename.yml> ]
ansible-pull -d DEST -U URL [options] [ <filename.yml> ]
DESCRIPTION

@ -1,13 +1,13 @@
'\" t
.\" Title: ansible-vault
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/26/2014
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.7
.\" Source: Ansible 1.9
.\" Language: English
.\"
.TH "ANSIBLE\-VAULT" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
.TH "ANSIBLE\-VAULT" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------

@ -1,13 +1,13 @@
'\" t
.\" Title: ansible
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/26/2014
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.7
.\" Source: Ansible 1.9
.\" Language: English
.\"
.TH "ANSIBLE" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
.TH "ANSIBLE" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -206,9 +206,3 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\-playbook\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE

@ -40,7 +40,7 @@ clean:
.PHONEY: docs clean
modules: $(FORMATTER) ../hacking/templates/rst.j2
PYTHONPATH=../lib $(FORMATTER) -t rst --template-dir=../hacking/templates --module-dir=../library -o rst/
PYTHONPATH=../lib $(FORMATTER) -t rst --template-dir=../hacking/templates --module-dir=../lib/ansible/modules -o rst/
staticmin:
cat _themes/srtd/static/css/theme.css | sed -e 's/^[ \t]*//g; s/[ \t]*$$//g; s/\([:{;,]\) /\1/g; s/ {/{/g; s/\/\*.*\*\///g; /^$$/d' | sed -e :a -e '$$!N; s/\n\(.\)/\1/; ta' > _themes/srtd/static/css/theme.min.css

@ -177,15 +177,17 @@
<div class="wy-nav-content">
<div class="rst-content">
<!-- temporary image for AnsibleFest SF -->
<center>
<a href="http://www.ansible.com/ansiblefest-san-francisco">
<img src="http://www.ansible.com/hs-fs/hub/330046/file-1550696672-png/DL_Folder/AF-Top_Banner.png" width="500px" height="90px">
</a>
<br/>&nbsp;<br/>
<br/>&nbsp;<br/>
</center>
<!-- AnsibleFest and free eBook preview stuff -->
<center>
<a href="http://www.ansible.com/tower?utm_source=docs">
<img src="http://www.ansible.com/hs-fs/hub/330046/file-2031636235-png/DL_Folder/festlondon-docs.png">
</a>
<a href="http://www.ansible.com/ansible-book">
<img src="http://www.ansible.com/hs-fs/hub/330046/file-2031636250-png/DL_Folder/Ebook-docs.png">
</a>
<br/>&nbsp;<br/>
<br/>&nbsp;<br/>
</center>
{% include "breadcrumbs.html" %}
<div id="page-content">

@ -88,14 +88,7 @@ if __name__ == '__main__':
print " Run 'make viewdocs' to build and then preview in a web browser."
sys.exit(0)
# The 'htmldocs' make target will call this script with the 'rst'
# parameter. We don't need to run the 'htmlman' target then.
if "rst" in sys.argv:
build_rst_docs()
else:
# By default, perform the rst->html transformation and then
# the asciidoc->html transformation
build_rst_docs()
build_rst_docs()
if "view" in sys.argv:
import webbrowser

@ -25,7 +25,7 @@ Ansible or not) should begin with ``---``. This is part of the YAML
format and indicates the start of a document.
All members of a list are lines beginning at the same indentation level starting
with a ``-`` (dash) character::
with a ``"- "`` (a dash and a space)::
---
# A list of tasty fruits
@ -34,7 +34,7 @@ with a ``-`` (dash) character::
- Strawberry
- Mango
A dictionary is represented in a simple ``key:`` and ``value`` form::
A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space)::
---
# An employee record
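
Both rules are quick to verify with PyYAML (assumed available; ansible itself depends on it)::

    import yaml

    print yaml.safe_load("key: value")   # {'key': 'value'} - a dictionary
    print yaml.safe_load("key:value")    # 'key:value' - just a string, since no space follows the colon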

@ -57,16 +57,19 @@ feature development, so clearing bugs out of the way is one of the best things y
If you're not a developer, helping test pull requests for bug fixes and features is still immensely valuable. You can do this by checking out ansible, making a test
branch off the main one, merging a GitHub issue, testing, and then commenting on that particular issue on GitHub.
I'd Like To Report A Bugs
I'd Like To Report A Bug
------------------------------------
Ansible practices responsible disclosure - if this is a security related bug, email `security@ansible.com <mailto:security@ansible.com>`_ instead of filing a ticket or posting to the Google Group and you will receive a prompt response.
Bugs should be reported to `github.com/ansible/ansible <http://github.com/ansible/ansible>`_ after
Bugs related to the core language should be reported to `github.com/ansible/ansible <http://github.com/ansible/ansible>`_ after
signing up for a free github account. Before reporting a bug, please use the bug/issue search
to see if the issue has already been reported.
When filing a bug, please use the `issue template <https://raw2.github.com/ansible/ansible/devel/ISSUE_TEMPLATE.md>`_ to provide all relevant information.
MODULE related bugs however should go to `ansible-modules-core <github.com/ansible/ansible-modules-core>`_ or `ansible-modules-extras <github.com/ansible/ansible-modules-extras>`_ based on the classification of the module. This is listed on the bottom of the docs page for any module.
When filing a bug, please use the `issue template <https://github.com/ansible/ansible/raw/devel/ISSUE_TEMPLATE.md>`_ to provide all relevant information, regardless of what repo you are filing a ticket against.
Knowing your ansible version and the exact commands you are running, and what you expect, saves time and helps us help everyone with their issues
more quickly.
@ -102,8 +105,7 @@ documenting a new feature, submit a github pull request to the code that
lives in the “docsite/rst” subdirectory of the project for most pages, and there is an "Edit on GitHub"
link up on those.
Module documentation is generated from a DOCUMENTATION structure embedded in the source code of each module
in the library/ directory.
Module documentation is generated from a DOCUMENTATION structure embedded in the source code of each module, which is in either the ansible-modules-core or ansible-modules-extras repos on github, depending on the module. Information about this is always listed on the bottom of the web documentation for each module.
Aside from modules, the main docs are in restructured text
format.
@ -113,7 +115,7 @@ github about any errors you spot or sections you would like to see added. For mo
on creating pull requests, please refer to the
`github help guide <https://help.github.com/articles/using-pull-requests>`_.
For Current and Propspective Developers
For Current and Prospective Developers
=======================================
I'd Like To Learn How To Develop on Ansible
@ -130,10 +132,10 @@ Modules are some of the easiest places to get started.
Contributing Code (Features or Bugfixes)
----------------------------------------
The Ansible project keeps its source on github at
`github.com/ansible/ansible <http://github.com/ansible/ansible>`_
and takes contributions through
The Ansible project keeps its source on github at
`github.com/ansible/ansible <http://github.com/ansible/ansible>`_ for the core application, and two sub repos ansible/ansible-modules-core and ansible/ansible-modules-extras for module related items. If you need to know if a module is in 'core' or 'extras', consult the web documentation page for that module.
The project takes contributions through
`github pull requests <https://help.github.com/articles/using-pull-requests>`_.
It is usually a good idea to join the ansible-devel list to discuss any large features prior to submission, and this especially helps in avoiding duplicate work or efforts where we decide, upon seeing a pull request for the first time, that revisions are needed. (This is not usually needed for module development, but can be nice for large changes).
@ -144,7 +146,7 @@ to modify a pull request later.
When submitting patches, be sure to run the unit tests first “make tests” and always use
“git rebase” vs “git merge” (aliasing git pull to git pull --rebase is a great idea) to
avoid merge commits in your submissions. There are also integration tests that can be run in the "tests/integration" directory.
avoid merge commits in your submissions. There are also integration tests that can be run in the "test/integration" directory.
In order to keep the history clean and better audit incoming code, we will require resubmission of pull requests that contain merge commits. Use "git pull --rebase" vs "git pull" and "git rebase" vs "git merge". Also be sure to use topic branches to keep your additions on different branches, such that they won't pick up stray commits later.

@ -11,9 +11,17 @@ See :doc:`modules` for a list of various ones developed in core.
Modules can be written in any language and are found in the path specified
by `ANSIBLE_LIBRARY` or the ``--module-path`` command line option.
By default, everything that ships with ansible is pulled from its source tree, but
additional paths can be added.
The directory "./library", alongside your top level playbooks, is also automatically
added as a search directory.
Should you develop an interesting Ansible module, consider sending a pull request to the
`github project <http://github.com/ansible/ansible>`_ to see about getting your module
included in the core project.
`modules-extras project <http://github.com/ansible/ansible-modules-extras>`_. There's also a core
repo for more established and widely used modules. "Extras" modules may be promoted to core periodically,
but there's no fundamental difference in the end - both ship with ansible, all in one package, regardless
of how you acquire ansible.
.. _module_dev_tutorial:
@ -40,7 +48,7 @@ modules. Keep in mind, though, that some modules in ansible's source tree are
so look at `service` or `yum`, and don't stare too close into things like `async_wrapper` or
you'll turn to stone. Nobody ever executes async_wrapper directly.
Ok, let's get going with an example. We'll use Python. For starters, save this as a file named `time`::
Ok, let's get going with an example. We'll use Python. For starters, save this as a file named `timetest.py`::
#!/usr/bin/python
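
(The diff shows only the first line of that example; a minimal reconstruction of the rest, consistent with the walkthrough that follows, is)::

    # hedged reconstruction: the example module simply reports the current time
    import datetime
    import json

    date = str(datetime.datetime.now())
    print json.dumps({
        "time": date
    })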
@ -59,13 +67,13 @@ Testing Modules
There's a useful test script in the source checkout for ansible::
git clone git@github.com:ansible/ansible.git
git clone git@github.com:ansible/ansible.git --recursive
source ansible/hacking/env-setup
chmod +x ansible/hacking/test-module
Let's run the script you just wrote with that::
ansible/hacking/test-module -m ./time
ansible/hacking/test-module -m ./timetest.py
You should see output that looks something like this::
@ -78,6 +86,7 @@ If you did not, you might have a typo in your module, so recheck it and try agai
Reading Input
`````````````
Let's modify the module to allow setting the current time. We'll do this by seeing
if a key value pair in the form `time=<string>` is passed in to the module.
@ -222,7 +231,7 @@ As mentioned, if you are writing a module in Python, there are some very powerfu
Modules are still transferred as one file, but an arguments file is no longer needed, so these are not
only shorter in terms of code, they are actually FASTER in terms of execution time.
Rather than mention these here, the best way to learn is to read some of the `source of the modules <https://github.com/ansible/ansible/tree/devel/library>`_ that come with Ansible.
Rather than mention these here, the best way to learn is to read some of the `source of the modules <https://github.com/ansible/ansible-modules-core>`_ that come with Ansible.
The 'group' and 'user' modules are reasonably non-trivial and showcase what this looks like.
@ -253,7 +262,7 @@ And failures are just as simple (where 'msg' is a required parameter to explain
module.fail_json(msg="Something fatal happened")
There are also other useful functions in the module class, such as module.md5(path). See
There are also other useful functions in the module class, such as module.sha1(path). See
lib/ansible/module_common.py in the source checkout for implementation details.
Again, modules developed this way are best tested with the hacking/test-module script in the git
@ -300,8 +309,7 @@ You should also never do this in a module::
print "some status message"
Because the output is supposed to be valid JSON. Except that's not quite true,
but we'll get to that later.
Because the output is supposed to be valid JSON.
Modules must not output anything on standard error, because the system will merge
standard out with standard error and prevent the JSON from parsing. Capturing standard
@ -334,7 +342,7 @@ and guidelines:
* If packaging modules in an RPM, they only need to be installed on the control machine and should be dropped into /usr/share/ansible. This is entirely optional and up to you.
* Modules should return JSON or key=value results all on one line. JSON is best if you can do JSON. All return types must be hashes (dictionaries) although they can be nested. Lists or simple scalar values are not supported, though they can be trivially contained inside a dictionary.
* Modules should output valid JSON only. All return types must be hashes (dictionaries) although they can be nested. Lists or simple scalar values are not supported, though they can be trivially contained inside a dictionary.
* In the event of failure, a key of 'failed' should be included, along with a string explanation in 'msg'. Modules that raise tracebacks (stacktraces) are generally considered 'poor' modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the 'failed' element will be included for you automatically when you call 'fail_json'.
@ -342,21 +350,6 @@ and guidelines:
* As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form.
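A well-formed result under these rules, with a list safely nested inside the top-level hash (illustrative values)::

    import json

    # the top level must be a dict; lists ride inside it
    result = {"changed": True, "msg": "ok", "files": ["a.conf", "b.conf"]}
    print json.dumps(result)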
.. _module_dev_shorthand:
Shorthand Vs JSON
`````````````````
To make it easier to write modules in bash and in cases where a JSON
module might not be available, it is acceptable for a module to return
key=value output all on one line, like this. The Ansible parser
will know what to do::
somekey=1 somevalue=2 rc=3 favcolor=red
If you're writing a module in Python or Ruby or whatever, though, returning
JSON is probably the simplest way to go.
.. _module_documenting:
Documenting Your Module
@ -393,7 +386,7 @@ support formatting with some special macros.
These formatting functions are ``U()``, ``M()``, ``I()``, and ``C()``
for URL, module, italic, and constant-width respectively. It is suggested
to use ``C()`` for file and option names, and ``I()`` when referencing
parameters; module names should be specifies as ``M(module)``.
parameters; module names should be specified as ``M(module)``.
Examples (which typically contain colons, quotes, etc.) are difficult
to format with YAML, so these must be
@ -423,20 +416,55 @@ built and appear in the 'docsite/' directory.
.. tip::
You can use ANSIBLE_KEEP_REMOTE_FILES=1 to prevent ansible from
You can set the environment variable ANSIBLE_KEEP_REMOTE_FILES=1 on the controlling host to prevent ansible from
deleting the remote files so you can debug your module.
.. _module_contribution:
Getting Your Module Into Core
`````````````````````````````
Module Paths
````````````
If you are having trouble getting your module "found" by ansible, be sure it is in the ANSIBLE_LIBRARY path.
If you have a fork of one of the ansible module projects, do something like this::
ANSIBLE_LIBRARY=~/ansible-modules-core:~/ansible-modules-extras
And this will make the items in your fork be loaded ahead of what ships with Ansible. Just be sure
you're not reporting bugs on versions from your fork!
To be safe, if you're working on a variant on something in Ansible's normal distribution, it's not
a bad idea to give it a new name while you are working on it, to be sure you know you're pulling
your version.
Getting Your Module Into Ansible
````````````````````````````````
High-quality modules with minimal dependencies
can be included in the core, but core modules (just due to the programming
can be included in Ansible, but modules (just due to the programming
preferences of the developers) will need to be implemented in Python and use
the AnsibleModule common code, and should generally use consistent arguments with the rest of
the program. Stop by the mailing list to inquire about requirements if you like, and submit
a github pull request to the main project.
a github pull request to the `extras <https://github.com/ansible/ansible-modules-extras>`_ project.
Included modules will ship with ansible, and also have a chance to be promoted to 'core' status, which
gives them slightly higher development priority (though they'll work in exactly the same way).
Deprecating and making module aliases
``````````````````````````````````````
Starting in 1.8 you can deprecate modules by renaming them with a preceding _, i.e. old_cloud.py to
_old_cloud.py. This will keep the module available but hide it from the primary docs and listing.
You can also rename modules and keep an alias to the old name by using a symlink that starts with _.
This example allows the stat module to be called with fileinfo, making the following examples equivalent:
EXAMPLES = '''
ln -s stat.py _fileinfo.py
ansible -m stat -a "path=/tmp" localhost
ansible -m fileinfo -a "path=/tmp" localhost
'''
.. seealso::

@ -30,7 +30,7 @@ Lookup Plugins
Language constructs like "with_fileglob" and "with_items" are implemented via lookup plugins. Just like other plugin types, you can write your own.
More documentation on writing connection plugins is pending, though you can jump into `lib/ansible/runner/lookup_plugins <https://github.com/ansible/ansible/tree/devel/lib/ansible/runner/lookup_plugins>`_ and figure
More documentation on writing lookup plugins is pending, though you can jump into `lib/ansible/runner/lookup_plugins <https://github.com/ansible/ansible/tree/devel/lib/ansible/runner/lookup_plugins>`_ and figure
things out pretty easily.
.. _developing_vars_plugins:
@ -42,7 +42,7 @@ Playbook constructs like 'host_vars' and 'group_vars' work via 'vars' plugins.
data into ansible runs that did not come from an inventory, playbook, or command line. Note that variables
can also be returned from inventory, so in most cases, you won't need to write or understand vars_plugins.
More documentation on writing vars plugins is pending, though you can jump into `lib/ansible/inventory/vars_plugins <https://github.com/ansible/ansible/tree/devel/lib/ansible/inventory/vars_plugins>`_ and figure
things out pretty easily.
If you find yourself wanting to write a vars_plugin, it's more likely you should write an inventory script instead.

or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
other flavors, since some features (apt vs. yum, for example) are specific to those OS versions.
First, you will need to configure your testing environment with the necessary tools required to run our test
suites. You will need at least::
git
python-nosetests (sometimes named python-nose)
python-passlib
If you want to run the full integration test suite you'll also need the following packages installed::
svn
hg
python-pip
gem
Second, if you haven't already, clone the Ansible source code from GitHub::
git clone https://github.com/ansible/ansible.git --recursive
cd ansible/
.. note::
If you have previously forked the repository on GitHub, you could also clone it from there.
.. note::
If updating your repo for testing something module related, use "git rebase origin/devel" and then "git submodule update" to fetch
the latest development versions of modules. Skipping the "git submodule update" step will result in stale module versions.
Activating The Source Checkout
++++++++++++++++++++++++++++++

The command line ansible-galaxy has many different subcommands.
Installing Roles
----------------
The most obvious is downloading roles from the Ansible Galaxy website::
ansible-galaxy install username.rolename
Building out Role Scaffolding
-----------------------------
It can also be used to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires::
ansible-galaxy init rolename
Installing Multiple Roles From A File
-------------------------------------
To install multiple roles, the ansible-galaxy CLI can be fed a requirements file. All versions of ansible allow the following syntax for installing roles from the Ansible Galaxy website::
Available versions will be listed on the Ansible Galaxy webpage for that role.
Advanced Control over Role Requirements Files
---------------------------------------------
For more advanced control over where to download roles from, including support for remote repositories, Ansible 1.8 and later support a new YAML format for the role requirements file, which must end in a 'yml' extension. It works like this::
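    # An illustrative sketch only -- the sources, names, and versions
    # below are placeholders, not real projects.

    # from the Ansible Galaxy website
    - src: username.rolename

    # from a git repository, overriding the role name and pinning a version
    - src: https://github.com/example/nginx
      version: master
      name: nginx_role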
As you can see in the above, there are a large number of controls available
to customize where roles can be pulled from, and what to save roles as.
Roles pulled from galaxy work as with other SCM sourced roles above. To download a role with dependencies, and automatically install those dependencies, the role must be uploaded to the Ansible Galaxy website.
.. seealso::

Amazon Web Services Guide
=========================
Introduction
````````````
.. note:: This section of the documentation is under construction. We are in the process of adding more examples about all of the EC2 modules
and how they work together. There's also an ec2 example in the language_features directory of `the ansible-examples github repository <http://github.com/ansible/ansible-examples/>`_ that you may wish to consult. Once complete, there will also be new examples of ec2 in ansible-examples.
Ansible contains a number of modules for controlling Amazon Web Services (AWS). The purpose of this
section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in AWS context.
Requirements for the AWS modules are minimal.
All of the modules require and are tested against recent versions of boto. You'll need this Python module installed on your control machine. Boto can be installed from your OS distribution or python's "pip install boto".
Whereas classically ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.
In your playbook steps we'll typically be using the following pattern for provisioning steps::
- hosts: localhost
connection: local
gather_facts: False
tasks:
- ...
.. _aws_authentication:
Authentication
``````````````
Authentication with the AWS-related modules is handled by either
specifying your access and secret key as ENV variables or module arguments.
For environment variables::
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
For storing these in a vars_file, ideally encrypted with ansible-vault::
---
ec2_access_key: "--REMOVED--"
ec2_secret_key: "--REMOVED--"
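If kept in a vars_file as above, the keys can also be passed as module arguments. A minimal sketch (``aws_access_key`` and ``aws_secret_key`` are the ec2 module's credential parameters; the image variable and instance type here are illustrative)::

    - ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        image: "{{ ami_id }}"
        instance_type: t2.micro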
.. _aws_provisioning:
Provisioning
````````````
The ec2 module provisions and de-provisions instances within EC2.
An example of making sure there are only 5 instances tagged 'Demo' in EC2 follows.
In the example below, the "exact_count" of instances is set to 5. This means if there are 0 instances already existing, then
5 new instances would be created. If there were 2 instances, only 3 would be created, and if there were 8 instances, 3 instances would
be terminated.
What is being counted is specified by the "count_tag" parameter. The parameter "instance_tags" is used to apply tags to the newly created
instance::
# demo_setup.yml
- hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Provision a set of instances
ec2:
key_name: my_key
group: test
instance_type: t2.micro
image: "{{ ami_id }}"
wait: true
exact_count: 5
count_tag:
Name: Demo
instance_tags:
Name: Demo
register: ec2
The data about what instances are created is being saved by the "register" keyword in the variable named "ec2".
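As a quick illustration (``instances`` is the ec2 module's return data, used again just below), the registered variable can be inspected with a debug task::

    - debug: var=ec2.instances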
From this, we'll use the add_host module to dynamically create a host group consisting of these new instances. This facilitates performing configuration actions on the hosts immediately in a subsequent task::
# demo_setup.yml
- hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Provision a set of instances
ec2:
key_name: my_key
group: test
instance_type: t2.micro
image: "{{ ami_id }}"
wait: true
exact_count: 5
count_tag:
Name: Demo
instance_tags:
Name: Demo
register: ec2
- name: Add all instance public IPs to host group
add_host: hostname={{ item.public_ip }} groupname=ec2hosts
with_items: ec2.instances
With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps::
# demo_setup.yml
- name: Provision a set of instances
hosts: localhost
# ... AS ABOVE ...
- hosts: ec2hosts
name: configuration play
user: ec2-user
gather_facts: true
tasks:
- name: Check NTP service
service: name=ntpd state=started
.. _aws_host_inventory:
Host Inventory
``````````````
Once your nodes are spun up, you'll probably want to talk to them again. With a cloud setup, it's best to not maintain a static list of cloud hostnames
in text files. Rather, the best way to handle this is to use the ec2 dynamic inventory script.
This will also dynamically select nodes that were created outside of Ansible, and allow Ansible to manage them.

See the :doc:`aws_example` for how to use this, then flip back over to this chapter.
.. _aws_tags_and_groups:
Tags And Groups And Variables
`````````````````````````````
When using the ec2 inventory script, hosts automatically appear in groups based on how they are tagged in EC2.
For instance, if a host is given the "class" tag with the value of "webserver",
it will be automatically discoverable via a dynamic group like so::
- hosts: tag_class_webserver
tasks:
- ping
Using this philosophy can be a great way to keep systems separated by the function they perform.
In this example, if we wanted to define variables that are automatically applied to each machine tagged with the 'class' of 'webserver', 'group_vars'
in ansible can be used. See :doc:`splitting_out_vars`.
Similar groups are available for regions and other classifications, and can be similarly assigned variables using the same mechanism.
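For example, a minimal sketch of such a file (the group name follows the inventory script's tag_KEY_VALUE naming convention; the variable itself is illustrative)::

    ---
    # file: group_vars/tag_class_webserver
    ntp_server: acme.example.org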
.. _aws_pull:
Autoscaling with Ansible Pull
`````````````````````````````
Amazon Autoscaling features automatically increase or decrease capacity based on load. There are also Ansible modules shown in the cloud documentation that
can configure autoscaling policy.
When nodes come online, it may not be sufficient to wait for the next cycle of an ansible command to come along and configure that node.
To do this, pre-bake machine images which contain the necessary ansible-pull invocation. Ansible-pull is a command line tool that fetches a playbook from a git server and runs it locally.
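As a hedged sketch of what gets baked into such an image (the repository URL and schedule are illustrative), a cron entry to run ansible-pull could itself be laid down with Ansible's cron module::

    - cron:
        name: ansible-pull
        minute: "*/15"
        job: ansible-pull -U https://github.com/example/playbooks.git local.yml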
One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context.
For this reason, the autoscaling solution provided below in the next section can be a better approach.
Read :ref:`ansible-pull` for more information on pull-mode playbooks.
.. _aws_autoscale:
Autoscaling with Ansible Tower
``````````````````````````````
:doc:`tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call
a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way
to reconfigure ephemeral nodes. See the Tower install and product documentation for more details.
A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared
with remote hosts.
Ansible With (And Versus) CloudFormation
````````````````````````````````````````
CloudFormation is an Amazon technology for defining a cloud stack as a JSON document.
Ansible modules provide an easier to use interface than CloudFormation in many examples, without defining a complex JSON document.
This is recommended for most users.
However, for users that have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template
to Amazon.
When using Ansible with CloudFormation, typically Ansible will be used with a tool like Packer to build images, and CloudFormation will launch
those images, or ansible will be invoked through user data once the image comes online, or a combination of the two.
Please see the examples in the Ansible CloudFormation module for more details.
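As a minimal sketch (the stack name, region, and template path here are illustrative), applying a template with that module might look like::

    - cloudformation:
        stack_name: my-stack
        state: present
        region: us-east-1
        template: files/my_stack.json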
.. _aws_image_build:
.. _aws_builds:
AWS Image Building With Ansible
```````````````````````````````
Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation. To do this,
one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for usage with
the ec2 module or other Ansible AWS modules such as ec2_asg or the cloudformation module. Possible tools include Packer, aminator, and Ansible's
ec2_ami module.
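For the ec2_ami route, a minimal sketch (the instance id variable and image name are illustrative)::

    - ec2_ami:
        instance_id: "{{ instance_id }}"
        name: demo-baked-image
        wait: yes
      register: ami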
Generally speaking, we find most users using Packer.
`Documentation for the Ansible Packer provisioner can be found here <https://www.packer.io/docs/provisioners/ansible-local.html>`_.
If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.
.. _aws_next_steps:
Next Steps: Explore Modules
```````````````````````````
Ansible ships with lots of modules for configuring a wide array of EC2 services. Browse the "Cloud" category of the module
documentation for a full list with examples.
.. seealso::
An introduction to playbooks
:doc:`playbooks_delegation`
Delegation, useful for working with load balancers, clouds, and locally executed steps.
`User Mailing List <http://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel

Credentials
-----------
To work with the GCE modules, you'll first need to get some credentials. You can create new credentials from the `console <https://console.developers.google.com/>`_ by going to the "APIs and Auth" section and choosing to create a new client ID for a service account. Once you've created a new client ID and downloaded the generated private key (in the `pkcs12 format <http://en.wikipedia.org/wiki/PKCS_12>`_), you'll need to convert the key by running the following command:
.. code-block:: bash
For the following use case, let's use this small shell script as a wrapper.
.. code-block:: bash
#!/usr/bin/env bash
PLAYBOOK="$1"
if [[ -z $PLAYBOOK ]]; then
echo "You need to pass a playbook as argument to this script."
exit 1
fi
export SSL_CERT_FILE=$(pwd)/cacert.pem
export ANSIBLE_HOST_KEY_CHECKING=False
if [[ ! -f "$SSL_CERT_FILE" ]]; then
curl -O http://curl.haxx.se/ca/cacert.pem
fi
A playbook would look like this::
tasks:
- name: Launch instances
gce:
instance_names: dev
machine_type: "{{ machine_type }}"
image: "{{ image }}"
service_account_email: "{{ service_account_email }}"
pem_file: "{{ pem_file }}"
project_id: "{{ project_id }}"
tags: webserver
register: gce
wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=60
with_items: gce.instance_data
- name: Add host to groupname
add_host: hostname={{ item.public_ip }} groupname=new_instances
with_items: gce.instance_data
- name: Manage new instances
hosts: new_instances
connection: ssh
sudo: True
roles:
- base_configuration
- production_server
Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines
in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point.

rax.py
++++++
To use the rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentials file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.
.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``.

If you want to run Ansible manually, you will want to make sure to pass
``ansible`` or ``ansible-playbook`` commands the correct arguments for the
username (usually ``vagrant``) and the SSH key (since Vagrant 1.7.0, this will be something like
``.vagrant/machines/[machine name]/[provider]/private_key``), and the autogenerated inventory file.
Here is an example:
.. code-block:: bash
$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant playbook.yml
Note: Vagrant versions prior to 1.7.0 will use the private key located at ``~/.vagrant.d/insecure_private_key``.
.. seealso::


Ansible manages machines in an agentless manner. There is never a question of how to
upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. As OpenSSH is one of the most peer reviewed open source components, the security exposure of using the tool is greatly reduced. Ansible is decentralized -- it relies on your existing OS credentials to control access to remote machines; if needed it can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
This documentation covers the current released version of Ansible (1.8.2) and also some development version features (1.9). For recent features, in each section, the version of Ansible where the feature is added is indicated. Ansible, Inc releases a new major release of Ansible approximately every 2 months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup, while the community around new modules and plugins being developed and contributed moves very very quickly, typically adding 20 or so new modules in each release.
.. _an_introduction:
faq
glossary
YAMLSyntax

Ensure a package is installed, but don't update it::
$ ansible webservers -m yum -a "name=acme state=present"
Ensure a package is installed to a specific version::
$ ansible webservers -m yum -a "name=acme-1.5 state=present"
Ensure a package is at the latest version::
$ ansible webservers -m yum -a "name=acme state=latest"
Ensure a package is not installed::
$ ansible webservers -m yum -a "name=acme state=absent"
Ansible has modules for managing packages under many platforms. If your package manager
does not have a module available for it, you can install
Ensure a service is stopped::

$ ansible webservers -m service -a "name=httpd state=stopped"
Time Limited Background Operations
``````````````````````````````````
Long running operations can be backgrounded, and their status can be checked on
later. If you kick hosts and don't want to poll, it looks like this::
$ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"
If you do decide you want to check on the job status later, you can use the
async_status module, passing it the job id that was returned when you ran
the original job in the background::
$ ansible web1.example.com -m async_status -a "jid=488359678239.2844"
Polling is built-in and looks like this::

Actions are pieces of code in ansible that enable things like module execution, templating, and so forth.
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations::
action_plugins = ~/.ansible/plugins/action_plugins/:/usr/share/ansible_plugins/action_plugins
Most users will not need to use this feature. See :doc:`developing_plugins` for more details.
Prior to 1.8, callbacks were never loaded for /usr/bin/ansible.
callback_plugins
================
Callbacks are pieces of code in ansible that get called on specific events, making it possible to trigger notifications.
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations::
callback_plugins = ~/.ansible/plugins/callback_plugins/:/usr/share/ansible_plugins/callback_plugins
Most users will not need to use this feature. See :doc:`developing_plugins` for more details
By default ansible will warn when usage of the shell and
command module appear to be simplified by using a default Ansible module
instead. This can include reminders to use the 'git' module instead of
shell commands to execute 'git'. Using modules when possible over arbitrary
shell commands can lead to more reliable and consistent playbook runs, and
also easier to maintain playbooks::
command_warnings = False
These warnings can be silenced by adjusting the following
setting or adding warn=yes or warn=no to the end of the command line
parameter string, like so::
connection_plugins
==================
Connection plugins permit extending the channel used by ansible to transport commands and files.
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations::
connection_plugins = ~/.ansible/plugins/connection_plugins/:/usr/share/ansible_plugins/connection_plugins
Most users will not need to use this feature. See :doc:`developing_plugins` for more details
filter_plugins
==============
Filters are specific functions that can be used to extend the template system.
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations::
filter_plugins = ~/.ansible/plugins/filter_plugins/:/usr/share/ansible_plugins/filter_plugins
Most users will not need to use this feature. See :doc:`developing_plugins` for more details
.. _force_color:
force_color
===========
This option forces color mode even when running without a TTY::
force_color = 1
.. _forks:
forks
=====
hostfile
========
This is a deprecated setting since 1.9, please look at :ref:`inventory` for the new setting.
.. _host_key_checking:
If you understand the implications and wish to disable it, you may do so here by setting the value to False::

host_key_checking=False
.. _inventory:
inventory
=========
This is the default location of the inventory file, script, or directory that Ansible will use to determine what hosts it has available
to talk to::
inventory = /etc/ansible/hosts
It used to be called hostfile in Ansible before 1.9.
.. _jinja2_extensions:
jinja2_extensions
=================
lookup_plugins
==============
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations::
lookup_plugins = ~/.ansible/plugins/lookup_plugins/:/usr/share/ansible_plugins/lookup_plugins
Most users will not need to use this feature. See :doc:`developing_plugins` for more details
sudo_flags
==========
Additional flags to pass to sudo when engaging sudo support. The default is '-H' which preserves the environment
of the original user. In some situations you may wish to add or remove flags, but in general most users
will not need to change this setting::
sudo_flags=-H
vars_plugins
============
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations::
vars_plugins = ~/.ansible/plugins/vars_plugins/:/usr/share/ansible_plugins/vars_plugins
Most users will not need to use this feature. See :doc:`developing_plugins` for more details
.. _vault_password_file:
vault_password_file
===================
.. versionadded:: 1.7
Configures the path to the Vault password file as an alternative to specifying ``--vault-password-file`` on the command line::
vault_password_file = /path/to/vault_password_file
As of 1.7 this file can also be a script. If you are using a script instead of a flat file, ensure that it is marked as executable, and that the password is printed to standard output. If your script needs to prompt for data, prompts can be sent to standard error.
.. _paramiko_settings:
Paramiko Specific Settings
--------------------------
.. _accelerate_settings:
Accelerated Mode Settings
-------------------------
Under the [accelerate] header, the following settings are tunable for :doc:`playbooks_acceleration`. Acceleration is
a useful performance feature to use if you cannot enable :ref:`pipelining` in your environment, but is probably
accelerate_port
===============
.. versionadded:: 1.3
This is the port to use for accelerated mode::
accelerate_port = 5099

marking it executable::
ansible -i ec2.py -u ubuntu us-east-1d -m ping
The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You will also need to copy the `ec2.ini <https://raw.githubusercontent.com/ansible/ansible/devel/plugins/inventory/ec2.ini>`_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally.
To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a `variety of methods <http://docs.pythonboto.org/en/latest/boto_config_tut.html>`_ available, but the simplest is just to export two environment variables::
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'

To see the complete list of variables available for an instance, run the script with the ``--host`` parameter::
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com
Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable in ec2.ini. To
explicitly clear the cache, you can run the ec2.py script with the ``--refresh-cache`` parameter::
# ./ec2.py --refresh-cache
.. _other_inventory_scripts:
If the location given to -i in Ansible is a directory (or as so configured in ansible.cfg), Ansible can use multiple inventory sources
at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same ansible run. Instant
hybrid cloud!
.. _static_groups_of_dynamic:
Static Groups of Dynamic Groups
```````````````````````````````
When defining groups of groups in the static inventory file, the child groups
must also be defined in the static inventory file, or ansible will return an
error. If you want to define a static group of dynamic child groups, define
the dynamic groups as empty in the static inventory file. For example::
[tag_Name_staging_foo]
[tag_Name_staging_bar]
[staging:children]
tag_Name_staging_foo
tag_Name_staging_bar
.. seealso::
:doc:`intro_inventory`

To install from source.
.. code-block:: bash
$ git clone git://github.com/ansible/ansible.git --recursive
$ cd ./ansible
$ source ./hacking/env-setup
If you want to suppress spurious warnings/errors, use::
$ source ./hacking/env-setup -q
If you don't have pip installed in your version of Python, install pip::
$ sudo easy_install pip
Ansible also uses the following Python modules that need to be installed::
$ sudo pip install paramiko PyYAML Jinja2 httplib2
Note when updating ansible, be sure to not only update the source tree, but also the "submodules" in git
which point at Ansible's own modules (not the same kind of modules, alas).
.. code-block:: bash
$ git pull --rebase
$ git submodule update --init --recursive
Once running the env-setup script you'll be running from checkout and the default inventory file
will be /etc/ansible/hosts. You can optionally specify an inventory file (see :doc:`intro_inventory`)
You may also wish to run from source to get the latest, which is covered above.
.. _from_pkg:
Latest Releases Via Portage (Gentoo)
++++++++++++++++++++++++++++++++++++
.. code-block:: bash
$ emerge -av app-admin/ansible
To install the newest version, you may need to unmask the ansible package prior to emerging:
.. code-block:: bash
$ echo 'app-admin/ansible' >> /etc/portage/package.accept_keywords
.. note::
If you have Python 3 as a default Python slot on your Gentoo nodes (default setting), then you
must set ``ansible_python_interpreter = /usr/bin/python2`` in your group or inventory variables.
Latest Releases Via pkg (FreeBSD)
+++++++++++++++++++++++++++++++++
To install on a Mac, make sure you have Homebrew, then run::
$ brew update
$ brew install ansible
.. _from_pkgutil:
Latest Releases Via OpenCSW (Solaris)
+++++++++++++++++++++++++++++++++++++
Ansible is available for Solaris as `SysV package from OpenCSW <https://www.opencsw.org/packages/ansible/>`_.
.. code-block:: bash
# pkgadd -d http://get.opencsw.org/now
# /opt/csw/bin/pkgutil -i ansible
.. _from_pip:
Latest Releases Via Pip
+++++++++++++++++++++++

Hosts and Groups
++++++++++++++++
The format for /etc/ansible/hosts is an INI-like format and looks like this::
mail.example.com
Tip: In Ansible 1.2 or later the group_vars/ and host_vars/ directories can exist in either
the playbook directory OR the inventory directory. If both paths exist, variables in the playbook
directory will override variables set in the inventory directory.
Tip: Keeping your inventory file and variables in a git repo (or other version control)
is an excellent way to track changes to your inventory and host variables.
The default ssh user name to use.
ansible_ssh_pass
The ssh password to use (this is insecure, we strongly recommend using --ask-pass or SSH keys)
ansible_sudo
The boolean to decide if sudo should be used for this host. Defaults to false.
ansible_sudo_pass
The sudo password to use (this is insecure, we strongly recommend using --ask-sudo-pass)
ansible_sudo_exe (new in version 1.8)
The sudo command path.
ansible_connection
Connection type of the host. Candidates are local, ssh or paramiko. The default is paramiko before Ansible 1.2, and 'smart' afterwards which detects whether usage of 'ssh' would be feasible based on whether ControlPersist is supported.
ansible_ssh_private_key_file
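As a hedged sketch, a few of these parameters set for a single host in a host_vars file (the values are illustrative)::

    ansible_ssh_user: exampleuser
    ansible_sudo: true
    ansible_connection: ssh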

It's also ok to mix wildcard patterns and groups at the same time::
one*.com:dbservers
As an advanced usage, you can also select the numbered server in a group::
webservers[0]
Or a portion of servers in a group::
webservers[0:25]
Most people don't specify patterns as regular expressions, but you can. Just start the pattern with a '~'::
~(web|db).*\.example\.com
While we're jumping a bit ahead, additionally, you can add an exclusion criteria to a playbook run, like so::
ansible-playbook site.yml --limit datacenter2
And if you want to read the list of hosts from a file, prefix the file name with '@'. Since Ansible 1.2::
ansible-playbook site.yml --limit @retry_hosts.txt
Easy enough. See :doc:`intro_adhoc` and then :doc:`playbooks` for how to apply this knowledge.
.. seealso::

In group_vars/windows.yml, define the following inventory variables::
# ansible-vault edit group_vars/windows.yml
ansible_ssh_user: Administrator
ansible_ssh_pass: SecretPasswordGoesHere
ansible_ssh_port: 5986
ansible_connection: winrm

Accelerated Mode
================
You Might Not Need This!
````````````````````````
Are you running Ansible 1.5 or later? If so, you may not need accelerated mode due to a new feature called "SSH pipelining" and should read the :ref:`pipelining` section of the documentation.
For users on 1.5 and later, accelerated mode only makes sense if you (A) are managing from an Enterprise Linux 6 or earlier host
and still are on paramiko, or (B) can't enable TTYs with sudo as described in the pipelining docs.
If you can use pipelining, Ansible will reduce the amount of files transferred over the wire,
making everything much more efficient, and performance will be on par with accelerated mode in nearly all cases, possibly excluding very large file transfer. Because less moving parts are involved, pipelining is better than accelerated mode for nearly all use cases.
Accelerated mode remains around in support of EL6
control machines and other constrained environments.
Accelerated Mode Details
````````````````````````
While OpenSSH using the ControlPersist feature is quite fast and scalable, there is a certain small amount of overhead involved in
using SSH connections. While many people will not encounter a need, if you are running on a platform that doesn't have ControlPersist support (such as an EL6 control machine), you'll probably be even more interested in tuning options.
Accelerated mode is there to help connections work faster, but still uses SSH for initial secure key exchange. There is no
additional public key infrastructure to manage, and this does not require things like NTP or even DNS.
Accelerated mode can be anywhere from 2-6x faster than SSH with ControlPersist enabled, and 10x faster than paramiko.

Using a higher value for ``--forks`` will result in kicking off asynchronous
tasks even faster. This also increases the efficiency of polling.
If you would like to perform a variation of the "fire and forget" where you
"fire and forget, check on it later" you can perform a task similar to the
following::
---
# Requires ansible 1.8+
- name: 'YUM - fire and forget task'
yum: name=docker-io state=installed
async: 1000
poll: 0
register: yum_sleeper
- name: 'YUM - check on fire and forget task'
async_status: jid={{ yum_sleeper.ansible_job_id }}
register: job_result
until: job_result.finished
retries: 30
.. note::
If the value of ``async:`` is not high enough, this will cause the
"check on it later" task to fail because the temporary status file that
the ``async_status:`` is looking for will not have been written.
.. seealso::

Best Practices
==============
Here are some tips for making the most of Ansible and Ansible playbooks.
You can find some example playbooks illustrating these best practices in our `ansible-examples repository <https://github.com/ansible/ansible-examples>`_. (NOTE: These may not use all of the features in the latest release, but are still an excellent reference!).
Content Organization
++++++++++++++++++++++
The following section shows one of many possible ways to organize playbook content.
Your usage of Ansible should fit your needs, however, not ours, so feel free to modify this approach and organize as you see fit.
One thing you will definitely want to do though, is use the "roles" organization feature, which is documented as part
of the main playbooks page. See :doc:`playbooks_roles`. You absolutely should be using roles. Roles are great. Use roles. Roles!
Did we say that enough? Roles are great.
.. _directory_layout:
The top level of the directory would contain files and directories like so::
hostname1 # if systems need specific variables, put them here
hostname2 # ""
library/ # if any custom modules, put them here (optional)
filter_plugins/ # if any custom filter plugins, put them here (optional)
site.yml # master playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier
foo.sh # <-- script files for use with the script resource
vars/ #
main.yml # <-- variables associated with this role
defaults/ #
main.yml # <-- default lower priority variables for this role
meta/ #
main.yml # <-- role dependencies
monitoring/ # ""
fooapp/ # ""
.. note:: If you find yourself having too many top level playbooks (for instance you have a playbook you wrote for a specific hotfix, etc), it may make sense to have a playbooks/ directory instead. This can be a good idea as you get larger. If you do this, configure your roles_path in ansible.cfg to find your roles location.
.. _use_dynamic_inventory_with_clouds:
Use Dynamic Inventory With Clouds
`````````````````````````````````
If you are using a cloud provider, you should not be managing your inventory in a static file. See :doc:`intro_dynamic_inventory`.
This does not just apply to clouds -- If you have another system maintaining a canonical list of systems
in your infrastructure, usage of dynamic inventory is a great idea in general.
.. _stage_vs_prod:
How to Differentiate Stage vs Production
`````````````````````````````````````````
If managing static inventory, it is frequently asked how to differentiate different types of environments. The following example
shows a good way to do this. Similar methods of grouping could be adapted to dynamic inventory (for instance, consider applying the AWS
tag "environment:production", and you'll get a group of systems automatically discovered named "ec2_tag_environment_production").
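As a hedged sketch (the group name follows the ec2 inventory script's tag naming convention just described), production systems could then be targeted directly::

    - hosts: ec2_tag_environment_production
      tasks:
        - ping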
Let's show a static inventory example though. Below, the *production* file contains the inventory of all of your production hosts.
It is suggested that you define groups based on purpose of the host (roles) and also geography or datacenter location (if applicable)::
boston-webservers
boston-dbservers
.. _groups_and_hosts:
Group And Host Variables
````````````````````````
This section extends on the previous example.
Groups are nice for organization, but that's not all groups are good for. You can also assign variables to them! For instance, atlanta has its own NTP servers, so when setting up ntp.conf, we should use them. Let's set those now::
---
# file: group_vars/atlanta
We can define specific hardware variance in systems in a host_vars file, but avoid them if possible::
foo_agent_port: 86
bar_agent_port: 99
Again, if we are using dynamic inventory sources, many dynamic groups are automatically created. So a tag like "class:webserver" would load in
variables from the file "group_vars/ec2_tag_class_webserver" automatically.
.. _split_by_role:
Top Level Playbooks Are Separated By Role
+++++++++++++++++++++++++++++++++++++++++
In a file like webservers.yml (also at the top level), we simply map the configuration of the webservers group to the roles performed by the webservers group::
- common
- webtier
The idea here is that we can choose to configure our whole infrastructure by "running" site.yml or we could just choose to run a subset by running
webservers.yml. This is analogous to the "--limit" parameter to ansible but a little more explicit::
ansible-playbook site.yml --limit webservers
ansible-playbook webservers.yml
.. _role_organization:
Task And Handler Organization For A Role
++++++++++++++++++++++++++++++++++++++++
Group By Roles
++++++++++++++
We're somewhat repeating ourselves with this tip, but it's worth repeating. A system can be in multiple groups. See :doc:`intro_inventory` and :doc:`intro_patterns`. Having groups named after things like
*webservers* and *dbservers* is repeated in the examples because it's a very powerful concept.
This allows playbooks to target machines based on role, as well as to assign role specific variables
Operating System and Distribution Variance
++++++++++++++++++++++++++++++++++++++++++
When dealing with a parameter that is different between two different operating systems, a great way to handle this is
by using the group_by module.
This makes a dynamic group of hosts matching certain criteria, even if that group is not defined in the inventory file::
---
# talk to all hosts just so we can learn about them
- hosts: all
tasks:
- group_by: key=os_{{ ansible_distribution }}
# now just on the CentOS hosts...
- hosts: os_CentOS
gather_facts: False
tasks:
- # tasks that only happen on CentOS go here
This will throw all systems into a dynamic group based on the operating system name.
If group-specific settings are needed, this can also be done. For example::
---
@ -326,20 +359,29 @@ If group-specific settings are needed, this can also be done. For example::
asdf: 10
---
# file: group_vars/CentOS
# file: group_vars/os_CentOS
asdf: 42
In the above example, CentOS machines get the value of '42' for asdf, but other machines get '10'.
This can be used not only to set variables, but also to apply certain roles to only certain systems.
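For example, a play like the following (a sketch; the role name is
hypothetical) would apply a role only to the dynamically created CentOS
group::

- hosts: os_CentOS
  roles:
    - centos_tweaks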
Alternatively, if only variables are needed::
- hosts: all
tasks:
- include_vars: "os_{{ ansible_distribution }}.yml"
- debug: var=asdf
This will pull in variables based on the OS name.
.. _ship_modules_with_playbooks:
Bundling Ansible Modules With Playbooks
+++++++++++++++++++++++++++++++++++++++
.. versionadded:: 0.5
If a playbook has a "./library" directory relative to its YAML file, this directory can be used to add ansible modules that will
automatically be in the ansible module path. This is a great way to keep modules that go with a playbook together.
automatically be in the ansible module path. This is a great way to keep modules that go with a playbook together. This is shown
in the directory structure example at the start of this section.
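As a minimal sketch (file names are illustrative), such a layout might look
like::

deploy_application.yml
library/
    my_custom_module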
.. _whitespace:
@ -367,6 +409,8 @@ for you. For example, you will probably not need ``vars``,
``vars_files``, ``vars_prompt`` and ``--extra-vars`` all at once,
while also using an external inventory file.
If something feels complicated, it probably is, and may be a good opportunity to simplify things.
.. _version_control:
Version Control
@ -393,3 +437,4 @@ changed the rules that are automating your infrastructure.
Complete playbook files from the github project source
`Mailing List <http://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups

@ -166,11 +166,11 @@ To use this conditional import feature, you'll need facter or ohai installed pri
you can of course push this out with Ansible if you like::
# for facter
ansible -m yum -a "pkg=facter ensure=installed"
ansible -m yum -a "pkg=ruby-json ensure=installed"
ansible -m yum -a "pkg=facter state=present"
ansible -m yum -a "pkg=ruby-json state=present"
# for ohai
ansible -m yum -a "pkg=ohai ensure=installed"
ansible -m yum -a "pkg=ohai state=present"
Ansible's approach to configuration -- separating variables from tasks -- keeps your playbooks
from turning into arbitrary code with ugly nested ifs, conditionals, and so on, and results

@ -161,7 +161,7 @@ This can be optionally paired with "delegate_to" to specify an individual host t
When "run_once" is not used with "delegate_to" it will execute on the first host, as defined by inventory,
in the group(s) of hosts targeted by the play. e.g. webservers[0] if the play targeted "hosts: webservers".
This aproach is similar, although more concise and cleaner than applying a conditional to a task such as::
This approach is similar, although more concise and cleaner than applying a conditional to a task such as::
- command: /opt/application/upgrade_db.py
when: inventory_hostname == webservers[0]
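Pairing "run_once" with "delegate_to", as described above, might look like
this (the host name is illustrative)::

- command: /opt/application/upgrade_db.py
  run_once: true
  delegate_to: web01.example.org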
@ -175,7 +175,7 @@ It may be useful to use a playbook locally, rather than by connecting over SSH.
for assuring the configuration of a system by putting a playbook on a crontab. This may also be used
to run a playbook inside an OS installer, such as an Anaconda kickstart.
To run an entire playbook locally, just set the "hosts:" line to "hosts:127.0.0.1" and then run the playbook like so::
To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so::
ansible-playbook playbook.yml --connection=local
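Alternatively, the connection type can be set in the play itself, so no extra
command line flag is needed (a sketch)::

- hosts: 127.0.0.1
  connection: local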

@ -151,8 +151,8 @@ Just `Control-C` to kill it and run it again with `-K`.
These are deleted immediately after the command is executed. This
only occurs when sudoing from a user like 'bob' to 'timmy', not
when going from 'bob' to 'root', or logging in directly as 'bob' or
'root'. If this concerns you that this data is briefly readable
(not writable), avoid transferring uncrypted passwords with
'root'. If it concerns you that this data is briefly readable
(not writable), avoid transferring unencrypted passwords with
`sudo_user` set. In other cases, '/tmp' is not used and this does
not come into play. Ansible also takes care to not log password
parameters.
@ -196,7 +196,7 @@ it is recommended that you use the more conventional "module: options" format.
This recommended format is used throughout the documentation, but you may
encounter the older format in some playbooks.
Here is what a basic task looks like, as with most modules,
Here is what a basic task looks like. As with most modules,
the service module takes key=value arguments::
tasks:
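  # A sketch completing this truncated example; the task below uses the
  # key=value form described above (task name and arguments are illustrative).
  - name: make sure apache is running
    service: name=httpd state=running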

@ -121,10 +121,17 @@ Here are some examples::
- debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,somekey') }} is value in Redis for somekey"
# dnstxt lookup requires the Python dnspython package
- debug: msg="{{ lookup('dnstxt', 'example.com') }} is a DNS TXT record for example.com"
- debug: msg="{{ lookup('template', './some_template.j2') }} is a value from evaluation of this template"
- debug: msg="{{ lookup('etcd', 'foo') }} is a value from a locally running etcd"
- debug: msg="{{item}}"
with_url:
- 'http://github.com/gremlin.keys'
As an alternative you can also assign lookup plugins to variables or use them
elsewhere. These macros are evaluated each time they are used in a task (or
template)::
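# A sketch, since the example block is truncated in this hunk; the file path
# and variable name are illustrative.
vars:
  motd_value: "{{ lookup('file', '/etc/motd') }}"
tasks:
  - debug: msg="motd value is {{ motd_value }}"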

@ -55,7 +55,7 @@ entered value so you can use it, for instance, with the user module to define a
- name: "my_password2"
prompt: "Enter password2"
private: yes
encrypt: "md5_crypt"
encrypt: "sha512_crypt"
confirm: yes
salt_size: 7

@ -61,19 +61,19 @@ For instance, if deploying multiple wordpress instances, I could
contain all of my wordpress tasks in a single wordpress.yml file, and use it like so::
tasks:
- include: wordpress.yml user=timmy
- include: wordpress.yml user=alice
- include: wordpress.yml user=bob
- include: wordpress.yml wp_user=timmy
- include: wordpress.yml wp_user=alice
- include: wordpress.yml wp_user=bob
If you are running Ansible 1.4 and later, include syntax is streamlined to match roles, and also allows passing list and dictionary parameters::
tasks:
- { include: wordpress.yml, user: timmy, ssh_keys: [ 'keys/one.txt', 'keys/two.txt' ] }
- { include: wordpress.yml, wp_user: timmy, ssh_keys: [ 'keys/one.txt', 'keys/two.txt' ] }
Using either syntax, variables passed in can then be used in the included files. We'll cover them in :doc:`playbooks_variables`.
You can reference them like this::
{{ user }}
{{ wp_user }}
(In addition to the explicitly passed-in parameters, all variables from
the vars section are also available for use here as well.)
@ -85,7 +85,7 @@ which also supports structured variables::
- include: wordpress.yml
vars:
remote_user: timmy
wp_user: timmy
some_list_variable:
- alpha
- beta
@ -153,7 +153,7 @@ Roles
.. versionadded:: 1.2
Now that you have learned about vars_files, tasks, and handlers, what is the best way to organize your playbooks?
Now that you have learned about tasks and handlers, what is the best way to organize your playbooks?
The short answer is to use roles! Roles are ways of automatically loading certain vars_files, tasks, and
handlers based on a known file structure. Grouping content by roles also allows easy sharing of roles with other users.
@ -172,6 +172,7 @@ Example project structure::
tasks/
handlers/
vars/
defaults/
meta/
webservers/
files/
@ -179,6 +180,7 @@ Example project structure::
tasks/
handlers/
vars/
defaults/
meta/
In a playbook, it would look like this::
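# A sketch completing this truncated example, applying roles from the
# structure above:
---
- hosts: webservers
  roles:
    - common
    - webservers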

@ -17,3 +17,4 @@ and adopt these only if they seem relevant or useful to your environment.
playbooks_prompts
playbooks_tags
playbooks_vault
playbooks_startnstep

@ -0,0 +1,34 @@
Start and Step
======================
This shows a few alternative ways to run playbooks. These modes are very useful for testing new plays or debugging.
.. _start_at_task:
Start-at-task
`````````````
If you want to start executing your playbook at a particular task, you can do so with the ``--start-at`` option::
ansible-playbook playbook.yml --start-at="install packages"
The above will start executing your playbook at a task named "install packages".
.. _step:
Step
````
Playbooks can also be executed interactively with ``--step``::
ansible-playbook playbook.yml --step
This will cause ansible to stop on each task, and ask if it should execute that task.
Say you had a task called "configure ssh"; the playbook run will stop and ask::
Perform task: configure ssh (y/n/c):
Answering "y" will execute the task, answering "n" will skip the task, and answering "c"
will continue executing all the remaining tasks without asking.

@ -297,18 +297,100 @@ Get a random number from 1 to 100 but in steps of 10::
{{ 100 |random(start=1, step=10) }} => 51
Shuffle Filter
--------------
.. versionadded:: 1.8
This filter will randomize an existing list, giving a different order every invocation.
To get a random list from an existing list::
{{ ['a','b','c']|shuffle }} => ['c','a','b']
{{ ['a','b','c']|shuffle }} => ['b','c','a']
Note that when used with a non-'listable' item it is a no-op; otherwise it always returns a list.
.. _math_stuff:
Math
--------------------
.. versionadded:: 1.9
To see if something is actually a number::
{{ myvar | isnan }}
Get the logarithm (default is e)::
{{ myvar | log }}
Get the base 10 logarithm::
{{ myvar | log(10) }}
Give me the power of 2! (or 5)::
{{ myvar | pow(2) }}
{{ myvar | pow(5) }}
Square root, or the 5th::
{{ myvar | root }}
{{ myvar | root(5) }}
Note that jinja2 already provides some of these, like abs() and round().
.. _hash_filters:
Hashing filters
--------------------
.. versionadded:: 1.9
To get the sha1 hash of a string::
{{ 'test1'|hash('sha1') }}
To get the md5 hash of a string::
{{ 'test1'|hash('md5') }}
Get a string checksum::
{{ 'test2'|checksum }}
Other hashes (platform dependent)::
{{ 'test2'|hash('blowfish') }}
To get a sha512 password hash (random salt)::
{{ 'passwordsaresecret'|password_hash('sha512') }}
To get a sha256 password hash with a specific salt::
{{ 'secretpassword'|password_hash('sha256', 'mysecretsalt') }}
Hash types available depend on the master system running ansible;
'hash' depends on hashlib, password_hash depends on crypt.
.. _other_useful_filters:
Other Useful Filters
--------------------
To concatenate a list into a string::
{{ list | join(" ") }}
To get the base name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt'::
{{ path | basename }}
To get the directory from a path::
@ -318,14 +400,18 @@ To expand a path containing a tilde (`~`) character (new in version 1.5)::
{{ path | expanduser }}
To get the real path of a link (new in version 1.8)::
{{ path | readlink }}
To work with Base64 encoded strings::
{{ encoded | b64decode }}
{{ decoded | b64encode }}
To take an md5sum of a filename::

{{ filename | md5 }}

To create a UUID from a string (new in version 1.9)::

{{ hostname | to_uuid }}
To cast values as certain types, such as when you input a string as "True" from a vars_prompt and the system
doesn't know it is a boolean value::
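# A sketch, since the example block is truncated in this hunk; the variable
# name is illustrative.
- debug: msg=test
  when: some_string_value | bool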
@ -355,6 +441,9 @@ To replace text in a string with regex, use the "regex_replace" filter::
# convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
.. note:: If "regex_replace" filter is used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments),
then you need to escape backreferences (e.g. ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
A few useful filters are typically added with each new Ansible release. The development documentation shows
how to extend Ansible filters by writing your own as plugins, though in general, we encourage new ones
to be added to core so everyone can make use of them.
@ -685,7 +774,7 @@ And you will see the following fact added::
"ansible_local": {
"preferences": {
"general": {
"asdf" : "1",
"asdf" : "1",
"bar" : "2"
}
}
@ -703,7 +792,7 @@ can allow that fact to be used during that particular play. Otherwise, it will
Here is an example of what that might look like::
- hosts: webservers
tasks:
- name: create directory for ansible custom facts
file: state=directory recurse=yes path=/etc/ansible/facts.d
- name: install custom impi fact
@ -740,10 +829,14 @@ the fact that they have not been communicated with in the current execution of /
To configure fact caching, enable it in ansible.cfg as follows::
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400 # seconds
fact_caching_timeout = 86400
# seconds
You might also want to change the 'gathering' setting to 'smart' or 'explicit' or set gather_facts to False in most plays.
At the time of writing, Redis is the only supported fact caching engine.
To get redis up and running, perform the equivalent OS commands::
yum install redis
@ -838,6 +931,7 @@ A frequently used idiom is walking a group to find all IP addresses in that grou
{% endfor %}
An example of this could include pointing a frontend proxy server to all of the app servers, setting up the correct firewall rules between servers, etc.
You need to make sure that the facts of those hosts have been populated beforehand, for example by running a play against them if the facts have not been cached recently (fact caching was added in Ansible 1.8).
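A complete version of that loop might look like this (the group and interface
names are illustrative)::

{% for host in groups['app_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}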
Additionally, *inventory_hostname* is the name of the hostname as configured in Ansible's inventory host file. This can
be useful for when you don't want to rely on the discovered hostname `ansible_hostname` or for other mysterious
@ -846,6 +940,8 @@ period, without the rest of the domain.
*play_hosts* is available as a list of hostnames that are in scope for the current play. This may be useful for filling out templates with multiple hostnames or for injecting the list into the rules for a load balancer.
*delegate_to* is the inventory hostname of the host that the current task has been delegated to using 'delegate_to'.
Don't worry about any of this unless you think you need it. You'll know when you do.
Also available: *inventory_dir* is the pathname of the directory holding Ansible's inventory host file, and *inventory_file* is the full path and filename of that inventory host file.
@ -889,7 +985,7 @@ The contents of each variables file is a simple YAML dictionary, like this::
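# A sketch of such a variables file, since the example is truncated in this
# hunk; keys and values are illustrative.
---
somevar: somevalue
password: magic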
.. note::
It's also possible to keep per-host and per-group variables in very
similar files, this is covered in :doc:`intro_patterns`.
similar files, this is covered in :ref:`splitting_out_vars`.
.. _passing_variables_on_the_command_line:
@ -948,9 +1044,10 @@ a use for it.
If multiple variables of the same name are defined in different places, they win in a certain order (a short illustration follows this list)::
* -e variables always win
* then comes "most everything else"
* then comes variables defined in inventory
* extra vars (-e in the command line) always win
* then comes connection variables defined in inventory (ansible_ssh_user, etc)
* then comes "most everything else" (command line switches, vars in play, included vars, role vars, etc)
* then comes the rest of the variables defined in inventory
* then comes facts discovered about a system
* then "role defaults", which are the most "defaulty" and lose in priority to everything.
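To illustrate the ordering above (paths and values are illustrative), a role
default loses to an inventory group variable, which in turn loses to -e::

# roles/ntp/defaults/main.yml -- a "role default", lowest priority
ntp_server: ntp.example.com

# group_vars/atlanta -- an inventory variable, overrides the role default
ntp_server: ntp-atlanta.example.com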

@ -3,7 +3,7 @@ Quickstart Video
We've recorded a short video that shows how to get started with Ansible that you may like to use alongside the documentation.
The `quickstart video <http://ansible.com/ansible-resources>`_ is about 20 minutes long and will show you some of the basics about your
The `quickstart video <http://ansible.com/ansible-resources>`_ is about 30 minutes long and will show you some of the basics about your
first steps with Ansible.
Enjoy, and be sure to visit the rest of the documentation to learn more.

@ -11,8 +11,8 @@
# some basic default values...
hostfile = /etc/ansible/hosts
library = /usr/share/ansible
inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 5
@ -21,7 +21,7 @@ sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
remote_port = 22
#remote_port = 22
module_lang = C
# plays will gather facts by default, which contain information about
@ -147,13 +147,18 @@ filter_plugins = /usr/share/ansible_plugins/filter_plugins
# avoid issues.
#http_user_agent = ansible-agent
# if set to a persistant type (not 'memory', for example 'redis') fact values
# if set to a persistent type (not 'memory', for example 'redis') fact values
# from previous runs in Ansible will be stored. This may be useful when
# wanting to use, for example, IP information from one group of servers
# without having to talk to them in the same playbook run to get their
# current IP information.
fact_caching = memory
# retry files
#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry
[paramiko_connection]
# uncomment this line to cause the paramiko connection plugin to not record new host

@ -1,11 +1,18 @@
# Script to set a windows computer up for remoting
# The script checks the current WinRM/Remoting configuration and makes the necessary changes
# set $VerbosePreference="Continue" before running the script in order to see the output of the script
# Configure a Windows host for remote management with Ansible
# -----------------------------------------------------------
#
# This script checks the current WinRM/PSRemoting configuration and makes the
# necessary changes to allow Ansible to connect, authenticate and execute
# PowerShell commands.
#
# Set $VerbosePreference = "Continue" before running the script in order to
# see the output messages.
#
# Written by Trond Hindenes <trond@hindenes.com>
# Updated by Chris Church <cchurch@ansible.com>
#
# Version 1.0 - July 6th, 2014
# Version 1.1 - November 11th, 2014
Param (
[string]$SubjectName = $env:COMPUTERNAME,
@ -14,7 +21,6 @@ Param (
)
#region function defs
Function New-LegacySelfSignedCert
{
Param (
@ -22,10 +28,10 @@ Function New-LegacySelfSignedCert
[int]$ValidDays = 365
)
$name = new-object -com "X509Enrollment.CX500DistinguishedName.1"
$name = New-Object -COM "X509Enrollment.CX500DistinguishedName.1"
$name.Encode("CN=$SubjectName", 0)
$key = new-object -com "X509Enrollment.CX509PrivateKey.1"
$key = New-Object -COM "X509Enrollment.CX509PrivateKey.1"
$key.ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
$key.KeySpec = 1
$key.Length = 1024
@ -33,149 +39,160 @@ Function New-LegacySelfSignedCert
$key.MachineContext = 1
$key.Create()
$serverauthoid = new-object -com "X509Enrollment.CObjectId.1"
$serverauthoid = New-Object -COM "X509Enrollment.CObjectId.1"
$serverauthoid.InitializeFromValue("1.3.6.1.5.5.7.3.1")
$ekuoids = new-object -com "X509Enrollment.CObjectIds.1"
$ekuoids.add($serverauthoid)
$ekuext = new-object -com "X509Enrollment.CX509ExtensionEnhancedKeyUsage.1"
$ekuoids = New-Object -COM "X509Enrollment.CObjectIds.1"
$ekuoids.Add($serverauthoid)
$ekuext = New-Object -COM "X509Enrollment.CX509ExtensionEnhancedKeyUsage.1"
$ekuext.InitializeEncode($ekuoids)
$cert = new-object -com "X509Enrollment.CX509CertificateRequestCertificate.1"
$cert = New-Object -COM "X509Enrollment.CX509CertificateRequestCertificate.1"
$cert.InitializeFromPrivateKey(2, $key, "")
$cert.Subject = $name
$cert.Issuer = $cert.Subject
$cert.NotBefore = (get-date).addDays(-1)
$cert.NotBefore = (Get-Date).AddDays(-1)
$cert.NotAfter = $cert.NotBefore.AddDays($ValidDays)
$cert.X509Extensions.Add($ekuext)
$cert.Encode()
$enrollment = new-object -com "X509Enrollment.CX509Enrollment.1"
$enrollment = New-Object -COM "X509Enrollment.CX509Enrollment.1"
$enrollment.InitializeFromRequest($cert)
$certdata = $enrollment.CreateRequest(0)
$enrollment.InstallResponse(2, $certdata, 0, "")
#return the thumprint of the last installed cert
ls "Cert:\LocalMachine\my"| Sort-Object notbefore -Descending | select -First 1 | select -expand Thumbprint
# Return the thumbprint of the last installed cert.
Get-ChildItem "Cert:\LocalMachine\my"| Sort-Object NotBefore -Descending | Select -First 1 | Select -Expand Thumbprint
}
#endregion
#Start script
# Setup error handling.
Trap
{
$_
Exit 1
}
$ErrorActionPreference = "Stop"
#Detect PowerShell version
if ($PSVersionTable.PSVersion.Major -lt 3)
# Detect PowerShell version.
If ($PSVersionTable.PSVersion.Major -lt 3)
{
Write-Error "PowerShell/Windows Management Framework needs to be updated to 3 or higher. Stopping script"
Throw "PowerShell version 3 or higher is required."
}
#Detect OS
$Win32_OS = Get-WmiObject Win32_OperatingSystem
switch ($Win32_OS.Version)
{
"6.2.9200" {$OSVersion = "Windows Server 2012"}
"6.1.7601" {$OSVersion = "Windows Server 2008R2"}
}
# Find and start the WinRM service.
Write-Verbose "Verifying WinRM service."
If (!(Get-Service "WinRM"))
{
Throw "Unable to find the WinRM service."
}
ElseIf ((Get-Service "WinRM").Status -ne "Running")
{
Write-Verbose "Starting WinRM service."
Start-Service -Name "WinRM" -ErrorAction Stop
}
#Set up remoting
Write-verbose "Verifying WS-MAN"
if (!(get-service "WinRM"))
{
Write-Error "I couldnt find the winRM service on this computer. Stopping"
}
Elseif ((get-service "WinRM").Status -ne "Running")
{
Write-Verbose "Starting WinRM"
Start-Service -Name "WinRM" -ErrorAction Stop
}
#At this point, winrm should be running
#Check that we have a ps session config
if (!(Get-PSSessionConfiguration -verbose:$false) -or (!(get-childitem WSMan:\localhost\Listener)))
{
Write-Verbose "PS remoting is not enabled. Activating"
try
{
Enable-PSRemoting -Force -ErrorAction SilentlyContinue
}
catch{}
}
Else
{
Write-Verbose "PS remoting is already active and running"
}
#At this point, test a remoting connection to localhost, which should work
$result = invoke-command -ComputerName localhost -ScriptBlock {$env:computername} -ErrorVariable localremotingerror -ErrorAction SilentlyContinue
$options = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$resultssl = New-PSSession -UseSSL -ComputerName "localhost" -SessionOption $options -ErrorVariable localremotingsslerror -ErrorAction SilentlyContinue
if (!$result -and $resultssl)
{
Write-Verbose "HTTP-based sessions not enabled, HTTPS based sessions enabled"
}
ElseIf (!$result -and !$resultssl)
{
Write-error "Could not establish session on either HTTP or HTTPS. Breaking"
}
#at this point, make sure there is a SSL-based listener
$listeners = dir WSMan:\localhost\Listener
if (!($listeners | where {$_.Keys -like "TRANSPORT=HTTPS"}))
{
#HTTPS-based endpoint does not exist.
if (($CreateSelfSignedCert) -and ($OSVersion -notmatch "2012"))
# WinRM should be running; check that we have a PS session config.
If (!(Get-PSSessionConfiguration -Verbose:$false) -or (!(Get-ChildItem WSMan:\localhost\Listener)))
{
Write-Verbose "Enabling PS Remoting."
Enable-PSRemoting -Force -ErrorAction Stop
}
Else
{
Write-Verbose "PS Remoting is already enabled."
}
# Test a remoting connection to localhost, which should work.
$httpResult = Invoke-Command -ComputerName "localhost" -ScriptBlock {$env:COMPUTERNAME} -ErrorVariable httpError -ErrorAction SilentlyContinue
$httpsOptions = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$httpsResult = New-PSSession -UseSSL -ComputerName "localhost" -SessionOption $httpsOptions -ErrorVariable httpsError -ErrorAction SilentlyContinue
If ($httpResult -and $httpsResult)
{
Write-Verbose "HTTP and HTTPS sessions are enabled."
}
ElseIf ($httpsResult -and !$httpResult)
{
Write-Verbose "HTTP sessions are disabled, HTTPS sessions are enabled."
}
ElseIf ($httpResult -and !$httpsResult)
{
Write-Verbose "HTTPS sessions are disabled, HTTP sessions are enabled."
}
Else
{
Throw "Unable to establish an HTTP or HTTPS remoting session."
}
# Make sure there is an SSL listener.
$listeners = Get-ChildItem WSMan:\localhost\Listener
If (!($listeners | Where {$_.Keys -like "TRANSPORT=HTTPS"}))
{
# HTTPS-based endpoint does not exist.
If (Get-Command "New-SelfSignedCertificate" -ErrorAction SilentlyContinue)
{
$thumprint = New-LegacySelfSignedCert -SubjectName $env:COMPUTERNAME
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation "Cert:\LocalMachine\My"
$thumbprint = $cert.Thumbprint
}
if (($CreateSelfSignedCert) -and ($OSVersion -match "2012"))
Else
{
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation "Cert:\LocalMachine\My"
$thumprint = $cert.Thumbprint
$thumbprint = New-LegacySelfSignedCert -SubjectName $env:COMPUTERNAME
}
# Create the hashtables of settings to be used.
$valueset = @{}
$valueset.add('Hostname',$env:COMPUTERNAME)
$valueset.add('CertificateThumbprint',$thumprint)
$valueset.Add('Hostname', $env:COMPUTERNAME)
$valueset.Add('CertificateThumbprint', $thumbprint)
$selectorset = @{}
$selectorset.add('Transport','HTTPS')
$selectorset.add('Address','*')
Write-Verbose "Enabling SSL-based remoting"
New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
}
Else
{
Write-Verbose "SSL-based remoting already active"
}
$selectorset.Add('Transport', 'HTTPS')
$selectorset.Add('Address', '*')
Write-Verbose "Enabling SSL listener."
New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
}
Else
{
Write-Verbose "SSL listener is already active."
}
#Check for basic authentication
$basicauthsetting = Get-ChildItem WSMan:\localhost\Service\Auth | where {$_.Name -eq "Basic"}
if (($basicauthsetting.Value) -eq $false)
{
Write-Verbose "Enabling basic auth"
# Check for basic authentication.
$basicAuthSetting = Get-ChildItem WSMan:\localhost\Service\Auth | Where {$_.Name -eq "Basic"}
If (($basicAuthSetting.Value) -eq $false)
{
Write-Verbose "Enabling basic auth support."
Set-Item -Path "WSMan:\localhost\Service\Auth\Basic" -Value $true
}
Else
{
Write-verbose "basic auth already enabled"
}
#FIrewall
netsh advfirewall firewall add rule Profile=public name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow
}
Else
{
Write-Verbose "Basic auth is already enabled."
}
# Configure firewall to allow WinRM HTTPS connections.
$fwtest1 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS"
$fwtest2 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS" profile=any
If ($fwtest1.count -lt 5)
{
Write-Verbose "Adding firewall rule to allow WinRM HTTPS."
netsh advfirewall firewall add rule profile=any name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow
}
ElseIf (($fwtest1.count -ge 5) -and ($fwtest2.count -lt 5))
{
Write-Verbose "Updating firewall rule to allow WinRM HTTPS for any profile."
netsh advfirewall firewall set rule name="Allow WinRM HTTPS" new profile=any
}
Else
{
Write-Verbose "Firewall rule already exists to allow WinRM HTTPS."
}
Write-Verbose "PS Remoting successfully setup for Ansible"
Write-Verbose "PS Remoting has been successfully configured for Ansible."

@ -62,13 +62,24 @@ if ([Environment]::OSVersion.Version.Major -gt 6)
$osminor = [environment]::OSVersion.Version.Minor
$architecture = $ENV:PROCESSOR_ARCHITECTURE
if ($architecture -eq "AMD64")
{
$architecture = "x64"
}
else
{
$architecture = "x86"
}
if ($osminor -eq 1)
{
$DownloadUrl = "http://download.microsoft.com/download/E/7/6/E76850B8-DA6E-4FF5-8CCE-A24FC513FD16/Windows6.1-KB2506143-x64.msu"
$DownloadUrl = "http://download.microsoft.com/download/E/7/6/E76850B8-DA6E-4FF5-8CCE-A24FC513FD16/Windows6.1-KB2506143-" + $architecture + ".msu"
}
elseif ($osminor -eq 0)
{
$DownloadUrl = "http://download.microsoft.com/download/E/7/6/E76850B8-DA6E-4FF5-8CCE-A24FC513FD16/Windows6.0-KB2506146-x64.msu"
$DownloadUrl = "http://download.microsoft.com/download/E/7/6/E76850B8-DA6E-4FF5-8CCE-A24FC513FD16/Windows6.0-KB2506146-" + $architecture + ".msu"
}
else
{

@ -1,45 +1,78 @@
#!/bin/bash
# usage: source ./hacking/env-setup [-q]
# usage: source hacking/env-setup [-q]
# modifies environment for running Ansible from checkout
# Default values for shell variables we use
PYTHONPATH=${PYTHONPATH-""}
PATH=${PATH-""}
MANPATH=${MANPATH-""}
verbosity=${1-info} # Defaults to `info' if unspecified
if [ "$verbosity" = -q ]; then
verbosity=silent
fi
# When run using source as directed, $0 gets set to bash, so we must use $BASH_SOURCE
if [ -n "$BASH_SOURCE" ] ; then
HACKING_DIR=`dirname $BASH_SOURCE`
elif [ $(basename $0) = "env-setup" ]; then
HACKING_DIR=`dirname $0`
HACKING_DIR=$(dirname "$BASH_SOURCE")
elif [ $(basename -- "$0") = "env-setup" ]; then
HACKING_DIR=$(dirname "$0")
elif [ -n "$KSH_VERSION" ]; then
HACKING_DIR=$(dirname "${.sh.file}")
else
HACKING_DIR="$PWD/hacking"
fi
# The below is an alternative to readlink -fn which doesn't exist on OS X
# Source: http://stackoverflow.com/a/1678636
FULL_PATH=`python -c "import os; print(os.path.realpath('$HACKING_DIR'))"`
ANSIBLE_HOME=`dirname "$FULL_PATH"`
FULL_PATH=$(python -c "import os; print(os.path.realpath('$HACKING_DIR'))")
ANSIBLE_HOME=$(dirname "$FULL_PATH")
PREFIX_PYTHONPATH="$ANSIBLE_HOME/lib"
PREFIX_PATH="$ANSIBLE_HOME/bin"
PREFIX_MANPATH="$ANSIBLE_HOME/docs/man"
[[ $PYTHONPATH != ${PREFIX_PYTHONPATH}* ]] && export PYTHONPATH=$PREFIX_PYTHONPATH:$PYTHONPATH
[[ $PATH != ${PREFIX_PATH}* ]] && export PATH=$PREFIX_PATH:$PATH
unset ANSIBLE_LIBRARY
export ANSIBLE_LIBRARY="$ANSIBLE_HOME/library:`python $HACKING_DIR/get_library.py`"
[[ $MANPATH != ${PREFIX_MANPATH}* ]] && export MANPATH=$PREFIX_MANPATH:$MANPATH
# Print out values unless -q is set
if [ $# -eq 0 -o "$1" != "-q" ] ; then
echo ""
echo "Setting up Ansible to run out of checkout..."
echo ""
echo "PATH=$PATH"
echo "PYTHONPATH=$PYTHONPATH"
echo "ANSIBLE_LIBRARY=$ANSIBLE_LIBRARY"
echo "MANPATH=$MANPATH"
echo ""
echo "Remember, you may wish to specify your host file with -i"
echo ""
echo "Done!"
echo ""
expr "$PYTHONPATH" : "${PREFIX_PYTHONPATH}.*" > /dev/null || export PYTHONPATH="$PREFIX_PYTHONPATH:$PYTHONPATH"
expr "$PATH" : "${PREFIX_PATH}.*" > /dev/null || export PATH="$PREFIX_PATH:$PATH"
expr "$MANPATH" : "${PREFIX_MANPATH}.*" > /dev/null || export MANPATH="$PREFIX_MANPATH:$MANPATH"
#
# Generate egg_info so that pkg_resources works
#
# Do the work in a function so we don't repeat ourselves later
gen_egg_info()
{
python setup.py egg_info
if [ -e "$PREFIX_PYTHONPATH/ansible.egg-info" ] ; then
rm -r "$PREFIX_PYTHONPATH/ansible.egg-info"
fi
mv "ansible.egg-info" "$PREFIX_PYTHONPATH"
}
if [ "$ANSIBLE_HOME" != "$PWD" ] ; then
current_dir="$PWD"
else
current_dir="$ANSIBLE_HOME"
fi
cd "$ANSIBLE_HOME"
if [ "$verbosity" = silent ] ; then
gen_egg_info > /dev/null 2>&1
else
gen_egg_info
fi
cd "$current_dir"
if [ "$verbosity" != silent ] ; then
cat <<- EOF
Setting up Ansible to run out of checkout...
PATH=$PATH
PYTHONPATH=$PYTHONPATH
MANPATH=$MANPATH
Remember, you may wish to specify your host file with -i
Done!
EOF
fi

@ -36,6 +36,16 @@ end
set -gx ANSIBLE_LIBRARY $ANSIBLE_HOME/library
# Generate egg_info so that pkg_resources works
pushd $ANSIBLE_HOME
python setup.py egg_info
if test -e $PREFIX_PYTHONPATH/ansible*.egg-info
rm -r $PREFIX_PYTHONPATH/ansible*.egg-info
end
mv ansible*egg-info $PREFIX_PYTHONPATH
popd
if set -q argv
switch $argv
case '-q' '--quiet'

@ -1,5 +1,6 @@
#!/usr/bin/env python
# (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# (c) 2012-2014, Michael DeHaan <michael@ansible.com> and others
#
# This file is part of Ansible
#
@ -44,7 +45,7 @@ TO_OLD_TO_BE_NOTABLE = 1.0
# Get parent directory of the directory this script lives in
MODULEDIR=os.path.abspath(os.path.join(
os.path.dirname(os.path.realpath(__file__)), os.pardir, 'library'
os.path.dirname(os.path.realpath(__file__)), os.pardir, 'lib', 'ansible', 'modules'
))
# The name of the DOCUMENTATION template
@ -58,6 +59,8 @@ _MODULE = re.compile(r"M\(([^)]+)\)")
_URL = re.compile(r"U\(([^)]+)\)")
_CONST = re.compile(r"C\(([^)]+)\)")
DEPRECATED = " (D)"
NOTCORE = " (E)"
#####################################################################################
def rst_ify(text):
@ -106,7 +109,9 @@ def write_data(text, options, outputname, module):
''' dumps module output to a file or the screen, as requested '''
if options.output_dir is not None:
f = open(os.path.join(options.output_dir, outputname % module), 'w')
fname = os.path.join(options.output_dir, outputname % module)
fname = fname.replace(".py","")
f = open(fname, 'w')
f.write(text.encode('utf-8'))
f.close()
else:
@ -114,28 +119,54 @@ def write_data(text, options, outputname, module):
#####################################################################################
def list_modules(module_dir):
''' returns a hash of categories, each category being a hash of module names to file paths '''
categories = dict(all=dict())
files = glob.glob("%s/*" % module_dir)
for d in files:
if os.path.isdir(d):
files2 = glob.glob("%s/*" % d)
for f in files2:
def list_modules(module_dir, depth=0):
''' returns a hash of categories, each category being a hash of module names to file paths '''
if f.endswith(".ps1"):
categories = dict(all=dict(),_aliases=dict())
if depth <= 3: # limit # of subdirs
files = glob.glob("%s/*" % module_dir)
for d in files:
category = os.path.splitext(os.path.basename(d))[0]
if os.path.isdir(d):
res = list_modules(d, depth + 1)
for key in res.keys():
if key in categories:
categories[key] = ansible.utils.merge_hash(categories[key], res[key])
res.pop(key, None)
if depth < 2:
categories.update(res)
else:
category = module_dir.split("/")[-1]
if not category in categories:
categories[category] = res
else:
categories[category].update(res)
else:
module = category
category = os.path.basename(module_dir)
if not d.endswith(".py") or d.endswith('__init__.py'):
# windows powershell modules have documentation stubs in python docstring
# format (they are not executed) so skip the ps1 format files
continue
elif module.startswith("_") and os.path.islink(d):
source = os.path.splitext(os.path.basename(os.path.realpath(d)))[0]
module = module.replace("_","",1)
if not d in categories['_aliases']:
categories['_aliases'][source] = [module]
else:
categories['_aliases'][source].update(module)
continue
tokens = f.split("/")
module = tokens[-1]
category = tokens[-2]
if not category in categories:
categories[category] = {}
categories[category][module] = f
categories['all'][module] = f
categories[category][module] = d
categories['all'][module] = d
return categories
#####################################################################################
@ -184,25 +215,48 @@ def jinja2_environment(template_dir, typ):
#####################################################################################
def process_module(module, options, env, template, outputname, module_map):
print "rendering: %s" % module
def process_module(module, options, env, template, outputname, module_map, aliases):
fname = module_map[module]
if isinstance(fname, dict):
return "SKIPPED"
basename = os.path.basename(fname)
deprecated = False
# ignore files with extensions
if "." in os.path.basename(fname):
if not basename.endswith(".py"):
return
elif module.startswith("_"):
if os.path.islink(fname):
return # ignore, its an alias
deprecated = True
module = module.replace("_","",1)
print "rendering: %s" % module
# use ansible core library to parse out doc metadata YAML and plaintext examples
doc, examples = ansible.utils.module_docs.get_docstring(fname, verbose=options.verbose)
# crash if module is missing documentation and not explicitly hidden from docs index
if doc is None and module not in ansible.utils.module_docs.BLACKLIST_MODULES:
sys.stderr.write("*** ERROR: CORE MODULE MISSING DOCUMENTATION: %s, %s ***\n" % (fname, module))
sys.exit(1)
if doc is None:
return "SKIPPED"
if module in ansible.utils.module_docs.BLACKLIST_MODULES:
return "SKIPPED"
else:
sys.stderr.write("*** ERROR: MODULE MISSING DOCUMENTATION: %s, %s ***\n" % (fname, module))
sys.exit(1)
if deprecated and 'deprecated' not in doc:
sys.stderr.write("*** ERROR: DEPRECATED MODULE MISSING 'deprecated' DOCUMENTATION: %s, %s ***\n" % (fname, module))
sys.exit(1)
if "/core/" in fname:
doc['core'] = True
else:
doc['core'] = False
if module in aliases:
doc['aliases'] = aliases[module]
all_keys = []
@ -226,9 +280,10 @@ def process_module(module, options, env, template, outputname, module_map):
for (k,v) in doc['options'].iteritems():
all_keys.append(k)
all_keys = sorted(all_keys)
doc['option_keys'] = all_keys
doc['filename'] = fname
doc['docuri'] = doc['module'].replace('_', '-')
doc['now_date'] = datetime.date.today().strftime('%Y-%m-%d')
@ -239,13 +294,32 @@ def process_module(module, options, env, template, outputname, module_map):
text = template.render(doc)
write_data(text, options, outputname, module)
return doc['short_description']
#####################################################################################
def print_modules(module, category_file, deprecated, core, options, env, template, outputname, module_map, aliases):
modstring = module
modname = module
if module in deprecated:
modstring = modstring + DEPRECATED
modname = "_" + module
elif module not in core:
modstring = modstring + NOTCORE
result = process_module(modname, options, env, template, outputname, module_map, aliases)
if result != "SKIPPED":
category_file.write(" %s - %s <%s_module>\n" % (modstring, result, module))
def process_category(category, categories, options, env, template, outputname):
module_map = categories[category]
aliases = {}
if '_aliases' in categories:
aliases = categories['_aliases']
category_file_path = os.path.join(options.output_dir, "list_of_%s_modules.rst" % category)
category_file = open(category_file_path, "w")
print "*** recording category %s in %s ***" % (category, category_file_path)
@ -255,7 +329,27 @@ def process_category(category, categories, options, env, template, outputname):
category = category.replace("_"," ")
category = category.title()
modules = module_map.keys()
modules = []
deprecated = []
core = []
for module in module_map.keys():
if isinstance(module_map[module], dict):
for mod in module_map[module].keys():
if mod.startswith("_"):
mod = mod.replace("_","",1)
deprecated.append(mod)
elif '/core/' in module_map[module][mod]:
core.append(mod)
else:
if module.startswith("_"):
module = module.replace("_","",1)
deprecated.append(module)
elif '/core/' in module_map[module]:
core.append(module)
modules.append(module)
modules.sort()
category_header = "%s Modules" % (category.title())
@ -265,17 +359,34 @@ def process_category(category, categories, options, env, template, outputname):
%s
%s
.. toctree::
:maxdepth: 1
.. toctree:: :maxdepth: 1
""" % (category_header, underscores))
sections = []
for module in modules:
result = process_module(module, options, env, template, outputname, module_map)
if result != "SKIPPED":
category_file.write(" %s_module\n" % module)
if module in module_map and isinstance(module_map[module], dict):
sections.append(module)
continue
else:
print_modules(module, category_file, deprecated, core, options, env, template, outputname, module_map, aliases)
sections.sort()
for section in sections:
category_file.write("\n%s\n%s\n\n" % (section.replace("_"," ").title(),'-' * len(section)))
category_file.write(".. toctree:: :maxdepth: 1\n\n")
section_modules = module_map[section].keys()
section_modules.sort()
#for module in module_map[section]:
for module in section_modules:
print_modules(module, category_file, deprecated, core, options, env, template, outputname, module_map[section], aliases)
category_file.write("""\n\n
.. note::
- %s: This marks a module as deprecated, which means a module is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.
- %s: This marks a module as 'extras', which means it ships with ansible but may be a newer module and possibly (but not necessarily) less actively maintained than 'core' modules.
- Tickets filed on modules are filed to different repos than those on the main open source project. Core module tickets should be filed at `ansible/ansible-modules-core on GitHub <http://github.com/ansible/ansible-modules-core>`_, extras tickets to `ansible/ansible-modules-extras on GitHub <http://github.com/ansible/ansible-modules-extras>`_
""" % (DEPRECATED, NOTCORE))
category_file.close()
# TODO: end a new category file
@ -320,6 +431,8 @@ def main():
category_list_file.write(" :maxdepth: 1\n\n")
for category in category_names:
if category.startswith("_"):
continue
category_list_file.write(" list_of_%s_modules\n" % category)
process_category(category, categories, options, env, template, outputname)

@ -21,6 +21,17 @@
#
--------------------------------------------#}
{% if aliases is defined -%}
Aliases: @{ ','.join(aliases) }@
{% endif %}
{% if deprecated is defined -%}
DEPRECATED
----------
@{ deprecated }@
{% endif %}
Synopsis
--------
@ -101,3 +112,42 @@ Examples
{% endfor %}
{% endif %}
{% if not deprecated %}
{% if core %}
This is a Core Module
---------------------
The source of this module is hosted on GitHub in the `ansible-modules-core <http://github.com/ansible/ansible-modules-core>`_ repo.
If you believe you have found a bug in this module, and are already running the latest stable or development version of Ansible, first look in the `issue tracker at github.com/ansible/ansible-modules-core <http://github.com/ansible/ansible-modules-core>`_ to see if a bug has already been filed. If not, we would be grateful if you would file one.
Should you have a question rather than a bug report, inquiries are welcome on the `ansible-project google group <https://groups.google.com/forum/#!forum/ansible-project>`_ or on Ansible's "#ansible" channel, located on irc.freenode.net. Development oriented topics should instead use the similar `ansible-devel google group <https://groups.google.com/forum/#!forum/ansible-devel>`_.
Documentation updates for this module can also be edited directly by submitting a pull request to the module source code, just look for the "DOCUMENTATION" block in the source tree.
This is a "core" ansible module, which means it will receive slightly higher priority for all requests than those in the "extras" repos.
{% else %}
This is an Extras Module
------------------------
The source of this module is hosted on GitHub in the `ansible-modules-extras <http://github.com/ansible/ansible-modules-extras>`_ repo.
If you believe you have found a bug in this module, and are already running the latest stable or development version of Ansible, first look in the `issue tracker at github.com/ansible/ansible-modules-extras <http://github.com/ansible/ansible-modules-extras>`_ to see if a bug has already been filed. If not, we would be grateful if you would file one.
Should you have a question rather than a bug report, inquiries are welcome on the `ansible-project google group <https://groups.google.com/forum/#!forum/ansible-project>`_ or on Ansible's "#ansible" channel, located on irc.freenode.net. Development oriented topics should instead use the similar `ansible-devel google group <https://groups.google.com/forum/#!forum/ansible-devel>`_.
Documentation updates for this module can also be edited directly by submitting a pull request to the module source code, just look for the "DOCUMENTATION" block in the source tree.
Note that this module is designated an "extras" module. Non-core modules are still fully usable, but may receive slightly lower response rates for issues and pull requests.
Popular "extras" modules may be promoted to core modules over time.
{% endif %}
{% endif %}
For help in developing on modules, should you be so inclined, please read :doc:`community`, :doc:`developing_test_pr` and :doc:`developing_modules`.

@ -58,7 +58,7 @@ def parse():
parser.add_option('-D', '--debugger', dest='debugger',
help="path to python debugger (e.g. /usr/bin/pdb)")
parser.add_option('-I', '--interpreter', dest='interpreter',
help="path to interpeter to use for this module (e.g. ansible_python_interpreter=/usr/bin/python)",
help="path to interpreter to use for this module (e.g. ansible_python_interpreter=/usr/bin/python)",
metavar='INTERPRETER_TYPE=INTERPRETER_PATH')
parser.add_option('-c', '--check', dest='check', action='store_true',
help="run the module in check mode")
@ -104,7 +104,7 @@ def boilerplate_module(modfile, args, interpreter, check):
inject = {}
if interpreter:
if '=' not in interpreter:
print 'interpeter must by in the form of ansible_python_interpreter=/usr/bin/python'
print 'interpreter must be in the form of ansible_python_interpreter=/usr/bin/python'
sys.exit(1)
interpreter_type, interpreter_path = interpreter.split('=')
if not interpreter_type.startswith('ansible_'):

@ -0,0 +1,3 @@
#!/bin/sh
git pull --rebase
git submodule update --init --recursive

@ -1,748 +0,0 @@
#!/usr/bin/env python
# Copyright 2013 Google Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# This is a custom functional test script for the Google Compute Engine
# ansible modules. In order to run these tests, you must:
# 1) Create a Google Cloud Platform account and enable the Google
# Compute Engine service and billing
# 2) Download, install, and configure 'gcutil'
# see [https://developers.google.com/compute/docs/gcutil/]
# 3) Convert your GCE Service Account private key from PKCS12 to PEM format
# $ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret \
# > -nodes -nocerts | openssl rsa -out pkey.pem
# 4) Make sure you have libcloud 0.13.3 or later installed.
# 5) Make sure you have a libcloud 'secrets.py' file in your PYTHONPATH
# 6) Set GCE_PARAMS and GCE_KEYWORD_PARMS in your 'secrets.py' file.
# 7) Set up a simple hosts file
# $ echo 127.0.0.1 > ~/ansible_hosts
# $ echo "export ANSIBLE_HOSTS='~/ansible_hosts'" >> ~/.bashrc
# $ . ~/.bashrc
# 8) Set up your ansible 'hacking' environment
# $ cd ~/ansible
# $ . hacking/env-setup
# $ export ANSIBLE_HOST_KEY_CHECKING=no
# $ ansible all -m ping
# 9) Set your PROJECT variable below
# 10) Run and time the tests and log output, take ~30 minutes to run
# $ time stdbuf -oL python test/gce_tests.py 2>&1 | tee log
#
# Last update: gcutil-1.11.0 and v1beta16
# Set this to your test Project ID
PROJECT="google.com:erjohnso"
# debugging
DEBUG=False # lots of debugging output
VERBOSE=True # on failure, display ansible command and expected/actual result
# location - note that some tests rely on the module's 'default'
# region/zone, which should match the settings below.
REGION="us-central1"
ZONE="%s-a" % REGION
# Peeking is a way to trigger looking at a specified set of resources
# before and/or after a test run. The 'test_cases' data structure below
# has a few tests with 'peek_before' and 'peek_after'. When those keys
# are set and PEEKING_ENABLED is True, then these steps will be executed
# to aid in debugging tests. Normally, this is not needed.
PEEKING_ENABLED=False
# disks
DNAME="aaaaa-ansible-disk"
DNAME2="aaaaa-ansible-disk2"
DNAME6="aaaaa-ansible-inst6"
DNAME7="aaaaa-ansible-inst7"
USE_PD="true"
KERNEL="https://www.googleapis.com/compute/v1beta16/projects/google/global/kernels/gce-no-conn-track-v20130813"
# instances
INAME="aaaaa-ansible-inst"
INAME2="aaaaa-ansible-inst2"
INAME3="aaaaa-ansible-inst3"
INAME4="aaaaa-ansible-inst4"
INAME5="aaaaa-ansible-inst5"
INAME6="aaaaa-ansible-inst6"
INAME7="aaaaa-ansible-inst7"
TYPE="n1-standard-1"
IMAGE="https://www.googleapis.com/compute/v1beta16/projects/debian-cloud/global/images/debian-7-wheezy-v20131014"
NETWORK="default"
SCOPES="https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.full_control"
# networks / firewalls
NETWK1="ansible-network1"
NETWK2="ansible-network2"
NETWK3="ansible-network3"
CIDR1="10.240.16.0/24"
CIDR2="10.240.32.0/24"
CIDR3="10.240.64.0/24"
GW1="10.240.16.1"
GW2="10.240.32.1"
FW1="ansible-fwrule1"
FW2="ansible-fwrule2"
FW3="ansible-fwrule3"
FW4="ansible-fwrule4"
# load-balancer tests
HC1="ansible-hc1"
HC2="ansible-hc2"
HC3="ansible-hc3"
LB1="ansible-lb1"
LB2="ansible-lb2"
from commands import getstatusoutput as run
import sys
test_cases = [
{'id': '01', 'desc': 'Detach / Delete disk tests',
'setup': ['gcutil addinstance "%s" --wait_until_running --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --persistent_boot_disk=%s' % (INAME, ZONE, TYPE, NETWORK, SCOPES, IMAGE, USE_PD),
'gcutil adddisk "%s" --size_gb=2 --zone=%s --wait_until_complete' % (DNAME, ZONE)],
'tests': [
{'desc': 'DETACH_ONLY but disk not found [success]',
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % ("missing-disk", INAME, ZONE),
'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "%s", "name": "missing-disk", "state": "absent", "zone": "%s"}' % (INAME, ZONE),
},
{'desc': 'DETACH_ONLY but instance not found [success]',
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % (DNAME, "missing-instance", ZONE),
'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "missing-instance", "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (DNAME, ZONE),
},
{'desc': 'DETACH_ONLY but neither disk nor instance exists [success]',
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % ("missing-disk", "missing-instance", ZONE),
'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "missing-instance", "name": "missing-disk", "state": "absent", "zone": "%s"}' % (ZONE),
},
{'desc': 'DETACH_ONLY but disk is not currently attached [success]',
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % (DNAME, INAME, ZONE),
'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "%s", "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (INAME, DNAME, ZONE),
},
{'desc': 'DETACH_ONLY disk is attached and should be detached [success]',
'setup': ['gcutil attachdisk --disk="%s,mode=READ_ONLY" --zone=%s %s' % (DNAME, ZONE, INAME), 'sleep 10'],
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % (DNAME, INAME, ZONE),
'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": true, "detach_only": true, "detached_from_instance": "%s", "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (INAME, INAME, DNAME, ZONE),
'teardown': ['gcutil detachdisk --zone=%s --device_name=%s %s' % (ZONE, DNAME, INAME)],
},
{'desc': 'DETACH_ONLY but no instance specified [FAIL]',
'm': 'gce_pd',
'a': 'name=%s zone=%s detach_only=yes state=absent' % (DNAME, ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must specify an instance name when detaching a disk"}',
},
{'desc': 'DELETE but disk not found [success]',
'm': 'gce_pd',
'a': 'name=%s zone=%s state=absent' % ("missing-disk", ZONE),
'r': '127.0.0.1 | success >> {"changed": false, "name": "missing-disk", "state": "absent", "zone": "%s"}' % (ZONE),
},
{'desc': 'DELETE but disk is attached [FAIL]',
'setup': ['gcutil attachdisk --disk="%s,mode=READ_ONLY" --zone=%s %s' % (DNAME, ZONE, INAME), 'sleep 10'],
'm': 'gce_pd',
'a': 'name=%s zone=%s state=absent' % (DNAME, ZONE),
'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"The disk resource 'projects/%s/zones/%s/disks/%s' is already being used by 'projects/%s/zones/%s/instances/%s'\"}" % (PROJECT, ZONE, DNAME, PROJECT, ZONE, INAME),
'teardown': ['gcutil detachdisk --zone=%s --device_name=%s %s' % (ZONE, DNAME, INAME)],
},
{'desc': 'DELETE disk [success]',
'm': 'gce_pd',
'a': 'name=%s zone=%s state=absent' % (DNAME, ZONE),
'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (DNAME, ZONE),
},
],
'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE),
'sleep 15',
'gcutil deletedisk -f "%s" --zone=%s' % (INAME, ZONE),
'sleep 10',
'gcutil deletedisk -f "%s" --zone=%s' % (DNAME, ZONE),
'sleep 10'],
},
{'id': '02', 'desc': 'Create disk but do not attach (e.g. no instance_name param)',
'setup': [],
'tests': [
{'desc': 'CREATE_NO_ATTACH "string" for size_gb [FAIL]',
'm': 'gce_pd',
'a': 'name=%s size_gb="foo" zone=%s' % (DNAME, ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}',
},
{'desc': 'CREATE_NO_ATTACH negative size_gb [FAIL]',
'm': 'gce_pd',
'a': 'name=%s size_gb=-2 zone=%s' % (DNAME, ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}',
},
{'desc': 'CREATE_NO_ATTACH size_gb exceeds quota [FAIL]',
'm': 'gce_pd',
'a': 'name=%s size_gb=9999 zone=%s' % ("big-disk", ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Requested disk size exceeds quota"}',
},
{'desc': 'CREATE_NO_ATTACH create the disk [success]',
'm': 'gce_pd',
'a': 'name=%s zone=%s' % (DNAME, ZONE),
'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "size_gb": 10, "state": "present", "zone": "%s"}' % (DNAME, ZONE),
},
{'desc': 'CREATE_NO_ATTACH but disk already exists [success]',
'm': 'gce_pd',
'a': 'name=%s zone=%s' % (DNAME, ZONE),
'r': '127.0.0.1 | success >> {"changed": false, "name": "%s", "size_gb": 10, "state": "present", "zone": "%s"}' % (DNAME, ZONE),
},
],
'teardown': ['gcutil deletedisk -f "%s" --zone=%s' % (DNAME, ZONE),
'sleep 10'],
},
{'id': '03', 'desc': 'Create and attach disk',
'setup': ['gcutil addinstance "%s" --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --persistent_boot_disk=%s' % (INAME2, ZONE, TYPE, NETWORK, SCOPES, IMAGE, USE_PD),
'gcutil addinstance "%s" --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --persistent_boot_disk=%s' % (INAME, ZONE, "g1-small", NETWORK, SCOPES, IMAGE, USE_PD),
'gcutil adddisk "%s" --size_gb=2 --zone=%s' % (DNAME, ZONE),
'gcutil adddisk "%s" --size_gb=2 --zone=%s --wait_until_complete' % (DNAME2, ZONE),],
'tests': [
{'desc': 'CREATE_AND_ATTACH "string" for size_gb [FAIL]',
'm': 'gce_pd',
'a': 'name=%s size_gb="foo" instance_name=%s zone=%s' % (DNAME, INAME, ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}',
},
{'desc': 'CREATE_AND_ATTACH negative size_gb [FAIL]',
'm': 'gce_pd',
'a': 'name=%s size_gb=-2 instance_name=%s zone=%s' % (DNAME, INAME, ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}',
},
{'desc': 'CREATE_AND_ATTACH size_gb exceeds quota [FAIL]',
'm': 'gce_pd',
'a': 'name=%s size_gb=9999 instance_name=%s zone=%s' % ("big-disk", INAME, ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Requested disk size exceeds quota"}',
},
{'desc': 'CREATE_AND_ATTACH missing instance [FAIL]',
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s' % (DNAME, "missing-instance", ZONE),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Instance %s does not exist in zone %s"}' % ("missing-instance", ZONE),
},
{'desc': 'CREATE_AND_ATTACH disk exists but not attached [success]',
'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s' % (DNAME, INAME, ZONE),
'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": true, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME, DNAME, ZONE),
'peek_after': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
},
{'desc': 'CREATE_AND_ATTACH disk exists already attached [success]',
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s' % (DNAME, INAME, ZONE),
'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": false, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME, DNAME, ZONE),
},
{'desc': 'CREATE_AND_ATTACH attached RO, attempt RO to 2nd inst [success]',
'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s' % (DNAME, INAME2, ZONE),
'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": true, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME2, DNAME, ZONE),
'peek_after': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
},
{'desc': 'CREATE_AND_ATTACH attached RO, attach RW to self [FAILED no-op]',
'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s mode=READ_WRITE' % (DNAME, INAME, ZONE),
'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": false, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME, DNAME, ZONE),
},
{'desc': 'CREATE_AND_ATTACH attached RW, attach RW to other [FAIL]',
'setup': ['gcutil attachdisk --disk=%s,mode=READ_WRITE --zone=%s %s' % (DNAME2, ZONE, INAME), 'sleep 10'],
'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s mode=READ_WRITE' % (DNAME2, INAME2, ZONE),
'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[200], API error code[RESOURCE_IN_USE] and message: The disk resource 'projects/%s/zones/%s/disks/%s' is already being used in read-write mode\"}" % (PROJECT, ZONE, DNAME2),
'peek_after': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
},
{'desc': 'CREATE_AND_ATTACH attach too many disks to inst [FAIL]',
'setup': ['gcutil adddisk aa-disk-dummy --size_gb=2 --zone=%s' % (ZONE),
'gcutil adddisk aa-disk-dummy2 --size_gb=2 --zone=%s --wait_until_complete' % (ZONE),
'gcutil attachdisk --disk=aa-disk-dummy --zone=%s %s' % (ZONE, INAME),
'sleep 5'],
'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)],
'm': 'gce_pd',
'a': 'name=%s instance_name=%s zone=%s' % ("aa-disk-dummy2", INAME, ZONE),
'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[200], API error code[LIMIT_EXCEEDED] and message: Exceeded limit 'maximum_persistent_disks' on resource 'projects/%s/zones/%s/instances/%s'. Limit: 4\"}" % (PROJECT, ZONE, INAME),
'teardown': ['gcutil detachdisk --device_name=aa-disk-dummy --zone=%s %s' % (ZONE, INAME),
'sleep 3',
'gcutil deletedisk -f aa-disk-dummy --zone=%s' % (ZONE),
'sleep 10',
'gcutil deletedisk -f aa-disk-dummy2 --zone=%s' % (ZONE),
'sleep 10'],
},
],
'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME2, ZONE),
'sleep 15',
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE),
'sleep 15',
'gcutil deletedisk -f "%s" --zone=%s' % (INAME, ZONE),
'sleep 10',
'gcutil deletedisk -f "%s" --zone=%s' % (INAME2, ZONE),
'sleep 10',
'gcutil deletedisk -f "%s" --zone=%s' % (DNAME, ZONE),
'sleep 10',
'gcutil deletedisk -f "%s" --zone=%s' % (DNAME2, ZONE),
'sleep 10'],
},
{'id': '04', 'desc': 'Delete / destroy instances',
'setup': ['gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME, ZONE, TYPE, IMAGE),
'gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME2, ZONE, TYPE, IMAGE),
'gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME3, ZONE, TYPE, IMAGE),
'gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME4, ZONE, TYPE, IMAGE),
'gcutil addinstance "%s" --wait_until_running --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME5, ZONE, TYPE, IMAGE)],
'tests': [
{'desc': 'DELETE instance, bad zone param [FAIL]',
'm': 'gce',
'a': 'name=missing-inst zone=bogus state=absent',
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "value of zone must be one of: us-central1-a,us-central1-b,us-central2-a,europe-west1-a,europe-west1-b, got: bogus"}',
},
{'desc': 'DELETE non-existent instance, no-op [success]',
'm': 'gce',
'a': 'name=missing-inst zone=%s state=absent' % (ZONE),
'r': '127.0.0.1 | success >> {"changed": false, "name": "missing-inst", "state": "absent", "zone": "%s"}' % (ZONE),
},
{'desc': 'DELETE an existing named instance [success]',
'm': 'gce',
'a': 'name=%s zone=%s state=absent' % (INAME, ZONE),
'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "state": "absent", "zone": "%s"}' % (INAME, ZONE),
},
{'desc': 'DELETE list of instances with a non-existent one [success]',
'm': 'gce',
'a': 'instance_names=%s,missing,%s zone=%s state=absent' % (INAME2,INAME3, ZONE),
'r': '127.0.0.1 | success >> {"changed": true, "instance_names": ["%s", "%s"], "state": "absent", "zone": "%s"}' % (INAME2, INAME3, ZONE),
},
{'desc': 'DELETE list of instances all pre-exist [success]',
'm': 'gce',
'a': 'instance_names=%s,%s zone=%s state=absent' % (INAME4,INAME5, ZONE),
'r': '127.0.0.1 | success >> {"changed": true, "instance_names": ["%s", "%s"], "state": "absent", "zone": "%s"}' % (INAME4, INAME5, ZONE),
},
],
'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME2, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME3, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME4, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME5, ZONE),
'sleep 10'],
},
{'id': '05', 'desc': 'Create instances',
'setup': ['gcutil adddisk --source_image=%s --zone=%s %s --wait_until_complete' % (IMAGE, ZONE, DNAME7),
'gcutil addinstance boo --wait_until_running --zone=%s --machine_type=%s --network=%s --disk=%s,mode=READ_WRITE,boot --kernel=%s' % (ZONE,TYPE,NETWORK,DNAME7,KERNEL),
],
'tests': [
{'desc': 'CREATE_INSTANCE invalid image arg [FAIL]',
'm': 'gce',
'a': 'name=foo image=foo',
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required create instance variable"}',
},
{'desc': 'CREATE_INSTANCE metadata a list [FAIL]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s zone=%s metadata=\'[\\"foo\\":\\"bar\\",\\"baz\\":1]\'' % (INAME,ZONE),
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata syntax"}',
},
{'desc': 'CREATE_INSTANCE metadata not a dict [FAIL]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s zone=%s metadata=\\"foo\\":\\"bar\\",\\"baz\\":1' % (INAME,ZONE),
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata syntax"}',
},
{'desc': 'CREATE_INSTANCE with metadata form1 [FAIL]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s zone=%s metadata=\'{"foo":"bar","baz":1}\'' % (INAME,ZONE),
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata: malformed string"}',
},
{'desc': 'CREATE_INSTANCE with metadata form2 [FAIL]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s zone=%s metadata={\'foo\':\'bar\',\'baz\':1}' % (INAME,ZONE),
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata: malformed string"}',
},
{'desc': 'CREATE_INSTANCE with metadata form3 [FAIL]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s zone=%s metadata="foo:bar" '% (INAME,ZONE),
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata syntax"}',
},
{'desc': 'CREATE_INSTANCE with metadata form4 [FAIL]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s zone=%s metadata="{\'foo\':\'bar\'}"'% (INAME,ZONE),
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata: malformed string"}',
},
{'desc': 'CREATE_INSTANCE invalid image arg [FAIL]',
'm': 'gce',
'a': 'instance_names=foo,bar image=foo',
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required create instance variable"}',
},
{'desc': 'CREATE_INSTANCE single inst, using defaults [success]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s' % (INAME),
'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": "debian-7-wheezy-v20130816", "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.175.15", "public_ip": "173.255.120.190", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME, ZONE, INAME, ZONE),
},
{'desc': 'CREATE_INSTANCE the same instance again, no-op [success]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s' % (INAME),
'r': '127.0.0.1 | success >> {"changed": false, "instance_data": [{"image": "debian-7-wheezy-v20130816", "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.175.15", "public_ip": "173.255.120.190", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME, ZONE, INAME, ZONE),
},
{'desc': 'CREATE_INSTANCE instance with alt type [success]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s machine_type=n1-standard-2' % (INAME2),
'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": "debian-7-wheezy-v20130816", "machine_type": "n1-standard-2", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.192.227", "public_ip": "173.255.121.233", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME2, ZONE, INAME2, ZONE),
},
{'desc': 'CREATE_INSTANCE instance with root pd [success]',
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s persistent_boot_disk=yes' % (INAME3),
'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": null, "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.178.140", "public_ip": "173.255.121.176", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME3, ZONE, INAME3, ZONE),
},
{'desc': 'CREATE_INSTANCE instance with root pd, that already exists [success]',
'setup': ['gcutil adddisk --source_image=%s --zone=%s %s --wait_until_complete' % (IMAGE, ZONE, DNAME6),],
'strip_numbers': True,
'm': 'gce',
'a': 'name=%s zone=%s persistent_boot_disk=yes' % (INAME6, ZONE),
'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": null, "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.178.140", "public_ip": "173.255.121.176", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME6, ZONE, INAME6, ZONE),
},
{'desc': 'CREATE_INSTANCE instance with root pd attached to other inst [FAIL]',
'm': 'gce',
'a': 'name=%s zone=%s persistent_boot_disk=yes' % (INAME7, ZONE),
'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "Unexpected error attempting to create instance %s, error: The disk resource \'projects/%s/zones/%s/disks/%s\' is already being used in read-write mode"}' % (INAME7,PROJECT,ZONE,DNAME7),
},
{'desc': 'CREATE_INSTANCE use *all* the options! [success]',
'strip_numbers': True,
'm': 'gce',
'a': 'instance_names=%s,%s metadata=\'{\\"foo\\":\\"bar\\", \\"baz\\":1}\' tags=t1,t2,t3 zone=%s image=centos-6-v20130731 persistent_boot_disk=yes' % (INAME4,INAME5,ZONE),
'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": null, "machine_type": "n1-standard-1", "metadata": {"baz": "1", "foo": "bar"}, "name": "%s", "network": "default", "private_ip": "10.240.130.4", "public_ip": "173.255.121.97", "status": "RUNNING", "tags": ["t1", "t2", "t3"], "zone": "%s"}, {"image": null, "machine_type": "n1-standard-1", "metadata": {"baz": "1", "foo": "bar"}, "name": "%s", "network": "default", "private_ip": "10.240.207.226", "public_ip": "173.255.121.85", "status": "RUNNING", "tags": ["t1", "t2", "t3"], "zone": "%s"}], "instance_names": ["%s", "%s"], "state": "present", "zone": "%s"}' % (INAME4, ZONE, INAME5, ZONE, INAME4, INAME5, ZONE),
},
],
'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME2, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME3, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME4, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME5, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME6, ZONE),
'gcutil deleteinstance -f "%s" --zone=%s' % (INAME7, ZONE),
'gcutil deleteinstance -f boo --zone=%s' % (ZONE),
'sleep 10',
'gcutil deletedisk -f "%s" --zone=%s' % (INAME3, ZONE),
'gcutil deletedisk -f "%s" --zone=%s' % (INAME4, ZONE),
'gcutil deletedisk -f "%s" --zone=%s' % (INAME5, ZONE),
'gcutil deletedisk -f "%s" --zone=%s' % (INAME6, ZONE),
'gcutil deletedisk -f "%s" --zone=%s' % (INAME7, ZONE),
'sleep 10'],
},
{'id': '06', 'desc': 'Delete / destroy networks and firewall rules',
'setup': ['gcutil addnetwork --range="%s" --gateway="%s" %s' % (CIDR1, GW1, NETWK1),
'gcutil addnetwork --range="%s" --gateway="%s" %s' % (CIDR2, GW2, NETWK2),
'sleep 5',
'gcutil addfirewall --allowed="tcp:80" --network=%s %s' % (NETWK1, FW1),
'gcutil addfirewall --allowed="tcp:80" --network=%s %s' % (NETWK2, FW2),
'sleep 5'],
'tests': [
{'desc': 'DELETE bogus named firewall [success]',
'm': 'gce_net',
'a': 'fwname=missing-fwrule state=absent',
'r': '127.0.0.1 | success >> {"changed": false, "fwname": "missing-fwrule", "state": "absent"}',
},
{'desc': 'DELETE bogus named network [success]',
'm': 'gce_net',
'a': 'name=missing-network state=absent',
'r': '127.0.0.1 | success >> {"changed": false, "name": "missing-network", "state": "absent"}',
},
{'desc': 'DELETE named firewall rule [success]',
'm': 'gce_net',
'a': 'fwname=%s state=absent' % (FW1),
'r': '127.0.0.1 | success >> {"changed": true, "fwname": "%s", "state": "absent"}' % (FW1),
'teardown': ['sleep 5'], # pause to give GCE time to delete fwrule
},
{'desc': 'DELETE unused named network [success]',
'm': 'gce_net',
'a': 'name=%s state=absent' % (NETWK1),
'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "state": "absent"}' % (NETWK1),
},
{'desc': 'DELETE named network *and* fwrule [success]',
'm': 'gce_net',
'a': 'name=%s fwname=%s state=absent' % (NETWK2, FW2),
'r': '127.0.0.1 | success >> {"changed": true, "fwname": "%s", "name": "%s", "state": "absent"}' % (FW2, NETWK2),
},
],
'teardown': ['gcutil deletenetwork -f %s' % (NETWK1),
'gcutil deletenetwork -f %s' % (NETWK2),
'sleep 5',
'gcutil deletefirewall -f %s' % (FW1),
'gcutil deletefirewall -f %s' % (FW2)],
},
{'id': '07', 'desc': 'Create networks and firewall rules',
'setup': ['gcutil addnetwork --range="%s" --gateway="%s" %s' % (CIDR1, GW1, NETWK1),
'sleep 5',
'gcutil addfirewall --allowed="tcp:80" --network=%s %s' % (NETWK1, FW1),
'sleep 5'],
'tests': [
{'desc': 'CREATE network without specifying ipv4_range [FAIL]',
'm': 'gce_net',
'a': 'name=fail',
'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Missing required 'ipv4_range' parameter\"}",
},
{'desc': 'CREATE network with specifying bad ipv4_range [FAIL]',
'm': 'gce_net',
'a': 'name=fail ipv4_range=bad_value',
'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[400], API error code[None] and message: Invalid value for field 'resource.IPv4Range': 'bad_value'. Must be a CIDR address range that is contained in the RFC1918 private address blocks: [10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16]\"}",
},
{'desc': 'CREATE existing network, not changed [success]',
'm': 'gce_net',
'a': 'name=%s ipv4_range=%s' % (NETWK1, CIDR1),
'r': '127.0.0.1 | success >> {"changed": false, "ipv4_range": "%s", "name": "%s", "state": "present"}' % (CIDR1, NETWK1),
},
{'desc': 'CREATE new network, changed [success]',
'm': 'gce_net',
'a': 'name=%s ipv4_range=%s' % (NETWK2, CIDR2),
'r': '127.0.0.1 | success >> {"changed": true, "ipv4_range": "10.240.32.0/24", "name": "%s", "state": "present"}' % (NETWK2),
},
{'desc': 'CREATE new fw rule missing params [FAIL]',
'm': 'gce_net',
'a': 'name=%s fwname=%s' % (NETWK1, FW1),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required firewall rule parameter(s)"}',
},
{'desc': 'CREATE new fw rule bad params [FAIL]',
'm': 'gce_net',
'a': 'name=%s fwname=broken allowed=blah src_tags="one,two"' % (NETWK1),
'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[400], API error code[None] and message: Invalid value for field 'resource.allowed[0].IPProtocol': 'blah'. Must be one of [\\\"tcp\\\", \\\"udp\\\", \\\"icmp\\\"] or an IP protocol number between 0 and 255\"}",
},
{'desc': 'CREATE existing fw rule [success]',
'm': 'gce_net',
'a': 'name=%s fwname=%s allowed="tcp:80" src_tags="one,two"' % (NETWK1, FW1),
'r': '127.0.0.1 | success >> {"allowed": "tcp:80", "changed": false, "fwname": "%s", "ipv4_range": "%s", "name": "%s", "src_range": null, "src_tags": ["one", "two"], "state": "present"}' % (FW1, CIDR1, NETWK1),
},
{'desc': 'CREATE new fw rule [success]',
'm': 'gce_net',
'a': 'name=%s fwname=%s allowed="tcp:80" src_tags="one,two"' % (NETWK1, FW3),
'r': '127.0.0.1 | success >> {"allowed": "tcp:80", "changed": true, "fwname": "%s", "ipv4_range": "%s", "name": "%s", "src_range": null, "src_tags": ["one", "two"], "state": "present"}' % (FW3, CIDR1, NETWK1),
},
{'desc': 'CREATE new network *and* fw rule [success]',
'm': 'gce_net',
'a': 'name=%s ipv4_range=%s fwname=%s allowed="tcp:80" src_tags="one,two"' % (NETWK3, CIDR3, FW4),
'r': '127.0.0.1 | success >> {"allowed": "tcp:80", "changed": true, "fwname": "%s", "ipv4_range": "%s", "name": "%s", "src_range": null, "src_tags": ["one", "two"], "state": "present"}' % (FW4, CIDR3, NETWK3),
},
],
'teardown': ['gcutil deletefirewall -f %s' % (FW1),
'gcutil deletefirewall -f %s' % (FW2),
'gcutil deletefirewall -f %s' % (FW3),
'gcutil deletefirewall -f %s' % (FW4),
'sleep 5',
'gcutil deletenetwork -f %s' % (NETWK1),
'gcutil deletenetwork -f %s' % (NETWK2),
'gcutil deletenetwork -f %s' % (NETWK3),
'sleep 5'],
},
{'id': '08', 'desc': 'Create load-balancer resources',
'setup': ['gcutil addinstance "%s" --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --nopersistent_boot_disk' % (INAME, ZONE, TYPE, NETWORK, SCOPES, IMAGE),
'gcutil addinstance "%s" --wait_until_running --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --nopersistent_boot_disk' % (INAME2, ZONE, TYPE, NETWORK, SCOPES, IMAGE),
],
'tests': [
{'desc': 'Do nothing [FAIL]',
'm': 'gce_lb',
'a': 'httphealthcheck_port=7',
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Nothing to do, please specify a \\\"name\\\" or \\\"httphealthcheck_name\\\" parameter"}',
},
{'desc': 'CREATE_HC create basic http healthcheck [success]',
'm': 'gce_lb',
'a': 'httphealthcheck_name=%s' % (HC1),
'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_healthy_count": 2, "httphealthcheck_host": null, "httphealthcheck_interval": 5, "httphealthcheck_name": "%s", "httphealthcheck_path": "/", "httphealthcheck_port": 80, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "name": null, "state": "present"}' % (HC1),
},
{'desc': 'CREATE_HC (repeat, no-op) create basic http healthcheck [success]',
'm': 'gce_lb',
'a': 'httphealthcheck_name=%s' % (HC1),
'r': '127.0.0.1 | success >> {"changed": false, "httphealthcheck_healthy_count": 2, "httphealthcheck_host": null, "httphealthcheck_interval": 5, "httphealthcheck_name": "%s", "httphealthcheck_path": "/", "httphealthcheck_port": 80, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "name": null, "state": "present"}' % (HC1),
},
{'desc': 'CREATE_HC create custom http healthcheck [success]',
'm': 'gce_lb',
'a': 'httphealthcheck_name=%s httphealthcheck_port=1234 httphealthcheck_path="/whatup" httphealthcheck_host="foo" httphealthcheck_interval=300' % (HC2),
'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_healthy_count": 2, "httphealthcheck_host": "foo", "httphealthcheck_interval": 300, "httphealthcheck_name": "%s", "httphealthcheck_path": "/whatup", "httphealthcheck_port": 1234, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "name": null, "state": "present"}' % (HC2),
},
{'desc': 'CREATE_HC create (broken) custom http healthcheck [FAIL]',
'm': 'gce_lb',
'a': 'httphealthcheck_name=%s httphealthcheck_port="string" httphealthcheck_path=7' % (HC3),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Unexpected response: HTTP return_code[400], API error code[None] and message: Invalid value for: Expected a signed integer, got \'string\' (class java.lang.String)"}',
},
{'desc': 'CREATE_LB create lb, missing region [FAIL]',
'm': 'gce_lb',
'a': 'name=%s' % (LB1),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required region name"}',
},
{'desc': 'CREATE_LB create lb, bogus region [FAIL]',
'm': 'gce_lb',
'a': 'name=%s region=bogus' % (LB1),
'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Unexpected response: HTTP return_code[404], API error code[None] and message: The resource \'projects/%s/regions/bogus\' was not found"}' % (PROJECT),
},
{'desc': 'CREATE_LB create lb, minimal params [success]',
'strip_numbers': True,
'm': 'gce_lb',
'a': 'name=%s region=%s' % (LB1, REGION),
'r': '127.0.0.1 | success >> {"changed": true, "external_ip": "173.255.123.245", "httphealthchecks": [], "members": [], "name": "%s", "port_range": "1-65535", "protocol": "tcp", "region": "%s", "state": "present"}' % (LB1, REGION),
},
{'desc': 'CREATE_LB create lb full params [success]',
'strip_numbers': True,
'm': 'gce_lb',
'a': 'httphealthcheck_name=%s httphealthcheck_port=5055 httphealthcheck_path="/howami" name=%s port_range=8000-8888 region=%s members=%s/%s,%s/%s' % (HC3,LB2,REGION,ZONE,INAME,ZONE,INAME2),
'r': '127.0.0.1 | success >> {"changed": true, "external_ip": "173.255.126.81", "httphealthcheck_healthy_count": 2, "httphealthcheck_host": null, "httphealthcheck_interval": 5, "httphealthcheck_name": "%s", "httphealthcheck_path": "/howami", "httphealthcheck_port": 5055, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "httphealthchecks": ["%s"], "members": ["%s/%s", "%s/%s"], "name": "%s", "port_range": "8000-8888", "protocol": "tcp", "region": "%s", "state": "present"}' % (HC3,HC3,ZONE,INAME,ZONE,INAME2,LB2,REGION),
},
],
'teardown': [
'gcutil deleteinstance --zone=%s -f %s %s' % (ZONE, INAME, INAME2),
'gcutil deleteforwardingrule --region=%s -f %s %s' % (REGION, LB1, LB2),
'sleep 10',
'gcutil deletetargetpool --region=%s -f %s-tp %s-tp' % (REGION, LB1, LB2),
'sleep 10',
'gcutil deletehttphealthcheck -f %s %s %s' % (HC1, HC2, HC3),
],
},
{'id': '09', 'desc': 'Destroy load-balancer resources',
'setup': ['gcutil addhttphealthcheck %s' % (HC1),
'sleep 5',
'gcutil addhttphealthcheck %s' % (HC2),
'sleep 5',
'gcutil addtargetpool --health_checks=%s --region=%s %s-tp' % (HC1, REGION, LB1),
'sleep 5',
'gcutil addforwardingrule --target=%s-tp --region=%s %s' % (LB1, REGION, LB1),
'sleep 5',
'gcutil addtargetpool --region=%s %s-tp' % (REGION, LB2),
'sleep 5',
'gcutil addforwardingrule --target=%s-tp --region=%s %s' % (LB2, REGION, LB2),
'sleep 5',
],
'tests': [
{'desc': 'DELETE_LB: delete a non-existent LB [success]',
'm': 'gce_lb',
'a': 'name=missing state=absent',
'r': '127.0.0.1 | success >> {"changed": false, "name": "missing", "state": "absent"}',
},
{'desc': 'DELETE_LB: delete a non-existent LB+HC [success]',
'm': 'gce_lb',
'a': 'name=missing httphealthcheck_name=alsomissing state=absent',
'r': '127.0.0.1 | success >> {"changed": false, "httphealthcheck_name": "alsomissing", "name": "missing", "state": "absent"}',
},
{'desc': 'DELETE_LB: destroy standalone healthcheck [success]',
'm': 'gce_lb',
'a': 'httphealthcheck_name=%s state=absent' % (HC2),
'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_name": "%s", "name": null, "state": "absent"}' % (HC2),
},
{'desc': 'DELETE_LB: destroy standalone balancer [success]',
'm': 'gce_lb',
'a': 'name=%s state=absent' % (LB2),
'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "state": "absent"}' % (LB2),
},
{'desc': 'DELETE_LB: destroy LB+HC [success]',
'm': 'gce_lb',
'a': 'name=%s httphealthcheck_name=%s state=absent' % (LB1, HC1),
'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_name": "%s", "name": "%s", "state": "absent"}' % (HC1,LB1),
},
],
'teardown': [
'gcutil deleteforwardingrule --region=%s -f %s %s' % (REGION, LB1, LB2),
'sleep 10',
'gcutil deletetargetpool --region=%s -f %s-tp %s-tp' % (REGION, LB1, LB2),
'sleep 10',
'gcutil deletehttphealthcheck -f %s %s' % (HC1, HC2),
],
},
]
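# Test-case schema (a sketch inferred from the runner below): each case dict
# carries 'id', 'desc', 'setup' (gcutil shell commands), 'tests' and
# 'teardown'. Each entry in 'tests' uses 'm' (module name), 'a' (module args)
# and 'r' (the expected single-line ansible output); optional keys are
# per-test 'setup'/'teardown', 'peek_before'/'peek_after' (debug commands,
# only run when PEEKING_ENABLED) and 'strip_numbers' (compare output with all
# digits removed so differing IP addresses do not fail the test).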
def main(tests_to_run=None):
for test in test_cases:
if tests_to_run and test['id'] not in tests_to_run:
continue
print "=> starting/setup '%s:%s'"% (test['id'], test['desc'])
if DEBUG: print "=debug>", test['setup']
for c in test['setup']:
(s,o) = run(c)
test_i = 1
for t in test['tests']:
if DEBUG: print "=>debug>", test_i, t['desc']
# run any test-specific setup commands
if t.has_key('setup'):
for setup in t['setup']:
(status, output) = run(setup)
# run any 'peek_before' commands
if t.has_key('peek_before') and PEEKING_ENABLED:
for setup in t['peek_before']:
(status, output) = run(setup)
# run the ansible test only when 'a' is non-empty; an empty 'a'
# directive lets an entry exist purely to run setup/teardown
# for a subsequent test.
if t['a']:
if DEBUG: print "=>debug>", t['m'], t['a']
acmd = "ansible all -o -m %s -a \"%s\"" % (t['m'],t['a'])
#acmd = "ANSIBLE_KEEP_REMOTE_FILES=1 ansible all -vvv -m %s -a \"%s\"" % (t['m'],t['a'])
(s,o) = run(acmd)
# check expected output
if DEBUG: print "=debug>", o.strip(), "!=", t['r']
print "=> %s.%02d '%s':" % (test['id'], test_i, t['desc']),
if t.has_key('strip_numbers'):
# strip out all numbers so we don't trip over different
# IP addresses
is_good = (o.strip().translate(None, "0123456789") == t['r'].translate(None, "0123456789"))
else:
is_good = (o.strip() == t['r'])
if is_good:
print "PASS"
else:
print "FAIL"
if VERBOSE:
print "=>", acmd
print "=> Expected:", t['r']
print "=> Got:", o.strip()
# run any 'peek_after' commands
if t.has_key('peek_after') and PEEKING_ENABLED:
for setup in t['peek_after']:
(status, output) = run(setup)
# run any test-specific teardown commands
if t.has_key('teardown'):
for td in t['teardown']:
(status, output) = run(td)
test_i += 1
print "=> completing/teardown '%s:%s'" % (test['id'], test['desc'])
if DEBUG: print "=debug>", test['teardown']
for c in test['teardown']:
(s,o) = run(c)
if __name__ == '__main__':
tests_to_run = []
if len(sys.argv) == 2:
if sys.argv[1] in ["--help", "--list"]:
print "usage: %s [id1,id2,...,idN]" % sys.argv[0]
print " * An empty argument list will execute all tests"
print " * Do not need to specify tests in numerical order"
print " * List test categories with --list or --help"
print ""
for test in test_cases:
print "\t%s:%s" % (test['id'], test['desc'])
sys.exit(0)
else:
tests_to_run = sys.argv[1].split(',')
main(tests_to_run)
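# Example invocations (the script name here is hypothetical):
#   python gce_tests.py           # run every test case
#   python gce_tests.py 04,07     # run only test cases '04' and '07'
#   python gce_tests.py --list    # show the available test ids and exit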

@ -14,5 +14,5 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
__version__ = '1.8'
__version__ = '1.9'
__author__ = 'Michael DeHaan'

@ -0,0 +1,141 @@
# (c) 2014, Brian Coca, Josh Drake, et al
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
import time
import errno
try:
import simplejson as json
except ImportError:
import json
from ansible import constants as C
from ansible import utils
from ansible.cache.base import BaseCacheModule
class CacheModule(BaseCacheModule):
"""
A caching module backed by json files.
"""
def __init__(self, *args, **kwargs):
self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
self._cache = {}
self._cache_dir = C.CACHE_PLUGIN_CONNECTION # expects a dir path
if not self._cache_dir:
utils.exit("error, fact_caching_connection is not set, cannot use fact cache")
if not os.path.exists(self._cache_dir):
try:
os.makedirs(self._cache_dir)
except (OSError,IOError), e:
utils.warning("error while trying to create cache dir %s : %s" % (self._cache_dir, str(e)))
return None
def get(self, key):
if key in self._cache:
return self._cache.get(key)
if self.has_expired(key):
raise KeyError
cachefile = "%s/%s" % (self._cache_dir, key)
try:
f = open(cachefile, 'r')
except (OSError,IOError), e:
utils.warning("error while trying to read %s : %s" % (cachefile, str(e)))
# treat an unreadable cache file as a missing key instead of
# hitting the unbound file handle in a finally block
raise KeyError
value = json.load(f)
f.close()
self._cache[key] = value
return value
def set(self, key, value):
self._cache[key] = value
cachefile = "%s/%s" % (self._cache_dir, key)
try:
f = open(cachefile, 'w')
except (OSError,IOError), e:
utils.warning("error while trying to write to %s : %s" % (cachefile, str(e)))
return None
f.write(utils.jsonify(value))
f.close()
def has_expired(self, key):
cachefile = "%s/%s" % (self._cache_dir, key)
try:
st = os.stat(cachefile)
except (OSError,IOError), e:
if e.errno == errno.ENOENT:
return False
else:
utils.warning("error while trying to stat %s : %s" % (cachefile, str(e)))
# the age of the cache file cannot be determined; assume it is valid
return False
if time.time() - st.st_mtime <= self._timeout:
return False
if key in self._cache:
del self._cache[key]
return True
def keys(self):
keys = []
for k in os.listdir(self._cache_dir):
if not (k.startswith('.') or self.has_expired(k)):
keys.append(k)
return keys
def contains(self, key):
cachefile = "%s/%s" % (self._cache_dir, key)
if key in self._cache:
return True
if self.has_expired(key):
return False
try:
os.stat(cachefile)
return True
except (OSError,IOError), e:
if e.errno == errno.ENOENT:
return False
else:
utils.warning("error while trying to stat %s : %s" % (cachefile, str(e)))
return False
def delete(self, key):
self._cache.pop(key, None)
cachefile = "%s/%s" % (self._cache_dir, key)
try:
os.remove(cachefile)
except (OSError,IOError), e:
# a cache file that is already gone satisfies delete(); warn otherwise
if e.errno != errno.ENOENT:
utils.warning("error while trying to remove %s : %s" % (cachefile, str(e)))
def flush(self):
self._cache = {}
for key in self.keys():
self.delete(key)
def copy(self):
ret = dict()
for key in self.keys():
ret[key] = self.get(key)
return ret
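# A minimal ansible.cfg sketch for enabling this plugin, assuming it is
# registered under the name "jsonfile"; the fact_caching_timeout key is an
# assumption mapping to CACHE_PLUGIN_TIMEOUT above:
#
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts
#   fact_caching_timeout = 86400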

@ -20,9 +20,14 @@ import collections
# FIXME: can we store these as something else before we ship it?
import sys
import time
import json
try:
import simplejson as json
except ImportError:
import json
from ansible import constants as C
from ansible.utils import jsonify
from ansible.cache.base import BaseCacheModule
try:
@ -65,7 +70,7 @@ class CacheModule(BaseCacheModule):
return json.loads(value)
def set(self, key, value):
value2 = json.dumps(value)
value2 = jsonify(value)
if self._timeout > 0: # a timeout of 0 is handled as meaning 'never expire'
self._cache.setex(self._make_key(key), int(self._timeout), value2)
else:

@ -27,6 +27,7 @@ import fcntl
import constants
import locale
from ansible.color import stringc
from ansible.module_utils import basic
import logging
if constants.DEFAULT_LOG_PATH != '':
@ -411,7 +412,7 @@ class CliRunnerCallbacks(DefaultRunnerCallbacks):
self._async_notified[jid] = clock + 1
if self._async_notified[jid] > clock:
self._async_notified[jid] = clock
display("<job %s> polling, %ss remaining" % (jid, clock), runner=self.runner)
display("<job %s> polling on %s, %ss remaining" % (jid, host, clock), runner=self.runner)
super(CliRunnerCallbacks, self).on_async_poll(host, res, jid, clock)
def on_async_ok(self, host, res, jid):
@ -450,13 +451,18 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
self._async_notified = {}
def on_unreachable(self, host, results):
delegate_to = self.runner.module_vars.get('delegate_to')
if delegate_to:
host = '%s -> %s' % (host, delegate_to)
if self.runner.delegate_to:
host = '%s -> %s' % (host, self.runner.delegate_to)
item = None
if type(results) == dict:
item = results.get('item', None)
if isinstance(item, unicode):
item = utils.to_bytes(item)
results = basic.json_dict_unicode_to_bytes(results)
else:
results = utils.to_bytes(results)
host = utils.to_bytes(host)
if item:
msg = "fatal: [%s] => (item=%s) => %s" % (host, item, results)
else:
@ -465,9 +471,8 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
super(PlaybookRunnerCallbacks, self).on_unreachable(host, results)
def on_failed(self, host, results, ignore_errors=False):
delegate_to = self.runner.module_vars.get('delegate_to')
if delegate_to:
host = '%s -> %s' % (host, delegate_to)
if self.runner.delegate_to:
host = '%s -> %s' % (host, self.runner.delegate_to)
results2 = results.copy()
results2.pop('invocation', None)
@ -500,9 +505,8 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
super(PlaybookRunnerCallbacks, self).on_failed(host, results, ignore_errors=ignore_errors)
def on_ok(self, host, host_result):
delegate_to = self.runner.module_vars.get('delegate_to')
if delegate_to:
host = '%s -> %s' % (host, delegate_to)
if self.runner.delegate_to:
host = '%s -> %s' % (host, self.runner.delegate_to)
item = host_result.get('item', None)
@ -542,9 +546,8 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
super(PlaybookRunnerCallbacks, self).on_ok(host, host_result)
def on_skipped(self, host, item=None):
delegate_to = self.runner.module_vars.get('delegate_to')
if delegate_to:
host = '%s -> %s' % (host, delegate_to)
if self.runner.delegate_to:
host = '%s -> %s' % (host, self.runner.delegate_to)
if constants.DISPLAY_SKIPPED_HOSTS:
msg = ''
@ -607,11 +610,13 @@ class PlaybookCallbacks(object):
call_callback_module('playbook_on_no_hosts_remaining')
def on_task_start(self, name, is_conditional):
name = utils.to_bytes(name)
msg = "TASK: [%s]" % name
if is_conditional:
msg = "NOTIFIED: [%s]" % name
if hasattr(self, 'start_at'):
self.start_at = utils.to_bytes(self.start_at)
if name == self.start_at or fnmatch.fnmatch(name, self.start_at):
# we found our match, we can get rid of this now
del self.start_at
@ -624,7 +629,13 @@ class PlaybookCallbacks(object):
if hasattr(self, 'start_at'): # we still have start_at so skip the task
self.skip_task = True
elif hasattr(self, 'step') and self.step:
msg = ('Perform task: %s (y/n/c): ' % name).encode(sys.stdout.encoding)
if isinstance(name, str):
name = utils.to_unicode(name)
msg = u'Perform task: %s (y/n/c): ' % name
if sys.stdout.encoding:
msg = msg.encode(sys.stdout.encoding, errors='replace')
else:
msg = msg.encode('utf-8')
resp = raw_input(msg)
if resp.lower() in ['y','yes']:
self.skip_task = False
@ -674,7 +685,7 @@ class PlaybookCallbacks(object):
result = prompt(msg, private)
# if result is false and default is not None
if not result and default:
if not result and default is not None:
result = default

@ -86,26 +86,13 @@ def shell_expand_path(path):
path = os.path.expanduser(os.path.expandvars(path))
return path
def get_plugin_paths(path):
return ':'.join([os.path.join(x, path) for x in [os.path.expanduser('~/.ansible/plugins/'), '/usr/share/ansible_plugins/']])
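# e.g. for a hypothetical user "alice", get_plugin_paths('action_plugins')
# returns:
#   /home/alice/.ansible/plugins/action_plugins:/usr/share/ansible_plugins/action_plugins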
p = load_config_file()
active_user = pwd.getpwuid(os.geteuid())[0]
# Needed so the RPM can call setup.py and have modules land in the
# correct location. See #1277 for discussion
if getattr(sys, "real_prefix", None):
# in a virtualenv
DIST_MODULE_PATH = os.path.join(sys.prefix, 'share/ansible/')
else:
DIST_MODULE_PATH = '/usr/share/ansible/'
# Look for modules relative to this file path
# This is so that we can find the modules when running from a local checkout
# installed as editable with `pip install -e ...` or `python setup.py develop`
local_module_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), '..', '..', 'library')
)
DIST_MODULE_PATH = os.pathsep.join([DIST_MODULE_PATH, local_module_path])
# check all of these extensions when looking for yaml files for things like
# group variables -- really anything we can load
YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml", ".json" ]
@ -114,8 +101,8 @@ YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml", ".json" ]
DEFAULTS='defaults'
# configurable things
DEFAULT_HOST_LIST = shell_expand_path(get_config(p, DEFAULTS, 'hostfile', 'ANSIBLE_HOSTS', '/etc/ansible/hosts'))
DEFAULT_MODULE_PATH = get_config(p, DEFAULTS, 'library', 'ANSIBLE_LIBRARY', DIST_MODULE_PATH)
DEFAULT_HOST_LIST = shell_expand_path(get_config(p, DEFAULTS, 'inventory', 'ANSIBLE_INVENTORY', get_config(p, DEFAULTS,'hostfile','ANSIBLE_HOSTS', '/etc/ansible/hosts')))
DEFAULT_MODULE_PATH = get_config(p, DEFAULTS, 'library', 'ANSIBLE_LIBRARY', None)
DEFAULT_ROLES_PATH = shell_expand_path(get_config(p, DEFAULTS, 'roles_path', 'ANSIBLE_ROLES_PATH', '/etc/ansible/roles'))
DEFAULT_REMOTE_TMP = get_config(p, DEFAULTS, 'remote_tmp', 'ANSIBLE_REMOTE_TEMP', '$HOME/.ansible/tmp')
DEFAULT_MODULE_NAME = get_config(p, DEFAULTS, 'module_name', None, 'command')
@ -151,13 +138,13 @@ DEFAULT_SU_USER = get_config(p, DEFAULTS, 'su_user', 'ANSIBLE_SU_USER'
DEFAULT_ASK_SU_PASS = get_config(p, DEFAULTS, 'ask_su_pass', 'ANSIBLE_ASK_SU_PASS', False, boolean=True)
DEFAULT_GATHERING = get_config(p, DEFAULTS, 'gathering', 'ANSIBLE_GATHERING', 'implicit').lower()
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '/usr/share/ansible_plugins/action_plugins')
DEFAULT_CACHE_PLUGIN_PATH = get_config(p, DEFAULTS, 'cache_plugins', 'ANSIBLE_CACHE_PLUGINS', '/usr/share/ansible_plugins/cache_plugins')
DEFAULT_CALLBACK_PLUGIN_PATH = get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', '/usr/share/ansible_plugins/callback_plugins')
DEFAULT_CONNECTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'connection_plugins', 'ANSIBLE_CONNECTION_PLUGINS', '/usr/share/ansible_plugins/connection_plugins')
DEFAULT_LOOKUP_PLUGIN_PATH = get_config(p, DEFAULTS, 'lookup_plugins', 'ANSIBLE_LOOKUP_PLUGINS', '/usr/share/ansible_plugins/lookup_plugins')
DEFAULT_VARS_PLUGIN_PATH = get_config(p, DEFAULTS, 'vars_plugins', 'ANSIBLE_VARS_PLUGINS', '/usr/share/ansible_plugins/vars_plugins')
DEFAULT_FILTER_PLUGIN_PATH = get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', '/usr/share/ansible_plugins/filter_plugins')
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', get_plugin_paths('action_plugins'))
DEFAULT_CACHE_PLUGIN_PATH = get_config(p, DEFAULTS, 'cache_plugins', 'ANSIBLE_CACHE_PLUGINS', get_plugin_paths('cache_plugins'))
DEFAULT_CALLBACK_PLUGIN_PATH = get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', get_plugin_paths('callback_plugins'))
DEFAULT_CONNECTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'connection_plugins', 'ANSIBLE_CONNECTION_PLUGINS', get_plugin_paths('connection_plugins'))
DEFAULT_LOOKUP_PLUGIN_PATH = get_config(p, DEFAULTS, 'lookup_plugins', 'ANSIBLE_LOOKUP_PLUGINS', get_plugin_paths('lookup_plugins'))
DEFAULT_VARS_PLUGIN_PATH = get_config(p, DEFAULTS, 'vars_plugins', 'ANSIBLE_VARS_PLUGINS', get_plugin_paths('vars_plugins'))
DEFAULT_FILTER_PLUGIN_PATH = get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', get_plugin_paths('filter_plugins'))
DEFAULT_LOG_PATH = shell_expand_path(get_config(p, DEFAULTS, 'log_path', 'ANSIBLE_LOG_PATH', ''))
CACHE_PLUGIN = get_config(p, DEFAULTS, 'fact_caching', 'ANSIBLE_CACHE_PLUGIN', 'memory')
@ -177,6 +164,9 @@ DEFAULT_CALLABLE_WHITELIST = get_config(p, DEFAULTS, 'callable_whitelist', '
COMMAND_WARNINGS = get_config(p, DEFAULTS, 'command_warnings', 'ANSIBLE_COMMAND_WARNINGS', False, boolean=True)
DEFAULT_LOAD_CALLBACK_PLUGINS = get_config(p, DEFAULTS, 'bin_ansible_callbacks', 'ANSIBLE_LOAD_CALLBACK_PLUGINS', False, boolean=True)
RETRY_FILES_ENABLED = get_config(p, DEFAULTS, 'retry_files_enabled', 'ANSIBLE_RETRY_FILES_ENABLED', True, boolean=True)
RETRY_FILES_SAVE_PATH = get_config(p, DEFAULTS, 'retry_files_save_path', 'ANSIBLE_RETRY_FILES_SAVE_PATH', '~/')
# CONNECTION RELATED
ANSIBLE_SSH_ARGS = get_config(p, 'ssh_connection', 'ssh_args', 'ANSIBLE_SSH_ARGS', None)
ANSIBLE_SSH_CONTROL_PATH = get_config(p, 'ssh_connection', 'control_path', 'ANSIBLE_SSH_CONTROL_PATH', "%(directory)s/ansible-ssh-%%h-%%p-%%r")

@ -420,7 +420,7 @@ class Inventory(object):
group = self.get_group(groupname)
if group is None:
raise Exception("group not found: %s" % groupname)
raise errors.AnsibleError("group not found: %s" % groupname)
vars = {}
@ -437,7 +437,10 @@ class Inventory(object):
def get_variables(self, hostname, update_cached=False, vault_password=None):
return self.get_host(hostname).get_variables()
host = self.get_host(hostname)
if not host:
raise errors.AnsibleError("host not found: %s" % hostname)
return host.get_variables()
def get_host_variables(self, hostname, update_cached=False, vault_password=None):

@ -36,6 +36,7 @@ class InventoryParser(object):
def __init__(self, filename=C.DEFAULT_HOST_LIST):
with open(filename) as fh:
self.filename = filename
self.lines = fh.readlines()
self.groups = {}
self.hosts = {}
@ -87,8 +88,8 @@ class InventoryParser(object):
self.groups = dict(all=all, ungrouped=ungrouped)
active_group_name = 'ungrouped'
for line in self.lines:
line = utils.before_comment(line).strip()
for lineno in range(len(self.lines)):
line = utils.before_comment(self.lines[lineno]).strip()
if line.startswith("[") and line.endswith("]"):
active_group_name = line.replace("[","").replace("]","")
if ":vars" in line or ":children" in line:
@ -142,7 +143,7 @@ class InventoryParser(object):
try:
(k,v) = t.split("=", 1)
except ValueError, e:
raise errors.AnsibleError("Invalid ini entry: %s - %s" % (t, str(e)))
raise errors.AnsibleError("%s:%s: Invalid ini entry: %s - %s" % (self.filename, lineno + 1, t, str(e)))
host.set_variable(k, self._parse_value(v))
self.groups[active_group_name].add_host(host)
@ -153,8 +154,8 @@ class InventoryParser(object):
def _parse_group_children(self):
group = None
for line in self.lines:
line = line.strip()
for lineno in range(len(self.lines)):
line = self.lines[lineno].strip()
if line is None or line == '':
continue
if line.startswith("[") and ":children]" in line:
@ -169,7 +170,7 @@ class InventoryParser(object):
elif group:
kid_group = self.groups.get(line, None)
if kid_group is None:
raise errors.AnsibleError("child group is not defined: (%s)" % line)
raise errors.AnsibleError("%s:%d: child group is not defined: (%s)" % (self.filename, lineno + 1, line))
else:
group.add_child_group(kid_group)
@ -180,13 +181,13 @@ class InventoryParser(object):
def _parse_group_variables(self):
group = None
for line in self.lines:
line = line.strip()
for lineno in range(len(self.lines)):
line = self.lines[lineno].strip()
if line.startswith("[") and ":vars]" in line:
line = line.replace("[","").replace(":vars]","")
group = self.groups.get(line, None)
if group is None:
raise errors.AnsibleError("can't add vars to undefined group: %s" % line)
raise errors.AnsibleError("%s:%d: can't add vars to undefined group: %s" % (self.filename, lineno + 1, line))
elif line.startswith("#") or line.startswith(";"):
pass
elif line.startswith("["):
@ -195,7 +196,7 @@ class InventoryParser(object):
pass
elif group:
if "=" not in line:
raise errors.AnsibleError("variables assigned to group must be in key=value form")
raise errors.AnsibleError("%s:%d: variables assigned to group must be in key=value form" % (self.filename, lineno + 1))
else:
(k, v) = [e.strip() for e in line.split("=", 1)]
group.set_variable(k, self._parse_value(v))
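# With the filename and line number now included, a malformed group variable
# in a hypothetical inventory reports, e.g.:
#   /etc/ansible/hosts:12: variables assigned to group must be in key=value form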

@ -22,10 +22,12 @@ import subprocess
import ansible.constants as C
from ansible.inventory.host import Host
from ansible.inventory.group import Group
from ansible.module_utils.basic import json_dict_bytes_to_unicode
from ansible import utils
from ansible import errors
import sys
class InventoryScript(object):
''' Host inventory parser for ansible using external inventory scripts. '''
@ -41,6 +43,10 @@ class InventoryScript(object):
except OSError, e:
raise errors.AnsibleError("problem running %s (%s)" % (' '.join(cmd), e))
(stdout, stderr) = sp.communicate()
if sp.returncode != 0:
raise errors.AnsibleError("Inventory script (%s) had an execution error: %s " % (filename,stderr))
self.data = stdout
# see comment about _meta below
self.host_vars_from_top = None
@ -53,6 +59,7 @@ class InventoryScript(object):
# not passing from_remote because data from CMDB is trusted
self.raw = utils.parse_json(self.data)
self.raw = json_dict_bytes_to_unicode(self.raw)
all = Group('all')
groups = dict(all=all)
@ -141,7 +148,7 @@ class InventoryScript(object):
if out.strip() == '':
return dict()
try:
return utils.parse_json(out)
return json_dict_bytes_to_unicode(utils.parse_json(out))
except ValueError:
raise errors.AnsibleError("could not parse post variable response: %s, %s" % (cmd, out))

@ -151,11 +151,18 @@ class ModuleReplacer(object):
complex_args_json = utils.jsonify(complex_args)
# We force conversion of module_args to str because module_common calls shlex.split,
# a standard library function that incorrectly handles Unicode input before Python 2.7.3.
# Note: it would be better to do all this conversion at the border
# (when the data is originally parsed into data structures) but
# it's currently coming from too many sources to make that
# effective.
try:
encoded_args = repr(module_args.encode('utf-8'))
except UnicodeDecodeError:
encoded_args = repr(module_args)
encoded_complex = repr(complex_args_json)
try:
encoded_complex = repr(complex_args_json.encode('utf-8'))
except UnicodeDecodeError:
encoded_complex = repr(complex_args_json)
# these strings should be part of the 'basic' snippet which is required to be included
module_data = module_data.replace(REPLACER_VERSION, repr(__version__))

@ -87,10 +87,19 @@ except ImportError:
HAVE_HASHLIB=False
try:
from hashlib import md5 as _md5
from hashlib import sha1 as _sha1
HAVE_HASHLIB=True
except ImportError:
from md5 import md5 as _md5
from sha import sha as _sha1
try:
from hashlib import md5 as _md5
except ImportError:
try:
from md5 import md5 as _md5
except ImportError:
# MD5 unavailable. Possibly FIPS mode
_md5 = None
try:
from hashlib import sha256 as _sha256
@ -151,6 +160,7 @@ FILE_COMMON_ARGUMENTS=dict(
serole = dict(),
selevel = dict(),
setype = dict(),
follow = dict(type='bool', default=False),
# not taken by the file module, but other modules call file so it must ignore them.
content = dict(no_log=True),
backup = dict(),
@ -161,6 +171,7 @@ FILE_COMMON_ARGUMENTS=dict(
directory_mode = dict(), # used by copy
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
def get_platform():
''' what's the platform? example: Linux is a platform. '''
@ -222,6 +233,103 @@ def load_platform_subclass(cls, *args, **kwargs):
return super(cls, subclass).__new__(subclass)
def json_dict_unicode_to_bytes(d):
''' Recursively convert dict keys and values to byte str
Specialized for json return because this only handles, lists, tuples,
and dict container types (the containers that the json module returns)
'''
if isinstance(d, unicode):
return d.encode('utf-8')
elif isinstance(d, dict):
return dict(map(json_dict_unicode_to_bytes, d.iteritems()))
elif isinstance(d, list):
return list(map(json_dict_unicode_to_bytes, d))
elif isinstance(d, tuple):
return tuple(map(json_dict_unicode_to_bytes, d))
else:
return d
def json_dict_bytes_to_unicode(d):
''' Recursively convert dict keys and values to unicode str
Specialized for json return because this only handles lists, tuples,
and dict container types (the containers that the json module returns)
'''
if isinstance(d, str):
return unicode(d, 'utf-8')
elif isinstance(d, dict):
return dict(map(json_dict_bytes_to_unicode, d.iteritems()))
elif isinstance(d, list):
return list(map(json_dict_bytes_to_unicode, d))
elif isinstance(d, tuple):
return tuple(map(json_dict_bytes_to_unicode, d))
else:
return d
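# Illustrative round trip (a sketch; the values are made up):
#   json_dict_unicode_to_bytes({u'name': u'web1'}) == {'name': 'web1'}
#   json_dict_bytes_to_unicode({'name': 'web1'}) == {u'name': u'web1'}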
def heuristic_log_sanitize(data):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
return ''.join(output)
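# Illustrative behaviour (a sketch; the URL is made up):
#   heuristic_log_sanitize('repo=http://bob:secret@example.com/repo.git')
#   returns 'repo=http://bob:********@example.com/repo.git'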
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
@ -295,6 +403,11 @@ class AnsibleModule(object):
else:
path = os.path.expanduser(path)
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(path):
path = os.path.realpath(path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
@ -962,69 +1075,10 @@ class AnsibleModule(object):
if k in params:
self.fail_json(msg="duplicate parameter: %s (value=%s)" % (k, v))
params[k] = v
params2 = json.loads(MODULE_COMPLEX_ARGS)
params2 = json_dict_unicode_to_bytes(json.loads(MODULE_COMPLEX_ARGS))
params2.update(params)
return (params2, args)
def _heuristic_log_sanitize(self, data):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
return ''.join(output)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
@ -1047,7 +1101,7 @@ class AnsibleModule(object):
param_val = str(param_val)
elif isinstance(param_val, unicode):
param_val = param_val.encode('utf-8')
log_args[param] = self._heuristic_log_sanitize(param_val)
log_args[param] = heuristic_log_sanitize(param_val)
module = 'ansible-%s' % os.path.basename(__file__)
msg = []
@ -1069,12 +1123,11 @@ class AnsibleModule(object):
msg = msg.encode('utf-8')
if (has_journal):
journal_args = ["MESSAGE=%s %s" % (module, msg)]
journal_args.append("MODULE=%s" % os.path.basename(__file__))
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append(arg.upper() + "=" + str(log_args[arg]))
journal_args.append((arg.upper(), str(log_args[arg])))
try:
journal.sendv(*journal_args)
journal.send("%s %s" % (module, msg), **dict(journal_args))
except IOError, e:
# fall back to syslog since logging to journal failed
syslog.openlog(str(module), 0, syslog.LOG_USER)
@ -1207,9 +1260,24 @@ class AnsibleModule(object):
return digest.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file(). '''
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if not _md5:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, _md5())
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, _sha1())
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
if not HAVE_HASHLIB:
@ -1320,7 +1388,7 @@ class AnsibleModule(object):
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False):
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False, prompt_regex=None):
'''
Execute a command, returns rc, stdout, and stderr.
args is the command to run
@ -1328,12 +1396,17 @@ class AnsibleModule(object):
If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
If args is a string and use_unsafe_shell=True it run with shell=True.
Other arguments:
- check_rc (boolean) Whether to call fail_json in case of
non zero RC. Default is False.
- close_fds (boolean) See documentation for subprocess.Popen().
Default is True.
- executable (string) See documentation for subprocess.Popen().
Default is None.
- check_rc (boolean) Whether to call fail_json in case of
non zero RC. Default is False.
- close_fds (boolean) See documentation for subprocess.Popen().
Default is True.
- executable (string) See documentation for subprocess.Popen().
Default is None.
- prompt_regex (string) A regex string (not a compiled regex) which
can be used to detect prompts in the stdout
which would otherwise cause the execution
to hang (especially if no input data is
specified)
'''
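# Hypothetical call showing the new parameter: return early (rc 257)
# instead of hanging when the command prints a prompt and no input
# data was supplied, e.g.
#   rc, out, err = module.run_command('svn up', prompt_regex=r'^Password:')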
shell = False
@ -1349,6 +1422,13 @@ class AnsibleModule(object):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
prompt_re = None
if prompt_regex:
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
# expand things like $HOME and ~
if not shell:
args = [ os.path.expandvars(os.path.expanduser(x)) for x in args ]
@ -1365,27 +1445,27 @@ class AnsibleModule(object):
# create a printable version of the command for use
# in reporting later, which strips out things like
# passwords from the args list
if isinstance(args, list):
clean_args = " ".join(pipes.quote(arg) for arg in args)
if isinstance(args, basestring):
to_clean_args = shlex.split(args.encode('utf-8'))
else:
clean_args = args
# all clean strings should return two match groups,
# where the first is the CLI argument and the second
# is the password/key/phrase that will be hidden
clean_re_strings = [
# this removes things like --password, --pass, --pass-wd, etc.
# optionally followed by an '=' or a space. The password can
# be quoted or not too, though it does not care about quotes
# that are not balanced
# source: http://blog.stevenlevithan.com/archives/match-quoted-string
r'([-]{0,2}pass[-]?(?:word|wd)?[=\s]?)((?:["\'])?(?:[^\s])*(?:\1)?)',
r'^(?P<before>.*:)(?P<password>.*)(?P<after>\@.*)$',
# TODO: add more regex checks here
]
for re_str in clean_re_strings:
r = re.compile(re_str)
clean_args = r.sub(r'\1********', clean_args)
to_clean_args = args
clean_args = []
is_passwd = False
for arg in to_clean_args:
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
clean_args.append(heuristic_log_sanitize(arg))
clean_args = ' '.join(pipes.quote(arg) for arg in clean_args)
if data:
st_in = subprocess.PIPE
@ -1442,6 +1522,10 @@ class AnsibleModule(object):
stderr += dat
if dat == '':
rpipes.remove(cmd.stderr)
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
@ -1466,7 +1550,7 @@ class AnsibleModule(object):
self.fail_json(rc=257, msg=traceback.format_exc(), cmd=clean_args)
if rc != 0 and check_rc:
msg = stderr.rstrip()
msg = heuristic_log_sanitize(stderr.rstrip())
self.fail_json(cmd=clean_args, rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
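A minimal sketch of how a module might use the new prompt_regex parameter; the module boilerplate and the 'somecli' command are hypothetical stand-ins, not code from this patch:

#!/usr/bin/python
# Hypothetical module: guard an interactive command with prompt_regex.
from ansible.module_utils.basic import *  # 1.x-style module boilerplate

def main():
    module = AnsibleModule(argument_spec=dict())
    # If the command stops at a "[Y/n]" prompt and no stdin data was given,
    # run_command returns rc=257 instead of hanging forever.
    rc, out, err = module.run_command(['/usr/bin/somecli', 'apply'],
                                      prompt_regex=r'\[Y/n\]')
    if rc != 0:
        module.fail_json(msg='command failed or prompted for input',
                         rc=rc, stdout=out, stderr=err)
    module.exit_json(changed=False, stdout=out)

main()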

@ -0,0 +1,128 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c) 2014, Toshio Kuratomi <tkuratomi@ansible.com>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
class SQLParseError(Exception):
pass
class UnclosedQuoteError(SQLParseError):
pass
# maps a type of identifier to the maximum number of dot levels that are
# allowed to specify that identifier. For example, a database column can be
# specified by up to 4 levels: database.schema.table.column
_PG_IDENTIFIER_TO_DOT_LEVEL = dict(database=1, schema=2, table=3, column=4, role=1)
_MYSQL_IDENTIFIER_TO_DOT_LEVEL = dict(database=1, table=2, column=3, role=1, vars=1)
def _find_end_quote(identifier, quote_char):
accumulate = 0
while True:
try:
quote = identifier.index(quote_char)
except ValueError:
raise UnclosedQuoteError
accumulate = accumulate + quote
try:
next_char = identifier[quote+1]
except IndexError:
return accumulate
if next_char == quote_char:
try:
identifier = identifier[quote+2:]
accumulate = accumulate + 2
except IndexError:
raise UnclosedQuoteError
else:
return accumulate
def _identifier_parse(identifier, quote_char):
if not identifier:
raise SQLParseError('Identifier name unspecified or unquoted trailing dot')
already_quoted = False
if identifier.startswith(quote_char):
already_quoted = True
try:
end_quote = _find_end_quote(identifier[1:], quote_char=quote_char) + 1
except UnclosedQuoteError:
already_quoted = False
else:
if end_quote < len(identifier) - 1:
if identifier[end_quote+1] == '.':
dot = end_quote + 1
first_identifier = identifier[:dot]
next_identifier = identifier[dot+1:]
further_identifiers = _identifier_parse(next_identifier, quote_char)
further_identifiers.insert(0, first_identifier)
else:
raise SQLParseError('User escaped identifiers must escape extra quotes')
else:
further_identifiers = [identifier]
if not already_quoted:
try:
dot = identifier.index('.')
except ValueError:
identifier = identifier.replace(quote_char, quote_char*2)
identifier = ''.join((quote_char, identifier, quote_char))
further_identifiers = [identifier]
else:
if dot == 0 or dot >= len(identifier) - 1:
identifier = identifier.replace(quote_char, quote_char*2)
identifier = ''.join((quote_char, identifier, quote_char))
further_identifiers = [identifier]
else:
first_identifier = identifier[:dot]
next_identifier = identifier[dot+1:]
further_identifiers = _identifier_parse(next_identifier, quote_char)
first_identifier = first_identifier.replace(quote_char, quote_char*2)
first_identifier = ''.join((quote_char, first_identifier, quote_char))
further_identifiers.insert(0, first_identifier)
return further_identifiers
def pg_quote_identifier(identifier, id_type):
identifier_fragments = _identifier_parse(identifier, quote_char='"')
if len(identifier_fragments) > _PG_IDENTIFIER_TO_DOT_LEVEL[id_type]:
raise SQLParseError('PostgreSQL does not support %s with more than %i dots' % (id_type, _PG_IDENTIFIER_TO_DOT_LEVEL[id_type]))
return '.'.join(identifier_fragments)
def mysql_quote_identifier(identifier, id_type):
identifier_fragments = _identifier_parse(identifier, quote_char='`')
if len(identifier_fragments) > _MYSQL_IDENTIFIER_TO_DOT_LEVEL[id_type]:
raise SQLParseError('MySQL does not support %s with more than %i dots' % (id_type, _MYSQL_IDENTIFIER_TO_DOT_LEVEL[id_type]))
special_cased_fragments = []
for fragment in identifier_fragments:
if fragment == '`*`':
special_cased_fragments.append('*')
else:
special_cased_fragments.append(fragment)
return '.'.join(special_cased_fragments)
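For reference, a quick sketch of what these helpers produce, assuming the snippet is importable as ansible.module_utils.database (the values follow from the quoting rules above):

from ansible.module_utils.database import (
    pg_quote_identifier, mysql_quote_identifier, SQLParseError)

print pg_quote_identifier('public.users', 'table')   # "public"."users"
print pg_quote_identifier('we"ird', 'role')          # "we""ird" -- embedded quotes doubled
print mysql_quote_identifier('mydb.*', 'table')      # `mydb`.* -- the * fragment is special-cased
try:
    pg_quote_identifier('a.b.c.d.e', 'column')       # too many dot levels for a column
except SQLParseError, e:
    print e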

@ -36,7 +36,10 @@ AWS_REGIONS = [
'ap-northeast-1',
'ap-southeast-1',
'ap-southeast-2',
'cn-north-1',
'eu-central-1',
'eu-west-1',
'sa-east-1',
'us-east-1',
'us-west-1',
@ -54,7 +57,6 @@ def aws_common_argument_spec():
security_token=dict(no_log=True),
profile=dict(),
)
return spec
def ec2_argument_spec():
@ -164,6 +166,11 @@ def boto_fix_security_token_in_profile(conn, profile_name):
def connect_to_aws(aws_module, region, **params):
conn = aws_module.connect_to_region(region, **params)
if not conn:
if region not in [aws_module_region.name for aws_module_region in aws_module.regions()]:
raise StandardError("Region %s does not seem to be available for aws module %s. If the region definitely exists, you may need to upgrade boto" % (region, aws_module.__name__))
else:
raise StandardError("Unknown problem connecting to region %s for aws module %s." % (region, aws_module.__name__))
if params.get('profile_name'):
conn = boto_fix_security_token_in_profile(conn, params['profile_name'])
return conn
@ -179,13 +186,13 @@ def ec2_connect(module):
if region:
try:
ec2 = connect_to_aws(boto.ec2, region, **boto_params)
except boto.exception.NoAuthHandlerFound, e:
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
# Otherwise, no region so we fallback to the old connection method
elif ec2_url:
try:
ec2 = boto.connect_ec2_endpoint(ec2_url, **boto_params)
except boto.exception.NoAuthHandlerFound, e:
except (boto.exception.NoAuthHandlerFound, StandardError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="Either region or ec2_url must be specified")
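A rough caller-side sketch of the new region validation in connect_to_aws; the credentials are placeholders and the module-side wiring is assumed, not shown by this patch:

# Hypothetical usage from a cloud module.
import boto.ec2
from ansible.module_utils.ec2 import connect_to_aws

try:
    conn = connect_to_aws(boto.ec2, 'eu-central-1',
                          aws_access_key_id='AKIA...',       # placeholder
                          aws_secret_access_key='...')       # placeholder
except StandardError, e:
    # Raised when the region is unknown to the installed boto, with a hint
    # that upgrading boto may be required.
    print e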

@ -29,6 +29,7 @@ import socket
import struct
import datetime
import getpass
import pwd
import ConfigParser
import StringIO
@ -46,7 +47,7 @@ except ImportError:
import simplejson as json
# --------------------------------------------------------------
# timeout function to make sure some fact gathering
# steps do not exceed a time limit
class TimeoutError(Exception):
@ -82,19 +83,22 @@ class Facts(object):
subclass Facts.
"""
_I386RE = re.compile(r'i[3456]86')
# i86pc is a Solaris and derivatives-ism
_I386RE = re.compile(r'i([3456]86|86pc)')
# For the most part, we assume that platform.dist() will tell the truth.
# This is the fallback to handle unknowns or exceptions
OSDIST_DICT = { '/etc/redhat-release': 'RedHat',
'/etc/vmware-release': 'VMwareESX',
'/etc/openwrt_release': 'OpenWrt',
'/etc/system-release': 'OtherLinux',
'/etc/alpine-release': 'Alpine',
'/etc/release': 'Solaris',
'/etc/arch-release': 'Archlinux',
'/etc/SuSE-release': 'SuSE',
'/etc/gentoo-release': 'Gentoo',
'/etc/os-release': 'Debian' }
OSDIST_LIST = ( ('/etc/redhat-release', 'RedHat'),
('/etc/vmware-release', 'VMwareESX'),
('/etc/openwrt_release', 'OpenWrt'),
('/etc/system-release', 'OtherLinux'),
('/etc/alpine-release', 'Alpine'),
('/etc/release', 'Solaris'),
('/etc/arch-release', 'Archlinux'),
('/etc/SuSE-release', 'SuSE'),
('/etc/os-release', 'SuSE'),
('/etc/gentoo-release', 'Gentoo'),
('/etc/os-release', 'Debian'),
('/etc/lsb-release', 'Mandriva') )
SELINUX_MODE_DICT = { 1: 'enforcing', 0: 'permissive', -1: 'disabled' }
# A list of dicts. If there is a platform with more than one
@ -116,19 +120,23 @@ class Facts(object):
{ 'path' : '/usr/bin/pkg', 'name' : 'pkg' },
]
def __init__(self):
def __init__(self, load_on_init=True):
self.facts = {}
self.get_platform_facts()
self.get_distribution_facts()
self.get_cmdline()
self.get_public_ssh_host_keys()
self.get_selinux_facts()
self.get_pkg_mgr_facts()
self.get_lsb_facts()
self.get_date_time_facts()
self.get_user_facts()
self.get_local_facts()
self.get_env_facts()
if load_on_init:
self.get_platform_facts()
self.get_distribution_facts()
self.get_cmdline()
self.get_public_ssh_host_keys()
self.get_selinux_facts()
self.get_fips_facts()
self.get_pkg_mgr_facts()
self.get_lsb_facts()
self.get_date_time_facts()
self.get_user_facts()
self.get_local_facts()
self.get_env_facts()
def populate(self):
return self.facts
@ -185,7 +193,7 @@ class Facts(object):
# if that fails, skip it
rc, out, err = module.run_command(fn)
else:
out = open(fn).read()
out = get_file_content(fn, default='')
# load raw json
fact = 'loading %s' % fact_base
@ -230,6 +238,8 @@ class Facts(object):
FreeBSD = 'FreeBSD', HPUX = 'HP-UX'
)
# TODO: Rewrite this to use the function references in a dict pattern
# as it's much cleaner than this massive if-else
if self.facts['system'] == 'AIX':
self.facts['distribution'] = 'AIX'
rc, out, err = module.run_command("/usr/bin/oslevel")
@ -268,54 +278,116 @@ class Facts(object):
self.facts['distribution_major_version'] = dist[1].split('.')[0] or 'NA'
self.facts['distribution_release'] = dist[2] or 'NA'
# Try to handle the exceptions now ...
for (path, name) in Facts.OSDIST_DICT.items():
if os.path.exists(path) and os.path.getsize(path) > 0:
if self.facts['distribution'] == 'Fedora':
pass
elif name == 'RedHat':
data = get_file_content(path)
if 'Red Hat' in data:
self.facts['distribution'] = name
else:
self.facts['distribution'] = data.split()[0]
elif name == 'OtherLinux':
data = get_file_content(path)
if 'Amazon' in data:
self.facts['distribution'] = 'Amazon'
self.facts['distribution_version'] = data.split()[-1]
elif name == 'OpenWrt':
data = get_file_content(path)
if 'OpenWrt' in data:
for (path, name) in Facts.OSDIST_LIST:
if os.path.exists(path):
if os.path.getsize(path) > 0:
if self.facts['distribution'] in ('Fedora', ):
# Once we determine the value is one of these distros
# we trust the values are always correct
break
elif name == 'RedHat':
data = get_file_content(path)
if 'Red Hat' in data:
self.facts['distribution'] = name
else:
self.facts['distribution'] = data.split()[0]
break
elif name == 'OtherLinux':
data = get_file_content(path)
if 'Amazon' in data:
self.facts['distribution'] = 'Amazon'
self.facts['distribution_version'] = data.split()[-1]
break
elif name == 'OpenWrt':
data = get_file_content(path)
if 'OpenWrt' in data:
self.facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
self.facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
self.facts['distribution_release'] = release.groups()[0]
break
elif name == 'Alpine':
data = get_file_content(path)
self.facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
self.facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
self.facts['distribution_release'] = release.groups()[0]
elif name == 'Alpine':
data = get_file_content(path)
self.facts['distribution'] = 'Alpine'
self.facts['distribution_version'] = data
elif name == 'Solaris':
data = get_file_content(path).split('\n')[0]
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ','')
ora_prefix = 'Oracle '
self.facts['distribution'] = data.split()[0]
self.facts['distribution_version'] = data.split()[1]
self.facts['distribution_release'] = ora_prefix + data
elif name == 'SuSE':
data = get_file_content(path).splitlines()
for line in data:
if '=' in line:
self.facts['distribution_release'] = line.split('=')[1].strip()
elif name == 'Debian':
data = get_file_content(path).split('\n')[0]
release = re.search("PRETTY_NAME.+ \(?([^ ]+?)\)?\"", data)
if release:
self.facts['distribution_release'] = release.groups()[0]
self.facts['distribution_version'] = data
break
elif name == 'Solaris':
data = get_file_content(path).split('\n')[0]
if 'Solaris' in data:
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ','')
ora_prefix = 'Oracle '
self.facts['distribution'] = data.split()[0]
self.facts['distribution_version'] = data.split()[1]
self.facts['distribution_release'] = ora_prefix + data
break
uname_rc, uname_out, uname_err = module.run_command(['uname', '-v'])
distribution_version = None
if 'SmartOS' in data:
self.facts['distribution'] = 'SmartOS'
if os.path.exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').split('\n') if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
self.facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
self.facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_rc == 0 and 'NexentaOS_' in uname_out:
self.facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if self.facts['distribution'] in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
self.facts['distribution_release'] = data.strip()
if distribution_version is not None:
self.facts['distribution_version'] = distribution_version
elif uname_rc == 0:
self.facts['distribution_version'] = uname_out.split('\n')[0].strip()
break
elif name == 'SuSE':
data = get_file_content(path)
if 'suse' in data.lower():
if path == '/etc/os-release':
release = re.search("PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
distdata = get_file_content(path).split('\n')[0]
self.facts['distribution'] = distdata.split('=')[1]
if release:
self.facts['distribution_release'] = release.groups()[0]
break
elif path == '/etc/SuSE-release':
data = data.splitlines()
distdata = get_file_content(path).split('\n')[0]
self.facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
self.facts['distribution_release'] = release.groups()[0].strip()
break
elif name == 'Debian':
data = get_file_content(path)
if 'Debian' in data:
release = re.search("PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
self.facts['distribution_release'] = release.groups()[0]
break
elif name == 'Mandriva':
data = get_file_content(path)
if 'Mandriva' in data:
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
self.facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
self.facts['distribution_release'] = release.groups()[0]
self.facts['distribution'] = name
break
else:
self.facts['distribution'] = name
@ -394,20 +466,16 @@ class Facts(object):
self.facts['lsb']['major_release'] = self.facts['lsb']['release'].split('.')[0]
elif lsb_path is None and os.path.exists('/etc/lsb-release'):
self.facts['lsb'] = {}
f = open('/etc/lsb-release', 'r')
try:
for line in f.readlines():
value = line.split('=',1)[1].strip()
if 'DISTRIB_ID' in line:
self.facts['lsb']['id'] = value
elif 'DISTRIB_RELEASE' in line:
self.facts['lsb']['release'] = value
elif 'DISTRIB_DESCRIPTION' in line:
self.facts['lsb']['description'] = value
elif 'DISTRIB_CODENAME' in line:
self.facts['lsb']['codename'] = value
finally:
f.close()
for line in get_file_lines('/etc/lsb-release'):
value = line.split('=',1)[1].strip()
if 'DISTRIB_ID' in line:
self.facts['lsb']['id'] = value
elif 'DISTRIB_RELEASE' in line:
self.facts['lsb']['release'] = value
elif 'DISTRIB_DESCRIPTION' in line:
self.facts['lsb']['description'] = value
elif 'DISTRIB_CODENAME' in line:
self.facts['lsb']['codename'] = value
else:
return self.facts
@ -451,6 +519,13 @@ class Facts(object):
self.facts['selinux']['type'] = 'unknown'
def get_fips_facts(self):
self.facts['fips'] = False
data = get_file_content('/proc/sys/crypto/fips_enabled')
if data and data == '1':
self.facts['fips'] = True
def get_date_time_facts(self):
self.facts['date_time'] = {}
@ -476,6 +551,12 @@ class Facts(object):
# User
def get_user_facts(self):
self.facts['user_id'] = getpass.getuser()
pwent = pwd.getpwnam(getpass.getuser())
self.facts['user_uid'] = pwent.pw_uid
self.facts['user_gid'] = pwent.pw_gid
self.facts['user_gecos'] = pwent.pw_gecos
self.facts['user_dir'] = pwent.pw_dir
self.facts['user_shell'] = pwent.pw_shell
def get_env_facts(self):
self.facts['env'] = {}
@ -527,7 +608,11 @@ class LinuxHardware(Hardware):
"""
platform = 'Linux'
MEMORY_FACTS = ['MemTotal', 'SwapTotal', 'MemFree', 'SwapFree']
# Originally only had these four as top-level facts
ORIGINAL_MEMORY_FACTS = frozenset(('MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'))
# Now we have all of these in a dict structure
MEMORY_FACTS = ORIGINAL_MEMORY_FACTS.union(('Buffers', 'Cached', 'SwapCached'))
def __init__(self):
Hardware.__init__(self)
@ -546,31 +631,95 @@ class LinuxHardware(Hardware):
def get_memory_facts(self):
if not os.access("/proc/meminfo", os.R_OK):
return
for line in open("/proc/meminfo").readlines():
memstats = {}
for line in get_file_lines("/proc/meminfo"):
data = line.split(":", 1)
key = data[0]
if key in LinuxHardware.MEMORY_FACTS:
if key in self.ORIGINAL_MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
self.facts["%s_mb" % key.lower()] = long(val) / 1024
if key in self.MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memstats[key.lower()] = long(val) / 1024
if None not in (memstats.get('memtotal'), memstats.get('memfree')):
memstats['real:used'] = memstats['memtotal'] - memstats['memfree']
if None not in (memstats.get('cached'), memstats.get('memfree'), memstats.get('buffers')):
memstats['nocache:free'] = memstats['cached'] + memstats['memfree'] + memstats['buffers']
if None not in (memstats.get('memtotal'), memstats.get('nocache:free')):
memstats['nocache:used'] = memstats['memtotal'] - memstats['nocache:free']
if None not in (memstats.get('swaptotal'), memstats.get('swapfree')):
memstats['swap:used'] = memstats['swaptotal'] - memstats['swapfree']
self.facts['memory_mb'] = {
'real' : {
'total': memstats.get('memtotal'),
'used': memstats.get('real:used'),
'free': memstats.get('memfree'),
},
'nocache' : {
'free': memstats.get('nocache:free'),
'used': memstats.get('nocache:used'),
},
'swap' : {
'total': memstats.get('swaptotal'),
'free': memstats.get('swapfree'),
'used': memstats.get('swap:used'),
'cached': memstats.get('swapcached'),
},
}
def get_cpu_facts(self):
i = 0
vendor_id_occurrence = 0
model_name_occurrence = 0
physid = 0
coreid = 0
sockets = {}
cores = {}
xen = False
xen_paravirt = False
try:
if os.path.exists('/proc/xen'):
xen = True
else:
for line in get_file_lines('/sys/hypervisor/type'):
if line.strip() == 'xen':
xen = True
# Only interested in the first line
break
except IOError:
pass
if not os.access("/proc/cpuinfo", os.R_OK):
return
self.facts['processor'] = []
for line in open("/proc/cpuinfo").readlines():
for line in get_file_lines('/proc/cpuinfo'):
data = line.split(":", 1)
key = data[0].strip()
if xen:
if key == 'flags':
# Check for vme cpu flag, Xen paravirt does not expose this.
# Need to detect Xen paravirt because it exposes cpuinfo
# differently than Xen HVM or KVM and causes reporting of
# only a single cpu core.
if 'vme' not in data[1].split():  # check the flag list itself, not the split() pair
xen_paravirt = True
# model name is for Intel arch, Processor (mind the uppercase P)
# works for some ARM devices, like the Sheevaplug.
if key == 'model name' or key == 'Processor' or key == 'vendor_id':
if 'processor' not in self.facts:
self.facts['processor'] = []
self.facts['processor'].append(data[1].strip())
if key == 'vendor_id':
vendor_id_occurrence += 1
if key == 'model name':
model_name_occurrence += 1
i += 1
elif key == 'physical id':
physid = data[1].strip()
@ -586,13 +735,23 @@ class LinuxHardware(Hardware):
cores[coreid] = int(data[1].strip())
elif key == '# processors':
self.facts['processor_cores'] = int(data[1].strip())
if vendor_id_occurrence == model_name_occurrence:
i = vendor_id_occurrence
if self.facts['architecture'] != 's390x':
self.facts['processor_count'] = sockets and len(sockets) or i
self.facts['processor_cores'] = sockets.values() and sockets.values()[0] or 1
self.facts['processor_threads_per_core'] = ((cores.values() and
cores.values()[0] or 1) / self.facts['processor_cores'])
self.facts['processor_vcpus'] = (self.facts['processor_threads_per_core'] *
self.facts['processor_count'] * self.facts['processor_cores'])
if xen_paravirt:
self.facts['processor_count'] = i
self.facts['processor_cores'] = i
self.facts['processor_threads_per_core'] = 1
self.facts['processor_vcpus'] = i
else:
self.facts['processor_count'] = sockets and len(sockets) or i
self.facts['processor_cores'] = sockets.values() and sockets.values()[0] or 1
self.facts['processor_threads_per_core'] = ((cores.values() and
cores.values()[0] or 1) / self.facts['processor_cores'])
self.facts['processor_vcpus'] = (self.facts['processor_threads_per_core'] *
self.facts['processor_count'] * self.facts['processor_cores'])
def get_dmi_facts(self):
''' learn dmi facts from system
@ -683,6 +842,13 @@ class LinuxHardware(Hardware):
size_available = statvfs_result.f_bsize * (statvfs_result.f_bavail)
except OSError, e:
continue
lsblkPath = module.get_bin_path("lsblk")
rc, out, err = module.run_command("%s -ln --output UUID %s" % (lsblkPath, fields[0]), use_unsafe_shell=True)
if rc == 0:
uuid = out.strip()
else:
uuid = 'NA'
self.facts['mounts'].append(
{'mount': fields[1],
@ -692,6 +858,7 @@ class LinuxHardware(Hardware):
# statvfs data
'size_total': size_total,
'size_available': size_available,
'uuid': uuid,
})
def get_device_facts(self):
@ -1108,7 +1275,7 @@ class NetBSDHardware(Hardware):
if not os.access("/proc/cpuinfo", os.R_OK):
return
self.facts['processor'] = []
for line in open("/proc/cpuinfo").readlines():
for line in get_file_lines("/proc/cpuinfo"):
data = line.split(":", 1)
key = data[0].strip()
# model name is for Intel arch, Processor (mind the uppercase P)
@ -1134,7 +1301,7 @@ class NetBSDHardware(Hardware):
def get_memory_facts(self):
if not os.access("/proc/meminfo", os.R_OK):
return
for line in open("/proc/meminfo").readlines():
for line in get_file_lines("/proc/meminfo"):
data = line.split(":", 1)
key = data[0]
if key in NetBSDHardware.MEMORY_FACTS:
@ -1312,7 +1479,7 @@ class HPUX(Hardware):
self.facts['memtotal_mb'] = int(data) / 1024
except AttributeError:
#For systems where memory details aren't sent to syslog or the log has rotated, use parsed
#adb output. Unfortunatley /dev/kmem doesn't have world-read, so this only works as root.
#adb output. Unfortunately /dev/kmem doesn't have world-read, so this only works as root.
if os.access("/dev/kmem", os.R_OK):
rc, out, err = module.run_command("echo 'phys_mem_pages/D' | adb -k /stand/vmunix /dev/kmem | tail -1 | awk '{print $2}'", use_unsafe_shell=True)
if not err:
@ -1516,44 +1683,44 @@ class LinuxNetwork(Network):
device = os.path.basename(path)
interfaces[device] = { 'device': device }
if os.path.exists(os.path.join(path, 'address')):
macaddress = open(os.path.join(path, 'address')).read().strip()
macaddress = get_file_content(os.path.join(path, 'address'), default='')
if macaddress and macaddress != '00:00:00:00:00:00':
interfaces[device]['macaddress'] = macaddress
if os.path.exists(os.path.join(path, 'mtu')):
interfaces[device]['mtu'] = int(open(os.path.join(path, 'mtu')).read().strip())
interfaces[device]['mtu'] = int(get_file_content(os.path.join(path, 'mtu')))
if os.path.exists(os.path.join(path, 'operstate')):
interfaces[device]['active'] = open(os.path.join(path, 'operstate')).read().strip() != 'down'
interfaces[device]['active'] = get_file_content(os.path.join(path, 'operstate')) != 'down'
# if os.path.exists(os.path.join(path, 'carrier')):
# interfaces[device]['link'] = open(os.path.join(path, 'carrier')).read().strip() == '1'
# interfaces[device]['link'] = get_file_content(os.path.join(path, 'carrier')) == '1'
if os.path.exists(os.path.join(path, 'device','driver', 'module')):
interfaces[device]['module'] = os.path.basename(os.path.realpath(os.path.join(path, 'device', 'driver', 'module')))
if os.path.exists(os.path.join(path, 'type')):
type = open(os.path.join(path, 'type')).read().strip()
if type == '1':
_type = get_file_content(os.path.join(path, 'type'))
if _type == '1':
interfaces[device]['type'] = 'ether'
elif type == '512':
elif _type == '512':
interfaces[device]['type'] = 'ppp'
elif type == '772':
elif _type == '772':
interfaces[device]['type'] = 'loopback'
if os.path.exists(os.path.join(path, 'bridge')):
interfaces[device]['type'] = 'bridge'
interfaces[device]['interfaces'] = [ os.path.basename(b) for b in glob.glob(os.path.join(path, 'brif', '*')) ]
if os.path.exists(os.path.join(path, 'bridge', 'bridge_id')):
interfaces[device]['id'] = open(os.path.join(path, 'bridge', 'bridge_id')).read().strip()
interfaces[device]['id'] = get_file_content(os.path.join(path, 'bridge', 'bridge_id'), default='')
if os.path.exists(os.path.join(path, 'bridge', 'stp_state')):
interfaces[device]['stp'] = open(os.path.join(path, 'bridge', 'stp_state')).read().strip() == '1'
interfaces[device]['stp'] = get_file_content(os.path.join(path, 'bridge', 'stp_state')) == '1'
if os.path.exists(os.path.join(path, 'bonding')):
interfaces[device]['type'] = 'bonding'
interfaces[device]['slaves'] = open(os.path.join(path, 'bonding', 'slaves')).read().split()
interfaces[device]['mode'] = open(os.path.join(path, 'bonding', 'mode')).read().split()[0]
interfaces[device]['miimon'] = open(os.path.join(path, 'bonding', 'miimon')).read().split()[0]
interfaces[device]['lacp_rate'] = open(os.path.join(path, 'bonding', 'lacp_rate')).read().split()[0]
primary = open(os.path.join(path, 'bonding', 'primary')).read()
interfaces[device]['slaves'] = get_file_content(os.path.join(path, 'bonding', 'slaves'), default='').split()
interfaces[device]['mode'] = get_file_content(os.path.join(path, 'bonding', 'mode'), default='').split()[0]
interfaces[device]['miimon'] = get_file_content(os.path.join(path, 'bonding', 'miimon'), default='').split()[0]
interfaces[device]['lacp_rate'] = get_file_content(os.path.join(path, 'bonding', 'lacp_rate'), default='').split()[0]
primary = get_file_content(os.path.join(path, 'bonding', 'primary'))
if primary:
interfaces[device]['primary'] = primary
path = os.path.join(path, 'bonding', 'all_slaves_active')
if os.path.exists(path):
interfaces[device]['all_slaves_active'] = open(path).read() == '1'
interfaces[device]['all_slaves_active'] = get_file_content(path) == '1'
# Check whether an interface is in promiscuous mode
if os.path.exists(os.path.join(path,'flags')):
@ -1561,7 +1728,7 @@ class LinuxNetwork(Network):
# The second byte indicates whether the interface is in promiscuous mode.
# 1 = promisc
# 0 = no promisc
data = int(open(os.path.join(path, 'flags')).read().strip(),16)
data = int(get_file_content(os.path.join(path, 'flags')),16)
promisc_mode = (data & 0x0100 > 0)
interfaces[device]['promisc'] = promisc_mode
@ -2107,7 +2274,7 @@ class LinuxVirtual(Virtual):
self.facts['virtualization_type'] = 'xen'
self.facts['virtualization_role'] = 'guest'
try:
for line in open('/proc/xen/capabilities'):
for line in get_file_lines('/proc/xen/capabilities'):
if "control_d" in line:
self.facts['virtualization_role'] = 'host'
except IOError:
@ -2123,7 +2290,11 @@ class LinuxVirtual(Virtual):
return
if os.path.exists('/proc/1/cgroup'):
for line in open('/proc/1/cgroup').readlines():
for line in get_file_lines('/proc/1/cgroup'):
if re.search('/docker/', line):
self.facts['virtualization_type'] = 'docker'
self.facts['virtualization_role'] = 'guest'
return
if re.search('/lxc/', line):
self.facts['virtualization_type'] = 'lxc'
self.facts['virtualization_role'] = 'guest'
@ -2171,8 +2342,13 @@ class LinuxVirtual(Virtual):
self.facts['virtualization_role'] = 'guest'
return
if sys_vendor == 'QEMU':
self.facts['virtualization_type'] = 'kvm'
self.facts['virtualization_role'] = 'guest'
return
if os.path.exists('/proc/self/status'):
for line in open('/proc/self/status').readlines():
for line in get_file_lines('/proc/self/status'):
if re.match('^VxID: \d+', line):
self.facts['virtualization_type'] = 'linux_vserver'
if re.match('^VxID: 0', line):
@ -2182,7 +2358,7 @@ class LinuxVirtual(Virtual):
return
if os.path.exists('/proc/cpuinfo'):
for line in open('/proc/cpuinfo').readlines():
for line in get_file_lines('/proc/cpuinfo'):
if re.match('^model name.*QEMU Virtual CPU', line):
self.facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*User Mode Linux', line):
@ -2215,7 +2391,7 @@ class LinuxVirtual(Virtual):
# Beware that we can have both kvm and virtualbox running on a single system
if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK):
modules = []
for line in open("/proc/modules").readlines():
for line in get_file_lines("/proc/modules"):
data = line.split(" ", 1)
modules.append(data[0])
@ -2326,14 +2502,28 @@ class SunOSVirtual(Virtual):
self.facts['virtualization_type'] = 'virtualbox'
self.facts['virtualization_role'] = 'guest'
def get_file_content(path, default=None):
def get_file_content(path, default=None, strip=True):
data = default
if os.path.exists(path) and os.access(path, os.R_OK):
data = open(path).read().strip()
if len(data) == 0:
data = default
try:
datafile = open(path)
data = datafile.read()
if strip:
data = data.strip()
if len(data) == 0:
data = default
finally:
datafile.close()
return data
def get_file_lines(path):
'''file.readlines() that closes the file'''
datafile = open(path)
try:
return datafile.readlines()
finally:
datafile.close()
def ansible_facts(module):
facts = {}
facts.update(Facts().populate())
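A small standalone illustration of the get_file_content/get_file_lines helpers defined above (assumes a Linux host where /proc is readable; paste the helpers into the same file or import them from the facts module):

print get_file_content('/proc/sys/crypto/fips_enabled', default='0')
# -> '1' on a FIPS-enabled kernel, '0' when the file is missing or empty

for line in get_file_lines('/proc/meminfo'):
    if line.startswith('MemTotal'):
        print line.strip()
        break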

@ -32,7 +32,7 @@ import pprint
USER_AGENT_PRODUCT="Ansible-gce"
USER_AGENT_VERSION="v1"
def gce_connect(module):
def gce_connect(module, provider=None):
"""Return a Google Cloud Engine connection."""
service_account_email = module.params.get('service_account_email', None)
pem_file = module.params.get('pem_file', None)
@ -71,8 +71,14 @@ def gce_connect(module):
'secrets file.')
return None
# Allow for passing in libcloud Google DNS (e.g, Provider.GOOGLE)
if provider is None:
provider = Provider.GCE
try:
gce = get_driver(Provider.GCE)(service_account_email, pem_file, datacenter=module.params.get('zone'), project=project_id)
gce = get_driver(provider)(service_account_email, pem_file,
datacenter=module.params.get('zone', None),
project=project_id)
gce.connection.user_agent_append("%s/%s" % (
USER_AGENT_PRODUCT, USER_AGENT_VERSION))
except (RuntimeError, ValueError), e:
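In effect, callers can now select the libcloud driver; a comment-only sketch, where Provider.GOOGLE as a DNS provider constant is an assumption taken from the comment above:

# gce = gce_connect(module)                             # defaults to Provider.GCE (compute)
# gdns = gce_connect(module, provider=Provider.GOOGLE)  # pass an alternate libcloud provider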

@ -40,7 +40,7 @@ def add_git_host_key(module, url, accept_hostkey=True, create_dir=True):
""" idempotently add a git url hostkey """
fqdn = get_fqdn(module.params['repo'])
fqdn = get_fqdn(url)
if fqdn:
known_host = check_hostkey(module, fqdn)
@ -72,12 +72,14 @@ def get_fqdn(repo_url):
if 'ssh' not in parts[0] and 'git' not in parts[0]:
# don't try and scan a hostname that's not ssh
return None
# parts[1] will be empty on python2.4 on ssh:// or git:// urls, so
# ensure we actually have a parts[1] before continuing.
if parts[1] != '':
result = parts[1]
if ":" in result:
result = result.split(":")[0]
if "@" in result:
result = result.split("@", 1)[1]
if "@" in result:
result = result.split("@", 1)[1]
return result
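Illustrative expectations for the parsing above (assumptions read off the code, not a test suite shipped with this patch):

# get_fqdn('git@github.com:ansible/ansible.git')  -> 'github.com'
# get_fqdn('ssh://git@example.com/repo.git')      -> 'example.com' (parts[1] stripped of user/port)
# get_fqdn('https://example.com/repo.git')        -> None (not an ssh/git scheme, so not scanned)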

@ -142,3 +142,25 @@ Function ConvertTo-Bool
return
}
# Helper function to calculate a hash of a file in a way that PowerShell 3
# and above can handle:
Function Get-FileChecksum($path)
{
$hash = ""
If (Test-Path -PathType Leaf $path)
{
$sp = new-object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider;
$fp = [System.IO.File]::Open($path, [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read);
[System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower();
$fp.Dispose();
}
ElseIf (Test-Path -PathType Container $path)
{
$hash = "3";
}
Else
{
$hash = "1";
}
return $hash
}

@ -173,9 +173,9 @@ def rax_find_server(module, rax_module, server):
def rax_find_loadbalancer(module, rax_module, loadbalancer):
clb = rax_module.cloud_loadbalancers
try:
UUID(loadbalancer)
found = clb.get(loadbalancer)
except:
found = []
for lb in clb.list():
if loadbalancer == lb.name:
found.append(lb)
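The net effect of the change above, sketched as comments (assuming the pyrax cloud_loadbalancers API):

# rax_find_loadbalancer(module, pyrax, '6fedc...-uuid')  # parses as a UUID -> direct clb.get()
# rax_find_loadbalancer(module, pyrax, 'public-lb')      # not a UUID -> match by name in clb.list()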

@ -76,7 +76,7 @@ def split_args(args):
do_decode = True
except UnicodeDecodeError:
do_decode = False
items = args.strip().split('\n')
items = args.split('\n')
# iterate over the tokens, and reassemble any that may have been
# split on a space inside a jinja2 block.
@ -138,7 +138,10 @@ def split_args(args):
spacer = ' '
params[-1] = "%s%s%s" % (params[-1], spacer, token)
else:
params[-1] = "%s\n%s" % (params[-1], token)
spacer = ''
if not params[-1].endswith('\n') and idx == 0:
spacer = '\n'
params[-1] = "%s%s%s" % (params[-1], spacer, token)
appended = True
# if the number of paired block tags is not the same, the depth has changed, so we calculate that here
@ -170,7 +173,7 @@ def split_args(args):
# one item (meaning we split on newlines), add a newline back here
# to preserve the original structure
if len(items) > 1 and itemidx != len(items) - 1 and not line_continuation:
if not params[-1].endswith('\n'):
if not params[-1].endswith('\n') or item == '':
params[-1] += '\n'
# always clear the line continuation flag
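A hedged behavioral sketch of the splitter change above: newlines inside quoted module arguments should now be preserved rather than stripped (the exact output shown is an expectation, not a captured run):

from ansible.module_utils.splitter import split_args

args = 'src=/tmp/a content="line one\nline two"'
print split_args(args)
# expected: ['src=/tmp/a', 'content="line one\nline two"']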

@ -219,6 +219,8 @@ class SSLValidationHandler(urllib2.BaseHandler):
# Write the dummy ca cert if we are running on Mac OS X
if platform == 'Darwin':
os.write(tmp_fd, DUMMY_CA_CERT)
# Default Homebrew path for OpenSSL certs
paths_checked.append('/usr/local/etc/openssl')
# for all of the paths, find any .crt or .pem files
# and compile them into single temp file for use
@ -250,9 +252,33 @@ class SSLValidationHandler(urllib2.BaseHandler):
except:
self.module.fail_json(msg='Connection to proxy failed')
def detect_no_proxy(self, url):
'''
Detect if the 'no_proxy' environment variable is set and honor those locations.
'''
env_no_proxy = os.environ.get('no_proxy')
if env_no_proxy:
env_no_proxy = env_no_proxy.split(',')
netloc = urlparse.urlparse(url).netloc
for host in env_no_proxy:
if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
# Our requested URL matches something in no_proxy, so don't
# use the proxy for this
return False
return True
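A standalone sketch of the matching rule used above, inverted to return True when the proxy should be bypassed (Python 2 stdlib only):

import os
import urlparse

def would_bypass_proxy(url):
    no_proxy = os.environ.get('no_proxy')
    if no_proxy:
        netloc = urlparse.urlparse(url).netloc
        for host in no_proxy.split(','):
            # match with or without the :port suffix
            if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
                return True
    return False

os.environ['no_proxy'] = 'internal.example.com,localhost'
print would_bypass_proxy('https://repo.internal.example.com:8443/x')  # True
print would_bypass_proxy('https://pypi.python.org/simple')            # False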
def http_request(self, req):
tmp_ca_cert_path, paths_checked = self.get_ca_certs()
https_proxy = os.environ.get('https_proxy')
# Detect if 'no_proxy' environment variable is set and if our URL is included
use_proxy = self.detect_no_proxy(req.get_full_url())
if not use_proxy:
# ignore proxy settings for this host request
return req
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if https_proxy:

@ -0,0 +1 @@
Subproject commit 095f8681dbdfd2e9247446822e953287c9bca66c

@ -0,0 +1 @@
Subproject commit d94d0ce70b5db5ecfafbc73bebc822c9e18734f3

@ -21,6 +21,7 @@ import ansible.runner
from ansible.utils.template import template
from ansible import utils
from ansible import errors
from ansible.module_utils.splitter import split_args, unquote
import ansible.callbacks
import ansible.cache
import os
@ -209,12 +210,15 @@ class PlayBook(object):
name and returns the merged vars along with the path
'''
new_vars = existing_vars.copy()
tokens = shlex.split(play_ds.get('include', ''))
tokens = split_args(play_ds.get('include', ''))
for t in tokens[1:]:
(k,v) = t.split("=", 1)
new_vars[k] = template(basedir, v, new_vars)
try:
(k,v) = unquote(t).split("=", 1)
new_vars[k] = template(basedir, v, new_vars)
except ValueError, e:
raise errors.AnsibleError('included playbook variables must be in the form k=v, got: %s' % t)
return (new_vars, tokens[0])
return (new_vars, unquote(tokens[0]))
# *****************************************************
@ -395,6 +399,10 @@ class PlayBook(object):
remote_user=task.remote_user,
remote_port=task.play.remote_port,
module_vars=task.module_vars,
play_vars=task.play_vars,
play_file_vars=task.play_file_vars,
role_vars=task.role_vars,
role_params=task.role_params,
default_vars=task.default_vars,
extra_vars=self.extra_vars,
private_key_file=self.private_key_file,
@ -496,7 +504,7 @@ class PlayBook(object):
def _save_play_facts(host, facts):
# saves play facts in SETUP_CACHE, unless the module executed was
# set_fact, in which case we add them to the VARS_CACHE
if task.module_name == 'set_fact':
if task.module_name in ('set_fact', 'include_vars'):
utils.update_hash(self.VARS_CACHE, host, facts)
else:
utils.update_hash(self.SETUP_CACHE, host, facts)
@ -601,6 +609,9 @@ class PlayBook(object):
transport=play.transport,
is_playbook=True,
module_vars=play.vars,
play_vars=play.vars,
play_file_vars=play.vars_file_vars,
role_vars=play.role_vars,
default_vars=play.default_vars,
check=self.check,
diff=self.diff,
@ -632,19 +643,28 @@ class PlayBook(object):
buf = StringIO.StringIO()
for x in replay_hosts:
buf.write("%s\n" % x)
basedir = self.inventory.basedir()
basedir = C.shell_expand_path(C.RETRY_FILES_SAVE_PATH)
filename = "%s.retry" % os.path.basename(self.filename)
filename = filename.replace(".yml","")
filename = os.path.join(os.path.expandvars('$HOME/'), filename)
filename = os.path.join(basedir, filename)
try:
if not os.path.exists(basedir):
os.makedirs(basedir)
fd = open(filename, 'w')
fd.write(buf.getvalue())
fd.close()
return filename
except:
pass
return None
ansible.callbacks.display(
"\nERROR: could not create retry file. Check the value of \n"
+ "the configuration variable 'retry_files_save_path' or set \n"
+ "'retry_files_enabled' to False to avoid this message.\n",
color='red'
)
return None
return filename
# *****************************************************
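A sketch of the include-line parsing introduced above, using the same splitter helpers; the filename and variables are illustrative, and the templating of each value is omitted:

from ansible.module_utils.splitter import split_args, unquote

tokens = split_args('webservers.yml user=admin port=8080')
play_file = unquote(tokens[0])           # 'webservers.yml'
new_vars = {}
for t in tokens[1:]:
    # a token without '=' would raise ValueError, which the playbook code
    # turns into an AnsibleError about k=v form
    k, v = unquote(t).split('=', 1)
    new_vars[k] = v
print play_file, new_vars                # webservers.yml {'user': 'admin', 'port': '8080'}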

@ -33,12 +33,12 @@ import uuid
class Play(object):
__slots__ = [
'hosts', 'name', 'vars', 'default_vars', 'vars_prompt', 'vars_files',
'hosts', 'name', 'vars', 'vars_file_vars', 'role_vars', 'default_vars', 'vars_prompt', 'vars_files',
'handlers', 'remote_user', 'remote_port', 'included_roles', 'accelerate',
'accelerate_port', 'accelerate_ipv6', 'sudo', 'sudo_user', 'transport', 'playbook',
'tags', 'gather_facts', 'serial', '_ds', '_handlers', '_tasks',
'basedir', 'any_errors_fatal', 'roles', 'max_fail_pct', '_play_hosts', 'su', 'su_user',
'vault_password', 'no_log',
'vault_password', 'no_log', 'environment',
]
# to catch typos and so forth -- these are userland names
@ -48,7 +48,7 @@ class Play(object):
'tasks', 'handlers', 'remote_user', 'user', 'port', 'include', 'accelerate', 'accelerate_port', 'accelerate_ipv6',
'sudo', 'sudo_user', 'connection', 'tags', 'gather_facts', 'serial',
'any_errors_fatal', 'roles', 'role_names', 'pre_tasks', 'post_tasks', 'max_fail_percentage',
'su', 'su_user', 'vault_password', 'no_log',
'su', 'su_user', 'vault_password', 'no_log', 'environment',
]
# *************************************************
@ -65,10 +65,13 @@ class Play(object):
self.vars_prompt = ds.get('vars_prompt', {})
self.playbook = playbook
self.vars = self._get_vars()
self.vars_file_vars = dict() # these are vars read in from vars_files:
self.role_vars = dict() # these are vars read in from vars/main.yml files in roles
self.basedir = basedir
self.roles = ds.get('roles', None)
self.tags = ds.get('tags', None)
self.vault_password = vault_password
self.environment = ds.get('environment', {})
if self.tags is None:
self.tags = []
@ -77,12 +80,14 @@ class Play(object):
elif type(self.tags) != list:
self.tags = []
# make sure we have some special internal variables set
self.vars['playbook_dir'] = os.path.abspath(self.basedir)
# make sure we have some special internal variables set, which
# we use later when loading tasks and handlers
load_vars = dict()
load_vars['playbook_dir'] = os.path.abspath(self.basedir)
if self.playbook.inventory.basedir() is not None:
self.vars['inventory_dir'] = self.playbook.inventory.basedir()
load_vars['inventory_dir'] = self.playbook.inventory.basedir()
if self.playbook.inventory.src() is not None:
self.vars['inventory_file'] = self.playbook.inventory.src()
load_vars['inventory_file'] = self.playbook.inventory.src()
# We first load the vars files from the datastructure
# so we have the default variables to pass into the roles
@ -103,15 +108,17 @@ class Play(object):
self._update_vars_files_for_host(None)
# apply any extra_vars specified on the command line now
if type(self.playbook.extra_vars) == dict:
self.vars = utils.combine_vars(self.vars, self.playbook.extra_vars)
# template everything to be efficient, but do not pre-mature template
# tasks/handlers as they may have inventory scope overrides
# tasks/handlers as they may have inventory scope overrides. We also
# create a set of temporary variables for templating, so we don't
# trample on the existing vars structures
_tasks = ds.pop('tasks', [])
_handlers = ds.pop('handlers', [])
ds = template(basedir, ds, self.vars)
temp_vars = utils.merge_hash(self.vars, self.vars_file_vars)
temp_vars = utils.merge_hash(temp_vars, self.playbook.extra_vars)
ds = template(basedir, ds, temp_vars)
ds['tasks'] = _tasks
ds['handlers'] = _handlers
@ -121,7 +128,11 @@ class Play(object):
if hosts is None:
raise errors.AnsibleError('hosts declaration is required')
elif isinstance(hosts, list):
hosts = ';'.join(hosts)
try:
hosts = ';'.join(hosts)
except TypeError, e:
raise errors.AnsibleError('improper host declaration: %s' % str(e))
self.serial = str(ds.get('serial', 0))
self.hosts = hosts
self.name = ds.get('name', self.hosts)
@ -154,8 +165,7 @@ class Play(object):
raise errors.AnsibleError('sudo params ("sudo", "sudo_user") and su params '
'("su", "su_user") cannot be used together')
load_vars = {}
load_vars['role_names'] = ds.get('role_names',[])
load_vars['role_names'] = ds.get('role_names', [])
self._tasks = self._load_tasks(self._ds.get('tasks', []), load_vars)
self._handlers = self._load_tasks(self._ds.get('handlers', []), load_vars)
@ -218,7 +228,16 @@ class Play(object):
raise errors.AnsibleError("too many levels of recursion while resolving role dependencies")
for role in roles:
role_path,role_vars = self._get_role_path(role)
# save just the role params for this role, which exclude the special
# keywords 'role', 'tags', and 'when'.
role_params = role_vars.copy()
for item in ('role', 'tags', 'when'):
if item in role_params:
del role_params[item]
role_vars = utils.combine_vars(passed_vars, role_vars)
vars = self._resolve_main(utils.path_dwim(self.basedir, os.path.join(role_path, 'vars')))
vars_data = {}
if os.path.isfile(vars):
@ -227,10 +246,12 @@ class Play(object):
if not isinstance(vars_data, dict):
raise errors.AnsibleError("vars from '%s' are not a dict" % vars)
role_vars = utils.combine_vars(vars_data, role_vars)
defaults = self._resolve_main(utils.path_dwim(self.basedir, os.path.join(role_path, 'defaults')))
defaults_data = {}
if os.path.isfile(defaults):
defaults_data = utils.parse_yaml_from_file(defaults, vault_password=self.vault_password)
# the meta directory contains the yaml that should
# hold the list of dependencies (if any)
meta = self._resolve_main(utils.path_dwim(self.basedir, os.path.join(role_path, 'meta')))
@ -243,6 +264,13 @@ class Play(object):
for dep in dependencies:
allow_dupes = False
(dep_path,dep_vars) = self._get_role_path(dep)
# save the dep params, just as we did above
dep_params = dep_vars.copy()
for item in ('role', 'tags', 'when'):
if item in dep_params:
del dep_params[item]
meta = self._resolve_main(utils.path_dwim(self.basedir, os.path.join(dep_path, 'meta')))
if os.path.isfile(meta):
meta_data = utils.parse_yaml_from_file(meta, vault_password=self.vault_password)
@ -282,12 +310,15 @@ class Play(object):
dep_vars = utils.combine_vars(passed_vars, dep_vars)
dep_vars = utils.combine_vars(role_vars, dep_vars)
vars = self._resolve_main(utils.path_dwim(self.basedir, os.path.join(dep_path, 'vars')))
vars_data = {}
if os.path.isfile(vars):
vars_data = utils.parse_yaml_from_file(vars, vault_password=self.vault_password)
if vars_data:
dep_vars = utils.combine_vars(vars_data, dep_vars)
dep_vars = utils.combine_vars(dep_vars, vars_data)
defaults = self._resolve_main(utils.path_dwim(self.basedir, os.path.join(dep_path, 'defaults')))
dep_defaults_data = {}
if os.path.isfile(defaults):
@ -323,15 +354,28 @@ class Play(object):
dep_vars['when'] = tmpcond
self._build_role_dependencies([dep], dep_stack, passed_vars=dep_vars, level=level+1)
dep_stack.append([dep,dep_path,dep_vars,dep_defaults_data])
dep_stack.append([dep, dep_path, dep_vars, dep_params, dep_defaults_data])
# only add the current role when we're at the top level,
# otherwise we'll end up in a recursive loop
if level == 0:
self.included_roles.append(role)
dep_stack.append([role,role_path,role_vars,defaults_data])
dep_stack.append([role, role_path, role_vars, role_params, defaults_data])
return dep_stack
def _load_role_vars_files(self, vars_files):
# process variables stored in vars/main.yml files
role_vars = {}
for filename in vars_files:
if os.path.exists(filename):
new_vars = utils.parse_yaml_from_file(filename, vault_password=self.vault_password)
if new_vars:
if type(new_vars) != dict:
raise errors.AnsibleError("%s must be stored as dictionary/hash: %s" % (filename, type(new_vars)))
role_vars = utils.combine_vars(role_vars, new_vars)
return role_vars
def _load_role_defaults(self, defaults_files):
# process default variables
default_vars = {}
@ -358,10 +402,10 @@ class Play(object):
if type(roles) != list:
raise errors.AnsibleError("value of 'roles:' must be a list")
new_tasks = []
new_handlers = []
new_vars_files = []
defaults_files = []
new_tasks = []
new_handlers = []
role_vars_files = []
defaults_files = []
pre_tasks = ds.get('pre_tasks', None)
if type(pre_tasks) != list:
@ -372,18 +416,18 @@ class Play(object):
# flush handlers after pre_tasks
new_tasks.append(dict(meta='flush_handlers'))
roles = self._build_role_dependencies(roles, [], self.vars)
roles = self._build_role_dependencies(roles, [], {})
# give each role an uuid and
# make role_path available as variable to the task
for idx, val in enumerate(roles):
this_uuid = str(uuid.uuid4())
roles[idx][-2]['role_uuid'] = this_uuid
roles[idx][-2]['role_path'] = roles[idx][1]
roles[idx][-3]['role_uuid'] = this_uuid
roles[idx][-3]['role_path'] = roles[idx][1]
role_names = []
for (role,role_path,role_vars,default_vars) in roles:
for (role, role_path, role_vars, role_params, default_vars) in roles:
# special vars must be extracted from the dict to the included tasks
special_keys = [ "sudo", "sudo_user", "when", "with_items" ]
special_vars = {}
@ -416,19 +460,19 @@ class Play(object):
role_names.append(role_name)
if os.path.isfile(task):
nt = dict(include=pipes.quote(task), vars=role_vars, default_vars=default_vars, role_name=role_name)
nt = dict(include=pipes.quote(task), vars=role_vars, role_params=role_params, default_vars=default_vars, role_name=role_name)
for k in special_keys:
if k in special_vars:
nt[k] = special_vars[k]
new_tasks.append(nt)
if os.path.isfile(handler):
nt = dict(include=pipes.quote(handler), vars=role_vars, role_name=role_name)
nt = dict(include=pipes.quote(handler), vars=role_vars, role_params=role_params, role_name=role_name)
for k in special_keys:
if k in special_vars:
nt[k] = special_vars[k]
new_handlers.append(nt)
if os.path.isfile(vars_file):
new_vars_files.append(vars_file)
role_vars_files.append(vars_file)
if os.path.isfile(defaults_file):
defaults_files.append(defaults_file)
if os.path.isdir(library):
@ -456,13 +500,12 @@ class Play(object):
new_tasks.append(dict(meta='flush_handlers'))
new_handlers.extend(handlers)
new_vars_files.extend(vars_files)
ds['tasks'] = new_tasks
ds['handlers'] = new_handlers
ds['vars_files'] = new_vars_files
ds['role_names'] = role_names
self.role_vars = self._load_role_vars_files(role_vars_files)
self.default_vars = self._load_role_defaults(defaults_files)
return ds
@ -488,7 +531,7 @@ class Play(object):
# *************************************************
def _load_tasks(self, tasks, vars=None, default_vars=None, sudo_vars=None,
def _load_tasks(self, tasks, vars=None, role_params=None, default_vars=None, sudo_vars=None,
additional_conditions=None, original_file=None, role_name=None):
''' handle task and handler include statements '''
@ -500,6 +543,8 @@ class Play(object):
additional_conditions = []
if vars is None:
vars = {}
if role_params is None:
role_params = {}
if default_vars is None:
default_vars = {}
if sudo_vars is None:
@ -529,8 +574,7 @@ class Play(object):
results.append(Task(self, x))
continue
task_vars = self.vars.copy()
task_vars.update(vars)
task_vars = vars.copy()
if original_file:
task_vars['_original_file'] = original_file
@ -552,11 +596,15 @@ class Play(object):
included_additional_conditions.append(x[k])
elif type(x[k]) is list:
included_additional_conditions.extend(x[k])
elif k in ("include", "vars", "default_vars", "sudo", "sudo_user", "role_name", "no_log"):
elif k in ("include", "vars", "role_params", "default_vars", "sudo", "sudo_user", "role_name", "no_log"):
continue
else:
include_vars[k] = x[k]
# get any role parameters specified
role_params = x.get('role_params', {})
# get any role default variables specified
default_vars = x.get('default_vars', {})
if not default_vars:
default_vars = self.default_vars
@ -582,19 +630,29 @@ class Play(object):
dirname = self.basedir
if original_file:
dirname = os.path.dirname(original_file)
include_file = template(dirname, tokens[0], mv)
# temp vars are used here to avoid trampling on the existing vars structures
temp_vars = utils.merge_hash(self.vars, self.vars_file_vars)
temp_vars = utils.merge_hash(temp_vars, mv)
temp_vars = utils.merge_hash(temp_vars, self.playbook.extra_vars)
include_file = template(dirname, tokens[0], temp_vars)
include_filename = utils.path_dwim(dirname, include_file)
data = utils.parse_yaml_from_file(include_filename, vault_password=self.vault_password)
if 'role_name' in x and data is not None:
for y in data:
if isinstance(y, dict) and 'include' in y:
y['role_name'] = new_role
loaded = self._load_tasks(data, mv, default_vars, included_sudo_vars, list(included_additional_conditions), original_file=include_filename, role_name=new_role)
loaded = self._load_tasks(data, mv, role_params, default_vars, included_sudo_vars, list(included_additional_conditions), original_file=include_filename, role_name=new_role)
results += loaded
elif type(x) == dict:
task = Task(
self, x,
module_vars=task_vars,
play_vars=self.vars,
play_file_vars=self.vars_file_vars,
role_vars=self.role_vars,
role_params=role_params,
default_vars=default_vars,
additional_conditions=list(additional_conditions),
role_name=role_name
@ -812,7 +870,7 @@ class Play(object):
target_filename = filename4
update_vars_cache(host, data, target_filename=target_filename)
else:
self.vars = utils.combine_vars(self.vars, data)
self.vars_file_vars = utils.combine_vars(self.vars_file_vars, data)
# we did process this file
return True
# we did not process this file

@ -26,7 +26,7 @@ class Task(object):
__slots__ = [
'name', 'meta', 'action', 'when', 'async_seconds', 'async_poll_interval',
'notify', 'module_name', 'module_args', 'module_vars', 'default_vars',
'notify', 'module_name', 'module_args', 'module_vars', 'play_vars', 'play_file_vars', 'role_vars', 'role_params', 'default_vars',
'play', 'notified_by', 'tags', 'register', 'role_name',
'delegate_to', 'first_available_file', 'ignore_errors',
'local_action', 'transport', 'sudo', 'remote_user', 'sudo_user', 'sudo_pass',
@ -45,7 +45,7 @@ class Task(object):
'su', 'su_user', 'su_pass', 'no_log', 'run_once',
]
def __init__(self, play, ds, module_vars=None, default_vars=None, additional_conditions=None, role_name=None):
def __init__(self, play, ds, module_vars=None, play_vars=None, play_file_vars=None, role_vars=None, role_params=None, default_vars=None, additional_conditions=None, role_name=None):
''' constructor loads from a task or handler datastructure '''
# meta directives are used to tell things like ansible/playbook to run
@ -84,9 +84,13 @@ class Task(object):
# code to allow "with_glob" and to reference a lookup plugin named glob
elif x.startswith("with_"):
if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"):
utils.warning("It is unnecessary to use '{{' in loops, leave variables in loop expressions bare.")
if isinstance(ds[x], basestring):
param = ds[x].strip()
# Only a variable, no logic
if (param.startswith('{{') and
param.find('}}') == len(ds[x]) - 2 and
param.find('|') == -1):
utils.warning("It is unnecessary to use '{{' in loops, leave variables in loop expressions bare.")
plugin_name = x.replace("with_","")
if plugin_name in utils.plugins.lookup_loader:
@ -97,8 +101,13 @@ class Task(object):
raise errors.AnsibleError("cannot find lookup plugin named %s for usage in with_%s" % (plugin_name, plugin_name))
elif x in [ 'changed_when', 'failed_when', 'when']:
if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"):
utils.warning("It is unnecessary to use '{{' in conditionals, leave variables in loop expressions bare.")
if isinstance(ds[x], basestring):
param = ds[x].strip()
# Only a variable, no logic
if (param.startswith('{{') and
param.find('}}') == len(ds[x]) - 2 and
param.find('|') == -1):
utils.warning("It is unnecessary to use '{{' in conditionals, leave variables in loop expressions bare.")
elif x.startswith("when_"):
utils.deprecated("The 'when_' conditional has been removed. Switch to using the regular unified 'when' statements as described on docs.ansible.com.","1.5", removed=True)
@ -110,9 +119,13 @@ class Task(object):
elif not x in Task.VALID_KEYS:
raise errors.AnsibleError("%s is not a legal parameter in an Ansible task or handler" % x)
self.module_vars = module_vars
self.default_vars = default_vars
self.play = play
self.module_vars = module_vars
self.play_vars = play_vars
self.play_file_vars = play_file_vars
self.role_vars = role_vars
self.role_params = role_params
self.default_vars = default_vars
self.play = play
# load various attributes
self.name = ds.get('name', None)
@ -120,7 +133,7 @@ class Task(object):
self.register = ds.get('register', None)
self.sudo = utils.boolean(ds.get('sudo', play.sudo))
self.su = utils.boolean(ds.get('su', play.su))
self.environment = ds.get('environment', {})
self.environment = ds.get('environment', play.environment)
self.role_name = role_name
self.no_log = utils.boolean(ds.get('no_log', "false")) or self.play.no_log
self.run_once = utils.boolean(ds.get('run_once', 'false'))
@ -210,7 +223,11 @@ class Task(object):
# combine the default and module vars here for use in templating
all_vars = self.default_vars.copy()
all_vars = utils.combine_vars(all_vars, self.play_vars)
all_vars = utils.combine_vars(all_vars, self.play_file_vars)
all_vars = utils.combine_vars(all_vars, self.role_vars)
all_vars = utils.combine_vars(all_vars, self.module_vars)
all_vars = utils.combine_vars(all_vars, self.role_params)
self.async_seconds = ds.get('async', 0) # not async by default
self.async_seconds = template.template_from_string(play.basedir, self.async_seconds, all_vars)
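A plain-dict sketch of the precedence established above, where later layers win; utils.combine_vars behaves like dict.update() under the default replace semantics (the sample values are illustrative):

default_vars   = {'port': 80, 'retries': 3}
play_vars      = {'port': 8080}
play_file_vars = {}
role_vars      = {'retries': 5}
module_vars    = {}
role_params    = {'port': 9090}   # explicit role params win over everything here

all_vars = {}
for layer in (default_vars, play_vars, play_file_vars,
              role_vars, module_vars, role_params):
    all_vars.update(layer)        # stand-in for utils.combine_vars in replace mode
print all_vars                    # {'retries': 5, 'port': 9090}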

@ -53,9 +53,9 @@ from ansible.utils import update_hash
module_replacer = ModuleReplacer(strip_comments=False)
try:
from hashlib import md5 as _md5
from hashlib import sha1
except ImportError:
from md5 import md5 as _md5
from sha import sha as sha1
HAS_ATFORK=True
try:
@ -102,7 +102,7 @@ class HostVars(dict):
if host not in self.lookup:
result = self.inventory.get_variables(host, vault_password=self.vault_password).copy()
result.update(self.vars_cache.get(host, {}))
self.lookup[host] = result
self.lookup[host] = template.template('.', result, self.vars_cache)
return self.lookup[host]
@ -134,7 +134,11 @@ class Runner(object):
sudo=False, # whether to run sudo or not
sudo_user=C.DEFAULT_SUDO_USER, # ex: 'root'
module_vars=None, # a playbooks internals thing
default_vars=None, # ditto
play_vars=None, #
play_file_vars=None, #
role_vars=None, #
role_params=None, #
default_vars=None, #
extra_vars=None, # extra vars specified with the playbook(s)
is_playbook=False, # running from playbook or not?
inventory=None, # reference to Inventory object
@ -154,6 +158,7 @@ class Runner(object):
run_hosts=None, # an optional list of pre-calculated hosts to run on
no_log=False, # option to enable/disable logging for a given task
run_once=False, # option to enable/disable host bypass loop for a given task
sudo_exe=C.DEFAULT_SUDO_EXE, # ex: /usr/local/bin/sudo
):
# used to lock multiprocess inputs and outputs at various levels
@ -175,12 +180,17 @@ class Runner(object):
self.inventory = utils.default(inventory, lambda: ansible.inventory.Inventory(host_list))
self.module_vars = utils.default(module_vars, lambda: {})
self.play_vars = utils.default(play_vars, lambda: {})
self.play_file_vars = utils.default(play_file_vars, lambda: {})
self.role_vars = utils.default(role_vars, lambda: {})
self.role_params = utils.default(role_params, lambda: {})
self.default_vars = utils.default(default_vars, lambda: {})
self.extra_vars = utils.default(extra_vars, lambda: {})
self.always_run = None
self.connector = connection.Connector(self)
self.conditional = conditional
self.delegate_to = None
self.module_name = module_name
self.forks = int(forks)
self.pattern = pattern
@ -207,20 +217,28 @@ class Runner(object):
self.su_user_var = su_user
self.su_user = None
self.su_pass = su_pass
self.omit_token = '__omit_place_holder__%s' % _md5(os.urandom(64)).hexdigest()
self.omit_token = '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest()
self.vault_pass = vault_pass
self.no_log = no_log
self.run_once = run_once
self.sudo_exe = sudo_exe
if self.transport == 'smart':
# if the transport is 'smart' see if SSH can support ControlPersist if not use paramiko
# If the transport is 'smart', check to see if certain conditions
# would prevent us from using ssh, and fallback to paramiko.
# 'smart' is the default since 1.2.1/1.3
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if "Bad configuration option" in err:
self.transport = "ssh"
if sys.platform.startswith('darwin') and self.remote_pass:
# due to a current bug in sshpass on OSX, which can trigger
# a kernel panic even for non-privileged users, we revert to
# paramiko on that OS when an SSH password is specified
self.transport = "paramiko"
else:
self.transport = "ssh"
# see if SSH can support ControlPersist if not use paramiko
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if "Bad configuration option" in err:
self.transport = "paramiko"
# save the original transport, in case it gets
# changed later via options like accelerate
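The ControlPersist probe above can be run standalone; this is a sketch of the same check, assuming an OpenSSH client on the PATH. Clients too old for ControlPersist reject the option as a bad configuration option, which is exactly what the fallback keys on:

import subprocess

# Probe ssh with a bare -o ControlPersist; old OpenSSH clients print
# "Bad configuration option" on stderr, newer ones complain about a
# missing argument instead -- either way ssh exits without connecting.
cmd = subprocess.Popen(['ssh', '-o', 'ControlPersist'],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if "Bad configuration option" in err:
    print("no ControlPersist -> fall back to paramiko")
else:
    print("ControlPersist available -> use the ssh transport")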
@ -312,16 +330,13 @@ class Runner(object):
# *****************************************************
def _compute_delegate(self, host, password, remote_inject):
def _compute_delegate(self, password, remote_inject):
""" Build a dictionary of all attributes for the delegate host """
delegate = {}
# allow delegated host to be templated
delegate['host'] = template.template(self.basedir, host,
remote_inject, fail_on_undefined=True)
delegate['inject'] = remote_inject.copy()
# set any interpreters
@ -333,36 +348,33 @@ class Runner(object):
del delegate['inject'][i]
port = C.DEFAULT_REMOTE_PORT
this_host = delegate['host']
# get the vars for the delegate by its name
try:
this_info = delegate['inject']['hostvars'][this_host]
this_info = delegate['inject']['hostvars'][self.delegate_to]
except:
# make sure the inject is empty for non-inventory hosts
this_info = {}
# get the real ssh_address for the delegate
# and allow ansible_ssh_host to be templated
delegate['ssh_host'] = template.template(self.basedir,
this_info.get('ansible_ssh_host', this_host),
this_info, fail_on_undefined=True)
delegate['ssh_host'] = template.template(
self.basedir,
this_info.get('ansible_ssh_host', self.delegate_to),
this_info,
fail_on_undefined=True
)
delegate['port'] = this_info.get('ansible_ssh_port', port)
delegate['user'] = self._compute_delegate_user(this_host, delegate['inject'])
delegate['user'] = self._compute_delegate_user(self.delegate_to, delegate['inject'])
delegate['pass'] = this_info.get('ansible_ssh_pass', password)
delegate['private_key_file'] = this_info.get('ansible_ssh_private_key_file',
self.private_key_file)
delegate['private_key_file'] = this_info.get('ansible_ssh_private_key_file', self.private_key_file)
delegate['transport'] = this_info.get('ansible_connection', self.transport)
delegate['sudo_pass'] = this_info.get('ansible_sudo_pass', self.sudo_pass)
# Last chance to get private_key_file from global variables.
# this is useful if delegated host is not defined in the inventory
if delegate['private_key_file'] is None:
delegate['private_key_file'] = remote_inject.get(
'ansible_ssh_private_key_file', None)
delegate['private_key_file'] = remote_inject.get('ansible_ssh_private_key_file', None)
if delegate['private_key_file'] is not None:
delegate['private_key_file'] = os.path.expanduser(delegate['private_key_file'])
@ -382,10 +394,20 @@ class Runner(object):
actual_user = inject.get('ansible_ssh_user', self.remote_user)
thisuser = None
if host in inject['hostvars']:
if inject['hostvars'][host].get('ansible_ssh_user'):
# user for delegate host in inventory
thisuser = inject['hostvars'][host].get('ansible_ssh_user')
try:
if host in inject['hostvars']:
if inject['hostvars'][host].get('ansible_ssh_user'):
# user for delegate host in inventory
thisuser = inject['hostvars'][host].get('ansible_ssh_user')
else:
# look up the variables for the host directly from inventory
host_vars = self.inventory.get_variables(host, vault_password=self.vault_pass)
if 'ansible_ssh_user' in host_vars:
thisuser = host_vars['ansible_ssh_user']
except errors.AnsibleError, e:
# the hostname was not found in the inventory, so
# we just ignore this and try the next method
pass
if thisuser is None and self.remote_user:
# user defined by play/runner
@ -583,24 +605,14 @@ class Runner(object):
# *****************************************************
def _executor_internal(self, host, new_stdin):
''' executes any module one or more times '''
host_variables = self.inventory.get_variables(host, vault_password=self.vault_pass)
host_connection = host_variables.get('ansible_connection', self.transport)
if host_connection in [ 'paramiko', 'ssh', 'accelerate' ]:
port = host_variables.get('ansible_ssh_port', self.remote_port)
if port is None:
port = C.DEFAULT_REMOTE_PORT
else:
# fireball, local, etc
port = self.remote_port
def get_combined_cache(self):
# merge the VARS and SETUP caches for this host
combined_cache = self.setup_cache.copy()
combined_cache = utils.merge_hash(combined_cache, self.vars_cache)
return utils.merge_hash(combined_cache, self.vars_cache)
hostvars = HostVars(combined_cache, self.inventory, vault_password=self.vault_pass)
def get_inject_vars(self, host):
host_variables = self.inventory.get_variables(host, vault_password=self.vault_pass)
combined_cache = self.get_combined_cache()
# use combined_cache and host_variables to template the module_vars
# we update the inject variables with the data we're about to template
@ -609,28 +621,78 @@ class Runner(object):
module_vars_inject = utils.combine_vars(self.module_vars, module_vars_inject)
module_vars = template.template(self.basedir, self.module_vars, module_vars_inject)
# remove bad variables from the module vars, which may be in there due
# the way role declarations are specified in playbooks
if 'tags' in module_vars:
del module_vars['tags']
if 'when' in module_vars:
del module_vars['when']
# start building the dictionary of injected variables
inject = {}
# default vars are the lowest priority
inject = utils.combine_vars(inject, self.default_vars)
# next come inventory variables for the host
inject = utils.combine_vars(inject, host_variables)
# then the setup_cache which contains facts gathered
inject = utils.combine_vars(inject, self.setup_cache.get(host, {}))
# next come variables from vars and vars files
inject = utils.combine_vars(inject, self.play_vars)
inject = utils.combine_vars(inject, self.play_file_vars)
# next come variables from role vars/main.yml files
inject = utils.combine_vars(inject, self.role_vars)
# then come the module variables
inject = utils.combine_vars(inject, module_vars)
# followed by vars_cache things (set_fact, include_vars, and
# vars_files which had host-specific templating done)
inject = utils.combine_vars(inject, self.vars_cache.get(host, {}))
# role parameters next
inject = utils.combine_vars(inject, self.role_params)
# and finally -e vars are the highest priority
inject = utils.combine_vars(inject, self.extra_vars)
# and then special vars
inject.setdefault('ansible_ssh_user', self.remote_user)
inject['hostvars'] = hostvars
inject['group_names'] = host_variables.get('group_names', [])
inject['groups'] = self.inventory.groups_list()
inject['vars'] = self.module_vars
inject['defaults'] = self.default_vars
inject['environment'] = self.environment
inject['group_names'] = host_variables.get('group_names', [])
inject['groups'] = self.inventory.groups_list()
inject['vars'] = self.module_vars
inject['defaults'] = self.default_vars
inject['environment'] = self.environment
inject['playbook_dir'] = os.path.abspath(self.basedir)
inject['omit'] = self.omit_token
inject['omit'] = self.omit_token
inject['combined_cache'] = combined_cache
return inject
def _executor_internal(self, host, new_stdin):
''' executes any module one or more times '''
# We build the proper injected dictionary for all future
# templating operations in this run
inject = self.get_inject_vars(host)
# Then we selectively merge some variable dictionaries down to a
# single dictionary, used to template the HostVars for this host
temp_vars = self.inventory.get_variables(host, vault_password=self.vault_pass)
temp_vars = utils.merge_hash(temp_vars, inject['combined_cache'])
temp_vars = utils.merge_hash(temp_vars, self.play_vars)
temp_vars = utils.merge_hash(temp_vars, self.play_file_vars)
temp_vars = utils.merge_hash(temp_vars, self.extra_vars)
# template delegate_to here so it is available; callbacks use this
delegate_to = self.module_vars.get('delegate_to')
if delegate_to:
self.module_vars['delegate_to'] = template.template(self.basedir, delegate_to, inject)
hostvars = HostVars(temp_vars, self.inventory, vault_password=self.vault_pass)
# and we save the HostVars in the injected dictionary so they
# may be referenced from playbooks/templates
inject['hostvars'] = hostvars
host_connection = inject.get('ansible_connection', self.transport)
if host_connection in [ 'paramiko', 'ssh', 'accelerate' ]:
port = hostvars.get('ansible_ssh_port', self.remote_port)
if port is None:
port = C.DEFAULT_REMOTE_PORT
else:
# fireball, local, etc
port = self.remote_port
if self.inventory.basedir() is not None:
inject['inventory_dir'] = self.inventory.basedir()
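Two different merge helpers appear above: utils.combine_vars (the flat merge sketched earlier) and utils.merge_hash, which the setup/vars caches go through. A rough stand-in for merge_hash, assuming it recurses into nested dictionaries so host facts merge key-by-key rather than being replaced wholesale:

# Hypothetical stand-in for utils.merge_hash: unlike a flat merge,
# nested dicts are merged recursively instead of replaced.
def merge_hash(a, b):
    result = a.copy()
    for key, value in b.items():
        if key in result and isinstance(result[key], dict) \
                and isinstance(value, dict):
            result[key] = merge_hash(result[key], value)
        else:
            result[key] = value
    return result

setup_cache = {'ansible_facts': {'os': 'Linux', 'cpus': 2}}
vars_cache  = {'ansible_facts': {'cpus': 4}}
print(merge_hash(setup_cache, vars_cache))
# {'ansible_facts': {'os': 'Linux', 'cpus': 4}} -- 'os' survives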
@ -654,24 +716,46 @@ class Runner(object):
if os.path.exists(filesdir):
basedir = filesdir
items_terms = self.module_vars.get('items_lookup_terms', '')
items_terms = template.template(basedir, items_terms, inject)
items = utils.plugins.lookup_loader.get(items_plugin, runner=self, basedir=basedir).run(items_terms, inject=inject)
try:
items_terms = self.module_vars.get('items_lookup_terms', '')
items_terms = template.template(basedir, items_terms, inject)
items = utils.plugins.lookup_loader.get(items_plugin, runner=self, basedir=basedir).run(items_terms, inject=inject)
except errors.AnsibleUndefinedVariable, e:
if 'has no attribute' in str(e):
# the undefined variable was an attribute of a variable that does
# exist, so try and run this through the conditional check to see
# if the user wanted to skip something on being undefined
if utils.check_conditional(self.conditional, self.basedir, inject, fail_on_undefined=True):
# the conditional check passed, so we have to fail here
raise
else:
# the conditional failed, so we skip this task
result = utils.jsonify(dict(changed=False, skipped=True))
self.callbacks.on_skipped(host, None)
return ReturnData(host=host, result=result)
except errors.AnsibleError, e:
raise
except Exception, e:
raise errors.AnsibleError("Unexpected error while executing task: %s" % str(e))
# strip out any jinja2 template syntax within
# the data returned by the lookup plugin
items = utils._clean_data_struct(items, from_remote=True)
if type(items) != list:
raise errors.AnsibleError("lookup plugins have to return a list: %r" % items)
if len(items) and utils.is_list_of_strings(items) and self.module_name in [ 'apt', 'yum', 'pkgng' ]:
# hack for apt, yum, and pkgng so that with_items maps back into a single module call
use_these_items = []
for x in items:
inject['item'] = x
if not self.conditional or utils.check_conditional(self.conditional, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars):
use_these_items.append(x)
inject['item'] = ",".join(use_these_items)
items = None
if items is None:
items = []
else:
if type(items) != list:
raise errors.AnsibleError("lookup plugins have to return a list: %r" % items)
if len(items) and utils.is_list_of_strings(items) and self.module_name in [ 'apt', 'yum', 'pkgng', 'zypper' ]:
# hack for apt, yum, and pkgng so that with_items maps back into a single module call
use_these_items = []
for x in items:
inject['item'] = x
if not self.conditional or utils.check_conditional(self.conditional, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars):
use_these_items.append(x)
inject['item'] = ",".join(use_these_items)
items = None
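The package-module shortcut above is easier to read in isolation. A simplified sketch (the per-item conditional check is elided) of how a with_items list collapses into one module call for apt/yum/pkgng/zypper:

# Simplified: collapse a with_items list into a single comma-separated
# argument so the package manager runs one transaction, not one per item.
items = ['git', 'vim', 'tmux']
module_name = 'apt'

if items and all(isinstance(x, basestring) for x in items) \
        and module_name in ['apt', 'yum', 'pkgng', 'zypper']:
    single_item = ",".join(items)
    items = None              # suppresses the per-item loop
print(single_item)            # git,vim,tmux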
def _safe_template_complex_args(args, inject):
# Ensure the complex args here are a dictionary, but
@ -733,6 +817,10 @@ class Runner(object):
port,
complex_args=complex_args
)
if 'stdout' in result.result and 'stdout_lines' not in result.result:
result.result['stdout_lines'] = result.result['stdout'].splitlines()
results.append(result.result)
if result.comm_ok == False:
all_comm_ok = False
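The stdout_lines addition above is a small client-side convenience; a sketch:

# If a module returned stdout but no stdout_lines, derive the latter
# locally so playbooks can iterate over lines without a filter.
result = {'stdout': 'eth0 up\neth1 down\n'}
if 'stdout' in result and 'stdout_lines' not in result:
    result['stdout_lines'] = result['stdout'].splitlines()
print(result['stdout_lines'])   # ['eth0 up', 'eth1 down']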
@ -805,6 +893,7 @@ class Runner(object):
self.sudo_pass = inject.get('ansible_sudo_pass', self.sudo_pass)
self.su = inject.get('ansible_su', self.su)
self.su_pass = inject.get('ansible_su_pass', self.su_pass)
self.sudo_exe = inject.get('ansible_sudo_exe', self.sudo_exe)
# select default root user in case self.sudo requested
# but no user specified; happens e.g. in host vars when
@ -831,9 +920,12 @@ class Runner(object):
# the delegated host may have different SSH port configured, etc
# and we need to transfer those, and only those, variables
delegate_to = inject.get('delegate_to', None)
if delegate_to is not None:
delegate = self._compute_delegate(delegate_to, actual_pass, inject)
self.delegate_to = inject.get('delegate_to', None)
if self.delegate_to:
self.delegate_to = template.template(self.basedir, self.delegate_to, inject)
if self.delegate_to is not None:
delegate = self._compute_delegate(actual_pass, inject)
actual_transport = delegate['transport']
actual_host = delegate['ssh_host']
actual_port = delegate['port']
@ -842,6 +934,8 @@ class Runner(object):
actual_private_key_file = delegate['private_key_file']
self.sudo_pass = delegate['sudo_pass']
inject = delegate['inject']
# set resolved delegate_to into inject so modules can call _remote_checksum
inject['delegate_to'] = self.delegate_to
# user/pass may still contain variables at this stage
actual_user = template.template(self.basedir, actual_user, inject)
@ -865,7 +959,7 @@ class Runner(object):
try:
conn = self.connector.connect(actual_host, actual_port, actual_user, actual_pass, actual_transport, actual_private_key_file)
if delegate_to or host != actual_host:
if self.delegate_to or host != actual_host:
conn.delegate = host
default_shell = getattr(conn, 'default_shell', '')
@ -898,7 +992,7 @@ class Runner(object):
# render module_args and complex_args templates
try:
# When templating module_args, we need to be careful to ensure
# that no variables inadvertantly (or maliciously) add params
# that no variables inadvertently (or maliciously) add params
# to the list of args. We do this by counting the number of k=v
# pairs before and after templating.
num_args_pre = self._count_module_args(module_args, allow_dupes=True)
@ -942,7 +1036,7 @@ class Runner(object):
cond = template.template(self.basedir, until, inject, expand_lists=False)
if not utils.check_conditional(cond, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars):
retries = self.module_vars.get('retries')
retries = template.template(self.basedir, self.module_vars.get('retries'), inject, expand_lists=False)
delay = self.module_vars.get('delay')
for x in range(1, int(retries) + 1):
# template the delay, cast to float and sleep
@ -1108,26 +1202,77 @@ class Runner(object):
# *****************************************************
def _remote_md5(self, conn, tmp, path):
''' takes a remote md5sum without requiring python, and returns 1 if no file '''
cmd = conn.shell.md5(path)
def _remote_expand_user(self, conn, path, tmp):
''' takes a remote path and performs tilde expansion on the remote host '''
if not path.startswith('~'):
return path
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
if expand_path == '~':
if self.sudo and self.sudo_user:
expand_path = '~%s' % self.sudo_user
elif self.su and self.su_user:
expand_path = '~%s' % self.su_user
cmd = conn.shell.expand_user(expand_path)
data = self._low_level_exec_command(conn, cmd, tmp, sudoable=False, su=False)
initial_fragment = utils.last_non_blank_line(data['stdout'])
if not initial_fragment:
# Something went wrong trying to expand the path remotely. Return
# the original string
return path
if len(split_path) > 1:
return conn.shell.join_path(initial_fragment, *split_path[1:])
else:
return initial_fragment
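Only the leading tilde fragment is handed to the remote shell for expansion; the remainder of the path is re-joined afterwards. The local part of that logic, isolated as a sketch (the sudo/su flags stand in for runner state):

import os

# Split off the '~' or '~user' fragment; only this part needs remote
# expansion. Bare '~' is rewritten to the sudo/su user's home when
# privilege escalation is in effect.
path = '~/releases/current'
split_path = path.split(os.path.sep, 1)    # ['~', 'releases/current']
expand_path = split_path[0]
sudo, sudo_user = True, 'deploy'           # stand-ins for runner state
if expand_path == '~' and sudo and sudo_user:
    expand_path = '~%s' % sudo_user        # '~deploy'
print(expand_path, split_path[1:])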
# *****************************************************
def _remote_checksum(self, conn, tmp, path, inject):
''' takes a remote checksum and returns 1 if no file '''
# Lookup the python interp from the host or delegate
# host == inven_host when there is no delegate
host = inject['inventory_hostname']
if 'delegate_to' in inject:
delegate = inject['delegate_to']
if delegate:
# host == None when the delegate is not in inventory
host = None
# delegate set, check whether the delegate has inventory vars
delegate = template.template(self.basedir, delegate, inject)
if delegate in inject['hostvars']:
# host == delegate if we need to lookup the
# python_interpreter from the delegate's inventory vars
host = delegate
if host:
python_interp = inject['hostvars'][host].get('ansible_python_interpreter', 'python')
else:
python_interp = 'python'
cmd = conn.shell.checksum(path, python_interp)
data = self._low_level_exec_command(conn, cmd, tmp, sudoable=True)
data2 = utils.last_non_blank_line(data['stdout'])
try:
if data2 == '':
# this may happen if the connection to the remote server
# failed, so just return "INVALIDMD5SUM" to avoid errors
return "INVALIDMD5SUM"
# failed, so just return "INVALIDCHECKSUM" to avoid errors
return "INVALIDCHECKSUM"
else:
return data2.split()[0]
except IndexError:
sys.stderr.write("warning: md5sum command failed unusually, please report this to the list so it can be fixed\n")
sys.stderr.write("command: %s\n" % md5s)
sys.stderr.write("warning: Calculating checksum failed unusually, please report this to the list so it can be fixed\n")
sys.stderr.write("command: %s\n" % cmd)
sys.stderr.write("----\n")
sys.stderr.write("output: %s\n" % data)
sys.stderr.write("----\n")
# this will signal that it changed and allow things to keep going
return "INVALIDMD5SUM"
return "INVALIDCHECKSUM"
# *****************************************************
@ -1201,9 +1346,13 @@ class Runner(object):
# Search module path(s) for named module.
module_suffixes = getattr(conn, 'default_suffixes', None)
module_path = utils.plugins.module_finder.find_plugin(module_name, module_suffixes)
module_path = utils.plugins.module_finder.find_plugin(module_name, module_suffixes, transport=self.transport)
if module_path is None:
raise errors.AnsibleFileNotFound("module %s not found in %s" % (module_name, utils.plugins.module_finder.print_paths()))
module_path2 = utils.plugins.module_finder.find_plugin('ping', module_suffixes)
if module_path2 is not None:
raise errors.AnsibleFileNotFound("module %s not found in configured module paths" % (module_name))
else:
raise errors.AnsibleFileNotFound("module %s not found in configured module paths. Additionally, core modules are missing. If this is a checkout, run 'git submodule update --init --recursive' to correct this problem." % (module_name))
# insert shared code and arguments into the module
@ -1318,9 +1467,15 @@ class Runner(object):
# Expose the current hostgroup to the bypassing plugins
self.host_set = hosts
# We aren't iterating over all the hosts in this
# group. So, just pick the first host in our group to
# group. So, just choose the "delegate_to" host if that is defined and is
# one of the targeted hosts, otherwise pick the first host in our group to
# construct the conn object with.
result_data = self._executor(hosts[0], None).result
if self.delegate_to is not None and self.delegate_to in hosts:
host = self.delegate_to
else:
host = hosts[0]
result_data = self._executor(host, None).result
# Create a ResultData item for each host in this group
# using the returned result. If we didn't do this we would
# get false reports of dark hosts.

@ -108,10 +108,11 @@ class ActionModule(object):
# Does all work assembling the file
path = self._assemble_from_fragments(src, delimiter, _re)
pathmd5 = utils.md5s(path)
remote_md5 = self.runner._remote_md5(conn, tmp, dest)
path_checksum = utils.checksum_s(path)
dest = self.runner._remote_expand_user(conn, dest, tmp)
remote_checksum = self.runner._remote_checksum(conn, tmp, dest, inject)
if pathmd5 != remote_md5:
if path_checksum != remote_checksum:
resultant = file(path).read()
if self.runner.diff:
dest_result = self.runner._execute_module(conn, tmp, 'slurp', "path=%s" % dest, inject=inject, persist_files=True)
@ -124,7 +125,7 @@ class ActionModule(object):
xfered = self.runner._transfer_str(conn, tmp, 'src', resultant)
# fix file permissions when the copy is done as a different user
if self.runner.sudo and self.runner.sudo_user != 'root':
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
self.runner._remote_chmod(conn, 'a+r', xfered, tmp)
# run the copy module
@ -147,6 +148,11 @@ class ActionModule(object):
dest=dest,
original_basename=os.path.basename(src),
)
# make sure check mode is passed on correctly
if self.runner.noop_on_check(inject):
new_module_args['CHECKMODE'] = True
module_args_tmp = utils.merge_module_args(module_args, new_module_args)
return self.runner._execute_module(conn, tmp, 'file', module_args_tmp, inject=inject)
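The CHECKMODE handoff just above recurs in nearly every plugin in this diff. A condensed sketch of the pattern, with merge_module_args written as a hypothetical key=value string merge standing in for utils.merge_module_args:

# Hypothetical stand-in for utils.merge_module_args: append new k=v
# pairs onto the existing module_args string.
def merge_module_args(args, new_args):
    extra = " ".join("%s=%s" % (k, v) for k, v in new_args.items())
    return ("%s %s" % (args, extra)).strip()

module_args = "dest=/etc/motd"
new_module_args = dict(src='/tmp/assembled', original_basename='motd')
in_check_mode = True                 # i.e. runner.noop_on_check(inject)
if in_check_mode:
    # the real module sees CHECKMODE and reports instead of acting
    new_module_args['CHECKMODE'] = True
print(merge_module_args(module_args, new_module_args))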

@ -157,12 +157,15 @@ class ActionModule(object):
if "-tmp-" not in tmp_path:
tmp_path = self.runner._make_tmp_path(conn)
# expand any user home dir specifier
dest = self.runner._remote_expand_user(conn, dest, tmp_path)
for source_full, source_rel in source_files:
# Generate the MD5 hash of the local file.
local_md5 = utils.md5(source_full)
# Generate a hash of the local file.
local_checksum = utils.checksum(source_full)
# If local_md5 is not defined we can't find the file so we should fail out.
if local_md5 is None:
# If local_checksum is not defined we can't find the file so we should fail out.
if local_checksum is None:
result = dict(failed=True, msg="could not find src=%s" % source_full)
return ReturnData(conn=conn, result=result)
@ -174,27 +177,31 @@ class ActionModule(object):
else:
dest_file = conn.shell.join_path(dest)
# Attempt to get the remote MD5 Hash.
remote_md5 = self.runner._remote_md5(conn, tmp_path, dest_file)
# Attempt to get the remote checksum
remote_checksum = self.runner._remote_checksum(conn, tmp_path, dest_file, inject)
if remote_md5 == '3':
# The remote_md5 was executed on a directory.
if remote_checksum == '3':
# The remote_checksum was executed on a directory.
if content is not None:
# If source was defined as content remove the temporary file and fail out.
self._remove_tempfile_if_content_defined(content, content_tempfile)
result = dict(failed=True, msg="can not use content with a dir as dest")
return ReturnData(conn=conn, result=result)
else:
# Append the relative source location to the destination and retry remote_md5.
# Append the relative source location to the destination and retry remote_checksum
dest_file = conn.shell.join_path(dest, source_rel)
remote_md5 = self.runner._remote_md5(conn, tmp_path, dest_file)
remote_checksum = self.runner._remote_checksum(conn, tmp_path, dest_file, inject)
if remote_checksum == '4':
result = dict(msg="python isn't present on the system. Unable to compute checksum", failed=True)
return ReturnData(conn=conn, result=result)
if remote_md5 != '1' and not force:
# remote_file does not exist so continue to next iteration.
if remote_checksum != '1' and not force:
# remote_file exists so continue to next iteration.
continue
if local_md5 != remote_md5:
# The MD5 hashes don't match and we will change or error out.
if local_checksum != remote_checksum:
# The checksums don't match and we will change or error out.
changed = True
# Create a tmp_path if missing only if this is not recursive.
@ -227,7 +234,7 @@ class ActionModule(object):
self._remove_tempfile_if_content_defined(content, content_tempfile)
# fix file permissions when the copy is done as a different user
if self.runner.sudo and self.runner.sudo_user != 'root' and not raw:
if (self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root') and not raw:
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp_path)
if raw:
@ -254,7 +261,7 @@ class ActionModule(object):
module_executed = True
else:
# no need to transfer the file, already correct md5, but still need to call
# no need to transfer the file, already correct hash, but still need to call
# the file module in case we want to change attributes
self._remove_tempfile_if_content_defined(content, content_tempfile)
@ -283,8 +290,8 @@ class ActionModule(object):
module_executed = True
module_result = module_return.result
if not module_result.get('md5sum'):
module_result['md5sum'] = local_md5
if not module_result.get('checksum'):
module_result['checksum'] = local_checksum
if module_result.get('failed') == True:
return module_return
if module_result.get('changed') == True:

@ -51,8 +51,8 @@ class ActionModule(object):
else:
result = dict(msg=args['msg'])
elif 'var' in args and not utils.LOOKUP_REGEX.search(args['var']):
results = template.template(self.basedir, "{{ %s }}" % args['var'], inject)
result[args['var']] = results
results = template.template(self.basedir, args['var'], inject, convert_bare=True)
result['var'] = { args['var']: results }
# force flag to make debug output module always verbose
result['verbose_always'] = True
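The effect of the debug change above, sketched: the templated value is nested under a 'var' key instead of being splatted into the top level of the result, so callbacks print the variable name alongside its value:

# After the change, `debug: var=listening_ports` yields a nested result.
args = {'var': 'listening_ports'}
results = [22, 80]              # what templating the bare name returned
result = {'var': {args['var']: results}, 'verbose_always': True}
print(result)
# {'var': {'listening_ports': [22, 80]}, 'verbose_always': True}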

@ -50,19 +50,55 @@ class ActionModule(object):
flat = utils.boolean(flat)
fail_on_missing = options.get('fail_on_missing', False)
fail_on_missing = utils.boolean(fail_on_missing)
validate_md5 = options.get('validate_md5', True)
validate_md5 = utils.boolean(validate_md5)
validate_checksum = options.get('validate_checksum', None)
if validate_checksum is not None:
validate_checksum = utils.boolean(validate_checksum)
# Alias for validate_checksum (old way of specifying it)
validate_md5 = options.get('validate_md5', None)
if validate_md5 is not None:
validate_md5 = utils.boolean(validate_md5)
if validate_md5 is None and validate_checksum is None:
# Default
validate_checksum = True
elif validate_checksum is None:
validate_checksum = validate_md5
elif validate_md5 is not None and validate_checksum is not None:
results = dict(failed=True, msg="validate_checksum and validate_md5 cannot both be specified")
return ReturnData(conn, result=results)
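The alias handling above, condensed into one function for readability (reordered, but branch-for-branch equivalent):

# validate_md5 survives only as a backwards-compatible alias for
# validate_checksum; specifying both is rejected.
def resolve_validate(validate_checksum, validate_md5):
    if validate_checksum is not None and validate_md5 is not None:
        raise ValueError("validate_checksum and validate_md5 "
                         "cannot both be specified")
    if validate_checksum is None and validate_md5 is None:
        return True                    # the default
    if validate_checksum is None:
        return validate_md5            # alias takes effect
    return validate_checksum

print(resolve_validate(None, None))    # True
print(resolve_validate(None, False))   # False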
if source is None or dest is None:
results = dict(failed=True, msg="src and dest are required")
return ReturnData(conn=conn, result=results)
source = os.path.expanduser(source)
source = conn.shell.join_path(source)
source = self.runner._remote_expand_user(conn, source, tmp)
# calculate checksum for the remote file
remote_checksum = self.runner._remote_checksum(conn, tmp, source, inject)
# use slurp if sudo and permissions are lacking
remote_data = None
if remote_checksum in ('1', '2') or self.runner.sudo:
slurpres = self.runner._execute_module(conn, tmp, 'slurp', 'src=%s' % source, inject=inject)
if slurpres.is_successful():
if slurpres.result['encoding'] == 'base64':
remote_data = base64.b64decode(slurpres.result['content'])
if remote_data is not None:
remote_checksum = utils.checksum_s(remote_data)
# the source path may have been expanded on the
# target system, so we compare it here and use the
# expanded version if it's different
remote_source = slurpres.result.get('source')
if remote_source and remote_source != source:
source = remote_source
# calculate the destination name
if os.path.sep not in conn.shell.join_path('a', ''):
source_local = source.replace('\\', '/')
else:
source_local = source
dest = os.path.expanduser(dest)
if flat:
if dest.endswith("/"):
# if the path ends with "/", we'll use the source filename as the
@ -76,40 +112,30 @@ class ActionModule(object):
# files are saved in dest dir, with a subdir for each host, then the filename
dest = "%s/%s/%s" % (utils.path_dwim(self.runner.basedir, dest), conn.host, source_local)
dest = os.path.expanduser(dest.replace("//","/"))
# calculate md5 sum for the remote file
remote_md5 = self.runner._remote_md5(conn, tmp, source)
# use slurp if sudo and permissions are lacking
remote_data = None
if remote_md5 in ('1', '2') or self.runner.sudo:
slurpres = self.runner._execute_module(conn, tmp, 'slurp', 'src=%s' % source, inject=inject)
if slurpres.is_successful():
if slurpres.result['encoding'] == 'base64':
remote_data = base64.b64decode(slurpres.result['content'])
if remote_data is not None:
remote_md5 = utils.md5s(remote_data)
# these don't fail because you may want to transfer a log file that possibly MAY exist
# but keep going to fetch other log files
if remote_md5 == '0':
result = dict(msg="unable to calculate the md5 sum of the remote file", file=source, changed=False)
return ReturnData(conn=conn, result=result)
if remote_md5 == '1':
if fail_on_missing:
result = dict(failed=True, msg="the remote file does not exist", file=source)
else:
result = dict(msg="the remote file does not exist, not transferring, ignored", file=source, changed=False)
return ReturnData(conn=conn, result=result)
if remote_md5 == '2':
result = dict(msg="no read permission on remote file, not transferring, ignored", file=source, changed=False)
dest = dest.replace("//","/")
if remote_checksum in ('0', '1', '2', '3', '4'):
# these don't fail because you may want to transfer a log file that possibly MAY exist
# but keep going to fetch other log files
if remote_checksum == '0':
result = dict(msg="unable to calculate the checksum of the remote file", file=source, changed=False)
elif remote_checksum == '1':
if fail_on_missing:
result = dict(failed=True, msg="the remote file does not exist", file=source)
else:
result = dict(msg="the remote file does not exist, not transferring, ignored", file=source, changed=False)
elif remote_checksum == '2':
result = dict(msg="no read permission on remote file, not transferring, ignored", file=source, changed=False)
elif remote_checksum == '3':
result = dict(msg="remote file is a directory, fetch cannot work on directories", file=source, changed=False)
elif remote_checksum == '4':
result = dict(msg="python isn't present on the system. Unable to compute checksum", file=source, changed=False)
return ReturnData(conn=conn, result=result)
# calculate md5 sum for the local file
local_md5 = utils.md5(dest)
# calculate checksum for the local file
local_checksum = utils.checksum(dest)
if remote_md5 != local_md5:
if remote_checksum != local_checksum:
# create the containing directories, if needed
if not os.path.isdir(os.path.dirname(dest)):
os.makedirs(os.path.dirname(dest))
@ -121,13 +147,27 @@ class ActionModule(object):
f = open(dest, 'w')
f.write(remote_data)
f.close()
new_md5 = utils.md5(dest)
if validate_md5 and new_md5 != remote_md5:
result = dict(failed=True, md5sum=new_md5, msg="md5 mismatch", file=source, dest=dest, remote_md5sum=remote_md5)
new_checksum = utils.secure_hash(dest)
# For backwards compatibility. We'll return None on FIPS enabled
# systems
try:
new_md5 = utils.md5(dest)
except ValueError:
new_md5 = None
if validate_checksum and new_checksum != remote_checksum:
result = dict(failed=True, md5sum=new_md5, msg="checksum mismatch", file=source, dest=dest, remote_md5sum=None, checksum=new_checksum, remote_checksum=remote_checksum)
return ReturnData(conn=conn, result=result)
result = dict(changed=True, md5sum=new_md5, dest=dest, remote_md5sum=remote_md5)
result = dict(changed=True, md5sum=new_md5, dest=dest, remote_md5sum=None, checksum=new_checksum, remote_checksum=remote_checksum)
return ReturnData(conn=conn, result=result)
else:
result = dict(changed=False, md5sum=local_md5, file=source, dest=dest)
# For backwards compatibility. We'll return None on FIPS enabled
# systems
try:
local_md5 = utils.md5(dest)
except ValueError:
local_md5 = None
result = dict(changed=False, md5sum=local_md5, file=source, dest=dest, checksum=local_checksum)
return ReturnData(conn=conn, result=result)
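The FIPS accommodation above in isolation: on FIPS-enabled hosts Python's hashlib refuses to construct an MD5 object, so the legacy md5sum result field degrades to None while the sha1-based checksum fields keep working. A sketch:

import hashlib

# On FIPS-enabled systems hashlib.md5() raises ValueError; report None
# for the legacy md5sum field instead of failing the task.
def md5_or_none(path):
    try:
        digest = hashlib.md5()
    except ValueError:
        return None
    f = open(path, 'rb')
    try:
        for chunk in iter(lambda: f.read(65536), b''):
            digest.update(chunk)
    finally:
        f.close()
    return digest.hexdigest()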

@ -0,0 +1,66 @@
# (c) 2015, Brian Coca <briancoca+dev@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
from ansible import utils
from ansible.runner.return_data import ReturnData
class ActionModule(object):
def __init__(self, runner):
self.runner = runner
def run(self, conn, tmp, module_name, module_args, inject, complex_args=None, **kwargs):
options = {}
if complex_args:
options.update(complex_args)
options.update(utils.parse_kv(module_args))
src = options.get('src', None)
dest = options.get('dest', None)
remote_src = utils.boolean(options.get('remote_src', 'yes'))
if src is None or dest is None:
result = dict(failed=True, msg="src and dest are required")
return ReturnData(conn=conn, comm_ok=False, result=result)
if remote_src:
return self.runner._execute_module(conn, tmp, 'patch', module_args, inject=inject, complex_args=complex_args)
# Source is local
if '_original_file' in inject:
src = utils.path_dwim_relative(inject['_original_file'], 'files', src, self.runner.basedir)
else:
src = utils.path_dwim(self.runner.basedir, src)
tmp_src = tmp + src
conn.put_file(src, tmp_src)
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
if not self.runner.noop_on_check(inject):
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp)
new_module_args = dict(
src=tmp_src,
)
if self.runner.noop_on_check(inject):
new_module_args['CHECKMODE'] = True
module_args = utils.merge_module_args(module_args, new_module_args)
return self.runner._execute_module(conn, tmp, 'patch', module_args, inject=inject, complex_args=complex_args)

@ -33,9 +33,6 @@ class ActionModule(object):
def run(self, conn, tmp, module_name, module_args, inject, complex_args=None, **kwargs):
''' handler for template operations '''
# note: since this module just calls the copy module, the --check mode support
# can be implemented entirely over there
if not self.runner.is_playbook:
raise errors.AnsibleError("in current versions of ansible, templates are only usable in playbooks")
@ -78,6 +75,8 @@ class ActionModule(object):
else:
source = utils.path_dwim(self.runner.basedir, source)
# Expand any user home dir specification
dest = self.runner._remote_expand_user(conn, dest, tmp)
if dest.endswith("/"): # CCTODO: Fix path for Windows hosts.
base = os.path.basename(source)
@ -90,10 +89,17 @@ class ActionModule(object):
result = dict(failed=True, msg=type(e).__name__ + ": " + str(e))
return ReturnData(conn=conn, comm_ok=False, result=result)
local_md5 = utils.md5s(resultant)
remote_md5 = self.runner._remote_md5(conn, tmp, dest)
local_checksum = utils.checksum_s(resultant)
remote_checksum = self.runner._remote_checksum(conn, tmp, dest, inject)
if remote_checksum in ('0', '2', '3', '4'):
# Note: 1 means the file is not present which is fine; template
# will create it
result = dict(failed=True, msg="failed to checksum remote file."
" Checksum error code: %s" % remote_checksum)
return ReturnData(conn=conn, comm_ok=True, result=result)
if local_md5 != remote_md5:
if local_checksum != remote_checksum:
# template is different from the remote value
@ -113,7 +119,7 @@ class ActionModule(object):
xfered = self.runner._transfer_str(conn, tmp, 'source', resultant)
# fix file permissions when the copy is done as a different user
if self.runner.sudo and self.runner.sudo_user != 'root':
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
self.runner._remote_chmod(conn, 'a+r', xfered, tmp)
# run the copy module
@ -121,6 +127,7 @@ class ActionModule(object):
src=xfered,
dest=dest,
original_basename=os.path.basename(source),
follow=True,
)
module_args_tmp = utils.merge_module_args(module_args, new_module_args)
@ -132,5 +139,22 @@ class ActionModule(object):
res.diff = dict(before=dest_contents, after=resultant)
return res
else:
return self.runner._execute_module(conn, tmp, 'file', module_args, inject=inject, complex_args=complex_args)
# when running the file module based on the template data, we do
# not want the source filename (the name of the template) to be used,
# since this would mess up links, so we clear the src param and tell
# the module to follow links. When doing that, we have to set
# original_basename to the template just in case the dest is
# a directory.
module_args = ''
new_module_args = dict(
src=None,
original_basename=os.path.basename(source),
follow=True,
)
# be sure to inject the check mode param into the module args and
# rely on the file module to report its changed status
if self.runner.noop_on_check(inject):
new_module_args['CHECKMODE'] = True
options.update(new_module_args)
return self.runner._execute_module(conn, tmp, 'file', module_args, inject=inject, complex_args=options)

@ -49,12 +49,33 @@ class ActionModule(object):
source = options.get('src', None)
dest = options.get('dest', None)
copy = utils.boolean(options.get('copy', 'yes'))
creates = options.get('creates', None)
if source is None or dest is None:
result = dict(failed=True, msg="src (or content) and dest are required")
return ReturnData(conn=conn, result=result)
dest = os.path.expanduser(dest) # CCTODO: Fix path for Windows hosts.
if creates:
# do not run the module if creates=filename is specified
# and the filename already exists. This allows idempotence
# of unarchive executions.
module_args_tmp = ""
complex_args_tmp = dict(path=creates, get_md5=False, get_checksum=False)
module_return = self.runner._execute_module(conn, tmp, 'stat', module_args_tmp, inject=inject,
complex_args=complex_args_tmp, persist_files=True)
stat = module_return.result.get('stat', None)
if stat and stat.get('exists', False):
return ReturnData(
conn=conn,
comm_ok=True,
result=dict(
skipped=True,
changed=False,
msg=("skipped, since %s exists" % creates)
)
)
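The creates guard above borrows the command module's idempotence idiom: stat the path first, and skip the whole task when it already exists. Reduced to its essentials:

# stat_result stands in for what the remote stat module returned for
# the path given in creates=...
creates = '/opt/app/unpacked'
stat_result = {'exists': True}
if stat_result and stat_result.get('exists', False):
    result = dict(skipped=True, changed=False,
                  msg="skipped, since %s exists" % creates)
    print(result)   # the unarchive module is never invoked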
dest = self.runner._remote_expand_user(conn, dest, tmp) # CCTODO: Fix path for Windows hosts.
source = template.template(self.runner.basedir, os.path.expanduser(source), inject)
if copy:
if '_original_file' in inject:
@ -62,8 +83,11 @@ class ActionModule(object):
else:
source = utils.path_dwim(self.runner.basedir, source)
remote_md5 = self.runner._remote_md5(conn, tmp, dest)
if remote_md5 != '3':
remote_checksum = self.runner._remote_checksum(conn, tmp, dest, inject)
if remote_checksum == '4':
result = dict(failed=True, msg="python isn't present on the system. Unable to compute checksum")
return ReturnData(conn=conn, result=result)
if remote_checksum != '3':
result = dict(failed=True, msg="dest '%s' must be an existing dir" % dest)
return ReturnData(conn=conn, result=result)
@ -76,14 +100,23 @@ class ActionModule(object):
# handle check mode client side
# fix file permissions when the copy is done as a different user
if copy:
if self.runner.sudo and self.runner.sudo_user != 'root':
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp)
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
if not self.runner.noop_on_check(inject):
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp)
# Build temporary module_args.
new_module_args = dict(
src=tmp_src,
original_basename=os.path.basename(source),
)
# make sure check mode is passed on correctly
if self.runner.noop_on_check(inject):
new_module_args['CHECKMODE'] = True
module_args = utils.merge_module_args(module_args, new_module_args)
else:
module_args = "%s original_basename=%s" % (module_args, pipes.quote(os.path.basename(source)))
# make sure check mode is passed on correctly
if self.runner.noop_on_check(inject):
module_args += " CHECKMODE=True"
return self.runner._execute_module(conn, tmp, 'unarchive', module_args, inject=inject, complex_args=complex_args)

@ -0,0 +1,377 @@
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
from ansible import utils
import ansible.constants as C
import ansible.utils.template as template
from ansible import errors
from ansible.runner.return_data import ReturnData
import base64
import json
import stat
import tempfile
import pipes
## fixes https://github.com/ansible/ansible/issues/3518
# http://mypy.pythonblogs.com/12_mypy/archive/1253_workaround_for_python_bug_ascii_codec_cant_encode_character_uxa0_in_position_111_ordinal_not_in_range128.html
import sys
reload(sys)
sys.setdefaultencoding("utf8")
class ActionModule(object):
def __init__(self, runner):
self.runner = runner
def run(self, conn, tmp_path, module_name, module_args, inject, complex_args=None, **kwargs):
''' handler for file transfer operations '''
# load up options
options = {}
if complex_args:
options.update(complex_args)
options.update(utils.parse_kv(module_args))
source = options.get('src', None)
content = options.get('content', None)
dest = options.get('dest', None)
raw = utils.boolean(options.get('raw', 'no'))
force = utils.boolean(options.get('force', 'yes'))
# content with newlines is going to be escaped to safely load in yaml
# now we need to unescape it so that the newlines are evaluated properly
# when writing the file to disk
if content:
if isinstance(content, unicode):
try:
content = content.decode('unicode-escape')
except UnicodeDecodeError:
pass
if (source is None and content is None and not 'first_available_file' in inject) or dest is None:
result=dict(failed=True, msg="src (or content) and dest are required")
return ReturnData(conn=conn, result=result)
elif (source is not None or 'first_available_file' in inject) and content is not None:
result=dict(failed=True, msg="src and content are mutually exclusive")
return ReturnData(conn=conn, result=result)
# Check if the source ends with a "/"
source_trailing_slash = False
if source:
source_trailing_slash = source.endswith("/")
# Define content_tempfile in case we set it after finding content populated.
content_tempfile = None
# If content is defined make a temp file and write the content into it.
if content is not None:
try:
# If content comes to us as a dict it should be decoded json.
# We need to encode it back into a string to write it out.
if type(content) is dict:
content_tempfile = self._create_content_tempfile(json.dumps(content))
else:
content_tempfile = self._create_content_tempfile(content)
source = content_tempfile
except Exception, err:
result = dict(failed=True, msg="could not write content temp file: %s" % err)
return ReturnData(conn=conn, result=result)
# if we have first_available_file in our vars
# look up the files and use the first one we find as src
elif 'first_available_file' in inject:
found = False
for fn in inject.get('first_available_file'):
fn_orig = fn
fnt = template.template(self.runner.basedir, fn, inject)
fnd = utils.path_dwim(self.runner.basedir, fnt)
if not os.path.exists(fnd) and '_original_file' in inject:
fnd = utils.path_dwim_relative(inject['_original_file'], 'files', fnt, self.runner.basedir, check=False)
if os.path.exists(fnd):
source = fnd
found = True
break
if not found:
results = dict(failed=True, msg="could not find src in first_available_file list")
return ReturnData(conn=conn, result=results)
else:
source = template.template(self.runner.basedir, source, inject)
if '_original_file' in inject:
source = utils.path_dwim_relative(inject['_original_file'], 'files', source, self.runner.basedir)
else:
source = utils.path_dwim(self.runner.basedir, source)
# A list of source file tuples (full_path, relative_path) which we will try to copy to the destination
source_files = []
# If source is a directory populate our list else source is a file and translate it to a tuple.
if os.path.isdir(source):
# Get the number of leading characters to strip to get the relative path.
if source_trailing_slash:
sz = len(source) + 1
else:
sz = len(source.rsplit('/', 1)[0]) + 1
# Walk the directory and append the file tuples to source_files.
for base_path, sub_folders, files in os.walk(source):
for file in files:
full_path = os.path.join(base_path, file)
rel_path = full_path[sz:]
source_files.append((full_path, rel_path))
# If it's recursive copy, destination is always a dir,
# explicitly mark it so (note - copy module relies on this).
if not conn.shell.path_has_trailing_slash(dest):
dest = conn.shell.join_path(dest, '')
else:
source_files.append((source, os.path.basename(source)))
changed = False
diffs = []
module_result = {"changed": False}
# A register for if we executed a module.
# Used to cut down on command calls when not recursive.
module_executed = False
# Tell _execute_module to delete the file if there is one file.
delete_remote_tmp = (len(source_files) == 1)
# If this is a recursive action create a tmp_path that we can share as the _exec_module create is too late.
if not delete_remote_tmp:
if "-tmp-" not in tmp_path:
tmp_path = self.runner._make_tmp_path(conn)
# expand any user home dir specifier
dest = self.runner._remote_expand_user(conn, dest, tmp_path)
for source_full, source_rel in source_files:
# Generate a hash of the local file.
local_checksum = utils.checksum(source_full)
# If local_checksum is not defined we can't find the file so we should fail out.
if local_checksum is None:
result = dict(failed=True, msg="could not find src=%s" % source_full)
return ReturnData(conn=conn, result=result)
# This is a kind of optimization - if the user told us the destination is
# a dir, do path manipulation right away, otherwise we still check
# for dest being a dir via a remote call below.
if conn.shell.path_has_trailing_slash(dest):
dest_file = conn.shell.join_path(dest, source_rel)
else:
dest_file = conn.shell.join_path(dest)
# Attempt to get the remote checksum
remote_checksum = self.runner._remote_checksum(conn, tmp_path, dest_file, inject)
if remote_checksum == '3':
# The remote_checksum was executed on a directory.
if content is not None:
# If source was defined as content remove the temporary file and fail out.
self._remove_tempfile_if_content_defined(content, content_tempfile)
result = dict(failed=True, msg="can not use content with a dir as dest")
return ReturnData(conn=conn, result=result)
else:
# Append the relative source location to the destination and retry remote_checksum.
dest_file = conn.shell.join_path(dest, source_rel)
remote_checksum = self.runner._remote_checksum(conn, tmp_path, dest_file, inject)
if remote_checksum != '1' and not force:
# remote_file exists so continue to next iteration.
continue
if local_checksum != remote_checksum:
# The checksums don't match and we will change or error out.
changed = True
# Create a tmp_path if missing only if this is not recursive.
# If this is recursive we already have a tmp_path.
if delete_remote_tmp:
if "-tmp-" not in tmp_path:
tmp_path = self.runner._make_tmp_path(conn)
if self.runner.diff and not raw:
diff = self._get_diff_data(conn, tmp_path, inject, dest_file, source_full)
else:
diff = {}
if self.runner.noop_on_check(inject):
self._remove_tempfile_if_content_defined(content, content_tempfile)
diffs.append(diff)
changed = True
module_result = dict(changed=True)
continue
# Define a remote directory that we will copy the file to.
tmp_src = tmp_path + 'source'
if not raw:
conn.put_file(source_full, tmp_src)
else:
conn.put_file(source_full, dest_file)
# We have copied the file remotely and no longer require our content_tempfile
self._remove_tempfile_if_content_defined(content, content_tempfile)
# fix file permissions when the copy is done as a different user
if (self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root') and not raw:
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp_path)
if raw:
# Continue to next iteration if raw is defined.
continue
# Run the copy module
# src and dest here come after original and override them
# we pass dest only to make sure it includes trailing slash in case of recursive copy
new_module_args = dict(
src=tmp_src,
dest=dest,
original_basename=source_rel
)
if self.runner.noop_on_check(inject):
new_module_args['CHECKMODE'] = True
if self.runner.no_log:
new_module_args['NO_LOG'] = True
module_args_tmp = utils.merge_module_args(module_args, new_module_args)
module_return = self.runner._execute_module(conn, tmp_path, 'win_copy', module_args_tmp, inject=inject, complex_args=complex_args, delete_remote_tmp=delete_remote_tmp)
module_executed = True
else:
# no need to transfer the file, already correct hash, but still need to call
# the file module in case we want to change attributes
self._remove_tempfile_if_content_defined(content, content_tempfile)
if raw:
# Continue to next iteration if raw is defined.
# self.runner._remove_tmp_path(conn, tmp_path)
continue
tmp_src = tmp_path + source_rel
# Build temporary module_args.
new_module_args = dict(
src=tmp_src,
dest=dest,
original_basename=source_rel
)
if self.runner.noop_on_check(inject):
new_module_args['CHECKMODE'] = True
if self.runner.no_log:
new_module_args['NO_LOG'] = True
module_args_tmp = utils.merge_module_args(module_args, new_module_args)
# Execute the file module.
module_return = self.runner._execute_module(conn, tmp_path, 'win_file', module_args_tmp, inject=inject, complex_args=complex_args, delete_remote_tmp=delete_remote_tmp)
module_executed = True
module_result = module_return.result
if not module_result.get('checksum'):
module_result['checksum'] = local_checksum
if module_result.get('failed') == True:
return module_return
if module_result.get('changed') == True:
changed = True
# Delete tmp_path if we were recursive or if we did not execute a module.
if (not C.DEFAULT_KEEP_REMOTE_FILES and not delete_remote_tmp) \
or (not C.DEFAULT_KEEP_REMOTE_FILES and delete_remote_tmp and not module_executed):
self.runner._remove_tmp_path(conn, tmp_path)
# the file module returns the file path as 'path', but
# the copy module uses 'dest', so add it if it's not there
if 'path' in module_result and 'dest' not in module_result:
module_result['dest'] = module_result['path']
# TODO: Support detailed status/diff for multiple files
if len(source_files) == 1:
result = module_result
else:
result = dict(dest=dest, src=source, changed=changed)
if len(diffs) == 1:
return ReturnData(conn=conn, result=result, diff=diffs[0])
else:
return ReturnData(conn=conn, result=result)
def _create_content_tempfile(self, content):
''' Create a tempfile containing defined content '''
fd, content_tempfile = tempfile.mkstemp()
f = os.fdopen(fd, 'w')
try:
f.write(content)
except Exception, err:
os.remove(content_tempfile)
raise Exception(err)
finally:
f.close()
return content_tempfile
def _get_diff_data(self, conn, tmp, inject, destination, source):
peek_result = self.runner._execute_module(conn, tmp, 'win_file', "path=%s diff_peek=1" % destination, inject=inject, persist_files=True)
if not peek_result.is_successful():
return {}
diff = {}
if peek_result.result['state'] == 'absent':
diff['before'] = ''
elif peek_result.result['appears_binary']:
diff['dst_binary'] = 1
elif peek_result.result['size'] > utils.MAX_FILE_SIZE_FOR_DIFF:
diff['dst_larger'] = utils.MAX_FILE_SIZE_FOR_DIFF
else:
dest_result = self.runner._execute_module(conn, tmp, 'slurp', "path=%s" % destination, inject=inject, persist_files=True)
if 'content' in dest_result.result:
dest_contents = dest_result.result['content']
if dest_result.result['encoding'] == 'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise Exception("unknown encoding, failed: %s" % dest_result.result)
diff['before_header'] = destination
diff['before'] = dest_contents
src = open(source)
src_contents = src.read(8192)
st = os.stat(source)
if "\x00" in src_contents:
diff['src_binary'] = 1
elif st[stat.ST_SIZE] > utils.MAX_FILE_SIZE_FOR_DIFF:
diff['src_larger'] = utils.MAX_FILE_SIZE_FOR_DIFF
else:
src.seek(0)
diff['after_header'] = source
diff['after'] = src.read()
return diff
def _remove_tempfile_if_content_defined(self, content, content_tempfile):
if content is not None:
os.remove(content_tempfile)
def _result_key_merge(self, options, results):
# add keys to file module results to mimic copy
if 'path' in results.result and 'dest' not in results.result:
results.result['dest'] = results.result['path']
del results.result['path']
return results
