Merge remote-tracking branch 'upstream/devel' into devel

* upstream/devel: (52 commits)
  format fixes to make fetch more usable
  ...
pull/598/head
Seth Vidal 12 years ago
commit 285aaf836c

@@ -3,12 +3,30 @@ Ansible Changes By Release
0.6 "Cabo" ------------ pending
* groups variable available a hash to return the hosts in each group name
* groups variable available as a hash to return the hosts in each group name
* fetch module now does not fail a system when requesting file paths (ex: logs) that don't exist
* apt module now takes an optional install-recommends=yes|no (default yes)
* fixes to the return codes of the copy module
* copy module takes a remote md5sum to avoid large file transfer
* when sudoing to root, still use /etc/ansible/setup as the metadata path, as if root
* support to tag tasks and includes and use --tags in playbook CLI
* various user and group module fixes (error handling, etc)
* apt module now takes an optional force parameter
* slightly better psychic service status handling for the service module
* cowsay support on Ubuntu
* playbooks can now include other playbooks (example/playbooks/nested_playbooks.yml)
* paramiko is now only imported if needed when running from source checkout
* fetch module fixes for SSH connection type
* modules now consistently all take yes/no for boolean parameters (some accepted true/false)
* in YAML inventory, hosts can list their groups in inverted order now also (see tests/yaml_hosts)
* setup module no longer saves to disk, template module now only used in playbooks
* setup module no longer needs to run twice per playbook
* vars_files now usable with with_items, provided file paths don't contain host specific facts
* error reporting if with_items value is unbound
* with_items no longer creates lots of tasks, creates one task that makes multiple calls
* can use host_specific facts inside with_items (see above)
* at the top level of a playbook, set 'gather_facts: False' to skip fact gathering
* first_available_file and with_items used together will now raise an error
0.5 "Amsterdam" ------- July 04, 2012
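One entry above notes that modules now uniformly accept yes/no for boolean parameters, where some previously wanted true/false. A normalizer for that convention might look like the following — a minimal sketch of the idea, not the actual module plumbing:

```python
def to_boolean(value):
    # Normalize the yes/no (and true/false, on/off, 1/0) spellings that
    # modules now accept uniformly. Hypothetical helper for illustration.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "on", "1")
```

With this, `install-recommends=yes` and `install-recommends=true` resolve to the same boolean.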

@@ -35,6 +35,8 @@ def main(args):
parser = utils.base_parser(constants=C, usage=usage, connect_opts=True, runas_opts=True)
parser.add_option('-e', '--extra-vars', dest="extra_vars", default=None,
help="set additional key=value variables from the CLI")
parser.add_option('-t', '--tags', dest='tags', default='all',
help="only run plays and tasks tagged with these values")
options, args = parser.parse_args(args)
@@ -53,6 +55,7 @@ def main(args):
options.sudo = True
options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
extra_vars = utils.parse_kv(options.extra_vars)
only_tags = options.tags.split(",")
# run all playbooks specified on the command line
for playbook in args:
@@ -78,7 +81,8 @@ def main(args):
sudo_user=options.sudo_user,
sudo_pass=sudopass,
extra_vars=extra_vars,
private_key_file=options.private_key_file
private_key_file=options.private_key_file,
only_tags=only_tags,
)
try:
@@ -87,7 +91,7 @@ def main(args):
print callbacks.banner("PLAY RECAP")
for h in hosts:
t = pb.stats.summarize(h)
print "%-30s : ok=%4s changed=%4s unreachable=%4s failed=%4s " % (h,
print "%-30s : ok=%-4s changed=%-4s unreachable=%-4s failed=%-4s " % (h,
t['ok'], t['changed'], t['unreachable'], t['failures']
)
print "\n"
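The CLI hunk above wires up two new flags: `--extra-vars` is parsed into a dict via `utils.parse_kv`, and `--tags` is split on commas. A simplified stand-in for that parsing (the real `parse_kv` handles quoting; this sketch does not):

```python
def parse_kv(args):
    # Split an --extra-vars string such as "release=1.0 user=deploy"
    # into a dict. Simplified stand-in for ansible's utils.parse_kv.
    options = {}
    if args is not None:
        for token in args.split():
            if "=" in token:
                key, value = token.split("=", 1)
                options[key] = value
    return options

extra_vars = parse_kv("release=1.0 user=deploy")
only_tags = "foo,bar".split(",")  # how the --tags value becomes a list
```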

@@ -0,0 +1,21 @@
##
# Example Ansible playbook that uses the MySQL module.
#
---
- hosts: all
user: root
vars:
mysql_root_password: 'password'
tasks:
- name: Create database user
action: mysql_user loginpass=$mysql_root_password user=bob passwd=12345 priv=*.*:ALL state=present
- name: Create database
action: mysql_db loginpass=$mysql_root_password db=bobdata state=present
- name: Ensure no user named 'sally' exists
action: mysql_user loginpass=$mysql_root_password user=sally state=absent

@@ -0,0 +1,26 @@
---
# it is possible to have top level playbook files import other playbook
# files. For example, a playbook called site.yml could include three
# different playbooks, such as webservers, workers, dbservers, etc.
#
# Running the site playbook would run all playbooks, while individual
# playbooks could still be run directly. This is somewhat like
# the tag feature and can be used in conjunction for very fine grained
# control over what you want to target when running ansible.
- name: this is a play at the top level of a file
hosts: all
user: root
tasks:
- name: say hi
tags: foo
action: shell echo "hi..."
# and this is how we include another playbook, be careful and
# don't recurse infinitely or anything. Note you can't use
# any variables here.
- include: intro_example.yml
# and if we wanted, we can continue with more includes here,
# or more plays inline in this file

@@ -0,0 +1,44 @@
---
# tags allow us to run all of a playbook or part of it.
#
# assume: ansible-playbook tags.yml --tags foo
#
# try this with:
# --tags foo
# --tags bar
# --tags extra
#
# the value of a 'tags:' element can be a string or list
# of tag names. Variables are not usable in tag names.
- name: example play one
hosts: all
user: root
# any tags applied to the play are shorthand to applying
# the tag to all tasks in it. Here, each task is given
# the tag extra
tags:
- extra
tasks:
# this task will run if you don't specify any tags,
# if you specify 'foo' or if you specify 'extra'
- name: hi
tags: foo
action: shell echo "first task ran"
- name: example play two
hosts: all
user: root
tasks:
- name: hi
tags:
- bar
action: shell echo "second task ran"
- include: tasks/base.yml tags=base
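The tag selection this playbook exercises boils down to two steps: coerce a `tags:` value (string or list, per the comment above) to a list, then run a task only when it shares a tag with the `--tags` request. A simplified sketch of that logic, mirroring the normalization the Play class does elsewhere in this commit:

```python
def normalize_tags(tags):
    # Coerce a 'tags:' value (string, list, or missing) to a list of names.
    if tags is None:
        return []
    if isinstance(tags, str):
        return [tags]
    return tags if isinstance(tags, list) else []

def should_run(task_tags, only_tags):
    # A task runs when any tag requested via --tags matches one of its tags.
    return any(t in task_tags for t in only_tags)
```

For example, a task tagged `foo` plus the play-level `extra` tag runs for `--tags foo` and `--tags extra`, but not `--tags bar`.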

@@ -1,4 +1,10 @@
# This is a very simple Jinja2 template representing an imaginary configuration file
# for an imaginary app.
# this is an example of loading a fact from the setup module
system={{ ansible_system }}
# here is a variable that could be set in a playbook or inventory file
http_port={{ http_port }}

@@ -69,7 +69,7 @@ conn = xmlrpclib.Server("http://127.0.0.1/cobbler_api", allow_none=True)
# executed with no parameters, return the list of
# all groups and hosts
if len(sys.argv) == 1:
if len(sys.argv) == 2 and (sys.argv[1] == '--list'):
systems = conn.get_item_names('system')
groups = { 'ungrouped' : [] }
@@ -103,10 +103,10 @@ if len(sys.argv) == 1:
# executed with a hostname as a parameter, return the
# variables for that host
if len(sys.argv) == 2:
elif len(sys.argv) == 3 and (sys.argv[1] == '--host'):
# look up the system record for the given DNS name
result = conn.find_system_by_dns_name(sys.argv[1])
result = conn.find_system_by_dns_name(sys.argv[2])
system = result.get('name', None)
data = {}
if system is None:
@@ -125,3 +125,7 @@ if len(sys.argv) == 2:
print json.dumps(results)
sys.exit(0)
else:
print "usage: --list ..OR.. --host <hostname>"
sys.exit(1)
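The hunks above move the Cobbler script to the `--list` / `--host <hostname>` calling convention for external inventory scripts: `--list` emits all groups as JSON, `--host` emits one host's variables. A self-contained sketch of that protocol, with hard-coded sample data standing in for the Cobbler API:

```python
import json

# Hypothetical sample data in place of the Cobbler API results.
INVENTORY = {"webservers": ["web1.example.com"], "ungrouped": []}
HOSTVARS = {"web1.example.com": {"http_port": 80}}

def inventory_output(argv):
    # Return (exit_code, text) for a given argv, mimicking the
    # --list / --host <hostname> convention the script now follows.
    if len(argv) == 2 and argv[1] == "--list":
        return 0, json.dumps(INVENTORY)
    if len(argv) == 3 and argv[1] == "--host":
        return 0, json.dumps(HOSTVARS.get(argv[2], {}))
    return 1, "usage: --list ..OR.. --host <hostname>"
```

A real script would print the text and `sys.exit()` with the code, as the diff does.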

@@ -83,7 +83,7 @@ except:
print "***********************************"
print "PARSED OUTPUT"
print utils.bigjson(results)
print utils.jsonify(results,format=True)
sys.exit(0)

@@ -15,20 +15,23 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#######################################################
import utils
import sys
import getpass
import os
import subprocess
#######################################################
cowsay = None
if os.path.exists("/usr/bin/cowsay"):
cowsay = "/usr/bin/cowsay"
elif os.path.exists("/usr/games/cowsay"):
cowsay = "/usr/games/cowsay"
class AggregateStats(object):
''' holds stats about per-host activity during playbook runs '''
def __init__(self):
self.processed = {}
self.failures = {}
self.ok = {}
@@ -76,17 +79,65 @@ class AggregateStats(object):
########################################################################
def regular_generic_msg(hostname, result, oneline, caption):
''' output on the result of a module run that is not command '''
if not oneline:
return "%s | %s >> %s\n" % (hostname, caption, utils.jsonify(result,format=True))
else:
return "%s | %s >> %s\n" % (hostname, caption, utils.jsonify(result))
def banner(msg):
res = ""
if os.path.exists("/usr/bin/cowsay"):
cmd = subprocess.Popen("/usr/bin/cowsay -W 60 \"%s\"" % msg,
if cowsay != None:
cmd = subprocess.Popen("%s -W 60 \"%s\"" % (cowsay, msg),
stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
(out, err) = cmd.communicate()
res = "%s\n" % out
return "%s\n" % out
else:
return "\n%s ********************* " % msg
def command_generic_msg(hostname, result, oneline, caption):
''' output the result of a command run '''
rc = result.get('rc', '0')
stdout = result.get('stdout','')
stderr = result.get('stderr', '')
msg = result.get('msg', '')
if not oneline:
buf = "%s | %s | rc=%s >>\n" % (hostname, caption, result.get('rc',0))
if stdout:
buf += stdout
if stderr:
buf += stderr
if msg:
buf += msg
return buf + "\n"
else:
if stderr:
return "%s | %s | rc=%s | (stdout) %s (stderr) %s\n" % (hostname, caption, rc, stdout, stderr)
else:
return "%s | %s | rc=%s | (stdout) %s\n" % (hostname, caption, rc, stdout)
def host_report_msg(hostname, module_name, result, oneline):
''' summarize the JSON results for a particular host '''
failed = utils.is_failed(result)
if module_name in [ 'command', 'shell', 'raw' ] and 'ansible_job_id' not in result:
if not failed:
return command_generic_msg(hostname, result, oneline, 'success')
else:
return command_generic_msg(hostname, result, oneline, 'FAILED')
else:
res = "%s ********************* \n" % msg
return res
if not failed:
return regular_generic_msg(hostname, result, oneline, 'success')
else:
return regular_generic_msg(hostname, result, oneline, 'FAILED')
###############################################
class DefaultRunnerCallbacks(object):
''' no-op callbacks for API usage of Runner() if no callbacks are specified '''
@@ -127,33 +178,43 @@ class CliRunnerCallbacks(DefaultRunnerCallbacks):
''' callbacks for use by /usr/bin/ansible '''
def __init__(self):
# set by /usr/bin/ansible later
self.options = None
self._async_notified = {}
def on_failed(self, host, res):
self._on_any(host,res)
def on_ok(self, host, res):
self._on_any(host,res)
def on_unreachable(self, host, res):
if type(res) == dict:
res = res.get('msg','')
print "%s | FAILED => %s" % (host, res)
if self.options.tree:
utils.write_tree_file(self.options.tree, host, utils.bigjson(dict(failed=True, msg=res)))
utils.write_tree_file(
self.options.tree, host,
utils.jsonify(dict(failed=True, msg=res),format=True)
)
def on_skipped(self, host):
pass
def on_error(self, host, err):
print >>sys.stderr, "err: [%s] => %s\n" % (host, err)
def on_no_hosts(self):
print >>sys.stderr, "no hosts matched\n"
def on_async_poll(self, host, res, jid, clock):
if jid not in self._async_notified:
self._async_notified[jid] = clock + 1
if self._async_notified[jid] > clock:
@@ -161,15 +222,18 @@ class CliRunnerCallbacks(DefaultRunnerCallbacks):
print "<job %s> polling, %ss remaining"%(jid, clock)
def on_async_ok(self, host, res, jid):
print "<job %s> finished on %s => %s"%(jid, host, utils.bigjson(res))
print "<job %s> finished on %s => %s"%(jid, host, utils.jsonify(res,format=True))
def on_async_failed(self, host, res, jid):
print "<job %s> FAILED on %s => %s"%(jid, host, utils.bigjson(res))
print "<job %s> FAILED on %s => %s"%(jid, host, utils.jsonify(res,format=True))
def _on_any(self, host, result):
print utils.host_report_msg(host, self.options.module_name, result, self.options.one_line)
print host_report_msg(host, self.options.module_name, result, self.options.one_line)
if self.options.tree:
utils.write_tree_file(self.options.tree, host, utils.bigjson(result))
utils.write_tree_file(self.options.tree, host, utils.jsonify(result,format=True))
########################################################################
@@ -177,33 +241,41 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
''' callbacks used for Runner() from /usr/bin/ansible-playbook '''
def __init__(self, stats, verbose=False):
self.stats = stats
self._async_notified = {}
self.verbose = verbose
def on_unreachable(self, host, msg):
print "fatal: [%s] => %s" % (host, msg)
def on_failed(self, host, results):
print "failed: [%s] => %s\n" % (host, utils.smjson(results))
print "failed: [%s] => %s" % (host, utils.jsonify(results))
def on_ok(self, host, host_result):
# show verbose output for non-setup module results if --verbose is used
if not self.verbose or host_result.get("verbose_override",None) is not None:
print "ok: [%s]\n" % (host)
print "ok: [%s]" % (host)
else:
print "ok: [%s] => %s" % (host, utils.smjson(host_result))
print "ok: [%s] => %s" % (host, utils.jsonify(host_result))
def on_error(self, host, err):
print >>sys.stderr, "err: [%s] => %s\n" % (host, err)
print >>sys.stderr, "err: [%s] => %s" % (host, err)
def on_skipped(self, host):
print "skipping: [%s]\n" % host
print "skipping: [%s]" % host
def on_no_hosts(self):
print "no hosts matched or remaining\n"
def on_async_poll(self, host, res, jid, clock):
if jid not in self._async_notified:
self._async_notified[jid] = clock + 1
if self._async_notified[jid] > clock:
@@ -211,9 +283,11 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
print "<job %s> polling, %ss remaining"%(jid, clock)
def on_async_ok(self, host, res, jid):
print "<job %s> finished on %s"%(jid, host)
def on_async_failed(self, host, res, jid):
print "<job %s> FAILED on %s"%(jid, host)
########################################################################
@@ -222,34 +296,45 @@ class PlaybookCallbacks(object):
''' playbook.py callbacks used by /usr/bin/ansible-playbook '''
def __init__(self, verbose=False):
self.verbose = verbose
def on_start(self):
print "\n"
pass
def on_notify(self, host, handler):
pass
def on_task_start(self, name, is_conditional):
print banner(utils.task_start_msg(name, is_conditional))
msg = "TASK: [%s]" % name
if is_conditional:
msg = "NOTIFIED: [%s]" % name
print banner(msg)
def on_vars_prompt(self, varname, private=True):
msg = 'input for %s: ' % varname
if private:
return getpass.getpass(msg)
return raw_input(msg)
def on_setup_primary(self):
print banner("SETUP PHASE")
def on_setup(self):
def on_setup_secondary(self):
print banner("VARIABLE IMPORT PHASE")
print banner("GATHERING FACTS")
def on_import_for_host(self, host, imported_file):
print "%s: importing %s" % (host, imported_file)
def on_not_import_for_host(self, host, missing_file):
print "%s: not importing file: %s" % (host, missing_file)
def on_play_start(self, pattern):
print banner("PLAY [%s]" % pattern)

@@ -14,17 +14,12 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
import os
DEFAULT_HOST_LIST = os.environ.get('ANSIBLE_HOSTS',
'/etc/ansible/hosts')
DEFAULT_MODULE_PATH = os.environ.get('ANSIBLE_LIBRARY',
'/usr/share/ansible')
DEFAULT_REMOTE_TMP = os.environ.get('ANSIBLE_REMOTE_TMP',
'$HOME/.ansible/tmp')
DEFAULT_HOST_LIST = os.environ.get('ANSIBLE_HOSTS', '/etc/ansible/hosts')
DEFAULT_MODULE_PATH = os.environ.get('ANSIBLE_LIBRARY', '/usr/share/ansible')
DEFAULT_REMOTE_TMP = os.environ.get('ANSIBLE_REMOTE_TMP', '$HOME/.ansible/tmp')
DEFAULT_MODULE_NAME = 'command'
DEFAULT_PATTERN = '*'
@@ -34,7 +29,7 @@ DEFAULT_TIMEOUT = os.environ.get('ANSIBLE_TIMEOUT',10)
DEFAULT_POLL_INTERVAL = os.environ.get('ANSIBLE_POLL_INTERVAL',15)
DEFAULT_REMOTE_USER = os.environ.get('ANSIBLE_REMOTE_USER','root')
DEFAULT_REMOTE_PASS = None
DEFAULT_PRIVATE_KEY_FILE = os.environ.get('ANSIBLE_REMOTE_USER',None)
DEFAULT_PRIVATE_KEY_FILE = os.environ.get('ANSIBLE_PRIVATE_KEY_FILE',None)
DEFAULT_SUDO_PASS = None
DEFAULT_SUDO_USER = os.environ.get('ANSIBLE_SUDO_USER','root')
DEFAULT_REMOTE_PORT = 22

@@ -15,11 +15,8 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
class AnsibleError(Exception):
"""
The base Ansible exception from which all others should subclass.
"""
''' The base Ansible exception from which all others should subclass '''
def __init__(self, msg):
self.msg = msg
@@ -27,11 +24,9 @@ class AnsibleError(Exception):
def __str__(self):
return self.msg
class AnsibleFileNotFound(AnsibleError):
pass
class AnsibleConnectionFailed(AnsibleError):
pass

@@ -52,7 +52,7 @@ class Inventory(object):
if type(host_list) in [ str, unicode ]:
if host_list.find(",") != -1:
host_list = host_list.split(",")
host_list = host_list.split(",")
if type(host_list) == list:
all = Group('all')
@@ -94,7 +94,7 @@ class Inventory(object):
for group in groups:
for host in group.get_hosts():
if group.name == pat or pat == 'all' or self._match(host.name, pat):
#must test explicitly for None because [] means no hosts allowed
# must test explicitly for None because [] means no hosts allowed
if self._restriction==None or host.name in self._restriction:
if inverted:
if host.name in hosts:
@@ -108,8 +108,8 @@ class Inventory(object):
def get_host(self, hostname):
for group in self.groups:
for host in group.get_hosts():
if hostname == host.name:
for host in group.get_hosts():
if hostname == host.name:
return host
return None
@@ -128,7 +128,6 @@ class Inventory(object):
def get_variables(self, hostname):
if self._is_script:
# TODO: move this to inventory_script.py
host = self.get_host(hostname)
cmd = subprocess.Popen(
[self.host_list,"--host",hostname],
@@ -161,10 +160,9 @@ class Inventory(object):
def restrict_to(self, restriction, append_missing=False):
""" Restrict list operations to the hosts given in restriction """
if type(restriction) != list:
restriction = [ restriction ]
self._restriction = restriction
if type(restriction) != list:
restriction = [ restriction ]
self._restriction = restriction
def lift_restriction(self):
""" Do not restrict list operations """

@@ -15,38 +15,37 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
# from ansible import errors
class Group(object):
"""
Group of ansible hosts
"""
''' a group of ansible hosts '''
def __init__(self, name=None):
self.name = name
self.hosts = []
self.vars = {}
self.child_groups = []
self.parent_groups = []
if self.name is None:
raise Exception("group name is required")
raise Exception("group name is required")
def add_child_group(self, group):
if self == group:
raise Exception("can't add group to itself")
self.child_groups.append(group)
group.parent_groups.append(self)
def add_host(self, host):
self.hosts.append(host)
host.add_group(self)
def set_variable(self, key, value):
self.vars[key] = value
def get_hosts(self):
hosts = []
for kid in self.child_groups:
hosts.extend(kid.get_hosts())
@@ -54,14 +53,16 @@ class Group(object):
return hosts
def get_variables(self):
vars = {}
# FIXME: verify this variable override order is what we want
for ancestor in self.get_ancestors():
vars.update(ancestor.get_variables())
vars.update(ancestor.get_variables())
vars.update(self.vars)
return vars
def _get_ancestors(self):
results = {}
for g in self.parent_groups:
results[g.name] = g
@@ -69,8 +70,6 @@ class Group(object):
return results
def get_ancestors(self):
return self._get_ancestors().values()
return self._get_ancestors().values()

@@ -15,17 +15,14 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
from ansible import errors
import ansible.constants as C
class Host(object):
"""
Group of ansible hosts
"""
''' a single ansible host '''
def __init__(self, name=None, port=None):
self.name = name
self.vars = {}
self.groups = []
@@ -36,12 +33,15 @@ class Host(object):
raise Exception("host name is required")
def add_group(self, group):
self.groups.append(group)
def set_variable(self, key, value):
self.vars[key]=value;
self.vars[key]=value
def get_groups(self):
groups = {}
for g in self.groups:
groups[g.name] = g
@@ -51,6 +51,7 @@ class Host(object):
return groups.values()
def get_variables(self):
results = {}
for group in self.groups:
results.update(group.get_variables())

@@ -54,6 +54,7 @@ class InventoryParser(object):
# delta asdf=jkl favcolor=red
def _parse_base_groups(self):
# FIXME: refactor
ungrouped = Group(name='ungrouped')
all = Group(name='all')
@@ -79,9 +80,9 @@ class InventoryParser(object):
hostname = tokens[0]
port = C.DEFAULT_REMOTE_PORT
if hostname.find(":") != -1:
tokens2 = hostname.split(":")
hostname = tokens2[0]
port = tokens2[1]
tokens2 = hostname.split(":")
hostname = tokens2[0]
port = tokens2[1]
host = None
if hostname in self.hosts:
host = self.hosts[hostname]
@@ -89,9 +90,9 @@ class InventoryParser(object):
host = Host(name=hostname, port=port)
self.hosts[hostname] = host
if len(tokens) > 1:
for t in tokens[1:]:
(k,v) = t.split("=")
host.set_variable(k,v)
for t in tokens[1:]:
(k,v) = t.split("=")
host.set_variable(k,v)
self.groups[active_group_name].add_host(host)
# [southeast:children]
@@ -134,7 +135,7 @@ class InventoryParser(object):
line = line.replace("[","").replace(":vars]","")
group = self.groups.get(line, None)
if group is None:
raise errors.AnsibleError("can't add vars to undefined group: %s" % line)
raise errors.AnsibleError("can't add vars to undefined group: %s" % line)
elif line.startswith("#"):
pass
elif line.startswith("["):
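The hunks above re-indent the INI parser's handling of host lines like the `delta asdf=jkl favcolor=red` example in the comment, where a host may also carry a `:port` suffix. A simplified sketch of that token handling (the real parser also builds Host objects and group membership, and keeps the port as a string):

```python
DEFAULT_REMOTE_PORT = 22

def parse_host_line(line):
    # Parse "hostname[:port] [key=value ...]" into its three parts.
    tokens = line.split()
    hostname, port = tokens[0], DEFAULT_REMOTE_PORT
    if ":" in hostname:
        hostname, port_str = hostname.split(":", 1)
        port = int(port_str)
    variables = {}
    for token in tokens[1:]:
        key, value = token.split("=", 1)
        variables[key] = value
    return hostname, port, variables
```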

@@ -26,9 +26,7 @@ from ansible import errors
from ansible import utils
class InventoryScript(object):
"""
Host inventory parser for ansible using external inventory scripts.
"""
''' Host inventory parser for ansible using external inventory scripts. '''
def __init__(self, filename=C.DEFAULT_HOST_LIST):
@@ -39,6 +37,7 @@ class InventoryScript(object):
self.groups = self._parse()
def _parse(self):
groups = {}
self.raw = utils.parse_json(self.data)
all=Group('all')
@@ -55,4 +54,3 @@ class InventoryScript(object):
all.add_child_group(group)
return groups

@@ -15,8 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
import ansible.constants as C
from ansible.inventory.host import Host
from ansible.inventory.group import Group
@@ -24,9 +22,7 @@ from ansible import errors
from ansible import utils
class InventoryParserYaml(object):
"""
Host inventory for ansible.
"""
''' Host inventory parser for ansible '''
def __init__(self, filename=C.DEFAULT_HOST_LIST):
@@ -37,6 +33,7 @@ class InventoryParserYaml(object):
self._parse(data)
def _make_host(self, hostname):
if hostname in self._hosts:
return self._hosts[hostname]
else:
@@ -47,6 +44,7 @@ class InventoryParserYaml(object):
# see file 'test/yaml_hosts' for syntax
def _parse(self, data):
# FIXME: refactor into subfunctions
all = Group('all')
ungrouped = Group('ungrouped')
@@ -73,8 +71,8 @@ class InventoryParserYaml(object):
vars = subresult.get('vars',{})
if type(vars) == list:
for subitem in vars:
for (k,v) in subitem.items():
host.set_variable(k,v)
for (k,v) in subitem.items():
host.set_variable(k,v)
elif type(vars) == dict:
for (k,v) in subresult.get('vars',{}).items():
host.set_variable(k,v)
@@ -106,12 +104,33 @@ class InventoryParserYaml(object):
elif type(item) == dict and 'host' in item:
host = self._make_host(item['host'])
vars = item.get('vars', {})
if type(vars)==list:
varlist, vars = vars, {}
for subitem in varlist:
vars.update(subitem)
for (k,v) in vars.items():
host.set_variable(k,v)
host.set_variable(k,v)
groups = item.get('groups', {})
if type(groups) in [ str, unicode ]:
groups = [ groups ]
if type(groups)==list:
for subitem in groups:
if subitem in self.groups:
group = self.groups[subitem]
else:
group = Group(subitem)
self.groups[group.name] = group
all.add_child_group(group)
group.add_host(host)
grouped_hosts.append(host)
if host not in grouped_hosts:
ungrouped.add_host(host)
# make sure ungrouped.hosts is the complement of grouped_hosts
ungrouped_hosts = [host for host in ungrouped.hosts if host not in grouped_hosts]

@@ -15,30 +15,24 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
import ansible.inventory
import ansible.runner
import ansible.constants as C
from ansible import utils
from ansible import errors
import os
import collections
from play import Play
#############################################
class PlayBook(object):
'''
runs an ansible playbook, given as a datastructure
or YAML filename. a playbook is a deployment, config
management, or automation based set of commands to
run in series.
multiple plays/tasks do not execute simultaneously,
but tasks in each pattern do execute in parallel
(according to the number of forks requested) among
the hosts they address
runs an ansible playbook, given as a datastructure or YAML filename.
A playbook is a deployment, config management, or automation based
set of commands to run in series.
multiple plays/tasks do not execute simultaneously, but tasks in each
pattern do execute in parallel (according to the number of forks
requested) among the hosts they address
'''
# *****************************************************
@@ -61,7 +55,8 @@ class PlayBook(object):
stats = None,
sudo = False,
sudo_user = C.DEFAULT_SUDO_USER,
extra_vars = None):
extra_vars = None,
only_tags = None):
"""
playbook: path to a playbook file
@@ -80,13 +75,15 @@ class PlayBook(object):
sudo: if not specified per play, requests all plays use sudo mode
"""
self.SETUP_CACHE = {}
self.SETUP_CACHE = collections.defaultdict(dict)
if playbook is None or callbacks is None or runner_callbacks is None or stats is None:
raise Exception('missing required arguments')
if extra_vars is None:
extra_vars = {}
if only_tags is None:
only_tags = [ 'all' ]
self.module_path = module_path
self.forks = forks
@@ -105,26 +102,53 @@ class PlayBook(object):
self.extra_vars = extra_vars
self.global_vars = {}
self.private_key_file = private_key_file
self.only_tags = only_tags
self.inventory = ansible.inventory.Inventory(host_list)
self.inventory = ansible.inventory.Inventory(host_list)
if not self.inventory._is_script:
self.global_vars.update(self.inventory.get_group_variables('all'))
self.basedir = os.path.dirname(playbook)
self.playbook = utils.parse_yaml_from_file(playbook)
self.basedir = os.path.dirname(playbook)
self.playbook = self._load_playbook_from_file(playbook)
self.module_path = self.module_path + os.pathsep + os.path.join(self.basedir, "library")
# *****************************************************
def _load_playbook_from_file(self, path):
'''
run top level error checking on playbooks and allow them to include other playbooks.
'''
playbook_data = utils.parse_yaml_from_file(path)
accumulated_plays = []
if type(playbook_data) != list:
raise errors.AnsibleError("parse error: playbooks must be formatted as a YAML list")
for play in playbook_data:
if type(play) != dict:
raise errors.AnsibleError("parse error: each play in a playbook must be a YAML dictionary (hash), received: %s" % play)
if 'include' in play:
if len(play.keys()) == 1:
included_path = utils.path_dwim(self.basedir, play['include'])
accumulated_plays.extend(self._load_playbook_from_file(included_path))
else:
raise errors.AnsibleError("parse error: top level includes cannot be used with other directives: %s" % play)
else:
accumulated_plays.append(play)
return accumulated_plays
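The method above can be illustrated with a self-contained sketch in which YAML parsing is replaced by a hard-coded dict: a play that consists solely of an `include` directive is replaced, recursively, by the plays of the included file. File names and plays here are hypothetical:

```python
# Stand-in for YAML files on disk; in the real method this is
# utils.parse_yaml_from_file plus path_dwim resolution.
PLAYBOOKS = {
    "site.yml": [{"include": "web.yml"}, {"hosts": "db"}],
    "web.yml": [{"hosts": "web"}],
}

def load_playbook(path):
    # Flatten top-level includes into a single list of plays.
    plays = []
    for play in PLAYBOOKS[path]:
        if list(play.keys()) == ["include"]:
            plays.extend(load_playbook(play["include"]))
        else:
            plays.append(play)
    return plays
```

As in the real code, an include mixed with other directives would be an error; this sketch only handles the pure-include case.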
# *****************************************************
def run(self):
''' run all patterns in the playbook '''
# loop through all patterns and run them
self.callbacks.on_start()
for play_ds in self.playbook:
self.SETUP_CACHE = {}
self.SETUP_CACHE = collections.defaultdict(dict)
self._run_play(Play(self,play_ds))
# summarize the results
@@ -191,19 +215,17 @@ class PlayBook(object):
# load up an appropriate ansible runner to run the task in parallel
results = self._run_task_internal(task)
# if no hosts are matched, carry on
if results is None:
results = {}
# add facts to the global setup cache
for host, result in results['contacted'].iteritems():
if "ansible_facts" in result:
for k,v in result['ansible_facts'].iteritems():
self.SETUP_CACHE[host][k]=v
facts = results.get('ansible_facts', {})
self.SETUP_CACHE[host].update(facts)
self.stats.compute(results)
# if no hosts are matched, carry on
if results is None:
results = {}
# flag which notify handlers need to be run
if len(task.notify) > 0:
for host, results in results.get('contacted',{}).iteritems():
@@ -231,28 +253,21 @@ class PlayBook(object):
# *****************************************************
def _do_setup_step(self, play, vars_files=None):
''' push variables down to the systems and get variables+facts back up '''
# this enables conditional includes like $facter_os.yml and is only done
# after the original pass when we have that data.
#
if vars_files is not None:
self.callbacks.on_setup_secondary()
play.update_vars_files(self.inventory.list_hosts(play.hosts))
else:
self.callbacks.on_setup_primary()
def _do_setup_step(self, play):
''' get facts from the remote system '''
host_list = [ h for h in self.inventory.list_hosts(play.hosts)
if not (h in self.stats.failures or h in self.stats.dark) ]
if not play.gather_facts:
return {}
self.callbacks.on_setup()
self.inventory.restrict_to(host_list)
# push any variables down to the system
setup_results = ansible.runner.Runner(
pattern=play.hosts, module_name='setup', module_args=play.vars, inventory=self.inventory,
pattern=play.hosts, module_name='setup', module_args={}, inventory=self.inventory,
forks=self.forks, module_path=self.module_path, timeout=self.timeout, remote_user=play.remote_user,
remote_pass=self.remote_pass, remote_port=play.remote_port, private_key_file=self.private_key_file,
setup_cache=self.SETUP_CACHE, callbacks=self.runner_callbacks, sudo=play.sudo, sudo_user=play.sudo_user,
@@ -265,11 +280,8 @@ class PlayBook(object):
# now for each result, load into the setup cache so we can
# let runner template out future commands
setup_ok = setup_results.get('contacted', {})
if vars_files is None:
# first pass only or we'll erase good work
for (host, result) in setup_ok.iteritems():
if 'ansible_facts' in result:
self.SETUP_CACHE[host] = result['ansible_facts']
for (host, result) in setup_ok.iteritems():
self.SETUP_CACHE[host] = result.get('ansible_facts', {})
return setup_results
# *****************************************************
@@ -277,18 +289,29 @@ class PlayBook(object):
def _run_play(self, play):
''' run a list of tasks for a given pattern, in order '''
if not play.should_run(self.only_tags):
return
self.callbacks.on_play_start(play.name)
# push any variables down to the system # and get facts/ohai/other data back up
rc = self._do_setup_step(play) # pattern, vars, user, port, sudo, sudo_user, transport, None)
# get facts from system
rc = self._do_setup_step(play)
# now with that data, handle conditional variable file imports!
if play.vars_files and len(play.vars_files) > 0:
rc = self._do_setup_step(play, play.vars_files)
play.update_vars_files(self.inventory.list_hosts(play.hosts))
# run all the top level tasks, these get run on every node
for task in play.tasks():
self._run_task(play, task, False)
# only run the task if the requested tags match
should_run = False
for x in self.only_tags:
for y in task.tags:
if (x==y):
should_run = True
break
if should_run:
self._run_task(play, task, False)
# run notify actions
for handler in play.handlers():

@@ -26,8 +26,10 @@ import os
class Play(object):
__slots__ = [
'hosts', 'name', 'vars', 'vars_prompt', 'vars_files', 'handlers', 'remote_user', 'remote_port',
'sudo', 'sudo_user', 'transport', 'playbook', '_ds', '_handlers', '_tasks'
'hosts', 'name', 'vars', 'vars_prompt', 'vars_files',
'handlers', 'remote_user', 'remote_port',
'sudo', 'sudo_user', 'transport', 'playbook',
'tags', 'gather_facts', '_ds', '_handlers', '_tasks'
]
# *************************************************
@@ -37,6 +39,7 @@ class Play(object):
# TODO: more error handling
hosts = ds.get('hosts')
if hosts is None:
raise errors.AnsibleError('hosts declaration is required')
@@ -44,27 +47,40 @@ class Play(object):
hosts = ';'.join(hosts)
hosts = utils.template(hosts, playbook.extra_vars, {})
self._ds = ds
self.playbook = playbook
self.hosts = hosts
self.name = ds.get('name', self.hosts)
self.vars = ds.get('vars', {})
self.vars_files = ds.get('vars_files', [])
self.vars_prompt = ds.get('vars_prompt', {})
self.vars = self._get_vars(self.playbook.basedir)
self._tasks = ds.get('tasks', [])
self._handlers = ds.get('handlers', [])
self.remote_user = ds.get('user', self.playbook.remote_user)
self.remote_port = ds.get('port', self.playbook.remote_port)
self.sudo = ds.get('sudo', self.playbook.sudo)
self.sudo_user = ds.get('sudo_user', self.playbook.sudo_user)
self.transport = ds.get('connection', self.playbook.transport)
self._ds = ds
self.playbook = playbook
self.hosts = hosts
self.name = ds.get('name', self.hosts)
self.vars = ds.get('vars', {})
self.vars_files = ds.get('vars_files', [])
self.vars_prompt = ds.get('vars_prompt', {})
self.vars = self._get_vars(self.playbook.basedir)
self._tasks = ds.get('tasks', [])
self._handlers = ds.get('handlers', [])
self.remote_user = ds.get('user', self.playbook.remote_user)
self.remote_port = ds.get('port', self.playbook.remote_port)
self.sudo = ds.get('sudo', self.playbook.sudo)
self.sudo_user = ds.get('sudo_user', self.playbook.sudo_user)
self.transport = ds.get('connection', self.playbook.transport)
self.tags = ds.get('tags', None)
self.gather_facts = ds.get('gather_facts', True)
self._update_vars_files_for_host(None)
self._tasks = self._load_tasks(self._ds, 'tasks')
self._handlers = self._load_tasks(self._ds, 'handlers')
if self.tags is None:
self.tags = []
elif type(self.tags) in [ str, unicode ]:
self.tags = [ self.tags ]
elif type(self.tags) != list:
self.tags = []
if self.sudo_user != 'root':
self.sudo = True
# *************************************************
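The new `tags` handling in the `Play` constructor coerces whatever the playbook author wrote (nothing, a YAML scalar, or a YAML list) into a plain list. A Python 3 sketch of that Python 2 logic (the original checks `str`/`unicode`; the function name is illustrative):

```python
def normalize_tags(tags):
    """Coerce a play-level tags value to a list of strings."""
    if tags is None:
        return []                # no tags declared
    if isinstance(tags, str):
        return [tags]            # YAML scalar: tags: webservers
    if isinstance(tags, list):
        return tags              # YAML list: tags: [web, db]
    return []                    # any other type is silently dropped

assert normalize_tags(None) == []
assert normalize_tags('web') == ['web']
assert normalize_tags(['web', 'db']) == ['web', 'db']
assert normalize_tags({'bad': 'type'}) == []
```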
def _load_tasks(self, ds, keyname):
@ -76,6 +92,7 @@ class Play(object):
task_vars = self.vars.copy()
if 'include' in x:
tokens = shlex.split(x['include'])
for t in tokens[1:]:
(k,v) = t.split("=", 1)
task_vars[k]=v
@ -86,15 +103,13 @@ class Play(object):
else:
raise Exception("unexpected task type")
for y in data:
items = y.get('with_items',None)
if items is None:
items = [ '' ]
elif isinstance(items, basestring):
items = utils.varLookup(items, task_vars)
for item in items:
mv = task_vars.copy()
mv['item'] = item
results.append(Task(self,y,module_vars=mv))
mv = task_vars.copy()
results.append(Task(self,y,module_vars=mv))
for x in results:
if self.tags is not None:
x.tags.extend(self.tags)
return results
# *************************************************
@ -143,17 +158,41 @@ class Play(object):
def update_vars_files(self, hosts):
''' calculate vars_files, which requires that setup runs first so ansible facts can be mixed in '''
# now loop through all the hosts...
for h in hosts:
self._update_vars_files_for_host(h)
# *************************************************
def should_run(self, tags):
''' does the play match any of the tags? '''
if len(self._tasks) == 0:
return False
for task in self._tasks:
for task_tag in task.tags:
if task_tag in tags:
return True
return False
# *************************************************
def _has_vars_in(self, msg):
return ((msg.find("$") != -1) or (msg.find("{{") != -1))
# *************************************************
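`_has_vars_in` is just a cheap substring test for either templating syntax (`$var` or `{{ var }}`); the vars_files code below uses it to decide whether a filename still depends on unresolved variables. Standalone equivalent:

```python
def has_vars_in(msg):
    """True when the string still contains $var or {{ var }} syntax."""
    return ('$' in msg) or ('{{' in msg)

assert has_vars_in('vars/$ansible_hostname.yml')
assert has_vars_in('vars/{{ ansible_os_family }}.yml')
assert not has_vars_in('vars/common.yml')
```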
def _update_vars_files_for_host(self, host):
if not host in self.playbook.SETUP_CACHE:
if (host is not None) and (not host in self.playbook.SETUP_CACHE):
# no need to process failed hosts or hosts not in this play
return
if type(self.vars_files) != list:
self.vars_files = [ self.vars_files ]
for filename in self.vars_files:
if type(filename) == list:
@ -162,29 +201,51 @@ class Play(object):
found = False
sequence = []
for real_filename in filename:
filename2 = utils.template(real_filename, self.playbook.SETUP_CACHE[host])
filename2 = utils.template(filename2, self.vars)
filename2 = utils.path_dwim(self.playbook.basedir, filename2)
sequence.append(filename2)
if os.path.exists(filename2):
filename2 = utils.template(real_filename, self.vars)
filename3 = filename2
if host is not None:
filename3 = utils.template(filename2, self.playbook.SETUP_CACHE[host])
filename4 = utils.path_dwim(self.playbook.basedir, filename3)
sequence.append(filename4)
if os.path.exists(filename4):
found = True
data = utils.parse_yaml_from_file(filename2)
self.playbook.SETUP_CACHE[host].update(data)
self.playbook.callbacks.on_import_for_host(host, filename2)
break
else:
self.playbook.callbacks.on_not_import_for_host(host, filename2)
data = utils.parse_yaml_from_file(filename4)
if host is not None:
if self._has_vars_in(filename2) and not self._has_vars_in(filename3):
# this filename has variables in it that were fact specific
# so it needs to be loaded into the per host SETUP_CACHE
self.playbook.SETUP_CACHE[host].update(data)
self.playbook.callbacks.on_import_for_host(host, filename4)
elif not self._has_vars_in(filename4):
# found a non-host specific variable, load into vars and NOT
# the setup cache
self.vars.update(data)
elif host is not None:
self.playbook.callbacks.on_not_import_for_host(host, filename4)
if not found:
raise errors.AnsibleError(
"%s: FATAL, no files matched for vars_files import sequence: %s" % (host, sequence)
)
else:
filename2 = utils.template(filename, self.playbook.SETUP_CACHE[host])
filename2 = utils.template(filename2, self.vars)
fpath = utils.path_dwim(self.playbook.basedir, filename2)
new_vars = utils.parse_yaml_from_file(fpath)
# just one filename supplied, load it!
filename2 = utils.template(filename, self.vars)
filename3 = filename2
if host is not None:
filename3 = utils.template(filename2, self.playbook.SETUP_CACHE[host])
filename4 = utils.path_dwim(self.playbook.basedir, filename3)
if self._has_vars_in(filename4):
return
new_vars = utils.parse_yaml_from_file(filename4)
if new_vars:
self.playbook.SETUP_CACHE[host].update(new_vars)
#else: could warn if vars file contains no vars.
if type(new_vars) != dict:
raise errors.AnsibleError("files specified in vars_files must be a YAML dictionary: %s" % filename4)
if host is not None and self._has_vars_in(filename2) and not self._has_vars_in(filename3):
# running a host specific pass and has host specific variables
# load into setup cache
self.playbook.SETUP_CACHE[host].update(new_vars)
elif host is None:
# running a non-host specific pass and we can update the global vars instead
self.vars.update(new_vars)

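The two-pass templating above (play vars first, then per-host facts from the SETUP_CACHE) means a filename like `vars/$ansible_hostname.yml` only resolves on the host-specific pass, which is exactly what the `filename2`/`filename3` comparison detects. A simplified sketch, assuming a trivial stand-in for `utils.template` that only substitutes `$name` tokens:

```python
import re

def template(s, variables):
    """Minimal stand-in for utils.template: replace $name tokens,
    leaving unknown ones untouched."""
    return re.sub(r'\$(\w+)',
                  lambda m: str(variables.get(m.group(1), m.group(0))), s)

play_vars = {'env': 'prod'}
host_facts = {'ansible_hostname': 'web01'}

filename2 = template('vars/$env/$ansible_hostname.yml', play_vars)
filename3 = template(filename2, host_facts)

assert filename2 == 'vars/prod/$ansible_hostname.yml'  # fact not yet known
assert filename3 == 'vars/prod/web01.yml'              # resolved per host
```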
@ -15,8 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
from ansible import errors
from ansible import utils
@ -24,47 +22,81 @@ class Task(object):
__slots__ = [
'name', 'action', 'only_if', 'async_seconds', 'async_poll_interval',
'notify', 'module_name', 'module_args', 'module_vars', 'play', 'notified_by',
'notify', 'module_name', 'module_args', 'module_vars',
'play', 'notified_by', 'tags', 'with_items', 'first_available_file'
]
def __init__(self, play, ds, module_vars=None):
''' constructor loads from a task or handler datastructure '''
# TODO: more error handling
# include task specific vars
self.module_vars = module_vars
self.play = play
# load various attributes
self.name = ds.get('name', None)
self.action = ds.get('action', '')
self.tags = [ 'all' ]
# notified by is used by Playbook code to flag which hosts
# need to run a notifier
self.notified_by = []
# if no name is specified, use the action line as the name
if self.name is None:
self.name = self.action
# load various attributes
self.only_if = ds.get('only_if', 'True')
self.async_seconds = int(ds.get('async', 0)) # not async by default
self.async_poll_interval = int(ds.get('poll', 10)) # default poll = 10 seconds
self.notify = ds.get('notify', [])
self.first_available_file = ds.get('first_available_file', None)
self.with_items = ds.get('with_items', None)
# notify can be a string or a list, store as a list
if isinstance(self.notify, basestring):
self.notify = [ self.notify ]
# split the action line into a module name + arguments
tokens = self.action.split(None, 1)
if len(tokens) < 1:
raise errors.AnsibleError("invalid/missing action in task")
self.module_name = tokens[0]
self.module_args = ''
if len(tokens) > 1:
self.module_args = tokens[1]
import_tags = self.module_vars.get('tags',[])
if type(import_tags) in [str,unicode]:
# allow the user to list comma delimited tags
import_tags = import_tags.split(",")
self.name = utils.template(self.name, self.module_vars)
self.action = utils.template(self.action, self.module_vars)
if 'first_available_file' in ds:
self.module_vars['first_available_file'] = ds.get('first_available_file')
# handle mutually incompatible options
if self.with_items is not None and self.first_available_file is not None:
raise errors.AnsibleError("with_items and first_available_file are mutually incompatible in a single task")
# make first_available_file accessible to Runner code
if self.first_available_file:
self.module_vars['first_available_file'] = self.first_available_file
# process with_items so it can be used by Runner code
if self.with_items is None:
self.with_items = [ ]
elif isinstance(self.with_items, basestring):
self.with_items = utils.varLookup(self.with_items, self.module_vars)
if type(self.with_items) != list:
raise errors.AnsibleError("with_items must be a list, got: %s" % self.with_items)
self.module_vars['items'] = self.with_items
# tags allow certain parts of a playbook to be run without running the whole playbook
apply_tags = ds.get('tags', None)
if apply_tags is not None:
if type(apply_tags) in [ str, unicode ]:
self.tags.append(apply_tags)
elif type(apply_tags) == list:
self.tags.extend(apply_tags)
self.tags.extend(import_tags)

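The action line a task declares is split exactly once on whitespace: the first token becomes `module_name`, anything after it becomes the raw `module_args` string. A sketch of that split (raising `ValueError` here where the original raises `errors.AnsibleError`):

```python
def split_action(action):
    """Split 'module key=val ...' into (module_name, module_args)."""
    tokens = action.split(None, 1)   # at most one split, on any whitespace
    if len(tokens) < 1:
        raise ValueError("invalid/missing action in task")
    module_args = tokens[1] if len(tokens) > 1 else ''
    return tokens[0], module_args

assert split_action('service name=httpd state=started') == \
    ('service', 'name=httpd state=started')
assert split_action('ping') == ('ping', '')
```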
@ -30,6 +30,7 @@ import time
import base64
import getpass
import codecs
import collections
import ansible.constants as C
import ansible.inventory
@ -51,8 +52,7 @@ def _executor_hook(job_queue, result_queue):
''' callback used by multiprocessing pool '''
# attempt workaround of https://github.com/newsapps/beeswithmachineguns/issues/17
# does not occur for everyone, some claim still occurs on newer paramiko
# this function not present in CentOS 6
# this function also not present in CentOS 6
if HAS_ATFORK:
atfork()
@ -70,14 +70,14 @@ def _executor_hook(job_queue, result_queue):
################################################
class ReturnData(object):
''' internal return class for execute methods, not part of API signature '''
__slots__ = [ 'result', 'comm_ok', 'executed_str', 'host' ]
__slots__ = [ 'result', 'comm_ok', 'host' ]
def __init__(self, host=None, result=None, comm_ok=True, executed_str=''):
def __init__(self, host=None, result=None, comm_ok=True):
self.host = host
self.result = result
self.comm_ok = comm_ok
self.executed_str = executed_str
if type(self.result) in [ str, unicode ]:
self.result = utils.parse_json(self.result)
@ -91,16 +91,10 @@ class ReturnData(object):
return self.comm_ok
def is_successful(self):
if not self.comm_ok:
return False
else:
if 'failed' in self.result:
return False
if self.result.get('rc',0) != 0:
return False
return True
return self.comm_ok and ('failed' not in self.result) and (self.result.get('rc',0) == 0)
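The collapsed `is_successful` treats a result as a success only when all three conditions hold: the transport worked, no `failed` key is present, and the return code (defaulting to 0) is zero. A standalone restatement of that predicate:

```python
def is_successful(comm_ok, result):
    """Mirror of ReturnData.is_successful after the refactor."""
    return comm_ok and ('failed' not in result) and (result.get('rc', 0) == 0)

assert is_successful(True, {'rc': 0})
assert is_successful(True, {})                    # rc defaults to 0
assert not is_successful(False, {'rc': 0})        # transport failure
assert not is_successful(True, {'failed': True})  # module flagged failure
assert not is_successful(True, {'rc': 2})         # nonzero return code
```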
class Runner(object):
''' core API interface to ansible '''
def __init__(self,
host_list=C.DEFAULT_HOST_LIST, module_path=C.DEFAULT_MODULE_PATH,
@ -115,35 +109,41 @@ class Runner(object):
module_vars=None, is_playbook=False, inventory=None):
"""
host_list : path to a host list file, like /etc/ansible/hosts
module_path : path to modules, like /usr/share/ansible
module_name : which module to run (string)
module_args : args to pass to the module (string)
forks : desired level of parallelism (hosts to run on at a time)
timeout : connection timeout, such as a SSH timeout, in seconds
pattern : pattern or groups to select from in inventory
remote_user : connect as this remote username
remote_pass : supply this password (if not using keys)
remote_port : use this default remote port (if not set by the inventory system)
private_key_file : use this private key as your auth key
sudo_user : If you want to sudo to a user other than root.
sudo_pass : sudo password if using sudo and sudo requires a password
background : run asynchronously with a cap of this many # of seconds (if not 0)
basedir : paths used by modules if not absolute are relative to here
setup_cache : this is an internalism that is going away
transport : transport mode (paramiko, local)
conditional : only execute if this string, evaluated, is True
callbacks : output callback class
sudo : log in as remote user and immediately sudo to root
module_vars : provides additional variables to a template. FIXME: factor this out
is_playbook : indicates Runner is being used by a playbook. affects behavior in various ways.
inventory : inventory object, if host_list is not provided
host_list : path to a host list file, like /etc/ansible/hosts
module_path : path to modules, like /usr/share/ansible
module_name : which module to run (string)
module_args : args to pass to the module (string)
forks : desired level of parallelism (hosts to run on at a time)
timeout : connection timeout, such as a SSH timeout, in seconds
pattern : pattern or groups to select from in inventory
remote_user : connect as this remote username
remote_pass : supply this password (if not using keys)
remote_port : use this default remote port (if not set by the inventory system)
private_key_file : use this private key as your auth key
sudo_user : If you want to sudo to a user other than root.
sudo_pass : sudo password if using sudo and sudo requires a password
background : run asynchronously with a cap of this many # of seconds (if not 0)
basedir : paths used by modules if not absolute are relative to here
setup_cache : this is an internalism that is going away
transport : transport mode (paramiko, local)
conditional : only execute if this string, evaluated, is True
callbacks : output callback class
sudo : log in as remote user and immediately sudo to root
module_vars : provides additional variables to a template.
is_playbook : indicates Runner is being used by a playbook. affects behavior in various ways.
inventory : inventory object, if host_list is not provided
"""
# -- handle various parameters that need checking/mangling
if setup_cache is None:
setup_cache = {}
setup_cache = collections.defaultdict(dict)
if type(module_args) not in [str, unicode, dict]:
raise errors.AnsibleError("module_args must be a string or dict: %s" % module_args)
if basedir is None:
basedir = os.getcwd()
self.basedir = basedir
if callbacks is None:
callbacks = ans_callbacks.DefaultRunnerCallbacks()
@ -151,9 +151,12 @@ class Runner(object):
self.generated_jid = str(random.randint(0, 999999999999))
self.sudo_user = sudo_user
self.transport = transport
self.connector = connection.Connection(self, self.transport, self.sudo_user)
if self.transport == 'ssh' and remote_pass:
raise errors.AnsibleError("SSH transport does not support passwords, only keys or agents")
if self.transport == 'local':
self.remote_user = pwd.getpwuid(os.geteuid())[0]
if inventory is None:
self.inventory = ansible.inventory.Inventory(host_list)
@ -163,35 +166,30 @@ class Runner(object):
if module_vars is None:
module_vars = {}
self.setup_cache = setup_cache
self.conditional = conditional
self.module_path = module_path
self.module_name = module_name
self.forks = int(forks)
self.pattern = pattern
self.module_args = module_args
self.module_vars = module_vars
self.timeout = timeout
self.verbose = verbose
self.remote_user = remote_user
self.remote_pass = remote_pass
self.remote_port = remote_port
# -- save constructor parameters for later use
self.sudo_user = sudo_user
self.connector = connection.Connection(self)
self.setup_cache = setup_cache
self.conditional = conditional
self.module_path = module_path
self.module_name = module_name
self.forks = int(forks)
self.pattern = pattern
self.module_args = module_args
self.module_vars = module_vars
self.timeout = timeout
self.verbose = verbose
self.remote_user = remote_user
self.remote_pass = remote_pass
self.remote_port = remote_port
self.private_key_file = private_key_file
self.background = background
self.basedir = basedir
self.sudo = sudo
self.sudo_pass = sudo_pass
self.is_playbook = is_playbook
euid = pwd.getpwuid(os.geteuid())[0]
if self.transport == 'local' and self.remote_user != euid:
raise errors.AnsibleError("User mismatch: expected %s, but is %s" % (self.remote_user, euid))
if type(self.module_args) not in [str, unicode, dict]:
raise errors.AnsibleError("module_args must be a string or dict: %s" % self.module_args)
if self.transport == 'ssh' and self.remote_pass:
raise errors.AnsibleError("SSH transport does not support remote passwords, only keys or agents")
self.background = background
self.sudo = sudo
self.sudo_pass = sudo_pass
self.is_playbook = is_playbook
self._tmp_paths = {}
# ensure we're using unique tmp paths
random.seed()
# *****************************************************
@ -221,7 +219,7 @@ class Runner(object):
''' transfer string to remote file '''
if type(data) == dict:
data = utils.smjson(data)
data = utils.jsonify(data)
afd, afile = tempfile.mkstemp()
afo = os.fdopen(afd, 'w')
@ -230,62 +228,20 @@ class Runner(object):
afo.close()
remote = os.path.join(tmp, name)
conn.put_file(afile, remote)
os.unlink(afile)
try:
conn.put_file(afile, remote)
finally:
os.unlink(afile)
return remote
# *****************************************************
def _add_setup_vars(self, inject, args):
''' setup module variables need special handling '''
is_dict = False
if type(args) == dict:
is_dict = True
# TODO: keep this as a dict through the whole path to simplify this code
for (k,v) in inject.iteritems():
if not k.startswith('facter_') and not k.startswith('ohai_') and not k.startswith('ansible_'):
if not is_dict:
if str(v).find(" ") != -1:
v = "\"%s\"" % v
args += " %s=%s" % (k, str(v).replace(" ","~~~"))
else:
args[k]=v
return args
# *****************************************************
def _add_setup_metadata(self, args):
''' automatically determine where to store variables for the setup module '''
is_dict = False
if type(args) == dict:
is_dict = True
# TODO: make a _metadata_path function
if not is_dict:
if args.find("metadata=") == -1:
if self.remote_user == 'root' or (self.sudo and self.sudo_user == 'root'):
args = "%s metadata=/etc/ansible/setup" % args
else:
args = "%s metadata=%s/setup" % (args, C.DEFAULT_REMOTE_TMP)
else:
if not 'metadata' in args:
if self.remote_user == 'root' or (self.sudo and self.sudo_user == 'root'):
args['metadata'] = '/etc/ansible/setup'
else:
args['metadata'] = "%s/setup" % C.DEFAULT_REMOTE_TMP
return args
# *****************************************************
def _execute_module(self, conn, tmp, remote_module_path, args,
async_jid=None, async_module=None, async_limit=None):
''' runs a module that has already been transferred '''
inject = self.setup_cache.get(conn.host,{}).copy()
inject = self.setup_cache[conn.host].copy()
host_variables = self.inventory.get_variables(conn.host)
inject.update(host_variables)
inject.update(self.module_vars)
@ -296,17 +252,10 @@ class Runner(object):
inject['groups'] = group_hosts
if self.module_name == 'setup':
if not args:
args = {}
args = self._add_setup_vars(inject, args)
args = self._add_setup_metadata(args)
if type(args) == dict:
args = utils.bigjson(args)
args = utils.template(args, inject, self.setup_cache)
args = utils.jsonify(args,format=True)
module_name_tail = remote_module_path.split("/")[-1]
args = utils.template(args, inject, self.setup_cache)
argsfile = self._transfer_str(conn, tmp, 'arguments', args)
if async_jid is None:
@ -315,28 +264,7 @@ class Runner(object):
cmd = " ".join([str(x) for x in [remote_module_path, async_jid, async_limit, async_module, argsfile]])
res = self._low_level_exec_command(conn, cmd, tmp, sudoable=True)
executed_str = "%s %s" % (module_name_tail, args.strip())
return ReturnData(host=conn.host, result=res, executed_str=executed_str)
# *****************************************************
def _save_setup_result_to_disk(self, conn, result):
''' cache results of calling setup '''
dest = os.path.expanduser("~/.ansible_setup_data")
user = getpass.getuser()
if user == 'root':
dest = "/var/lib/ansible/setup_data"
if not os.path.exists(dest):
os.makedirs(dest)
fh = open(os.path.join(dest, conn.host), "w")
fh.write(result)
fh.close()
return result
return ReturnData(host=conn.host, result=res)
# *****************************************************
@ -344,16 +272,11 @@ class Runner(object):
''' allows discovered variables to be used in templates and action statements '''
host = conn.host
if 'ansible_facts' in result:
var_result = result['ansible_facts']
else:
var_result = {}
var_result = result.get('ansible_facts',{})
# note: do not allow variables from playbook to be stomped on
# by variables coming up from facter/ohai/etc. They
# should be prefixed anyway
if not host in self.setup_cache:
self.setup_cache[host] = {}
for (k, v) in var_result.iteritems():
if not k in self.setup_cache[host]:
self.setup_cache[host][k] = v
@ -362,9 +285,9 @@ class Runner(object):
def _execute_raw(self, conn, tmp):
''' execute a non-module command for bootstrapping, or if there's no python on a device '''
stdout = self._low_level_exec_command( conn, self.module_args, tmp, sudoable = True )
data = dict(stdout=stdout)
return ReturnData(host=conn.host, result=data)
return ReturnData(host=conn.host, result=dict(
stdout=self._low_level_exec_command(conn, self.module_args, tmp, sudoable = True)
))
# ***************************************************
@ -409,14 +332,14 @@ class Runner(object):
# load up options
options = utils.parse_kv(self.module_args)
source = options.get('src', None)
dest = options.get('dest', None)
source = options.get('src', None)
dest = options.get('dest', None)
if (source is None and not 'first_available_file' in self.module_vars) or dest is None:
result=dict(failed=True, msg="src and dest are required")
return ReturnData(host=conn.host, result=result)
# apply templating to source argument
inject = self.setup_cache.get(conn.host,{})
inject = self.setup_cache[conn.host]
# if we have first_available_file in our vars
# look up the files and use the first one we find as src
@ -482,7 +405,7 @@ class Runner(object):
return ReturnData(host=conn.host, result=results)
# apply templating to source argument
inject = self.setup_cache.get(conn.host,{})
inject = self.setup_cache[conn.host]
if self.module_vars is not None:
inject.update(self.module_vars)
source = utils.template(source, inject, self.setup_cache)
@ -499,7 +422,7 @@ class Runner(object):
remote_md5 = self._remote_md5(conn, tmp, source)
if remote_md5 == '0':
result = dict(msg="missing remote file", changed=False)
result = dict(msg="missing remote file", file=source, changed=False)
return ReturnData(host=conn.host, result=result)
elif remote_md5 != local_md5:
# create the containing directories, if needed
@ -510,12 +433,12 @@ class Runner(object):
conn.fetch_file(source, dest)
new_md5 = utils.md5(dest)
if new_md5 != remote_md5:
result = dict(failed=True, msg="md5 mismatch", md5sum=new_md5)
result = dict(failed=True, md5sum=new_md5, msg="md5 mismatch", file=source)
return ReturnData(host=conn.host, result=result)
result = dict(changed=True, md5sum=new_md5)
return ReturnData(host=conn.host, result=result)
else:
result = dict(changed=False, md5sum=local_md5)
result = dict(changed=False, md5sum=local_md5, file=source)
return ReturnData(host=conn.host, result=result)
@ -545,6 +468,9 @@ class Runner(object):
def _execute_template(self, conn, tmp):
''' handler for template operations '''
if not self.is_playbook:
raise errors.AnsibleError("in current versions of ansible, templates are only usable in playbooks")
# load up options
options = utils.parse_kv(self.module_args)
source = options.get('src', None)
@ -555,7 +481,7 @@ class Runner(object):
return ReturnData(host=conn.host, comm_ok=False, result=result)
# apply templating to source argument so vars can be used in the path
inject = self.setup_cache.get(conn.host,{})
inject = self.setup_cache[conn.host]
# if we have first_available_file in our vars
# look up the files and use the first one we find as src
@ -571,38 +497,11 @@ class Runner(object):
result = dict(failed=True, msg="could not find src in first_available_file list")
return ReturnData(host=conn.host, comm_ok=False, result=result)
if self.module_vars is not None:
inject.update(self.module_vars)
source = utils.template(source, inject, self.setup_cache)
#(host, ok, data, err) = (None, None, None, None)
if not self.is_playbook:
# not running from a playbook so we have to fetch the remote
# setup file contents before proceeding...
if metadata is None:
if self.remote_user == 'root':
metadata = '/etc/ansible/setup'
else:
# path is expanded on remote side
metadata = "~/.ansible/tmp/setup"
# install the template module
slurp_module = self._transfer_module(conn, tmp, 'slurp')
# run the slurp module to get the metadata file
args = "src=%s" % metadata
result1 = self._execute_module(conn, tmp, slurp_module, args)
if not 'content' in result1.result or result1.result.get('encoding','base64') != 'base64':
result1.result['failed'] = True
return result1
content = base64.b64decode(result1.result['content'])
inject = utils.json_loads(content)
# install the template module
copy_module = self._transfer_module(conn, tmp, 'copy')
@ -621,7 +520,6 @@ class Runner(object):
# modify file attribs if needed
if exec_rc.comm_ok:
exec_rc.executed_str = exec_rc.executed_str.replace("copy","template",1)
return self._chain_file_module(conn, tmp, exec_rc, options)
else:
return exec_rc
@ -643,6 +541,8 @@ class Runner(object):
# *****************************************************
def _executor(self, host):
''' handler for multiprocessing library '''
try:
exec_rc = self._executor_internal(host)
if type(exec_rc) != ReturnData:
@ -659,19 +559,58 @@ class Runner(object):
self.callbacks.on_unreachable(host, msg)
return ReturnData(host=host, comm_ok=False, result=dict(failed=True, msg=msg))
# *****************************************************
def _executor_internal(self, host):
''' callback executed in parallel for each host. returns (hostname, connected_ok, extra) '''
''' executes any module one or more times '''
items = self.module_vars.get('items', [])
if len(items) == 0:
return self._executor_internal_inner(host)
else:
# executing using with_items, so make multiple calls
# TODO: refactor
aggregate = {}
all_comm_ok = True
all_changed = False
all_failed = False
results = []
for x in items:
self.module_vars['item'] = x
result = self._executor_internal_inner(host)
results.append(result.result)
if result.comm_ok == False:
all_comm_ok = False
break
for x in results:
if x.get('changed') == True:
all_changed = True
if (x.get('failed') == True) or (('rc' in x) and (x['rc'] != 0)):
all_failed = True
break
msg = 'All items succeeded'
if all_failed:
msg = "One or more items failed."
rd_result = dict(failed=all_failed, changed=all_changed, results=results, msg=msg)
if not all_failed:
del rd_result['failed']
return ReturnData(host=host, comm_ok=all_comm_ok, result=rd_result)
# *****************************************************
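The with_items aggregation above folds the per-item results into one summary: `changed` if any item changed, `failed` if any item failed or returned a nonzero rc, and the `failed` key is dropped entirely on success so callers can keep testing `'failed' in result`. A condensed sketch of that fold:

```python
def aggregate(results):
    """Summarize per-item results the way _executor_internal does."""
    all_changed = any(r.get('changed') is True for r in results)
    all_failed = any(
        r.get('failed') is True or ('rc' in r and r['rc'] != 0)
        for r in results
    )
    summary = dict(
        changed=all_changed,
        results=results,
        msg="One or more items failed." if all_failed
            else "All items succeeded",
    )
    if all_failed:
        summary['failed'] = True   # only present on failure
    return summary

ok = aggregate([{'changed': True, 'rc': 0}, {'rc': 0}])
assert 'failed' not in ok and ok['changed']

bad = aggregate([{'rc': 0}, {'rc': 1}])
assert bad['failed'] and not bad['changed']
```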
def _executor_internal_inner(self, host):
''' decides how to invoke a module '''
host_variables = self.inventory.get_variables(host)
port = host_variables.get('ansible_ssh_port', self.remote_port)
inject = self.setup_cache.get(host,{}).copy()
inject = self.setup_cache[host].copy()
inject.update(host_variables)
inject.update(self.module_vars)
conditional = utils.template(self.conditional, inject, self.setup_cache)
if not eval(conditional):
result = utils.smjson(dict(skipped=True))
result = utils.jsonify(dict(skipped=True))
self.callbacks.on_skipped(host)
return ReturnData(host=host, result=result)
@ -687,16 +626,9 @@ class Runner(object):
tmp = self._make_tmp_path(conn)
result = None
if self.module_name == 'copy':
result = self._execute_copy(conn, tmp)
elif self.module_name == 'fetch':
result = self._execute_fetch(conn, tmp)
elif self.module_name == 'template':
result = self._execute_template(conn, tmp)
elif self.module_name == 'raw':
result = self._execute_raw(conn, tmp)
elif self.module_name == 'assemble':
result = self._execute_assemble(conn, tmp)
handler = getattr(self, "_execute_%s" % self.module_name, None)
if handler:
result = handler(conn, tmp)
else:
if self.background == 0:
result = self._execute_normal_module(conn, tmp, module_name)
@ -724,25 +656,22 @@ class Runner(object):
def _low_level_exec_command(self, conn, cmd, tmp, sudoable=False):
''' execute a command string over SSH, return the output '''
sudo_user = self.sudo_user
stdin, stdout, stderr = conn.exec_command(cmd, tmp, sudo_user, sudoable=sudoable)
out=None
if type(stdout) != str:
out="\n".join(stdout.readlines())
return "\n".join(stdout.readlines())
else:
out=stdout
# sudo mode paramiko doesn't capture stderr, so not relaying here either...
return out
return stdout
# *****************************************************
def _remote_md5(self, conn, tmp, path):
'''
takes a remote md5sum without requiring python, and returns 0 if the
file does not exist
takes a remote md5sum without requiring python, and returns 0 if no file
'''
test = "[[ -r %s ]]" % path
md5s = [
"(%s && /usr/bin/md5sum %s 2>/dev/null)" % (test,path),
@ -751,8 +680,7 @@ class Runner(object):
]
cmd = " || ".join(md5s)
cmd = "%s || (echo \"0 %s\")" % (cmd, path)
remote_md5 = self._low_level_exec_command(conn, cmd, tmp, True).split()[0]
return remote_md5
return self._low_level_exec_command(conn, cmd, tmp, True).split()[0]
# *****************************************************
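The `_remote_md5` command chains shell fallbacks with `||`: each candidate first tests that the file is readable, and the final `echo "0 <path>"` guarantees output whose first field is `0` when nothing succeeded, so the Python side can compare against the string `'0'`. A sketch of how the command string is assembled (showing only the one md5sum candidate visible in this hunk; the full list is elided in the diff):

```python
def build_md5_cmd(path):
    """Build the fallback chain: md5sum if readable, else '0 <path>'."""
    test = "[[ -r %s ]]" % path
    md5s = [
        "(%s && /usr/bin/md5sum %s 2>/dev/null)" % (test, path),
        # further fallbacks are elided in this hunk of the diff
    ]
    cmd = " || ".join(md5s)
    return '%s || (echo "0 %s")' % (cmd, path)

cmd = build_md5_cmd('/tmp/example')
assert cmd.endswith('(echo "0 /tmp/example")')

# the caller keeps only the first whitespace-separated field, so a
# missing file yields the sentinel value '0'
assert '0 /tmp/example'.split()[0] == '0'
```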
@ -770,9 +698,7 @@ class Runner(object):
cmd += ' && echo %s' % basetmp
result = self._low_level_exec_command(conn, cmd, None, sudoable=False)
cleaned = result.split("\n")[0].strip() + '/'
return cleaned
return result.split("\n")[0].strip() + '/'
# *****************************************************
@ -813,7 +739,6 @@ class Runner(object):
job_queue = multiprocessing.Manager().Queue()
[job_queue.put(i) for i in hosts]
result_queue = multiprocessing.Manager().Queue()
workers = []
@ -841,10 +766,9 @@ class Runner(object):
def _partition_results(self, results):
''' separate results by ones we contacted & ones we didn't '''
results2 = dict(contacted={}, dark={})
if results is None:
return None
results2 = dict(contacted={}, dark={})
for result in results:
host = result.host
@ -859,7 +783,6 @@ class Runner(object):
for host in self.inventory.list_hosts(self.pattern):
if not (host in results2['dark'] or host in results2['contacted']):
results2["dark"][host] = {}
return results2
# *****************************************************
@ -881,10 +804,12 @@ class Runner(object):
results = [ self._executor(h[1]) for h in hosts ]
return self._partition_results(results)
# *****************************************************
def run_async(self, time_limit):
''' Run this module asynchronously and return a poller. '''
self.background = time_limit
results = self.run()
return results, poller.AsyncPoller(results, self)

@ -18,17 +18,6 @@
################################################
import warnings
import traceback
import os
import time
import re
import shutil
import subprocess
import pipes
import socket
import random
import local
import paramiko_ssh
import ssh
@ -36,17 +25,17 @@ import ssh
class Connection(object):
''' Handles abstract connections to remote hosts '''
def __init__(self, runner, transport,sudo_user):
def __init__(self, runner):
self.runner = runner
self.transport = transport
self.sudo_user = sudo_user
def connect(self, host, port=None):
conn = None
if self.transport == 'local':
transport = self.runner.transport
if transport == 'local':
conn = local.LocalConnection(self.runner, host)
elif self.transport == 'paramiko':
elif transport == 'paramiko':
conn = paramiko_ssh.ParamikoConnection(self.runner, host, port)
elif self.transport == 'ssh':
elif transport == 'ssh':
conn = ssh.SSHConnection(self.runner, host, port)
if conn is None:
raise Exception("unsupported connection type")

@ -14,21 +14,11 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
################################################
import warnings
import traceback
import os
import time
import re
import shutil
import subprocess
import pipes
import socket
import random
from ansible import errors
class LocalConnection(object):
@ -43,8 +33,9 @@ class LocalConnection(object):
return self
def exec_command(self, cmd, tmp_path,sudo_user,sudoable=False):
def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False):
''' run a command on the local host '''
if self.runner.sudo and sudoable:
cmd = "sudo -s %s" % cmd
if self.runner.sudo_pass:
@ -60,6 +51,7 @@ class LocalConnection(object):
def put_file(self, in_path, out_path):
''' transfer a file from local to local '''
if not os.path.exists(in_path):
raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
try:
@ -77,5 +69,4 @@ class LocalConnection(object):
def close(self):
''' terminate the connection; nothing to do here '''
pass

@ -14,27 +14,27 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
################################################
import warnings
import traceback
import os
import time
import re
import shutil
import subprocess
import pipes
import socket
import random
from ansible import errors
# prevent paramiko warning noise
# see http://stackoverflow.com/questions/3920502/
# prevent paramiko warning noise -- see http://stackoverflow.com/questions/3920502/
HAVE_PARAMIKO=False
with warnings.catch_warnings():
warnings.simplefilter("ignore")
import paramiko
try:
import paramiko
HAVE_PARAMIKO=True
except ImportError:
pass
class ParamikoConnection(object):
''' SSH based connections with Paramiko '''
@ -47,23 +47,20 @@ class ParamikoConnection(object):
if port is None:
self.port = self.runner.remote_port
def _get_conn(self):
user = self.runner.remote_user
def connect(self):
''' activates the connection object '''
if not HAVE_PARAMIKO:
raise errors.AnsibleError("paramiko is not installed")
user = self.runner.remote_user
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
ssh.connect(
self.host,
username=user,
allow_agent=True,
look_for_keys=True,
key_filename=self.runner.private_key_file,
password=self.runner.remote_pass,
timeout=self.runner.timeout,
port=self.port
)
ssh.connect(self.host, username=user, allow_agent=True, look_for_keys=True,
key_filename=self.runner.private_key_file, password=self.runner.remote_pass,
timeout=self.runner.timeout, port=self.port)
except Exception, e:
msg = str(e)
if "PID check failed" in msg:
@ -75,17 +72,12 @@ class ParamikoConnection(object):
else:
raise errors.AnsibleConnectionFailed(msg)
return ssh
def connect(self):
''' connect to the remote host '''
self.ssh = self._get_conn()
self.ssh = ssh
return self
def exec_command(self, cmd, tmp_path,sudo_user,sudoable=False):
def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False):
''' run a command on the remote host '''
bufsize = 4096
chan = self.ssh.get_transport().open_session()
chan.get_pty()
@ -119,10 +111,7 @@ class ParamikoConnection(object):
except socket.timeout:
raise errors.AnsibleError('ssh timed out waiting for sudo.\n' + sudo_output)
stdin = chan.makefile('wb', bufsize)
stdout = chan.makefile('rb', bufsize)
stderr = '' # stderr goes to stdout when using a pty, so this will never output anything.
return stdin, stdout, stderr
return (chan.makefile('wb', bufsize), chan.makefile('rb', bufsize), '')
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
@ -132,21 +121,19 @@ class ParamikoConnection(object):
try:
sftp.put(in_path, out_path)
except IOError:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file to %s" % out_path)
sftp.close()
def fetch_file(self, in_path, out_path):
''' save a remote file to the specified path '''
sftp = self.ssh.open_sftp()
try:
sftp.get(in_path, out_path)
except IOError:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file from %s" % in_path)
sftp.close()
def close(self):
''' terminate the connection '''
self.ssh.close()

@ -16,17 +16,13 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
################################################
import os
import time
import subprocess
import shlex
import pipes
import random
import select
import fcntl
from ansible import errors
class SSHConnection(object):
@ -39,6 +35,7 @@ class SSHConnection(object):
def connect(self):
''' connect to the remote host '''
self.common_args = []
extra_args = os.getenv("ANSIBLE_SSH_ARGS", None)
if extra_args is not None:
@ -56,7 +53,7 @@ class SSHConnection(object):
return self
def exec_command(self, cmd, tmp_path,sudo_user,sudoable=False):
def exec_command(self, cmd, tmp_path, sudo_user,sudoable=False):
''' run a command on the remote host '''
ssh_cmd = ["ssh", "-tt", "-q"] + self.common_args + [self.host]
@ -126,8 +123,6 @@ class SSHConnection(object):
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
if not os.path.exists(in_path):
raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
sftp_cmd = ["sftp"] + self.common_args + [self.host]
p = subprocess.Popen(sftp_cmd, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
@ -136,5 +131,6 @@ class SSHConnection(object):
raise errors.AnsibleError("failed to transfer file from %s:\n%s\n%s" % (in_path, stdout, stderr))
def close(self):
''' terminate the connection '''
''' not applicable since we're executing openssh binaries '''
pass

@ -15,8 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
###############################################################
import sys
import os
import shlex
@ -25,7 +23,9 @@ import codecs
import jinja2
import yaml
import optparse
from operator import methodcaller
import operator
from ansible import errors
import ansible.constants as C
try:
import json
@ -37,86 +37,33 @@ try:
except ImportError:
from md5 import md5 as _md5
from ansible import errors
import ansible.constants as C
###############################################################
# UTILITY FUNCTIONS FOR COMMAND LINE TOOLS
###############################################################
def err(msg):
''' print an error message to stderr '''
print >> sys.stderr, msg
def exit(msg, rc=1):
''' quit with an error to stdout and a failure code '''
err(msg)
sys.exit(rc)
def bigjson(result):
''' format JSON output (uncompressed) '''
result2 = result.copy()
return json.dumps(result2, sort_keys=True, indent=4)
def jsonify(result, format=False):
''' format JSON output (compressed or uncompressed) '''
def smjson(result):
''' format JSON output (compressed) '''
result2 = result.copy()
return json.dumps(result2, sort_keys=True)
def task_start_msg(name, conditional):
# FIXME: move to callbacks code
if conditional:
return "NOTIFIED: [%s]" % name
if format:
return json.dumps(result2, sort_keys=True, indent=4)
else:
return "TASK: [%s]" % name
def regular_generic_msg(hostname, result, oneline, caption):
''' output the result of a module run that is not a command '''
if not oneline:
return "%s | %s >> %s\n" % (hostname, caption, bigjson(result))
else:
return "%s | %s >> %s\n" % (hostname, caption, smjson(result))
def regular_success_msg(hostname, result, oneline):
''' output the result of a successful module run '''
return regular_generic_msg(hostname, result, oneline, 'success')
def regular_failure_msg(hostname, result, oneline):
''' output the result of a failed module run '''
return regular_generic_msg(hostname, result, oneline, 'FAILED')
def command_generic_msg(hostname, result, oneline, caption):
''' output the result of a command run '''
rc = result.get('rc', '0')
stdout = result.get('stdout','')
stderr = result.get('stderr', '')
msg = result.get('msg', '')
if not oneline:
buf = "%s | %s | rc=%s >>\n" % (hostname, caption, result.get('rc',0))
if stdout:
buf += stdout
if stderr:
buf += stderr
if msg:
buf += msg
buf += "\n"
return buf
else:
if stderr:
return "%s | %s | rc=%s | (stdout) %s (stderr) %s\n" % (hostname, caption, rc, stdout, stderr)
else:
return "%s | %s | rc=%s | (stdout) %s\n" % (hostname, caption, rc, stdout)
def command_success_msg(hostname, result, oneline):
''' output from a successful command run '''
return command_generic_msg(hostname, result, oneline, 'success')
def command_failure_msg(hostname, result, oneline):
''' output from a failed command run '''
return command_generic_msg(hostname, result, oneline, 'FAILED')
return json.dumps(result2, sort_keys=True)
def write_tree_file(tree, hostname, buf):
''' write something into treedir/hostname '''
# TODO: might be nice to append playbook runs per host in a similar way
# in which case, we'd want append mode.
path = os.path.join(tree, hostname)
@ -126,33 +73,12 @@ def write_tree_file(tree, hostname, buf):
def is_failed(result):
''' is a given JSON result a failed result? '''
failed = False
rc = 0
if type(result) == dict:
failed = result.get('failed', 0)
rc = result.get('rc', 0)
if rc != 0:
return True
return failed
def host_report_msg(hostname, module_name, result, oneline):
''' summarize the JSON results for a particular host '''
buf = ''
failed = is_failed(result)
if module_name in [ 'command', 'shell', 'raw' ] and 'ansible_job_id' not in result:
if not failed:
buf = command_success_msg(hostname, result, oneline)
else:
buf = command_failure_msg(hostname, result, oneline)
else:
if not failed:
buf = regular_success_msg(hostname, result, oneline)
else:
buf = regular_failure_msg(hostname, result, oneline)
return buf
return ((result.get('rc', 0) != 0) or (result.get('failed', False) in [ True, 'True', 'true']))
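The collapsed `is_failed` above treats a nonzero `rc` or any of the accepted truthy spellings of the `failed` flag as a failure. A standalone copy, runnable on its own:

```python
def is_failed(result):
    ''' is a given JSON result a failed result? '''
    # nonzero return code, or a failed flag given as a bool or the
    # strings 'True' / 'true' (as modules historically emitted them)
    return ((result.get('rc', 0) != 0) or
            (result.get('failed', False) in [True, 'True', 'true']))

print(is_failed({'rc': 1}))                 # True
print(is_failed({'failed': 'true'}))        # True
print(is_failed({'rc': 0, 'changed': 1}))   # False
```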
def prepare_writeable_dir(tree):
''' make sure a directory exists and is writeable '''
if tree != '/':
tree = os.path.realpath(os.path.expanduser(tree))
if not os.path.exists(tree):
@ -165,6 +91,7 @@ def prepare_writeable_dir(tree):
def path_dwim(basedir, given):
''' make relative paths work like folks expect '''
if given.startswith("/"):
return given
elif given.startswith("~/"):
@ -174,21 +101,23 @@ def path_dwim(basedir, given):
def json_loads(data):
''' parse a JSON string and return a data structure '''
return json.loads(data)
def parse_json(data):
''' this version for module return data only '''
try:
return json.loads(data)
except:
# not JSON, but try "Baby JSON" which allows many of our modules to not
# require JSON and makes writing modules in bash much simpler
results = {}
try :
try:
tokens = shlex.split(data)
except:
print "failed to parse json: "+ data
raise;
raise
for t in tokens:
if t.find("=") == -1:
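The fallback path above parses "Baby JSON" — whitespace-separated `key=value` tokens — so shell-based modules need not emit real JSON. A self-contained sketch of the idea (the `parse_baby_json` name is hypothetical; Python 3 syntax):

```python
import json
import shlex

def parse_baby_json(data):
    # try real JSON first, then fall back to "Baby JSON":
    # space separated key=value tokens, quotes handled by shlex
    try:
        return json.loads(data)
    except ValueError:
        results = {}
        for t in shlex.split(data):
            if t.find("=") != -1:
                (k, v) = t.split("=", 1)
                results[k] = v
        return results

print(parse_baby_json('rc=0 changed=True msg="all good"'))
```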
@ -210,6 +139,7 @@ _LISTRE = re.compile(r"(\w+)\[(\d+)\]")
def _varLookup(name, vars):
''' find the contents of a possibly complex variable in vars. '''
path = name.split('.')
space = vars
for part in path:
@ -231,22 +161,17 @@ _KEYCRE = re.compile(r"\$(?P<complex>\{){0,1}((?(complex)[\w\.\[\]]+|\w+))(?(com
def varLookup(varname, vars):
''' helper function used by varReplace '''
m = _KEYCRE.search(varname)
if not m:
return None
return _varLookup(m.group(2), vars)
def varReplace(raw, vars):
'''Perform variable replacement of $vars
@param raw: String to perform substitution on.
@param vars: Dictionary of variables to replace. Key is variable name
(without $ prefix). Value is replacement string.
@return: Input raw string with substituted values.
'''
''' Perform variable replacement of $variables in string raw using vars dictionary '''
# this code originally from yum
done = [] # Completed chunks to return
done = [] # Completed chunks to return
while raw:
m = _KEYCRE.search(raw)
@ -269,22 +194,27 @@ def varReplace(raw, vars):
def _template(text, vars, setup_cache=None):
''' run a text buffer through the templating engine '''
vars = vars.copy()
vars['hostvars'] = setup_cache
text = varReplace(unicode(text), vars)
return text
return varReplace(unicode(text), vars)
def template(text, vars, setup_cache=None):
''' run a text buffer through the templating engine
until it no longer changes '''
''' run a text buffer through the templating engine until it no longer changes '''
prev_text = ''
depth = 0
while prev_text != text:
depth = depth + 1
if (depth > 20):
raise errors.AnsibleError("template recursion depth exceeded")
prev_text = text
text = _template(text, vars, setup_cache)
return text
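The loop above re-runs substitution until the text stops changing, with a depth guard against self-referencing variables. A simplified sketch under assumed names (`var_replace` here handles only flat `$name` variables, not the full `_KEYCRE` grammar):

```python
import re

_KEYCRE = re.compile(r"\$(\w+)")

def var_replace(raw, vars):
    # single pass: replace each $name with its value, leave unknowns alone
    def repl(m):
        return str(vars.get(m.group(1), m.group(0)))
    return _KEYCRE.sub(repl, raw)

def template(text, vars, max_depth=20):
    # iterate until a fixed point, bounded to catch $a -> $b -> $a cycles
    prev_text = ''
    depth = 0
    while prev_text != text:
        depth += 1
        if depth > max_depth:
            raise RuntimeError("template recursion depth exceeded")
        prev_text = text
        text = var_replace(text, vars)
    return text

# $greeting expands to $hello on pass one, then to hi on pass two
print(template("$greeting, $who", {"greeting": "$hello", "hello": "hi", "who": "world"}))
```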
def template_from_file(basedir, path, vars, setup_cache):
''' run a file through the templating engine '''
environment = jinja2.Environment(loader=jinja2.FileSystemLoader(basedir), trim_blocks=False)
data = codecs.open(path_dwim(basedir, path), encoding="utf8").read()
t = environment.from_string(data)
@ -297,10 +227,12 @@ def template_from_file(basedir, path, vars, setup_cache):
def parse_yaml(data):
''' convert a yaml string to a data structure '''
return yaml.load(data)
def parse_yaml_from_file(path):
''' convert a yaml file to a data structure '''
try:
data = file(path).read()
except IOError:
@ -309,6 +241,7 @@ def parse_yaml_from_file(path):
def parse_kv(args):
''' convert a string of key/value items to a dict '''
options = {}
if args is not None:
vargs = shlex.split(args, posix=True)
@ -320,6 +253,7 @@ def parse_kv(args):
def md5(filename):
''' Return MD5 hex digest of local file, or None if file is not present. '''
if not os.path.exists(filename):
return None
digest = _md5()
@ -332,18 +266,15 @@ def md5(filename):
infile.close()
return digest.hexdigest()
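The `md5` helper above reads the file in 64 KiB blocks so large files never have to fit in memory. An equivalent runnable sketch (modernized with `hashlib` and a context manager):

```python
import hashlib
import os
import tempfile

def md5(filename):
    ''' Return MD5 hex digest of local file, or None if file is not present. '''
    if not os.path.exists(filename):
        return None
    digest = hashlib.md5()
    blocksize = 64 * 1024
    with open(filename, 'rb') as infile:
        # feed the digest block by block instead of slurping the file
        block = infile.read(blocksize)
        while block:
            digest.update(block)
            block = infile.read(blocksize)
    return digest.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
print(md5(path))  # 5d41402abc4b2a76b9719d911017c592
os.unlink(path)
```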
####################################################################
# option handling code for /usr/bin/ansible and ansible-playbook
# below this line
# FIXME: move to separate file
class SortedOptParser(optparse.OptionParser):
'''Optparser which sorts the options by opt before outputting --help'''
def format_help(self, formatter=None):
self.option_list.sort(key=methodcaller('get_opt_string'))
self.option_list.sort(key=operator.methodcaller('get_opt_string'))
return optparse.OptionParser.format_help(self, formatter=None)
def base_parser(constants=C, usage="", output_opts=False, runas_opts=False, async_opts=False, connect_opts=False):
@ -400,4 +331,3 @@ def base_parser(constants=C, usage="", output_opts=False, runas_opts=False, asyn
return parser

@ -28,6 +28,10 @@ import subprocess
import syslog
import traceback
# added to stave off future warnings about apt api
import warnings;
warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning)
APT_PATH = "/usr/bin/apt-get"
APT = "DEBIAN_PRIORITY=critical %s" % APT_PATH
@ -86,11 +90,16 @@ def package_status(pkgname, version, cache):
#assume older version of python-apt is installed
return pkg.isInstalled, pkg.isUpgradable
def install(pkgspec, cache, upgrade=False, default_release=None, install_recommends=True):
def install(pkgspec, cache, upgrade=False, default_release=None, install_recommends=True, force=False):
name, version = package_split(pkgspec)
installed, upgradable = package_status(name, version, cache)
if not installed or (upgrade and upgradable):
cmd = "%s --option Dpkg::Options::=--force-confold -q -y install '%s'" % (APT, pkgspec)
if force:
force_yes = '--force-yes'
else:
force_yes = ''
cmd = "%s --option Dpkg::Options::=--force-confold -q -y %s install '%s'" % (APT, force_yes, pkgspec)
if default_release:
cmd += " -t '%s'" % (default_release,)
if not install_recommends:
@ -142,6 +151,7 @@ update_cache = params.get('update-cache', 'no')
purge = params.get('purge', 'no')
default_release = params.get('default-release', None)
install_recommends = params.get('install-recommends', 'yes')
force = params.get('force', 'no')
if state not in ['installed', 'latest', 'removed']:
fail_json(msg='invalid state')
@ -152,6 +162,9 @@ if update_cache not in ['yes', 'no']:
if purge not in ['yes', 'no']:
fail_json(msg='invalid value for purge (requires yes or no -- default is no)')
if force not in ['yes', 'no']:
fail_json(msg='invalid option for force (requires yes or no -- default is no)')
if package is None and update_cache != 'yes':
fail_json(msg='pkg=name and/or update-cache=yes is required')
@ -163,14 +176,19 @@ cache = apt.Cache()
if default_release:
apt_pkg.config['APT::Default-Release'] = default_release
# reopen cache w/ modified config
cache.open()
cache.open(progress=None)
if update_cache == 'yes':
cache.update()
cache.open()
cache.open(progress=None)
if package == None:
exit_json(changed=False)
if force == 'yes':
force_yes = True
else:
force_yes = False
if package.count('=') > 1:
fail_json(msg='invalid package spec')
@ -179,10 +197,11 @@ if state == 'latest':
fail_json(msg='version number inconsistent with state=latest')
changed = install(package, cache, upgrade=True,
default_release=default_release,
install_recommends=install_recommends)
install_recommends=install_recommends,
force=force_yes)
elif state == 'installed':
changed = install(package, cache, default_release=default_release,
install_recommends=install_recommends)
install_recommends=install_recommends,force=force_yes)
elif state == 'removed':
changed = remove(package, cache, purge == 'yes')
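The apt hunks above thread a `force` yes/no option through to an optional `--force-yes` on the `apt-get` command line. A sketch of just the command assembly (the `build_install_cmd` name is hypothetical; the flags are the ones visible in the diff plus apt's standard `--no-install-recommends`):

```python
APT_PATH = "/usr/bin/apt-get"
APT = "DEBIAN_PRIORITY=critical %s" % APT_PATH

def build_install_cmd(pkgspec, force=False, default_release=None,
                      install_recommends=True):
    # --force-yes is inserted only when force was requested
    force_yes = '--force-yes' if force else ''
    cmd = "%s --option Dpkg::Options::=--force-confold -q -y %s install '%s'" % (
        APT, force_yes, pkgspec)
    if default_release:
        cmd += " -t '%s'" % (default_release,)
    if not install_recommends:
        cmd += " --no-install-recommends"
    return cmd

print(build_install_cmd("nginx", force=True))
```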

@ -1,24 +1 @@
#!/usr/bin/python
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
### THIS FILE IS FOR REFERENCE OR FUTURE USE ###
# See lib/ansible/runner.py for implementation of the fetch functionality #
# this is a virtual module that is entirely implemented server side

@ -150,9 +150,6 @@ mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# presently unused, we always use -R (FIXME?)
recurse = params.get('recurse', 'false')
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)

@ -54,11 +54,10 @@ def add_group_info(kwargs):
def group_del(group):
cmd = [GROUPDEL, group]
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
else:
return False
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
rc = p.returncode
return (rc, out, err)
def group_add(group, **kwargs):
cmd = [GROUPADD]
@ -69,11 +68,10 @@ def group_add(group, **kwargs):
elif key == 'system' and kwargs[key] == 'yes':
cmd.append('-r')
cmd.append(group)
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
else:
return False
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
rc = p.returncode
return (rc, out, err)
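The group module hunks above replace bare `subprocess.call` boolean results with the `Popen.communicate()` pattern, so callers get `(rc, out, err)` and can surface stderr in `fail_json`. The pattern in isolation:

```python
import subprocess
import sys

def run_cmd(cmd):
    # capture stdout and stderr, then hand back (rc, out, err)
    # exactly as the reworked group_add/group_del/group_mod do above
    p = subprocess.Popen(cmd, shell=False,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (out, err) = p.communicate()
    return (p.returncode, out, err)

# portable demo: run the current interpreter instead of groupadd
(rc, out, err) = run_cmd([sys.executable, "-c", "print('hello')"])
print(rc, out.decode().strip())  # 0 hello
```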
def group_mod(group, **kwargs):
cmd = [GROUPMOD]
@ -84,13 +82,12 @@ def group_mod(group, **kwargs):
cmd.append('-g')
cmd.append(kwargs[key])
if len(cmd) == 1:
return False
return (None, '', '')
cmd.append(group)
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
else:
return False
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
rc = p.returncode
return (rc, out, err)
def group_exists(group):
try:
@ -156,18 +153,32 @@ if system not in ['yes', 'no']:
if name is None:
fail_json(msg='name is required')
changed = False
rc = 0
rc = None
out = ''
err = ''
result = {}
result['name'] = name
if state == 'absent':
if group_exists(name):
changed = group_del(name)
exit_json(name=name, changed=changed)
(rc, out, err) = group_del(name)
if rc != 0:
fail_json(name=name, msg=err)
elif state == 'present':
if not group_exists(name):
changed = group_add(name, gid=gid, system=system)
(rc, out, err) = group_add(name, gid=gid, system=system)
else:
changed = group_mod(name, gid=gid)
(rc, out, err) = group_mod(name, gid=gid)
exit_json(name=name, changed=changed)
if rc is not None and rc != 0:
fail_json(name=name, msg=err)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
exit_json(**result)
fail_json(name=name, msg='Unexpected position reached')

@ -0,0 +1,122 @@
#!/usr/bin/python
# (c) 2012, Mark Theunissen <mark.theunissen@gmail.com>
# Sponsored by Four Kitchens http://fourkitchens.com.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
try:
import json
except ImportError:
import simplejson as json
import sys
import os
import os.path
import shlex
import syslog
import re
# ===========================================
# Standard Ansible support methods.
#
def exit_json(rc=0, **kwargs):
print json.dumps(kwargs)
sys.exit(rc)
def fail_json(**kwargs):
kwargs["failed"] = True
exit_json(rc=1, **kwargs)
# ===========================================
# Standard Ansible argument parsing code.
#
if len(sys.argv) == 1:
fail_json(msg="the mysql module requires arguments (-a)")
argfile = sys.argv[1]
if not os.path.exists(argfile):
fail_json(msg="argument file not found")
args = open(argfile, "r").read()
items = shlex.split(args)
syslog.openlog("ansible-%s" % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, "Invoked with %s" % args)
if not len(items):
fail_json(msg="the mysql module requires arguments (-a)")
params = {}
for x in items:
(k, v) = x.split("=")
params[k] = v
# ===========================================
# MySQL module specific support methods.
#
# Import MySQLdb here instead of at the top, so we can use the fail_json function.
try:
import MySQLdb
except ImportError:
fail_json(msg="The Python MySQL package is missing")
def db_exists(db):
res = cursor.execute("SHOW DATABASES LIKE %s", (db,))
return bool(res)
def db_delete(db):
query = "DROP DATABASE %s" % db
cursor.execute(query)
return True
def db_create(db,):
query = "CREATE DATABASE %s" % db
res = cursor.execute(query)
return True
# ===========================================
# Module execution.
#
# Gather arguments into local variables.
loginuser = params.get("loginuser", "root")
loginpass = params.get("loginpass", "")
loginhost = params.get("loginhost", "localhost")
db = params.get("db", None)
state = params.get("state", "present")
if state not in ["present", "absent"]:
fail_json(msg="invalid state, must be 'present' or 'absent'")
if db is not None:
changed = False
try:
db_connection = MySQLdb.connect(host=loginhost, user=loginuser, passwd=loginpass, db="mysql")
cursor = db_connection.cursor()
except Exception as e:
fail_json(msg="unable to connect to database")
if db_exists(db):
if state == "absent":
changed = db_delete(db)
else:
if state == "present":
changed = db_create(db)
exit_json(changed=changed, db=db)
fail_json(msg="invalid parameters passed, db parameter required")

@ -0,0 +1,234 @@
#!/usr/bin/python
# (c) 2012, Mark Theunissen <mark.theunissen@gmail.com>
# Sponsored by Four Kitchens http://fourkitchens.com.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
try:
import json
except ImportError:
import simplejson as json
import sys
import os
import os.path
import shlex
import syslog
import re
# ===========================================
# Standard Ansible support methods.
#
def exit_json(rc=0, **kwargs):
print json.dumps(kwargs)
sys.exit(rc)
def fail_json(**kwargs):
kwargs["failed"] = True
exit_json(rc=1, **kwargs)
# ===========================================
# Standard Ansible argument parsing code.
#
if len(sys.argv) == 1:
fail_json(msg="the mysql module requires arguments (-a)")
argfile = sys.argv[1]
if not os.path.exists(argfile):
fail_json(msg="argument file not found")
args = open(argfile, "r").read()
items = shlex.split(args)
syslog.openlog("ansible-%s" % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, "Invoked with %s" % args)
if not len(items):
fail_json(msg="the mysql module requires arguments (-a)")
params = {}
for x in items:
(k, v) = x.split("=")
params[k] = v
# ===========================================
# MySQL module specific support methods.
#
# Import MySQLdb here instead of at the top, so we can use the fail_json function.
try:
import MySQLdb
except ImportError:
fail_json(msg="The Python MySQL package is missing")
def user_exists(user, host):
cursor.execute("SELECT count(*) FROM user WHERE user = %s AND host = %s", (user,host))
count = cursor.fetchone()
return count[0] > 0
def user_add(user, host, passwd, new_priv):
cursor.execute("CREATE USER %s@%s IDENTIFIED BY %s", (user,host,passwd))
if new_priv is not None:
for db_table, priv in new_priv.iteritems():
privileges_grant(user,host,db_table,priv)
return True
def user_mod(user, host, passwd, new_priv):
changed = False
# Handle passwords.
if passwd is not None:
cursor.execute("SELECT password FROM user WHERE user = %s AND host = %s", (user,host))
current_pass_hash = cursor.fetchone()
cursor.execute("SELECT PASSWORD(%s)", (passwd,))
new_pass_hash = cursor.fetchone()
if current_pass_hash[0] != new_pass_hash[0]:
cursor.execute("SET PASSWORD FOR %s@%s = PASSWORD(%s)", (user,host,passwd))
changed = True
# Handle privileges.
if new_priv is not None:
curr_priv = privileges_get(user,host)
# If the user has privileges on a db.table that doesn't appear at all in
# the new specification, then revoke all privileges on it.
for db_table, priv in curr_priv.iteritems():
if db_table not in new_priv:
privileges_revoke(user,host,db_table)
changed = True
# If the user doesn't currently have any privileges on a db.table, then
# we can perform a straight grant operation.
for db_table, priv in new_priv.iteritems():
if db_table not in curr_priv:
privileges_grant(user,host,db_table,priv)
changed = True
# If the db.table specification exists in both the user's current privileges
# and in the new privileges, then we need to see if there's a difference.
db_table_intersect = set(new_priv.keys()) & set(curr_priv.keys())
for db_table in db_table_intersect:
priv_diff = set(new_priv[db_table]) ^ set(curr_priv[db_table])
if (len(priv_diff) > 0):
privileges_revoke(user,host,db_table)
privileges_grant(user,host,db_table,new_priv[db_table])
changed = True
return changed
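The privilege reconciliation in `user_mod` above is three set operations: revoke tables dropped from the spec, grant brand-new ones, and re-grant where the privilege sets differ. A sketch of that logic alone, with the database calls stripped out (the `diff_privileges` name is an assumption):

```python
def diff_privileges(curr_priv, new_priv):
    # tables present now but absent from the new spec -> revoke all
    to_revoke = [t for t in curr_priv if t not in new_priv]
    # tables only in the new spec -> straight grant
    to_grant = [t for t in new_priv if t not in curr_priv]
    # tables in both -> re-grant only if the privilege sets differ
    to_update = [t for t in set(new_priv) & set(curr_priv)
                 if set(new_priv[t]) ^ set(curr_priv[t])]
    return to_revoke, to_grant, to_update

curr = {'mydb.*': ['SELECT'], 'old.*': ['ALL']}
new = {'mydb.*': ['SELECT', 'INSERT'], 'fresh.*': ['USAGE']}
print(diff_privileges(curr, new))
```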
def user_delete(user, host):
cursor.execute("DROP USER %s@%s", (user,host))
return True
def privileges_get(user,host):
""" MySQL doesn't have a better method of getting privileges aside from the
SHOW GRANTS query syntax, which requires us to then parse the returned string.
Here's an example of the string that is returned from MySQL:
GRANT USAGE ON *.* TO 'user'@'localhost' IDENTIFIED BY 'pass';
This function makes the query and returns a dictionary containing the results.
The dictionary format is the same as that returned by privileges_unpack() below.
"""
output = {}
cursor.execute("SHOW GRANTS FOR %s@%s", (user,host))
grants = cursor.fetchall()
for grant in grants:
res = re.match("GRANT\ (.+)\ ON\ (.+)\ TO", grant[0])
if res is None:
fail_json(msg="unable to parse the MySQL grant string")
privileges = res.group(1).split(", ")
privileges = ['ALL' if x=='ALL PRIVILEGES' else x for x in privileges]
db = res.group(2).replace('`', '')
output[db] = privileges
return output
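Since SHOW GRANTS only returns the raw GRANT statements, `privileges_get` above has to regex-parse each row. The parsing step in isolation (a hypothetical `parse_grant` helper using the same pattern):

```python
import re

def parse_grant(line):
    # pull the privilege list and the db.table out of one SHOW GRANTS row,
    # e.g. "GRANT USAGE ON *.* TO 'user'@'localhost' IDENTIFIED BY 'pass'"
    res = re.match(r"GRANT\ (.+)\ ON\ (.+)\ TO", line)
    if res is None:
        raise ValueError("unable to parse the MySQL grant string")
    privileges = res.group(1).split(", ")
    # normalize the ALL PRIVILEGES spelling to ALL
    privileges = ['ALL' if x == 'ALL PRIVILEGES' else x for x in privileges]
    db = res.group(2).replace('`', '')
    return db, privileges

print(parse_grant("GRANT USAGE ON *.* TO 'user'@'localhost' IDENTIFIED BY 'pass'"))
```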
def privileges_unpack(priv):
""" Take a privileges string, typically passed as a parameter, and unserialize
it into a dictionary, the same format as privileges_get() above. We have this
custom format to avoid using YAML/JSON strings inside YAML playbooks. Example
of a privileges string:
mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanother.*:ALL
The privilege USAGE stands for no privileges, so we add that in on *.* if it's
not specified in the string, as MySQL will always provide this by default.
"""
output = {}
for item in priv.split('/'):
pieces = item.split(':')
output[pieces[0]] = pieces[1].upper().split(',')
if '*.*' not in output:
output['*.*'] = ['USAGE']
return output
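The custom `db.table:PRIV,PRIV/...` format above exists so playbooks can express privileges without embedding JSON in YAML. A runnable copy of the unpack logic with a usage example:

```python
def privileges_unpack(priv):
    # mydb.*:INSERT,UPDATE/anotherdb.*:SELECT  ->  {table: [privs]},
    # with an implicit USAGE on *.* when the string does not name it
    output = {}
    for item in priv.split('/'):
        pieces = item.split(':')
        output[pieces[0]] = pieces[1].upper().split(',')
    if '*.*' not in output:
        output['*.*'] = ['USAGE']
    return output

print(privileges_unpack('mydb.*:insert,update/anotherdb.*:SELECT'))
```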
def privileges_revoke(user,host,db_table):
query = "REVOKE ALL PRIVILEGES ON %s FROM '%s'@'%s'" % (db_table,user,host)
cursor.execute(query)
def privileges_grant(user,host,db_table,priv):
priv_string = ",".join(priv)
query = "GRANT %s ON %s TO '%s'@'%s'" % (priv_string,db_table,user,host)
cursor.execute(query)
# ===========================================
# Module execution.
#
# Gather arguments into local variables.
loginuser = params.get("loginuser", "root")
loginpass = params.get("loginpass", "")
loginhost = params.get("loginhost", "localhost")
user = params.get("user", None)
passwd = params.get("passwd", None)
host = params.get("host", "localhost")
state = params.get("state", "present")
priv = params.get("priv", None)
if state not in ["present", "absent"]:
fail_json(msg="invalid state, must be 'present' or 'absent'")
if priv is not None:
try:
priv = privileges_unpack(priv)
except:
fail_json(msg="invalid privileges string")
if user is not None:
try:
db_connection = MySQLdb.connect(host=loginhost, user=loginuser, passwd=loginpass, db="mysql")
cursor = db_connection.cursor()
except Exception as e:
fail_json(msg="unable to connect to database")
if state == "present":
if user_exists(user, host):
changed = user_mod(user, host, passwd, priv)
else:
if passwd is None:
fail_json(msg="passwd parameter required when adding a user")
changed = user_add(user, host, passwd, priv)
elif state == "absent":
if user_exists(user, host):
changed = user_delete(user, host)
else:
changed = False
exit_json(changed=changed, user=user)
fail_json(msg="invalid parameters passed, user parameter required")

@ -1,23 +1 @@
#!/usr/bin/python
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# hey the Ansible raw module isn't really a remote transferred
# module. All the magic happens in Runner.py, see the web docs
# for more details.
# this is a virtual module that is entirely implemented server side

@@ -108,6 +108,8 @@ def _get_service_status(name):
running = True
elif cleaned_status_stdout.find("start") != -1 and cleaned_status_stdout.find("not") == -1:
running = True
elif 'could not access pid file' in cleaned_status_stdout:
running = False
# if the job status is still not known check it by special conditions
if running == None:

@@ -17,8 +17,6 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DEFAULT_ANSIBLE_SETUP = "/etc/ansible/setup"
import array
import fcntl
import glob
@@ -33,11 +31,6 @@ import subprocess
import traceback
import syslog
try:
from hashlib import md5 as _md5
except ImportError:
from md5 import md5 as _md5
try:
import selinux
HAVE_SELINUX=True
@@ -316,20 +309,6 @@ def ansible_facts():
get_service_facts(facts)
return facts
def md5(filename):
    ''' Return MD5 hex digest of local file, or None if file is not present. '''
    if not os.path.exists(filename):
        return None
    digest = _md5()
    blocksize = 64 * 1024
    infile = open(filename, 'rb')
    block = infile.read(blocksize)
    while block:
        digest.update(block)
        block = infile.read(blocksize)
    infile.close()
    return digest.hexdigest()
# ===========================================
# load config & template variables
@@ -355,21 +334,6 @@ except:
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % setup_options)
ansible_file = os.path.expandvars(setup_options.get('metadata', DEFAULT_ANSIBLE_SETUP))
ansible_dir = os.path.dirname(ansible_file)
# create the config dir if it doesn't exist
if not os.path.exists(ansible_dir):
os.makedirs(ansible_dir)
changed = False
md5sum = None
if not os.path.exists(ansible_file):
changed = True
else:
md5sum = md5(ansible_file)
# Get some basic facts in case facter or ohai are not installed
for (k, v) in ansible_facts().items():
setup_options["ansible_%s" % k] = v
@@ -409,23 +373,7 @@ if os.path.exists("/usr/bin/ohai"):
k2 = "ohai_%s" % k
setup_options[k2] = v
# write the template/settings file using
# instructions from server
f = open(ansible_file, "w+")
reformat = json.dumps(setup_options, sort_keys=True, indent=4)
f.write(reformat)
f.close()
md5sum2 = md5(ansible_file)
if md5sum != md5sum2:
changed = True
setup_result = {}
setup_result['written'] = ansible_file
setup_result['changed'] = changed
setup_result['md5sum'] = md5sum2
setup_result['ansible_facts'] = setup_options
# hack to keep --verbose from showing all the setup module results

@@ -1,6 +1,3 @@
# VIRTUAL
There is actually no actual shell module source, when you use 'shell' in ansible,
it runs the 'command' module with special arguments and it behaves differently.
See the command source and the comment "#USE_SHELL".
# There is actually no actual shell module source, when you use 'shell' in ansible,
# it runs the 'command' module with special arguments and it behaves differently.
# See the command source and the comment "#USE_SHELL".
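Since 'shell' is only an alias resolved server side, the dispatch amounts to rewriting the invocation before the command module runs. A hypothetical sketch of that rewrite (the function name and the exact marker-passing mechanism are assumptions for illustration; see Runner.py and the "#USE_SHELL" comment in the command module for the real code):

```python
def rewrite_shell_invocation(module_name, module_args):
    """'shell' is a virtual module: substitute the 'command' module and
    append a marker telling it to execute the arguments via a shell."""
    if module_name == 'shell':
        return 'command', module_args + ' #USE_SHELL'
    return module_name, module_args
```

Any other module name passes through unchanged, which is why no shell module source needs to exist on disk.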

@@ -1,24 +1 @@
#!/usr/bin/python
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# hey the Ansible template module isn't really a remote transferred
# module. All the magic happens in Runner.py making use of the
# copy module, and if not running from a playbook, also the 'slurp'
# module.
# this is a virtual module that is entirely implemented server side

@@ -54,6 +54,13 @@ def add_user_info(kwargs):
if user_exists(name):
kwargs['state'] = 'present'
info = user_info(name)
if info == False:
if 'failed' in kwargs:
kwargs['notice'] = "failed to look up user name: %s" % name
else:
kwargs['msg'] = "failed to look up user name: %s" % name
kwargs['failed'] = True
return kwargs
kwargs['uid'] = info[2]
kwargs['group'] = info[3]
kwargs['comment'] = info[4]
@@ -70,16 +77,15 @@ def add_user_info(kwargs):
def user_del(user, **kwargs):
cmd = [USERDEL]
for key in kwargs:
if key == 'force' and kwargs[key]:
if key == 'force' and kwargs[key] == 'yes':
cmd.append('-f')
elif key == 'remove' and kwargs[key]:
elif key == 'remove' and kwargs[key] == 'yes':
cmd.append('-r')
cmd.append(user)
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
else:
return False
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
rc = p.returncode
return (rc, out, err)
def user_add(user, **kwargs):
cmd = [USERADD]
@@ -119,11 +125,10 @@ def user_add(user, **kwargs):
elif key == 'system' and kwargs[key] == 'yes':
cmd.append('-r')
cmd.append(user)
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
else:
return False
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
rc = p.returncode
return (rc, out, err)
"""
Without spwd, we would have to resort to reading /etc/shadow
@@ -184,13 +189,12 @@ def user_mod(user, **kwargs):
cmd.append(kwargs[key])
# skip if no changes to be made
if len(cmd) == 1:
return False
return (None, '', '')
cmd.append(user)
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
else:
return False
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
rc = p.returncode
return (rc, out, err)
def group_exists(group):
try:
@@ -290,8 +294,8 @@ password = params.get('password', None)
# ===========================================
# following options are specific to userdel
force = params.get('force', False)
remove = params.get('remove', False)
force = params.get('force', 'no')
remove = params.get('remove', 'no')
# ===========================================
# following options are specific to useradd
@@ -310,30 +314,47 @@ if system not in ['yes', 'no']:
fail_json(msg='invalid system')
if append not in [ 'yes', 'no' ]:
fail_json(msg='invalid append')
if force not in ['yes', 'no']:
fail_json(msg="invalid option for force, requires yes or no (defaults to no)")
if remove not in ['yes', 'no']:
fail_json(msg="invalid option for remove, requires yes or no (defaults to no)")
if name is None:
fail_json(msg='name is required')
changed = False
rc = 0
rc = None
out = ''
err = ''
result = {}
result['name'] = name
if state == 'absent':
if user_exists(name):
changed = user_del(name, force=force, remove=remove)
exit_json(name=name, changed=changed, force=force, remove=remove)
(rc, out, err) = user_del(name, force=force, remove=remove)
if rc != 0:
fail_json(name=name, msg=err)
result['force'] = force
result['remove'] = remove
elif state == 'present':
if not user_exists(name):
changed = user_add(name, uid=uid, group=group, groups=groups,
comment=comment, home=home, shell=shell,
password=password, createhome=createhome,
system=system)
(rc, out, err) = user_add(name, uid=uid, group=group, groups=groups,
comment=comment, home=home, shell=shell,
password=password, createhome=createhome,
system=system)
else:
changed = user_mod(name, uid=uid, group=group, groups=groups,
comment=comment, home=home, shell=shell,
password=password, append=append)
(rc, out, err) = user_mod(name, uid=uid, group=group, groups=groups,
comment=comment, home=home, shell=shell,
password=password, append=append)
if rc is not None and rc != 0:
fail_json(name=name, msg=err)
if password is not None:
exit_json(name=name, changed=changed, password="XXXXXXXX")
else:
exit_json(name=name, changed=changed)
result['password'] = 'NOTLOGGINGPASSWORD'
fail_json(name=name, msg='Unexpected position reached')
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
exit_json(**result)
sys.exit(0)
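The user module hunks above replace `subprocess.call` (which only yields a return code) with `Popen.communicate`, so the `user_del`/`user_add`/`user_mod` helpers can hand back `(rc, out, err)` and the module can report stderr on failure. The pattern in isolation:

```python
import subprocess

def run_command(cmd):
    """Run cmd (a list of argv strings) without a shell and return the
    (rc, stdout, stderr) triple the rewritten user_* helpers now use
    for error reporting."""
    p = subprocess.Popen(cmd, shell=False,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (out, err) = p.communicate()  # wait for exit, capture both streams
    return (p.returncode, out, err)
```

An `rc` of `None` is reserved by `user_mod` for "no command was run", which is how the main body distinguishes "unchanged" from "succeeded".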

@@ -221,15 +221,14 @@ class TestInventory(unittest.TestCase):
def test_yaml(self):
inventory = self.yaml_inventory()
hosts = inventory.list_hosts()
print hosts
expected_hosts=['jupiter', 'saturn', 'mars', 'zeus', 'hera', 'poseidon', 'thor', 'odin', 'loki']
expected_hosts=['garfield', 'goofy', 'hera', 'jerry', 'jupiter', 'loki', 'mars', 'mickey', 'odie', 'odin', 'poseidon', 'saturn', 'thor', 'tom', 'zeus']
self.compare(hosts, expected_hosts)
def test_yaml_all(self):
inventory = self.yaml_inventory()
hosts = inventory.list_hosts('all')
expected_hosts=['jupiter', 'saturn', 'mars', 'zeus', 'hera', 'poseidon', 'thor', 'odin', 'loki']
expected_hosts=['garfield', 'goofy', 'hera', 'jerry', 'jupiter', 'loki', 'mars', 'mickey', 'odie', 'odin', 'poseidon', 'saturn', 'thor', 'tom', 'zeus']
self.compare(hosts, expected_hosts)
def test_yaml_norse(self):
@@ -323,3 +322,29 @@ class TestInventory(unittest.TestCase):
assert 'group_names' in vars
assert sorted(vars['group_names']) == [ 'norse', 'ruler' ]
def test_yaml_some_animals(self):
inventory = self.yaml_inventory()
hosts = inventory.list_hosts("cat:mouse")
expected_hosts=['garfield', 'jerry', 'mickey', 'tom']
self.compare(hosts, expected_hosts)
def test_yaml_comic(self):
inventory = self.yaml_inventory()
hosts = inventory.list_hosts("comic")
expected_hosts=['garfield', 'odie']
self.compare(hosts, expected_hosts)
def test_yaml_orange(self):
inventory = self.yaml_inventory()
hosts = inventory.list_hosts("orange")
expected_hosts=['garfield', 'goofy']
self.compare(hosts, expected_hosts)
def test_yaml_garfield_vars(self):
inventory = self.yaml_inventory()
vars = inventory.get_variables('garfield')
assert vars == {'ears': 'pointy',
'inventory_hostname': 'garfield',
'group_names': ['cat', 'comic', 'orange'],
'nose': 'pink'}

@@ -30,12 +30,9 @@ class TestCallbacks(object):
def on_start(self):
EVENTS.append('start')
def on_setup_primary(self):
def on_setup(self):
EVENTS.append([ 'primary_setup' ])
def on_setup_secondary(self):
EVENTS.append([ 'secondary_setup' ])
def on_skipped(self, host):
EVENTS.append([ 'skipped', [ host ]])
@@ -86,10 +83,7 @@ class TestCallbacks(object):
def on_unreachable(self, host, msg):
EVENTS.append([ 'failed/dark', [ host, msg ]])
def on_setup_primary(self):
pass
def on_setup_secondary(self):
def on_setup(self):
pass
def on_no_hosts(self):
@@ -144,7 +138,6 @@ class TestPlaybook(unittest.TestCase):
runner_callbacks = self.test_callbacks
)
result = self.playbook.run()
# print utils.bigjson(dict(events=EVENTS))
return result
def test_one(self):
@@ -153,19 +146,20 @@ class TestPlaybook(unittest.TestCase):
# if different, this will output to screen
print "**ACTUAL**"
print utils.bigjson(actual)
print utils.jsonify(actual, format=True)
expected = {
"127.0.0.2": {
"changed": 9,
"failures": 0,
"ok": 12,
"ok": 11,
"skipped": 1,
"unreachable": 0
}
}
print "**EXPECTED**"
print utils.bigjson(expected)
assert utils.bigjson(expected) == utils.bigjson(actual)
print utils.jsonify(expected, format=True)
assert utils.jsonify(expected, format=True) == utils.jsonify(actual,format=True)
# make sure the template module took options from the vars section
data = file('/tmp/ansible_test_data_template.out').read()

@@ -119,28 +119,6 @@ class TestRunner(unittest.TestCase):
])
assert result['changed'] == False
def test_template(self):
input_ = self._get_test_file('sample.j2')
metadata = self._get_test_file('metadata.json')
output = self._get_stage_file('sample.out')
result = self._run('template', [
"src=%s" % input_,
"dest=%s" % output,
"metadata=%s" % metadata
])
assert os.path.exists(output)
out = file(output).read()
assert out.find("duck") != -1
assert result['changed'] == True
assert 'md5sum' in result
assert 'failed' not in result
result = self._run('template', [
"src=%s" % input_,
"dest=%s" % output,
"metadata=%s" % metadata
])
assert result['changed'] == False
def test_command(self):
# test command module, change trigger, etc
result = self._run('command', [ "/bin/echo", "hi" ])
@@ -177,21 +155,6 @@ class TestRunner(unittest.TestCase):
assert len(result['stdout']) > 100000
assert result['stderr'] == ''
def test_setup(self):
output = self._get_stage_file('output.json')
result = self._run('setup', [ "metadata=%s" % output, "a=2", "b=3", "c=4" ])
assert 'failed' not in result
assert 'md5sum' in result
assert result['changed'] == True
outds = json.loads(file(output).read())
assert outds['c'] == '4'
# not bothering to test change hooks here since ohai/facter results change
# almost every time so changed is always true, this just tests that
# rewriting the file is ok
result = self._run('setup', [ "metadata=%s" % output, "a=2", "b=3", "c=4" ])
print "RAW RESULT=%s" % result
assert 'md5sum' in result
def test_async(self):
# test async launch and job status
# of any particular module
@@ -231,14 +194,13 @@ class TestRunner(unittest.TestCase):
def test_service(self):
# TODO: tests for the service module
pass
def test_assemble(self):
input = self._get_test_file('assemble.d')
metadata = self._get_test_file('metadata.json')
output = self._get_stage_file('sample.out')
result = self._run('assemble', [
"src=%s" % input,
"dest=%s" % output,
"metadata=%s" % metadata
])
assert os.path.exists(output)
out = file(output).read()
@@ -251,7 +213,6 @@ class TestRunner(unittest.TestCase):
result = self._run('assemble', [
"src=%s" % input,
"dest=%s" % output,
"metadata=%s" % metadata
])
assert result['changed'] == False

@@ -1,3 +0,0 @@
{
"answer" : "Where will we find a duck and a hose at this hour?"
}

@@ -1,5 +1,7 @@
---
# Below is the original way of defining hosts and groups.
- jupiter
- host: saturn
vars:
@@ -37,3 +39,44 @@
- group: multiple
hosts:
- saturn
# Here we demonstrate that groups can be defined on a per-host basis.
# When managing a large set of systems this format makes it easier to
# ensure each of the systems is defined in a set of groups, compared
# to the standard group definitions, where a host may need to be added
# to multiple disconnected groups.
- host: garfield
groups: [ comic, cat, orange ]
vars:
- nose: pink
- host: odie
groups: [ comic, dog, yellow ]
- host: mickey
groups: [ cartoon, mouse, red ]
- host: goofy
groups: [ cartoon, dog, orange ]
- host: tom
groups: [ cartoon, cat, gray ]
- host: jerry
groups: [ cartoon, mouse, brown ]
- group: cat
vars:
- ears: pointy
- nose: black
- group: dog
vars:
- ears: flappy
- nose: black
- group: mouse
vars:
- ears: round
- nose: black
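The per-host `groups:` entries above are the inverse of the standard group definitions; internally the inventory just folds them back into the usual group-to-hosts mapping (the same hash the new `groups` variable exposes). A sketch of that inversion (function name is illustrative, not the inventory parser's actual code):

```python
def invert_host_groups(hosts):
    """Given [{'host': name, 'groups': [...]}, ...] (the inverted
    per-host form), build the usual group -> [hosts] mapping."""
    groups = {}
    for entry in hosts:
        for group in entry.get('groups', []):
            # first host to mention a group creates it
            groups.setdefault(group, []).append(entry['host'])
    return groups
```

Both forms can coexist in one YAML inventory; entries defined either way end up merged into the same group structures.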
