diff --git a/.gitignore b/.gitignore index 4f12e1da9a0..d2275493129 100644 --- a/.gitignore +++ b/.gitignore @@ -39,3 +39,8 @@ debian/ *.swp *.swo credentials.yml +# test output +.coverage +results.xml +coverage.xml +/test/units/cover-html diff --git a/CHANGELOG.md b/CHANGELOG.md index 60330740156..f16b3d3ca69 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -6,17 +6,91 @@ Ansible Changes By Release Major features/changes: * The deprecated legacy variable templating system has been finally removed. Use {{ foo }} always not $foo or ${foo}. +* Any data file can also be JSON. Use sparingly -- with great power comes great responsibility. Starting a file with "{" or "[" denotes JSON. +* Added 'gathering' param for ansible.cfg to change the default gather_facts policy. +* Accelerate improvements: + - multiple users can connect with different keys, when `accelerate_multi_key = yes` is specified in the ansible.cfg. + - daemon lifetime is now based on the time from the last activity, not the time from the daemon's launch. +* ansible-playbook now accepts --force-handlers to run handlers even if tasks result in failures + New Modules: -* packaging: cpanm +* files: replace +* packaging: cpanm (Perl) +* packaging: portage +* packaging: composer (PHP) +* packaging: homebrew_tap (OS X) +* packaging: homebrew_cask (OS X) +* packaging: apt_rpm +* packaging: layman +* monitoring: logentries +* monitoring: rollbar_deployment +* monitoring: librato_annotation +* notification: nexmo (SMS) +* notification: twilio (SMS) +* notification: slack (Slack.com) +* notification: typetalk (Typetalk.in) +* notification: sns (Amazon) * system: debconf +* system: ufw +* system: locale_gen +* system: alternatives +* system: capabilities +* net_infrastructure: bigip_facts +* net_infrastructure: dnssimple +* net_infrastructure: lldp +* web_infrastructure: apache2_module +* cloud: digital_ocean_domain +* cloud: digital_ocean_sshkey +* cloud: rax_identity +* cloud: rax_cbs (cloud block storage) +* cloud: rax_cbs_attachments +* cloud: ec2_asg (configure autoscaling groups) +* cloud: ec2_scaling_policy +* cloud: ec2_metric_alarm Other notable changes: -* info pending +* example callback plugin added for hipchat +* added example inventory plugin for vcenter/vsphere +* added example inventory plugin for doing really trivial inventory from SSH config files +* libvirt module now supports destroyed and paused as states +* s3 module can specify metadata +* security token additions to ec2 modules +* setup module code moved into module_utils/, facts now accessible by other modules +* synchronize module sets relative dirs based on inventory or role path +* misc bugfixes and other parameters +* the ec2_key module now has wait/wait_timeout parameters +* added version_compare filter (see docs) +* added ability for module documentation YAML to utilize shared module snippets for common args +* apt module now accepts "deb" parameter to install local dpkg files +* regex_replace filter plugin added +* ... to be filled in from changelogs ... +* + +## 1.5.4 "Love Walks In" - April 1, 2014 + +- Security fix for safe_eval, which further hardens the checking of the evaluation function. +- Changing order of variable precedence for system facts, to ensure that inventory variables take precedence over any facts that may be set on a host.
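For illustration of the JSON note above: a vars file whose first character is "{" (or "[") is parsed as JSON, so a hypothetical group_vars/webservers file could be written as

    {
        "http_port": 80,
        "max_clients": 200
    }

instead of the equivalent YAML. The file name and values here are made up for the example.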
+ +## 1.5.3 "Love Walks In" - March 13, 2014 + +- Fix validate_certs and run_command errors from previous release +- Fixes to the git module related to host key checking + +## 1.5.2 "Love Walks In" - March 11, 2014 + +- Fix module errors in airbrake and apt from previous release + +## 1.5.1 "Love Walks In" - March 10, 2014 + +- Force command action to not be executed by the shell unless specifically enabled. +- Validate SSL certs accessed through urllib*. +- Implement new default cipher class AES256 in ansible-vault. +- Misc bug fixes. -## 1.5 "Love Walks In" - Feb 28, 2014 +## 1.5 "Love Walks In" - February 28, 2014 Major features/changes: diff --git a/CODING_GUIDELINES.md b/CODING_GUIDELINES.md index 7860fb24814..2da07681cee 100644 --- a/CODING_GUIDELINES.md +++ b/CODING_GUIDELINES.md @@ -66,8 +66,10 @@ Functions and Methods * In general, functions should not be 'too long' and should describe a meaningful amount of work * When code gets too nested, that's usually the sign the loop body could benefit from being a function + * Parts of our existing code are not the best examples of this at times. * Functions should have names that describe what they do, along with docstrings * Functions should be named with_underscores + * "Don't repeat yourself" is generally a good philosophy Variables ========= @@ -76,6 +78,16 @@ Variables * Ansible python code uses identifiers like 'ClassesLikeThis and variables_like_this * Module parameters should also use_underscores and not runtogether +Module Security +=============== + + * Modules must take steps to avoid passing user input from the shell and always check return codes + * always use module.run_command instead of subprocess or Popen or os.system -- this is mandatory + * if you use need the shell you must pass use_unsafe_shell=True to module.run_command + * if you do not need the shell, avoid using the shell + * any variables that can come from the user input with use_unsafe_shell=True must be wrapped by pipes.quote(x) + * downloads of https:// resource urls must import module_utils.urls and use the fetch_url method + Misc Preferences ================ @@ -149,16 +161,19 @@ All contributions to the core repo should preserve original licenses and new con Module Documentation ==================== -All module pull requests must include a DOCUMENTATION docstring (YAML format, see other modules for examples) as well as an EXAMPLES docstring, which -is free form. +All module pull requests must include a DOCUMENTATION docstring (YAML format, +see other modules for examples) as well as an EXAMPLES docstring, which is free form. -When adding new modules, any new parameter must have a "version_added" attribute. When submitting a new module, the module should have a "version_added" -attribute in the pull request as well, set to the current development version. +When adding new modules, any new parameter must have a "version_added" attribute. +When submitting a new module, the module should have a "version_added" attribute in the +pull request as well, set to the current development version. Be sure to check grammar and spelling. -It's frequently the case that modules get submitted with YAML that isn't valid, so you can run "make webdocs" from the checkout to preview your module's documentation. -If it fails to build, take a look at your DOCUMENTATION string or you might have a Python syntax error in there too. 
+It's frequently the case that modules get submitted with YAML that isn't valid, +so you can run "make webdocs" from the checkout to preview your module's documentation. +If it fails to build, take a look at your DOCUMENTATION string +or you might have a Python syntax error in there too. Python Imports ============== diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e980b6eb7da..ca27dda2d4f 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -29,13 +29,9 @@ content up on places like github to share with others. Sharing A Feature Idea ---------------------- -If you have an idea for a new feature, you can open a new ticket at -[github.com/ansible/ansible](https://github.com/ansible/ansible), though in general we like to -talk about feature ideas first and bring in lots of people into the discussion. Consider stopping -by the -[Ansible project mailing list](https://groups.google.com/forum/#!forum/ansible-project) ([Subscribe](https://groups.google.com/forum/#!forum/ansible-project/join)) -or #ansible on irc.freenode.net. There is an overview about more mailing lists -later in this document. +Ideas are very welcome and the best place to share them is the [Ansible project mailing list](https://groups.google.com/forum/#!forum/ansible-project) ([Subscribe](https://groups.google.com/forum/#!forum/ansible-project/join)) or #ansible on irc.freenode.net. + +While you can file a feature request on GitHub, pull requests are a much better way to get your feature added than submitting a feature request. Open source is all about itch scratching, and it's less likely that someone else will have the same itches as yourself. We keep code reasonably simple on purpose so it's easy to dive in and make additions, but be sure to read the "Contributing Code" section below too -- as it doesn't hurt to have a discussion about a feature first -- we're inclined to have preferences about how incoming features might be implemented, and that can save confusion later. Helping with Documentation -------------------------- @@ -58,18 +54,24 @@ The Ansible project keeps it’s source on github at and takes contributions through [github pull requests](https://help.github.com/articles/using-pull-requests). -It is usually a good idea to join the ansible-devel list to discuss any large features prior to submission, and this -especially helps in avoiding duplicate work or efforts where we decide, upon seeing a pull request for the first -time, that revisions are needed. (This is not usually needed for module development) +It is usually a good idea to join the ansible-devel list to discuss any large features prior to submission, and this especially helps in avoiding duplicate work or efforts where we decide, upon seeing a pull request for the first time, that revisions are needed. (This is not usually needed for module development) + +Note that we do keep Ansible to a particular aesthetic, so if you are unclear about whether a feature +is a good fit or not, having the discussion on the development list is often a lot easier than having +to modify a pull request later. When submitting patches, be sure to run the unit tests first “make tests” and always use “git rebase” vs “git merge” (aliasing git pull to git pull --rebase is a great idea) to -avoid merge commits in your submissions. We will require resubmission of pull requests that -contain merge commits. +avoid merge commits in your submissions. There are also integration tests that can be run in the "tests/integration" directory. 
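Putting the "make tests" and rebase advice together, a typical submission flow looks roughly like this (the branch name is hypothetical):

    $ git checkout -b my-feature devel    # topic branch, based on 'devel'
    # ...edit, commit...
    $ make tests                          # run the unit tests before submitting
    $ git pull --rebase                   # instead of plain 'git pull' / 'git merge'
    $ git push origin my-feature          # then open the pull request on GitHub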
+ +In order to keep the history clean and better audit incoming code, we will require resubmission of pull requests that contain merge commits. Use "git pull --rebase" vs "git pull" and "git rebase" vs "git merge". Also be sure to use topic branches to keep your additions on different branches, such that they won't pick up stray commits later. + +We’ll then review your contributions and engage with you about questions and so on. + +As we have a very large and active community, it may take a while to get your contributions +in! See the notes about priorities in a later section for understanding our work queue. -We’ll then review your contributions and engage with you about questions and so on. Please be -advised we have a very large and active community, so it may take awhile to get your contributions -in! Patches should be made against the 'devel' branch. +Patches should be made against the 'devel' branch. Contributions can be for new features like modules, or to fix bugs you or others have found. If you are interested in writing new modules to be included in the core Ansible distribution, please refer @@ -87,6 +89,8 @@ required. You're now live! Reporting A Bug --------------- +Ansible practices responsible disclosure - if this is a security-related bug, email security@ansible.com instead of filing a ticket or posting to the Google Group and you will receive a prompt response. + Bugs should be reported to [github.com/ansible/ansible](http://github.com/ansible/ansible) after signing up for a free github account. Before reporting a bug, please use the bug/issue search to see if the issue has already been reported. @@ -108,6 +112,44 @@ the mailing list or IRC first. As we are a very high volume project, if you det you do have a bug, please be sure to open the issue yourself to ensure we have a record of it. Don’t rely on someone else in the community to file the bug report for you. +It may take some time to get to your report, see "A Note About Priorities" below. + +A Note About Priorities +======================= + +Ansible was one of the top 5 projects with the most OSS contributors on GitHub in 2013, and well over +600 people have added code to the project. As a result, we have a LOT of incoming activity to process. + +In the interest of transparency, we're telling you how we do this. + +In our bug tracker you'll notice some labels - P1, P2, P3, P4, and P5. These are our internal +priority orders that we use to sort tickets. + +With some exceptions for easy merges (like documentation typos for instance), +we're going to spend most of our time working on P1 and P2 items first, including pull requests. +These usually relate to important +bugs or features affecting large segments of the userbase. So if you see something categorized +"P3" or "P4", and it's not appearing to get a lot of immediate attention, this is why. + +These labels don't really have definitions - they are a simple ordering. However, something +affecting a major module (yum, apt, etc) is likely to be prioritized higher than a module +affecting a smaller number of users. + +Since we place a strong emphasis on testing and code review, it may take a few months for a minor feature to get merged. + +Don't worry though -- we'll also take periodic sweeps through the lower priority queues and give +them some attention as well, particularly in the area of new module changes. So it doesn't necessarily +mean that we'll be exhausting all of the higher-priority queues before getting to your ticket.
+ +Release Numbering +================= + +Releases ending in ".0" are major releases and this is where all new features land. Releases ending +in another integer, like "0.X.1" and "0.X.2" are dot releases, and these are only going to contain +bugfixes. Typically we don't do dot releases for minor releases, but may occasionally decide to cut +dot releases containing a large number of smaller fixes if it's still a fairly long time before +the next release comes out. + Online Resources ================ @@ -165,11 +207,10 @@ we post with an @ansible.com address. Community Code of Conduct ------------------------- -Ansible’s community welcomes users of all types, backgrounds, and skill levels. Please -treat others as you expect to be treated, keep discussions positive, and avoid discrimination -or engaging in controversial debates (except vi vs emacs is cool). Posts to mailing lists -should remain focused around Ansible and IT automation. Abuse of these community guidelines -will not be tolerated and may result in banning from community resources. +Ansible’s community welcomes users of all types, backgrounds, and skill levels. Please +treat others as you expect to be treated, keep discussions positive, and avoid discrimination, profanity, allegations of Cthulhu worship, or engaging in controversial debates (except vi vs emacs is cool). + +Posts to mailing lists should remain focused around Ansible and IT automation. Abuse of these community guidelines will not be tolerated and may result in banning from community resources. Contributors License Agreement ------------------------------ diff --git a/Makefile b/Makefile index 982cd143b27..dc2a910630a 100644 --- a/Makefile +++ b/Makefile @@ -20,7 +20,7 @@ OS = $(shell uname -s) # Manpages are currently built with asciidoc -- would like to move to markdown # This doesn't evaluate until it's called. The -D argument is the # directory of the target file ($@), kinda like `dirname`. -MANPAGES := docs/man/man1/ansible.1 docs/man/man1/ansible-playbook.1 docs/man/man1/ansible-pull.1 docs/man/man1/ansible-doc.1 +MANPAGES := docs/man/man1/ansible.1 docs/man/man1/ansible-playbook.1 docs/man/man1/ansible-pull.1 docs/man/man1/ansible-doc.1 docs/man/man1/ansible-galaxy.1 docs/man/man1/ansible-vault.1 ifneq ($(shell which a2x 2>/dev/null),) ASCII2MAN = a2x -D $(dir $@) -d manpage -f manpage $< ASCII2HTMLMAN = a2x -D docs/html/man/ -d manpage -f xhtml @@ -172,3 +172,4 @@ deb: debian webdocs: $(MANPAGES) (cd docsite/; make docs) +docs: $(MANPAGES) diff --git a/README.md b/README.md index 853025911f9..5c6ecdecb22 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,5 @@ -[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible) +[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible) [![PyPI downloads](https://pypip.in/d/ansible/badge.png)](https://pypi.python.org/pypi/ansible) + Ansible ======= diff --git a/RELEASES.txt b/RELEASES.txt index c909346c7e7..03f71e37efa 100644 --- a/RELEASES.txt +++ b/RELEASES.txt @@ -14,6 +14,11 @@ Active Development Previous ++++++++ +======= +1.6 "The Cradle Will Rock" - NEXT +1.5.3 "Love Walks In" -------- 03-13-2014 +1.5.2 "Love Walks In" -------- 03-11-2014 +1.5.1 "Love Walks In" -------- 03-10-2014 1.5 "Love Walks In" -------- 02-28-2014 1.4.5 "Could This Be Magic?" - 02-12-2014 1.4.4 "Could This Be Magic?" 
- 01-06-2014 diff --git a/bin/ansible b/bin/ansible index 0189355ddbf..1e2540fafb7 100755 --- a/bin/ansible +++ b/bin/ansible @@ -128,14 +128,11 @@ class Cli(object): this_path = os.path.expanduser(options.vault_password_file) try: f = open(this_path, "rb") - tmp_vault_pass=f.read() + tmp_vault_pass=f.read().strip() f.close() except (OSError, IOError), e: raise errors.AnsibleError("Could not read %s: %s" % (this_path, e)) - # get rid of newline chars - tmp_vault_pass = tmp_vault_pass.strip() - if not options.ask_vault_pass: vault_pass = tmp_vault_pass @@ -160,8 +157,6 @@ class Cli(object): if options.su_user or options.ask_su_pass: options.su = True - elif options.sudo_user or options.ask_sudo_pass: - options.sudo = True options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER options.su_user = options.su_user or C.DEFAULT_SU_USER if options.tree: diff --git a/bin/ansible-doc b/bin/ansible-doc index 7e9a2eb81f5..a77fff81302 100755 --- a/bin/ansible-doc +++ b/bin/ansible-doc @@ -98,7 +98,7 @@ def get_man_text(doc): if 'option_keys' in doc and len(doc['option_keys']) > 0: text.append("Options (= is mandatory):\n") - for o in doc['option_keys']: + for o in sorted(doc['option_keys']): opt = doc['options'][o] if opt.get('required', False): @@ -146,10 +146,15 @@ def get_snippet_text(doc): text.append("- name: %s" % (desc)) text.append(" action: %s" % (doc['module'])) - for o in doc['options']: + for o in sorted(doc['options'].keys()): opt = doc['options'][o] desc = tty_ify("".join(opt['description'])) - s = o + "=" + + if opt.get('required', False): + s = o + "=" + else: + s = o + text.append(" %-20s # %s" % (s, desc)) text.append('') diff --git a/bin/ansible-galaxy b/bin/ansible-galaxy index a528b950f83..0a6938ccce4 100755 --- a/bin/ansible-galaxy +++ b/bin/ansible-galaxy @@ -170,7 +170,7 @@ def build_option_parser(action): parser.set_usage("usage: %prog init [options] role_name") parser.add_option( '-p', '--init-path', dest='init_path', default="./", - help='The path in which the skeleton role will be created.' + help='The path in which the skeleton role will be created. ' 'The default is the current working directory.') elif action == "install": parser.set_usage("usage: %prog install [options] [-r FILE | role_name(s)[,version] | tar_file(s)]") @@ -181,7 +181,7 @@ def build_option_parser(action): '-n', '--no-deps', dest='no_deps', action='store_true', default=False, help='Don\'t download roles listed as dependencies') parser.add_option( - '-r', '--role-file', dest='role_file', + '-r', '--role-file', dest='role_file', help='A file containing a list of roles to be imported') elif action == "remove": parser.set_usage("usage: %prog remove role1 role2 ...") @@ -192,7 +192,7 @@ def build_option_parser(action): if action != "init": parser.add_option( '-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH, - help='The path to the directory containing your roles.' + help='The path to the directory containing your roles. 
' 'The default is the roles_path configured in your ' 'ansible.cfg file (/etc/ansible/roles if not configured)') @@ -655,7 +655,7 @@ def execute_install(args, options, parser): if role_name == "" or role_name.startswith("#"): continue - elif role_name.find(',') != -1: + elif ',' in role_name: role_name,role_version = role_name.split(',',1) role_name = role_name.strip() role_version = role_version.strip() diff --git a/bin/ansible-playbook b/bin/ansible-playbook index 5aa020a9245..f91c86ef4ba 100755 --- a/bin/ansible-playbook +++ b/bin/ansible-playbook @@ -78,6 +78,8 @@ def main(args): help="one-step-at-a-time: confirm each task before running") parser.add_option('--start-at-task', dest='start_at', help="start the playbook at the task matching this name") + parser.add_option('--force-handlers', dest='force_handlers', action='store_true', + help="run handlers even if a task fails") options, args = parser.parse_args(args) @@ -122,14 +124,11 @@ def main(args): this_path = os.path.expanduser(options.vault_password_file) try: f = open(this_path, "rb") - tmp_vault_pass=f.read() + tmp_vault_pass=f.read().strip() f.close() except (OSError, IOError), e: raise errors.AnsibleError("Could not read %s: %s" % (this_path, e)) - # get rid of newline chars - tmp_vault_pass = tmp_vault_pass.strip() - if not options.ask_vault_pass: vault_pass = tmp_vault_pass @@ -137,7 +136,7 @@ def main(args): for extra_vars_opt in options.extra_vars: if extra_vars_opt.startswith("@"): # Argument is a YAML file (JSON is a subset of YAML) - extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml_from_file(extra_vars_opt[1:])) + extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml_from_file(extra_vars_opt[1:], vault_password=vault_pass)) elif extra_vars_opt and extra_vars_opt[0] in '[{': # Arguments as YAML extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml(extra_vars_opt)) @@ -194,7 +193,8 @@ def main(args): su=options.su, su_pass=su_pass, su_user=options.su_user, - vault_password=vault_pass + vault_password=vault_pass, + force_handlers=options.force_handlers ) if options.listhosts or options.listtasks or options.syntax: @@ -206,12 +206,12 @@ def main(args): playnum += 1 play = ansible.playbook.Play(pb, play_ds, play_basedir) label = play.name - if options.listhosts: - hosts = pb.inventory.list_hosts(play.hosts) - print ' play #%d (%s): host count=%d' % (playnum, label, len(hosts)) - for host in hosts: - print ' %s' % host - if options.listtasks: + hosts = pb.inventory.list_hosts(play.hosts) + + # Filter all tasks by given tags + if pb.only_tags != 'all': + if options.subset and not hosts: + continue matched_tags, unmatched_tags = play.compare_tags(pb.only_tags) # Remove skipped tasks @@ -223,6 +223,13 @@ def main(args): if unknown_tags: continue + + if options.listhosts: + print ' play #%d (%s): host count=%d' % (playnum, label, len(hosts)) + for host in hosts: + print ' %s' % host + + if options.listtasks: print ' play #%d (%s):' % (playnum, label) for task in play.tasks(): diff --git a/bin/ansible-pull b/bin/ansible-pull index e6c5712f75a..83f281463f5 100755 --- a/bin/ansible-pull +++ b/bin/ansible-pull @@ -44,6 +44,8 @@ import subprocess import sys import datetime import socket +import random +import time from ansible import utils from ansible.utils import cmd_functions from ansible import errors @@ -102,6 +104,8 @@ def main(args): help='purge checkout after playbook run') parser.add_option('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true', help='only run the playbook if 
the repository has been updated') + parser.add_option('-s', '--sleep', dest='sleep', default=None, + help='sleep for random interval (between 0 and n seconds) before starting. This is a useful way to disperse git requests') parser.add_option('-f', '--force', dest='force', default=False, action='store_true', help='run the playbook even if the repository could ' @@ -117,6 +121,8 @@ 'Defaults to behavior of repository module.') parser.add_option('-i', '--inventory-file', dest='inventory', help="location of the inventory host file") + parser.add_option('-e', '--extra-vars', dest="extra_vars", action="append", + help="set additional variables as key=value or YAML/JSON", default=[]) parser.add_option('-v', '--verbose', default=False, action="callback", callback=increment_debug, help='Pass -vvvv to ansible-playbook') @@ -126,6 +132,8 @@ 'Default is %s.' % DEFAULT_REPO_TYPE) parser.add_option('--vault-password-file', dest='vault_password_file', help="vault password file") + parser.add_option('-K', '--ask-sudo-pass', default=False, dest='ask_sudo_pass', action='store_true', + help='ask for sudo password') options, args = parser.parse_args(args) hostname = socket.getfqdn() @@ -162,7 +170,18 @@ inv_opts, base_opts, options.module_name, repo_opts ) - # RUN THE CHECKOUT COMMAND + if options.sleep: + try: + secs = random.randint(0, int(options.sleep)) + except ValueError: + parser.error("%s is not a number." % options.sleep) + return 1 + + print >>sys.stderr, "Sleeping for %d seconds..." % secs + time.sleep(secs) + + + # RUN THE CHECKOUT COMMAND rc, out, err = cmd_functions.run_cmd(cmd, live=True) if rc != 0: @@ -185,6 +204,10 @@ cmd += " --vault-password-file=%s" % options.vault_password_file if options.inventory: cmd += ' -i "%s"' % options.inventory + for ev in options.extra_vars: + cmd += ' -e "%s"' % ev + if options.ask_sudo_pass: + cmd += ' -K' os.chdir(options.dest) # RUN THE PLAYBOOK COMMAND diff --git a/bin/ansible-vault b/bin/ansible-vault index 902653d40bf..1c2e48a0634 100755 --- a/bin/ansible-vault +++ b/bin/ansible-vault @@ -52,7 +52,7 @@ def build_option_parser(action): sys.exit() # options for all actions - #parser.add_option('-c', '--cipher', dest='cipher', default="AES", help="cipher to use") + #parser.add_option('-c', '--cipher', dest='cipher', default="AES256", help="cipher to use") parser.add_option('--debug', dest='debug', action="store_true", help="debug") parser.add_option('--vault-password-file', dest='password_file', help="vault password file") @@ -105,7 +105,6 @@ def _read_password(filename): f = open(filename, "rb") data = f.read() f.close - # get rid of newline chars data = data.strip() return data @@ -119,7 +118,7 @@ def execute_create(args, options, parser): else: password = _read_password(options.password_file) - cipher = 'AES' + cipher = 'AES256' if hasattr(options, 'cipher'): cipher = options.cipher @@ -133,7 +132,7 @@ def execute_decrypt(args, options, parser): else: password = _read_password(options.password_file) - cipher = 'AES' + cipher = 'AES256' if hasattr(options, 'cipher'): cipher = options.cipher @@ -161,15 +160,12 @@ def execute_edit(args, options, parser): def execute_encrypt(args, options, parser): - if len(args) > 1: - raise errors.AnsibleError("'create' does not accept more than one filename") - if not options.password_file: password, new_password = utils.ask_vault_passwords(ask_vault_pass=True, confirm_vault=True) else: password = _read_password(options.password_file) - cipher =
'AES' + cipher = 'AES256' if hasattr(options, 'cipher'): cipher = options.cipher diff --git a/docs/man/man1/ansible-galaxy.1 b/docs/man/man1/ansible-galaxy.1 new file mode 100644 index 00000000000..af2285121a6 --- /dev/null +++ b/docs/man/man1/ansible-galaxy.1 @@ -0,0 +1,180 @@ +'\" t +.\" Title: ansible-galaxy +.\" Author: [see the "AUTHOR" section] +.\" Generator: DocBook XSL Stylesheets v1.78.1 +.\" Date: 03/16/2014 +.\" Manual: System administration commands +.\" Source: Ansible 1.6 +.\" Language: English +.\" +.TH "ANSIBLE\-GALAXY" "1" "03/16/2014" "Ansible 1\&.6" "System administration commands" +.\" ----------------------------------------------------------------- +.\" * Define some portability stuff +.\" ----------------------------------------------------------------- +.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.\" http://bugs.debian.org/507673 +.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html +.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.ie \n(.g .ds Aq \(aq +.el .ds Aq ' +.\" ----------------------------------------------------------------- +.\" * set default formatting +.\" ----------------------------------------------------------------- +.\" disable hyphenation +.nh +.\" disable justification (adjust text to left margin only) +.ad l +.\" ----------------------------------------------------------------- +.\" * MAIN CONTENT STARTS HERE * +.\" ----------------------------------------------------------------- +.SH "NAME" +ansible-galaxy \- manage roles using galaxy\&.ansible\&.com +.SH "SYNOPSIS" +.sp +ansible\-galaxy [init|info|install|list|remove] [\-\-help] [options] \&... +.SH "DESCRIPTION" +.sp +\fBAnsible Galaxy\fR is a shared repository for Ansible roles (added in ansible version 1\&.2)\&. The ansible\-galaxy command can be used to manage these roles, or to create a skeleton framework for roles you\(cqd like to upload to Galaxy\&. +.SH "COMMON OPTIONS" +.PP +\fB\-h\fR, \fB\-\-help\fR +.RS 4 +Show a help message related to the given sub\-command\&. +.RE +.SH "INSTALL" +.sp +The \fBinstall\fR sub\-command is used to install roles\&. +.SS "USAGE" +.sp +$ ansible\-galaxy install [options] [\-r FILE | role_name(s)[,version] | tar_file(s)] +.sp +Roles can be installed in several different ways: +.sp +.RS 4 +.ie n \{\ +\h'-04'\(bu\h'+03'\c +.\} +.el \{\ +.sp -1 +.IP \(bu 2.3 +.\} +A username\&.rolename[,version] \- this will install a single role\&. The Galaxy API will be contacted to provide the information about the role, and the corresponding \&.tar\&.gz will be downloaded from +\fBgithub\&.com\fR\&. If the version is omitted, the most recent version available will be installed\&. +.RE +.sp +.RS 4 +.ie n \{\ +\h'-04'\(bu\h'+03'\c +.\} +.el \{\ +.sp -1 +.IP \(bu 2.3 +.\} +A file name, using +\fB\-r\fR +\- this will install multiple roles listed one per line\&. The format of each line is the same as above: username\&.rolename[,version] +.RE +.sp +.RS 4 +.ie n \{\ +\h'-04'\(bu\h'+03'\c +.\} +.el \{\ +.sp -1 +.IP \(bu 2.3 +.\} +A \&.tar\&.gz of a valid role you\(cqve downloaded directly from +\fBgithub\&.com\fR\&. This is mainly useful when the system running Ansible does not have access to the Galaxy API, for instance when behind a firewall or proxy\&. +.RE +.SS "OPTIONS" +.PP +\fB\-f\fR, \fB\-\-force\fR +.RS 4 +Force overwriting an existing role\&. +.RE +.PP +\fB\-i\fR, \fB\-\-ignore\-errors\fR +.RS 4 +Ignore errors and continue with the next specified role\&.
+.RE +.PP +\fB\-n\fR, \fB\-\-no\-deps\fR +.RS 4 +Don\(cqt download roles listed as dependencies\&. +.RE +.PP +\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR +.RS 4 +The path to the directory containing your roles\&. The default is the +\fBroles_path\fR +configured in your +\fBansible\&.cfg\fR +file (/etc/ansible/roles if not configured) +.RE +.PP +\fB\-r\fR \fIROLE_FILE\fR, \fB\-\-role\-file=\fR\fIROLE_FILE\fR +.RS 4 +A file containing a list of roles to be imported, as specified above\&. This option cannot be used if a rolename or \&.tar\&.gz have been specified\&. +.RE +.SH "REMOVE" +.sp +The \fBremove\fR sub\-command is used to remove one or more roles\&. +.SS "USAGE" +.sp +$ ansible\-galaxy remove role1 role2 \&... +.SS "OPTIONS" +.PP +\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR +.RS 4 +The path to the directory containing your roles\&. The default is the +\fBroles_path\fR +configured in your +\fBansible\&.cfg\fR +file (/etc/ansible/roles if not configured) +.RE +.SH "INIT" +.sp +The \fBinit\fR command is used to create an empty role suitable for uploading to https://galaxy\&.ansible\&.com (or for roles in general)\&. +.SS "USAGE" +.sp +$ ansible\-galaxy init [options] role_name +.SS "OPTIONS" +.PP +\fB\-f\fR, \fB\-\-force\fR +.RS 4 +Force overwriting an existing role\&. +.RE +.PP +\fB\-p\fR \fIINIT_PATH\fR, \fB\-\-init\-path=\fR\fIINIT_PATH\fR +.RS 4 +The path in which the skeleton role will be created\&. The default is the current working directory\&. +.RE +.SH "LIST" +.sp +The \fBlist\fR sub\-command is used to show what roles are currently installed\&. You can specify a role name, and if installed only that role will be shown\&. +.SS "USAGE" +.sp +$ ansible\-galaxy list [role_name] +.SS "OPTIONS" +.PP +\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR +.RS 4 +The path to the directory containing your roles\&. The default is the +\fBroles_path\fR +configured in your +\fBansible\&.cfg\fR +file (/etc/ansible/roles if not configured) +.RE +.SH "AUTHOR" +.sp +Ansible was originally written by Michael DeHaan\&. See the AUTHORS file for a complete list of contributors\&. +.SH "COPYRIGHT" +.sp +Copyright \(co 2014, Michael DeHaan +.sp +Ansible is released under the terms of the GPLv3 License\&. +.SH "SEE ALSO" +.sp +\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1) +.sp +Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible diff --git a/docs/man/man1/ansible-galaxy.1.asciidoc.in b/docs/man/man1/ansible-galaxy.1.asciidoc.in new file mode 100644 index 00000000000..b8a80e6b2c5 --- /dev/null +++ b/docs/man/man1/ansible-galaxy.1.asciidoc.in @@ -0,0 +1,167 @@ +ansible-galaxy(1) +=================== +:doctype: manpage +:man source: Ansible +:man version: %VERSION% +:man manual: System administration commands + +NAME +---- +ansible-galaxy - manage roles using galaxy.ansible.com + + +SYNOPSIS +-------- +ansible-galaxy [init|info|install|list|remove] [--help] [options] ... + + +DESCRIPTION +----------- + +*Ansible Galaxy* is a shared repository for Ansible roles (added in +ansible version 1.2). The ansible-galaxy command can be used to manage +these roles, or to create a skeleton framework for roles you'd like +to upload to Galaxy. + +COMMON OPTIONS +-------------- + +*-h*, *--help*:: + +Show a help message related to the given sub-command.
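As a concrete illustration of the role file accepted by the -r/--role-file option above: each line uses the same username.rolename[,version] form as the command line (the role names below are hypothetical):

    # roles.txt -- installed with: ansible-galaxy install -r roles.txt
    someuser.nginx
    someuser.mysql,v1.2.1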
+ + +INSTALL +------- + +The *install* sub-command is used to install roles. + +USAGE +~~~~~ + +$ ansible-galaxy install [options] [-r FILE | role_name(s)[,version] | tar_file(s)] + +Roles can be installed in several different ways: + +* A username.rolename[,version] - this will install a single role. The Galaxy + API will be contacted to provide the information about the role, and the + corresponding .tar.gz will be downloaded from *github.com*. If the version + is omitted, the most recent version available will be installed. + +* A file name, using *-r* - this will install multiple roles listed one per + line. The format of each line is the same as above: username.rolename[,version] + +* A .tar.gz of a valid role you've downloaded directly from *github.com*. This + is mainly useful when the system running Ansible does not have access to + the Galaxy API, for instance when behind a firewall or proxy. + + +OPTIONS +~~~~~~~ + +*-f*, *--force*:: + +Force overwriting an existing role. + +*-i*, *--ignore-errors*:: + +Ignore errors and continue with the next specified role. + +*-n*, *--no-deps*:: + +Don't download roles listed as dependencies. + +*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH':: + +The path to the directory containing your roles. The default is the *roles_path* +configured in your *ansible.cfg* file (/etc/ansible/roles if not configured) + +*-r* 'ROLE_FILE', *--role-file=*'ROLE_FILE':: + +A file containing a list of roles to be imported, as specified above. This +option cannot be used if a rolename or .tar.gz have been specified. + +REMOVE +------ + +The *remove* sub-command is used to remove one or more roles. + +USAGE +~~~~~ + +$ ansible-galaxy remove role1 role2 ... + +OPTIONS +~~~~~~~ + +*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH':: + +The path to the directory containing your roles. The default is the *roles_path* +configured in your *ansible.cfg* file (/etc/ansible/roles if not configured) + +INIT +---- + +The *init* command is used to create an empty role suitable for uploading +to https://galaxy.ansible.com (or for roles in general). + +USAGE +~~~~~ + +$ ansible-galaxy init [options] role_name + +OPTIONS +~~~~~~~ + +*-f*, *--force*:: + +Force overwriting an existing role. + +*-p* 'INIT_PATH', *--init-path=*'INIT_PATH':: + +The path in which the skeleton role will be created. The default is the current +working directory. + +LIST +---- + +The *list* sub-command is used to show what roles are currently installed. +You can specify a role name, and if installed only that role will be shown. + +USAGE +~~~~~ + +$ ansible-galaxy list [role_name] + +OPTIONS +~~~~~~~ + +*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH':: + +The path to the directory containing your roles. The default is the *roles_path* +configured in your *ansible.cfg* file (/etc/ansible/roles if not configured) + + +AUTHOR +------ + +Ansible was originally written by Michael DeHaan. See the AUTHORS file +for a complete list of contributors. + + +COPYRIGHT +--------- + +Copyright © 2014, Michael DeHaan + +Ansible is released under the terms of the GPLv3 License. + + +SEE ALSO +-------- + +*ansible*(1), *ansible-pull*(1), *ansible-doc*(1) + +Extensive documentation is available in the documentation site: +.
IRC and mailing list info can be found +in file CONTRIBUTING.md, available in: diff --git a/docs/man/man1/ansible-playbook.1 b/docs/man/man1/ansible-playbook.1 index 2d221946a61..f435627f798 100644 --- a/docs/man/man1/ansible-playbook.1 +++ b/docs/man/man1/ansible-playbook.1 @@ -91,6 +91,66 @@ Prompt for the password to use for playbook plays that request sudo access, if a Desired sudo user (default=root)\&. .RE .PP +\fB\-S\fR, \fB\-\-su\fR +.RS 4 +Run operations with su\&. +.RE +.PP +\fB\-\-ask\-su\-pass\fR +.RS 4 +Prompt for the password to use for playbook plays that request su access, if any\&. +.RE +.PP +\fB\-R\fR, \fISU_USER\fR, \fB\-\-su\-user=\fR\fISU_USER\fR +.RS 4 +Desired su user (default=root)\&. +.RE +.PP +\fB\-\-ask\-vault\-pass\fR +.RS 4 +Ask for vault password\&. +.RE +.PP +\fB\-\-vault\-password\-file=\fR\fIVAULT_PASSWORD_FILE\fR +.RS 4 +Vault password file\&. +.RE +.PP +\fB\-\-force\-handlers\fR +.RS 4 +Run play handlers even if a task fails\&. +.RE +.PP +\fB\-\-list\-hosts\fR +.RS 4 +Outputs a list of matching hosts without executing anything else\&. +.RE +.PP +\fB\-\-list\-tasks\fR +.RS 4 +List all tasks that would be executed\&. +.RE +.PP +\fB\-\-start\-at\-task=\fR\fISTART_AT\fR +.RS 4 +Start the playbook at the task matching this name\&. +.RE +.PP +\fB\-\-step\fR +.RS 4 +One-step-at-a-time: confirm each task before running\&. +.RE +.PP +\fB\-\-syntax\-check\fR +.RS 4 +Perform a syntax check on the playbook, but do not execute it\&. +.RE +.PP +\fB\-\-private\-key\fR +.RS 4 +Use this file to authenticate the connection\&. +.RE +.PP \fB\-t\fR, \fITAGS\fR, \fB\fI\-\-tags=\fR\fR\fB\*(AqTAGS\fR .RS 4 Only run plays and tasks tagged with these values\&. @@ -147,6 +207,13 @@ is mostly useful for crontab or kickstarts\&. .RS 4 Further limits the selected host/group patterns\&. .RE + +.PP +\fB\-\-version\fR +.RS 4 +Show program's version number and exit\&. +.RE + .SH "ENVIRONMENT" .sp The following environment variables may be specified\&. diff --git a/docs/man/man1/ansible-playbook.1.asciidoc.in b/docs/man/man1/ansible-playbook.1.asciidoc.in index a1ef2391930..23fe37a2c0b 100644 --- a/docs/man/man1/ansible-playbook.1.asciidoc.in +++ b/docs/man/man1/ansible-playbook.1.asciidoc.in @@ -76,11 +76,11 @@ access, if any. Desired sudo user (default=root). -*-t*, 'TAGS', *'--tags=*'TAGS':: +*-t*, 'TAGS', *--tags=*'TAGS':: Only run plays and tasks tagged with these values. -*'--skip-tags=*'SKIP_TAGS':: +*--skip-tags=*'SKIP_TAGS':: Only run plays and tasks whose tags do not match these values.
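To illustrate the --force-handlers option documented above, here is a hedged sketch of a play where it matters; names and files are hypothetical. Without the flag, a failure in the second task means the notified handler never runs; with "ansible-playbook site.yml --force-handlers", it still does:

    - hosts: webservers
      tasks:
        - name: push new configuration
          template: src=app.conf.j2 dest=/etc/app.conf
          notify: restart app

        - name: a later task that may fail
          command: /bin/false

      handlers:
        - name: restart app
          service: name=app state=restarted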
diff --git a/docs/man/man1/ansible-vault.1 b/docs/man/man1/ansible-vault.1 new file mode 100644 index 00000000000..cced9f1bcfd --- /dev/null +++ b/docs/man/man1/ansible-vault.1 @@ -0,0 +1,103 @@ +'\" t +.\" Title: ansible-vault +.\" Author: [see the "AUTHOR" section] +.\" Generator: DocBook XSL Stylesheets v1.78.1 +.\" Date: 03/17/2014 +.\" Manual: System administration commands +.\" Source: Ansible 1.6 +.\" Language: English +.\" +.TH "ANSIBLE\-VAULT" "1" "03/17/2014" "Ansible 1\&.6" "System administration commands" +.\" ----------------------------------------------------------------- +.\" * Define some portability stuff +.\" ----------------------------------------------------------------- +.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.\" http://bugs.debian.org/507673 +.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html +.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.ie \n(.g .ds Aq \(aq +.el .ds Aq ' +.\" ----------------------------------------------------------------- +.\" * set default formatting +.\" ----------------------------------------------------------------- +.\" disable hyphenation +.nh +.\" disable justification (adjust text to left margin only) +.ad l +.\" ----------------------------------------------------------------- +.\" * MAIN CONTENT STARTS HERE * +.\" ----------------------------------------------------------------- +.SH "NAME" +ansible-vault \- manage encrypted YAML data\&. +.SH "SYNOPSIS" +.sp +ansible\-vault [create|decrypt|edit|encrypt|rekey] [\-\-help] [options] file_name +.SH "DESCRIPTION" +.sp +\fBansible\-vault\fR can encrypt any structured data file used by Ansible\&. This can include \fBgroup_vars/\fR or \fBhost_vars/\fR inventory variables, variables loaded by \fBinclude_vars\fR or \fBvars_files\fR, or variable files passed on the ansible\-playbook command line with \fB\-e @file\&.yml\fR or \fB\-e @file\&.json\fR\&. Role variables and defaults are also included! +.sp +Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with vault\&. If you\(cqd like to not betray what variables you are even using, you can go as far as to keep an individual task file entirely encrypted\&. +.SH "COMMON OPTIONS" +.sp +The following options are available to all sub\-commands: +.PP +\fB\-\-vault\-password\-file=\fR\fIFILE\fR +.RS 4 +A file containing the vault password to be used during the encryption/decryption steps\&. Be sure to keep this file secured if it is used\&. +.RE +.PP +\fB\-h\fR, \fB\-\-help\fR +.RS 4 +Show a help message related to the given sub\-command\&. +.RE +.PP +\fB\-\-debug\fR +.RS 4 +Enable debugging output for troubleshooting\&. +.RE +.SH "CREATE" +.sp +\fB$ ansible\-vault create [options] FILE\fR +.sp +The \fBcreate\fR sub\-command is used to initialize a new encrypted file\&. +.sp +First you will be prompted for a password\&. The password used with vault currently must be the same for all files you wish to use together at the same time\&. +.sp +After providing a password, the tool will launch whatever editor you have defined with $EDITOR, which defaults to vim\&. Once you are done with the editor session, the file will be saved as encrypted data\&. +.sp +The default cipher is AES256 (which is shared\-secret based)\&. +.SH "EDIT" +.sp +\fB$ ansible\-vault edit [options] FILE\fR +.sp +The \fBedit\fR sub\-command is used to modify a file which was previously encrypted using ansible\-vault\&.
+.sp +This command will decrypt the file to a temporary file and allow you to edit the file, saving it back when done and removing the temporary file\&. +.SH "REKEY" +.sp +\fB$ ansible\-vault rekey [options] FILE_1 [FILE_2, \&..., FILE_N]\fR +.sp +The \fBrekey\fR command is used to change the password on vault\-encrypted files\&. This command can update multiple files at once, and will prompt for both the old and new passwords before modifying any data\&. +.SH "ENCRYPT" +.sp +\fB$ ansible\-vault encrypt [options] FILE_1 [FILE_2, \&..., FILE_N]\fR +.sp +The \fBencrypt\fR sub\-command is used to encrypt pre\-existing data files\&. As with the \fBrekey\fR command, you can specify multiple files in one command\&. +.SH "DECRYPT" +.sp +\fB$ ansible\-vault decrypt [options] FILE_1 [FILE_2, \&..., FILE_N]\fR +.sp +The \fBdecrypt\fR sub\-command is used to remove all encryption from data files\&. The files will be stored as plain\-text YAML once again, so be sure that you do not run this command on data files with active passwords or other sensitive data\&. In most cases, users will want to use the \fBedit\fR sub\-command to modify the files securely\&. +.SH "AUTHOR" +.sp +Ansible was originally written by Michael DeHaan\&. See the AUTHORS file for a complete list of contributors\&. +.SH "COPYRIGHT" +.sp +Copyright \(co 2014, Michael DeHaan +.sp +Ansible is released under the terms of the GPLv3 License\&. +.SH "SEE ALSO" +.sp +\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1) +.sp +Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible diff --git a/docs/man/man1/ansible-vault.1.asciidoc.in b/docs/man/man1/ansible-vault.1.asciidoc.in new file mode 100644 index 00000000000..daccd8772f4 --- /dev/null +++ b/docs/man/man1/ansible-vault.1.asciidoc.in @@ -0,0 +1,126 @@ +ansible-vault(1) +================ +:doctype: manpage +:man source: Ansible +:man version: %VERSION% +:man manual: System administration commands + +NAME +---- +ansible-vault - manage encrypted YAML data. + + +SYNOPSIS +-------- +ansible-vault [create|decrypt|edit|encrypt|rekey] [--help] [options] file_name + + +DESCRIPTION +----------- + +*ansible-vault* can encrypt any structured data file used by Ansible. This can include +*group_vars/* or *host_vars/* inventory variables, variables loaded by *include_vars* or +*vars_files*, or variable files passed on the ansible-playbook command line with +*-e @file.yml* or *-e @file.json*. Role variables and defaults are also included! + +Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with +vault. If you’d like to not betray what variables you are even using, you can go as far as to +keep an individual task file entirely encrypted. + + +COMMON OPTIONS +-------------- + +The following options are available to all sub-commands: + +*--vault-password-file=*'FILE':: + +A file containing the vault password to be used during the encryption/decryption +steps. Be sure to keep this file secured if it is used. + +*-h*, *--help*:: + +Show a help message related to the given sub-command. + +*--debug*:: + +Enable debugging output for troubleshooting. + +CREATE +------ + +*$ ansible-vault create [options] FILE* + +The *create* sub-command is used to initialize a new encrypted file. + +First you will be prompted for a password.
The password used with vault currently +must be the same for all files you wish to use together at the same time. + +After providing a password, the tool will launch whatever editor you have defined +with $EDITOR, which defaults to vim. Once you are done with the editor session, the +file will be saved as encrypted data. + +The default cipher is AES256 (which is shared-secret based). + +EDIT +---- + +*$ ansible-vault edit [options] FILE* + +The *edit* sub-command is used to modify a file which was previously encrypted +using ansible-vault. + +This command will decrypt the file to a temporary file and allow you to edit the +file, saving it back when done and removing the temporary file. + +REKEY +----- + +*$ ansible-vault rekey [options] FILE_1 [FILE_2, ..., FILE_N]* + +The *rekey* command is used to change the password on vault-encrypted files. +This command can update multiple files at once, and will prompt for both the +old and new passwords before modifying any data. + +ENCRYPT +------- + +*$ ansible-vault encrypt [options] FILE_1 [FILE_2, ..., FILE_N]* + +The *encrypt* sub-command is used to encrypt pre-existing data files. As with the +*rekey* command, you can specify multiple files in one command. + +DECRYPT +------- + +*$ ansible-vault decrypt [options] FILE_1 [FILE_2, ..., FILE_N]* + +The *decrypt* sub-command is used to remove all encryption from data files. The files +will be stored as plain-text YAML once again, so be sure that you do not run this +command on data files with active passwords or other sensitive data. In most cases, +users will want to use the *edit* sub-command to modify the files securely. + + +AUTHOR +------ + +Ansible was originally written by Michael DeHaan. See the AUTHORS file +for a complete list of contributors. + + +COPYRIGHT +--------- + +Copyright © 2014, Michael DeHaan + +Ansible is released under the terms of the GPLv3 License. + + +SEE ALSO +-------- + +*ansible*(1), *ansible-pull*(1), *ansible-doc*(1) + +Extensive documentation is available in the documentation site: +. IRC and mailing list info can be found +in file CONTRIBUTING.md, available in: diff --git a/docsite/rst/developing_modules.rst b/docsite/rst/developing_modules.rst index 3f1c1e68dca..e8da717aed5 100644 --- a/docsite/rst/developing_modules.rst +++ b/docsite/rst/developing_modules.rst @@ -123,7 +123,7 @@ a lot shorter than this:: for arg in arguments: # ignore any arguments without an equals in it - if arg.find("=") != -1: + if "=" in arg: (key, value) = arg.split("=") diff --git a/docsite/rst/faq.rst b/docsite/rst/faq.rst index 82841d43812..af9d4930600 100644 --- a/docsite/rst/faq.rst +++ b/docsite/rst/faq.rst @@ -140,16 +140,16 @@ Then you can use the facts inside your template, like this:: .. _programatic_access_to_a_variable: -How do I access a variable name programatically? -++++++++++++++++++++++++++++++++++++++++++++++++ +How do I access a variable name programmatically? ++++++++++++++++++++++++++++++++++++++++++++++++++ An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied via a role parameter or other input. Variable names can be built by adding strings together, like so:: {{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }} -The trick about going through hostvars is neccessary because it's a dictionary of the entire namespace of variables. 'inventory_hostname' -is a magic variable that indiciates the current host you are looping over in the host loop.
+The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. 'inventory_hostname' +is a magic variable that indicates the current host you are looping over in the host loop. .. _first_host_in_a_group: @@ -179,17 +179,7 @@ Notice how we interchanged the bracket syntax for dots -- that can be done anywh How do I copy files recursively onto a target host? +++++++++++++++++++++++++++++++++++++++++++++++++++ -The "copy" module doesn't handle recursive copies of directories. A common solution to do this is to use a local action to call 'rsync' to recursively copy files to the managed servers. - -Here is an example:: - - --- - # ... - tasks: - - name: recursively copy files from management server to target - local_action: command rsync -a /path/to/files $inventory_hostname:/path/to/target/ - -Note that you'll need passphrase-less SSH or ssh-agent set up to let rsync copy without prompting for a passphrase or password. +The "copy" module has a recursive parameter, though if you want to do something more efficient for a large number of files, take a look at the "synchronize" module instead, which wraps rsync. See the module index for info on both of these modules. .. _shell_env: @@ -256,7 +246,7 @@ Great question! Documentation for Ansible is kept in the main project git repos How do I keep secret data in my playbook? +++++++++++++++++++++++++++++++++++++++++ -If you would like to keep secret data in your Ansible content and still share it publically or keep things in source control, see :doc:`playbooks_vault`. +If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :doc:`playbooks_vault`. .. _i_dont_see_my_question: diff --git a/docsite/rst/guide_aws.rst b/docsite/rst/guide_aws.rst index dbe5427bc52..39f2440f195 100644 --- a/docsite/rst/guide_aws.rst +++ b/docsite/rst/guide_aws.rst @@ -129,7 +129,7 @@ it will be automatically discoverable via a dynamic group like so:: - ping Using this philosophy can be a great way to manage groups dynamically, without -having to maintain seperate inventory. +having to maintain separate inventory. .. _aws_pull: diff --git a/docsite/rst/guide_gce.rst b/docsite/rst/guide_gce.rst new file mode 100644 index 00000000000..f9e498ac0aa --- /dev/null +++ b/docsite/rst/guide_gce.rst @@ -0,0 +1,245 @@ +Google Cloud Platform Guide +=========================== + +.. _gce_intro: + +Introduction +------------ + +.. note:: This section of the documentation is under construction. We are in the process of adding more examples about all of the GCE modules and how they work together. Upgrades via github pull requests are welcomed! + +Ansible contains modules for managing Google Compute Engine resources, including creating instances, controlling network access, working with persistent disks, and managing +load balancers. Additionally, there is an inventory plugin that can automatically suck down all of your GCE instances into Ansible dynamic inventory, and create groups by tag and other properties. + +The GCE modules all require the apache-libcloud module, which you can install from pip: + +.. code-block:: bash + + $ pip install apache-libcloud + +.. note:: If you're using Ansible on Mac OS X, libcloud also needs to access a CA cert chain. You'll need to download one (you can get one `here `_.) + +Credentials +----------- + +To work with the GCE modules, you'll first need to get some credentials.
You can create a new one from the `console `_ by going to the "APIs and Auth" section. Once you've created a new client ID and downloaded the generated private key (in the `pkcs12 format `_), you'll need to convert the key by running the following command: + +.. code-block:: bash + + $ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem + +There are two different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning and configuration actions: + +* by providing them to the modules directly +* by populating a ``secrets.py`` file + +Calling Modules By Passing Credentials +`````````````````````````````````````` + +For the GCE modules you can specify the credentials as arguments: + +* ``service_account_email``: email associated with the project +* ``pem_file``: path to the pem file +* ``project_id``: id of the project + +For example, to create a new instance using the cloud module, you can use the following configuration: + +.. code-block:: yaml + + - name: Create instance(s) + hosts: localhost + connection: local + gather_facts: no + + vars: + service_account_email: unique-id@developer.gserviceaccount.com + pem_file: /path/to/project.pem + project_id: project-id + machine_type: n1-standard-1 + image: debian-7 + + tasks: + + - name: Launch instances + gce: + instance_names: dev + machine_type: "{{ machine_type }}" + image: "{{ image }}" + service_account_email: "{{ service_account_email }}" + pem_file: "{{ pem_file }}" + project_id: "{{ project_id }}" + +Calling Modules with secrets.py +``````````````````````````````` + +Create a file ``secrets.py`` looking like the following, and put it in some folder which is in your ``$PYTHONPATH``: + +.. code-block:: python + + GCE_PARAMS = ('i...@project.googleusercontent.com', '/path/to/project.pem') + GCE_KEYWORD_PARAMS = {'project': 'project-name'} + +Now the modules can be used as above, but the account information can be omitted. + +GCE Dynamic Inventory +--------------------- + +The best way to interact with your hosts is to use the gce inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed. + +Note that when using the inventory script ``gce.py``, you also need to populate the ``gce.ini`` file that you can find in the plugins/inventory directory of the ansible checkout. + +To use the GCE dynamic inventory script, copy ``gce.py`` from ``plugins/inventory`` into your inventory directory and make it executable. You can specify credentials for ``gce.py`` using the ``GCE_INI_PATH`` environment variable -- the default is to look for gce.ini in the same directory as the inventory script. + +Let's see if inventory is working: + +.. code-block:: bash + + $ ./gce.py --list + +You should see output describing the hosts you have, if any, running in Google Compute Engine. + +Now let's see if we can use the inventory script to talk to Google. + +.. code-block:: bash + + $ GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup + hostname | success >> { + "ansible_facts": { + "ansible_all_ipv4_addresses": [ + "x.x.x.x" + ], + +As with all dynamic inventory plugins in Ansible, you can configure the inventory path in ansible.cfg. The recommended way to use the inventory is to create an ``inventory`` directory, and place both the ``gce.py`` script and a file containing ``localhost`` in it. This can allow for cloud inventory to be used alongside local inventory (such as a physical datacenter) or machines running in different providers.
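A sketch of the mixed inventory layout described above (file names hypothetical):

    inventory/
        gce.py     # the dynamic inventory script, copied from plugins/inventory and made executable
        gce.ini    # its credentials/configuration (or point GCE_INI_PATH elsewhere)
        local      # a static inventory file whose only line is: localhost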
+ +Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead of an individual file will cause ansible to evaluate each file in that directory for inventory. + +Let's once again use our inventory script to see if it can talk to Google Cloud: + +.. code-block:: bash + + $ ansible all -i inventory/ -m setup + hostname | success >> { + "ansible_facts": { + "ansible_all_ipv4_addresses": [ + "x.x.x.x" + ], + +The output should be similar to the previous command. If you want less output and just want to check for SSH connectivity, use "-m ping" instead. + +Use Cases +--------- + +For the following use cases, let's use this small shell script as a wrapper. + +.. code-block:: bash + + #!/bin/bash + PLAYBOOK="$1" + + if [ -z "$PLAYBOOK" ]; then + echo "You need to pass a playbook as an argument to this script." + exit 1 + fi + + export SSL_CERT_FILE=$(pwd)/cacert.pem + export ANSIBLE_HOST_KEY_CHECKING=False + + if [ ! -f "$SSL_CERT_FILE" ]; then + curl -O http://curl.haxx.se/ca/cacert.pem + fi + + ansible-playbook -v -i inventory/ "$PLAYBOOK" + + +Create an instance +`````````````````` + +The GCE module provides the ability to provision instances within Google Compute Engine. The provisioning task is typically performed from your Ansible control server against Google Cloud's API. + +A playbook would look like this: + +.. code-block:: yaml + + - name: Create instance(s) + hosts: localhost + gather_facts: no + connection: local + + vars: + machine_type: n1-standard-1 # default + image: debian-7 + service_account_email: unique-id@developer.gserviceaccount.com + pem_file: /path/to/project.pem + project_id: project-id + + tasks: + - name: Launch instances + gce: + instance_names: dev + machine_type: "{{ machine_type }}" + image: "{{ image }}" + service_account_email: "{{ service_account_email }}" + pem_file: "{{ pem_file }}" + project_id: "{{ project_id }}" + tags: webserver + register: gce + + - name: Wait for SSH to come up + wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=60 + with_items: gce.instance_data + + - name: Add the new instances to the 'new_instances' group + add_host: hostname={{ item.public_ip }} groupname=new_instances + with_items: gce.instance_data + + - name: Manage new instances + hosts: new_instances + connection: ssh + roles: + - base_configuration + - production_server + +Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines +in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point. + +Configuring instances in a group +```````````````````````````````` + +All of the created instances in GCE are grouped by tag. Since this is a cloud, it's probably best to ignore hostnames and just focus on group management. + +Normally we'd also use roles here, but the following example is a simple one. Here we will also use the "gce_net" module to open up access to port 80 on +these nodes. + +The variables in the 'vars' section could also be kept in a 'vars_files' file or something encrypted with Ansible-vault, if you so choose.
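+For instance (a minimal sketch; the file name here is illustrative), the credential variables could be moved out of the play and loaded from a separate, optionally vault-encrypted, file: + +.. code-block:: yaml + + vars_files: + - gce_credentials.yml +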
This is just +a basic example of what is possible:: + + - name: Setup web servers + hosts: tag_webserver + gather_facts: no + + vars: + machine_type: n1-standard-1 # default + image: debian-7 + service_account_email: unique-id@developer.gserviceaccount.com + pem_file: /path/to/project.pem + project_id: project-id + + tasks: + + - name: Install lighttpd + apt: pkg=lighttpd state=installed + sudo: True + + - name: Allow HTTP + local_action: gce_net + args: + fwname: "all-http" + name: "default" + allowed: "tcp:80" + state: "present" + service_account_email: "{{ service_account_email }}" + pem_file: "{{ pem_file }}" + project_id: "{{ project_id }}" + +By pointing your browser to the IP of the server, you should see a page welcoming you. + +Upgrades to this documentation are welcome; hit the github link at the top right of this page if you would like to make additions! + diff --git a/docsite/rst/guide_rax.rst b/docsite/rst/guide_rax.rst index 37ca6b796c6..ae145c96f10 100644 --- a/docsite/rst/guide_rax.rst +++ b/docsite/rst/guide_rax.rst @@ -11,7 +11,7 @@ Introduction Ansible contains a number of core modules for interacting with Rackspace Cloud. The purpose of this section is to explain how to put Ansible modules together -(and use inventory scripts) to use Ansible in Rackspace Cloud context. +(and use inventory scripts) to use Ansible in a Rackspace Cloud context. Prerequisites for using the rax modules are minimal. In addition to ansible itself, all of the modules require and are tested against pyrax 1.5 or higher. @@ -32,7 +32,7 @@ to add localhost to the inventory file. (Ansible may not require this manual st [localhost] localhost ansible_connection=local -In playbook steps we'll typically be using the following pattern: +In playbook steps, we'll typically be using the following pattern: .. code-block:: yaml @@ -66,21 +66,19 @@ https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authentic Running from a Python Virtual Environment (Optional) ++++++++++++++++++++++++++++++++++++++++++++++++++++ -Special considerations need to -be taken if pyrax is not installed globally but instead using a python virtualenv (it's fine if you install it globally). +Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to. -Ansible assumes, unless otherwise instructed, that the python binary will live at -/usr/bin/python. This is done so via the interpret line in the modules, however -when instructed using ansible_python_interpreter, ansible will use this specified path instead for finding -python. - -If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows: +There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules; however, when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion as one may assume that modules running on 'localhost', or perhaps running via 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax.
If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows: .. code-block:: ini [localhost] localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python +.. note:: + + pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax. + .. _provisioning: Provisioning @@ -88,16 +86,20 @@ Provisioning Now for the fun parts. -The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the -provisioning task will be performed from your Ansible control server against the Rackspace cloud API. +The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons: + + - Avoiding installing the pyrax library on remote nodes + - No need to encrypt and distribute credentials to remote nodes + - Speed and simplicity .. note:: Authentication with the Rackspace-related modules is handled by either specifying your username and API key as environment variables or passing - them as module arguments. + them as module arguments, or by specifying the location of a credentials + file. -Here is a basic example of provisioning a instance in ad-hoc mode: +Here is a basic example of provisioning an instance in ad-hoc mode: .. code-block:: bash @@ -119,8 +121,9 @@ Here's what it would look like in a playbook, assuming the parameters were defin wait: yes register: rax -By registering the return value of the step, it is then possible to dynamically add the resulting hosts to inventory (temporarily, in memory). -This facilitates performing configuration actions on the hosts immediately in a subsequent task:: +The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory. + +.. code-block:: yaml + - name: Add the instances we created (by public IP) to the group 'raxhosts' local_action: @@ -132,7 +135,9 @@ This facilitates performing configuration actions on the hosts immediately in a with_items: rax.success when: rax.action == 'create' -With the host group now created, a second play in your provision playbook could now configure them, for example:: +With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group. + +.. code-block:: yaml + - name: Configuration play hosts: raxhosts @@ -141,7 +146,6 @@ With the host group now created, a second play in your provision playbook could - ntp - webserver - The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us to the next section. @@ -150,41 +154,28 @@ to the next section. Host Inventory `````````````` -Once your nodes are spun up, you'll probably want to talk to them again.
- -The best way to handle his is to use the rax inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what -nodes you have to manage. - -You might want to use this even if you are spinning up Ansible via other tools, including the Rackspace Cloud user interface. - -The inventory plugin can be used to group resources by their meta data. Utilizing meta data is highly -recommended in rax and can provide an easy way to sort between host groups and roles. - -If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, -though this is less recommended. +Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up Ansible via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, etc. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended. -In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common -directory and be sure the scripts are chmod +x, and the INI-based ones are not. +In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not. .. _raxpy: rax.py ++++++ -To use the rackspace dynamic inventory script, copy ``rax.py`` from ``plugins/inventory`` into your inventory directory and make it executable. You can specify credentials for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable. +To use the rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentials file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable. + +.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``. .. note:: Users of :doc:`tower` will note that dynamic inventory is natively supported by Tower, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps:: $ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup -``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a -comma separated list of regions. +``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma separated list of regions. When using ``rax.py``, you will not have a 'localhost' defined in the inventory. -As mentioned previously, you will often be running most of these modules outside of the host loop, -and will need 'localhost' defined. The recommended way to do this, would be to create an ``inventory`` directory, -and place both the ``rax.py`` script and a file containing ``localhost`` in it.
+As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this would be to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it. Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead of an individual file will cause ansible to evaluate each file in that directory for inventory. @@ -295,8 +286,7 @@ following information, which will be utilized for inventory and variables. Standard Inventory ++++++++++++++++++ -When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin), -it may still be adventageous to retrieve discoverable hostvar information from the Rackspace API. +When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API. This can be achieved with the ``rax_facts`` module and an inventory file similar to the following: @@ -579,7 +569,7 @@ Autoscaling with Tower :doc:`tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will "dial out" to the requester -and configure an instance that is spinning up. This can be a great way to reconfigure ephmeral nodes. +and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See the Tower documentation for more details. A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded @@ -587,9 +577,16 @@ and less information has to be shared with remote hosts. .. _pending_information: -Pending Information -``````````````````` +Orchestration in the Rackspace Cloud ++++++++++++++++++++++++++++++++++++++ + +Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example: + +* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool +* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed +* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned +* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively +
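+As a minimal sketch of the first scenario, assuming the web servers are already in the "raxhosts" group created earlier, and using hypothetical task files for the load balancer steps: + +.. code-block:: yaml + + - name: Update web servers one at a time + hosts: raxhosts + serial: 1 + tasks: + - include: tasks/remove_from_clb.yml + - name: Upgrade the application package + apt: pkg=myapp state=latest + - include: tasks/add_to_clb.yml + +Setting ``serial: 1`` ensures only one node is out of the load balancer pool at any given time. +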
diff --git a/docsite/rst/guide_rolling_upgrade.rst b/docsite/rst/guide_rolling_upgrade.rst index b464ef11a42..f730e8d7899 100644 --- a/docsite/rst/guide_rolling_upgrade.rst +++ b/docsite/rst/guide_rolling_upgrade.rst @@ -172,7 +172,7 @@ Here's another example, from the same template:: {% endfor %} This loops over all of the hosts in the group called ``monitoring``, and adds an ACCEPT line for -each monitoring hosts's default IPV4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts. +each monitoring hosts' default IPV4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts. You can learn a lot more about Jinja2 and its capabilities `here `_, and you can read more about Ansible variables in general in the :doc:`playbooks_variables` section. @@ -184,7 +184,7 @@ The Rolling Upgrade Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible's orchestration features come into play. While some applications use the term 'orchestration' to mean basic ordering or command-blasting, Ansible -referes to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it. +refers to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it. Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called ``rolling_upgrade.yml``. @@ -201,7 +201,7 @@ The next part is the update play. The first part looks like this:: user: root serial: 1 -This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will paralleize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time. +This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will parallelize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time. Here is the next part of the update play:: diff --git a/docsite/rst/guide_vagrant.rst b/docsite/rst/guide_vagrant.rst index 4fb40d569f2..9472b74dd2f 100644 --- a/docsite/rst/guide_vagrant.rst +++ b/docsite/rst/guide_vagrant.rst @@ -7,7 +7,7 @@ Introduction ```````````` Vagrant is a tool to manage virtual machine environments, and allows you to -configure and use reproducable work environments on top of various +configure and use reproducible work environments on top of various virtualization and cloud platforms. It also has integration with Ansible as a provisioner for these virtual machines, and the two tools work together well. 
diff --git a/docsite/rst/guides.rst b/docsite/rst/guides.rst index 05af9b023d7..cf9c821bdbb 100644 --- a/docsite/rst/guides.rst +++ b/docsite/rst/guides.rst @@ -8,8 +8,9 @@ This section is new and evolving. The idea here is explore particular use cases guide_aws guide_rax + guide_gce guide_vagrant guide_rolling_upgrade -Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continous Deployment, and more. +Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continuous Deployment, and more. diff --git a/docsite/rst/guru.rst b/docsite/rst/guru.rst index 4267396c94a..e4f07fd3478 100644 --- a/docsite/rst/guru.rst +++ b/docsite/rst/guru.rst @@ -3,7 +3,7 @@ Ansible Guru While many users should be able to get on fine with the documentation, mailing list, and IRC, sometimes you want a bit more. -`Ansible Guru `_ is an offering from Ansible, Inc that helps users who would like more dedicated help with Ansible, including building playbooks, best practices, architecture suggestions, and more -- all from our awesome support and services team. It also includes some useful discounts and also some free T-shirts, though you shoudn't get it just for the free shirts! It's a great way to train up to becoming an Ansible expert. +`Ansible Guru `_ is an offering from Ansible, Inc that helps users who would like more dedicated help with Ansible, including building playbooks, best practices, architecture suggestions, and more -- all from our awesome support and services team. It also includes some useful discounts and also some free T-shirts, though you shouldn't get it just for the free shirts! It's a great way to train up to becoming an Ansible expert. For those interested, click through the link above. You can sign up in minutes! diff --git a/docsite/rst/index.rst b/docsite/rst/index.rst index 592c683fbb0..14f0e326f4b 100644 --- a/docsite/rst/index.rst +++ b/docsite/rst/index.rst @@ -16,7 +16,7 @@ We believe simplicity is relevant to all sizes of environments and design for bu Ansible manages machines in an agentless manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. As OpenSSH is one of the most peer reviewed open source components, the security exposure of using the tool is greatly reduced. Ansible is decentralized -- it relies on your existing OS credentials to control access to remote machines; if needed it can easily connect with Kerberos, LDAP, and other centralized authentication management systems. -This documentation covers the current released version of Ansible (1.5) and also some development version features (1.6). For recent features, in each section, the version of Ansible where the feature is added is indicated. Ansible, Inc releases a new major release of Ansible approximately every 2 months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup, while the community around new modules and plugins being developed and contributed moves very very quickly, typically adding 20 or so new modules in each release. +This documentation covers the current released version of Ansible (1.5.3) and also some development version features (1.6). For recent features, in each section, the version of Ansible where the feature is added is indicated. Ansible, Inc releases a new major release of Ansible approximately every 2 months. 
The core application evolves somewhat conservatively, valuing simplicity in language design and setup, while the community around new modules and plugins being developed and contributed moves very very quickly, typically adding 20 or so new modules in each release. .. _an_introduction: diff --git a/docsite/rst/intro_adhoc.rst b/docsite/rst/intro_adhoc.rst index a49fdcfdc40..f849a1021c0 100644 --- a/docsite/rst/intro_adhoc.rst +++ b/docsite/rst/intro_adhoc.rst @@ -248,7 +248,7 @@ Be sure to use a high enough ``--forks`` value if you want to get all of your jo very quickly. After the time limit (in seconds) runs out (``-B``), the process on the remote nodes will be terminated. -Typically you'll be only be backgrounding long-running +Typically you'll be backgrounding long-running shell commands or software upgrades only. Backgrounding the copy module does not do a background file transfer. :doc:`Playbooks ` also support polling, and have a simplified syntax for this. .. _checking_facts: diff --git a/docsite/rst/intro_configuration.rst b/docsite/rst/intro_configuration.rst index 450ca91aba2..f37ba6012cd 100644 --- a/docsite/rst/intro_configuration.rst +++ b/docsite/rst/intro_configuration.rst @@ -211,6 +211,16 @@ is very very conservative:: forks=5 +.. _gathering: + +gathering +========= + +New in 1.6, the 'gathering' setting controls the default policy of fact gathering (variables discovered about remote systems). + +The value 'implicit' is the default, meaning facts will be gathered per play unless 'gather_facts: False' is set in the play. The value 'explicit' is the inverse: facts will not be gathered unless directly requested in the play. + +The value 'smart' means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save fact gathering time. hash_behaviour ============== @@ -310,6 +320,13 @@ different locations:: Most users will not need to use this feature. See :doc:`developing_plugins` for more details +.. _module_lang: + +module_lang +=========== + +This sets the default language used to communicate between the module and the system. By default, the value is 'C'. + .. _module_name: module_name @@ -422,6 +439,10 @@ choose to establish a convention to checkout roles in /opt/mysite/roles like so: roles_path = /opt/mysite/roles +Additional paths can be provided separated by colon characters, in the same way as other pathstrings:: + + roles_path = /opt/mysite/roles:/opt/othersite/roles + Roles will be first searched for in the playbook directory. Should a role not be found, it will indicate all the possible paths that were searched. @@ -622,4 +643,29 @@ This setting controls the timeout for the socket connect call, and should be kep Note, this value can be set to less than one second, however it is probably not a good idea to do so unless you're on a very fast and reliable LAN. If you're connecting to systems over the internet, it may be necessary to increase this timeout. +.. _accelerate_daemon_timeout: + +accelerate_daemon_timeout +========================= + +.. versionadded:: 1.6 + +This setting controls the timeout for the accelerated daemon, as measured in minutes. The default daemon timeout is 30 minutes:: + + accelerate_daemon_timeout = 30 + +Note, prior to 1.6, the timeout was hard-coded from the time of the daemon's launch.
For version 1.6+, the timeout is now based on the last activity to the daemon and is configurable via this option. + +.. _accelerate_multi_key: + +accelerate_multi_key +==================== + +.. versionadded:: 1.6 + +If enabled, this setting allows multiple private keys to be uploaded to the daemon. Any clients connecting to the daemon must also enable this option:: + + accelerate_multi_key = yes + +New clients first connect to the target node over SSH to upload the key, which is done via a local socket file, so they must have the same access as the user that launched the daemon originally. diff --git a/docsite/rst/intro_dynamic_inventory.rst b/docsite/rst/intro_dynamic_inventory.rst index e42da4bad8f..7eeb517b2f4 100644 --- a/docsite/rst/intro_dynamic_inventory.rst +++ b/docsite/rst/intro_dynamic_inventory.rst @@ -28,11 +28,11 @@ It is expected that many Ansible users with a reasonable amount of physical hard While primarily used to kickoff OS installations and manage DHCP and DNS, Cobbler has a generic layer that allows it to represent data for multiple configuration management systems (even at the same time), and has -been referred to as a 'lightweight CMDB' by some admins. This particular script will communicate with Cobbler -using Cobbler's XMLRPC API. +been referred to as a 'lightweight CMDB' by some admins. To tie Ansible's inventory to Cobbler (optional), copy `this script `_ to /etc/ansible and `chmod +x` the file. cobblerd will now need to be running when you are using Ansible and you'll need to use Ansible's ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``). +This particular script will communicate with Cobbler using Cobbler's XMLRPC API. First test the script by running ``/etc/ansible/cobbler.py`` directly. You should see some JSON data output, but it may not have anything in it just yet. diff --git a/docsite/rst/intro_installation.rst b/docsite/rst/intro_installation.rst index 7541e2a5da5..fb1f7fabf45 100644 --- a/docsite/rst/intro_installation.rst +++ b/docsite/rst/intro_installation.rst @@ -204,6 +204,18 @@ You may also wish to install from ports, run: $ sudo make -C /usr/ports/sysutils/ansible install +.. _from_brew: + +Latest Releases Via Homebrew (Mac OSX) +++++++++++++++++++++++++++++++++++++++ + +To install on a Mac, make sure you have Homebrew, then run: + +.. code-block:: bash + + $ brew update + $ brew install ansible + .. _from_pip: Latest Releases Via Pip diff --git a/docsite/rst/modules.rst b/docsite/rst/modules.rst index 1e2a851d4a4..aa9ca0f40a1 100644 --- a/docsite/rst/modules.rst +++ b/docsite/rst/modules.rst @@ -17,7 +17,7 @@ handle executing system commands. Let's review how we execute three different modules from the command line:: - ansible webservers -m service -a "name=httpd state=running" + ansible webservers -m service -a "name=httpd state=started" ansible webservers -m ping ansible webservers -m command -a "/sbin/reboot -t now" diff --git a/docsite/rst/playbooks_acceleration.rst b/docsite/rst/playbooks_acceleration.rst index c11961ca9d6..b7f08828a84 100644 --- a/docsite/rst/playbooks_acceleration.rst +++ b/docsite/rst/playbooks_acceleration.rst @@ -8,7 +8,7 @@ You Might Not Need This! Are you running Ansible 1.5 or later? If so, you may not need accelerate mode due to a new feature called "SSH pipelining" and should read the :ref:`pipelining` section of the documentation. 
-For users on 1.5 and later, accelerate mode only makes sense if you are (A) are managing from an Enterprise Linux 6 or earlier host +For users on 1.5 and later, accelerate mode only makes sense if you (A) are managing from an Enterprise Linux 6 or earlier host and still are on paramiko, or (B) can't enable TTYs with sudo as described in the pipelining docs. If you can use pipelining, Ansible will reduce the amount of files transferred over the wire, @@ -76,4 +76,11 @@ As noted above, accelerated mode also supports running tasks via sudo, however t * You must remove requiretty from your sudoers options. * Prompting for the sudo password is not yet supported, so the NOPASSWD option is required for sudo'ed commands. +As of Ansible version `1.6`, you can also allow the use of multiple keys for connections from multiple Ansible management nodes. To do so, add the following option +to your `ansible.cfg` configuration:: + + accelerate_multi_key = yes + +When enabled, the daemon will open a UNIX socket file (by default `$ANSIBLE_REMOTE_TEMP/.ansible-accelerate/.local.socket`). New connections over SSH can +use this socket file to upload new keys to the daemon. diff --git a/docsite/rst/playbooks_best_practices.rst b/docsite/rst/playbooks_best_practices.rst index fbe34ca344e..487262a4b75 100644 --- a/docsite/rst/playbooks_best_practices.rst +++ b/docsite/rst/playbooks_best_practices.rst @@ -51,6 +51,8 @@ The top level of the directory would contain files and directories like so:: foo.sh # <-- script files for use with the script resource vars/ # main.yml # <-- variables associated with this role + meta/ # + main.yml # <-- role dependencies webtier/ # same kind of structure as "common" was above, done for the webtier role monitoring/ # "" @@ -223,8 +225,8 @@ What about just the first 10, and then the next 10?:: And of course just basic ad-hoc stuff is also possible.:: - ansible -i production -m ping - ansible -i production -m command -a '/sbin/reboot' --limit boston + ansible boston -i production -m ping + ansible boston -i production -m command -a '/sbin/reboot' And there are some useful commands to know (at least in 1.1 and higher):: diff --git a/docsite/rst/playbooks_environment.rst b/docsite/rst/playbooks_environment.rst index 971765ab303..11334fdb2f0 100644 --- a/docsite/rst/playbooks_environment.rst +++ b/docsite/rst/playbooks_environment.rst @@ -23,7 +23,7 @@ The environment can also be stored in a variable, and accessed like so:: - hosts: all remote_user: root - # here we make a variable named "env" that is a dictionary + # here we make a variable named "proxy_env" that is a dictionary vars: proxy_env: http_proxy: http://proxy.example.com:8080 diff --git a/docsite/rst/playbooks_intro.rst b/docsite/rst/playbooks_intro.rst index db82e2c483a..70db3f7fe27 100644 --- a/docsite/rst/playbooks_intro.rst +++ b/docsite/rst/playbooks_intro.rst @@ -350,7 +350,7 @@ Assuming you load balance your checkout location, ansible-pull scales essentiall Run ``ansible-pull --help`` for details. -There's also a `clever playbook `_ available to using ansible in push mode to configure ansible-pull via a crontab! +There's also a `clever playbook `_ available that uses Ansible in push mode to configure ansible-pull via a crontab. .. _tips_and_tricks: @@ -370,7 +370,7 @@ package is installed. Try it! To see what hosts would be affected by a playbook before you run it, you can do this:: - ansible-playbook playbook.yml --list-hosts. + ansible-playbook playbook.yml --list-hosts .. 
seealso:: diff --git a/docsite/rst/playbooks_lookups.rst b/docsite/rst/playbooks_lookups.rst index afa12821546..1f4e4ed5d7f 100644 --- a/docsite/rst/playbooks_lookups.rst +++ b/docsite/rst/playbooks_lookups.rst @@ -7,6 +7,8 @@ in Ansible, and are typically used to load variables or templates with informati .. note:: This is considered an advanced feature, and many users will probably not rely on these features. +.. note:: Lookups occur on the local computer, not on the remote computer. + .. contents:: Topics .. _getting_file_contents: diff --git a/docsite/rst/playbooks_loops.rst b/docsite/rst/playbooks_loops.rst index 3917228229f..f19776396ea 100644 --- a/docsite/rst/playbooks_loops.rst +++ b/docsite/rst/playbooks_loops.rst @@ -250,7 +250,7 @@ that matches a given criteria, and some of the filenames are determined by varia - name: INTERFACES | Create Ansible header for /etc/network/interfaces template: src={{ item }} dest=/etc/foo.conf with_first_found: - - "{{ansible_virtualization_type}_foo.conf" + - "{{ansible_virtualization_type}}_foo.conf" - "default_foo.conf" This tool also has a long form version that allows for configurable search paths. Here's an example:: diff --git a/docsite/rst/playbooks_variables.rst b/docsite/rst/playbooks_variables.rst index bdb31577ed1..ce70daf54ff 100644 --- a/docsite/rst/playbooks_variables.rst +++ b/docsite/rst/playbooks_variables.rst @@ -101,7 +101,7 @@ Inside a template you automatically have access to all of the variables that are it's more than that -- you can also read variables about other hosts. We'll show how to do that in a bit. .. note:: ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible - templates are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate + playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can unlock possibilities. @@ -208,11 +208,62 @@ To get the symmetric difference of 2 lists (items exclusive to each list):: {{ list1 | symmetric_difference(list2) }} +.. _version_comparison_filters: + +Version Comparison Filters +-------------------------- + +.. versionadded:: 1.6 + +To compare a version number, such as checking if ``ansible_distribution_version`` +is greater than or equal to '12.04', you can use the ``version_compare`` filter:: + + {{ ansible_distribution_version | version_compare('12.04', '>=') }} + +If ``ansible_distribution_version`` is greater than or equal to 12.04, this filter will return True, otherwise +it will return False. + +The ``version_compare`` filter accepts the following operators:: + + <, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne + +This filter also accepts a third parameter, ``strict``, which defines whether strict version parsing should +be used. The default is ``False``; if set to ``True``, more strict version parsing will be used:: + + {{ sample_version_var | version_compare('1.0', operator='lt', strict=True) }} + +.. _random_filter: + +Random Number Filter +-------------------- + +.. 
versionadded:: 1.6 + +To get a random number from 0 to a supplied end:: + + {{ 59 |random}} * * * * root /script/from/cron + +Get a random number from 0 to 100 but in steps of 10:: + + {{ 100 |random(step=10) }} => 70 + +Get a random number from 1 to 100 but in steps of 10:: + + {{ 100 |random(1, 10) }} => 31 + {{ 100 |random(start=1, step=10) }} => 51 + + .. _other_useful_filters: Other Useful Filters -------------------- +To concatenate a list into a string:: + + {{ list | join(" ") }} + To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':: {{ path | basename }} @@ -240,6 +291,14 @@ doesn't know it is a boolean value:: - debug: msg=test when: some_string_value | bool +To replace text in a string with regex, use the "regex_replace" filter:: + + # convert "ansible" to "able" + {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }} + + # convert "foobar" to "bar" + {{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }} + A few useful filters are typically added with each new Ansible release. The development documentation shows how to extend Ansible filters by writing your own as plugins, though in general, we encourage new ones to be added to core so everyone can make use of them. @@ -837,8 +896,11 @@ If multiple variables of the same name are defined in different places, they win * -e variables always win * then comes "most everything else" * then comes variables defined in inventory + * then comes facts discovered about a system * then "role defaults", which are the most "defaulty" and lose in priority to everything. +.. note:: In versions prior to 1.5.4, facts discovered about a system were in the "most everything else" category above. + That seems a little theoretical. Let's show some examples and where you would choose to put what based on the kind of control you might want over values. @@ -880,7 +942,7 @@ See :doc:`playbooks_roles` for more info about this:: --- # file: roles/x/defaults/main.yml - # if not overriden in inventory or as a parameter, this is the value that will be used + # if not overridden in inventory or as a parameter, this is the value that will be used http_port: 80 if you are writing a role and want to ensure the value in the role is absolutely used in that role, and is not going to be overridden diff --git a/docsite/rst/playbooks_vault.rst b/docsite/rst/playbooks_vault.rst index 20981215657..991c58f16ce 100644 --- a/docsite/rst/playbooks_vault.rst +++ b/docsite/rst/playbooks_vault.rst @@ -14,7 +14,7 @@ What Can Be Encrypted With Vault The vault feature can encrypt any structured data file used by Ansible. This can include "group_vars/" or "host_vars/" inventory variables, variables loaded by "include_vars" or "vars_files", or variable files passed on the ansible-playbook command line with "-e @file.yml" or "-e @file.json". Role variables and defaults are also included! -Because Ansible tasks, handlers, and so on are also data, these two can also be encrypted with vault. If you'd like to not betray what variables you are even using, you can go as far to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :) +Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with vault. If you'd like to not betray what variables you are even using, you can go as far as to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :) .. 
_creating_files: diff --git a/examples/ansible.cfg b/examples/ansible.cfg index 2edbe361b0b..6e297d4f0e4 100644 --- a/examples/ansible.cfg +++ b/examples/ansible.cfg @@ -22,8 +22,17 @@ sudo_user = root #ask_pass = True transport = smart remote_port = 22 +module_lang = C -# additional paths to search for roles in, colon seperated +# plays will gather facts by default, which contain information about +# the remote system. +# +# smart - gather by default, but don't regather if already gathered +# implicit - gather by default, turn off with gather_facts: False +# explicit - do not gather by default, must say gather_facts: True +gathering = implicit + +# additional paths to search for roles in, colon separated #roles_path = /etc/ansible/roles # uncomment this to disable SSH key host checking @@ -82,7 +91,7 @@ ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} # to revert the behavior to pre-1.3. #error_on_undefined_vars = False -# set plugin path directories here, seperate with colons +# set plugin path directories here, separate with colons action_plugins = /usr/share/ansible_plugins/action_plugins callback_plugins = /usr/share/ansible_plugins/callback_plugins connection_plugins = /usr/share/ansible_plugins/connection_plugins @@ -98,6 +107,20 @@ filter_plugins = /usr/share/ansible_plugins/filter_plugins # set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1 #nocolor = 1 +# the CA certificate path used for validating SSL certs. This path +# should exist on the controlling node, not the target nodes +# common locations: +# RHEL/CentOS: /etc/pki/tls/certs/ca-bundle.crt +# Fedora : /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem +# Ubuntu : /usr/share/ca-certificates/cacert.org/cacert.org.crt +#ca_file_path = + +# the http user-agent string to use when fetching urls. Some web server +# operators block the default urllib user agent as it is frequently used +# by malicious attacks/scripts, so we set it to something unique to +# avoid issues. +#http_user_agent = ansible-agent + [paramiko_connection] # uncomment this line to cause the paramiko connection plugin to not record new host @@ -145,3 +168,14 @@ filter_plugins = /usr/share/ansible_plugins/filter_plugins accelerate_port = 5099 accelerate_timeout = 30 accelerate_connect_timeout = 5.0 + +# The daemon timeout is measured in minutes. This time is measured +# from the last activity to the accelerate daemon. +accelerate_daemon_timeout = 30 + +# If set to yes, accelerate_multi_key will allow multiple +# private keys to be uploaded to it, though each user must +# have access to the system via SSH to add a new key. The default +# is "no". +#accelerate_multi_key = yes + diff --git a/hacking/README.md b/hacking/README.md index 5ac4e3de192..6d65464eee8 100644 --- a/hacking/README.md +++ b/hacking/README.md @@ -17,7 +17,7 @@ and do not wish to install them from your operating system package manager, you can install them from pip $ easy_install pip # if pip is not already available - $ pip install pyyaml jinja2 + $ pip install pyyaml jinja2 nose passlib pycrypto From there, follow ansible instructions on docs.ansible.com as normal. diff --git a/hacking/module_formatter.py b/hacking/module_formatter.py index d5ed3031508..0a36c3951ca 100755 --- a/hacking/module_formatter.py +++ b/hacking/module_formatter.py @@ -185,7 +185,7 @@ def process_module(module, options, env, template, outputname, module_map): fname = module_map[module] # ignore files with extensions - if os.path.basename(fname).find(".") != -1: + if "." 
in os.path.basename(fname): return # use ansible core library to parse out doc metadata YAML and plaintext examples diff --git a/hacking/test-module b/hacking/test-module index 3f7a8a2d648..f293458ad4b 100755 --- a/hacking/test-module +++ b/hacking/test-module @@ -93,6 +93,10 @@ def boilerplate_module(modfile, args, interpreter): # Argument is a YAML file (JSON is a subset of YAML) complex_args = utils.combine_vars(complex_args, utils.parse_yaml_from_file(args[1:])) args='' + elif args.startswith("{"): + # Argument is a YAML document (not a file) + complex_args = utils.combine_vars(complex_args, utils.parse_yaml(args)) + args='' inject = {} if interpreter: diff --git a/lib/ansible/callbacks.py b/lib/ansible/callbacks.py index b56d1f90695..1abfe681cc5 100644 --- a/lib/ansible/callbacks.py +++ b/lib/ansible/callbacks.py @@ -115,6 +115,12 @@ def log_unflock(runner): except OSError: pass +def set_playbook(callback, playbook): + ''' used to notify callback plugins of playbook context ''' + callback.playbook = playbook + for callback_plugin in callback_plugins: + callback_plugin.playbook = playbook + def set_play(callback, play): ''' used to notify callback plugins of context ''' callback.play = play @@ -250,7 +256,7 @@ def regular_generic_msg(hostname, result, oneline, caption): def banner_cowsay(msg): - if msg.find(": [") != -1: + if ": [" in msg: msg = msg.replace("[","") if msg.endswith("]"): msg = msg[:-1] diff --git a/lib/ansible/color.py b/lib/ansible/color.py index e5f6f4d2bae..069684f16c0 100644 --- a/lib/ansible/color.py +++ b/lib/ansible/color.py @@ -15,7 +15,6 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . -import os import sys import constants @@ -37,7 +36,7 @@ else: # curses returns an error (e.g. 
could not find terminal) ANSIBLE_COLOR=False -if os.getenv("ANSIBLE_FORCE_COLOR") is not None: +if constants.ANSIBLE_FORCE_COLOR: ANSIBLE_COLOR=True # --- begin "pretty" diff --git a/lib/ansible/constants.py b/lib/ansible/constants.py index 94070f641f2..ea909243761 100644 --- a/lib/ansible/constants.py +++ b/lib/ansible/constants.py @@ -93,8 +93,8 @@ else: DIST_MODULE_PATH = '/usr/share/ansible/' # check all of these extensions when looking for yaml files for things like -# group variables -YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml" ] +# group variables -- really anything we can load +YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml", ".json" ] # sections in config file DEFAULTS='defaults' @@ -134,6 +134,7 @@ DEFAULT_SU = get_config(p, DEFAULTS, 'su', 'ANSIBLE_SU', False, boolean=True) DEFAULT_SU_FLAGS = get_config(p, DEFAULTS, 'su_flags', 'ANSIBLE_SU_FLAGS', '') DEFAULT_SU_USER = get_config(p, DEFAULTS, 'su_user', 'ANSIBLE_SU_USER', 'root') DEFAULT_ASK_SU_PASS = get_config(p, DEFAULTS, 'ask_su_pass', 'ANSIBLE_ASK_SU_PASS', False, boolean=True) +DEFAULT_GATHERING = get_config(p, DEFAULTS, 'gathering', 'ANSIBLE_GATHERING', 'implicit').lower() DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '/usr/share/ansible_plugins/action_plugins') DEFAULT_CALLBACK_PLUGIN_PATH = get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', '/usr/share/ansible_plugins/callback_plugins') @@ -143,6 +144,7 @@ DEFAULT_VARS_PLUGIN_PATH = get_config(p, DEFAULTS, 'vars_plugins', ' DEFAULT_FILTER_PLUGIN_PATH = get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', '/usr/share/ansible_plugins/filter_plugins') DEFAULT_LOG_PATH = shell_expand_path(get_config(p, DEFAULTS, 'log_path', 'ANSIBLE_LOG_PATH', '')) +ANSIBLE_FORCE_COLOR = get_config(p, DEFAULTS, 'force_color', 'ANSIBLE_FORCE_COLOR', None, boolean=True) ANSIBLE_NOCOLOR = get_config(p, DEFAULTS, 'nocolor', 'ANSIBLE_NOCOLOR', None, boolean=True) ANSIBLE_NOCOWS = get_config(p, DEFAULTS, 'nocows', 'ANSIBLE_NOCOWS', None, boolean=True) DISPLAY_SKIPPED_HOSTS = get_config(p, DEFAULTS, 'display_skipped_hosts', 'DISPLAY_SKIPPED_HOSTS', True, boolean=True) @@ -160,9 +162,11 @@ ZEROMQ_PORT = get_config(p, 'fireball_connection', 'zeromq_po ACCELERATE_PORT = get_config(p, 'accelerate', 'accelerate_port', 'ACCELERATE_PORT', 5099, integer=True) ACCELERATE_TIMEOUT = get_config(p, 'accelerate', 'accelerate_timeout', 'ACCELERATE_TIMEOUT', 30, integer=True) ACCELERATE_CONNECT_TIMEOUT = get_config(p, 'accelerate', 'accelerate_connect_timeout', 'ACCELERATE_CONNECT_TIMEOUT', 1.0, floating=True) +ACCELERATE_DAEMON_TIMEOUT = get_config(p, 'accelerate', 'accelerate_daemon_timeout', 'ACCELERATE_DAEMON_TIMEOUT', 30, integer=True) ACCELERATE_KEYS_DIR = get_config(p, 'accelerate', 'accelerate_keys_dir', 'ACCELERATE_KEYS_DIR', '~/.fireball.keys') ACCELERATE_KEYS_DIR_PERMS = get_config(p, 'accelerate', 'accelerate_keys_dir_perms', 'ACCELERATE_KEYS_DIR_PERMS', '700') ACCELERATE_KEYS_FILE_PERMS = get_config(p, 'accelerate', 'accelerate_keys_file_perms', 'ACCELERATE_KEYS_FILE_PERMS', '600') +ACCELERATE_MULTI_KEY = get_config(p, 'accelerate', 'accelerate_multi_key', 'ACCELERATE_MULTI_KEY', False, boolean=True) PARAMIKO_PTY = get_config(p, 'paramiko_connection', 'pty', 'ANSIBLE_PARAMIKO_PTY', True, boolean=True) # characters included in auto-generated passwords diff --git a/lib/ansible/inventory/__init__.py b/lib/ansible/inventory/__init__.py index 8f74d5ea9e9..830d74c01ef 100644 --- a/lib/ansible/inventory/__init__.py 
+++ b/lib/ansible/inventory/__init__.py @@ -99,12 +99,40 @@ class Inventory(object): self.host_list = os.path.join(self.host_list, "") self.parser = InventoryDirectory(filename=host_list) self.groups = self.parser.groups.values() - elif utils.is_executable(host_list): - self.parser = InventoryScript(filename=host_list) - self.groups = self.parser.groups.values() else: - self.parser = InventoryParser(filename=host_list) - self.groups = self.parser.groups.values() + # check to see if the specified file starts with a + # shebang (#!/), so if an error is raised by the parser + # class we can show a more apropos error + shebang_present = False + try: + inv_file = open(host_list) + first_line = inv_file.readlines()[0] + inv_file.close() + if first_line.startswith('#!'): + shebang_present = True + except: + pass + + if utils.is_executable(host_list): + try: + self.parser = InventoryScript(filename=host_list) + self.groups = self.parser.groups.values() + except: + if not shebang_present: + raise errors.AnsibleError("The file %s is marked as executable, but failed to execute correctly. " % host_list + \ + "If this is not supposed to be an executable script, correct this with `chmod -x %s`." % host_list) + else: + raise + else: + try: + self.parser = InventoryParser(filename=host_list) + self.groups = self.parser.groups.values() + except: + if shebang_present: + raise errors.AnsibleError("The file %s looks like it should be an executable inventory script, but is not marked executable. " % host_list + \ + "Perhaps you want to correct this with `chmod +x %s`?" % host_list) + else: + raise utils.plugins.vars_loader.add_directory(self.basedir(), with_subdir=True) else: @@ -208,12 +236,14 @@ class Inventory(object): """ # The regex used to match on the range, which can be [x] or [x-y]. - pattern_re = re.compile("^(.*)\[([0-9]+)(?:(?:-)([0-9]+))?\](.*)$") + pattern_re = re.compile("^(.*)\[([-]?[0-9]+)(?:(?:-)([0-9]+))?\](.*)$") m = pattern_re.match(pattern) if m: (target, first, last, rest) = m.groups() first = int(first) if last: + if first < 0: + raise errors.AnsibleError("invalid range: negative indices cannot be used as the first item in a range") last = int(last) else: last = first @@ -245,10 +275,13 @@ class Inventory(object): right = 0 left=int(left) right=int(right) - if left != right: - return hosts[left:right] - else: - return [ hosts[left] ] + try: + if left != right: + return hosts[left:right] + else: + return [ hosts[left] ] + except IndexError: + raise errors.AnsibleError("no hosts matching the pattern '%s' were found" % pat) def _create_implicit_localhost(self, pattern): new_host = Host(pattern) @@ -363,9 +396,9 @@ class Inventory(object): vars_results = [ plugin.run(host, vault_password=vault_password) for plugin in self._vars_plugins ] for updated in vars_results: if updated is not None: - vars.update(updated) + vars = utils.combine_vars(vars, updated) - vars.update(host.get_variables()) + vars = utils.combine_vars(vars, host.get_variables()) if self.parser is not None: vars = utils.combine_vars(vars, self.parser.get_host_variables(host)) return vars diff --git a/lib/ansible/inventory/expand_hosts.py b/lib/ansible/inventory/expand_hosts.py index a1db9f1c6a4..b1cc0dcb82f 100644 --- a/lib/ansible/inventory/expand_hosts.py +++ b/lib/ansible/inventory/expand_hosts.py @@ -41,10 +41,7 @@ def detect_range(line = None): Returnes True if the given line contains a pattern, else False. 
''' - if (line.find("[") != -1 and - line.find(":") != -1 and - line.find("]") != -1 and - line.index("[") < line.index(":") < line.index("]")): + if 0 <= line.find("[") < line.find(":") < line.find("]"): return True else: return False diff --git a/lib/ansible/inventory/host.py b/lib/ansible/inventory/host.py index 19b919ac66d..1b3c10f9d4e 100644 --- a/lib/ansible/inventory/host.py +++ b/lib/ansible/inventory/host.py @@ -16,6 +16,7 @@ # along with Ansible. If not, see . import ansible.constants as C +from ansible import utils class Host(object): ''' a single ansible host ''' @@ -56,7 +57,7 @@ class Host(object): results = {} groups = self.get_groups() for group in sorted(groups, key=lambda g: g.depth): - results.update(group.get_variables()) + results = utils.combine_vars(results, group.get_variables()) results.update(self.vars) results['inventory_hostname'] = self.name results['inventory_hostname_short'] = self.name.split('.')[0] diff --git a/lib/ansible/inventory/ini.py b/lib/ansible/inventory/ini.py index c50fae61164..9863de17b8e 100644 --- a/lib/ansible/inventory/ini.py +++ b/lib/ansible/inventory/ini.py @@ -23,6 +23,7 @@ from ansible.inventory.group import Group from ansible.inventory.expand_hosts import detect_range from ansible.inventory.expand_hosts import expand_hostname_range from ansible import errors +from ansible import utils import shlex import re import ast @@ -47,6 +48,20 @@ class InventoryParser(object): self._parse_group_variables() return self.groups + @staticmethod + def _parse_value(v): + if "#" not in v: + try: + return ast.literal_eval(v) + # Using explicit exceptions. + # Likely a string that literal_eval does not like. We will then just set it. + except ValueError: + # For some reason this was thought to be malformed. + pass + except SyntaxError: + # Is this a hash with an equals at the end? 
+ pass + return v # [webservers] # alpha @@ -65,10 +80,10 @@ class InventoryParser(object): active_group_name = 'ungrouped' for line in self.lines: - line = line.split("#")[0].strip() + line = utils.before_comment(line).strip() if line.startswith("[") and line.endswith("]"): active_group_name = line.replace("[","").replace("]","") - if line.find(":vars") != -1 or line.find(":children") != -1: + if ":vars" in line or ":children" in line: active_group_name = active_group_name.rsplit(":", 1)[0] if active_group_name not in self.groups: new_group = self.groups[active_group_name] = Group(name=active_group_name) @@ -94,11 +109,11 @@ class InventoryParser(object): # FQDN foo.example.com if hostname.count(".") == 1: (hostname, port) = hostname.rsplit(".", 1) - elif (hostname.find("[") != -1 and - hostname.find("]") != -1 and - hostname.find(":") != -1 and + elif ("[" in hostname and + "]" in hostname and + ":" in hostname and (hostname.rindex("]") < hostname.rindex(":")) or - (hostname.find("]") == -1 and hostname.find(":") != -1)): + ("]" not in hostname and ":" in hostname)): (hostname, port) = hostname.rsplit(":", 1) hostnames = [] @@ -122,12 +137,7 @@ class InventoryParser(object): (k,v) = t.split("=", 1) except ValueError, e: raise errors.AnsibleError("Invalid ini entry: %s - %s" % (t, str(e))) - try: - host.set_variable(k,ast.literal_eval(v)) - except: - # most likely a string that literal_eval - # doesn't like, so just set it - host.set_variable(k,v) + host.set_variable(k, self._parse_value(v)) self.groups[active_group_name].add_host(host) # [southeast:children] @@ -141,7 +151,7 @@ class InventoryParser(object): line = line.strip() if line is None or line == '': continue - if line.startswith("[") and line.find(":children]") != -1: + if line.startswith("[") and ":children]" in line: line = line.replace("[","").replace(":children]","") group = self.groups.get(line, None) if group is None: @@ -166,7 +176,7 @@ class InventoryParser(object): group = None for line in self.lines: line = line.strip() - if line.startswith("[") and line.find(":vars]") != -1: + if line.startswith("[") and ":vars]" in line: line = line.replace("[","").replace(":vars]","") group = self.groups.get(line, None) if group is None: @@ -178,16 +188,11 @@ class InventoryParser(object): elif line == '': pass elif group: - if line.find("=") == -1: + if "=" not in line: raise errors.AnsibleError("variables assigned to group must be in key=value form") else: (k, v) = [e.strip() for e in line.split("=", 1)] - # When the value is a single-quoted or double-quoted string - if re.match(r"^(['\"]).*\1$", v): - # Unquote the string - group.set_variable(k, re.sub(r"^['\"]|['\"]$", '', v)) - else: - group.set_variable(k, v) + group.set_variable(k, self._parse_value(v)) def get_host_variables(self, host): return {} diff --git a/lib/ansible/inventory/vars_plugins/group_vars.py b/lib/ansible/inventory/vars_plugins/group_vars.py index 3421565a5fb..93edceeecb5 100644 --- a/lib/ansible/inventory/vars_plugins/group_vars.py +++ b/lib/ansible/inventory/vars_plugins/group_vars.py @@ -86,7 +86,7 @@ def _load_vars_from_path(path, results, vault_password=None): if stat.S_ISDIR(pathstat.st_mode): # support organizing variables across multiple files in a directory - return True, _load_vars_from_folder(path, results) + return True, _load_vars_from_folder(path, results, vault_password=vault_password) # regular file elif stat.S_ISREG(pathstat.st_mode): @@ -105,7 +105,7 @@ def _load_vars_from_path(path, results, vault_password=None): raise 
errors.AnsibleError("Expected a variable file or directory " "but found a non-file object at path %s" % (path, )) -def _load_vars_from_folder(folder_path, results): +def _load_vars_from_folder(folder_path, results, vault_password=None): """ Load all variables within a folder recursively. """ @@ -123,9 +123,10 @@ def _load_vars_from_folder(folder_path, results): # filesystem lists them. names.sort() - paths = [os.path.join(folder_path, name) for name in names] + # do not parse hidden files or dirs, e.g. .svn/ + paths = [os.path.join(folder_path, name) for name in names if not name.startswith('.')] for path in paths: - _found, results = _load_vars_from_path(path, results) + _found, results = _load_vars_from_path(path, results, vault_password=vault_password) return results diff --git a/lib/ansible/module_common.py b/lib/ansible/module_common.py index da02882d935..a6af86d6fcb 100644 --- a/lib/ansible/module_common.py +++ b/lib/ansible/module_common.py @@ -95,7 +95,7 @@ class ModuleReplacer(object): for line in lines: - if line.find(REPLACER) != -1: + if REPLACER in line: output.write(self.slurp(os.path.join(self.snippet_path, "basic.py"))) snippet_names.append('basic') elif line.startswith('from ansible.module_utils.'): @@ -103,7 +103,7 @@ class ModuleReplacer(object): import_error = False if len(tokens) != 3: import_error = True - if line.find(" import *") == -1: + if " import *" not in line: import_error = True if import_error: raise errors.AnsibleError("error importing module in %s, expecting format like 'from ansible.module_utils.basic import *'" % module_path) diff --git a/lib/ansible/module_utils/basic.py b/lib/ansible/module_utils/basic.py index c2be621d4bf..0ab1ad03abe 100644 --- a/lib/ansible/module_utils/basic.py +++ b/lib/ansible/module_utils/basic.py @@ -46,6 +46,7 @@ BOOLEANS = BOOLEANS_TRUE + BOOLEANS_FALSE import os import re +import pipes import shlex import subprocess import sys @@ -54,11 +55,13 @@ import types import time import shutil import stat +import tempfile import traceback import grp import pwd import platform import errno +import tempfile try: import json @@ -112,8 +115,11 @@ FILE_COMMON_ARGUMENTS=dict( backup = dict(), force = dict(), remote_src = dict(), # used by assemble + delimiter = dict(), # used by assemble + directory_mode = dict(), # used by copy ) + def get_platform(): ''' what's the platform? example: Linux is a platform. 
''' return platform.system() @@ -188,7 +194,7 @@ class AnsibleModule(object): os.environ['LANG'] = MODULE_LANG (self.params, self.args) = self._load_params() - self._legal_inputs = [ 'CHECKMODE', 'NO_LOG' ] + self._legal_inputs = ['CHECKMODE', 'NO_LOG'] self.aliases = self._handle_aliases() @@ -214,6 +220,9 @@ class AnsibleModule(object): if not self.no_log: self._log_invocation() + # finally, make sure we're in a sane working dir + self._set_cwd() + def load_file_common_arguments(self, params): ''' many modules deal with files, this encapsulates common @@ -461,7 +470,7 @@ class AnsibleModule(object): changed = True return changed - def set_file_attributes_if_different(self, file_args, changed): + def set_fs_attributes_if_different(self, file_args, changed): # set modes owners and context as needed changed = self.set_context_if_different( file_args['path'], file_args['secontext'], changed @@ -478,19 +487,10 @@ class AnsibleModule(object): return changed def set_directory_attributes_if_different(self, file_args, changed): - changed = self.set_context_if_different( - file_args['path'], file_args['secontext'], changed - ) - changed = self.set_owner_if_different( - file_args['path'], file_args['owner'], changed - ) - changed = self.set_group_if_different( - file_args['path'], file_args['group'], changed - ) - changed = self.set_mode_if_different( - file_args['path'], file_args['mode'], changed - ) - return changed + return self.set_fs_attributes_if_different(file_args, changed) + + def set_file_attributes_if_different(self, file_args, changed): + return self.set_fs_attributes_if_different(file_args, changed) def add_path_info(self, kwargs): ''' @@ -571,8 +571,9 @@ class AnsibleModule(object): def _check_invalid_arguments(self): for (k,v) in self.params.iteritems(): - if k in ('CHECKMODE', 'NO_LOG'): - continue + # these should be in legal inputs already + #if k in ('CHECKMODE', 'NO_LOG'): + # continue if k not in self._legal_inputs: self.fail_json(msg="unsupported parameter for module: %s" % k) @@ -686,6 +687,8 @@ class AnsibleModule(object): if not isinstance(value, list): if isinstance(value, basestring): self.params[k] = value.split(",") + elif isinstance(value, int) or isinstance(value, float): + self.params[k] = [ str(value) ] else: is_invalid = True elif wanted == 'dict': @@ -805,6 +808,12 @@ class AnsibleModule(object): else: msg = 'Invoked' + # 6655 - allow for accented characters + try: + msg = unicode(msg).encode('utf8') + except UnicodeDecodeError, e: + pass + if (has_journal): journal_args = ["MESSAGE=%s %s" % (module, msg)] journal_args.append("MODULE=%s" % os.path.basename(__file__)) @@ -815,10 +824,30 @@ class AnsibleModule(object): except IOError, e: # fall back to syslog since logging to journal failed syslog.openlog(str(module), 0, syslog.LOG_USER) - syslog.syslog(syslog.LOG_NOTICE, unicode(msg).encode('utf8')) + syslog.syslog(syslog.LOG_NOTICE, msg) #1 else: syslog.openlog(str(module), 0, syslog.LOG_USER) - syslog.syslog(syslog.LOG_NOTICE, unicode(msg).encode('utf8')) + syslog.syslog(syslog.LOG_NOTICE, msg) #2 + + def _set_cwd(self): + try: + cwd = os.getcwd() + if not os.access(cwd, os.F_OK|os.R_OK): + raise + return cwd + except: + # we don't have access to the cwd, probably because of sudo. 
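+            # (os.getcwd() itself raises OSError if the directory has been
+            # deleted, and the os.access() check above also covers the
+            # common sudo case where the caller's $PWD is not readable)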
+            # Try and move to a neutral location to prevent errors
+            for cwd in [os.path.expandvars('$HOME'), tempfile.gettempdir()]:
+                try:
+                    if os.access(cwd, os.F_OK|os.R_OK):
+                        os.chdir(cwd)
+                        return cwd
+                except:
+                    pass
+        # we won't error here, as it may *not* be a problem,
+        # and we don't want to break modules unnecessarily
+        return None

     def get_bin_path(self, arg, required=False, opt_dirs=[]):
         '''
@@ -865,6 +894,9 @@ class AnsibleModule(object):
         for encoding in ("utf-8", "latin-1", "unicode_escape"):
             try:
                 return json.dumps(data, encoding=encoding)
+            # Old systems using the simplejson module do not support the encoding keyword.
+            except TypeError, e:
+                return json.dumps(data)
             except UnicodeDecodeError, e:
                 continue
         self.fail_json(msg='Invalid unicode encoding encountered')
@@ -944,11 +976,12 @@ class AnsibleModule(object):
         it uses os.rename to ensure this as it is an atomic operation, rest of the function is
         to work around limitations, corner cases and ensure selinux context is saved if possible'''
         context = None
+        dest_stat = None
         if os.path.exists(dest):
             try:
-                st = os.stat(dest)
-                os.chmod(src, st.st_mode & 07777)
-                os.chown(src, st.st_uid, st.st_gid)
+                dest_stat = os.stat(dest)
+                os.chmod(src, dest_stat.st_mode & 07777)
+                os.chown(src, dest_stat.st_uid, dest_stat.st_gid)
             except OSError, e:
                 if e.errno != errno.EPERM:
                     raise
@@ -958,8 +991,10 @@ class AnsibleModule(object):
         if self.selinux_enabled():
             context = self.selinux_default_context(dest)

+        creating = not os.path.exists(dest)
+
         try:
-            # Optimistically try a rename, solves some corner cases and can avoid useless work.
+            # Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
            os.rename(src, dest)
         except (IOError,OSError), e:
             # only try workarounds for errno 18 (cross device), 1 (not permitted) and 13 (permission denied)
@@ -968,31 +1003,40 @@ class AnsibleModule(object):
                 dest_dir = os.path.dirname(dest)
                 dest_file = os.path.basename(dest)
-                tmp_dest = "%s/.%s.%s.%s" % (dest_dir,dest_file,os.getpid(),time.time())
+                tmp_dest = tempfile.NamedTemporaryFile(
+                    prefix=".ansible_tmp", dir=dest_dir, suffix=dest_file)

                 try: # leaves tmp file behind when sudo and not root
                     if os.getenv("SUDO_USER") and os.getuid() != 0:
                         # cleanup will happen by 'rm' of tempdir
-                        shutil.copy(src, tmp_dest)
+                        # copy2 will preserve some metadata
+                        shutil.copy2(src, tmp_dest.name)
                     else:
-                        shutil.move(src, tmp_dest)
+                        shutil.move(src, tmp_dest.name)
                     if self.selinux_enabled():
-                        self.set_context_if_different(tmp_dest, context, False)
-                    os.rename(tmp_dest, dest)
+                        self.set_context_if_different(
+                            tmp_dest.name, context, False)
+                    if dest_stat:
+                        os.chown(tmp_dest.name, dest_stat.st_uid, dest_stat.st_gid)
+                    os.rename(tmp_dest.name, dest)
                 except (shutil.Error, OSError, IOError), e:
-                    self.cleanup(tmp_dest)
+                    self.cleanup(tmp_dest.name)
                     self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, e))

+        if creating and os.getenv("SUDO_USER"):
+            os.chown(dest, os.getuid(), os.getgid())
+
         if self.selinux_enabled():
             # rename might not preserve context
             self.set_context_if_different(dest, context, False)

-    def run_command(self, args, check_rc=False, close_fds=False, executable=None, data=None, binary_data=False, path_prefix=None):
+    def run_command(self, args, check_rc=False, close_fds=False, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False):
         '''
         Execute a command, returns rc, stdout, and stderr.
         args is the command to run
         If args is a list, the command will be run with shell=False.
-        Otherwise, the command will be run with shell=True when args is a string.
+        If args is a string and use_unsafe_shell=False it will split args into a list and run with shell=False
+        If args is a string and use_unsafe_shell=True it will run with shell=True.

         Other arguments:
         - check_rc (boolean)  Whether to call fail_json in case of
                               non zero RC.  Default is False.
@@ -1001,13 +1045,24 @@ class AnsibleModule(object):
         - executable (string) See documentation for subprocess.Popen().
                               Default is None.
         '''
+
+        shell = False
         if isinstance(args, list):
-            shell = False
-        elif isinstance(args, basestring):
+            if use_unsafe_shell:
+                args = " ".join([pipes.quote(x) for x in args])
+                shell = True
+        elif isinstance(args, basestring) and use_unsafe_shell:
             shell = True
+        elif isinstance(args, basestring):
+            args = shlex.split(args.encode('utf-8'))
         else:
             msg = "Argument 'args' to run_command must be list or string"
             self.fail_json(rc=257, cmd=args, msg=msg)
+
+        # expand things like $HOME and ~
+        if not shell:
+            args = [ os.path.expandvars(os.path.expanduser(x)) for x in args ]
+
         rc = 0
         msg = None
         st_in = None
@@ -1017,41 +1072,85 @@ class AnsibleModule(object):
         if path_prefix:
             env['PATH']="%s:%s" % (path_prefix, env['PATH'])

+        # create a printable version of the command for use
+        # in reporting later, which strips out things like
+        # passwords from the args list
+        if isinstance(args, list):
+            clean_args = " ".join(pipes.quote(arg) for arg in args)
+        else:
+            clean_args = args
+
+        # all clean strings should return two match groups,
+        # where the first is the CLI argument and the second
+        # is the password/key/phrase that will be hidden
+        clean_re_strings = [
+            # this removes things like --password, --pass, --pass-wd, etc.
+            # optionally followed by an '=' or a space.
The password can + # be quoted or not too, though it does not care about quotes + # that are not balanced + # source: http://blog.stevenlevithan.com/archives/match-quoted-string + r'([-]{0,2}pass[-]?(?:word|wd)?[=\s]?)((?:["\'])?(?:[^\s])*(?:\1)?)', + # TODO: add more regex checks here + ] + for re_str in clean_re_strings: + r = re.compile(re_str) + clean_args = r.sub(r'\1********', clean_args) + if data: st_in = subprocess.PIPE + + kwargs = dict( + executable=executable, + shell=shell, + close_fds=close_fds, + stdin= st_in, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE + ) + + if path_prefix: + kwargs['env'] = env + if cwd and os.path.isdir(cwd): + kwargs['cwd'] = cwd + + # store the pwd + prev_dir = os.getcwd() + + # make sure we're in the right working directory + if cwd and os.path.isdir(cwd): + try: + os.chdir(cwd) + except (OSError, IOError), e: + self.fail_json(rc=e.errno, msg="Could not open %s , %s" % (cwd, str(e))) + try: - if path_prefix is not None: - cmd = subprocess.Popen(args, - executable=executable, - shell=shell, - close_fds=close_fds, - stdin=st_in, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - env=env) - else: - cmd = subprocess.Popen(args, - executable=executable, - shell=shell, - close_fds=close_fds, - stdin=st_in, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE) - + cmd = subprocess.Popen(args, **kwargs) + if data: if not binary_data: - data += '\\n' + data += '\n' out, err = cmd.communicate(input=data) rc = cmd.returncode except (OSError, IOError), e: - self.fail_json(rc=e.errno, msg=str(e), cmd=args) + self.fail_json(rc=e.errno, msg=str(e), cmd=clean_args) except: - self.fail_json(rc=257, msg=traceback.format_exc(), cmd=args) + self.fail_json(rc=257, msg=traceback.format_exc(), cmd=clean_args) + if rc != 0 and check_rc: msg = err.rstrip() - self.fail_json(cmd=args, rc=rc, stdout=out, stderr=err, msg=msg) + self.fail_json(cmd=clean_args, rc=rc, stdout=out, stderr=err, msg=msg) + + # reset the pwd + os.chdir(prev_dir) + return (rc, out, err) + def append_to_file(self, filename, str): + filename = os.path.expandvars(os.path.expanduser(filename)) + fh = open(filename, 'a') + fh.write(str) + fh.close() + def pretty_bytes(self,size): ranges = ( (1<<70L, 'ZB'), @@ -1068,4 +1167,5 @@ class AnsibleModule(object): break return '%.2f %s' % (float(size)/ limit, suffix) - +def get_module_path(): + return os.path.dirname(os.path.realpath(__file__)) diff --git a/lib/ansible/module_utils/ec2.py b/lib/ansible/module_utils/ec2.py index 9156df766b2..98f9da92d49 100644 --- a/lib/ansible/module_utils/ec2.py +++ b/lib/ansible/module_utils/ec2.py @@ -1,3 +1,31 @@ +# This code is part of Ansible, but is an independent component. +# This particular file snippet, and this file snippet only, is BSD licensed. +# Modules you write using this snippet, which is embedded dynamically by Ansible +# still belong to the author of the module, and may assign their own license +# to the complete work. +# +# Copyright (c), Michael DeHaan , 2012-2013 +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without modification, +# are permitted provided that the following conditions are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. 
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 try:
     from distutils.version import LooseVersion
     HAS_LOOSE_VERSION = True
@@ -14,33 +42,44 @@ AWS_REGIONS = ['ap-northeast-1',
                'us-west-2']

-def ec2_argument_keys_spec():
+def aws_common_argument_spec():
     return dict(
+        ec2_url=dict(),
         aws_secret_key=dict(aliases=['ec2_secret_key', 'secret_key'], no_log=True),
         aws_access_key=dict(aliases=['ec2_access_key', 'access_key']),
+        validate_certs=dict(default=True, type='bool'),
+        security_token=dict(no_log=True),
+        profile=dict(),
     )


 def ec2_argument_spec():
-    spec = ec2_argument_keys_spec()
+    spec = aws_common_argument_spec()
     spec.update(
         dict(
             region=dict(aliases=['aws_region', 'ec2_region'], choices=AWS_REGIONS),
-            validate_certs=dict(default=True, type='bool'),
-            ec2_url=dict(),
         )
     )
     return spec

-def get_ec2_creds(module):
+def boto_supports_profile_name():
+    return hasattr(boto.ec2.EC2Connection, 'profile_name')
+
+
+def get_aws_connection_info(module):

     # Check module args for credentials, then check environment vars
+    # access_key

     ec2_url = module.params.get('ec2_url')
-    ec2_secret_key = module.params.get('aws_secret_key')
-    ec2_access_key = module.params.get('aws_access_key')
+    access_key = module.params.get('aws_access_key')
+    secret_key = module.params.get('aws_secret_key')
+    security_token = module.params.get('security_token')
     region = module.params.get('region')
+    profile_name = module.params.get('profile')
+    validate_certs = module.params.get('validate_certs')

     if not ec2_url:
         if 'EC2_URL' in os.environ:
@@ -48,21 +87,27 @@ def get_ec2_creds(module):
         elif 'AWS_URL' in os.environ:
             ec2_url = os.environ['AWS_URL']

-    if not ec2_access_key:
+    if not access_key:
         if 'EC2_ACCESS_KEY' in os.environ:
-            ec2_access_key = os.environ['EC2_ACCESS_KEY']
+            access_key = os.environ['EC2_ACCESS_KEY']
         elif 'AWS_ACCESS_KEY_ID' in os.environ:
-            ec2_access_key = os.environ['AWS_ACCESS_KEY_ID']
+            access_key = os.environ['AWS_ACCESS_KEY_ID']
         elif 'AWS_ACCESS_KEY' in os.environ:
-            ec2_access_key = os.environ['AWS_ACCESS_KEY']
+            access_key = os.environ['AWS_ACCESS_KEY']
+        else:
+            # in case access_key came in as empty string
+            access_key = None

-    if not ec2_secret_key:
+    if not secret_key:
         if 'EC2_SECRET_KEY' in os.environ:
-            ec2_secret_key = os.environ['EC2_SECRET_KEY']
+            secret_key = os.environ['EC2_SECRET_KEY']
         elif 'AWS_SECRET_ACCESS_KEY' in os.environ:
-            ec2_secret_key = os.environ['AWS_SECRET_ACCESS_KEY']
+            secret_key = os.environ['AWS_SECRET_ACCESS_KEY']
         elif 'AWS_SECRET_KEY' in os.environ:
-            ec2_secret_key = os.environ['AWS_SECRET_KEY']
+            secret_key = os.environ['AWS_SECRET_KEY']
+        else:
+            # in case secret_key came in as empty string
+            secret_key = None

     if not region:
         if 'EC2_REGION' in os.environ:
@@ -71,39 +116,75 @@ def get_ec2_creds(module):
             region =
os.environ['AWS_REGION']
         else:
             # boto.config.get returns None if config not found
-            region = boto.config.get('Boto', 'aws_region')
+            region = boto.config.get('Boto', 'aws_region')
             if not region:
                 region = boto.config.get('Boto', 'ec2_region')

-    return ec2_url, ec2_access_key, ec2_secret_key, region
+    if not security_token:
+        if 'AWS_SECURITY_TOKEN' in os.environ:
+            security_token = os.environ['AWS_SECURITY_TOKEN']
+        else:
+            # in case security_token came in as empty string
+            security_token = None
+
+    boto_params = dict(aws_access_key_id=access_key,
+                       aws_secret_access_key=secret_key,
+                       security_token=security_token)
+
+    # profile_name only works as a key in boto >= 2.24
+    # so only set profile_name if passed as an argument
+    if profile_name:
+        if not boto_supports_profile_name():
+            module.fail_json(msg="boto does not support profile_name before 2.24")
+        boto_params['profile_name'] = profile_name
+
+    if validate_certs and HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
+        boto_params['validate_certs'] = validate_certs
+
+    return region, ec2_url, boto_params
+
+
+def get_ec2_creds(module):
+    ''' for compatibility with old modules that don't/can't yet
+        use the ec2_connect method '''
+    region, ec2_url, boto_params = get_aws_connection_info(module)
+    return ec2_url, boto_params['aws_access_key_id'], boto_params['aws_secret_access_key'], region
+
+
+def boto_fix_security_token_in_profile(conn, profile_name):
+    ''' monkey patch for boto issue boto/boto#2100 '''
+    profile = 'profile ' + profile_name
+    if boto.config.has_option(profile, 'aws_security_token'):
+        conn.provider.set_security_token(boto.config.get(profile, 'aws_security_token'))
+    return conn
+
+
+def connect_to_aws(aws_module, region, **params):
+    conn = aws_module.connect_to_region(region, **params)
+    if params.get('profile_name'):
+        conn = boto_fix_security_token_in_profile(conn, params['profile_name'])
+    return conn


 def ec2_connect(module):

     """ Return an ec2 connection"""

-    ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)
-    validate_certs = module.params.get('validate_certs', True)
+    region, ec2_url, boto_params = get_aws_connection_info(module)

     # If we have a region specified, connect to its endpoint.
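+    # (by this point `region` already reflects the lookup order above:
+    # module argument, then EC2_REGION/AWS_REGION in the environment,
+    # then boto's own configuration)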
     if region:
         try:
-            if HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
-                ec2 = boto.ec2.connect_to_region(region, aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key, validate_certs=validate_certs)
-            else:
-                ec2 = boto.ec2.connect_to_region(region, aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key)
+            ec2 = connect_to_aws(boto.ec2, region, **boto_params)
         except boto.exception.NoAuthHandlerFound, e:
-            module.fail_json(msg = str(e))
+            module.fail_json(msg=str(e))
     # Otherwise, no region so we fallback to the old connection method
     elif ec2_url:
         try:
-            if HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
-                ec2 = boto.connect_ec2_endpoint(ec2_url, aws_access_key, aws_secret_key, validate_certs=validate_certs)
-            else:
-                ec2 = boto.connect_ec2_endpoint(ec2_url, aws_access_key, aws_secret_key)
+            ec2 = boto.connect_ec2_endpoint(ec2_url, **boto_params)
         except boto.exception.NoAuthHandlerFound, e:
-            module.fail_json(msg = str(e))
+            module.fail_json(msg=str(e))
     else:
         module.fail_json(msg="Either region or ec2_url must be specified")
-    return ec2
+    return ec2
diff --git a/lib/ansible/module_utils/facts.py b/lib/ansible/module_utils/facts.py
new file mode 100644
index 00000000000..c056404210f
--- /dev/null
+++ b/lib/ansible/module_utils/facts.py
@@ -0,0 +1,2345 @@
+# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+
+import os
+import array
+import errno
+import fcntl
+import fnmatch
+import glob
+import platform
+import re
+import signal
+import socket
+import struct
+import datetime
+import getpass
+import ConfigParser
+import StringIO
+
+try:
+    import selinux
+    HAVE_SELINUX=True
+except ImportError:
+    HAVE_SELINUX=False
+
+try:
+    import json
+except ImportError:
+    import simplejson as json
+
+# --------------------------------------------------------------
+# timeout function to make sure some fact gathering
+# steps do not exceed a time limit
+
+class TimeoutError(Exception):
+    pass
+
+def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
+    def decorator(func):
+        def _handle_timeout(signum, frame):
+            raise TimeoutError(error_message)
+
+        def wrapper(*args, **kwargs):
+            signal.signal(signal.SIGALRM, _handle_timeout)
+            signal.alarm(seconds)
+            try:
+                result = func(*args, **kwargs)
+            finally:
+                signal.alarm(0)
+            return result
+
+        return wrapper
+
+    return decorator
+
+# --------------------------------------------------------------
+
+class Facts(object):
+    """
+    This class should only attempt to populate those facts that
+    are mostly generic to all systems.  This includes platform facts,
+    service facts (e.g. ssh keys or selinux), and distribution facts.
+    Anything that requires extensive code or may have more than one
+    possible implementation to establish facts for a given topic should
+    subclass Facts.
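For context, the `timeout` decorator defined above is what keeps slow fact-gathering steps from hanging a run: `get_mount_facts()` further down is wrapped with `@timeout(10)` and the resulting `TimeoutError` is swallowed in `populate()`. A minimal sketch of the behaviour -- Unix-only and main-thread-only, since it relies on SIGALRM, and `slow_step` is a made-up function:

```python
@timeout(2)
def slow_step():
    import time
    time.sleep(5)          # exceeds the 2 second budget, so SIGALRM fires
    return 'unreachable'

try:
    slow_step()
except TimeoutError:
    pass                   # the caller simply skips the facts it could not get
```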
+ """ + + _I386RE = re.compile(r'i[3456]86') + # For the most part, we assume that platform.dist() will tell the truth. + # This is the fallback to handle unknowns or exceptions + OSDIST_DICT = { '/etc/redhat-release': 'RedHat', + '/etc/vmware-release': 'VMwareESX', + '/etc/openwrt_release': 'OpenWrt', + '/etc/system-release': 'OtherLinux', + '/etc/alpine-release': 'Alpine', + '/etc/release': 'Solaris', + '/etc/arch-release': 'Archlinux', + '/etc/SuSE-release': 'SuSE', + '/etc/gentoo-release': 'Gentoo', + '/etc/os-release': 'Debian' } + SELINUX_MODE_DICT = { 1: 'enforcing', 0: 'permissive', -1: 'disabled' } + + # A list of dicts. If there is a platform with more than one + # package manager, put the preferred one last. If there is an + # ansible module, use that as the value for the 'name' key. + PKG_MGRS = [ { 'path' : '/usr/bin/yum', 'name' : 'yum' }, + { 'path' : '/usr/bin/apt-get', 'name' : 'apt' }, + { 'path' : '/usr/bin/zypper', 'name' : 'zypper' }, + { 'path' : '/usr/sbin/urpmi', 'name' : 'urpmi' }, + { 'path' : '/usr/bin/pacman', 'name' : 'pacman' }, + { 'path' : '/bin/opkg', 'name' : 'opkg' }, + { 'path' : '/opt/local/bin/pkgin', 'name' : 'pkgin' }, + { 'path' : '/opt/local/bin/port', 'name' : 'macports' }, + { 'path' : '/sbin/apk', 'name' : 'apk' }, + { 'path' : '/usr/sbin/pkg', 'name' : 'pkgng' }, + { 'path' : '/usr/sbin/swlist', 'name' : 'SD-UX' }, + { 'path' : '/usr/bin/emerge', 'name' : 'portage' }, + ] + + def __init__(self): + self.facts = {} + self.get_platform_facts() + self.get_distribution_facts() + self.get_cmdline() + self.get_public_ssh_host_keys() + self.get_selinux_facts() + self.get_pkg_mgr_facts() + self.get_lsb_facts() + self.get_date_time_facts() + self.get_user_facts() + self.get_local_facts() + self.get_env_facts() + + def populate(self): + return self.facts + + # Platform + # platform.system() can be Linux, Darwin, Java, or Windows + def get_platform_facts(self): + self.facts['system'] = platform.system() + self.facts['kernel'] = platform.release() + self.facts['machine'] = platform.machine() + self.facts['python_version'] = platform.python_version() + self.facts['fqdn'] = socket.getfqdn() + self.facts['hostname'] = platform.node().split('.')[0] + self.facts['nodename'] = platform.node() + self.facts['domain'] = '.'.join(self.facts['fqdn'].split('.')[1:]) + arch_bits = platform.architecture()[0] + self.facts['userspace_bits'] = arch_bits.replace('bit', '') + if self.facts['machine'] == 'x86_64': + self.facts['architecture'] = self.facts['machine'] + if self.facts['userspace_bits'] == '64': + self.facts['userspace_architecture'] = 'x86_64' + elif self.facts['userspace_bits'] == '32': + self.facts['userspace_architecture'] = 'i386' + elif Facts._I386RE.search(self.facts['machine']): + self.facts['architecture'] = 'i386' + if self.facts['userspace_bits'] == '64': + self.facts['userspace_architecture'] = 'x86_64' + elif self.facts['userspace_bits'] == '32': + self.facts['userspace_architecture'] = 'i386' + else: + self.facts['architecture'] = self.facts['machine'] + if self.facts['system'] == 'Linux': + self.get_distribution_facts() + elif self.facts['system'] == 'AIX': + rc, out, err = module.run_command("/usr/sbin/bootinfo -p") + data = out.split('\n') + self.facts['architecture'] = data[0] + + + def get_local_facts(self): + + fact_path = module.params.get('fact_path', None) + if not fact_path or not os.path.exists(fact_path): + return + + local = {} + for fn in sorted(glob.glob(fact_path + '/*.fact')): + # where it will sit under local facts + fact_base = 
os.path.basename(fn).replace('.fact','') + if os.access(fn, os.X_OK): + # run it + # try to read it as json first + # if that fails read it with ConfigParser + # if that fails, skip it + rc, out, err = module.run_command(fn) + else: + out = open(fn).read() + + # load raw json + fact = 'loading %s' % fact_base + try: + fact = json.loads(out) + except ValueError, e: + # load raw ini + cp = ConfigParser.ConfigParser() + try: + cp.readfp(StringIO.StringIO(out)) + except ConfigParser.Error, e: + fact="error loading fact - please check content" + else: + fact = {} + #print cp.sections() + for sect in cp.sections(): + if sect not in fact: + fact[sect] = {} + for opt in cp.options(sect): + val = cp.get(sect, opt) + fact[sect][opt]=val + + local[fact_base] = fact + if not local: + return + self.facts['local'] = local + + # platform.dist() is deprecated in 2.6 + # in 2.6 and newer, you should use platform.linux_distribution() + def get_distribution_facts(self): + + # A list with OS Family members + OS_FAMILY = dict( + RedHat = 'RedHat', Fedora = 'RedHat', CentOS = 'RedHat', Scientific = 'RedHat', + SLC = 'RedHat', Ascendos = 'RedHat', CloudLinux = 'RedHat', PSBM = 'RedHat', + OracleLinux = 'RedHat', OVS = 'RedHat', OEL = 'RedHat', Amazon = 'RedHat', + XenServer = 'RedHat', Ubuntu = 'Debian', Debian = 'Debian', SLES = 'Suse', + SLED = 'Suse', OpenSuSE = 'Suse', SuSE = 'Suse', Gentoo = 'Gentoo', Funtoo = 'Gentoo', + Archlinux = 'Archlinux', Mandriva = 'Mandrake', Mandrake = 'Mandrake', + Solaris = 'Solaris', Nexenta = 'Solaris', OmniOS = 'Solaris', OpenIndiana = 'Solaris', + SmartOS = 'Solaris', AIX = 'AIX', Alpine = 'Alpine', MacOSX = 'Darwin', + FreeBSD = 'FreeBSD', HPUX = 'HP-UX' + ) + + if self.facts['system'] == 'AIX': + self.facts['distribution'] = 'AIX' + rc, out, err = module.run_command("/usr/bin/oslevel") + data = out.split('.') + self.facts['distribution_version'] = data[0] + self.facts['distribution_release'] = data[1] + elif self.facts['system'] == 'HP-UX': + self.facts['distribution'] = 'HP-UX' + rc, out, err = module.run_command("/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True) + data = re.search('HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out) + if data: + self.facts['distribution_version'] = data.groups()[0] + self.facts['distribution_release'] = data.groups()[1] + elif self.facts['system'] == 'Darwin': + self.facts['distribution'] = 'MacOSX' + rc, out, err = module.run_command("/usr/bin/sw_vers -productVersion") + data = out.split()[-1] + self.facts['distribution_version'] = data + elif self.facts['system'] == 'FreeBSD': + self.facts['distribution'] = 'FreeBSD' + self.facts['distribution_release'] = platform.release() + self.facts['distribution_version'] = platform.version() + elif self.facts['system'] == 'OpenBSD': + self.facts['distribution'] = 'OpenBSD' + self.facts['distribution_release'] = platform.release() + rc, out, err = module.run_command("/sbin/sysctl -n kern.version") + match = re.match('OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out) + if match: + self.facts['distribution_version'] = match.groups()[0] + else: + self.facts['distribution_version'] = 'release' + else: + dist = platform.dist() + self.facts['distribution'] = dist[0].capitalize() or 'NA' + self.facts['distribution_version'] = dist[1] or 'NA' + self.facts['distribution_major_version'] = dist[1].split('.')[0] or 'NA' + self.facts['distribution_release'] = dist[2] or 'NA' + # Try to handle the exceptions now ... 
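For context on `get_local_facts()` above: each `*.fact` file under `fact_path` may hold JSON or INI (or be executable, in which case its stdout goes through the same parsing), and the result lands under `facts['local'][<basename>]`. A hypothetical fact file in the INI form, pushed through the same fallback parse (the file name and values here are made up):

```python
import ConfigParser, StringIO

EXAMPLE_FACT = """
[general]
owner=infra-team
tier=web
"""

# json.loads(EXAMPLE_FACT) raises ValueError, so the code above falls back
# to ConfigParser; for a file named example.fact this would surface as
# facts['local']['example']['general']['owner'] == 'infra-team'
cp = ConfigParser.ConfigParser()
cp.readfp(StringIO.StringIO(EXAMPLE_FACT))
print cp.get('general', 'owner')    # -> infra-team
```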
+ for (path, name) in Facts.OSDIST_DICT.items(): + if os.path.exists(path): + if self.facts['distribution'] == 'Fedora': + pass + elif name == 'RedHat': + data = get_file_content(path) + if 'Red Hat' in data: + self.facts['distribution'] = name + else: + self.facts['distribution'] = data.split()[0] + elif name == 'OtherLinux': + data = get_file_content(path) + if 'Amazon' in data: + self.facts['distribution'] = 'Amazon' + self.facts['distribution_version'] = data.split()[-1] + elif name == 'OpenWrt': + data = get_file_content(path) + if 'OpenWrt' in data: + self.facts['distribution'] = name + version = re.search('DISTRIB_RELEASE="(.*)"', data) + if version: + self.facts['distribution_version'] = version.groups()[0] + release = re.search('DISTRIB_CODENAME="(.*)"', data) + if release: + self.facts['distribution_release'] = release.groups()[0] + elif name == 'Alpine': + data = get_file_content(path) + self.facts['distribution'] = 'Alpine' + self.facts['distribution_version'] = data + elif name == 'Solaris': + data = get_file_content(path).split('\n')[0] + ora_prefix = '' + if 'Oracle Solaris' in data: + data = data.replace('Oracle ','') + ora_prefix = 'Oracle ' + self.facts['distribution'] = data.split()[0] + self.facts['distribution_version'] = data.split()[1] + self.facts['distribution_release'] = ora_prefix + data + elif name == 'SuSE': + data = get_file_content(path).splitlines() + self.facts['distribution_release'] = data[2].split('=')[1].strip() + elif name == 'Debian': + data = get_file_content(path).split('\n')[0] + release = re.search("PRETTY_NAME.+ \(?([^ ]+?)\)?\"", data) + if release: + self.facts['distribution_release'] = release.groups()[0] + else: + self.facts['distribution'] = name + + self.facts['os_family'] = self.facts['distribution'] + if self.facts['distribution'] in OS_FAMILY: + self.facts['os_family'] = OS_FAMILY[self.facts['distribution']] + + def get_cmdline(self): + data = get_file_content('/proc/cmdline') + if data: + self.facts['cmdline'] = {} + for piece in shlex.split(data): + item = piece.split('=', 1) + if len(item) == 1: + self.facts['cmdline'][item[0]] = True + else: + self.facts['cmdline'][item[0]] = item[1] + + def get_public_ssh_host_keys(self): + dsa_filename = '/etc/ssh/ssh_host_dsa_key.pub' + rsa_filename = '/etc/ssh/ssh_host_rsa_key.pub' + ecdsa_filename = '/etc/ssh/ssh_host_ecdsa_key.pub' + + if self.facts['system'] == 'Darwin': + dsa_filename = '/etc/ssh_host_dsa_key.pub' + rsa_filename = '/etc/ssh_host_rsa_key.pub' + ecdsa_filename = '/etc/ssh_host_ecdsa_key.pub' + dsa = get_file_content(dsa_filename) + rsa = get_file_content(rsa_filename) + ecdsa = get_file_content(ecdsa_filename) + if dsa is None: + dsa = 'NA' + else: + self.facts['ssh_host_key_dsa_public'] = dsa.split()[1] + if rsa is None: + rsa = 'NA' + else: + self.facts['ssh_host_key_rsa_public'] = rsa.split()[1] + if ecdsa is None: + ecdsa = 'NA' + else: + self.facts['ssh_host_key_ecdsa_public'] = ecdsa.split()[1] + + def get_pkg_mgr_facts(self): + self.facts['pkg_mgr'] = 'unknown' + for pkg in Facts.PKG_MGRS: + if os.path.exists(pkg['path']): + self.facts['pkg_mgr'] = pkg['name'] + if self.facts['system'] == 'OpenBSD': + self.facts['pkg_mgr'] = 'openbsd_pkg' + + def get_lsb_facts(self): + lsb_path = module.get_bin_path('lsb_release') + if lsb_path: + rc, out, err = module.run_command([lsb_path, "-a"]) + if rc == 0: + self.facts['lsb'] = {} + for line in out.split('\n'): + if len(line) < 1: + continue + value = line.split(':', 1)[1].strip() + if 'LSB Version:' in line: + 
self.facts['lsb']['release'] = value + elif 'Distributor ID:' in line: + self.facts['lsb']['id'] = value + elif 'Description:' in line: + self.facts['lsb']['description'] = value + elif 'Release:' in line: + self.facts['lsb']['release'] = value + elif 'Codename:' in line: + self.facts['lsb']['codename'] = value + if 'lsb' in self.facts and 'release' in self.facts['lsb']: + self.facts['lsb']['major_release'] = self.facts['lsb']['release'].split('.')[0] + elif lsb_path is None and os.path.exists('/etc/lsb-release'): + self.facts['lsb'] = {} + f = open('/etc/lsb-release', 'r') + try: + for line in f.readlines(): + value = line.split('=',1)[1].strip() + if 'DISTRIB_ID' in line: + self.facts['lsb']['id'] = value + elif 'DISTRIB_RELEASE' in line: + self.facts['lsb']['release'] = value + elif 'DISTRIB_DESCRIPTION' in line: + self.facts['lsb']['description'] = value + elif 'DISTRIB_CODENAME' in line: + self.facts['lsb']['codename'] = value + finally: + f.close() + else: + return self.facts + + if 'lsb' in self.facts and 'release' in self.facts['lsb']: + self.facts['lsb']['major_release'] = self.facts['lsb']['release'].split('.')[0] + + + def get_selinux_facts(self): + if not HAVE_SELINUX: + self.facts['selinux'] = False + return + self.facts['selinux'] = {} + if not selinux.is_selinux_enabled(): + self.facts['selinux']['status'] = 'disabled' + else: + self.facts['selinux']['status'] = 'enabled' + try: + self.facts['selinux']['policyvers'] = selinux.security_policyvers() + except OSError, e: + self.facts['selinux']['policyvers'] = 'unknown' + try: + (rc, configmode) = selinux.selinux_getenforcemode() + if rc == 0: + self.facts['selinux']['config_mode'] = Facts.SELINUX_MODE_DICT.get(configmode, 'unknown') + else: + self.facts['selinux']['config_mode'] = 'unknown' + except OSError, e: + self.facts['selinux']['config_mode'] = 'unknown' + try: + mode = selinux.security_getenforce() + self.facts['selinux']['mode'] = Facts.SELINUX_MODE_DICT.get(mode, 'unknown') + except OSError, e: + self.facts['selinux']['mode'] = 'unknown' + try: + (rc, policytype) = selinux.selinux_getpolicytype() + if rc == 0: + self.facts['selinux']['type'] = policytype + else: + self.facts['selinux']['type'] = 'unknown' + except OSError, e: + self.facts['selinux']['type'] = 'unknown' + + + def get_date_time_facts(self): + self.facts['date_time'] = {} + + now = datetime.datetime.now() + self.facts['date_time']['year'] = now.strftime('%Y') + self.facts['date_time']['month'] = now.strftime('%m') + self.facts['date_time']['weekday'] = now.strftime('%A') + self.facts['date_time']['day'] = now.strftime('%d') + self.facts['date_time']['hour'] = now.strftime('%H') + self.facts['date_time']['minute'] = now.strftime('%M') + self.facts['date_time']['second'] = now.strftime('%S') + self.facts['date_time']['epoch'] = now.strftime('%s') + if self.facts['date_time']['epoch'] == '' or self.facts['date_time']['epoch'][0] == '%': + self.facts['date_time']['epoch'] = str(int(time.time())) + self.facts['date_time']['date'] = now.strftime('%Y-%m-%d') + self.facts['date_time']['time'] = now.strftime('%H:%M:%S') + self.facts['date_time']['iso8601_micro'] = now.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%fZ") + self.facts['date_time']['iso8601'] = now.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ") + self.facts['date_time']['tz'] = time.strftime("%Z") + self.facts['date_time']['tz_offset'] = time.strftime("%z") + + + # User + def get_user_facts(self): + self.facts['user_id'] = getpass.getuser() + + def get_env_facts(self): + self.facts['env'] = {} + for k,v in 
os.environ.iteritems(): + self.facts['env'][k] = v + +class Hardware(Facts): + """ + This is a generic Hardware subclass of Facts. This should be further + subclassed to implement per platform. If you subclass this, it + should define: + - memfree_mb + - memtotal_mb + - swapfree_mb + - swaptotal_mb + - processor (a list) + - processor_cores + - processor_count + + All subclasses MUST define platform. + """ + platform = 'Generic' + + def __new__(cls, *arguments, **keyword): + subclass = cls + for sc in Hardware.__subclasses__(): + if sc.platform == platform.system(): + subclass = sc + return super(cls, subclass).__new__(subclass, *arguments, **keyword) + + def __init__(self): + Facts.__init__(self) + + def populate(self): + return self.facts + +class LinuxHardware(Hardware): + """ + Linux-specific subclass of Hardware. Defines memory and CPU facts: + - memfree_mb + - memtotal_mb + - swapfree_mb + - swaptotal_mb + - processor (a list) + - processor_cores + - processor_count + + In addition, it also defines number of DMI facts and device facts. + """ + + platform = 'Linux' + MEMORY_FACTS = ['MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'] + + def __init__(self): + Hardware.__init__(self) + + def populate(self): + self.get_cpu_facts() + self.get_memory_facts() + self.get_dmi_facts() + self.get_device_facts() + try: + self.get_mount_facts() + except TimeoutError: + pass + return self.facts + + def get_memory_facts(self): + if not os.access("/proc/meminfo", os.R_OK): + return + for line in open("/proc/meminfo").readlines(): + data = line.split(":", 1) + key = data[0] + if key in LinuxHardware.MEMORY_FACTS: + val = data[1].strip().split(' ')[0] + self.facts["%s_mb" % key.lower()] = long(val) / 1024 + + def get_cpu_facts(self): + i = 0 + physid = 0 + coreid = 0 + sockets = {} + cores = {} + if not os.access("/proc/cpuinfo", os.R_OK): + return + self.facts['processor'] = [] + for line in open("/proc/cpuinfo").readlines(): + data = line.split(":", 1) + key = data[0].strip() + # model name is for Intel arch, Processor (mind the uppercase P) + # works for some ARM devices, like the Sheevaplug. + if key == 'model name' or key == 'Processor': + if 'processor' not in self.facts: + self.facts['processor'] = [] + self.facts['processor'].append(data[1].strip()) + i += 1 + elif key == 'physical id': + physid = data[1].strip() + if physid not in sockets: + sockets[physid] = 1 + elif key == 'core id': + coreid = data[1].strip() + if coreid not in sockets: + cores[coreid] = 1 + elif key == 'cpu cores': + sockets[physid] = int(data[1].strip()) + elif key == 'siblings': + cores[coreid] = int(data[1].strip()) + self.facts['processor_count'] = sockets and len(sockets) or i + self.facts['processor_cores'] = sockets.values() and sockets.values()[0] or 1 + self.facts['processor_threads_per_core'] = ((cores.values() and + cores.values()[0] or 1) / self.facts['processor_cores']) + self.facts['processor_vcpus'] = (self.facts['processor_threads_per_core'] * + self.facts['processor_count'] * self.facts['processor_cores']) + + def get_dmi_facts(self): + ''' learn dmi facts from system + + Try /sys first for dmi related facts. 
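The `__new__` override at the top of `Hardware` is the dispatch idiom this file relies on for each of its platform hierarchies (the `Network` class at the end of this section follows the same convention): instantiating the base class silently hands back whichever direct subclass declares a matching `platform` attribute. A stripped-down sketch of the idiom, with illustrative class names:

```python
import platform

class Base(object):
    platform = 'Generic'

    def __new__(cls, *args, **kwargs):
        # pick the subclass whose `platform` matches the running system
        subclass = cls
        for sc in Base.__subclasses__():
            if sc.platform == platform.system():
                subclass = sc
        return super(cls, subclass).__new__(subclass, *args, **kwargs)

class LinuxBase(Base):
    platform = 'Linux'

# On a Linux host, Base() actually constructs a LinuxBase, so callers never
# need to know which platform-specific implementation they received.
```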
+ If that is not available, fall back to dmidecode executable ''' + + if os.path.exists('/sys/devices/virtual/dmi/id/product_name'): + # Use kernel DMI info, if available + + # DMI SPEC -- http://www.dmtf.org/sites/default/files/standards/documents/DSP0134_2.7.0.pdf + FORM_FACTOR = [ "Unknown", "Other", "Unknown", "Desktop", + "Low Profile Desktop", "Pizza Box", "Mini Tower", "Tower", + "Portable", "Laptop", "Notebook", "Hand Held", "Docking Station", + "All In One", "Sub Notebook", "Space-saving", "Lunch Box", + "Main Server Chassis", "Expansion Chassis", "Sub Chassis", + "Bus Expansion Chassis", "Peripheral Chassis", "RAID Chassis", + "Rack Mount Chassis", "Sealed-case PC", "Multi-system", + "CompactPCI", "AdvancedTCA", "Blade" ] + + DMI_DICT = { + 'bios_date': '/sys/devices/virtual/dmi/id/bios_date', + 'bios_version': '/sys/devices/virtual/dmi/id/bios_version', + 'form_factor': '/sys/devices/virtual/dmi/id/chassis_type', + 'product_name': '/sys/devices/virtual/dmi/id/product_name', + 'product_serial': '/sys/devices/virtual/dmi/id/product_serial', + 'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid', + 'product_version': '/sys/devices/virtual/dmi/id/product_version', + 'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor' + } + + for (key,path) in DMI_DICT.items(): + data = get_file_content(path) + if data is not None: + if key == 'form_factor': + try: + self.facts['form_factor'] = FORM_FACTOR[int(data)] + except IndexError, e: + self.facts['form_factor'] = 'unknown (%s)' % data + else: + self.facts[key] = data + else: + self.facts[key] = 'NA' + + else: + # Fall back to using dmidecode, if available + dmi_bin = module.get_bin_path('dmidecode') + DMI_DICT = { + 'bios_date': 'bios-release-date', + 'bios_version': 'bios-version', + 'form_factor': 'chassis-type', + 'product_name': 'system-product-name', + 'product_serial': 'system-serial-number', + 'product_uuid': 'system-uuid', + 'product_version': 'system-version', + 'system_vendor': 'system-manufacturer' + } + for (k, v) in DMI_DICT.items(): + if dmi_bin is not None: + (rc, out, err) = module.run_command('%s -s %s' % (dmi_bin, v)) + if rc == 0: + # Strip out commented lines (specific dmidecode output) + thisvalue = ''.join([ line for line in out.split('\n') if not line.startswith('#') ]) + try: + json.dumps(thisvalue) + except UnicodeDecodeError: + thisvalue = "NA" + + self.facts[k] = thisvalue + else: + self.facts[k] = 'NA' + else: + self.facts[k] = 'NA' + + @timeout(10) + def get_mount_facts(self): + self.facts['mounts'] = [] + mtab = get_file_content('/etc/mtab', '') + for line in mtab.split('\n'): + if line.startswith('/'): + fields = line.rstrip('\n').split() + if(fields[2] != 'none'): + size_total = None + size_available = None + try: + statvfs_result = os.statvfs(fields[1]) + size_total = statvfs_result.f_bsize * statvfs_result.f_blocks + size_available = statvfs_result.f_bsize * (statvfs_result.f_bavail) + except OSError, e: + continue + + self.facts['mounts'].append( + {'mount': fields[1], + 'device':fields[0], + 'fstype': fields[2], + 'options': fields[3], + # statvfs data + 'size_total': size_total, + 'size_available': size_available, + }) + + def get_device_facts(self): + self.facts['devices'] = {} + lspci = module.get_bin_path('lspci') + if lspci: + rc, pcidata, err = module.run_command([lspci, '-D']) + else: + pcidata = None + + try: + block_devs = os.listdir("/sys/block") + except OSError: + return + + for block in block_devs: + virtual = 1 + sysfs_no_links = 0 + try: + path = 
os.readlink(os.path.join("/sys/block/", block)) + except OSError, e: + if e.errno == errno.EINVAL: + path = block + sysfs_no_links = 1 + else: + continue + if "virtual" in path: + continue + sysdir = os.path.join("/sys/block", path) + if sysfs_no_links == 1: + for folder in os.listdir(sysdir): + if "device" in folder: + virtual = 0 + break + if virtual: + continue + d = {} + diskname = os.path.basename(sysdir) + for key in ['vendor', 'model']: + d[key] = get_file_content(sysdir + "/device/" + key) + + for key,test in [ ('removable','/removable'), \ + ('support_discard','/queue/discard_granularity'), + ]: + d[key] = get_file_content(sysdir + test) + + d['partitions'] = {} + for folder in os.listdir(sysdir): + m = re.search("(" + diskname + "\d+)", folder) + if m: + part = {} + partname = m.group(1) + part_sysdir = sysdir + "/" + partname + + part['start'] = get_file_content(part_sysdir + "/start",0) + part['sectors'] = get_file_content(part_sysdir + "/size",0) + part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size",512) + part['size'] = module.pretty_bytes((float(part['sectors']) * float(part['sectorsize']))) + d['partitions'][partname] = part + + d['rotational'] = get_file_content(sysdir + "/queue/rotational") + d['scheduler_mode'] = "" + scheduler = get_file_content(sysdir + "/queue/scheduler") + if scheduler is not None: + m = re.match(".*?(\[(.*)\])", scheduler) + if m: + d['scheduler_mode'] = m.group(2) + + d['sectors'] = get_file_content(sysdir + "/size") + if not d['sectors']: + d['sectors'] = 0 + d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size") + if not d['sectorsize']: + d['sectorsize'] = 512 + d['size'] = module.pretty_bytes(float(d['sectors']) * float(d['sectorsize'])) + + d['host'] = "" + + # domains are numbered (0 to ffff), bus (0 to ff), slot (0 to 1f), and function (0 to 7). + m = re.match(".+/([a-f0-9]{4}:[a-f0-9]{2}:[0|1][a-f0-9]\.[0-7])/", sysdir) + if m and pcidata: + pciid = m.group(1) + did = re.escape(pciid) + m = re.search("^" + did + "\s(.*)$", pcidata, re.MULTILINE) + d['host'] = m.group(1) + + d['holders'] = [] + if os.path.isdir(sysdir + "/holders"): + for folder in os.listdir(sysdir + "/holders"): + if not folder.startswith("dm-"): + continue + name = get_file_content(sysdir + "/holders/" + folder + "/dm/name") + if name: + d['holders'].append(name) + else: + d['holders'].append(folder) + + self.facts['devices'][diskname] = d + + +class SunOSHardware(Hardware): + """ + In addition to the generic memory and cpu facts, this also sets + swap_reserved_mb and swap_allocated_mb that is available from *swap -s*. + """ + platform = 'SunOS' + + def __init__(self): + Hardware.__init__(self) + + def populate(self): + self.get_cpu_facts() + self.get_memory_facts() + return self.facts + + def get_cpu_facts(self): + physid = 0 + sockets = {} + rc, out, err = module.run_command("/usr/bin/kstat cpu_info") + self.facts['processor'] = [] + for line in out.split('\n'): + if len(line) < 1: + continue + data = line.split(None, 1) + key = data[0].strip() + # "brand" works on Solaris 10 & 11. "implementation" for Solaris 9. 
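+            # kstat output here is whitespace-separated "statistic value"
+            # pairs, with "module: cpu_info    instance: N" header lines
+            # marking the start of each virtual CPU's block.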
+            if key == 'module:':
+                brand = ''
+            elif key == 'brand':
+                brand = data[1].strip()
+            elif key == 'clock_MHz':
+                clock_mhz = data[1].strip()
+            elif key == 'implementation':
+                processor = brand or data[1].strip()
+                # Add clock speed to description for SPARC CPU
+                if self.facts['machine'] != 'i86pc':
+                    processor += " @ " + clock_mhz + "MHz"
+                if 'processor' not in self.facts:
+                    self.facts['processor'] = []
+                self.facts['processor'].append(processor)
+            elif key == 'chip_id':
+                physid = data[1].strip()
+                if physid not in sockets:
+                    sockets[physid] = 1
+                else:
+                    sockets[physid] += 1
+        # Counting cores on Solaris can be complicated.
+        # https://blogs.oracle.com/mandalika/entry/solaris_show_me_the_cpu
+        # Treat 'processor_count' as physical sockets and 'processor_cores' as
+        # virtual CPUs visible to Solaris. Not a true count of cores for modern SPARC as
+        # these processors have: sockets -> cores -> threads/virtual CPU.
+        if len(sockets) > 0:
+            self.facts['processor_count'] = len(sockets)
+            self.facts['processor_cores'] = reduce(lambda x, y: x + y, sockets.values())
+        else:
+            self.facts['processor_cores'] = 'NA'
+            self.facts['processor_count'] = len(self.facts['processor'])
+
+    def get_memory_facts(self):
+        rc, out, err = module.run_command(["/usr/sbin/prtconf"])
+        for line in out.split('\n'):
+            if 'Memory size' in line:
+                self.facts['memtotal_mb'] = line.split()[2]
+        rc, out, err = module.run_command("/usr/sbin/swap -s")
+        allocated = long(out.split()[1][:-1])
+        reserved = long(out.split()[5][:-1])
+        used = long(out.split()[8][:-1])
+        free = long(out.split()[10][:-1])
+        self.facts['swapfree_mb'] = free / 1024
+        self.facts['swaptotal_mb'] = (free + used) / 1024
+        self.facts['swap_allocated_mb'] = allocated / 1024
+        self.facts['swap_reserved_mb'] = reserved / 1024
+
+class OpenBSDHardware(Hardware):
+    """
+    OpenBSD-specific subclass of Hardware. Defines memory, CPU and device facts:
+    - memfree_mb
+    - memtotal_mb
+    - swapfree_mb
+    - swaptotal_mb
+    - processor (a list)
+    - processor_cores
+    - processor_count
+    - processor_speed
+    - devices
+    """
+    platform = 'OpenBSD'
+    DMESG_BOOT = '/var/run/dmesg.boot'
+
+    def __init__(self):
+        Hardware.__init__(self)
+
+    def populate(self):
+        self.sysctl = self.get_sysctl()
+        self.get_memory_facts()
+        self.get_processor_facts()
+        self.get_device_facts()
+        return self.facts
+
+    def get_sysctl(self):
+        rc, out, err = module.run_command(["/sbin/sysctl", "hw"])
+        if rc != 0:
+            return dict()
+        sysctl = dict()
+        for line in out.splitlines():
+            (key, value) = line.split('=')
+            sysctl[key] = value.strip()
+        return sysctl
+
+    def get_memory_facts(self):
+        # Get free memory. vmstat output looks like:
+        #  procs    memory       page                    disks    traps          cpu
+        #  r b w    avm     fre  flt  re  pi  po  fr  sr wd0 fd0  int   sys   cs us sy id
+        #  0 0 0  47512   28160   51   0   0   0   0   0   1   0  116    89   17  0  1 99
+        rc, out, err = module.run_command("/usr/bin/vmstat")
+        if rc == 0:
+            self.facts['memfree_mb'] = long(out.splitlines()[-1].split()[4]) / 1024
+            self.facts['memtotal_mb'] = long(self.sysctl['hw.usermem']) / 1024 / 1024
+
+        # Get swapctl info.
swapctl output looks like: + # total: 69268 1K-blocks allocated, 0 used, 69268 available + # And for older OpenBSD: + # total: 69268k bytes allocated = 0k used, 69268k available + rc, out, err = module.run_command("/sbin/swapctl -sk") + if rc == 0: + data = out.split() + self.facts['swapfree_mb'] = long(data[-2].translate(None, "kmg")) / 1024 + self.facts['swaptotal_mb'] = long(data[1].translate(None, "kmg")) / 1024 + + def get_processor_facts(self): + processor = [] + dmesg_boot = get_file_content(OpenBSDHardware.DMESG_BOOT) + if not dmesg_boot: + rc, dmesg_boot, err = module.run_command("/sbin/dmesg") + i = 0 + for line in dmesg_boot.splitlines(): + if line.split(' ', 1)[0] == 'cpu%i:' % i: + processor.append(line.split(' ', 1)[1]) + i = i + 1 + processor_count = i + self.facts['processor'] = processor + self.facts['processor_count'] = processor_count + # I found no way to figure out the number of Cores per CPU in OpenBSD + self.facts['processor_cores'] = 'NA' + + def get_device_facts(self): + devices = [] + devices.extend(self.sysctl['hw.disknames'].split(',')) + self.facts['devices'] = devices + +class FreeBSDHardware(Hardware): + """ + FreeBSD-specific subclass of Hardware. Defines memory and CPU facts: + - memfree_mb + - memtotal_mb + - swapfree_mb + - swaptotal_mb + - processor (a list) + - processor_cores + - processor_count + - devices + """ + platform = 'FreeBSD' + DMESG_BOOT = '/var/run/dmesg.boot' + + def __init__(self): + Hardware.__init__(self) + + def populate(self): + self.get_cpu_facts() + self.get_memory_facts() + self.get_dmi_facts() + self.get_device_facts() + try: + self.get_mount_facts() + except TimeoutError: + pass + return self.facts + + def get_cpu_facts(self): + self.facts['processor'] = [] + rc, out, err = module.run_command("/sbin/sysctl -n hw.ncpu") + self.facts['processor_count'] = out.strip() + + dmesg_boot = get_file_content(FreeBSDHardware.DMESG_BOOT) + if not dmesg_boot: + rc, dmesg_boot, err = module.run_command("/sbin/dmesg") + for line in dmesg_boot.split('\n'): + if 'CPU:' in line: + cpu = re.sub(r'CPU:\s+', r"", line) + self.facts['processor'].append(cpu.strip()) + if 'Logical CPUs per core' in line: + self.facts['processor_cores'] = line.split()[4] + + + def get_memory_facts(self): + rc, out, err = module.run_command("/sbin/sysctl vm.stats") + for line in out.split('\n'): + data = line.split() + if 'vm.stats.vm.v_page_size' in line: + pagesize = long(data[1]) + if 'vm.stats.vm.v_page_count' in line: + pagecount = long(data[1]) + if 'vm.stats.vm.v_free_count' in line: + freecount = long(data[1]) + self.facts['memtotal_mb'] = pagesize * pagecount / 1024 / 1024 + self.facts['memfree_mb'] = pagesize * freecount / 1024 / 1024 + # Get swapinfo. 
swapinfo output looks like: + # Device 1M-blocks Used Avail Capacity + # /dev/ada0p3 314368 0 314368 0% + # + rc, out, err = module.run_command("/usr/sbin/swapinfo -m") + lines = out.split('\n') + if len(lines[-1]) == 0: + lines.pop() + data = lines[-1].split() + self.facts['swaptotal_mb'] = data[1] + self.facts['swapfree_mb'] = data[3] + + @timeout(10) + def get_mount_facts(self): + self.facts['mounts'] = [] + fstab = get_file_content('/etc/fstab') + if fstab: + for line in fstab.split('\n'): + if line.startswith('#') or line.strip() == '': + continue + fields = re.sub(r'\s+',' ',line.rstrip('\n')).split() + self.facts['mounts'].append({'mount': fields[1] , 'device': fields[0], 'fstype' : fields[2], 'options': fields[3]}) + + def get_device_facts(self): + sysdir = '/dev' + self.facts['devices'] = {} + drives = re.compile('(ada?\d+|da\d+|a?cd\d+)') #TODO: rc, disks, err = module.run_command("/sbin/sysctl kern.disks") + slices = re.compile('(ada?\d+s\d+\w*|da\d+s\d+\w*)') + if os.path.isdir(sysdir): + dirlist = sorted(os.listdir(sysdir)) + for device in dirlist: + d = drives.match(device) + if d: + self.facts['devices'][d.group(1)] = [] + s = slices.match(device) + if s: + self.facts['devices'][d.group(1)].append(s.group(1)) + + def get_dmi_facts(self): + ''' learn dmi facts from system + + Use dmidecode executable if available''' + + # Fall back to using dmidecode, if available + dmi_bin = module.get_bin_path('dmidecode') + DMI_DICT = dict( + bios_date='bios-release-date', + bios_version='bios-version', + form_factor='chassis-type', + product_name='system-product-name', + product_serial='system-serial-number', + product_uuid='system-uuid', + product_version='system-version', + system_vendor='system-manufacturer' + ) + for (k, v) in DMI_DICT.items(): + if dmi_bin is not None: + (rc, out, err) = module.run_command('%s -s %s' % (dmi_bin, v)) + if rc == 0: + # Strip out commented lines (specific dmidecode output) + self.facts[k] = ''.join([ line for line in out.split('\n') if not line.startswith('#') ]) + try: + json.dumps(self.facts[k]) + except UnicodeDecodeError: + self.facts[k] = 'NA' + else: + self.facts[k] = 'NA' + else: + self.facts[k] = 'NA' + + +class NetBSDHardware(Hardware): + """ + NetBSD-specific subclass of Hardware. Defines memory and CPU facts: + - memfree_mb + - memtotal_mb + - swapfree_mb + - swaptotal_mb + - processor (a list) + - processor_cores + - processor_count + - devices + """ + platform = 'NetBSD' + MEMORY_FACTS = ['MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'] + + def __init__(self): + Hardware.__init__(self) + + def populate(self): + self.get_cpu_facts() + self.get_memory_facts() + try: + self.get_mount_facts() + except TimeoutError: + pass + return self.facts + + def get_cpu_facts(self): + + i = 0 + physid = 0 + sockets = {} + if not os.access("/proc/cpuinfo", os.R_OK): + return + self.facts['processor'] = [] + for line in open("/proc/cpuinfo").readlines(): + data = line.split(":", 1) + key = data[0].strip() + # model name is for Intel arch, Processor (mind the uppercase P) + # works for some ARM devices, like the Sheevaplug. 
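+            # (same /proc/cpuinfo parsing as LinuxHardware above: NetBSD
+            # only exposes a Linux-style /proc/cpuinfo when procfs is
+            # mounted, which is why get_cpu_facts checks os.access first)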
+            if key == 'model name' or key == 'Processor':
+                if 'processor' not in self.facts:
+                    self.facts['processor'] = []
+                self.facts['processor'].append(data[1].strip())
+                i += 1
+            elif key == 'physical id':
+                physid = data[1].strip()
+                if physid not in sockets:
+                    sockets[physid] = 1
+            elif key == 'cpu cores':
+                sockets[physid] = int(data[1].strip())
+        if len(sockets) > 0:
+            self.facts['processor_count'] = len(sockets)
+            self.facts['processor_cores'] = reduce(lambda x, y: x + y, sockets.values())
+        else:
+            self.facts['processor_count'] = i
+            self.facts['processor_cores'] = 'NA'
+
+    def get_memory_facts(self):
+        if not os.access("/proc/meminfo", os.R_OK):
+            return
+        for line in open("/proc/meminfo").readlines():
+            data = line.split(":", 1)
+            key = data[0]
+            if key in NetBSDHardware.MEMORY_FACTS:
+                val = data[1].strip().split(' ')[0]
+                self.facts["%s_mb" % key.lower()] = long(val) / 1024
+
+    @timeout(10)
+    def get_mount_facts(self):
+        self.facts['mounts'] = []
+        fstab = get_file_content('/etc/fstab')
+        if fstab:
+            for line in fstab.split('\n'):
+                if line.startswith('#') or line.strip() == '':
+                    continue
+                fields = re.sub(r'\s+',' ',line.rstrip('\n')).split()
+                self.facts['mounts'].append({'mount': fields[1] , 'device': fields[0], 'fstype' : fields[2], 'options': fields[3]})
+
+class AIX(Hardware):
+    """
+    AIX-specific subclass of Hardware. Defines memory and CPU facts:
+    - memfree_mb
+    - memtotal_mb
+    - swapfree_mb
+    - swaptotal_mb
+    - processor (a list)
+    - processor_cores
+    - processor_count
+    """
+    platform = 'AIX'
+
+    def __init__(self):
+        Hardware.__init__(self)
+
+    def populate(self):
+        self.get_cpu_facts()
+        self.get_memory_facts()
+        self.get_dmi_facts()
+        return self.facts
+
+    def get_cpu_facts(self):
+        self.facts['processor'] = []
+
+
+        rc, out, err = module.run_command("/usr/sbin/lsdev -Cc processor")
+        if out:
+            i = 0
+            for line in out.split('\n'):
+
+                if 'Available' in line:
+                    if i == 0:
+                        data = line.split(' ')
+                        cpudev = data[0]
+
+                    i += 1
+            self.facts['processor_count'] = int(i)
+
+            rc, out, err = module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a type")
+
+            data = out.split(' ')
+            self.facts['processor'] = data[1]
+
+            rc, out, err = module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a smt_threads")
+
+            data = out.split(' ')
+            self.facts['processor_cores'] = int(data[1])
+
+    def get_memory_facts(self):
+        pagesize = 4096
+        rc, out, err = module.run_command("/usr/bin/vmstat -v")
+        for line in out.split('\n'):
+            data = line.split()
+            if 'memory pages' in line:
+                pagecount = long(data[0])
+            if 'free pages' in line:
+                freecount = long(data[0])
+        self.facts['memtotal_mb'] = pagesize * pagecount / 1024 / 1024
+        self.facts['memfree_mb'] = pagesize * freecount / 1024 / 1024
+        # Get swap facts from lsps; its output looks like:
+        # Total Paging Space   Percent Used
+        #       512MB               1%
+        rc, out, err = module.run_command("/usr/sbin/lsps -s")
+        if out:
+            lines = out.split('\n')
+            data = lines[1].split()
+            swaptotal_mb = long(data[0].rstrip('MB'))
+            percused = int(data[1].rstrip('%'))
+            self.facts['swaptotal_mb'] = swaptotal_mb
+            self.facts['swapfree_mb'] = long(swaptotal_mb * ( 100 - percused ) / 100)
+
+    def get_dmi_facts(self):
+        rc, out, err = module.run_command("/usr/sbin/lsattr -El sys0 -a fwversion")
+        data = out.split()
+        self.facts['firmware_version'] = data[1].strip('IBM,')
+
+class HPUX(Hardware):
+    """
+    HP-UX-specific subclass of Hardware.
+    - memfree_mb
+    - memtotal_mb
+    - swapfree_mb
+    - swaptotal_mb
+    - processor
+    - processor_cores
+    - processor_count
+    - model
+    - firmware
+    """
+
+    platform = 'HP-UX'
+
+    def __init__(self):
+        Hardware.__init__(self)
+
+    def populate(self):
+        self.get_cpu_facts()
+        self.get_memory_facts()
+        self.get_hw_facts()
+        return self.facts
+
+    def get_cpu_facts(self):
+        if self.facts['architecture'] == '9000/800':
+            rc, out, err = module.run_command("ioscan -FkCprocessor | wc -l", use_unsafe_shell=True)
+            self.facts['processor_count'] = int(out.strip())
+        # Working around the differences in machinfo output between releases
+        elif self.facts['architecture'] == 'ia64':
+            if self.facts['distribution_version'] == "B.11.23":
+                rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep 'Number of CPUs'", use_unsafe_shell=True)
+                self.facts['processor_count'] = int(out.strip().split('=')[1])
+                rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep 'processor family'", use_unsafe_shell=True)
+                self.facts['processor'] = re.search('.*(Intel.*)', out).groups()[0].strip()
+                rc, out, err = module.run_command("ioscan -FkCprocessor | wc -l", use_unsafe_shell=True)
+                self.facts['processor_cores'] = int(out.strip())
+            if self.facts['distribution_version'] == "B.11.31":
+                # machinfo reports core counts only on B.11.31 releases newer than 1204;
+                # if no 'core' strings are found, fall back to counting CPUs another way
+                rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep core | wc -l", use_unsafe_shell=True)
+                if out.strip() == '0':
+                    rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep Intel", use_unsafe_shell=True)
+                    self.facts['processor_count'] = int(out.strip().split(" ")[0])
+                    # If hyperthreading is active, divide the logical CPU count by 2
+                    rc, out, err = module.run_command("/usr/sbin/psrset | grep LCPU", use_unsafe_shell=True)
+                    data = re.sub(' +', ' ', out).strip().split(' ')
+                    if len(data) == 1:
+                        hyperthreading = 'OFF'
+                    else:
+                        hyperthreading = data[1]
+                    rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep logical", use_unsafe_shell=True)
+                    data = out.strip().split(" ")
+                    if hyperthreading == 'ON':
+                        self.facts['processor_cores'] = int(data[0])/2
+                    else:
+                        if len(data) == 1:
+                            self.facts['processor_cores'] = self.facts['processor_count']
+                        else:
+                            self.facts['processor_cores'] = int(data[0])
+                    rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep Intel | cut -d' ' -f4-", use_unsafe_shell=True)
+                    self.facts['processor'] = out.strip()
+                else:
+                    rc, out, err = module.run_command("/usr/contrib/bin/machinfo | egrep 'socket[s]?$' | tail -1", use_unsafe_shell=True)
+                    self.facts['processor_count'] = int(out.strip().split(" ")[0])
+                    rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep -e '[0-9] core' | tail -1", use_unsafe_shell=True)
+                    self.facts['processor_cores'] = int(out.strip().split(" ")[0])
+                    rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep Intel", use_unsafe_shell=True)
+                    self.facts['processor'] = out.strip()
+
+    def get_memory_facts(self):
+        pagesize = 4096
+        rc, out, err = module.run_command("/usr/bin/vmstat | tail -1", use_unsafe_shell=True)
+        data = int(re.sub(' +', ' ', out).split(' ')[5].strip())
+        self.facts['memfree_mb'] = pagesize * data / 1024 / 1024
+        if self.facts['architecture'] == '9000/800':
+            rc, out, err = module.run_command("grep Physical /var/adm/syslog/syslog.log")
+            data = re.search('.*Physical: ([0-9]*) Kbytes.*', out).groups()[0].strip()
+            self.facts['memtotal_mb'] = int(data) / 1024
+        else:
+            rc, out, err = module.run_command("/usr/contrib/bin/machinfo | grep Memory",
use_unsafe_shell=True) + data = re.search('Memory[\ :=]*([0-9]*).*MB.*',out).groups()[0].strip() + self.facts['memtotal_mb'] = int(data) + rc, out, err = module.run_command("/usr/sbin/swapinfo -m -d -f -q") + self.facts['swaptotal_mb'] = int(out.strip()) + rc, out, err = module.run_command("/usr/sbin/swapinfo -m -d -f | egrep '^dev|^fs'", use_unsafe_shell=True) + swap = 0 + for line in out.strip().split('\n'): + swap += int(re.sub(' +',' ',line).split(' ')[3].strip()) + self.facts['swapfree_mb'] = swap + + def get_hw_facts(self): + rc, out, err = module.run_command("model") + self.facts['model'] = out.strip() + if self.facts['architecture'] == 'ia64': + rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep -i 'Firmware revision' | grep -v BMC", use_unsafe_shell=True) + self.facts['firmware_version'] = out.split(':')[1].strip() + + +class Darwin(Hardware): + """ + Darwin-specific subclass of Hardware. Defines memory and CPU facts: + - processor + - processor_cores + - memtotal_mb + - memfree_mb + - model + - osversion + - osrevision + """ + platform = 'Darwin' + + def __init__(self): + Hardware.__init__(self) + + def populate(self): + self.sysctl = self.get_sysctl() + self.get_mac_facts() + self.get_cpu_facts() + self.get_memory_facts() + return self.facts + + def get_sysctl(self): + rc, out, err = module.run_command(["/usr/sbin/sysctl", "hw", "machdep", "kern"]) + if rc != 0: + return dict() + sysctl = dict() + for line in out.splitlines(): + if line.rstrip("\n"): + (key, value) = re.split(' = |: ', line, maxsplit=1) + sysctl[key] = value.strip() + return sysctl + + def get_system_profile(self): + rc, out, err = module.run_command(["/usr/sbin/system_profiler", "SPHardwareDataType"]) + if rc != 0: + return dict() + system_profile = dict() + for line in out.splitlines(): + if ': ' in line: + (key, value) = line.split(': ', 1) + system_profile[key.strip()] = ' '.join(value.strip().split()) + return system_profile + + def get_mac_facts(self): + self.facts['model'] = self.sysctl['hw.model'] + self.facts['osversion'] = self.sysctl['kern.osversion'] + self.facts['osrevision'] = self.sysctl['kern.osrevision'] + + def get_cpu_facts(self): + if 'machdep.cpu.brand_string' in self.sysctl: # Intel + self.facts['processor'] = self.sysctl['machdep.cpu.brand_string'] + self.facts['processor_cores'] = self.sysctl['machdep.cpu.core_count'] + else: # PowerPC + system_profile = self.get_system_profile() + self.facts['processor'] = '%s @ %s' % (system_profile['Processor Name'], system_profile['Processor Speed']) + self.facts['processor_cores'] = self.sysctl['hw.physicalcpu'] + + def get_memory_facts(self): + self.facts['memtotal_mb'] = long(self.sysctl['hw.memsize']) / 1024 / 1024 + self.facts['memfree_mb'] = long(self.sysctl['hw.usermem']) / 1024 / 1024 + +class Network(Facts): + """ + This is a generic Network subclass of Facts. This should be further + subclassed to implement per platform. If you subclass this, + you must define: + - interfaces (a list of interface names) + - interface_ dictionary of ipv4, ipv6, and mac address information. + + All subclasses MUST define platform. 
+ """ + platform = 'Generic' + + IPV6_SCOPE = { '0' : 'global', + '10' : 'host', + '20' : 'link', + '40' : 'admin', + '50' : 'site', + '80' : 'organization' } + + def __new__(cls, *arguments, **keyword): + subclass = cls + for sc in Network.__subclasses__(): + if sc.platform == platform.system(): + subclass = sc + return super(cls, subclass).__new__(subclass, *arguments, **keyword) + + def __init__(self, module): + self.module = module + Facts.__init__(self) + + def populate(self): + return self.facts + +class LinuxNetwork(Network): + """ + This is a Linux-specific subclass of Network. It defines + - interfaces (a list of interface names) + - interface_ dictionary of ipv4, ipv6, and mac address information. + - all_ipv4_addresses and all_ipv6_addresses: lists of all configured addresses. + - ipv4_address and ipv6_address: the first non-local address for each family. + """ + platform = 'Linux' + + def __init__(self, module): + Network.__init__(self, module) + + def populate(self): + ip_path = self.module.get_bin_path('ip') + if ip_path is None: + return self.facts + default_ipv4, default_ipv6 = self.get_default_interfaces(ip_path) + interfaces, ips = self.get_interfaces_info(ip_path, default_ipv4, default_ipv6) + self.facts['interfaces'] = interfaces.keys() + for iface in interfaces: + self.facts[iface] = interfaces[iface] + self.facts['default_ipv4'] = default_ipv4 + self.facts['default_ipv6'] = default_ipv6 + self.facts['all_ipv4_addresses'] = ips['all_ipv4_addresses'] + self.facts['all_ipv6_addresses'] = ips['all_ipv6_addresses'] + return self.facts + + def get_default_interfaces(self, ip_path): + # Use the commands: + # ip -4 route get 8.8.8.8 -> Google public DNS + # ip -6 route get 2404:6800:400a:800::1012 -> ipv6.google.com + # to find out the default outgoing interface, address, and gateway + command = dict( + v4 = [ip_path, '-4', 'route', 'get', '8.8.8.8'], + v6 = [ip_path, '-6', 'route', 'get', '2404:6800:400a:800::1012'] + ) + interface = dict(v4 = {}, v6 = {}) + for v in 'v4', 'v6': + if v == 'v6' and self.facts['os_family'] == 'RedHat' \ + and self.facts['distribution_version'].startswith('4.'): + continue + if v == 'v6' and not socket.has_ipv6: + continue + rc, out, err = module.run_command(command[v]) + if not out: + # v6 routing may result in + # RTNETLINK answers: Invalid argument + continue + words = out.split('\n')[0].split() + # A valid output starts with the queried address on the first line + if len(words) > 0 and words[0] == command[v][-1]: + for i in range(len(words) - 1): + if words[i] == 'dev': + interface[v]['interface'] = words[i+1] + elif words[i] == 'src': + interface[v]['address'] = words[i+1] + elif words[i] == 'via' and words[i+1] != command[v][-1]: + interface[v]['gateway'] = words[i+1] + return interface['v4'], interface['v6'] + + def get_interfaces_info(self, ip_path, default_ipv4, default_ipv6): + interfaces = {} + ips = dict( + all_ipv4_addresses = [], + all_ipv6_addresses = [], + ) + + for path in glob.glob('/sys/class/net/*'): + if not os.path.isdir(path): + continue + device = os.path.basename(path) + interfaces[device] = { 'device': device } + if os.path.exists(os.path.join(path, 'address')): + macaddress = open(os.path.join(path, 'address')).read().strip() + if macaddress and macaddress != '00:00:00:00:00:00': + interfaces[device]['macaddress'] = macaddress + if os.path.exists(os.path.join(path, 'mtu')): + interfaces[device]['mtu'] = int(open(os.path.join(path, 'mtu')).read().strip()) + if os.path.exists(os.path.join(path, 'operstate')): + 
interfaces[device]['active'] = open(os.path.join(path, 'operstate')).read().strip() != 'down'
+#            if os.path.exists(os.path.join(path, 'carrier')):
+#                interfaces[device]['link'] = open(os.path.join(path, 'carrier')).read().strip() == '1'
+            if os.path.exists(os.path.join(path, 'device', 'driver', 'module')):
+                interfaces[device]['module'] = os.path.basename(os.path.realpath(os.path.join(path, 'device', 'driver', 'module')))
+            if os.path.exists(os.path.join(path, 'type')):
+                type = open(os.path.join(path, 'type')).read().strip()
+                if type == '1':
+                    interfaces[device]['type'] = 'ether'
+                elif type == '512':
+                    interfaces[device]['type'] = 'ppp'
+                elif type == '772':
+                    interfaces[device]['type'] = 'loopback'
+            if os.path.exists(os.path.join(path, 'bridge')):
+                interfaces[device]['type'] = 'bridge'
+                interfaces[device]['interfaces'] = [os.path.basename(b) for b in glob.glob(os.path.join(path, 'brif', '*'))]
+                if os.path.exists(os.path.join(path, 'bridge', 'bridge_id')):
+                    interfaces[device]['id'] = open(os.path.join(path, 'bridge', 'bridge_id')).read().strip()
+                if os.path.exists(os.path.join(path, 'bridge', 'stp_state')):
+                    interfaces[device]['stp'] = open(os.path.join(path, 'bridge', 'stp_state')).read().strip() == '1'
+            if os.path.exists(os.path.join(path, 'bonding')):
+                interfaces[device]['type'] = 'bonding'
+                interfaces[device]['slaves'] = open(os.path.join(path, 'bonding', 'slaves')).read().split()
+                interfaces[device]['mode'] = open(os.path.join(path, 'bonding', 'mode')).read().split()[0]
+                interfaces[device]['miimon'] = open(os.path.join(path, 'bonding', 'miimon')).read().split()[0]
+                interfaces[device]['lacp_rate'] = open(os.path.join(path, 'bonding', 'lacp_rate')).read().split()[0]
+                primary = open(os.path.join(path, 'bonding', 'primary')).read()
+                if primary:
+                    interfaces[device]['primary'] = primary
+                    path = os.path.join(path, 'bonding', 'all_slaves_active')
+                    if os.path.exists(path):
+                        interfaces[device]['all_slaves_active'] = open(path).read() == '1'
+
+            # Check whether an interface is in promiscuous mode
+            if os.path.exists(os.path.join(path, 'flags')):
+                promisc_mode = False
+                # Bit 0x100 (IFF_PROMISC) of the flags value indicates promiscuous mode.
+ # 1 = promisc + # 0 = no promisc + data = int(open(os.path.join(path, 'flags')).read().strip(),16) + promisc_mode = (data & 0x0100 > 0) + interfaces[device]['promisc'] = promisc_mode + + def parse_ip_output(output, secondary=False): + for line in output.split('\n'): + if not line: + continue + words = line.split() + if words[0] == 'inet': + if '/' in words[1]: + address, netmask_length = words[1].split('/') + else: + # pointopoint interfaces do not have a prefix + address = words[1] + netmask_length = "32" + address_bin = struct.unpack('!L', socket.inet_aton(address))[0] + netmask_bin = (1<<32) - (1<<32>>int(netmask_length)) + netmask = socket.inet_ntoa(struct.pack('!L', netmask_bin)) + network = socket.inet_ntoa(struct.pack('!L', address_bin & netmask_bin)) + iface = words[-1] + if iface != device: + interfaces[iface] = {} + if not secondary or "ipv4" not in interfaces[iface]: + interfaces[iface]['ipv4'] = {'address': address, + 'netmask': netmask, + 'network': network} + else: + if "ipv4_secondaries" not in interfaces[iface]: + interfaces[iface]["ipv4_secondaries"] = [] + interfaces[iface]["ipv4_secondaries"].append({ + 'address': address, + 'netmask': netmask, + 'network': network, + }) + + # add this secondary IP to the main device + if secondary: + if "ipv4_secondaries" not in interfaces[device]: + interfaces[device]["ipv4_secondaries"] = [] + interfaces[device]["ipv4_secondaries"].append({ + 'address': address, + 'netmask': netmask, + 'network': network, + }) + + # If this is the default address, update default_ipv4 + if 'address' in default_ipv4 and default_ipv4['address'] == address: + default_ipv4['netmask'] = netmask + default_ipv4['network'] = network + default_ipv4['macaddress'] = macaddress + default_ipv4['mtu'] = interfaces[device]['mtu'] + default_ipv4['type'] = interfaces[device].get("type", "unknown") + default_ipv4['alias'] = words[-1] + if not address.startswith('127.'): + ips['all_ipv4_addresses'].append(address) + elif words[0] == 'inet6': + address, prefix = words[1].split('/') + scope = words[3] + if 'ipv6' not in interfaces[device]: + interfaces[device]['ipv6'] = [] + interfaces[device]['ipv6'].append({ + 'address' : address, + 'prefix' : prefix, + 'scope' : scope + }) + # If this is the default address, update default_ipv6 + if 'address' in default_ipv6 and default_ipv6['address'] == address: + default_ipv6['prefix'] = prefix + default_ipv6['scope'] = scope + default_ipv6['macaddress'] = macaddress + default_ipv6['mtu'] = interfaces[device]['mtu'] + default_ipv6['type'] = interfaces[device].get("type", "unknown") + if not address == '::1': + ips['all_ipv6_addresses'].append(address) + + ip_path = module.get_bin_path("ip") + + args = [ip_path, 'addr', 'show', 'primary', device] + rc, stdout, stderr = self.module.run_command(args) + primary_data = stdout + + args = [ip_path, 'addr', 'show', 'secondary', device] + rc, stdout, stderr = self.module.run_command(args) + secondary_data = stdout + + parse_ip_output(primary_data) + parse_ip_output(secondary_data, secondary=True) + + # replace : by _ in interface name since they are hard to use in template + new_interfaces = {} + for i in interfaces: + if ':' in i: + new_interfaces[i.replace(':','_')] = interfaces[i] + else: + new_interfaces[i] = interfaces[i] + return new_interfaces, ips + +class GenericBsdIfconfigNetwork(Network): + """ + This is a generic BSD subclass of Network using the ifconfig command. 
+ It defines + - interfaces (a list of interface names) + - interface_ dictionary of ipv4, ipv6, and mac address information. + - all_ipv4_addresses and all_ipv6_addresses: lists of all configured addresses. + It currently does not define + - default_ipv4 and default_ipv6 + - type, mtu and network on interfaces + """ + platform = 'Generic_BSD_Ifconfig' + + def __init__(self, module): + Network.__init__(self, module) + + def populate(self): + + ifconfig_path = module.get_bin_path('ifconfig') + + if ifconfig_path is None: + return self.facts + route_path = module.get_bin_path('route') + + if route_path is None: + return self.facts + + default_ipv4, default_ipv6 = self.get_default_interfaces(route_path) + interfaces, ips = self.get_interfaces_info(ifconfig_path) + self.merge_default_interface(default_ipv4, interfaces, 'ipv4') + self.merge_default_interface(default_ipv6, interfaces, 'ipv6') + self.facts['interfaces'] = interfaces.keys() + + for iface in interfaces: + self.facts[iface] = interfaces[iface] + + self.facts['default_ipv4'] = default_ipv4 + self.facts['default_ipv6'] = default_ipv6 + self.facts['all_ipv4_addresses'] = ips['all_ipv4_addresses'] + self.facts['all_ipv6_addresses'] = ips['all_ipv6_addresses'] + + return self.facts + + def get_default_interfaces(self, route_path): + + # Use the commands: + # route -n get 8.8.8.8 -> Google public DNS + # route -n get -inet6 2404:6800:400a:800::1012 -> ipv6.google.com + # to find out the default outgoing interface, address, and gateway + + command = dict( + v4 = [route_path, '-n', 'get', '8.8.8.8'], + v6 = [route_path, '-n', 'get', '-inet6', '2404:6800:400a:800::1012'] + ) + + interface = dict(v4 = {}, v6 = {}) + + for v in 'v4', 'v6': + + if v == 'v6' and not socket.has_ipv6: + continue + rc, out, err = module.run_command(command[v]) + if not out: + # v6 routing may result in + # RTNETLINK answers: Invalid argument + continue + lines = out.split('\n') + for line in lines: + words = line.split() + # Collect output from route command + if len(words) > 1: + if words[0] == 'interface:': + interface[v]['interface'] = words[1] + if words[0] == 'gateway:': + interface[v]['gateway'] = words[1] + + return interface['v4'], interface['v6'] + + def get_interfaces_info(self, ifconfig_path): + interfaces = {} + current_if = {} + ips = dict( + all_ipv4_addresses = [], + all_ipv6_addresses = [], + ) + # FreeBSD, DragonflyBSD, NetBSD, OpenBSD and OS X all implicitly add '-a' + # when running the command 'ifconfig'. + # Solaris must explicitly run the command 'ifconfig -a'. 
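The loop that follows dispatches on the first word of each line of `ifconfig` output: an unindented line (matched by `re.match('^\S', line)`) opens a new interface stanza, and the indented `ether`/`inet`/`inet6`/`media:`/`status:` lines attach facts to the stanza opened most recently. A minimal, self-contained sketch of that dispatch against a hypothetical FreeBSD-style stanza (the sample text, device name, and addresses below are illustrative only, not part of the module):

```python
import re

# Hypothetical ifconfig output; the real module fetches it via module.run_command.
SAMPLE = (
    "em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500\n"
    "\tether 00:11:22:33:44:55\n"
    "\tinet 192.0.2.10 netmask 0xffffff00 broadcast 192.0.2.255\n"
    "\tstatus: active\n"
)

interfaces = {}
current_if = None
for line in SAMPLE.split('\n'):
    if not line:
        continue
    words = line.split()
    if re.match(r'^\S', line):
        # unindented line: start a new interface stanza
        current_if = {'device': words[0][0:-1], 'ipv4': []}
        interfaces[current_if['device']] = current_if
    elif words[0] == 'ether':
        current_if['macaddress'] = words[1]
    elif words[0] == 'inet':
        current_if['ipv4'].append({'address': words[1], 'netmask': words[3]})

print(interfaces)
# {'em0': {'device': 'em0', 'macaddress': '00:11:22:33:44:55',
#          'ipv4': [{'address': '192.0.2.10', 'netmask': '0xffffff00'}]}}
```

The real parser keeps the same shape but hands each keyword line to an overridable `parse_*_line` method, which is what lets the per-platform subclasses further down override just the lines that differ.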
+        rc, out, err = module.run_command([ifconfig_path, '-a'])
+
+        for line in out.split('\n'):
+
+            if line:
+                words = line.split()
+
+                if re.match('^\S', line) and len(words) > 3:
+                    current_if = self.parse_interface_line(words)
+                    interfaces[ current_if['device'] ] = current_if
+                elif words[0].startswith('options='):
+                    self.parse_options_line(words, current_if, ips)
+                elif words[0] == 'nd6':
+                    self.parse_nd6_line(words, current_if, ips)
+                elif words[0] == 'ether':
+                    self.parse_ether_line(words, current_if, ips)
+                elif words[0] == 'media:':
+                    self.parse_media_line(words, current_if, ips)
+                elif words[0] == 'status:':
+                    self.parse_status_line(words, current_if, ips)
+                elif words[0] == 'lladdr':
+                    self.parse_lladdr_line(words, current_if, ips)
+                elif words[0] == 'inet':
+                    self.parse_inet_line(words, current_if, ips)
+                elif words[0] == 'inet6':
+                    self.parse_inet6_line(words, current_if, ips)
+                else:
+                    self.parse_unknown_line(words, current_if, ips)
+
+        return interfaces, ips
+
+    def parse_interface_line(self, words):
+        device = words[0][0:-1]
+        current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'}
+        current_if['flags'] = self.get_options(words[1])
+        current_if['mtu'] = words[3]
+        current_if['macaddress'] = 'unknown'    # will be overwritten later
+        return current_if
+
+    def parse_options_line(self, words, current_if, ips):
+        # Mac has options like this...
+        current_if['options'] = self.get_options(words[0])
+
+    def parse_nd6_line(self, words, current_if, ips):
+        # FreeBSD has options like this...
+        current_if['options'] = self.get_options(words[1])
+
+    def parse_ether_line(self, words, current_if, ips):
+        current_if['macaddress'] = words[1]
+
+    def parse_media_line(self, words, current_if, ips):
+        # not sure if this is useful - we also drop information
+        current_if['media'] = words[1]
+        if len(words) > 2:
+            current_if['media_select'] = words[2]
+        if len(words) > 3:
+            current_if['media_type'] = words[3][1:]
+        if len(words) > 4:
+            current_if['media_options'] = self.get_options(words[4])
+
+    def parse_status_line(self, words, current_if, ips):
+        current_if['status'] = words[1]
+
+    def parse_lladdr_line(self, words, current_if, ips):
+        current_if['lladdr'] = words[1]
+
+    def parse_inet_line(self, words, current_if, ips):
+        address = {'address': words[1]}
+        # deal with hex netmask
+        if re.match('([0-9a-f]){8}', words[3]) and len(words[3]) == 8:
+            words[3] = '0x' + words[3]
+        if words[3].startswith('0x'):
+            address['netmask'] = socket.inet_ntoa(struct.pack('!L', int(words[3], base=16)))
+        else:
+            # otherwise assume this is a dotted quad
+            address['netmask'] = words[3]
+        # calculate the network
+        address_bin = struct.unpack('!L', socket.inet_aton(address['address']))[0]
+        netmask_bin = struct.unpack('!L', socket.inet_aton(address['netmask']))[0]
+        address['network'] = socket.inet_ntoa(struct.pack('!L', address_bin & netmask_bin))
+        # broadcast may be given or we need to calculate
+        if len(words) > 5:
+            address['broadcast'] = words[5]
+        else:
+            address['broadcast'] = socket.inet_ntoa(struct.pack('!L', address_bin | (~netmask_bin & 0xffffffff)))
+        # add to our list of addresses
+        if not words[1].startswith('127.'):
+            ips['all_ipv4_addresses'].append(address['address'])
+        current_if['ipv4'].append(address)
+
+    def parse_inet6_line(self, words, current_if, ips):
+        address = {'address': words[1]}
+        if (len(words) >= 4) and (words[2] == 'prefixlen'):
+            address['prefix'] = words[3]
+        if (len(words) >= 6) and (words[4] == 'scopeid'):
+            address['scope'] = words[5]
+        localhost6 = ['::1',
'::1/128', 'fe80::1%lo0'] + if address['address'] not in localhost6: + ips['all_ipv6_addresses'].append(address['address']) + current_if['ipv6'].append(address) + + def parse_unknown_line(self, words, current_if, ips): + # we are going to ignore unknown lines here - this may be + # a bad idea - but you can override it in your subclass + pass + + def get_options(self, option_string): + start = option_string.find('<') + 1 + end = option_string.rfind('>') + if (start > 0) and (end > 0) and (end > start + 1): + option_csv = option_string[start:end] + return option_csv.split(',') + else: + return [] + + def merge_default_interface(self, defaults, interfaces, ip_type): + if not 'interface' in defaults.keys(): + return + if not defaults['interface'] in interfaces: + return + ifinfo = interfaces[defaults['interface']] + # copy all the interface values across except addresses + for item in ifinfo.keys(): + if item != 'ipv4' and item != 'ipv6': + defaults[item] = ifinfo[item] + if len(ifinfo[ip_type]) > 0: + for item in ifinfo[ip_type][0].keys(): + defaults[item] = ifinfo[ip_type][0][item] + +class DarwinNetwork(GenericBsdIfconfigNetwork, Network): + """ + This is the Mac OS X/Darwin Network Class. + It uses the GenericBsdIfconfigNetwork unchanged + """ + platform = 'Darwin' + + # media line is different to the default FreeBSD one + def parse_media_line(self, words, current_if, ips): + # not sure if this is useful - we also drop information + current_if['media'] = 'Unknown' # Mac does not give us this + current_if['media_select'] = words[1] + if len(words) > 2: + current_if['media_type'] = words[2][1:] + if len(words) > 3: + current_if['media_options'] = self.get_options(words[3]) + + +class FreeBSDNetwork(GenericBsdIfconfigNetwork, Network): + """ + This is the FreeBSD Network Class. + It uses the GenericBsdIfconfigNetwork unchanged. + """ + platform = 'FreeBSD' + +class AIXNetwork(GenericBsdIfconfigNetwork, Network): + """ + This is the AIX Network Class. + It uses the GenericBsdIfconfigNetwork unchanged. 
+ """ + platform = 'AIX' + + # AIX 'ifconfig -a' does not have three words in the interface line + def get_interfaces_info(self, ifconfig_path): + interfaces = {} + current_if = {} + ips = dict( + all_ipv4_addresses = [], + all_ipv6_addresses = [], + ) + rc, out, err = module.run_command([ifconfig_path, '-a']) + + for line in out.split('\n'): + + if line: + words = line.split() + + # only this condition differs from GenericBsdIfconfigNetwork + if re.match('^\w*\d*:', line): + current_if = self.parse_interface_line(words) + interfaces[ current_if['device'] ] = current_if + elif words[0].startswith('options='): + self.parse_options_line(words, current_if, ips) + elif words[0] == 'nd6': + self.parse_nd6_line(words, current_if, ips) + elif words[0] == 'ether': + self.parse_ether_line(words, current_if, ips) + elif words[0] == 'media:': + self.parse_media_line(words, current_if, ips) + elif words[0] == 'status:': + self.parse_status_line(words, current_if, ips) + elif words[0] == 'lladdr': + self.parse_lladdr_line(words, current_if, ips) + elif words[0] == 'inet': + self.parse_inet_line(words, current_if, ips) + elif words[0] == 'inet6': + self.parse_inet6_line(words, current_if, ips) + else: + self.parse_unknown_line(words, current_if, ips) + + return interfaces, ips + + # AIX 'ifconfig -a' does not inform about MTU, so remove current_if['mtu'] here + def parse_interface_line(self, words): + device = words[0][0:-1] + current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} + current_if['flags'] = self.get_options(words[1]) + current_if['macaddress'] = 'unknown' # will be overwritten later + return current_if + +class OpenBSDNetwork(GenericBsdIfconfigNetwork, Network): + """ + This is the OpenBSD Network Class. + It uses the GenericBsdIfconfigNetwork. + """ + platform = 'OpenBSD' + + # Return macaddress instead of lladdr + def parse_lladdr_line(self, words, current_if, ips): + current_if['macaddress'] = words[1] + +class SunOSNetwork(GenericBsdIfconfigNetwork, Network): + """ + This is the SunOS Network Class. + It uses the GenericBsdIfconfigNetwork. + + Solaris can have different FLAGS and MTU for IPv4 and IPv6 on the same interface + so these facts have been moved inside the 'ipv4' and 'ipv6' lists. + """ + platform = 'SunOS' + + # Solaris 'ifconfig -a' will print interfaces twice, once for IPv4 and again for IPv6. + # MTU and FLAGS also may differ between IPv4 and IPv6 on the same interface. + # 'parse_interface_line()' checks for previously seen interfaces before defining + # 'current_if' so that IPv6 facts don't clobber IPv4 facts (or vice versa). 
+ def get_interfaces_info(self, ifconfig_path): + interfaces = {} + current_if = {} + ips = dict( + all_ipv4_addresses = [], + all_ipv6_addresses = [], + ) + rc, out, err = module.run_command([ifconfig_path, '-a']) + + for line in out.split('\n'): + + if line: + words = line.split() + + if re.match('^\S', line) and len(words) > 3: + current_if = self.parse_interface_line(words, current_if, interfaces) + interfaces[ current_if['device'] ] = current_if + elif words[0].startswith('options='): + self.parse_options_line(words, current_if, ips) + elif words[0] == 'nd6': + self.parse_nd6_line(words, current_if, ips) + elif words[0] == 'ether': + self.parse_ether_line(words, current_if, ips) + elif words[0] == 'media:': + self.parse_media_line(words, current_if, ips) + elif words[0] == 'status:': + self.parse_status_line(words, current_if, ips) + elif words[0] == 'lladdr': + self.parse_lladdr_line(words, current_if, ips) + elif words[0] == 'inet': + self.parse_inet_line(words, current_if, ips) + elif words[0] == 'inet6': + self.parse_inet6_line(words, current_if, ips) + else: + self.parse_unknown_line(words, current_if, ips) + + # 'parse_interface_line' and 'parse_inet*_line' leave two dicts in the + # ipv4/ipv6 lists which is ugly and hard to read. + # This quick hack merges the dictionaries. Purely cosmetic. + for iface in interfaces: + for v in 'ipv4', 'ipv6': + combined_facts = {} + for facts in interfaces[iface][v]: + combined_facts.update(facts) + if len(combined_facts.keys()) > 0: + interfaces[iface][v] = [combined_facts] + + return interfaces, ips + + def parse_interface_line(self, words, current_if, interfaces): + device = words[0][0:-1] + if device not in interfaces.keys(): + current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} + else: + current_if = interfaces[device] + flags = self.get_options(words[1]) + if 'IPv4' in flags: + v = 'ipv4' + if 'IPv6' in flags: + v = 'ipv6' + current_if[v].append({'flags': flags, 'mtu': words[3]}) + current_if['macaddress'] = 'unknown' # will be overwritten later + return current_if + + # Solaris displays single digit octets in MAC addresses e.g. 0:1:2:d:e:f + # Add leading zero to each octet where needed. + def parse_ether_line(self, words, current_if, ips): + macaddress = '' + for octet in words[1].split(':'): + octet = ('0' + octet)[-2:None] + macaddress += (octet + ':') + current_if['macaddress'] = macaddress[0:-1] + +class Virtual(Facts): + """ + This is a generic Virtual subclass of Facts. This should be further + subclassed to implement per platform. If you subclass this, + you should define: + - virtualization_type + - virtualization_role + - container (e.g. solaris zones, freebsd jails, linux containers) + + All subclasses MUST define platform. + """ + + def __new__(cls, *arguments, **keyword): + subclass = cls + for sc in Virtual.__subclasses__(): + if sc.platform == platform.system(): + subclass = sc + return super(cls, subclass).__new__(subclass, *arguments, **keyword) + + def __init__(self): + Facts.__init__(self) + + def populate(self): + return self.facts + +class LinuxVirtual(Virtual): + """ + This is a Linux-specific subclass of Virtual. 
It defines + - virtualization_type + - virtualization_role + """ + platform = 'Linux' + + def __init__(self): + Virtual.__init__(self) + + def populate(self): + self.get_virtual_facts() + return self.facts + + # For more information, check: http://people.redhat.com/~rjones/virt-what/ + def get_virtual_facts(self): + if os.path.exists("/proc/xen"): + self.facts['virtualization_type'] = 'xen' + self.facts['virtualization_role'] = 'guest' + try: + for line in open('/proc/xen/capabilities'): + if "control_d" in line: + self.facts['virtualization_role'] = 'host' + except IOError: + pass + return + + if os.path.exists('/proc/vz'): + self.facts['virtualization_type'] = 'openvz' + if os.path.exists('/proc/bc'): + self.facts['virtualization_role'] = 'host' + else: + self.facts['virtualization_role'] = 'guest' + return + + if os.path.exists('/proc/1/cgroup'): + for line in open('/proc/1/cgroup').readlines(): + if re.search('/lxc/', line): + self.facts['virtualization_type'] = 'lxc' + self.facts['virtualization_role'] = 'guest' + return + + product_name = get_file_content('/sys/devices/virtual/dmi/id/product_name') + + if product_name in ['KVM', 'Bochs']: + self.facts['virtualization_type'] = 'kvm' + self.facts['virtualization_role'] = 'guest' + return + + if product_name == 'RHEV Hypervisor': + self.facts['virtualization_type'] = 'RHEV' + self.facts['virtualization_role'] = 'guest' + return + + if product_name == 'VMware Virtual Platform': + self.facts['virtualization_type'] = 'VMware' + self.facts['virtualization_role'] = 'guest' + return + + bios_vendor = get_file_content('/sys/devices/virtual/dmi/id/bios_vendor') + + if bios_vendor == 'Xen': + self.facts['virtualization_type'] = 'xen' + self.facts['virtualization_role'] = 'guest' + return + + if bios_vendor == 'innotek GmbH': + self.facts['virtualization_type'] = 'virtualbox' + self.facts['virtualization_role'] = 'guest' + return + + sys_vendor = get_file_content('/sys/devices/virtual/dmi/id/sys_vendor') + + # FIXME: This does also match hyperv + if sys_vendor == 'Microsoft Corporation': + self.facts['virtualization_type'] = 'VirtualPC' + self.facts['virtualization_role'] = 'guest' + return + + if sys_vendor == 'Parallels Software International Inc.': + self.facts['virtualization_type'] = 'parallels' + self.facts['virtualization_role'] = 'guest' + return + + if os.path.exists('/proc/self/status'): + for line in open('/proc/self/status').readlines(): + if re.match('^VxID: \d+', line): + self.facts['virtualization_type'] = 'linux_vserver' + if re.match('^VxID: 0', line): + self.facts['virtualization_role'] = 'host' + else: + self.facts['virtualization_role'] = 'guest' + return + + if os.path.exists('/proc/cpuinfo'): + for line in open('/proc/cpuinfo').readlines(): + if re.match('^model name.*QEMU Virtual CPU', line): + self.facts['virtualization_type'] = 'kvm' + elif re.match('^vendor_id.*User Mode Linux', line): + self.facts['virtualization_type'] = 'uml' + elif re.match('^model name.*UML', line): + self.facts['virtualization_type'] = 'uml' + elif re.match('^vendor_id.*PowerVM Lx86', line): + self.facts['virtualization_type'] = 'powervm_lx86' + elif re.match('^vendor_id.*IBM/S390', line): + self.facts['virtualization_type'] = 'ibm_systemz' + else: + continue + self.facts['virtualization_role'] = 'guest' + return + + # Beware that we can have both kvm and virtualbox running on a single system + if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK): + modules = [] + for line in open("/proc/modules").readlines(): + data = 
line.split(" ", 1) + modules.append(data[0]) + + if 'kvm' in modules: + self.facts['virtualization_type'] = 'kvm' + self.facts['virtualization_role'] = 'host' + return + + if 'vboxdrv' in modules: + self.facts['virtualization_type'] = 'virtualbox' + self.facts['virtualization_role'] = 'host' + return + + # If none of the above matches, return 'NA' for virtualization_type + # and virtualization_role. This allows for proper grouping. + self.facts['virtualization_type'] = 'NA' + self.facts['virtualization_role'] = 'NA' + return + + +class HPUXVirtual(Virtual): + """ + This is a HP-UX specific subclass of Virtual. It defines + - virtualization_type + - virtualization_role + """ + platform = 'HP-UX' + + def __init__(self): + Virtual.__init__(self) + + def populate(self): + self.get_virtual_facts() + return self.facts + + def get_virtual_facts(self): + if os.path.exists('/usr/sbin/vecheck'): + rc, out, err = module.run_command("/usr/sbin/vecheck") + if rc == 0: + self.facts['virtualization_type'] = 'guest' + self.facts['virtualization_role'] = 'HP vPar' + if os.path.exists('/opt/hpvm/bin/hpvminfo'): + rc, out, err = module.run_command("/opt/hpvm/bin/hpvminfo") + if rc == 0 and re.match('.*Running.*HPVM vPar.*', out): + self.facts['virtualization_type'] = 'guest' + self.facts['virtualization_role'] = 'HPVM vPar' + elif rc == 0 and re.match('.*Running.*HPVM guest.*', out): + self.facts['virtualization_type'] = 'guest' + self.facts['virtualization_role'] = 'HPVM IVM' + elif rc == 0 and re.match('.*Running.*HPVM host.*', out): + self.facts['virtualization_type'] = 'host' + self.facts['virtualization_role'] = 'HPVM' + if os.path.exists('/usr/sbin/parstatus'): + rc, out, err = module.run_command("/usr/sbin/parstatus") + if rc == 0: + self.facts['virtualization_type'] = 'guest' + self.facts['virtualization_role'] = 'HP nPar' + + +class SunOSVirtual(Virtual): + """ + This is a SunOS-specific subclass of Virtual. It defines + - virtualization_type + - virtualization_role + - container + """ + platform = 'SunOS' + + def __init__(self): + Virtual.__init__(self) + + def populate(self): + self.get_virtual_facts() + return self.facts + + def get_virtual_facts(self): + rc, out, err = module.run_command("/usr/sbin/prtdiag") + for line in out.split('\n'): + if 'VMware' in line: + self.facts['virtualization_type'] = 'vmware' + self.facts['virtualization_role'] = 'guest' + if 'Parallels' in line: + self.facts['virtualization_type'] = 'parallels' + self.facts['virtualization_role'] = 'guest' + if 'VirtualBox' in line: + self.facts['virtualization_type'] = 'virtualbox' + self.facts['virtualization_role'] = 'guest' + if 'HVM domU' in line: + self.facts['virtualization_type'] = 'xen' + self.facts['virtualization_role'] = 'guest' + # Check if it's a zone + if os.path.exists("/usr/bin/zonename"): + rc, out, err = module.run_command("/usr/bin/zonename") + if out.rstrip() != "global": + self.facts['container'] = 'zone' + # Check if it's a branded zone (i.e. Solaris 8/9 zone) + if os.path.isdir('/.SUNWnative'): + self.facts['container'] = 'zone' + # If it's a zone check if we can detect if our global zone is itself virtualized. + # Relies on the "guest tools" (e.g. 
vmware tools) to be installed + if 'container' in self.facts and self.facts['container'] == 'zone': + rc, out, err = module.run_command("/usr/sbin/modinfo") + for line in out.split('\n'): + if 'VMware' in line: + self.facts['virtualization_type'] = 'vmware' + self.facts['virtualization_role'] = 'guest' + if 'VirtualBox' in line: + self.facts['virtualization_type'] = 'virtualbox' + self.facts['virtualization_role'] = 'guest' + +def get_file_content(path, default=None): + data = default + if os.path.exists(path) and os.access(path, os.R_OK): + data = open(path).read().strip() + if len(data) == 0: + data = default + return data + +def ansible_facts(module): + facts = {} + facts.update(Facts().populate()) + facts.update(Hardware().populate()) + facts.update(Network(module).populate()) + facts.update(Virtual().populate()) + return facts + +# =========================================== + +def get_all_facts(module): + + setup_options = dict(module_setup=True) + facts = ansible_facts(module) + + for (k, v) in facts.items(): + setup_options["ansible_%s" % k.replace('-', '_')] = v + + # Look for the path to the facter and ohai binary and set + # the variable to that path. + + facter_path = module.get_bin_path('facter') + ohai_path = module.get_bin_path('ohai') + + # if facter is installed, and we can use --json because + # ruby-json is ALSO installed, include facter data in the JSON + + if facter_path is not None: + rc, out, err = module.run_command(facter_path + " --json") + facter = True + try: + facter_ds = json.loads(out) + except: + facter = False + if facter: + for (k,v) in facter_ds.items(): + setup_options["facter_%s" % k] = v + + # ditto for ohai + + if ohai_path is not None: + rc, out, err = module.run_command(ohai_path) + ohai = True + try: + ohai_ds = json.loads(out) + except: + ohai = False + if ohai: + for (k,v) in ohai_ds.items(): + k2 = "ohai_%s" % k.replace('-', '_') + setup_options[k2] = v + + setup_result = { 'ansible_facts': {} } + + for (k,v) in setup_options.items(): + if module.params['filter'] == '*' or fnmatch.fnmatch(k, module.params['filter']): + setup_result['ansible_facts'][k] = v + + # hack to keep --verbose from showing all the setup module results + setup_result['verbose_override'] = True + + return setup_result + diff --git a/lib/ansible/module_utils/gce.py b/lib/ansible/module_utils/gce.py index f6401c68d01..6d6fb158ffc 100644 --- a/lib/ansible/module_utils/gce.py +++ b/lib/ansible/module_utils/gce.py @@ -1,3 +1,32 @@ +# This code is part of Ansible, but is an independent component. +# This particular file snippet, and this file snippet only, is BSD licensed. +# Modules you write using this snippet, which is embedded dynamically by Ansible +# still belong to the author of the module, and may assign their own license +# to the complete work. +# +# Copyright (c), Franck Cuny , 2014 +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without modification, +# are permitted provided that the following conditions are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. 
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. +# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +# + USER_AGENT_PRODUCT="Ansible-gce" USER_AGENT_VERSION="v1" diff --git a/lib/ansible/module_utils/known_hosts.py b/lib/ansible/module_utils/known_hosts.py index 000db9d1e62..62600d7b4da 100644 --- a/lib/ansible/module_utils/known_hosts.py +++ b/lib/ansible/module_utils/known_hosts.py @@ -1,4 +1,36 @@ -def add_git_host_key(module, url, accept_hostkey=True): +# This code is part of Ansible, but is an independent component. +# This particular file snippet, and this file snippet only, is BSD licensed. +# Modules you write using this snippet, which is embedded dynamically by Ansible +# still belong to the author of the module, and may assign their own license +# to the complete work. +# +# Copyright (c), Michael DeHaan , 2012-2013 +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without modification, +# are permitted provided that the following conditions are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. +# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
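The known_hosts hunk below matches hosts against OpenSSH's hashed entry format: the first field of a hashed line is `|1|base64(salt)|base64(digest)`, where the digest is HMAC-SHA1 keyed with the salt over the hostname. A self-contained, Python 3-style restatement of that check (the helper name and the zero-byte sample salt are illustrative; the module code below does the same thing with Python 2's `.decode('base64')` idiom):

```python
import base64
import hmac
from hashlib import sha1

HASHED_KEY_MAGIC = "|1|"

def host_matches_hashed_entry(host, entry_token):
    """Check a hostname against the hashed first field of a known_hosts line."""
    if not entry_token.startswith(HASHED_KEY_MAGIC):
        return False
    salt_b64, digest_b64 = entry_token[len(HASHED_KEY_MAGIC):].split("|", 1)
    # Recompute HMAC-SHA1(key=salt, msg=hostname) and compare digests.
    digest = hmac.new(base64.b64decode(salt_b64), host.encode(), sha1).digest()
    return digest == base64.b64decode(digest_b64)

# Build a token for 'example.com' with a fixed sample salt, then verify it.
salt = b"\x00" * 20
token = "|1|%s|%s" % (
    base64.b64encode(salt).decode(),
    base64.b64encode(hmac.new(salt, b"example.com", sha1).digest()).decode(),
)
print(host_matches_hashed_entry("example.com", token))   # True
print(host_matches_hashed_entry("evil.example", token))  # False
```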
+ +import hmac +from hashlib import sha1 +HASHED_KEY_MAGIC = "|1|" + +def add_git_host_key(module, url, accept_hostkey=True, create_dir=True): """ idempotently add a git url hostkey """ @@ -8,7 +40,7 @@ def add_git_host_key(module, url, accept_hostkey=True): known_host = check_hostkey(module, fqdn) if not known_host: if accept_hostkey: - rc, out, err = add_host_key(module, fqdn) + rc, out, err = add_host_key(module, fqdn, create_dir=create_dir) if rc != 0: module.fail_json(msg="failed to add %s hostkey: %s" % (fqdn, out + err)) else: @@ -30,41 +62,94 @@ def get_fqdn(repo_url): return result - def check_hostkey(module, fqdn): + return not not_in_host_file(module, fqdn) - """ use ssh-keygen to check if key is known """ +# this is a variant of code found in connection_plugins/paramiko.py and we should modify +# the paramiko code to import and use this. + +def not_in_host_file(self, host): - result = False - keygen_cmd = module.get_bin_path('ssh-keygen', True) - this_cmd = keygen_cmd + " -H -F " + fqdn - rc, out, err = module.run_command(this_cmd) - if rc == 0 and out != "": - result = True + if 'USER' in os.environ: + user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts") else: - # Check the main system location - this_cmd = keygen_cmd + " -H -f /etc/ssh/ssh_known_hosts -F " + fqdn - rc, out, err = module.run_command(this_cmd) + user_host_file = "~/.ssh/known_hosts" + user_host_file = os.path.expanduser(user_host_file) + + host_file_list = [] + host_file_list.append(user_host_file) + host_file_list.append("/etc/ssh/ssh_known_hosts") + host_file_list.append("/etc/ssh/ssh_known_hosts2") + + hfiles_not_found = 0 + for hf in host_file_list: + if not os.path.exists(hf): + hfiles_not_found += 1 + continue + + try: + host_fh = open(hf) + except IOError, e: + hfiles_not_found += 1 + continue + else: + data = host_fh.read() + host_fh.close() + + for line in data.split("\n"): + if line is None or " " not in line: + continue + tokens = line.split() + if tokens[0].find(HASHED_KEY_MAGIC) == 0: + # this is a hashed known host entry + try: + (kn_salt,kn_host) = tokens[0][len(HASHED_KEY_MAGIC):].split("|",2) + hash = hmac.new(kn_salt.decode('base64'), digestmod=sha1) + hash.update(host) + if hash.digest() == kn_host.decode('base64'): + return False + except: + # invalid hashed host key, skip it + continue + else: + # standard host file entry + if host in tokens[0]: + return False - if rc == 0: - if out != "": - result = True + return True - return result -def add_host_key(module, fqdn, key_type="rsa"): +def add_host_key(module, fqdn, key_type="rsa", create_dir=False): """ use ssh-keyscan to add the hostkey """ result = False keyscan_cmd = module.get_bin_path('ssh-keyscan', True) - if not os.path.exists(os.path.expanduser("~/.ssh/")): - module.fail_json(msg="%s does not exist" % os.path.expanduser("~/.ssh/")) + if 'USER' in os.environ: + user_ssh_dir = os.path.expandvars("~${USER}/.ssh/") + user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts") + else: + user_ssh_dir = "~/.ssh/" + user_host_file = "~/.ssh/known_hosts" + user_ssh_dir = os.path.expanduser(user_ssh_dir) + + if not os.path.exists(user_ssh_dir): + if create_dir: + try: + os.makedirs(user_ssh_dir, 0700) + except: + module.fail_json(msg="failed to create host key directory: %s" % user_ssh_dir) + else: + module.fail_json(msg="%s does not exist" % user_ssh_dir) + elif not os.path.isdir(user_ssh_dir): + module.fail_json(msg="%s is not a directory" % user_ssh_dir) + + this_cmd = "%s -t %s %s" % (keyscan_cmd, key_type, fqdn) - 
this_cmd = "%s -t %s %s >> ~/.ssh/known_hosts" % (keyscan_cmd, key_type, fqdn) rc, out, err = module.run_command(this_cmd) + module.append_to_file(user_host_file, out) return rc, out, err diff --git a/lib/ansible/module_utils/rax.py b/lib/ansible/module_utils/rax.py index 84e5686d24f..98623c7d38e 100644 --- a/lib/ansible/module_utils/rax.py +++ b/lib/ansible/module_utils/rax.py @@ -1,5 +1,32 @@ -import os +# This code is part of Ansible, but is an independent component. +# This particular file snippet, and this file snippet only, is BSD licensed. +# Modules you write using this snippet, which is embedded dynamically by Ansible +# still belong to the author of the module, and may assign their own license +# to the complete work. +# +# Copyright (c), Michael DeHaan , 2012-2013 +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without modification, +# are permitted provided that the following conditions are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. +# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
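In the reworked `add_host_key` above, the host key is no longer appended via shell redirection: the module captures `ssh-keyscan` output with `module.run_command` and writes it out with `module.append_to_file`, creating `~/.ssh` with mode 0700 first when `create_dir` is set. A minimal standalone sketch of the same flow (the function name and the use of `subprocess` here are illustrative, not the module's API):

```python
import os
import subprocess

def keyscan_and_append(fqdn, key_type="rsa",
                       known_hosts=os.path.expanduser("~/.ssh/known_hosts")):
    # Fetch the host's public key; ssh-keyscan prints "host keytype base64key".
    out = subprocess.check_output(["ssh-keyscan", "-t", key_type, fqdn])
    ssh_dir = os.path.dirname(known_hosts)
    if not os.path.isdir(ssh_dir):
        os.makedirs(ssh_dir, 0o700)     # mirrors the create_dir=True behaviour
    with open(known_hosts, "ab") as fh:  # append, never truncate
        fh.write(out)
    return out

# keyscan_and_append("github.com")  # would append github.com's RSA host key
```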
+import os

def rax_argument_spec():
    return dict(
diff --git a/lib/ansible/module_utils/redhat.py b/lib/ansible/module_utils/redhat.py
new file mode 100644
index 00000000000..a1081f9c8c7
--- /dev/null
+++ b/lib/ansible/module_utils/redhat.py
@@ -0,0 +1,252 @@
+import os
+import re
+import types
+import ConfigParser
+import shlex
+
+
+class RegistrationBase(object):
+    def __init__(self, module, username=None, password=None):
+        self.module = module
+        self.username = username
+        self.password = password
+
+    def configure(self):
+        raise NotImplementedError("Must be implemented by a sub-class")
+
+    def enable(self):
+        # Remove any existing redhat.repo
+        redhat_repo = '/etc/yum.repos.d/redhat.repo'
+        if os.path.isfile(redhat_repo):
+            os.unlink(redhat_repo)
+
+    def register(self):
+        raise NotImplementedError("Must be implemented by a sub-class")
+
+    def unregister(self):
+        raise NotImplementedError("Must be implemented by a sub-class")
+
+    def unsubscribe(self):
+        raise NotImplementedError("Must be implemented by a sub-class")
+
+    def update_plugin_conf(self, plugin, enabled=True):
+        plugin_conf = '/etc/yum/pluginconf.d/%s.conf' % plugin
+        if os.path.isfile(plugin_conf):
+            cfg = ConfigParser.ConfigParser()
+            cfg.read([plugin_conf])
+            if enabled:
+                cfg.set('main', 'enabled', 1)
+            else:
+                cfg.set('main', 'enabled', 0)
+            # rewrite the plugin configuration in place ('rwa+' is not a valid
+            # file mode and would effectively open the file read-only)
+            fd = open(plugin_conf, 'w')
+            cfg.write(fd)
+            fd.close()
+
+    def subscribe(self, **kwargs):
+        raise NotImplementedError("Must be implemented by a sub-class")
+
+
+class Rhsm(RegistrationBase):
+    def __init__(self, module, username=None, password=None):
+        RegistrationBase.__init__(self, module, username, password)
+        self.config = self._read_config()
+        self.module = module
+
+    def _read_config(self, rhsm_conf='/etc/rhsm/rhsm.conf'):
+        '''
+            Load RHSM configuration from /etc/rhsm/rhsm.conf.
+            Returns:
+             * ConfigParser object
+        '''
+
+        # Read RHSM defaults ...
+        cp = ConfigParser.ConfigParser()
+        cp.read(rhsm_conf)
+
+        # Add support for specifying a default value without having to stand up
+        # extra configuration.  Yeah, I know this should be subclassed ... but, oh well
+        def get_option_default(self, key, default=''):
+            sect, opt = key.split('.', 1)
+            if self.has_section(sect) and self.has_option(sect, opt):
+                return self.get(sect, opt)
+            else:
+                return default
+
+        cp.get_option = types.MethodType(get_option_default, cp, ConfigParser.ConfigParser)
+
+        return cp
+
+    def enable(self):
+        '''
+            Enable the system to receive updates from subscription-manager.
+            This involves updating affected yum plugins and removing any
+            conflicting yum repositories.
+        '''
+        RegistrationBase.enable(self)
+        self.update_plugin_conf('rhnplugin', False)
+        self.update_plugin_conf('subscription-manager', True)
+
+    def configure(self, **kwargs):
+        '''
+            Configure the system as directed for registration with RHN
+            Raises:
+              * Exception - if error occurs while running command
+        '''
+        args = ['subscription-manager', 'config']
+
+        # Pass supplied **kwargs as parameters to subscription-manager.  Ignore
+        # non-configuration parameters and replace '_' with '.'.  For example,
+        # 'rhsm_baseurl' becomes '--rhsm.baseurl'.
+        for k,v in kwargs.items():
+            if re.search(r'^(system|rhsm)_', k):
+                args.append('--%s=%s' % (k.replace('_','.'), v))
+
+        self.module.run_command(args, check_rc=True)
+
+    @property
+    def is_registered(self):
+        '''
+            Determine whether the current system is registered.
+            Returns:
+              * Boolean - whether the current system is currently registered to
+                          RHN.
+        '''
+        # Quick version (not used): just check for the consumer certificates
+        if False:
+            return os.path.isfile('/etc/pki/consumer/cert.pem') and \
+                   os.path.isfile('/etc/pki/consumer/key.pem')
+
+        args = ['subscription-manager', 'identity']
+        rc, stdout, stderr = self.module.run_command(args, check_rc=False)
+        if rc == 0:
+            return True
+        else:
+            return False
+
+    def register(self, username, password, autosubscribe, activationkey):
+        '''
+            Register the current system to the provided RHN server
+            Raises:
+              * Exception - if error occurs while running command
+        '''
+        args = ['subscription-manager', 'register']
+
+        # Generate command arguments
+        if activationkey:
+            args.extend(['--activationkey', activationkey])
+        else:
+            if autosubscribe:
+                args.append('--autosubscribe')
+            if username:
+                args.extend(['--username', username])
+            if password:
+                args.extend(['--password', password])
+
+        # Do the needful...
+        rc, stdout, stderr = self.module.run_command(args, check_rc=True)
+
+    def unsubscribe(self):
+        '''
+            Unsubscribe a system from all subscribed channels
+            Raises:
+              * Exception - if error occurs while running command
+        '''
+        args = ['subscription-manager', 'unsubscribe', '--all']
+        rc, stdout, stderr = self.module.run_command(args, check_rc=True)
+
+    def unregister(self):
+        '''
+            Unregister a currently registered system
+            Raises:
+              * Exception - if error occurs while running command
+        '''
+        args = ['subscription-manager', 'unregister']
+        rc, stdout, stderr = self.module.run_command(args, check_rc=True)
+
+    def subscribe(self, regexp):
+        '''
+            Subscribe current system to available pools matching the specified
+            regular expression
+            Raises:
+              * Exception - if error occurs while running command
+        '''
+
+        # Available pools ready for subscription
+        available_pools = RhsmPools(self.module)
+
+        for pool in available_pools.filter(regexp):
+            pool.subscribe()
+
+
+class RhsmPool(object):
+    '''
+        Convenience class for housing subscription information
+    '''
+
+    def __init__(self, module, **kwargs):
+        self.module = module
+        for k,v in kwargs.items():
+            setattr(self, k, v)
+
+    def __str__(self):
+        return str(self.__getattribute__('_name'))
+
+    def subscribe(self):
+        args = "subscription-manager subscribe --pool %s" % self.PoolId
+        rc, stdout, stderr = self.module.run_command(args, check_rc=True)
+        if rc == 0:
+            return True
+        else:
+            return False
+
+
+class RhsmPools(object):
+    """
+        This class is used for manipulating pools subscriptions with RHSM
+    """
+    def __init__(self, module):
+        self.module = module
+        self.products = self._load_product_list()
+
+    def __iter__(self):
+        return self.products.__iter__()
+
+    def _load_product_list(self):
+        """
+            Load the list of all available pools for the system into a data structure
+        """
+        args = "subscription-manager list --available"
+        rc, stdout, stderr = self.module.run_command(args, check_rc=True)
+
+        products = []
+        for line in stdout.split('\n'):
+            # Remove leading+trailing whitespace
+            line = line.strip()
+            # An empty line implies the end of an output group
+            if len(line) == 0:
+                continue
+            # If a colon ':' is found, parse
+            elif ':' in line:
+                (key, value) = line.split(':', 1)
+                key = key.strip().replace(" ", "")  # to unify key names
+                value = value.strip()
+                if key in ['ProductName', 'SubscriptionName']:
+                    # Remember the name for later processing
+                    products.append(RhsmPool(self.module, _name=value, key=value))
+                elif products:
+                    # Associate value with most recently recorded product
+                    products[-1].__setattr__(key, value)
+                # FIXME - log some warning?
+ #else: + # warnings.warn("Unhandled subscription key/value: %s/%s" % (key,value)) + return products + + def filter(self, regexp='^$'): + ''' + Return a list of RhsmPools whose name matches the provided regular expression + ''' + r = re.compile(regexp) + for product in self.products: + if r.search(product._name): + yield product + diff --git a/lib/ansible/module_utils/urls.py b/lib/ansible/module_utils/urls.py new file mode 100644 index 00000000000..76ee34d7748 --- /dev/null +++ b/lib/ansible/module_utils/urls.py @@ -0,0 +1,319 @@ +# This code is part of Ansible, but is an independent component. +# This particular file snippet, and this file snippet only, is BSD licensed. +# Modules you write using this snippet, which is embedded dynamically by Ansible +# still belong to the author of the module, and may assign their own license +# to the complete work. +# +# Copyright (c), Michael DeHaan , 2012-2013 +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without modification, +# are permitted provided that the following conditions are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. +# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +try: + import urllib + HAS_URLLIB = True +except: + HAS_URLLIB = False + +try: + import urllib2 + HAS_URLLIB2 = True +except: + HAS_URLLIB2 = False + +try: + import urlparse + HAS_URLPARSE = True +except: + HAS_URLPARSE = False + +try: + import ssl + HAS_SSL=True +except: + HAS_SSL=False + +import socket +import tempfile + + +# This is a dummy cacert provided for Mac OS since you need at least 1 +# ca cert, regardless of validity, for Python on Mac OS to use the +# keychain functionality in OpenSSL for validating SSL certificates. 
+# See: http://mercurial.selenic.com/wiki/CACertificates#Mac_OS_X_10.6_and_higher
+DUMMY_CA_CERT = """-----BEGIN CERTIFICATE-----
+MIICvDCCAiWgAwIBAgIJAO8E12S7/qEpMA0GCSqGSIb3DQEBBQUAMEkxCzAJBgNV
+BAYTAlVTMRcwFQYDVQQIEw5Ob3J0aCBDYXJvbGluYTEPMA0GA1UEBxMGRHVyaGFt
+MRAwDgYDVQQKEwdBbnNpYmxlMB4XDTE0MDMxODIyMDAyMloXDTI0MDMxNTIyMDAy
+MlowSTELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMQ8wDQYD
+VQQHEwZEdXJoYW0xEDAOBgNVBAoTB0Fuc2libGUwgZ8wDQYJKoZIhvcNAQEBBQAD
+gY0AMIGJAoGBANtvpPq3IlNlRbCHhZAcP6WCzhc5RbsDqyh1zrkmLi0GwcQ3z/r9
+gaWfQBYhHpobK2Tiq11TfraHeNB3/VfNImjZcGpN8Fl3MWwu7LfVkJy3gNNnxkA1
+4Go0/LmIvRFHhbzgfuo9NFgjPmmab9eqXJceqZIlz2C8xA7EeG7ku0+vAgMBAAGj
+gaswgagwHQYDVR0OBBYEFPnN1nPRqNDXGlCqCvdZchRNi/FaMHkGA1UdIwRyMHCA
+FPnN1nPRqNDXGlCqCvdZchRNi/FaoU2kSzBJMQswCQYDVQQGEwJVUzEXMBUGA1UE
+CBMOTm9ydGggQ2Fyb2xpbmExDzANBgNVBAcTBkR1cmhhbTEQMA4GA1UEChMHQW5z
+aWJsZYIJAO8E12S7/qEpMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEA
+MUB80IR6knq9K/tY+hvPsZer6eFMzO3JGkRFBh2kn6JdMDnhYGX7AXVHGflrwNQH
+qFy+aenWXsC0ZvrikFxbQnX8GVtDADtVznxOi7XzFw7JOxdsVrpXgSN0eh0aMzvV
+zKPZsZ2miVGclicJHzm5q080b1p/sZtuKIEZk6vZqEg=
+-----END CERTIFICATE-----
+"""
+
+
+class RequestWithMethod(urllib2.Request):
+    '''
+    Workaround for using DELETE/PUT/etc with urllib2
+    Originally contained in library/net_infrastructure/dnsmadeeasy
+    '''
+
+    def __init__(self, url, method, data=None, headers={}):
+        self._method = method
+        urllib2.Request.__init__(self, url, data, headers)
+
+    def get_method(self):
+        if self._method:
+            return self._method
+        else:
+            return urllib2.Request.get_method(self)
+
+
+class SSLValidationHandler(urllib2.BaseHandler):
+    '''
+    A custom handler class for SSL validation.
+
+    Based on:
+    http://stackoverflow.com/questions/1087227/validate-ssl-certificates-with-python
+    http://techknack.net/python-urllib2-handlers/
+    '''
+
+    def __init__(self, module, hostname, port):
+        self.module = module
+        self.hostname = hostname
+        self.port = port
+
+    def get_ca_certs(self):
+        # tries to find a valid CA cert in one of the
+        # standard locations for the current distribution
+
+        paths_checked = []
+        platform = get_platform()
+        distribution = get_distribution()
+
+        # build a list of paths to check for .crt/.pem files
+        # based on the platform type
+        paths_checked.append('/etc/ssl/certs')
+        if platform == 'Linux':
+            paths_checked.append('/etc/pki/ca-trust/extracted/pem')
+            paths_checked.append('/etc/pki/tls/certs')
+            paths_checked.append('/usr/share/ca-certificates/cacert.org')
+        elif platform == 'FreeBSD':
+            paths_checked.append('/usr/local/share/certs')
+        elif platform == 'OpenBSD':
+            paths_checked.append('/etc/ssl')
+        elif platform == 'NetBSD':
+            paths_checked.append('/etc/openssl/certs')
+
+        # fall back to a user-deployed cert in a standard
+        # location if the OS platform one is not available
+        paths_checked.append('/etc/ansible')
+
+        tmp_fd, tmp_path = tempfile.mkstemp()
+
+        # Write the dummy ca cert if we are running on Mac OS X
+        if platform == 'Darwin':
+            os.write(tmp_fd, DUMMY_CA_CERT)
+
+        # for all of the paths, find any .crt or .pem files
+        # and compile them into single temp file for use
+        # in the ssl check to speed up the test
+        for path in paths_checked:
+            if os.path.exists(path) and os.path.isdir(path):
+                dir_contents = os.listdir(path)
+                for f in dir_contents:
+                    full_path = os.path.join(path, f)
+                    if os.path.isfile(full_path) and os.path.splitext(f)[1] in ('.crt','.pem'):
+                        try:
+                            cert_file = open(full_path, 'r')
+                            os.write(tmp_fd, cert_file.read())
+                            cert_file.close()
+                        except:
+                            # skip certs we cannot read
+                            pass
+
+        return (tmp_path,
paths_checked) + + def http_request(self, req): + tmp_ca_cert_path, paths_checked = self.get_ca_certs() + try: + s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED) + ssl_s.connect((self.hostname, self.port)) + ssl_s.close() + except (ssl.SSLError, socket.error), e: + # fail if we tried all of the certs but none worked + if 'connection refused' in str(e).lower(): + self.module.fail_json(msg='Failed to connect to %s:%s.' % (self.hostname, self.port)) + else: + self.module.fail_json( + msg='Failed to validate the SSL certificate for %s:%s. ' % (self.hostname, self.port) + \ + 'Use validate_certs=no or make sure your managed systems have a valid CA certificate installed. ' + \ + 'Paths checked for this platform: %s' % ", ".join(paths_checked) + ) + try: + # cleanup the temp file created, don't worry + # if it fails for some reason + os.remove(tmp_ca_cert_path) + except: + pass + + return req + + https_request = http_request + + +def url_argument_spec(): + ''' + Creates an argument spec that can be used with any module + that will be requesting content via urllib/urllib2 + ''' + return dict( + url = dict(), + force = dict(default='no', aliases=['thirsty'], type='bool'), + http_agent = dict(default='ansible-httpget'), + use_proxy = dict(default='yes', type='bool'), + validate_certs = dict(default='yes', type='bool'), + ) + + +def fetch_url(module, url, data=None, headers=None, method=None, + use_proxy=False, force=False, last_mod_time=None, timeout=10): + ''' + Fetches a file from an HTTP/FTP server using urllib2 + ''' + + if not HAS_URLLIB: + module.fail_json(msg='urllib is not installed') + if not HAS_URLLIB2: + module.fail_json(msg='urllib2 is not installed') + elif not HAS_URLPARSE: + module.fail_json(msg='urlparse is not installed') + + r = None + handlers = [] + info = dict(url=url) + + # Get validate_certs from the module params + validate_certs = module.params.get('validate_certs', True) + + parsed = urlparse.urlparse(url) + if parsed[0] == 'https': + if not HAS_SSL and validate_certs: + module.fail_json(msg='SSL validation is not available in your version of python. 
You can use validate_certs=no, however this is unsafe and not recommended')
+        elif validate_certs:
+            # do the cert validation
+            netloc = parsed[1]
+            if '@' in netloc:
+                netloc = netloc.split('@', 1)[1]
+            if ':' in netloc:
+                hostname, port = netloc.split(':', 1)
+                port = int(port)
+            else:
+                hostname = netloc
+                port = 443
+            # create the SSL validation handler and
+            # add it to the list of handlers
+            ssl_handler = SSLValidationHandler(module, hostname, port)
+            handlers.append(ssl_handler)
+
+    if parsed[0] != 'ftp' and '@' in parsed[1]:
+        credentials, netloc = parsed[1].split('@', 1)
+        if ':' in credentials:
+            username, password = credentials.split(':', 1)
+        else:
+            username = credentials
+            password = ''
+        parsed = list(parsed)
+        parsed[1] = netloc
+
+        passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
+        # this creates a password manager; because we pass None as the realm
+        # it will always use this username/password combination for any URL
+        # whose netloc matches the one registered here
+        passman.add_password(None, netloc, username, password)
+
+        # create the AuthHandler
+        authhandler = urllib2.HTTPBasicAuthHandler(passman)
+        handlers.append(authhandler)
+
+        # reconstruct url without credentials
+        url = urlparse.urlunparse(parsed)
+
+    if not use_proxy:
+        proxyhandler = urllib2.ProxyHandler({})
+        handlers.append(proxyhandler)
+
+    opener = urllib2.build_opener(*handlers)
+    urllib2.install_opener(opener)
+
+    if method:
+        if method.upper() not in ('OPTIONS','GET','HEAD','POST','PUT','DELETE','TRACE','CONNECT'):
+            module.fail_json(msg='invalid HTTP request method: %s' % method.upper())
+        request = RequestWithMethod(url, method.upper(), data)
+    else:
+        request = urllib2.Request(url, data)
+
+    # add the custom agent header, to help prevent issues
+    # with sites that block the default urllib agent string
+    request.add_header('User-agent', module.params.get('http_agent'))
+
+    # if we're ok with getting a 304, set the timestamp in the
+    # header, otherwise make sure we don't get a cached copy
+    if last_mod_time and not force:
+        tstamp = last_mod_time.strftime('%a, %d %b %Y %H:%M:%S +0000')
+        request.add_header('If-Modified-Since', tstamp)
+    else:
+        request.add_header('cache-control', 'no-cache')
+
+    # user defined headers now, which may override things we've set above
+    if headers:
+        if not isinstance(headers, dict):
+            module.fail_json(msg="headers provided to fetch_url() must be a dict")
+        for header in headers:
+            request.add_header(header, headers[header])
+
+    try:
+        if sys.version_info < (2,6,0):
+            # urlopen in python prior to 2.6.0 did not
+            # have a timeout parameter
+            r = urllib2.urlopen(request, None)
+        else:
+            r = urllib2.urlopen(request, None, timeout)
+        info.update(r.info())
+        info['url'] = r.geturl()  # The URL goes in too, because of redirects.
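+        # on a successful open we fall through to record a summary message
+        # and a synthetic 200 status below, so callers can branch on
+        # info['status'] uniformly -- a typical caller sketch (illustrative,
+        # not from this patch):
+        #     r, info = fetch_url(module, url)
+        #     if info['status'] != 200:
+        #         module.fail_json(msg=info['msg'])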
+ info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), status=200)) + except urllib2.HTTPError, e: + info.update(dict(msg=str(e), status=e.code)) + except urllib2.URLError, e: + code = int(getattr(e, 'code', -1)) + info.update(dict(msg="Request failed: %s" % str(e), status=code)) + + return r, info + diff --git a/lib/ansible/playbook/__init__.py b/lib/ansible/playbook/__init__.py index 65965526251..021d62890dc 100644 --- a/lib/ansible/playbook/__init__.py +++ b/lib/ansible/playbook/__init__.py @@ -29,7 +29,11 @@ from play import Play import StringIO import pipes +# the setup cache stores all variables about a host +# gathered during the setup step, while the vars cache +# holds all other variables about a host SETUP_CACHE = collections.defaultdict(dict) +VARS_CACHE = collections.defaultdict(dict) class PlayBook(object): ''' @@ -73,6 +77,7 @@ class PlayBook(object): su_user = False, su_pass = False, vault_password = False, + force_handlers = False, ): """ @@ -92,9 +97,12 @@ class PlayBook(object): sudo: if not specified per play, requests all plays use sudo mode inventory: can be specified instead of host_list to use a pre-existing inventory object check: don't change anything, just try to detect some potential changes + any_errors_fatal: terminate the entire execution immediately when one of the hosts has failed + force_handlers: continue to notify and run handlers even if a task fails """ self.SETUP_CACHE = SETUP_CACHE + self.VARS_CACHE = VARS_CACHE arguments = [] if playbook is None: @@ -140,6 +148,7 @@ class PlayBook(object): self.su_user = su_user self.su_pass = su_pass self.vault_password = vault_password + self.force_handlers = force_handlers self.callbacks.playbook = self self.runner_callbacks.playbook = self @@ -166,6 +175,7 @@ class PlayBook(object): self.filename = playbook (self.playbook, self.play_basedirs) = self._load_playbook_from_file(playbook, vars) ansible.callbacks.load_callback_plugins() + ansible.callbacks.set_playbook(self.callbacks, self) # ***************************************************** @@ -300,7 +310,7 @@ class PlayBook(object): # since these likely got killed by async_wrapper for host in poller.hosts_to_poll: reason = { 'failed' : 1, 'rc' : None, 'msg' : 'timed out' } - self.runner_callbacks.on_async_failed(host, reason, poller.jid) + self.runner_callbacks.on_async_failed(host, reason, poller.runner.vars_cache[host]['ansible_job_id']) results['contacted'][host] = reason return results @@ -335,6 +345,7 @@ class PlayBook(object): default_vars=task.default_vars, private_key_file=self.private_key_file, setup_cache=self.SETUP_CACHE, + vars_cache=self.VARS_CACHE, basedir=task.play.basedir, conditional=task.when, callbacks=self.runner_callbacks, @@ -371,7 +382,7 @@ class PlayBook(object): results = self._async_poll(poller, task.async_seconds, task.async_poll_interval) else: for (host, res) in results.get('contacted', {}).iteritems(): - self.runner_callbacks.on_async_ok(host, res, poller.jid) + self.runner_callbacks.on_async_ok(host, res, poller.runner.vars_cache[host]['ansible_job_id']) contacted = results.get('contacted',{}) dark = results.get('dark', {}) @@ -402,6 +413,10 @@ class PlayBook(object): ansible.callbacks.set_task(self.runner_callbacks, None) return True + # template ignore_errors + cond = template(play.basedir, task.ignore_errors, task.module_vars, expand_lists=False) + task.ignore_errors = utils.check_conditional(cond , play.basedir, task.module_vars, fail_on_undefined=C.DEFAULT_UNDEFINED_VAR_BEHAVIOR) + # load up an 
appropriate ansible runner to run the task in parallel results = self._run_task_internal(task) @@ -426,8 +441,6 @@ class PlayBook(object): else: facts = result.get('ansible_facts', {}) self.SETUP_CACHE[host].update(facts) - # extra vars need to always trump - so update again following the facts - self.SETUP_CACHE[host].update(self.extra_vars) if task.register: if 'stdout' in result and 'stdout_lines' not in result: result['stdout_lines'] = result['stdout'].splitlines() @@ -475,11 +488,15 @@ class PlayBook(object): def _do_setup_step(self, play): ''' get facts from the remote system ''' - if play.gather_facts is False: - return {} - host_list = self._trim_unavailable_hosts(play._play_hosts) + if play.gather_facts is None and C.DEFAULT_GATHERING == 'smart': + host_list = [h for h in host_list if h not in self.SETUP_CACHE or 'module_setup' not in self.SETUP_CACHE[h]] + if len(host_list) == 0: + return {} + elif play.gather_facts is False or (play.gather_facts is None and C.DEFAULT_GATHERING == 'explicit'): + return {} + self.callbacks.on_setup() self.inventory.restrict_to(host_list) @@ -500,6 +517,7 @@ class PlayBook(object): remote_port=play.remote_port, private_key_file=self.private_key_file, setup_cache=self.SETUP_CACHE, + vars_cache=self.VARS_CACHE, callbacks=self.runner_callbacks, sudo=play.sudo, sudo_user=play.sudo_user, @@ -560,7 +578,7 @@ class PlayBook(object): def _run_play(self, play): ''' run a list of tasks for a given pattern, in order ''' - + self.callbacks.on_play_start(play.name) # Get the hosts for this play play._play_hosts = self.inventory.list_hosts(play.hosts) @@ -589,6 +607,7 @@ class PlayBook(object): play_hosts.append(all_hosts.pop()) serialized_batch.append(play_hosts) + task_errors = False for on_hosts in serialized_batch: # restrict the play to just the hosts we have in our on_hosts block that are @@ -599,41 +618,12 @@ class PlayBook(object): for task in play.tasks(): if task.meta is not None: - - # meta tasks are an internalism and are not valid for end-user playbook usage - # here a meta task is a placeholder that signals handlers should be run - + # meta tasks can force handlers to run mid-play if task.meta == 'flush_handlers': - fired_names = {} - for handler in play.handlers(): - if len(handler.notified_by) > 0: - self.inventory.restrict_to(handler.notified_by) - - # Resolve the variables first - handler_name = template(play.basedir, handler.name, handler.module_vars) - if handler_name not in fired_names: - self._run_task(play, handler, True) - # prevent duplicate handler includes from running more than once - fired_names[handler_name] = 1 - - host_list = self._trim_unavailable_hosts(play._play_hosts) - if handler.any_errors_fatal and len(host_list) < hosts_count: - play.max_fail_pct = 0 - if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count): - host_list = None - if not host_list: - self.callbacks.on_no_hosts_remaining() - return False - - self.inventory.lift_restriction() - new_list = handler.notified_by[:] - for host in handler.notified_by: - if host in on_hosts: - while host in new_list: - new_list.remove(host) - handler.notified_by = new_list - - continue + self.run_handlers(play) + + # skip calling the handler till the play is finished + continue # only run the task if the requested tags match should_run = False @@ -666,15 +656,74 @@ class PlayBook(object): play.max_fail_pct = 0 # If threshold for max nodes failed is exceeded , bail out. 
- if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count): - host_list = None + if play.serial > 0: + # if serial is set, we need to shorten the size of host_count + play_count = len(play._play_hosts) + if (play_count - len(host_list)) > int((play.max_fail_pct)/100.0 * play_count): + host_list = None + else: + if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count): + host_list = None # if no hosts remain, drop out if not host_list: - self.callbacks.on_no_hosts_remaining() - return False + if self.force_handlers: + task_errors = True + break + else: + self.callbacks.on_no_hosts_remaining() + return False + # lift restrictions after each play finishes self.inventory.lift_also_restriction() + if task_errors and not self.force_handlers: + # if there were failed tasks and handler execution + # is not forced, quit the play with an error + return False + else: + # no errors, go ahead and execute all handlers + if not self.run_handlers(play): + return False + return True + + def run_handlers(self, play): + on_hosts = play._play_hosts + hosts_count = len(on_hosts) + for task in play.tasks(): + if task.meta is not None: + + fired_names = {} + for handler in play.handlers(): + if len(handler.notified_by) > 0: + self.inventory.restrict_to(handler.notified_by) + + # Resolve the variables first + handler_name = template(play.basedir, handler.name, handler.module_vars) + if handler_name not in fired_names: + self._run_task(play, handler, True) + # prevent duplicate handler includes from running more than once + fired_names[handler_name] = 1 + + host_list = self._trim_unavailable_hosts(play._play_hosts) + if handler.any_errors_fatal and len(host_list) < hosts_count: + play.max_fail_pct = 0 + if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count): + host_list = None + if not host_list and not self.force_handlers: + self.callbacks.on_no_hosts_remaining() + return False + + self.inventory.lift_restriction() + new_list = handler.notified_by[:] + for host in handler.notified_by: + if host in on_hosts: + while host in new_list: + new_list.remove(host) + handler.notified_by = new_list + + continue + + return True diff --git a/lib/ansible/playbook/play.py b/lib/ansible/playbook/play.py index e9f00e47024..386170be78c 100644 --- a/lib/ansible/playbook/play.py +++ b/lib/ansible/playbook/play.py @@ -26,6 +26,7 @@ import pipes import shlex import os import sys +import uuid class Play(object): @@ -92,6 +93,10 @@ class Play(object): self._update_vars_files_for_host(None) + # apply any extra_vars specified on the command line now + if type(self.playbook.extra_vars) == dict: + self.vars = utils.combine_vars(self.vars, self.playbook.extra_vars) + # template everything to be efficient, but do not pre-mature template # tasks/handlers as they may have inventory scope overrides _tasks = ds.pop('tasks', []) @@ -117,7 +122,6 @@ class Play(object): self.sudo = ds.get('sudo', self.playbook.sudo) self.sudo_user = ds.get('sudo_user', self.playbook.sudo_user) self.transport = ds.get('connection', self.playbook.transport) - self.gather_facts = ds.get('gather_facts', True) self.remote_port = self.remote_port self.any_errors_fatal = utils.boolean(ds.get('any_errors_fatal', 'false')) self.accelerate = utils.boolean(ds.get('accelerate', 'false')) @@ -126,7 +130,13 @@ class Play(object): self.max_fail_pct = int(ds.get('max_fail_percentage', 100)) self.su = ds.get('su', self.playbook.su) self.su_user = ds.get('su_user', self.playbook.su_user) - #self.vault_password 
= vault_password + + # gather_facts is not a simple boolean, as None means that a 'smart' + # fact gathering mode will be used, so we need to be careful here as + # calling utils.boolean(None) returns False + self.gather_facts = ds.get('gather_facts', None) + if self.gather_facts: + self.gather_facts = utils.boolean(self.gather_facts) # Fail out if user specifies a sudo param with a su param in a given play if (ds.get('sudo') or ds.get('sudo_user')) and (ds.get('su') or ds.get('su_user')): @@ -134,6 +144,7 @@ class Play(object): '("su", "su_user") cannot be used together') load_vars = {} + load_vars['role_names'] = ds.get('role_names',[]) load_vars['playbook_dir'] = self.basedir if self.playbook.inventory.basedir() is not None: load_vars['inventory_dir'] = self.playbook.inventory.basedir() @@ -141,6 +152,8 @@ class Play(object): self._tasks = self._load_tasks(self._ds.get('tasks', []), load_vars) self._handlers = self._load_tasks(self._ds.get('handlers', []), load_vars) + # apply any missing tags to role tasks + self._late_merge_role_tags() if self.sudo_user != 'root': self.sudo = True @@ -227,6 +240,25 @@ class Play(object): if meta_data: allow_dupes = utils.boolean(meta_data.get('allow_duplicates','')) + # if any tags were specified as role/dep variables, merge + # them into the current dep_vars so they're passed on to any + # further dependencies too, and so we only have one place + # (dep_vars) to look for tags going forward + def __merge_tags(var_obj): + old_tags = dep_vars.get('tags', []) + if isinstance(old_tags, basestring): + old_tags = [old_tags, ] + if isinstance(var_obj, dict): + new_tags = var_obj.get('tags', []) + if isinstance(new_tags, basestring): + new_tags = [new_tags, ] + else: + new_tags = [] + return list(set(old_tags).union(set(new_tags))) + + dep_vars['tags'] = __merge_tags(role_vars) + dep_vars['tags'] = __merge_tags(passed_vars) + # if tags are set from this role, merge them # into the tags list for the dependent role if "tags" in passed_vars: @@ -235,7 +267,7 @@ class Play(object): included_dep_vars = included_role_dep[2] if included_dep_name == dep: if "tags" in included_dep_vars: - included_dep_vars["tags"] = list(set(included_dep_vars["tags"] + passed_vars["tags"])) + included_dep_vars["tags"] = list(set(included_dep_vars["tags"]).union(set(passed_vars["tags"]))) else: included_dep_vars["tags"] = passed_vars["tags"][:] @@ -254,13 +286,6 @@ class Play(object): if 'role' in dep_vars: del dep_vars['role'] - if "tags" in passed_vars: - if not self._is_valid_tag(passed_vars["tags"]): - # one of the tags specified for this role was in the - # skip list, or we're limiting the tags and it didn't - # match one, so we just skip it completely - continue - if not allow_dupes: if dep in self.included_roles: # skip back to the top, since we don't want to @@ -343,6 +368,13 @@ class Play(object): roles = self._build_role_dependencies(roles, [], self.vars) + # give each role a uuid + for idx, val in enumerate(roles): + this_uuid = str(uuid.uuid4()) + roles[idx][-2]['role_uuid'] = this_uuid + + role_names = [] + for (role,role_path,role_vars,default_vars) in roles: # special vars must be extracted from the dict to the included tasks special_keys = [ "sudo", "sudo_user", "when", "with_items" ] @@ -374,6 +406,7 @@ class Play(object): else: role_name = role + role_names.append(role_name) if os.path.isfile(task): nt = dict(include=pipes.quote(task), vars=role_vars, default_vars=default_vars, role_name=role_name) for k in special_keys: @@ -420,6 +453,7 @@ class Play(object): 
ds['tasks'] = new_tasks ds['handlers'] = new_handlers ds['vars_files'] = new_vars_files + ds['role_names'] = role_names self.default_vars = self._load_role_defaults(defaults_files) @@ -434,6 +468,7 @@ class Play(object): os.path.join(basepath, 'main'), os.path.join(basepath, 'main.yml'), os.path.join(basepath, 'main.yaml'), + os.path.join(basepath, 'main.json'), ) if sum([os.path.isfile(x) for x in mains]) > 1: raise errors.AnsibleError("found multiple main files at %s, only one allowed" % (basepath)) @@ -498,7 +533,11 @@ class Play(object): include_vars = {} for k in x: if k.startswith("with_"): - utils.deprecated("include + with_items is a removed deprecated feature", "1.5", removed=True) + if original_file: + offender = " (in %s)" % original_file + else: + offender = "" + utils.deprecated("include + with_items is a removed deprecated feature" + offender, "1.5", removed=True) elif k.startswith("when_"): utils.deprecated("\"when_:\" is a removed deprecated feature, use the simplified 'when:' conditional directly", None, removed=True) elif k == 'when': @@ -545,9 +584,9 @@ class Play(object): include_filename = utils.path_dwim(dirname, include_file) data = utils.parse_yaml_from_file(include_filename, vault_password=self.vault_password) if 'role_name' in x and data is not None: - for x in data: - if 'include' in x: - x['role_name'] = new_role + for y in data: + if isinstance(y, dict) and 'include' in y: + y['role_name'] = new_role loaded = self._load_tasks(data, mv, default_vars, included_sudo_vars, list(included_additional_conditions), original_file=include_filename, role_name=new_role) results += loaded elif type(x) == dict: @@ -671,11 +710,15 @@ class Play(object): unmatched_tags: tags that were found within the current play but do not match any provided by the user ''' - # gather all the tags in all the tasks into one list + # gather all the tags in all the tasks and handlers into one list + # FIXME: isn't this in self.tags already? 
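+        # illustrative contract (values assumed): with --tags "deploy" and
+        # tasks/handlers tagged ['deploy', 'config'], the matched set is
+        # {'deploy'} and the unmatched set is {'config'}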
+        all_tags = []
         for task in self._tasks:
             if not task.meta:
                 all_tags.extend(task.tags)
+        for handler in self._handlers:
+            all_tags.extend(handler.tags)

         # compare the lists of tags using sets and return the matched and unmatched
         all_tags_set = set(all_tags)
@@ -687,50 +730,113 @@

     # *************************************************

+    def _late_merge_role_tags(self):
+        # build a local dict of tags for roles
+        role_tags = {}
+        for task in self._ds['tasks']:
+            if 'role_name' in task:
+                this_role = task['role_name'] + "-" + task['vars']['role_uuid']
+
+                if this_role not in role_tags:
+                    role_tags[this_role] = []
+
+                if 'tags' in task['vars']:
+                    if isinstance(task['vars']['tags'], basestring):
+                        role_tags[this_role] += shlex.split(task['vars']['tags'])
+                    else:
+                        role_tags[this_role] += task['vars']['tags']
+
+        # apply each role's tags to its tasks
+        for idx, val in enumerate(self._tasks):
+            if getattr(val, 'role_name', None) is not None:
+                this_role = val.role_name + "-" + val.module_vars['role_uuid']
+                if this_role in role_tags:
+                    self._tasks[idx].tags = sorted(set(self._tasks[idx].tags + role_tags[this_role]))
+
+    # *************************************************

     def _has_vars_in(self, msg):
-        return ((msg.find("$") != -1) or (msg.find("{{") != -1))
+        return "$" in msg or "{{" in msg

     # *************************************************

     def _update_vars_files_for_host(self, host, vault_password=None):

+        def generate_filenames(host, inject, filename):
+
+            """ Render the raw filename into 3 forms """
+
+            filename2 = template(self.basedir, filename, self.vars)
+            filename3 = filename2
+            if host is not None:
+                filename3 = template(self.basedir, filename2, inject)
+            if self._has_vars_in(filename3) and host is not None:
+                # allow play scoped vars and host scoped vars to template the filepath
+                inject.update(self.vars)
+                filename4 = template(self.basedir, filename3, inject)
+                filename4 = utils.path_dwim(self.basedir, filename4)
+            else:
+                filename4 = utils.path_dwim(self.basedir, filename3)
+            return filename2, filename3, filename4
+
+        def update_vars_cache(host, inject, data, filename):
+
+            """ update a host's vars cache with new var data """
+
+            data = utils.combine_vars(inject, data)
+            self.playbook.VARS_CACHE[host].update(data)
+            self.playbook.callbacks.on_import_for_host(host, filename)
+
+        def process_files(filename, filename2, filename3, filename4, host=None):
+
+            """ decide whether vars from a loaded file belong in the per-host
+                vars cache or in the play's vars """
+
+            data = utils.parse_yaml_from_file(filename4, vault_password=self.vault_password)
+            if data:
+                if type(data) != dict:
+                    raise errors.AnsibleError("%s must be stored as a dictionary/hash" % filename4)
+                if host is not None:
+                    if self._has_vars_in(filename2) and not self._has_vars_in(filename3):
+                        # running a host specific pass and has host specific variables
+                        # load into the per-host vars cache
+                        update_vars_cache(host, inject, data, filename4)
+                    elif self._has_vars_in(filename3) and not self._has_vars_in(filename4):
+                        # handle mixed scope variables in filepath
+                        update_vars_cache(host, inject, data, filename4)

+                elif not self._has_vars_in(filename4):
+                    # found a non-host specific variable, load into the play's
+                    # vars and NOT the per-host vars cache (host is None here)
+                    self.vars = utils.combine_vars(self.vars, data)
+
+        # Enforce that vars_files is always a list
         if type(self.vars_files) != list:
             self.vars_files = [ self.vars_files ]

+        # Build an inject if this is a host-specific pass started by
+        # self.update_vars_files
         if host is not None:
             inject = {}
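+            # precedence builds up through the update() calls below: inventory
+            # variables first, then gathered facts (SETUP_CACHE), then all
+            # other host variables (VARS_CACHE) -- later updates win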
inject.update(self.playbook.inventory.get_variables(host, vault_password=vault_password)) - inject.update(self.playbook.SETUP_CACHE[host]) + inject.update(self.playbook.SETUP_CACHE.get(host, {})) + inject.update(self.playbook.VARS_CACHE.get(host, {})) + else: + inject = None for filename in self.vars_files: - if type(filename) == list: - - # loop over all filenames, loading the first one, and failing if # none found + # loop over all filenames, loading the first one, and failing if none found found = False sequence = [] for real_filename in filename: - filename2 = template(self.basedir, real_filename, self.vars) - filename3 = filename2 - if host is not None: - filename3 = template(self.basedir, filename2, inject) - filename4 = utils.path_dwim(self.basedir, filename3) + filename2, filename3, filename4 = generate_filenames(host, inject, real_filename) sequence.append(filename4) if os.path.exists(filename4): found = True - data = utils.parse_yaml_from_file(filename4, vault_password=self.vault_password) - if type(data) != dict: - raise errors.AnsibleError("%s must be stored as a dictionary/hash" % filename4) - if host is not None: - if self._has_vars_in(filename2) and not self._has_vars_in(filename3): - # this filename has variables in it that were fact specific - # so it needs to be loaded into the per host SETUP_CACHE - self.playbook.SETUP_CACHE[host].update(data) - self.playbook.callbacks.on_import_for_host(host, filename4) - elif not self._has_vars_in(filename4): - # found a non-host specific variable, load into vars and NOT - # the setup cache - self.vars.update(data) + process_files(filename, filename2, filename3, filename4, host=host) elif host is not None: self.playbook.callbacks.on_not_import_for_host(host, filename4) if found: @@ -742,24 +848,11 @@ class Play(object): else: # just one filename supplied, load it! 
- - filename2 = template(self.basedir, filename, self.vars) - filename3 = filename2 - if host is not None: - filename3 = template(self.basedir, filename2, inject) - filename4 = utils.path_dwim(self.basedir, filename3) + filename2, filename3, filename4 = generate_filenames(host, inject, filename) if self._has_vars_in(filename4): continue - new_vars = utils.parse_yaml_from_file(filename4, vault_password=self.vault_password) - if new_vars: - if type(new_vars) != dict: - raise errors.AnsibleError("%s must be stored as dictionary/hash: %s" % (filename4, type(new_vars))) - if host is not None and self._has_vars_in(filename2) and not self._has_vars_in(filename3): - # running a host specific pass and has host specific variables - # load into setup cache - self.playbook.SETUP_CACHE[host] = utils.combine_vars( - self.playbook.SETUP_CACHE[host], new_vars) - self.playbook.callbacks.on_import_for_host(host, filename4) - elif host is None: - # running a non-host specific pass and we can update the global vars instead - self.vars = utils.combine_vars(self.vars, new_vars) + process_files(filename, filename2, filename3, filename4, host=host) + + # finally, update the VARS_CACHE for the host, if it is set + if host is not None: + self.playbook.VARS_CACHE[host].update(self.playbook.extra_vars) diff --git a/lib/ansible/playbook/task.py b/lib/ansible/playbook/task.py index 99e99d4ba18..dd76c47a052 100644 --- a/lib/ansible/playbook/task.py +++ b/lib/ansible/playbook/task.py @@ -85,7 +85,7 @@ class Task(object): elif x.startswith("with_"): if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"): - utils.warning("It is unneccessary to use '{{' in loops, leave variables in loop expressions bare.") + utils.warning("It is unnecessary to use '{{' in loops, leave variables in loop expressions bare.") plugin_name = x.replace("with_","") if plugin_name in utils.plugins.lookup_loader: @@ -97,7 +97,7 @@ class Task(object): elif x in [ 'changed_when', 'failed_when', 'when']: if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"): - utils.warning("It is unneccessary to use '{{' in conditionals, leave variables in loop expressions bare.") + utils.warning("It is unnecessary to use '{{' in conditionals, leave variables in loop expressions bare.") elif x.startswith("when_"): utils.deprecated("The 'when_' conditional has been removed. 
Switch to using the regular unified 'when' statements as described on docs.ansible.com.","1.5", removed=True) @@ -206,8 +206,12 @@ class Task(object): self.changed_when = ds.get('changed_when', None) self.failed_when = ds.get('failed_when', None) - self.async_seconds = int(ds.get('async', 0)) # not async by default - self.async_poll_interval = int(ds.get('poll', 10)) # default poll = 10 seconds + self.async_seconds = ds.get('async', 0) # not async by default + self.async_seconds = template.template_from_string(play.basedir, self.async_seconds, self.module_vars) + self.async_seconds = int(self.async_seconds) + self.async_poll_interval = ds.get('poll', 10) # default poll = 10 seconds + self.async_poll_interval = template.template_from_string(play.basedir, self.async_poll_interval, self.module_vars) + self.async_poll_interval = int(self.async_poll_interval) self.notify = ds.get('notify', []) self.first_available_file = ds.get('first_available_file', None) diff --git a/lib/ansible/runner/__init__.py b/lib/ansible/runner/__init__.py index 7bbc9e372e1..432ee854793 100644 --- a/lib/ansible/runner/__init__.py +++ b/lib/ansible/runner/__init__.py @@ -28,10 +28,10 @@ import collections import socket import base64 import sys -import shlex import pipes import jinja2 import subprocess +import getpass import ansible.constants as C import ansible.inventory @@ -81,18 +81,19 @@ def _executor_hook(job_queue, result_queue, new_stdin): traceback.print_exc() class HostVars(dict): - ''' A special view of setup_cache that adds values from the inventory when needed. ''' + ''' A special view of vars_cache that adds values from the inventory when needed. ''' - def __init__(self, setup_cache, inventory): - self.setup_cache = setup_cache + def __init__(self, vars_cache, inventory, vault_password=None): + self.vars_cache = vars_cache self.inventory = inventory self.lookup = dict() - self.update(setup_cache) + self.update(vars_cache) + self.vault_password = vault_password def __getitem__(self, host): if host not in self.lookup: - result = self.inventory.get_variables(host) - result.update(self.setup_cache.get(host, {})) + result = self.inventory.get_variables(host, vault_password=self.vault_password) + result.update(self.vars_cache.get(host, {})) self.lookup[host] = result return self.lookup[host] @@ -118,6 +119,7 @@ class Runner(object): background=0, # async poll every X seconds, else 0 for non-async basedir=None, # directory of playbook, if applicable setup_cache=None, # used to share fact data w/ other tasks + vars_cache=None, # used to store variables about hosts transport=C.DEFAULT_TRANSPORT, # 'ssh', 'paramiko', 'local' conditional='True', # run only if this fact expression evals to true callbacks=None, # used for output @@ -155,6 +157,7 @@ class Runner(object): self.check = check self.diff = diff self.setup_cache = utils.default(setup_cache, lambda: collections.defaultdict(dict)) + self.vars_cache = utils.default(vars_cache, lambda: collections.defaultdict(dict)) self.basedir = utils.default(basedir, lambda: os.getcwd()) self.callbacks = utils.default(callbacks, lambda: DefaultRunnerCallbacks()) self.generated_jid = str(random.randint(0, 999999999999)) @@ -243,7 +246,7 @@ class Runner(object): """ if complex_args is None: return module_args - if type(complex_args) != dict: + if not isinstance(complex_args, dict): raise errors.AnsibleError("complex arguments are not a dictionary: %s" % complex_args) for (k,v) in complex_args.iteritems(): if isinstance(v, basestring): @@ -292,7 +295,7 @@ class Runner(object): 
raise errors.AnsibleError("environment must be a dictionary, received %s" % enviro)
         result = ""
         for (k,v) in enviro.iteritems():
-            result = "%s=%s %s" % (k, pipes.quote(str(v)), result)
+            result = "%s=%s %s" % (k, pipes.quote(unicode(v)), result)
         return result

     # *****************************************************

@@ -415,7 +418,7 @@ class Runner(object):

         environment_string = self._compute_environment_string(inject)

-        if tmp.find("tmp") != -1 and (self.sudo or self.su) and (self.sudo_user != 'root' or self.su_user != 'root'):
+        if "tmp" in tmp and ((self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root')):
             # deal with possible umask issues once sudo'ed to other user
             cmd_chmod = "chmod a+r %s" % remote_module_path
             self._low_level_exec_command(conn, cmd_chmod, tmp, sudoable=False)
@@ -444,7 +447,7 @@ class Runner(object):
         else:
             argsfile = self._transfer_str(conn, tmp, 'arguments', args)

-        if (self.sudo or self.su) and (self.sudo_user != 'root' or self.su_user != 'root'):
+        if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
             # deal with possible umask issues once sudo'ed to other user
             cmd_args_chmod = "chmod a+r %s" % argsfile
             self._low_level_exec_command(conn, cmd_args_chmod, tmp, sudoable=False)
@@ -469,7 +472,7 @@ class Runner(object):
             cmd = " ".join([environment_string.strip(), shebang.replace("#!","").strip(), cmd])
             cmd = cmd.strip()

-        if tmp.find("tmp") != -1 and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
+        if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
             if not self.sudo or self.su or self.sudo_user == 'root' or self.su_user == 'root':
                 # not sudoing or sudoing to root, so can cleanup files in the same step
                 cmd = cmd + "; rm -rf %s >/dev/null 2>&1" % tmp
@@ -485,8 +488,8 @@ class Runner(object):
         else:
             res = self._low_level_exec_command(conn, cmd, tmp, sudoable=sudoable, in_data=in_data)

-        if tmp.find("tmp") != -1 and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
-            if (self.sudo or self.su) and (self.sudo_user != 'root' or self.su_user != 'root'):
+        if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
+            if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
                 # not sudoing to root, so maybe can't delete files as that other user
                 # have to clean up temp files as original user in a second step
                 cmd2 = "rm -rf %s >/dev/null 2>&1" % tmp
@@ -508,10 +511,15 @@ class Runner(object):
         fileno = None

         try:
+            self._new_stdin = new_stdin
             if not new_stdin and fileno is not None:
-                self._new_stdin = os.fdopen(os.dup(fileno))
-            else:
-                self._new_stdin = new_stdin
+                try:
+                    self._new_stdin = os.fdopen(os.dup(fileno))
+                except OSError, e:
+                    # couldn't dupe stdin, most likely because it's
+                    # not a valid file descriptor, so we just rely on
+                    # using the one that was passed in
+                    pass

             exec_rc = self._executor_internal(host, new_stdin)
             if type(exec_rc) != ReturnData:
@@ -544,13 +552,21 @@ class Runner(object):
         # fireball, local, etc
         port = self.remote_port

+        # merge the VARS and SETUP caches for this host; build a fresh dict
+        # for the host's entry so the shared SETUP_CACHE is not mutated in
+        # place (and the VARS_CACHE data is kept even for hosts with no facts)
+        combined_cache = self.setup_cache.copy()
+        combined_cache[host] = utils.combine_vars(
+            combined_cache.get(host, {}), self.vars_cache.get(host, {}))
+
+        # use combined_cache and host_variables to template the module_vars
+        module_vars_inject = utils.combine_vars(combined_cache.get(host, {}), host_variables)
+        module_vars = template.template(self.basedir, self.module_vars, module_vars_inject)
+
         inject = {}
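+        # inject is layered from lowest to highest precedence below: role
+        # default vars, then host/inventory variables, then the templated
+        # module vars, then the merged fact/vars cache for this host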
inject = utils.combine_vars(inject, self.default_vars) inject = utils.combine_vars(inject, host_variables) - inject = utils.combine_vars(inject, self.module_vars) - inject = utils.combine_vars(inject, self.setup_cache[host]) + inject = utils.combine_vars(inject, module_vars) + inject = utils.combine_vars(inject, combined_cache.get(host, {})) inject.setdefault('ansible_ssh_user', self.remote_user) - inject['hostvars'] = HostVars(self.setup_cache, self.inventory) + inject['hostvars'] = HostVars(combined_cache, self.inventory, vault_password=self.vault_pass) inject['group_names'] = host_variables.get('group_names', []) inject['groups'] = self.inventory.groups_list() inject['vars'] = self.module_vars @@ -612,7 +628,6 @@ class Runner(object): if self.background > 0: raise errors.AnsibleError("lookup plugins (with_*) cannot be used with async tasks") - aggregrate = {} all_comm_ok = True all_changed = False all_failed = False @@ -711,10 +726,18 @@ class Runner(object): actual_transport = inject.get('ansible_connection', self.transport) actual_private_key_file = inject.get('ansible_ssh_private_key_file', self.private_key_file) actual_private_key_file = template.template(self.basedir, actual_private_key_file, inject, fail_on_undefined=True) + self.sudo = utils.boolean(inject.get('ansible_sudo', self.sudo)) + self.sudo_user = inject.get('ansible_sudo_user', self.sudo_user) self.sudo_pass = inject.get('ansible_sudo_pass', self.sudo_pass) self.su = inject.get('ansible_su', self.su) self.su_pass = inject.get('ansible_su_pass', self.su_pass) + # select default root user in case self.sudo requested + # but no user specified; happens e.g. in host vars when + # just ansible_sudo=True is specified + if self.sudo and self.sudo_user is None: + self.sudo_user = 'root' + if actual_private_key_file is not None: actual_private_key_file = os.path.expanduser(actual_private_key_file) @@ -750,6 +773,7 @@ class Runner(object): # user/pass may still contain variables at this stage actual_user = template.template(self.basedir, actual_user, inject) actual_pass = template.template(self.basedir, actual_pass, inject) + self.sudo_pass = template.template(self.basedir, self.sudo_pass, inject) # make actual_user available as __magic__ ansible_ssh_user variable inject['ansible_ssh_user'] = actual_user @@ -842,22 +866,25 @@ class Runner(object): changed_when = self.module_vars.get('changed_when') failed_when = self.module_vars.get('failed_when') - if changed_when is not None or failed_when is not None: + if (changed_when is not None or failed_when is not None) and self.background == 0: register = self.module_vars.get('register') - if register is not None: + if register is not None: if 'stdout' in data: data['stdout_lines'] = data['stdout'].splitlines() inject[register] = data - if changed_when is not None: - data['changed'] = utils.check_conditional(changed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars) - if failed_when is not None: - data['failed_when_result'] = data['failed'] = utils.check_conditional(failed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars) + # only run the final checks if the async_status has finished, + # or if we're not running an async_status check at all + if (module_name == 'async_status' and "finished" in data) or module_name != 'async_status': + if changed_when is not None and 'skipped' not in data: + data['changed'] = utils.check_conditional(changed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars) + if failed_when is not 
None: + data['failed_when_result'] = data['failed'] = utils.check_conditional(failed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars) if is_chained: # no callbacks return result if 'skipped' in data: - self.callbacks.on_skipped(host) + self.callbacks.on_skipped(host, inject.get('item',None)) elif not result.is_successful(): ignore_errors = self.module_vars.get('ignore_errors', False) self.callbacks.on_failed(host, data, ignore_errors) @@ -875,7 +902,7 @@ class Runner(object): return False def _late_needs_tmp_path(self, conn, tmp, module_style): - if tmp.find("tmp") != -1: + if "tmp" in tmp: # tmp has already been created return False if not conn.has_pipelining or not C.ANSIBLE_SSH_PIPELINING or C.DEFAULT_KEEP_REMOTE_FILES or self.su: @@ -908,6 +935,12 @@ class Runner(object): if conn.user == sudo_user or conn.user == su_user: sudoable = False su = False + else: + # assume connection type is local if no user attribute + this_user = getpass.getuser() + if this_user == sudo_user or this_user == su_user: + sudoable = False + su = False if su: rc, stdin, stdout, stderr = conn.exec_command(cmd, @@ -986,11 +1019,11 @@ class Runner(object): basefile = 'ansible-tmp-%s-%s' % (time.time(), random.randint(0, 2**48)) basetmp = os.path.join(C.DEFAULT_REMOTE_TMP, basefile) - if (self.sudo or self.su) and (self.sudo_user != 'root' or self.su != 'root') and basetmp.startswith('$HOME'): + if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root') and basetmp.startswith('$HOME'): basetmp = os.path.join('/tmp', basefile) cmd = 'mkdir -p %s' % basetmp - if self.remote_user != 'root' or ((self.sudo or self.su) and (self.sudo_user != 'root' or self.su != 'root')): + if self.remote_user != 'root' or ((self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root')): cmd += ' && chmod a+rx %s' % basetmp cmd += ' && echo %s' % basetmp @@ -1075,9 +1108,22 @@ class Runner(object): job_queue.put(host) result_queue = manager.Queue() + try: + fileno = sys.stdin.fileno() + except ValueError: + fileno = None + workers = [] for i in range(self.forks): - new_stdin = os.fdopen(os.dup(sys.stdin.fileno())) + new_stdin = None + if fileno is not None: + try: + new_stdin = os.fdopen(os.dup(fileno)) + except OSError, e: + # couldn't dupe stdin, most likely because it's + # not a valid file descriptor, so we just rely on + # using the one that was passed in + pass prc = multiprocessing.Process(target=_executor_hook, args=(job_queue, result_queue, new_stdin)) prc.start() diff --git a/lib/ansible/runner/action_plugins/assemble.py b/lib/ansible/runner/action_plugins/assemble.py index eb6faf5dfcf..741053f4cf0 100644 --- a/lib/ansible/runner/action_plugins/assemble.py +++ b/lib/ansible/runner/action_plugins/assemble.py @@ -31,18 +31,43 @@ class ActionModule(object): def __init__(self, runner): self.runner = runner - def _assemble_from_fragments(self, src_path, delimiter=None): + def _assemble_from_fragments(self, src_path, delimiter=None, compiled_regexp=None): ''' assemble a file from a directory of fragments ''' tmpfd, temp_path = tempfile.mkstemp() tmp = os.fdopen(tmpfd,'w') delimit_me = False + add_newline = False + for f in sorted(os.listdir(src_path)): + if compiled_regexp and not compiled_regexp.search(f): + continue fragment = "%s/%s" % (src_path, f) - if delimit_me and delimiter: - tmp.write(delimiter) - if os.path.isfile(fragment): - tmp.write(file(fragment).read()) + if not os.path.isfile(fragment): + continue + fragment_content = file(fragment).read() + + # 
always put a newline between fragments if the previous fragment didn't end with a newline. + if add_newline: + tmp.write('\n') + + # delimiters should only appear between fragments + if delimit_me: + if delimiter: + # un-escape anything like newlines + delimiter = delimiter.decode('unicode-escape') + tmp.write(delimiter) + # always make sure there's a newline after the + # delimiter, so lines don't run together + if delimiter[-1] != '\n': + tmp.write('\n') + + tmp.write(fragment_content) delimit_me = True + if fragment_content.endswith('\n'): + add_newline = False + else: + add_newline = True + tmp.close() return temp_path @@ -52,6 +77,7 @@ class ActionModule(object): options = {} if complex_args: options.update(complex_args) + options.update(utils.parse_kv(module_args)) src = options.get('src', None) @@ -59,6 +85,7 @@ class ActionModule(object): delimiter = options.get('delimiter', None) remote_src = utils.boolean(options.get('remote_src', 'yes')) + if src is None or dest is None: result = dict(failed=True, msg="src and dest are required") return ReturnData(conn=conn, comm_ok=False, result=result) diff --git a/lib/ansible/runner/action_plugins/async.py b/lib/ansible/runner/action_plugins/async.py index 12fe279a471..ac0d6e84928 100644 --- a/lib/ansible/runner/action_plugins/async.py +++ b/lib/ansible/runner/action_plugins/async.py @@ -33,7 +33,7 @@ class ActionModule(object): module_name = 'command' module_args += " #USE_SHELL" - if tmp.find("tmp") == -1: + if "tmp" not in tmp: tmp = self.runner._make_tmp_path(conn) (module_path, is_new_style, shebang) = self.runner._copy_module(conn, tmp, module_name, module_args, inject, complex_args=complex_args) diff --git a/lib/ansible/runner/action_plugins/copy.py b/lib/ansible/runner/action_plugins/copy.py index 0ee9b6f3ced..f8063862cc4 100644 --- a/lib/ansible/runner/action_plugins/copy.py +++ b/lib/ansible/runner/action_plugins/copy.py @@ -54,6 +54,16 @@ class ActionModule(object): raw = utils.boolean(options.get('raw', 'no')) force = utils.boolean(options.get('force', 'yes')) + # content with newlines is going to be escaped to safely load in yaml + # now we need to unescape it so that the newlines are evaluated properly + # when writing the file to disk + if content: + if isinstance(content, unicode): + try: + content = content.decode('unicode-escape') + except UnicodeDecodeError: + pass + if (source is None and content is None and not 'first_available_file' in inject) or dest is None: result=dict(failed=True, msg="src (or content) and dest are required") return ReturnData(conn=conn, result=result) @@ -325,7 +335,7 @@ class ActionModule(object): src = open(source) src_contents = src.read(8192) st = os.stat(source) - if src_contents.find("\x00") != -1: + if "\x00" in src_contents: diff['src_binary'] = 1 elif st[stat.ST_SIZE] > utils.MAX_FILE_SIZE_FOR_DIFF: diff['src_larger'] = utils.MAX_FILE_SIZE_FOR_DIFF diff --git a/lib/ansible/runner/action_plugins/group_by.py b/lib/ansible/runner/action_plugins/group_by.py index f8b4f318db2..4d6205ca60c 100644 --- a/lib/ansible/runner/action_plugins/group_by.py +++ b/lib/ansible/runner/action_plugins/group_by.py @@ -83,7 +83,8 @@ class ActionModule(object): inv_group = ansible.inventory.Group(name=group) inventory.add_group(inv_group) for host in hosts: - del self.runner.inventory._vars_per_host[host] + if host in self.runner.inventory._vars_per_host: + del self.runner.inventory._vars_per_host[host] inv_host = inventory.get_host(host) if not inv_host: inv_host = ansible.inventory.Host(name=host) diff --git 
a/lib/ansible/runner/action_plugins/pause.py b/lib/ansible/runner/action_plugins/pause.py index 8aaa87f454e..c6a06dcd7cd 100644 --- a/lib/ansible/runner/action_plugins/pause.py +++ b/lib/ansible/runner/action_plugins/pause.py @@ -77,11 +77,11 @@ class ActionModule(object): # Is 'prompt' a key in 'args'? elif 'prompt' in args: self.pause_type = 'prompt' - self.prompt = "[%s]\n%s: " % (hosts, args['prompt']) + self.prompt = "[%s]\n%s:\n" % (hosts, args['prompt']) # Is 'args' empty, then this is the default prompted pause elif len(args.keys()) == 0: self.pause_type = 'prompt' - self.prompt = "[%s]\nPress enter to continue: " % hosts + self.prompt = "[%s]\nPress enter to continue:\n" % hosts # I have no idea what you're trying to do. But it's so wrong. else: raise ae("invalid pause type given. must be one of: %s" % \ diff --git a/lib/ansible/runner/action_plugins/script.py b/lib/ansible/runner/action_plugins/script.py index 149be3cc113..f50e2b08d6f 100644 --- a/lib/ansible/runner/action_plugins/script.py +++ b/lib/ansible/runner/action_plugins/script.py @@ -128,7 +128,7 @@ class ActionModule(object): result = handler.run(conn, tmp, 'raw', module_args, inject) # clean up after - if tmp.find("tmp") != -1 and not C.DEFAULT_KEEP_REMOTE_FILES: + if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES: self.runner._low_level_exec_command(conn, 'rm -rf %s >/dev/null 2>&1' % tmp, tmp) result.result['changed'] = True diff --git a/lib/ansible/runner/action_plugins/synchronize.py b/lib/ansible/runner/action_plugins/synchronize.py index d7c9113f28e..42432d4fcb1 100644 --- a/lib/ansible/runner/action_plugins/synchronize.py +++ b/lib/ansible/runner/action_plugins/synchronize.py @@ -26,26 +26,54 @@ class ActionModule(object): def __init__(self, runner): self.runner = runner + self.inject = None + + def _get_absolute_path(self, path=None): + if 'vars' in self.inject: + if '_original_file' in self.inject['vars']: + # roles + path = utils.path_dwim_relative(self.inject['_original_file'], 'files', path, self.runner.basedir) + elif 'inventory_dir' in self.inject['vars']: + # non-roles + abs_dir = os.path.abspath(self.inject['vars']['inventory_dir']) + path = os.path.join(abs_dir, path) + + return path def _process_origin(self, host, path, user): if not host in ['127.0.0.1', 'localhost']: - return '%s@%s:%s' % (user, host, path) + if user: + return '%s@%s:%s' % (user, host, path) + else: + return '%s:%s' % (host, path) else: + if not ':' in path: + if not path.startswith('/'): + path = self._get_absolute_path(path=path) return path def _process_remote(self, host, path, user): transport = self.runner.transport return_data = None if not host in ['127.0.0.1', 'localhost'] or transport != "local": - return_data = '%s@%s:%s' % (user, host, path) + if user: + return_data = '%s@%s:%s' % (user, host, path) + else: + return_data = '%s:%s' % (host, path) else: return_data = path + if not ':' in return_data: + if not return_data.startswith('/'): + return_data = self._get_absolute_path(path=return_data) + return return_data def setup(self, module_name, inject): ''' Always default to localhost as delegate if None defined ''' + + self.inject = inject # Store original transport and sudo values. 
self.original_transport = inject.get('ansible_connection', self.runner.transport) @@ -65,6 +93,8 @@ class ActionModule(object): ''' generates params and passes them on to the rsync module ''' + self.inject = inject + # load up options options = {} if complex_args: @@ -122,13 +152,14 @@ class ActionModule(object): if process_args or use_delegate: user = None - if use_delegate: - user = inject['hostvars'][conn.delegate].get('ansible_ssh_user') - - if not use_delegate or not user: - user = inject.get('ansible_ssh_user', - self.runner.remote_user) + if utils.boolean(options.get('set_remote_user', 'yes')): + if use_delegate: + user = inject['hostvars'][conn.delegate].get('ansible_ssh_user') + if not use_delegate or not user: + user = inject.get('ansible_ssh_user', + self.runner.remote_user) + if use_delegate: # FIXME private_key = inject.get('ansible_ssh_private_key_file', self.runner.private_key_file) @@ -167,12 +198,15 @@ class ActionModule(object): if rsync_path: options['rsync_path'] = '"' + rsync_path + '"' - module_items = ' '.join(['%s=%s' % (k, v) for (k, - v) in options.items()]) - + module_args = "" if self.runner.noop_on_check(inject): - module_items += " CHECKMODE=True" + module_args = "CHECKMODE=True" + + # run the module and store the result + result = self.runner._execute_module(conn, tmp, 'synchronize', module_args, complex_args=options, inject=inject) + + # reset the sudo property + self.runner.sudo = self.original_sudo - return self.runner._execute_module(conn, tmp, 'synchronize', - module_items, inject=inject) + return result diff --git a/lib/ansible/runner/action_plugins/template.py b/lib/ansible/runner/action_plugins/template.py index b34c14ec6a5..96d8f97a3aa 100644 --- a/lib/ansible/runner/action_plugins/template.py +++ b/lib/ansible/runner/action_plugins/template.py @@ -85,7 +85,7 @@ class ActionModule(object): # template the source data locally & get ready to transfer try: - resultant = template.template_from_file(self.runner.basedir, source, inject) + resultant = template.template_from_file(self.runner.basedir, source, inject, vault_password=self.runner.vault_pass) except Exception, e: result = dict(failed=True, msg=str(e)) return ReturnData(conn=conn, comm_ok=False, result=result) @@ -123,7 +123,8 @@ class ActionModule(object): return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=dict(before_header=dest, after_header=source, before=dest_contents, after=resultant)) else: res = self.runner._execute_module(conn, tmp, 'copy', module_args, inject=inject, complex_args=complex_args) - res.diff = dict(before=dest_contents, after=resultant) + if res.result.get('changed', False): + res.diff = dict(before=dest_contents, after=resultant) return res else: return self.runner._execute_module(conn, tmp, 'file', module_args, inject=inject, complex_args=complex_args) diff --git a/lib/ansible/runner/connection_plugins/accelerate.py b/lib/ansible/runner/connection_plugins/accelerate.py index 60c1319262a..3f35a325484 100644 --- a/lib/ansible/runner/connection_plugins/accelerate.py +++ b/lib/ansible/runner/connection_plugins/accelerate.py @@ -22,10 +22,10 @@ import socket import struct import time from ansible.callbacks import vvv, vvvv +from ansible.errors import AnsibleError, AnsibleFileNotFound from ansible.runner.connection_plugins.ssh import Connection as SSHConnection from ansible.runner.connection_plugins.paramiko_ssh import Connection as ParamikoConnection from ansible import utils -from ansible import errors from ansible import constants # the chunk size to 
read and send, assuming mtu 1500 and @@ -85,7 +85,15 @@ class Connection(object): utils.AES_KEYS = self.runner.aes_keys def _execute_accelerate_module(self): - args = "password=%s port=%s debug=%d ipv6=%s" % (base64.b64encode(self.key.__str__()), str(self.accport), int(utils.VERBOSITY), self.runner.accelerate_ipv6) + args = "password=%s port=%s minutes=%d debug=%d ipv6=%s" % ( + base64.b64encode(self.key.__str__()), + str(self.accport), + constants.ACCELERATE_DAEMON_TIMEOUT, + int(utils.VERBOSITY), + self.runner.accelerate_ipv6, + ) + if constants.ACCELERATE_MULTI_KEY: + args += " multi_key=yes" inject = dict(password=self.key) if getattr(self.runner, 'accelerate_inventory_host', False): inject = utils.combine_vars(inject, self.runner.inventory.get_variables(self.runner.accelerate_inventory_host)) @@ -109,33 +117,38 @@ class Connection(object): while tries > 0: try: self.conn.connect((self.host,self.accport)) - if not self.validate_user(): - # the accelerated daemon was started with a - # different remote_user. The above command - # should have caused the accelerate daemon to - # shutdown, so we'll reconnect. - wrong_user = True break - except: - vvvv("failed, retrying...") + except socket.error: + vvvv("connection to %s failed, retrying..." % self.host) time.sleep(0.1) tries -= 1 if tries == 0: vvv("Could not connect via the accelerated connection, exceeded # of tries") - raise errors.AnsibleError("Failed to connect") + raise AnsibleError("FAILED") elif wrong_user: vvv("Restarting daemon with a different remote_user") - raise errors.AnsibleError("Wrong user") + raise AnsibleError("WRONG_USER") + self.conn.settimeout(constants.ACCELERATE_TIMEOUT) - except: + if not self.validate_user(): + # the accelerated daemon was started with a + # different remote_user. The above command + # should have caused the accelerate daemon to + # shutdown, so we'll reconnect. + wrong_user = True + + except AnsibleError, e: if allow_ssh: + if "WRONG_USER" in e: + vvv("Switching users, waiting for the daemon on %s to shutdown completely..." 
% self.host) + time.sleep(5) vvv("Falling back to ssh to startup accelerated mode") res = self._execute_accelerate_module() if not res.is_successful(): - raise errors.AnsibleError("Failed to launch the accelerated daemon on %s (reason: %s)" % (self.host,res.result.get('msg'))) + raise AnsibleError("Failed to launch the accelerated daemon on %s (reason: %s)" % (self.host,res.result.get('msg'))) return self.connect(allow_ssh=False) else: - raise errors.AnsibleError("Failed to connect to %s:%s" % (self.host,self.accport)) + raise AnsibleError("Failed to connect to %s:%s" % (self.host,self.accport)) self.is_connected = True return self @@ -163,11 +176,12 @@ class Connection(object): if not d: vvvv("%s: received nothing, bailing out" % self.host) return None + vvvv("%s: received %d bytes" % (self.host, len(d))) data += d vvvv("%s: received all of the data, returning" % self.host) return data except socket.timeout: - raise errors.AnsibleError("timed out while waiting to receive data") + raise AnsibleError("timed out while waiting to receive data") def validate_user(self): ''' @@ -176,6 +190,7 @@ class Connection(object): daemon to exit if they don't match ''' + vvvv("%s: sending request for validate_user" % self.host) data = dict( mode='validate_user', username=self.user, @@ -183,15 +198,16 @@ class Connection(object): data = utils.jsonify(data) data = utils.encrypt(self.key, data) if self.send_data(data): - raise errors.AnsibleError("Failed to send command to %s" % self.host) + raise AnsibleError("Failed to send command to %s" % self.host) + vvvv("%s: waiting for validate_user response" % self.host) while True: # we loop here while waiting for the response, because a # long running command may cause us to receive keepalive packets # ({"pong":"true"}) rather than the response we want. response = self.recv_data() if not response: - raise errors.AnsibleError("Failed to get a response from %s" % self.host) + raise AnsibleError("Failed to get a response from %s" % self.host) response = utils.decrypt(self.key, response) response = utils.parse_json(response) if "pong" in response: @@ -199,11 +215,11 @@ class Connection(object): vvvv("%s: received a keepalive packet" % self.host) continue else: - vvvv("%s: received the response" % self.host) + vvvv("%s: received the validate_user response: %s" % (self.host, response)) break if response.get('failed'): - raise errors.AnsibleError("Error while validating user: %s" % response.get("msg")) + return False else: return response.get('rc') == 0 @@ -211,10 +227,10 @@ class Connection(object): ''' run a command on the remote host ''' if su or su_user: - raise errors.AnsibleError("Internal Error: this module does not support running commands via su") + raise AnsibleError("Internal Error: this module does not support running commands via su") if in_data: - raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining") + raise AnsibleError("Internal Error: this module does not support optimized module pipelining") if executable == "": executable = constants.DEFAULT_EXECUTABLE @@ -233,7 +249,7 @@ class Connection(object): data = utils.jsonify(data) data = utils.encrypt(self.key, data) if self.send_data(data): - raise errors.AnsibleError("Failed to send command to %s" % self.host) + raise AnsibleError("Failed to send command to %s" % self.host) while True: # we loop here while waiting for the response, because a @@ -241,7 +257,7 @@ class Connection(object): # ({"pong":"true"}) rather than the response we want. 
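Both receive loops above have to tolerate keepalive packets that the accelerate daemon emits while a long-running command executes. The drain-until-real-response pattern in isolation; `recv` here is a hypothetical stand-in for the decrypted socket reads in the plugin:

```python
import json

def next_response(recv):
    # Skip keepalives ({"pong": ...}) until a real response arrives.
    while True:
        raw = recv()
        if not raw:
            raise IOError("connection closed while waiting for a response")
        response = json.loads(raw)
        if "pong" in response:
            continue  # daemon is only signalling that it is still alive
        return response

msgs = iter(['{"pong": "true"}', '{"rc": 0, "stdout": "ok"}'])
print(next_response(lambda: next(msgs)))  # {'rc': 0, 'stdout': 'ok'}
```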
response = self.recv_data() if not response: - raise errors.AnsibleError("Failed to get a response from %s" % self.host) + raise AnsibleError("Failed to get a response from %s" % self.host) response = utils.decrypt(self.key, response) response = utils.parse_json(response) if "pong" in response: @@ -260,7 +276,7 @@ class Connection(object): vvv("PUT %s TO %s" % (in_path, out_path), host=self.host) if not os.path.exists(in_path): - raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path) + raise AnsibleFileNotFound("file or module does not exist: %s" % in_path) fd = file(in_path, 'rb') fstat = os.stat(in_path) @@ -279,27 +295,27 @@ class Connection(object): data = utils.encrypt(self.key, data) if self.send_data(data): - raise errors.AnsibleError("failed to send the file to %s" % self.host) + raise AnsibleError("failed to send the file to %s" % self.host) response = self.recv_data() if not response: - raise errors.AnsibleError("Failed to get a response from %s" % self.host) + raise AnsibleError("Failed to get a response from %s" % self.host) response = utils.decrypt(self.key, response) response = utils.parse_json(response) if response.get('failed',False): - raise errors.AnsibleError("failed to put the file in the requested location") + raise AnsibleError("failed to put the file in the requested location") finally: fd.close() vvvv("waiting for final response after PUT") response = self.recv_data() if not response: - raise errors.AnsibleError("Failed to get a response from %s" % self.host) + raise AnsibleError("Failed to get a response from %s" % self.host) response = utils.decrypt(self.key, response) response = utils.parse_json(response) if response.get('failed',False): - raise errors.AnsibleError("failed to put the file in the requested location") + raise AnsibleError("failed to put the file in the requested location") def fetch_file(self, in_path, out_path): ''' save a remote file to the specified path ''' @@ -309,7 +325,7 @@ class Connection(object): data = utils.jsonify(data) data = utils.encrypt(self.key, data) if self.send_data(data): - raise errors.AnsibleError("failed to initiate the file fetch with %s" % self.host) + raise AnsibleError("failed to initiate the file fetch with %s" % self.host) fh = open(out_path, "w") try: @@ -317,11 +333,11 @@ class Connection(object): while True: response = self.recv_data() if not response: - raise errors.AnsibleError("Failed to get a response from %s" % self.host) + raise AnsibleError("Failed to get a response from %s" % self.host) response = utils.decrypt(self.key, response) response = utils.parse_json(response) if response.get('failed', False): - raise errors.AnsibleError("Error during file fetch, aborting") + raise AnsibleError("Error during file fetch, aborting") out = base64.b64decode(response['data']) fh.write(out) bytes += len(out) @@ -330,7 +346,7 @@ class Connection(object): data = utils.jsonify(dict()) data = utils.encrypt(self.key, data) if self.send_data(data): - raise errors.AnsibleError("failed to send ack during file fetch") + raise AnsibleError("failed to send ack during file fetch") if response.get('last', False): break finally: diff --git a/lib/ansible/runner/connection_plugins/libvirt_lxc.py b/lib/ansible/runner/connection_plugins/libvirt_lxc.py new file mode 100644 index 00000000000..b1de672ddd8 --- /dev/null +++ b/lib/ansible/runner/connection_plugins/libvirt_lxc.py @@ -0,0 +1,121 @@ +# Based on local.py (c) 2012, Michael DeHaan +# Based on chroot.py (c) 2013, Maykel Moya +# (c) 2013, Michael Scherer +# +# 
This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+import distutils.spawn
+import os
+import subprocess
+from ansible import errors
+from ansible.callbacks import vvv
+
+class Connection(object):
+    ''' Local lxc based connections '''
+
+    def _search_executable(self, executable):
+        cmd = distutils.spawn.find_executable(executable)
+        if not cmd:
+            raise errors.AnsibleError("%s command not found in PATH" % executable)
+        return cmd
+
+    def _check_domain(self, domain):
+        p = subprocess.Popen([self.cmd, '-q', '-c', 'lxc:///', 'dominfo', domain],
+                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+        p.communicate()
+        if p.returncode:
+            raise errors.AnsibleError("%s is not a lxc defined in libvirt" % domain)
+
+    def __init__(self, runner, host, port, *args, **kwargs):
+        self.lxc = host
+
+        self.cmd = self._search_executable('virsh')
+
+        self._check_domain(host)
+
+        self.runner = runner
+        self.host = host
+        # port is unused, since this is local
+        self.port = port
+
+    def connect(self, port=None):
+        ''' connect to the lxc; nothing to do here '''
+
+        vvv("THIS IS A LOCAL LXC DIR", host=self.lxc)
+
+        return self
+
+    def _generate_cmd(self, executable, cmd):
+        if executable:
+            local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', executable, '-c', cmd]
+        else:
+            local_cmd = '%s -q -c lxc:/// lxc-enter-namespace %s -- %s' % (self.cmd, self.lxc, cmd)
+        return local_cmd
+
+    def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh'):
+        ''' run a command in the lxc namespace '''
+
+        # We enter lxc as root so sudo stuff can be ignored
+        local_cmd = self._generate_cmd(executable, cmd)
+
+        vvv("EXEC %s" % (local_cmd), host=self.lxc)
+        p = subprocess.Popen(local_cmd, shell=isinstance(local_cmd, basestring),
+                             cwd=self.runner.basedir,
+                             stdin=subprocess.PIPE,
+                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+
+        stdout, stderr = p.communicate()
+        return (p.returncode, '', stdout, stderr)
+
+    def _normalize_path(self, path, prefix):
+        if not path.startswith(os.path.sep):
+            path = os.path.join(os.path.sep, path)
+        normpath = os.path.normpath(path)
+        return os.path.join(prefix, normpath[1:])
+
+    def put_file(self, in_path, out_path):
+        ''' transfer a file from local to lxc '''
+
+        out_path = self._normalize_path(out_path, '/')
+        vvv("PUT %s TO %s" % (in_path, out_path), host=self.lxc)
+
+        local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', '/bin/tee', out_path]
+        vvv("EXEC %s" % (local_cmd), host=self.lxc)
+
+        p = subprocess.Popen(local_cmd, cwd=self.runner.basedir,
+                             stdin=subprocess.PIPE,
+                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+        stdout, stderr = p.communicate(open(in_path,'rb').read())
+
+    def fetch_file(self, in_path, out_path):
+        ''' fetch a file from lxc to local '''
+
+        in_path = self._normalize_path(in_path, '/')
+        vvv("FETCH %s TO %s" % (in_path, out_path), host=self.lxc)
+
+        local_cmd = [self.cmd, '-q', '-c', 'lxc:///',
'lxc-enter-namespace', self.lxc, '--', '/bin/cat', in_path]
+        vvv("EXEC %s" % (local_cmd), host=self.lxc)
+
+        p = subprocess.Popen(local_cmd, cwd=self.runner.basedir,
+                             stdin=subprocess.PIPE,
+                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+        stdout, stderr = p.communicate()
+        open(out_path,'wb').write(stdout)
+
+
+    def close(self):
+        ''' terminate the connection; nothing to do here '''
+        pass
diff --git a/lib/ansible/runner/connection_plugins/ssh.py b/lib/ansible/runner/connection_plugins/ssh.py
index 22189caadf3..94fae31a0b4 100644
--- a/lib/ansible/runner/connection_plugins/ssh.py
+++ b/lib/ansible/runner/connection_plugins/ssh.py
@@ -68,9 +68,9 @@ class Connection(object):
         cp_in_use = False
         cp_path_set = False
         for arg in self.common_args:
-            if arg.find("ControlPersist") != -1:
+            if "ControlPersist" in arg:
                 cp_in_use = True
-            if arg.find("ControlPath") != -1:
+            if "ControlPath" in arg:
                 cp_path_set = True

         if cp_in_use and not cp_path_set:
@@ -98,6 +98,28 @@ class Connection(object):

         return self

+    def _run(self, cmd, indata):
+        if indata:
+            # do not use pseudo-pty
+            p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
+                                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+            stdin = p.stdin
+        else:
+            # try to use a pseudo-pty
+            try:
+                # Make sure stdin is a proper (pseudo) pty to avoid: tcgetattr errors
+                master, slave = pty.openpty()
+                p = subprocess.Popen(cmd, stdin=slave,
+                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+                stdin = os.fdopen(master, 'w', 0)
+                os.close(slave)
+            except:
+                p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
+                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+                stdin = p.stdin
+
+        return (p, stdin)
+
     def _password_cmd(self):
         if self.password:
             try:
@@ -116,6 +138,64 @@ class Connection(object):
                 os.write(self.wfd, "%s\n" % self.password)
                 os.close(self.wfd)

+    def _communicate(self, p, stdin, indata, su=False, sudoable=False, prompt=None):
+        fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
+        fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
+        # We can't use p.communicate here because the ControlMaster may have stdout open as well
+        stdout = ''
+        stderr = ''
+        rpipes = [p.stdout, p.stderr]
+        if indata:
+            try:
+                stdin.write(indata)
+                stdin.close()
+            except:
+                raise errors.AnsibleError('SSH Error: data could not be sent to the remote host.
Make sure this host can be reached over ssh') + # Read stdout/stderr from process + while True: + rfd, wfd, efd = select.select(rpipes, [], rpipes, 1) + + # fail early if the sudo/su password is wrong + if self.runner.sudo and sudoable and self.runner.sudo_pass: + incorrect_password = gettext.dgettext( + "sudo", "Sorry, try again.") + if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)): + raise errors.AnsibleError('Incorrect sudo password') + + if self.runner.su and su and self.runner.su_pass: + incorrect_password = gettext.dgettext( + "su", "Sorry") + if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)): + raise errors.AnsibleError('Incorrect su password') + + if p.stdout in rfd: + dat = os.read(p.stdout.fileno(), 9000) + stdout += dat + if dat == '': + rpipes.remove(p.stdout) + if p.stderr in rfd: + dat = os.read(p.stderr.fileno(), 9000) + stderr += dat + if dat == '': + rpipes.remove(p.stderr) + # only break out if no pipes are left to read or + # the pipes are completely read and + # the process is terminated + if (not rpipes or not rfd) and p.poll() is not None: + break + # No pipes are left to read but process is not yet terminated + # Only then it is safe to wait for the process to be finished + # NOTE: Actually p.poll() is always None here if rpipes is empty + elif not rpipes and p.poll() == None: + p.wait() + # The process is terminated. Since no pipes to read from are + # left, there is no need to call select() again. + break + # close stdin after process is terminated and stdout/stderr are read + # completely (see also issue #848) + stdin.close() + return (p.returncode, stdout, stderr) + def not_in_host_file(self, host): if 'USER' in os.environ: user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts") @@ -137,7 +217,7 @@ class Connection(object): data = host_fh.read() host_fh.close() for line in data.split("\n"): - if line is None or line.find(" ") == -1: + if line is None or " " not in line: continue tokens = line.split() if tokens[0].find(self.HASHED_KEY_MAGIC) == 0: @@ -157,7 +237,7 @@ class Connection(object): return False if (hfiles_not_found == len(host_file_list)): - print "previous known host file not found" + vvv("EXEC previous known host file not found for %s" % host) return True def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su_user=None, su=False): @@ -184,6 +264,7 @@ class Connection(object): sudocmd, prompt, success_key = utils.make_su_cmd(su_user, executable, cmd) ssh_cmd.append(sudocmd) elif not self.runner.sudo or not sudoable: + prompt = None if executable: ssh_cmd.append(executable + ' -c ' + pipes.quote(cmd)) else: @@ -203,24 +284,7 @@ class Connection(object): fcntl.lockf(self.runner.output_lockfile, fcntl.LOCK_EX) # create process - if in_data: - # do not use pseudo-pty - p = subprocess.Popen(ssh_cmd, stdin=subprocess.PIPE, - stdout=subprocess.PIPE, stderr=subprocess.PIPE) - stdin = p.stdin - else: - # try to use upseudo-pty - try: - # Make sure stdin is a proper (pseudo) pty to avoid: tcgetattr errors - master, slave = pty.openpty() - p = subprocess.Popen(ssh_cmd, stdin=slave, - stdout=subprocess.PIPE, stderr=subprocess.PIPE) - stdin = os.fdopen(master, 'w', 0) - os.close(slave) - except: - p = subprocess.Popen(ssh_cmd, stdin=subprocess.PIPE, - stdout=subprocess.PIPE, stderr=subprocess.PIPE) - stdin = p.stdin + (p, stdin) = self._run(ssh_cmd, in_data) self._send_password() @@ -269,62 +333,16 @@ class Connection(object): stdin.write(self.runner.sudo_pass + '\n') elif su: 
stdin.write(self.runner.su_pass + '\n') - fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK) - fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK) - # We can't use p.communicate here because the ControlMaster may have stdout open as well - stdout = '' - stderr = '' - rpipes = [p.stdout, p.stderr] - if in_data: - try: - stdin.write(in_data) - stdin.close() - except: - raise errors.AnsibleError('SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh') - while True: - rfd, wfd, efd = select.select(rpipes, [], rpipes, 1) - - # fail early if the sudo/su password is wrong - if self.runner.sudo and sudoable and self.runner.sudo_pass: - incorrect_password = gettext.dgettext( - "sudo", "Sorry, try again.") - if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)): - raise errors.AnsibleError('Incorrect sudo password') - if self.runner.su and su and self.runner.sudo_pass: - incorrect_password = gettext.dgettext( - "su", "Sorry") - if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)): - raise errors.AnsibleError('Incorrect su password') + (returncode, stdout, stderr) = self._communicate(p, stdin, in_data, su=su, sudoable=sudoable, prompt=prompt) - if p.stdout in rfd: - dat = os.read(p.stdout.fileno(), 9000) - stdout += dat - if dat == '': - rpipes.remove(p.stdout) - if p.stderr in rfd: - dat = os.read(p.stderr.fileno(), 9000) - stderr += dat - if dat == '': - rpipes.remove(p.stderr) - # only break out if we've emptied the pipes, or there is nothing to - # read from and the process has finished. - if (not rpipes or not rfd) and p.poll() is not None: - break - # Calling wait while there are still pipes to read can cause a lock - elif not rpipes and p.poll() == None: - p.wait() - # the process has finished and the pipes are empty, - # if we loop and do the select it waits all the timeout - break - stdin.close() # close stdin after we read from stdout (see also issue #848) - if C.HOST_KEY_CHECKING and not_in_host_file: # lock around the initial SSH connectivity so the user prompt about whether to add # the host to known hosts is not intermingled with multiprocess output. fcntl.lockf(self.runner.output_lockfile, fcntl.LOCK_UN) fcntl.lockf(self.runner.process_lockfile, fcntl.LOCK_UN) - controlpersisterror = stderr.find('Bad configuration option: ControlPersist') != -1 or stderr.find('unknown configuration option: ControlPersist') != -1 + controlpersisterror = 'Bad configuration option: ControlPersist' in stderr or \ + 'unknown configuration option: ControlPersist' in stderr if C.HOST_KEY_CHECKING: if ssh_cmd[0] == "sshpass" and p.returncode == 6: @@ -332,7 +350,7 @@ class Connection(object): if p.returncode != 0 and controlpersisterror: raise errors.AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" (or ansible_ssh_args in the config file) before running again') - if p.returncode == 255 and in_data: + if p.returncode == 255 and (in_data or self.runner.module_name == 'raw'): raise errors.AnsibleError('SSH Error: data could not be sent to the remote host. 
Make sure this host can be reached over ssh') return (p.returncode, '', stdout, stderr) @@ -356,12 +374,13 @@ class Connection(object): cmd += ["sftp"] + self.common_args + [host] indata = "put %s %s\n" % (pipes.quote(in_path), pipes.quote(out_path)) - p = subprocess.Popen(cmd, stdin=subprocess.PIPE, - stdout=subprocess.PIPE, stderr=subprocess.PIPE) + (p, stdin) = self._run(cmd, indata) + self._send_password() - stdout, stderr = p.communicate(indata) - if p.returncode != 0: + (returncode, stdout, stderr) = self._communicate(p, stdin, indata) + + if returncode != 0: raise errors.AnsibleError("failed to transfer file to %s:\n%s\n%s" % (out_path, stdout, stderr)) def fetch_file(self, in_path, out_path): diff --git a/lib/ansible/runner/filter_plugins/core.py b/lib/ansible/runner/filter_plugins/core.py index 9b9a9b5cf2b..8557a42c072 100644 --- a/lib/ansible/runner/filter_plugins/core.py +++ b/lib/ansible/runner/filter_plugins/core.py @@ -23,8 +23,11 @@ import types import pipes import glob import re +import operator as py_operator from ansible import errors from ansible.utils import md5s +from distutils.version import LooseVersion, StrictVersion +from random import SystemRandom def to_nice_yaml(*a, **kw): '''Make verbose, human readable yaml''' @@ -42,8 +45,6 @@ def failed(*a, **kw): ''' Test if task result yields failed ''' item = a[0] if type(item) != dict: - print "DEBUG: GOT A" - print item raise errors.AnsibleFilterError("|failed expects a dictionary") rc = item.get('rc',0) failed = item.get('failed',False) @@ -129,6 +130,15 @@ def search(value, pattern='', ignorecase=False): ''' Perform a `re.search` returning a boolean ''' return regex(value, pattern, ignorecase, 'search') +def regex_replace(value='', pattern='', replacement='', ignorecase=False): + ''' Perform a `re.sub` returning a string ''' + if ignorecase: + flags = re.I + else: + flags = 0 + _re = re.compile(pattern, flags=flags) + return _re.sub(replacement, value) + def unique(a): return set(a) @@ -144,6 +154,37 @@ def symmetric_difference(a, b): def union(a, b): return set(a).union(b) +def version_compare(value, version, operator='eq', strict=False): + ''' Perform a version comparison on a value ''' + op_map = { + '==': 'eq', '=': 'eq', 'eq': 'eq', + '<': 'lt', 'lt': 'lt', + '<=': 'le', 'le': 'le', + '>': 'gt', 'gt': 'gt', + '>=': 'ge', 'ge': 'ge', + '!=': 'ne', '<>': 'ne', 'ne': 'ne' + } + + if strict: + Version = StrictVersion + else: + Version = LooseVersion + + if operator in op_map: + operator = op_map[operator] + else: + raise errors.AnsibleFilterError('Invalid operator type') + + try: + method = getattr(py_operator, operator) + return method(Version(str(value)), Version(str(version))) + except Exception, e: + raise errors.AnsibleFilterError('Version comparison: %s' % e) + +def rand(end, start=0, step=1): + r = SystemRandom() + return r.randrange(start, end, step) + class FilterModule(object): ''' Ansible core jinja2 filters ''' @@ -198,6 +239,7 @@ class FilterModule(object): 'match': match, 'search': search, 'regex': regex, + 'regex_replace': regex_replace, # list 'unique' : unique, @@ -205,5 +247,11 @@ class FilterModule(object): 'difference': difference, 'symmetric_difference': symmetric_difference, 'union': union, + + # version comparison + 'version_compare': version_compare, + + # random numbers + 'random': rand, } diff --git a/lib/ansible/runner/lookup_plugins/etcd.py b/lib/ansible/runner/lookup_plugins/etcd.py index 07adec80297..a758a2fb0b5 100644 --- a/lib/ansible/runner/lookup_plugins/etcd.py +++ 
b/lib/ansible/runner/lookup_plugins/etcd.py
@@ -16,6 +16,7 @@
 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.

 from ansible import utils
+import os
 import urllib2
 try:
     import json
@@ -24,6 +25,8 @@ except ImportError:

 # this can be made configurable, but should not use ansible.cfg
 ANSIBLE_ETCD_URL = 'http://127.0.0.1:4001'
+if os.getenv('ANSIBLE_ETCD_URL') is not None:
+    ANSIBLE_ETCD_URL = os.environ['ANSIBLE_ETCD_URL']

 class etcd():
     def __init__(self, url=ANSIBLE_ETCD_URL):
@@ -62,7 +65,7 @@ class LookupModule(object):

     def run(self, terms, inject=None, **kwargs):

-        terms = utils.listify_lookup_plugin_terms(terms, self.basedir, inject) 
+        terms = utils.listify_lookup_plugin_terms(terms, self.basedir, inject)

         if isinstance(terms, basestring):
             terms = [ terms ]
diff --git a/lib/ansible/runner/lookup_plugins/pipe.py b/lib/ansible/runner/lookup_plugins/pipe.py
index 4205b887ffe..0cd9e1cda5d 100644
--- a/lib/ansible/runner/lookup_plugins/pipe.py
+++ b/lib/ansible/runner/lookup_plugins/pipe.py
@@ -32,6 +32,17 @@ class LookupModule(object):

         ret = []
         for term in terms:
+            '''
+            http://docs.python.org/2/library/subprocess.html#popen-constructor
+
+            The shell argument (which defaults to False) specifies whether to use the
+            shell as the program to execute. If shell is True, it is recommended to pass
+            args as a string rather than as a sequence
+
+            https://github.com/ansible/ansible/issues/6550
+            '''
+            term = str(term)
+
             p = subprocess.Popen(term, cwd=self.basedir, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
             (stdout, stderr) = p.communicate()
             if p.returncode == 0:
diff --git a/lib/ansible/runner/poller.py b/lib/ansible/runner/poller.py
index c69b2e76da6..cb2da738b1f 100644
--- a/lib/ansible/runner/poller.py
+++ b/lib/ansible/runner/poller.py
@@ -30,18 +30,21 @@ class AsyncPoller(object):
         self.hosts_to_poll = []
         self.completed = False

-        # Get job id and which hosts to poll again in the future
-        jid = None
+        # flag to determine if at least one host was contacted
+        self.active = False # True to work with & below
         skipped = True
         for (host, res) in results['contacted'].iteritems():
             if res.get('started', False):
                 self.hosts_to_poll.append(host)
                 jid = res.get('ansible_job_id', None)
+                self.runner.vars_cache[host]['ansible_job_id'] = jid
+                self.active = True
             else:
                 skipped = skipped & res.get('skipped', False)
                 self.results['contacted'][host] = res
         for (host, res) in results['dark'].iteritems():
+            self.runner.vars_cache[host]['ansible_job_id'] = ''
             self.results['dark'][host] = res

         if not skipped:
@@ -49,14 +52,13 @@ class AsyncPoller(object):
                 raise errors.AnsibleError("unexpected error: unable to determine jid")
             if len(self.hosts_to_poll)==0:
                 raise errors.AnsibleError("unexpected error: no hosts to poll")
-        self.jid = jid

     def poll(self):
         """ Poll the job status.
Returns the changes in this iteration.""" self.runner.module_name = 'async_status' - self.runner.module_args = "jid=%s" % self.jid + self.runner.module_args = "jid={{ansible_job_id}}" self.runner.pattern = "*" self.runner.background = 0 self.runner.complex_args = None @@ -75,13 +77,14 @@ class AsyncPoller(object): self.results['contacted'][host] = res poll_results['contacted'][host] = res if res.get('failed', False) or res.get('rc', 0) != 0: - self.runner.callbacks.on_async_failed(host, res, self.jid) + self.runner.callbacks.on_async_failed(host, res, self.runner.vars_cache[host]['ansible_job_id']) else: - self.runner.callbacks.on_async_ok(host, res, self.jid) + self.runner.callbacks.on_async_ok(host, res, self.runner.vars_cache[host]['ansible_job_id']) for (host, res) in results['dark'].iteritems(): self.results['dark'][host] = res poll_results['dark'][host] = res - self.runner.callbacks.on_async_failed(host, res, self.jid) + if host in self.hosts_to_poll: + self.runner.callbacks.on_async_failed(host, res, self.runner.vars_cache[host].get('ansible_job_id','XX')) self.hosts_to_poll = hosts if len(hosts)==0: @@ -92,7 +95,7 @@ class AsyncPoller(object): def wait(self, seconds, poll_interval): """ Wait a certain time for job completion, check status every poll_interval. """ # jid is None when all hosts were skipped - if self.jid is None: + if not self.active: return self.results clock = seconds - poll_interval @@ -103,7 +106,7 @@ class AsyncPoller(object): for (host, res) in poll_results['polled'].iteritems(): if res.get('started'): - self.runner.callbacks.on_async_poll(host, res, self.jid, clock) + self.runner.callbacks.on_async_poll(host, res, self.runner.vars_cache[host]['ansible_job_id'], clock) clock = clock - poll_interval diff --git a/lib/ansible/utils/__init__.py b/lib/ansible/utils/__init__.py index 02148faff0c..ff73e0629a5 100644 --- a/lib/ansible/utils/__init__.py +++ b/lib/ansible/utils/__init__.py @@ -29,6 +29,7 @@ from ansible.utils.plugins import * from ansible.utils import template from ansible.callbacks import display import ansible.constants as C +import ast import time import StringIO import stat @@ -42,6 +43,7 @@ import traceback import getpass import sys import textwrap +import json #import vault from vault import VaultLib @@ -98,7 +100,7 @@ def key_for_hostname(hostname): raise errors.AnsibleError('ACCELERATE_KEYS_DIR is not a directory.') if stat.S_IMODE(os.stat(key_path).st_mode) != int(C.ACCELERATE_KEYS_DIR_PERMS, 8): - raise errors.AnsibleError('Incorrect permissions on ACCELERATE_KEYS_DIR (%s)' % (C.ACCELERATE_KEYS_DIR,)) + raise errors.AnsibleError('Incorrect permissions on the private key directory. Use `chmod 0%o %s` to correct this issue, and make sure any of the keys files contained within that directory are set to 0%o' % (int(C.ACCELERATE_KEYS_DIR_PERMS, 8), C.ACCELERATE_KEYS_DIR, int(C.ACCELERATE_KEYS_FILE_PERMS, 8))) key_path = os.path.join(key_path, hostname) @@ -112,7 +114,7 @@ def key_for_hostname(hostname): return key else: if stat.S_IMODE(os.stat(key_path).st_mode) != int(C.ACCELERATE_KEYS_FILE_PERMS, 8): - raise errors.AnsibleError('Incorrect permissions on ACCELERATE_KEYS_FILE (%s)' % (key_path,)) + raise errors.AnsibleError('Incorrect permissions on the key file for this host. Use `chmod 0%o %s` to correct this issue.' 
% (int(C.ACCELERATE_KEYS_FILE_PERMS, 8), key_path)) fh = open(key_path) key = AesKey.Read(fh.read()) fh.close() @@ -192,7 +194,7 @@ def check_conditional(conditional, basedir, inject, fail_on_undefined=False): conditional = conditional.replace("jinja2_compare ","") # allow variable names - if conditional in inject and str(inject[conditional]).find('-') == -1: + if conditional in inject and '-' not in str(inject[conditional]): conditional = inject[conditional] conditional = template.template(basedir, conditional, inject, fail_on_undefined=fail_on_undefined) original = str(conditional).replace("jinja2_compare ","") @@ -205,9 +207,9 @@ def check_conditional(conditional, basedir, inject, fail_on_undefined=False): # variable was undefined. If we happened to be # looking for an undefined variable, return True, # otherwise fail - if conditional.find("is undefined") != -1: + if "is undefined" in conditional: return True - elif conditional.find("is defined") != -1: + elif "is defined" in conditional: return False else: raise errors.AnsibleError("error while evaluating conditional: %s" % original) @@ -313,7 +315,7 @@ def parse_json(raw_data): raise for t in tokens: - if t.find("=") == -1: + if "=" not in t: raise errors.AnsibleError("failed to parse: %s" % orig_data) (key,value) = t.split("=", 1) if key == 'changed' or 'failed': @@ -330,9 +332,9 @@ def parse_json(raw_data): def smush_braces(data): ''' smush Jinaj2 braces so unresolved templates like {{ foo }} don't get parsed weird by key=value code ''' - while data.find('{{ ') != -1: + while '{{ ' in data: data = data.replace('{{ ', '{{') - while data.find(' }}') != -1: + while ' }}' in data: data = data.replace(' }}', '}}') return data @@ -350,14 +352,30 @@ def smush_ds(data): else: return data -def parse_yaml(data): - ''' convert a yaml string to a data structure ''' - return smush_ds(yaml.safe_load(data)) +def parse_yaml(data, path_hint=None): + ''' convert a yaml string to a data structure. Also supports JSON, ssssssh!!!''' + + stripped_data = data.lstrip() + loaded = None + if stripped_data.startswith("{") or stripped_data.startswith("["): + # since the line starts with { or [ we can infer this is a JSON document. + try: + loaded = json.loads(data) + except ValueError, ve: + if path_hint: + raise errors.AnsibleError(path_hint + ": " + str(ve)) + else: + raise errors.AnsibleError(str(ve)) + else: + # else this is pretty sure to be a YAML document + loaded = yaml.safe_load(data) + + return smush_ds(loaded) def process_common_errors(msg, probline, column): replaced = probline.replace(" ","") - if replaced.find(":{{") != -1 and replaced.find("}}") != -1: + if ":{{" in replaced and "}}" in replaced: msg = msg + """ This one looks easy to fix. YAML thought it was looking for the start of a hash/dictionary and was confused to see a second "{". 
Most likely this was @@ -407,7 +425,7 @@ Or: match = True elif middle.startswith('"') and not middle.endswith('"'): match = True - if len(middle) > 0 and middle[0] in [ '"', "'" ] and middle[-1] in [ '"', "'" ] and probline.count("'") > 2 or probline.count("'") > 2: + if len(middle) > 0 and middle[0] in [ '"', "'" ] and middle[-1] in [ '"', "'" ] and probline.count("'") > 2 or probline.count('"') > 2: unbalanced = True if match: msg = msg + """ @@ -512,7 +530,7 @@ def parse_yaml_from_file(path, vault_password=None): data = vault.decrypt(data) try: - return parse_yaml(data) + return parse_yaml(data, path_hint=path) except yaml.YAMLError, exc: process_yaml_error(exc, data, path) @@ -522,10 +540,16 @@ def parse_kv(args): if args is not None: # attempting to split a unicode here does bad things args = args.encode('utf-8') - vargs = [x.decode('utf-8') for x in shlex.split(args, posix=True)] - #vargs = shlex.split(str(args), posix=True) + try: + vargs = shlex.split(args, posix=True) + except ValueError, ve: + if 'no closing quotation' in str(ve).lower(): + raise errors.AnsibleError("error parsing argument string, try quoting the entire line.") + else: + raise + vargs = [x.decode('utf-8') for x in vargs] for x in vargs: - if x.find("=") != -1: + if "=" in x: k, v = x.split("=",1) options[k]=v return options @@ -566,12 +590,15 @@ def md5(filename): return None digest = _md5() blocksize = 64 * 1024 - infile = open(filename, 'rb') - block = infile.read(blocksize) - while block: - digest.update(block) + try: + infile = open(filename, 'rb') block = infile.read(blocksize) - infile.close() + while block: + digest.update(block) + block = infile.read(blocksize) + infile.close() + except IOError, e: + raise errors.AnsibleError("error while accessing the file %s, error was: %s" % (filename, e)) return digest.hexdigest() def default(value, function): @@ -787,6 +814,12 @@ def ask_vault_passwords(ask_vault_pass=False, ask_new_vault_pass=False, confirm_ if new_vault_pass != new_vault_pass2: raise errors.AnsibleError("Passwords do not match") + # enforce no newline chars at the end of passwords + if vault_pass: + vault_pass = vault_pass.strip() + if new_vault_pass: + new_vault_pass = new_vault_pass.strip() + return vault_pass, new_vault_pass def ask_passwords(ask_pass=False, ask_sudo_pass=False, ask_su_pass=False, ask_vault_pass=False): @@ -945,51 +978,95 @@ def is_list_of_strings(items): return False return True -def safe_eval(str, locals=None, include_exceptions=False): +def safe_eval(expr, locals={}, include_exceptions=False): ''' this is intended for allowing things like: with_items: a_list_variable where Jinja2 would return a string but we do not want to allow it to call functions (outside of Jinja2, where the env is constrained) + + Based on: + http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe ''' - # FIXME: is there a more native way to do this? - def is_set(var): - return not var.startswith("$") and not '{{' in var + # this is the whitelist of AST nodes we are going to + # allow in the evaluation. Any node type other than + # those listed here will raise an exception in our custom + # visitor class defined below. 
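The idea the comment above introduces can be shown in miniature: parse the expression into an AST, refuse any node type outside an allowed set, and only then compile and evaluate. A minimal Python 3 sketch, far smaller than the real whitelist that follows:

```python
import ast

ALLOWED = (ast.Expression, ast.Constant, ast.List, ast.Tuple, ast.Dict,
           ast.Load, ast.BinOp, ast.UnaryOp, ast.USub,
           ast.Add, ast.Sub, ast.Mult, ast.Div)

def tiny_safe_eval(expr):
    # Vet every node before anything is executed.
    tree = ast.parse(expr, mode='eval')
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError("disallowed node: %s" % type(node).__name__)
    return eval(compile(tree, '<expr>', 'eval'), {}, {})

print(tiny_safe_eval('[1, 2] + [3 * 4]'))   # [1, 2, 12]
try:
    tiny_safe_eval("__import__('os')")      # Call/Name nodes are rejected
except ValueError as e:
    print(e)
```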
+    SAFE_NODES = set(
+        (
+            ast.Expression,
+            ast.Compare,
+            ast.Str,
+            ast.List,
+            ast.Tuple,
+            ast.Dict,
+            ast.Call,
+            ast.Load,
+            ast.BinOp,
+            ast.UnaryOp,
+            ast.Num,
+            ast.Name,
+            ast.Add,
+            ast.Sub,
+            ast.Mult,
+            ast.Div,
+        )
+    )
+
+    # AST node types were expanded after 2.6
+    if not sys.version.startswith('2.6'):
+        SAFE_NODES = SAFE_NODES.union(
+            set(
+                (ast.Set,)
+            )
+        )

-    def is_unset(var):
-        return var.startswith("$") or '{{' in var
+    # builtin functions that are not safe to call
+    INVALID_CALLS = (
+        'classmethod', 'compile', 'delattr', 'eval', 'execfile', 'file',
+        'filter', 'help', 'input', 'object', 'open', 'raw_input', 'reduce',
+        'reload', 'repr', 'setattr', 'staticmethod', 'super', 'type',
+    )

-    # do not allow method calls to modules
-    if not isinstance(str, basestring):
+    class CleansingNodeVisitor(ast.NodeVisitor):
+        def generic_visit(self, node):
+            if type(node) not in SAFE_NODES:
+                #raise Exception("invalid expression (%s) type=%s" % (expr, type(node)))
+                raise Exception("invalid expression (%s)" % expr)
+            super(CleansingNodeVisitor, self).generic_visit(node)
+        def visit_Call(self, call):
+            if call.func.id in INVALID_CALLS:
+                raise Exception("invalid function: %s" % call.func.id)
+
+    if not isinstance(expr, basestring):
         # already templated to a datastructure, perhaps?
         if include_exceptions:
-            return (str, None)
-        return str
-    if re.search(r'\w\.\w+\(', str):
-        if include_exceptions:
-            return (str, None)
-        return str
-    # do not allow imports
-    if re.search(r'import \w+', str):
-        if include_exceptions:
-            return (str, None)
-        return str
+            return (expr, None)
+        return expr
+
     try:
-        result = None
-        if not locals:
-            result = eval(str)
-        else:
-            result = eval(str, None, locals)
+        parsed_tree = ast.parse(expr, mode='eval')
+        cnv = CleansingNodeVisitor()
+        cnv.visit(parsed_tree)
+        compiled = compile(parsed_tree, expr, 'eval')
+        result = eval(compiled, {}, locals)
+
         if include_exceptions:
             return (result, None)
         else:
             return result
+    except SyntaxError, e:
+        # special handling for syntax errors, we just return
+        # the expression string back as-is
+        if include_exceptions:
+            return (expr, None)
+        return expr
     except Exception, e:
         if include_exceptions:
-            return (str, e)
-        return str
+            return (expr, e)
+        return expr


 def listify_lookup_plugin_terms(terms, basedir, inject):
@@ -1001,12 +1078,12 @@ def listify_lookup_plugin_terms(terms, basedir, inject):
             # with_items: {{ alist }}

             stripped = terms.strip()
-            if not (stripped.startswith('{') or stripped.startswith('[')) and not stripped.startswith("/"):
+            if not (stripped.startswith('{') or stripped.startswith('[')) and not stripped.startswith("/") and not stripped.startswith('set(['):
                 # if not already a list, get ready to evaluate with Jinja2
                 # not sure why the "/" is in above code :)
                 try:
                     new_terms = template.template(basedir, "{{ %s }}" % terms, inject)
-                    if isinstance(new_terms, basestring) and new_terms.find("{{") != -1:
+                    if isinstance(new_terms, basestring) and "{{" in new_terms:
                         pass
                     else:
                         terms = new_terms
@@ -1071,3 +1148,13 @@ def random_password(length=20, chars=C.DEFAULT_PASSWORD_CHARS):
         password.append(new_char)

     return ''.join(password)
+
+def before_comment(msg):
+    ''' what's the part of a string before a comment?
''' + msg = msg.replace("\#","**NOT_A_COMMENT**") + msg = msg.split("#")[0] + msg = msg.replace("**NOT_A_COMMENT**","#") + return msg + + + diff --git a/lib/ansible/utils/module_docs.py b/lib/ansible/utils/module_docs.py index 3a5d0782961..3983efd508b 100644 --- a/lib/ansible/utils/module_docs.py +++ b/lib/ansible/utils/module_docs.py @@ -23,6 +23,8 @@ import ast import yaml import traceback +from ansible import utils + # modules that are ok that they do not have documentation strings BLACKLIST_MODULES = [ 'async_wrapper', 'accelerate', 'async_status' @@ -34,6 +36,10 @@ def get_docstring(filename, verbose=False): in the given file. Parse DOCUMENTATION from YAML and return the YAML doc or None together with EXAMPLES, as plain text. + + DOCUMENTATION can be extended using documentation fragments + loaded by the PluginLoader from the module_docs_fragments + directory. """ doc = None @@ -46,6 +52,41 @@ def get_docstring(filename, verbose=False): if isinstance(child, ast.Assign): if 'DOCUMENTATION' in (t.id for t in child.targets): doc = yaml.safe_load(child.value.s) + fragment_slug = doc.get('extends_documentation_fragment', + 'doesnotexist').lower() + + # Allow the module to specify a var other than DOCUMENTATION + # to pull the fragment from, using dot notation as a separator + if '.' in fragment_slug: + fragment_name, fragment_var = fragment_slug.split('.', 1) + fragment_var = fragment_var.upper() + else: + fragment_name, fragment_var = fragment_slug, 'DOCUMENTATION' + + + if fragment_slug != 'doesnotexist': + fragment_class = utils.plugins.fragment_loader.get(fragment_name) + assert fragment_class is not None + + fragment_yaml = getattr(fragment_class, fragment_var, '{}') + fragment = yaml.safe_load(fragment_yaml) + + if fragment.has_key('notes'): + notes = fragment.pop('notes') + if notes: + if not doc.has_key('notes'): + doc['notes'] = [] + doc['notes'].extend(notes) + + if 'options' not in fragment.keys(): + raise Exception("missing options in fragment, possibly misformatted?") + + for key, value in fragment.items(): + if not doc.has_key(key): + doc[key] = value + else: + doc[key].update(value) + if 'EXAMPLES' in (t.id for t in child.targets): plainexamples = child.value.s[1:] # Skip first empty line except: diff --git a/lib/ansible/utils/module_docs_fragments/__init__.py b/lib/ansible/utils/module_docs_fragments/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/lib/ansible/utils/module_docs_fragments/aws.py b/lib/ansible/utils/module_docs_fragments/aws.py new file mode 100644 index 00000000000..9bbe84a1355 --- /dev/null +++ b/lib/ansible/utils/module_docs_fragments/aws.py @@ -0,0 +1,76 @@ +# (c) 2014, Will Thames +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . 
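The fragment handling in `get_docstring` above reduces to: splice the fragment's `notes` in as a list extension, require that the fragment carries `options`, then merge every remaining key into the module's doc, updating nested dicts. A sketch of that merge with made-up doc data (all names illustrative):

```python
def merge_fragment(doc, fragment):
    # Fold a shared documentation fragment into a module's parsed doc.
    fragment = dict(fragment)          # work on a copy
    notes = fragment.pop('notes', None)
    if notes:
        doc.setdefault('notes', []).extend(notes)
    if 'options' not in fragment:
        raise Exception("missing options in fragment, possibly misformatted?")
    for key, value in fragment.items():
        if key not in doc:
            doc[key] = value
        else:
            doc[key].update(value)     # e.g. merge shared options in
    return doc

doc = {'options': {'name': {'description': 'module-specific arg'}}}
frag = {'options': {'region': {'description': 'shared arg'}},
        'notes': ['shared note']}
print(merge_fragment(doc, frag))
```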
+ + +class ModuleDocFragment(object): + + # AWS only documentation fragment + DOCUMENTATION = """ +options: + ec2_url: + description: + - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used + required: false + default: null + aliases: [] + aws_secret_key: + description: + - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. + required: false + default: null + aliases: [ 'ec2_secret_key', 'secret_key' ] + aws_access_key: + description: + - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. + required: false + default: null + aliases: [ 'ec2_access_key', 'access_key' ] + validate_certs: + description: + - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. + required: false + default: "yes" + choices: ["yes", "no"] + aliases: [] + version_added: "1.5" + profile: + description: + - uses a boto profile. Only works with boto >= 2.24.0 + required: false + default: null + aliases: [] + version_added: "1.6" + security_token: + description: + - security token to authenticate against AWS + required: false + default: null + aliases: [] + version_added: "1.6" +requirements: + - boto +notes: + - The following environment variables can be used C(AWS_ACCESS_KEY) or + C(EC2_ACCESS_KEY) or C(AWS_ACCESS_KEY_ID), + C(AWS_SECRET_KEY) or C(EC2_SECRET_KEY) or C(AWS_SECRET_ACCESS_KEY), + C(AWS_REGION) or C(EC2_REGION), C(AWS_SECURITY_TOKEN) + - Ansible uses the boto configuration file (typically ~/.boto) if no + credentials are provided. See http://boto.readthedocs.org/en/latest/boto_config_tut.html + - C(AWS_REGION) or C(EC2_REGION) can be typically be used to specify the + AWS region, when required, but + this can also be configured in the boto config file +""" diff --git a/lib/ansible/utils/module_docs_fragments/files.py b/lib/ansible/utils/module_docs_fragments/files.py new file mode 100644 index 00000000000..15c6b69bab8 --- /dev/null +++ b/lib/ansible/utils/module_docs_fragments/files.py @@ -0,0 +1,58 @@ +# (c) 2014, Matt Martz +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . + + +class ModuleDocFragment(object): + + # Standard files documentation fragment + DOCUMENTATION = """ +options: + path: + description: + - 'path to the file being managed. Aliases: I(dest), I(name)' + required: true + default: [] + aliases: ['dest', 'name'] + state: + description: + - If C(directory), all immediate subdirectories will be created if they + do not exist. If C(file), the file will NOT be created if it does not + exist, see the M(copy) or M(template) module if you want that behavior. + If C(link), the symbolic link will be created or changed. Use C(hard) + for hardlinks. 
If C(absent), directories will be recursively deleted, + and files or symlinks will be unlinked. If C(touch) (new in 1.4), an empty file will + be created if the c(path) does not exist, while an existing file or + directory will receive updated file access and modification times (similar + to the way `touch` works from the command line). + required: false + default: file + choices: [ file, link, directory, hard, touch, absent ] + src: + required: false + default: null + choices: [] + description: + - path of the file to link to (applies only to C(state= link or hard)). Will accept absolute, + relative and nonexisting (with C(force)) paths. Relative paths are not expanded. + recurse: + required: false + default: "no" + choices: [ "yes", "no" ] + version_added: "1.1" + description: + - recursively set the specified file attributes (applies only to state=directory) +""" diff --git a/lib/ansible/utils/module_docs_fragments/rackspace.py b/lib/ansible/utils/module_docs_fragments/rackspace.py new file mode 100644 index 00000000000..a49202c500f --- /dev/null +++ b/lib/ansible/utils/module_docs_fragments/rackspace.py @@ -0,0 +1,122 @@ +# (c) 2014, Matt Martz +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . + + +class ModuleDocFragment(object): + + # Standard Rackspace only documentation fragment + DOCUMENTATION = """ +options: + api_key: + description: + - Rackspace API key (overrides I(credentials)) + aliases: + - password + credentials: + description: + - File to find the Rackspace credentials in (ignored if I(api_key) and + I(username) are provided) + default: null + aliases: + - creds_file + env: + description: + - Environment as configured in ~/.pyrax.cfg, + see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration) + version_added: 1.5 + region: + description: + - Region to create an instance in + default: DFW + username: + description: + - Rackspace username (overrides I(credentials)) + verify_ssl: + description: + - Whether or not to require SSL validation of API endpoints + version_added: 1.5 +requirements: + - pyrax +notes: + - The following environment variables can be used, C(RAX_USERNAME), + C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). + - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file + appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) + - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file + - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +""" + + # Documentation fragment including attributes to enable communication + # of other OpenStack clouds. Not all rax modules support this. 
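A module opts into one of these shared blocks through the `extends_documentation_fragment` key that `get_docstring` resolves via the fragment loader. A hypothetical module's DOCUMENTATION string using the rackspace fragment might look like this (the module name is invented for illustration):

```python
DOCUMENTATION = '''
---
module: rax_example
short_description: illustrate shared documentation fragments
options:
  name:
    description:
      - name of the resource this example manages
    required: true
extends_documentation_fragment: rackspace
'''
```

At docs build time the shared api_key/credentials/region options and notes are merged into this doc, so individual rax modules no longer repeat them.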
+ OPENSTACK = """ +options: + api_key: + description: + - Rackspace API key (overrides I(credentials)) + aliases: + - password + auth_endpoint: + description: + - The URI of the authentication service + default: https://identity.api.rackspacecloud.com/v2.0/ + version_added: 1.5 + credentials: + description: + - File to find the Rackspace credentials in (ignored if I(api_key) and + I(username) are provided) + default: null + aliases: + - creds_file + env: + description: + - Environment as configured in ~/.pyrax.cfg, + see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration) + version_added: 1.5 + identity_type: + description: + - Authentication machanism to use, such as rackspace or keystone + default: rackspace + version_added: 1.5 + region: + description: + - Region to create an instance in + default: DFW + tenant_id: + description: + - The tenant ID used for authentication + version_added: 1.5 + tenant_name: + description: + - The tenant name used for authentication + version_added: 1.5 + username: + description: + - Rackspace username (overrides I(credentials)) + verify_ssl: + description: + - Whether or not to require SSL validation of API endpoints + version_added: 1.5 +requirements: + - pyrax +notes: + - The following environment variables can be used, C(RAX_USERNAME), + C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). + - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file + appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) + - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file + - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +""" diff --git a/lib/ansible/utils/plugins.py b/lib/ansible/utils/plugins.py index 1aeba166931..22d74c185a3 100644 --- a/lib/ansible/utils/plugins.py +++ b/lib/ansible/utils/plugins.py @@ -30,7 +30,7 @@ _basedirs = [] def push_basedir(basedir): # avoid pushing the same absolute dir more than once - basedir = os.path.abspath(basedir) + basedir = os.path.realpath(basedir) if basedir not in _basedirs: _basedirs.insert(0, basedir) @@ -99,7 +99,7 @@ class PluginLoader(object): ret = [] ret += self._extra_dirs for basedir in _basedirs: - fullpath = os.path.abspath(os.path.join(basedir, self.subdir)) + fullpath = os.path.realpath(os.path.join(basedir, self.subdir)) if os.path.isdir(fullpath): files = glob.glob("%s/*" % fullpath) for file in files: @@ -111,7 +111,7 @@ class PluginLoader(object): # look in any configured plugin paths, allow one level deep for subcategories configured_paths = self.config.split(os.pathsep) for path in configured_paths: - path = os.path.abspath(os.path.expanduser(path)) + path = os.path.realpath(os.path.expanduser(path)) contents = glob.glob("%s/*" % path) for c in contents: if os.path.isdir(c) and c not in ret: @@ -131,7 +131,7 @@ class PluginLoader(object): ''' Adds an additional directory to the search path ''' self._paths = None - directory = os.path.abspath(directory) + directory = os.path.realpath(directory) if directory is not None: if with_subdir: @@ -240,4 +240,9 @@ filter_loader = PluginLoader( 'filter_plugins' ) - +fragment_loader = PluginLoader( + 'ModuleDocFragment', + 'ansible.utils.module_docs_fragments', + os.path.join(os.path.dirname(__file__), 'module_docs_fragments'), + '', +) diff --git a/lib/ansible/utils/string_functions.py b/lib/ansible/utils/string_functions.py index 4972cc07625..3b452718f74 100644 --- 
a/lib/ansible/utils/string_functions.py +++ b/lib/ansible/utils/string_functions.py @@ -1,9 +1,12 @@ def isprintable(instring): - #http://stackoverflow.com/a/3637294 - import string - printset = set(string.printable) - isprintable = set(instring).issubset(printset) - return isprintable + if isinstance(instring, str): + #http://stackoverflow.com/a/3637294 + import string + printset = set(string.printable) + isprintable = set(instring).issubset(printset) + return isprintable + else: + return True def count_newlines_from_end(str): i = len(str) diff --git a/lib/ansible/utils/template.py b/lib/ansible/utils/template.py index fc4ff9fd204..8ec27ac0976 100644 --- a/lib/ansible/utils/template.py +++ b/lib/ansible/utils/template.py @@ -88,8 +88,14 @@ def lookup(name, *args, **kwargs): vars = kwargs.get('vars', None) if instance is not None: - ran = instance.run(*args, inject=vars, **kwargs) - return ",".join(ran) + # safely catch run failures per #5059 + try: + ran = instance.run(*args, inject=vars, **kwargs) + except Exception, e: + ran = None + if ran: + ran = ",".join(ran) + return ran else: raise errors.AnsibleError("lookup plugin (%s) not found" % name) @@ -193,7 +199,7 @@ class J2Template(jinja2.environment.Template): def new_context(self, vars=None, shared=False, locals=None): return jinja2.runtime.Context(self.environment, vars.add_locals(locals), self.name, self.blocks) -def template_from_file(basedir, path, vars): +def template_from_file(basedir, path, vars, vault_password=None): ''' run a file through the templating engine ''' fail_on_undefined = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR @@ -310,7 +316,13 @@ def template_from_string(basedir, data, vars, fail_on_undefined=False): if os.path.exists(filesdir): basedir = filesdir - data = data.decode('utf-8') + # 6227 + if isinstance(data, unicode): + try: + data = data.decode('utf-8') + except UnicodeEncodeError, e: + pass + try: t = environment.from_string(data) except Exception, e: @@ -332,7 +344,10 @@ def template_from_string(basedir, data, vars, fail_on_undefined=False): res = jinja2.utils.concat(rf) except TypeError, te: if 'StrictUndefined' in str(te): - raise errors.AnsibleUndefinedVariable("unable to look up a name or access an attribute in template string") + raise errors.AnsibleUndefinedVariable( + "Unable to look up a name or access an attribute in template string. " + \ + "Make sure your variable name does not contain invalid characters like '-'." + ) else: raise errors.AnsibleError("an unexpected type error occured. Error was %s" % te) return res diff --git a/lib/ansible/utils/vault.py b/lib/ansible/utils/vault.py index 9a43fee1b92..88fa710938b 100644 --- a/lib/ansible/utils/vault.py +++ b/lib/ansible/utils/vault.py @@ -19,6 +19,7 @@ # installs ansible and sets it up to run on cron. 
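The vault rework that follows replaces hex-wrapped payloads with a plain `$ANSIBLE_VAULT;<version>;<cipher>` header over 80-column body lines. A standalone sketch of that framing in the spirit of the `_add_header`/`_split_header` helpers below, with the payload left unencrypted purely for illustration:

```python
HEADER = '$ANSIBLE_VAULT'

def add_header(data, version='1.1', cipher='AES256'):
    # Prepend the header line and wrap the payload at 80 columns.
    lines = [data[i:i + 80] for i in range(0, len(data), 80)]
    return HEADER + ';' + version + ';' + cipher + '\n' + '\n'.join(lines) + '\n'

def split_header(blob):
    # Recover (version, cipher, payload) from a vaulted blob.
    lines = blob.split('\n')
    _, version, cipher = lines[0].split(';')
    return version, cipher, '\n'.join(lines[1:])

blob = add_header('a' * 100)
print(blob.splitlines()[0])     # $ANSIBLE_VAULT;1.1;AES256
print(split_header(blob)[:2])   # ('1.1', 'AES256')
```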
import os +import shlex import shutil import tempfile from io import BytesIO @@ -30,6 +31,26 @@ from binascii import hexlify from binascii import unhexlify from ansible import constants as C +try: + from Crypto.Hash import SHA256, HMAC + HAS_HASH = True +except ImportError: + HAS_HASH = False + +# Counter import fails for 2.0.1, requires >= 2.6.1 from pip +try: + from Crypto.Util import Counter + HAS_COUNTER = True +except ImportError: + HAS_COUNTER = False + +# KDF import fails for 2.0.1, requires >= 2.6.1 from pip +try: + from Crypto.Protocol.KDF import PBKDF2 + HAS_PBKDF2 = True +except ImportError: + HAS_PBKDF2 = False + # AES IMPORTS try: from Crypto.Cipher import AES as AES @@ -37,15 +58,17 @@ try: except ImportError: HAS_AES = False +CRYPTO_UPGRADE = "ansible-vault requires a newer version of pycrypto than the one installed on your platform. You may fix this with OS-specific commands such as: yum install python-devel; rpm -e --nodeps python-crypto; pip install pycrypto" + HEADER='$ANSIBLE_VAULT' -CIPHER_WHITELIST=['AES'] +CIPHER_WHITELIST=['AES', 'AES256'] class VaultLib(object): def __init__(self, password): self.password = password self.cipher_name = None - self.version = '1.0' + self.version = '1.1' def is_encrypted(self, data): if data.startswith(HEADER): @@ -59,7 +82,8 @@ class VaultLib(object): raise errors.AnsibleError("data is already encrypted") if not self.cipher_name: - raise errors.AnsibleError("the cipher must be set before encrypting data") + self.cipher_name = "AES256" + #raise errors.AnsibleError("the cipher must be set before encrypting data") if 'Vault' + self.cipher_name in globals() and self.cipher_name in CIPHER_WHITELIST: cipher = globals()['Vault' + self.cipher_name] @@ -67,13 +91,17 @@ class VaultLib(object): else: raise errors.AnsibleError("%s cipher could not be found" % self.cipher_name) + """ # combine sha + data this_sha = sha256(data).hexdigest() tmp_data = this_sha + "\n" + data + """ + # encrypt sha + data - tmp_data = this_cipher.encrypt(tmp_data, self.password) + enc_data = this_cipher.encrypt(data, self.password) + # add header - tmp_data = self._add_headers_and_hexify_encrypted_data(tmp_data) + tmp_data = self._add_header(enc_data) return tmp_data def decrypt(self, data): @@ -83,8 +111,8 @@ class VaultLib(object): if not self.is_encrypted(data): raise errors.AnsibleError("data is not encrypted") - # clean out header, hex and sha - data = self._split_headers_and_get_unhexified_data(data) + # clean out header + data = self._split_header(data) # create the cipher object if 'Vault' + self.cipher_name in globals() and self.cipher_name in CIPHER_WHITELIST: @@ -95,34 +123,29 @@ class VaultLib(object): # try to unencrypt data data = this_cipher.decrypt(data, self.password) - - # split out sha and verify decryption - split_data = data.split("\n") - this_sha = split_data[0] - this_data = '\n'.join(split_data[1:]) - test_sha = sha256(this_data).hexdigest() - if this_sha != test_sha: + if data is None: raise errors.AnsibleError("Decryption failed") - return this_data + return data - def _add_headers_and_hexify_encrypted_data(self, data): - # combine header and hexlified encrypted data in 80 char columns + def _add_header(self, data): + # combine header and encrypted data in 80 char columns - tmpdata = hexlify(data) - tmpdata = [tmpdata[i:i+80] for i in range(0, len(tmpdata), 80)] + #tmpdata = hexlify(data) + tmpdata = [data[i:i+80] for i in range(0, len(data), 80)] if not self.cipher_name: raise errors.AnsibleError("the cipher must be set before adding a 
header") dirty_data = HEADER + ";" + str(self.version) + ";" + self.cipher_name + "\n" + for l in tmpdata: dirty_data += l + '\n' return dirty_data - def _split_headers_and_get_unhexified_data(self, data): + def _split_header(self, data): # used by decrypt tmpdata = data.split('\n') @@ -130,14 +153,22 @@ class VaultLib(object): self.version = str(tmpheader[1].strip()) self.cipher_name = str(tmpheader[2].strip()) - clean_data = ''.join(tmpdata[1:]) + clean_data = '\n'.join(tmpdata[1:]) + """ # strip out newline, join, unhex clean_data = [ x.strip() for x in clean_data ] clean_data = unhexlify(''.join(clean_data)) + """ return clean_data + def __enter__(self): + return self + + def __exit__(self, *err): + pass + class VaultEditor(object): # uses helper methods for write_file(self, filename, data) # to write a file so that code isn't duplicated for simple @@ -153,12 +184,14 @@ class VaultEditor(object): def create_file(self): """ create a new encrypted file """ + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH: + raise errors.AnsibleError(CRYPTO_UPGRADE) + if os.path.isfile(self.filename): raise errors.AnsibleError("%s exists, please use 'edit' instead" % self.filename) # drop the user into vim on file - EDITOR = os.environ.get('EDITOR','vim') - call([EDITOR, self.filename]) + call(self._editor_shell_command(self.filename)) tmpdata = self.read_data(self.filename) this_vault = VaultLib(self.password) this_vault.cipher_name = self.cipher_name @@ -166,6 +199,10 @@ class VaultEditor(object): self.write_data(enc_data, self.filename) def decrypt_file(self): + + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH: + raise errors.AnsibleError(CRYPTO_UPGRADE) + if not os.path.isfile(self.filename): raise errors.AnsibleError("%s does not exist" % self.filename) @@ -173,12 +210,18 @@ class VaultEditor(object): this_vault = VaultLib(self.password) if this_vault.is_encrypted(tmpdata): dec_data = this_vault.decrypt(tmpdata) - self.write_data(dec_data, self.filename) + if dec_data is None: + raise errors.AnsibleError("Decryption failed") + else: + self.write_data(dec_data, self.filename) else: raise errors.AnsibleError("%s is not encrypted" % self.filename) def edit_file(self): + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH: + raise errors.AnsibleError(CRYPTO_UPGRADE) + # decrypt to tmpfile tmpdata = self.read_data(self.filename) this_vault = VaultLib(self.password) @@ -187,13 +230,14 @@ class VaultEditor(object): self.write_data(dec_data, tmp_path) # drop the user into vim on the tmp file - EDITOR = os.environ.get('EDITOR','vim') - call([EDITOR, tmp_path]) + call(self._editor_shell_command(tmp_path)) new_data = self.read_data(tmp_path) - # create new vault and set cipher to old + # create new vault new_vault = VaultLib(self.password) - new_vault.cipher_name = this_vault.cipher_name + + # we want the cipher to default to AES256 + #new_vault.cipher_name = this_vault.cipher_name # encrypt new data a write out to tmp enc_data = new_vault.encrypt(new_data) @@ -203,6 +247,10 @@ class VaultEditor(object): self.shuffle_files(tmp_path, self.filename) def encrypt_file(self): + + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH: + raise errors.AnsibleError(CRYPTO_UPGRADE) + if not os.path.isfile(self.filename): raise errors.AnsibleError("%s does not exist" % self.filename) @@ -216,14 +264,20 @@ class VaultEditor(object): raise errors.AnsibleError("%s is already encrypted" % self.filename) def rekey_file(self, new_password): + + if not 
HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH: + raise errors.AnsibleError(CRYPTO_UPGRADE) + # decrypt tmpdata = self.read_data(self.filename) this_vault = VaultLib(self.password) dec_data = this_vault.decrypt(tmpdata) - # create new vault, set cipher to old and password to new + # create new vault new_vault = VaultLib(new_password) + + # we want to force cipher to the default + #new_vault.cipher_name = this_vault.cipher_name # re-encrypt data and re-write file enc_data = new_vault.encrypt(dec_data) @@ -248,17 +302,27 @@ class VaultEditor(object): os.remove(dest) shutil.move(src, dest) + def _editor_shell_command(self, filename): + EDITOR = os.environ.get('EDITOR','vim') + editor = shlex.split(EDITOR) + editor.append(filename) + + return editor + ######################################## # CIPHERS # ######################################## class VaultAES(object): + # this version has been obsoleted by the VaultAES256 class + # which uses encrypt-then-mac (fixing the order) and also improves the KDF used + # code remains for upgrade purposes only # http://stackoverflow.com/a/16761459 def __init__(self): if not HAS_AES: - raise errors.AnsibleError("pycrypto is not installed. Fix this with your package manager, for instance, yum-install python-crypto OR (apt equivalent)") + raise errors.AnsibleError(CRYPTO_UPGRADE) def aes_derive_key_and_iv(self, password, salt, key_length, iv_length): @@ -278,7 +342,12 @@ class VaultAES(object): """ Read plaintext data from in_file and write encrypted to out_file """ - in_file = BytesIO(data) + + # combine sha + data + this_sha = sha256(data).hexdigest() + tmp_data = this_sha + "\n" + data + + in_file = BytesIO(tmp_data) in_file.seek(0) out_file = BytesIO() @@ -301,14 +370,21 @@ class VaultAES(object): out_file.write(cipher.encrypt(chunk)) out_file.seek(0) - return out_file.read() + enc_data = out_file.read() + tmp_data = hexlify(enc_data) + + return tmp_data + def decrypt(self, data, password, key_length=32): """ Read encrypted data from in_file and write decrypted to out_file """ # http://stackoverflow.com/a/14989032 + data = ''.join(data.split('\n')) + data = unhexlify(data) + in_file = BytesIO(data) in_file.seek(0) out_file = BytesIO() @@ -330,6 +406,127 @@ class VaultAES(object): # reset the stream pointer to the beginning out_file.seek(0) - return out_file.read() + new_data = out_file.read() + + # split out sha and verify decryption + split_data = new_data.split("\n") + this_sha = split_data[0] + this_data = '\n'.join(split_data[1:]) + test_sha = sha256(this_data).hexdigest() + + if this_sha != test_sha: + raise errors.AnsibleError("Decryption failed") + + #return out_file.read() + return this_data + + +class VaultAES256(object): + + """ + Vault implementation using AES-CTR with an HMAC-SHA256 authentication code.
+ Keys are derived using PBKDF2 + """ + + # http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html + + def __init__(self): + + if not HAS_PBKDF2 or not HAS_COUNTER or not HAS_HASH: + raise errors.AnsibleError(CRYPTO_UPGRADE) + + def gen_key_initctr(self, password, salt): + # 16 for AES 128, 32 for AES256 + keylength = 32 + + # match the size used for counter.new to avoid extra work + ivlength = 16 + + hash_function = SHA256 + + # make two keys and one iv + pbkdf2_prf = lambda p, s: HMAC.new(p, s, hash_function).digest() + + + derivedkey = PBKDF2(password, salt, dkLen=(2 * keylength) + ivlength, + count=10000, prf=pbkdf2_prf) + + key1 = derivedkey[:keylength] + key2 = derivedkey[keylength:(keylength * 2)] + iv = derivedkey[(keylength * 2):(keylength * 2) + ivlength] + + return key1, key2, hexlify(iv) + + + def encrypt(self, data, password): + + salt = os.urandom(32) + key1, key2, iv = self.gen_key_initctr(password, salt) + + # PKCS#7 PAD DATA http://tools.ietf.org/html/rfc5652#section-6.3 + bs = AES.block_size + padding_length = (bs - len(data) % bs) or bs + data += padding_length * chr(padding_length) + + # COUNTER.new PARAMETERS + # 1) nbits (integer) - Length of the counter, in bits. + # 2) initial_value (integer) - initial value of the counter. "iv" from gen_key_initctr + + ctr = Counter.new(128, initial_value=long(iv, 16)) + + # AES.new PARAMETERS + # 1) AES key, must be either 16, 24, or 32 bytes long -- "key" from gen_key_initctr + # 2) MODE_CTR, is the recommended mode + # 3) counter= + + cipher = AES.new(key1, AES.MODE_CTR, counter=ctr) + + # ENCRYPT PADDED DATA + cryptedData = cipher.encrypt(data) + + # COMBINE SALT, DIGEST AND DATA + hmac = HMAC.new(key2, cryptedData, SHA256) + message = "%s\n%s\n%s" % ( hexlify(salt), hmac.hexdigest(), hexlify(cryptedData) ) + message = hexlify(message) + return message + + def decrypt(self, data, password): + + # SPLIT SALT, DIGEST, AND DATA + data = ''.join(data.split("\n")) + data = unhexlify(data) + salt, cryptedHmac, cryptedData = data.split("\n", 2) + salt = unhexlify(salt) + cryptedData = unhexlify(cryptedData) + + key1, key2, iv = self.gen_key_initctr(password, salt) + + # EXIT EARLY IF DIGEST DOESN'T MATCH + hmacDecrypt = HMAC.new(key2, cryptedData, SHA256) + if not self.is_equal(cryptedHmac, hmacDecrypt.hexdigest()): + return None + + # SET THE COUNTER AND THE CIPHER + ctr = Counter.new(128, initial_value=long(iv, 16)) + cipher = AES.new(key1, AES.MODE_CTR, counter=ctr) + + # DECRYPT PADDED DATA + decryptedData = cipher.decrypt(cryptedData) + + # UNPAD DATA + padding_length = ord(decryptedData[-1]) + decryptedData = decryptedData[:-padding_length] + + return decryptedData + + def is_equal(self, a, b): + # http://codahale.com/a-lesson-in-timing-attacks/ + if len(a) != len(b): + return False + + result = 0 + for x, y in zip(a, b): + result |= ord(x) ^ ord(y) + return result == 0 + - diff --git a/library/cloud/cloudformation b/library/cloud/cloudformation index e072f3923f8..02132f56325 100644 --- a/library/cloud/cloudformation +++ b/library/cloud/cloudformation @@ -196,7 +196,7 @@ def main(): template_parameters=dict(required=False, type='dict', default={}), state=dict(default='present', choices=['present', 'absent']), template=dict(default=None, required=True), - disable_rollback=dict(default=False), + disable_rollback=dict(default=False, type='bool'), tags=dict(default=None) ) ) @@ -250,7 +250,7 @@ def main(): operation = 'CREATE' except Exception, err: error_msg = boto_exception(err) - if 'AlreadyExistsException' 
in error_msg: + if 'AlreadyExistsException' in error_msg or 'already exists' in error_msg: update = True else: module.fail_json(msg=error_msg) diff --git a/library/cloud/digital_ocean b/library/cloud/digital_ocean index a6721a55da1..efebf5f1bcf 100644 --- a/library/cloud/digital_ocean +++ b/library/cloud/digital_ocean @@ -20,7 +20,7 @@ DOCUMENTATION = ''' module: digital_ocean short_description: Create/delete a droplet/SSH_key in DigitalOcean description: - - Create/delete a droplet in DigitalOcean and optionally waits for it to be 'running', or deploy an SSH key. + - Create/delete a droplet in DigitalOcean and optionally wait for it to be 'running', or deploy an SSH key. version_added: "1.3" options: command: @@ -35,10 +35,10 @@ options: choices: ['present', 'active', 'absent', 'deleted'] client_id: description: - - Digital Ocean manager id. + - DigitalOcean manager id. api_key: description: - - Digital Ocean api key. + - DigitalOcean api key. id: description: - Numeric, the droplet id you want to operate on. @@ -47,34 +47,40 @@ options: - String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key. unique_name: description: - - Bool, require unique hostnames. By default, digital ocean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence. + - Bool, require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence. version_added: "1.4" default: "no" choices: [ "yes", "no" ] size_id: description: - - Numeric, this is the id of the size you would like the droplet created at. + - Numeric, this is the id of the size you would like the droplet created with. image_id: description: - Numeric, this is the id of the image you would like the droplet created with. region_id: description: - - "Numeric, this is the id of the region you would like your server" + - "Numeric, this is the id of the region you would like your server to be created in." ssh_key_ids: description: - - Optional, comma separated list of ssh_key_ids that you would like to be added to the server + - Optional, comma separated list of ssh_key_ids that you would like to be added to the server. virtio: description: - - "Bool, turn on virtio driver in droplet for improved network and storage I/O" + - "Bool, turn on virtio driver in droplet for improved network and storage I/O." version_added: "1.4" default: "yes" choices: [ "yes", "no" ] private_networking: description: - - "Bool, add an additional, private network interface to droplet for inter-droplet communication" + - "Bool, add an additional, private network interface to droplet for inter-droplet communication." version_added: "1.4" default: "no" choices: [ "yes", "no" ] + backups_enabled: + description: + - Optional, Boolean, enables backups for your droplet. + version_added: "1.6" + default: "no" + choices: [ "yes", "no" ] wait: description: - Wait for the droplet to be in state 'running' before returning. If wait is "no" an ip_address may not be returned. 
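The new backups_enabled option is passed straight through to dopy, as the Droplet.add() change just below shows. A hedged sketch of the underlying client call, with placeholder credentials and illustrative IDs:

```python
from dopy.manager import DoManager

# Placeholder credentials; real values come from module params or the
# DO_CLIENT_ID / DO_API_KEY environment variables.
manager = DoManager("CLIENT_ID_XXX", "API_KEY_XXX")

# Positional arguments mirror the patched Droplet.add() call; the trailing
# backups_enabled flag is why dopy >= 0.2.3 is now required.
json = manager.new_droplet("test_droplet",  # name
                           1,               # size_id
                           3,               # image_id
                           2,               # region_id
                           None,            # ssh_key_ids
                           True,            # virtio
                           False,           # private_networking
                           True)            # backups_enabled
```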
@@ -164,11 +170,11 @@ try: import dopy from dopy.manager import DoError, DoManager except ImportError, e: - print "failed=True msg='dopy >= 0.2.2 required for this module'" + print "failed=True msg='dopy >= 0.2.3 required for this module'" sys.exit(1) -if dopy.__version__ < '0.2.2': - print "failed=True msg='dopy >= 0.2.2 required for this module'" +if dopy.__version__ < '0.2.3': + print "failed=True msg='dopy >= 0.2.3 required for this module'" sys.exit(1) class TimeoutError(DoError): @@ -229,8 +235,8 @@ class Droplet(JsonfyMixIn): cls.manager = DoManager(client_id, api_key) @classmethod - def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False): - json = cls.manager.new_droplet(name, size_id, image_id, region_id, ssh_key_ids, virtio, private_networking) + def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False, backups_enabled=False): + json = cls.manager.new_droplet(name, size_id, image_id, region_id, ssh_key_ids, virtio, private_networking, backups_enabled) droplet = cls(json) return droplet @@ -333,7 +339,8 @@ def core(module): region_id=getkeyordie('region_id'), ssh_key_ids=module.params['ssh_key_ids'], virtio=module.params['virtio'], - private_networking=module.params['private_networking'] + private_networking=module.params['private_networking'], + backups_enabled=module.params['backups_enabled'], ) if droplet.is_powered_on(): @@ -348,7 +355,7 @@ def core(module): elif state in ('absent', 'deleted'): # First, try to find a droplet by id. - droplet = Droplet.find(id=getkeyordie('id')) + droplet = Droplet.find(module.params['id']) # If we couldn't find the droplet and the user is allowing unique # hostnames, then check to see if a droplet with the specified @@ -392,8 +399,9 @@ def main(): image_id = dict(type='int'), region_id = dict(type='int'), ssh_key_ids = dict(default=''), - virtio = dict(type='bool', choices=BOOLEANS, default='yes'), - private_networking = dict(type='bool', choices=BOOLEANS, default='no'), + virtio = dict(type='bool', default='yes'), + private_networking = dict(type='bool', default='no'), + backups_enabled = dict(type='bool', default='no'), id = dict(aliases=['droplet_id'], type='int'), unique_name = dict(type='bool', default='no'), wait = dict(type='bool', default=True), diff --git a/library/cloud/digital_ocean_domain b/library/cloud/digital_ocean_domain new file mode 100644 index 00000000000..ef9338c1765 --- /dev/null +++ b/library/cloud/digital_ocean_domain @@ -0,0 +1,242 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. +DOCUMENTATION = ''' +--- +module: digital_ocean_domain +short_description: Create/delete a DNS record in DigitalOcean +description: + - Create/delete a DNS record in DigitalOcean. +version_added: "1.6" +options: + state: + description: + - Indicate desired state of the target.
+ default: present + choices: ['present', 'active', 'absent', 'deleted'] + client_id: + description: + - Digital Ocean manager id. + api_key: + description: + - Digital Ocean api key. + id: + description: + - Numeric, the droplet id you want to operate on. + name: + description: + - String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key, or the name of a domain. + ip: + description: + - The IP address to point a domain at. + +notes: + - Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY. +''' + + +EXAMPLES = ''' +# Create a domain record + +- digital_ocean_domain: > + state=present + name=my.digitalocean.domain + ip=127.0.0.1 + +# Create a droplet and a corresponding domain record + +- digital_ocean: > + state=present + name=test_droplet + size_id=1 + region_id=2 + image_id=3 + register: test_droplet + +- digital_ocean_domain: > + state=present + name={{ test_droplet.name }}.my.domain + ip={{ test_droplet.ip_address }} +''' + +import sys +import os +import time + +try: + from dopy.manager import DoError, DoManager +except ImportError as e: + print "failed=True msg='dopy required for this module'" + sys.exit(1) + +class TimeoutError(DoError): + def __init__(self, msg, id): + super(TimeoutError, self).__init__(msg) + self.id = id + +class JsonfyMixIn(object): + def to_json(self): + return self.__dict__ + +class DomainRecord(JsonfyMixIn): + manager = None + + def __init__(self, json): + self.__dict__.update(json) + update_attr = __init__ + + def update(self, data = None, record_type = None): + json = self.manager.edit_domain_record(self.domain_id, + self.id, + record_type if record_type is not None else self.record_type, + data if data is not None else self.data) + self.__dict__.update(json) + return self + + def destroy(self): + json = self.manager.destroy_domain_record(self.domain_id, self.id) + return json + +class Domain(JsonfyMixIn): + manager = None + + def __init__(self, domain_json): + self.__dict__.update(domain_json) + + def destroy(self): + self.manager.destroy_domain(self.id) + + def records(self): + json = self.manager.all_domain_records(self.id) + return map(DomainRecord, json) + + @classmethod + def add(cls, name, ip): + json = cls.manager.new_domain(name, ip) + return cls(json) + + @classmethod + def setup(cls, client_id, api_key): + cls.manager = DoManager(client_id, api_key) + DomainRecord.manager = cls.manager + + @classmethod + def list_all(cls): + domains = cls.manager.all_domains() + return map(cls, domains) + + @classmethod + def find(cls, name=None, id=None): + if name is None and id is None: + return False + + domains = Domain.list_all() + + if id is not None: + for domain in domains: + if domain.id == id: + return domain + + if name is not None: + for domain in domains: + if domain.name == name: + return domain + + return False + +def core(module): + def getkeyordie(k): + v = module.params[k] + if v is None: + module.fail_json(msg='Unable to load %s' % k) + return v + + try: + # params['client_id'] will be None even if client_id is not passed in + client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID'] + api_key = module.params['api_key'] or os.environ['DO_API_KEY'] + except KeyError, e: + module.fail_json(msg='Unable to load %s' % e.message) + + changed = True + state = module.params['state'] + + Domain.setup(client_id, api_key) + if state in ('present'): + domain = Domain.find(id=module.params["id"]) + + if not domain: + domain = Domain.find(name=getkeyordie("name")) + + if not
domain: + domain = Domain.add(getkeyordie("name"), + getkeyordie("ip")) + module.exit_json(changed=True, domain=domain.to_json()) + else: + records = domain.records() + at_record = None + for record in records: + if record.name == "@": + at_record = record + + if not at_record.data == getkeyordie("ip"): + record.update(data=getkeyordie("ip"), record_type='A') + module.exit_json(changed=True, domain=Domain.find(id=record.domain_id).to_json()) + + module.exit_json(changed=False, domain=domain.to_json()) + + elif state in ('absent'): + domain = None + if "id" in module.params: + domain = Domain.find(id=module.params["id"]) + + if not domain and "name" in module.params: + domain = Domain.find(name=module.params["name"]) + + if not domain: + module.exit_json(changed=False, msg="Domain not found.") + + event_json = domain.destroy() + module.exit_json(changed=True, event=event_json) + + +def main(): + module = AnsibleModule( + argument_spec = dict( + state = dict(choices=['active', 'present', 'absent', 'deleted'], default='present'), + client_id = dict(aliases=['CLIENT_ID'], no_log=True), + api_key = dict(aliases=['API_KEY'], no_log=True), + name = dict(type='str'), + id = dict(aliases=['droplet_id'], type='int'), + ip = dict(type='str'), + ), + required_one_of = ( + ['id', 'name'], + ), + ) + + try: + core(module) + except TimeoutError as e: + module.fail_json(msg=str(e), id=e.id) + except (DoError, Exception) as e: + module.fail_json(msg=str(e)) + +# import module snippets +from ansible.module_utils.basic import * + +main() diff --git a/library/cloud/digital_ocean_sshkey b/library/cloud/digital_ocean_sshkey new file mode 100644 index 00000000000..8ae7af47793 --- /dev/null +++ b/library/cloud/digital_ocean_sshkey @@ -0,0 +1,178 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. +DOCUMENTATION = ''' +--- +module: digital_ocean_sshkey +short_description: Create/delete an SSH key in DigitalOcean +description: + - Create/delete an SSH key. +version_added: "1.6" +options: + state: + description: + - Indicate desired state of the target. + default: present + choices: ['present', 'absent'] + client_id: + description: + - Digital Ocean manager id. + api_key: + description: + - Digital Ocean api key. + id: + description: + - Numeric, the SSH key id you want to operate on. + name: + description: + - String, this is the name of an SSH key to create or destroy. + ssh_pub_key: + description: + - The public SSH key you want to add to your account. + +notes: + - Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY. +''' + + +EXAMPLES = ''' +# Ensure a SSH key is present +# If a key matches this name, will return the ssh key id and changed = False +# If no existing key matches this name, a new key is created, the ssh key id is returned and changed = True + +- digital_ocean_sshkey: > + state=present + name=my_ssh_key + ssh_pub_key='ssh-rsa AAAA...'
+ client_id=XXX + api_key=XXX + +''' + +import sys +import os +import time + +try: + from dopy.manager import DoError, DoManager +except ImportError as e: + print "failed=True msg='dopy required for this module'" + sys.exit(1) + +class TimeoutError(DoError): + def __init__(self, msg, id): + super(TimeoutError, self).__init__(msg) + self.id = id + +class JsonfyMixIn(object): + def to_json(self): + return self.__dict__ + +class SSH(JsonfyMixIn): + manager = None + + def __init__(self, ssh_key_json): + self.__dict__.update(ssh_key_json) + update_attr = __init__ + + def destroy(self): + self.manager.destroy_ssh_key(self.id) + return True + + @classmethod + def setup(cls, client_id, api_key): + cls.manager = DoManager(client_id, api_key) + + @classmethod + def find(cls, name): + if not name: + return False + keys = cls.list_all() + for key in keys: + if key.name == name: + return key + return False + + @classmethod + def list_all(cls): + json = cls.manager.all_ssh_keys() + return map(cls, json) + + @classmethod + def add(cls, name, key_pub): + json = cls.manager.new_ssh_key(name, key_pub) + return cls(json) + +def core(module): + def getkeyordie(k): + v = module.params[k] + if v is None: + module.fail_json(msg='Unable to load %s' % k) + return v + + try: + # params['client_id'] will be None even if client_id is not passed in + client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID'] + api_key = module.params['api_key'] or os.environ['DO_API_KEY'] + except KeyError, e: + module.fail_json(msg='Unable to load %s' % e.message) + + changed = True + state = module.params['state'] + + SSH.setup(client_id, api_key) + name = getkeyordie('name') + if state in ('present'): + key = SSH.find(name) + if key: + module.exit_json(changed=False, ssh_key=key.to_json()) + key = SSH.add(name, getkeyordie('ssh_pub_key')) + module.exit_json(changed=True, ssh_key=key.to_json()) + + elif state in ('absent'): + key = SSH.find(name) + if not key: + module.exit_json(changed=False, msg='SSH key with the name of %s is not found.' 
% name) + key.destroy() + module.exit_json(changed=True) + +def main(): + module = AnsibleModule( + argument_spec = dict( + state = dict(choices=['present', 'absent'], default='present'), + client_id = dict(aliases=['CLIENT_ID'], no_log=True), + api_key = dict(aliases=['API_KEY'], no_log=True), + name = dict(type='str'), + id = dict(aliases=['droplet_id'], type='int'), + ssh_pub_key = dict(type='str'), + ), + required_one_of = ( + ['id', 'name'], + ), + ) + + try: + core(module) + except TimeoutError as e: + module.fail_json(msg=str(e), id=e.id) + except (DoError, Exception) as e: + module.fail_json(msg=str(e)) + +# import module snippets +from ansible.module_utils.basic import * + +main() diff --git a/library/cloud/docker b/library/cloud/docker index a1e9a5074c8..3fb82fd7dc5 100644 --- a/library/cloud/docker +++ b/library/cloud/docker @@ -148,7 +148,7 @@ options: - Set the state of the container required: false default: present - choices: [ "present", "stopped", "absent", "killed", "restarted" ] + choices: [ "present", "running", "stopped", "absent", "killed", "restarted" ] aliases: [] privileged: description: @@ -169,6 +169,20 @@ options: default: null aliases: [] version_added: "1.5" + stdin_open: + description: + - Keep stdin open + required: false + default: false + aliases: [] + version_added: "1.6" + tty: + description: + - Allocate a pseudo-tty + required: false + default: false + aliases: [] + version_added: "1.6" author: Cove Schneider, Joshua Conner, Pavel Antonov requirements: [ "docker-py >= 0.3.0" ] ''' @@ -287,6 +301,7 @@ import sys from urlparse import urlparse try: import docker.client + import docker.utils from requests.exceptions import * except ImportError, e: HAS_DOCKER_PY = False @@ -331,7 +346,7 @@ class DockerManager: if self.module.params.get('volumes'): self.binds = {} self.volumes = {} - vols = self.parse_list_from_param('volumes') + vols = self.module.params.get('volumes') for vol in vols: parts = vol.split(":") # host mount (e.g. 
/mnt:/tmp, bind mounts host's /tmp to /mnt in the container) @@ -345,46 +360,32 @@ class DockerManager: self.lxc_conf = None if self.module.params.get('lxc_conf'): self.lxc_conf = [] - options = self.parse_list_from_param('lxc_conf') + options = self.module.params.get('lxc_conf') for option in options: parts = option.split(':') self.lxc_conf.append({"Key": parts[0], "Value": parts[1]}) self.exposed_ports = None if self.module.params.get('expose'): - expose = self.parse_list_from_param('expose') - self.exposed_ports = self.get_exposed_ports(expose) + self.exposed_ports = self.get_exposed_ports(self.module.params.get('expose')) self.port_bindings = None if self.module.params.get('ports'): - ports = self.parse_list_from_param('ports') - self.port_bindings = self.get_port_bindings(ports) + self.port_bindings = self.get_port_bindings(self.module.params.get('ports')) self.links = None if self.module.params.get('links'): - links = self.parse_list_from_param('links') - self.links = dict(map(lambda x: x.split(':'), links)) + self.links = dict(map(lambda x: x.split(':'), self.module.params.get('links'))) self.env = None if self.module.params.get('env'): - env = self.parse_list_from_param('env') - self.env = dict(map(lambda x: x.split("="), env)) + self.env = dict(map(lambda x: x.split("="), self.module.params.get('env'))) # connect to docker server docker_url = urlparse(module.params.get('docker_url')) self.client = docker.Client(base_url=docker_url.geturl()) - def parse_list_from_param(self, param_name, delimiter=','): - """ - Get a list from a module parameter, whether it's specified as a delimiter-separated string or is already in list form. - """ - param_list = self.module.params.get(param_name) - if not isinstance(param_list, list): - param_list = param_list.split(delimiter) - return param_list - - def get_exposed_ports(self, expose_list): """ Parse the ports and protocols (TCP/UDP) to expose in the docker-py `create_container` call from the docker CLI-style syntax. 
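For reference, a hypothetical helper (not the module's code) sketching the docker CLI-style port mapping syntax parsed here, including the int-vs-string wrinkle the next hunk guards against:

```python
def parse_port_spec(spec):
    # YAML may hand us bare ints like 80, so normalize to a string first --
    # the same reason the next hunk wraps ports in str() before splitting.
    spec = str(spec)
    port_part, _, proto = spec.partition('/')
    parts = port_part.split(':')
    container_port = parts[-1]
    host_port = parts[0] if len(parts) > 1 else None
    return host_port, container_port, proto or 'tcp'

print(parse_port_spec('8080:80/udp'))  # ('8080', '80', 'udp')
print(parse_port_spec(443))            # (None, '443', 'tcp')
```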
@@ -409,7 +410,9 @@ class DockerManager: """ binds = {} for port in ports: - parts = port.split(':') + # ports could potentially be an array like [80, 443], so we make sure they're strings + # before splitting + parts = str(port).split(':') container_port = parts[-1] if '/' not in container_port: container_port = int(parts[-1]) @@ -522,15 +525,19 @@ class DockerManager: 'command': self.module.params.get('command'), 'ports': self.exposed_ports, 'volumes': self.volumes, - 'volumes_from': self.module.params.get('volumes_from'), 'mem_limit': _human_to_bytes(self.module.params.get('memory_limit')), 'environment': self.env, - 'dns': self.module.params.get('dns'), 'hostname': self.module.params.get('hostname'), 'detach': self.module.params.get('detach'), 'name': self.module.params.get('name'), + 'stdin_open': self.module.params.get('stdin_open'), + 'tty': self.module.params.get('tty'), } + if docker.utils.compare_version('1.10', self.client.version()['ApiVersion']) < 0: + params['dns'] = self.module.params.get('dns') + params['volumes_from'] = self.module.params.get('volumes_from') + def do_create(count, params): results = [] for _ in range(count): @@ -558,6 +565,11 @@ class DockerManager: 'privileged': self.module.params.get('privileged'), 'links': self.links, } + + if docker.utils.compare_version('1.10', self.client.version()['ApiVersion']) >= 0: + params['dns'] = self.module.params.get('dns') + params['volumes_from'] = self.module.params.get('volumes_from') + for i in containers: self.client.start(i['Id'], **params) self.increment_counter('started') @@ -616,12 +628,12 @@ def main(): count = dict(default=1), image = dict(required=True), command = dict(required=False, default=None), - expose = dict(required=False, default=None), - ports = dict(required=False, default=None), + expose = dict(required=False, default=None, type='list'), + ports = dict(required=False, default=None, type='list'), publish_all_ports = dict(default=False, type='bool'), - volumes = dict(default=None), + volumes = dict(default=None, type='list'), volumes_from = dict(default=None), - links = dict(default=None), + links = dict(default=None, type='list'), memory_limit = dict(default=0), memory_swap = dict(default=0), docker_url = dict(default='unix://var/run/docker.sock'), @@ -629,13 +641,15 @@ def main(): password = dict(), email = dict(), hostname = dict(default=None), - env = dict(), + env = dict(type='list'), dns = dict(), detach = dict(default=True, type='bool'), - state = dict(default='present', choices=['absent', 'present', 'stopped', 'killed', 'restarted']), + state = dict(default='running', choices=['absent', 'present', 'running', 'stopped', 'killed', 'restarted']), debug = dict(default=False, type='bool'), privileged = dict(default=False, type='bool'), - lxc_conf = dict(default=None), + stdin_open = dict(default=False, type='bool'), + tty = dict(default=False, type='bool'), + lxc_conf = dict(default=None, type='list'), name = dict(default=None) ) ) @@ -662,25 +676,35 @@ def main(): changed = False # start/stop containers - if state == "present": - - # make sure a container with `name` is running - if name and "/" + name not in map(lambda x: x.get('Name'), running_containers): + if state in [ "running", "present" ]: + + # make sure a container with `name` exists, if not create and start it + if name and "/" + name not in map(lambda x: x.get('Name'), deployed_containers): containers = manager.create_containers(1) - manager.start_containers(containers) - - # start more containers if we don't have enough - elif delta > 
0: - containers = manager.create_containers(delta) - manager.start_containers(containers) - - # stop containers if we have too many - elif delta < 0: - containers_to_stop = running_containers[0:abs(delta)] - containers = manager.stop_containers(containers_to_stop) - manager.remove_containers(containers_to_stop) - - facts = manager.get_running_containers() + if state == "present": # otherwise it gets (re)started later anyway + manager.start_containers(containers) + running_containers = manager.get_running_containers() + deployed_containers = manager.get_deployed_containers() + + if state == "running": + # make sure a container with `name` is running + if name and "/" + name not in map(lambda x: x.get('Name'), running_containers): + manager.start_containers(deployed_containers) + + # start more containers if we don't have enough + elif delta > 0: + containers = manager.create_containers(delta) + manager.start_containers(containers) + + # stop containers if we have too many + elif delta < 0: + containers_to_stop = running_containers[0:abs(delta)] + containers = manager.stop_containers(containers_to_stop) + manager.remove_containers(containers_to_stop) + + facts = manager.get_running_containers() + else: + facts = manager.get_deployed_containers() # stop and remove containers elif state == "absent": diff --git a/library/cloud/docker_image b/library/cloud/docker_image index a1e9a5074c8..2f5a02b4521 100644 --- a/library/cloud/docker_image +++ b/library/cloud/docker_image @@ -1,4 +1,4 @@ -#!/usr/bin/env python +#!/usr/bin/python # # (c) 2014, Pavel Antonov @@ -137,6 +137,9 @@ class DockerImageManager: self.changed = True for chunk in stream: + if not chunk: + continue + chunk_json = json.loads(chunk) if 'error' in chunk_json: diff --git a/library/cloud/ec2 b/library/cloud/ec2 index e050611fcf8..5935b7dc578 100644 --- a/library/cloud/ec2 +++ b/library/cloud/ec2 @@ -67,6 +67,13 @@ options: required: true default: null aliases: [] + spot_price: + version_added: "1.5" + description: + - Maximum spot price to bid. If not set, a regular on-demand instance is requested. A spot request is made with this maximum bid. When it is filled, the instance is started. + required: false + default: null + aliases: [] image: description: - I(emi) (or I(ami)) to use for the instance @@ -97,24 +104,12 @@ options: - how long before wait gives up, in seconds default: 300 aliases: [] - ec2_url: + spot_wait_timeout: + version_added: "1.5" description: - - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used - required: false - default: null + - how long to wait for the spot instance request to be fulfilled + default: 600 aliases: [] - aws_secret_key: - description: - - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. - required: false - default: null - aliases: [ 'ec2_secret_key', 'secret_key' ] - aws_access_key: - description: - - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. - required: false - default: null - aliases: [ 'ec2_access_key', 'access_key' ] count: description: - number of instances to launch @@ -157,7 +152,7 @@ options: default: null aliases: [] assign_public_ip: - version_added: "1.4" + version_added: "1.5" description: - when provisioning within vpc, assign a public IP address.
Boto library must be 2.13.0+ required: false @@ -184,6 +179,12 @@ options: required: false default: null aliases: [] + source_dest_check: + version_added: "1.6" + description: + - Enable or Disable the Source/Destination checks (for NAT instances and Virtual Routers) + required: false + default: true state: version_added: "1.3" description: @@ -198,6 +199,12 @@ options: required: false default: null aliases: [] + ebs_optimized: + version_added: "1.6" + description: + - whether instance is using optimized EBS volumes, see U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) + required: false + default: false exact_count: version_added: "1.5" description: @@ -212,17 +219,9 @@ options: required: false default: null aliases: [] - validate_certs: - description: - - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. - required: false - default: "yes" - choices: ["yes", "no"] - aliases: [] - version_added: "1.5" -requirements: [ "boto" ] author: Seth Vidal, Tim Gerla, Lester Wade +extends_documentation_fragment: aws ''' EXAMPLES = ''' @@ -253,7 +252,7 @@ EXAMPLES = ''' db: postgres monitoring: yes -# Single instance with additional IOPS volume from snapshot +# Single instance with additional IOPS volume from snapshot and volume delete on termination local_action: module: ec2 key_name: mykey @@ -268,6 +267,7 @@ local_action: device_type: io1 iops: 1000 volume_size: 100 + delete_on_termination: true monitoring: yes # Multiple groups example @@ -311,6 +311,19 @@ local_action: vpc_subnet_id: subnet-29e63245 assign_public_ip: yes +# Spot instance example +- local_action: + module: ec2 + spot_price: 0.24 + spot_wait_timeout: 600 + keypair: mykey + group_id: sg-1dc53f72 + instance_type: m1.small + image: ami-6e649707 + wait: yes + vpc_subnet_id: subnet-29e63245 + assign_public_ip: yes + # Launch instances, runs some tasks # and then terminate them @@ -557,7 +570,8 @@ def get_instance_info(inst): 'root_device_type': inst.root_device_type, 'root_device_name': inst.root_device_name, 'state': inst.state, - 'hypervisor': inst.hypervisor} + 'hypervisor': inst.hypervisor, + 'ebs_optimized': inst.ebs_optimized} try: instance_info['virtualization_type'] = getattr(inst,'virtualization_type') except AttributeError: @@ -620,6 +634,17 @@ def create_block_device(module, ec2, volume): delete_on_termination=volume.get('delete_on_termination', False), iops=volume.get('iops')) +def boto_supports_param_in_spot_request(ec2, param): + """ + Check if the Boto library has a given parameter in its request_spot_instances() method. For example, the placement_group parameter wasn't added until 2.3.0.
+ + ec2: authenticated ec2 connection object + + Returns: + True if boto library has the named param as an argument on the request_spot_instances method, else False + """ + method = getattr(ec2, 'request_spot_instances') + return param in method.func_code.co_varnames def enforce_count(module, ec2): @@ -644,7 +669,6 @@ def enforce_count(module, ec2): for inst in instance_dict_array: instances.append(inst) - elif len(instances) > exact_count: changed = True to_remove = len(instances) - exact_count @@ -690,6 +714,7 @@ def create_instances(module, ec2, override_count=None): group_id = module.params.get('group_id') zone = module.params.get('zone') instance_type = module.params.get('instance_type') + spot_price = module.params.get('spot_price') image = module.params.get('image') if override_count: count = override_count @@ -700,6 +725,7 @@ def create_instances(module, ec2, override_count=None): ramdisk = module.params.get('ramdisk') wait = module.params.get('wait') wait_timeout = int(module.params.get('wait_timeout')) + spot_wait_timeout = int(module.params.get('spot_wait_timeout')) placement_group = module.params.get('placement_group') user_data = module.params.get('user_data') instance_tags = module.params.get('instance_tags') @@ -708,8 +734,10 @@ def create_instances(module, ec2, override_count=None): private_ip = module.params.get('private_ip') instance_profile_name = module.params.get('instance_profile_name') volumes = module.params.get('volumes') + ebs_optimized = module.params.get('ebs_optimized') exact_count = module.params.get('exact_count') count_tag = module.params.get('count_tag') + source_dest_check = module.boolean(module.params.get('source_dest_check')) # group_id and group_name are exclusive of each other if group_id and group_name: @@ -760,18 +788,16 @@ def create_instances(module, ec2, override_count=None): try: params = {'image_id': image, 'key_name': key_name, - 'client_token': id, - 'min_count': count_remaining, - 'max_count': count_remaining, 'monitoring_enabled': monitoring, 'placement': zone, - 'placement_group': placement_group, 'instance_type': instance_type, 'kernel_id': kernel, 'ramdisk_id': ramdisk, - 'private_ip_address': private_ip, 'user_data': user_data} + if ebs_optimized: + params['ebs_optimized'] = ebs_optimized + if boto_supports_profile_name_arg(ec2): params['instance_profile_name'] = instance_profile_name else: @@ -788,13 +814,19 @@ def create_instances(module, ec2, override_count=None): msg="assign_public_ip only available with vpc_subnet_id") else: - interface = boto.ec2.networkinterface.NetworkInterfaceSpecification( - subnet_id=vpc_subnet_id, - groups=group_id, - associate_public_ip_address=assign_public_ip) + if private_ip: + interface = boto.ec2.networkinterface.NetworkInterfaceSpecification( + subnet_id=vpc_subnet_id, + private_ip_address=private_ip, + groups=group_id, + associate_public_ip_address=assign_public_ip) + else: + interface = boto.ec2.networkinterface.NetworkInterfaceSpecification( + subnet_id=vpc_subnet_id, + groups=group_id, + associate_public_ip_address=assign_public_ip) interfaces = boto.ec2.networkinterface.NetworkInterfaceCollection(interface) - params['network_interfaces'] = interfaces - + params['network_interfaces'] = interfaces else: params['subnet_id'] = vpc_subnet_id if vpc_subnet_id: @@ -814,38 +846,88 @@ def create_instances(module, ec2, override_count=None): params['block_device_map'] = bdm - res = ec2.run_instances(**params) - except boto.exception.BotoServerError, e: - module.fail_json(msg = "%s: %s" % (e.error_code, 
e.error_message)) - - instids = [ i.id for i in res.instances ] - while True: - try: - res.connection.get_all_instances(instids) - break - except boto.exception.EC2ResponseError, e: - if "InvalidInstanceID.NotFound" in str(e): - # there's a race between start and get an instance - continue + # check to see if we're using spot pricing first before starting instances + if not spot_price: + if assign_public_ip and private_ip: + params.update(dict( + min_count = count_remaining, + max_count = count_remaining, + client_token = id, + placement_group = placement_group, + )) else: - module.fail_json(msg = str(e)) + params.update(dict( + min_count = count_remaining, + max_count = count_remaining, + client_token = id, + placement_group = placement_group, + private_ip_address = private_ip, + )) + + res = ec2.run_instances(**params) + instids = [ i.id for i in res.instances ] + while True: + try: + ec2.get_all_instances(instids) + break + except boto.exception.EC2ResponseError as e: + if "InvalidInstanceID.NotFound" in str(e): + # there's a race between start and get an instance + continue + else: + module.fail_json(msg = str(e)) + else: + if private_ip: + module.fail_json( + msg='private_ip only available with on-demand (non-spot) instances') + if boto_supports_param_in_spot_request(ec2, placement_group): + params['placement_group'] = placement_group + elif placement_group : + module.fail_json( + msg="placement_group parameter requires Boto version 2.3.0 or higher.") + + params.update(dict( + count = count_remaining, + )) + res = ec2.request_spot_instances(spot_price, **params) + + # Now we have to do the intermediate waiting + if wait: + spot_req_inst_ids = dict() + spot_wait_timeout = time.time() + spot_wait_timeout + while spot_wait_timeout > time.time(): + reqs = ec2.get_all_spot_instance_requests() + for sirb in res: + if sirb.id in spot_req_inst_ids: + continue + for sir in reqs: + if sir.id == sirb.id and sir.instance_id is not None: + spot_req_inst_ids[sirb.id] = sir.instance_id + if len(spot_req_inst_ids) < count: + time.sleep(5) + else: + break + if spot_wait_timeout <= time.time(): + module.fail_json(msg = "wait for spot requests timeout on %s" % time.asctime()) + instids = spot_req_inst_ids.values() + except boto.exception.BotoServerError, e: + module.fail_json(msg = "Instance creation failed => %s: %s" % (e.error_code, e.error_message)) if instance_tags: try: ec2.create_tags(instids, instance_tags) except boto.exception.EC2ResponseError, e: - module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) + module.fail_json(msg = "Instance tagging failed => %s: %s" % (e.error_code, e.error_message)) # wait here until the instances are up - this_res = [] num_running = 0 wait_timeout = time.time() + wait_timeout while wait_timeout > time.time() and num_running < len(instids): - res_list = res.connection.get_all_instances(instids) - if len(res_list) > 0: - this_res = res_list[0] - num_running = len([ i for i in this_res.instances if i.state=='running' ]) - else: + res_list = ec2.get_all_instances(instids) + num_running = 0 + for res in res_list: + num_running += len([ i for i in res.instances if i.state=='running' ]) + if len(res_list) <= 0: # got a bad response of some sort, possibly due to # stale/cached data. 
Wait a second and then try again time.sleep(1) @@ -859,8 +941,14 @@ def create_instances(module, ec2, override_count=None): # waiting took too long module.fail_json(msg = "wait for instances running timeout on %s" % time.asctime()) - for inst in this_res.instances: - running_instances.append(inst) + #We do this after the loop ends so that we end up with one list + for res in res_list: + running_instances.extend(res.instances) + + # Enabled by default by Amazon + if not source_dest_check: + for inst in res.instances: + inst.modify_attribute('sourceDestCheck', False) instance_dict_array = [] created_instance_ids = [] @@ -1020,13 +1108,15 @@ def main(): group_id = dict(type='list'), zone = dict(aliases=['aws_zone', 'ec2_zone']), instance_type = dict(aliases=['type']), + spot_price = dict(), image = dict(), kernel = dict(), - count = dict(default='1'), + count = dict(type='int', default='1'), monitoring = dict(type='bool', default=False), ramdisk = dict(), wait = dict(type='bool', default=False), wait_timeout = dict(default=300), + spot_wait_timeout = dict(default=600), placement_group = dict(), user_data = dict(), instance_tags = dict(type='dict'), @@ -1035,10 +1125,12 @@ def main(): private_ip = dict(), instance_profile_name = dict(), instance_ids = dict(type='list'), + source_dest_check = dict(type='bool', default=True), state = dict(default='present'), exact_count = dict(type='int', default=None), count_tag = dict(), volumes = dict(type='list'), + ebs_optimized = dict(), ) ) diff --git a/library/cloud/ec2_ami b/library/cloud/ec2_ami index 866f2caf767..3baf70a438f 100644 --- a/library/cloud/ec2_ami +++ b/library/cloud/ec2_ami @@ -22,24 +22,6 @@ short_description: create or destroy an image in ec2, return imageid description: - Creates or deletes ec2 images. This module has a dependency on python-boto >= 2.5 options: - ec2_url: - description: - - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used - required: false - default: null - aliases: [] - aws_secret_key: - description: - - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. - required: false - default: null - aliases: [ 'ec2_secret_key', 'secret_key' ] - aws_access_key: - description: - - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. - required: false - default: null - aliases: ['ec2_access_key', 'access_key' ] instance_id: description: - instance id of the image to create @@ -101,17 +83,9 @@ options: required: false default: null aliases: [] - validate_certs: - description: - - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. - required: false - default: "yes" - choices: ["yes", "no"] - aliases: [] - version_added: "1.5" -requirements: [ "boto" ] author: Evan Duffield +extends_documentation_fragment: aws ''' # Thank you to iAcquire for sponsoring development of this module. 
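The spot-instance support above detects optional Boto parameters by introspecting the function object rather than pinning a version. A standalone sketch of that trick, with FakeConnection as an illustrative stand-in:

```python
def supports_param(obj, method_name, param):
    method = getattr(obj, method_name)
    # func_code in Python 2 is spelled __code__ in Python 3; co_varnames
    # lists the function's parameters (and locals), which is good enough
    # for a presence check like this.
    return param in method.__code__.co_varnames

class FakeConnection(object):
    def request_spot_instances(self, price, count=1, placement_group=None):
        pass

conn = FakeConnection()
print(supports_param(conn, 'request_spot_instances', 'placement_group'))    # True
print(supports_param(conn, 'request_spot_instances', 'network_interfaces'))  # False
```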
diff --git a/library/cloud/ec2_ami_search b/library/cloud/ec2_ami_search new file mode 100644 index 00000000000..932dca855a8 --- /dev/null +++ b/library/cloud/ec2_ami_search @@ -0,0 +1,196 @@ +#!/usr/bin/python +# +# (c) 2013, Nimbis Services +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. +DOCUMENTATION = ''' +--- +module: ec2_ami_search +short_description: Retrieve AWS AMI for a given operating system. +version_added: "1.6" +description: + - Look up the most recent AMI on AWS for a given operating system. + - Returns C(ami), C(aki), C(ari), C(serial), C(tag) + - If there is no AKI or ARI associated with an image, these will be C(null). + - Only supports images from cloud-images.ubuntu.com + - 'Example output: C({"ami": "ami-69f5a900", "changed": false, "aki": "aki-88aa75e1", "tag": "release", "ari": null, "serial": "20131024"})' +version_added: "1.6" +options: + distro: + description: Linux distribution (e.g., C(ubuntu)) + required: true + choices: ["ubuntu"] + release: + description: short name of the release (e.g., C(precise)) + required: true + stream: + description: Type of release. + required: false + default: "server" + choices: ["server", "desktop"] + store: + description: Back-end store for instance + required: false + default: "ebs" + choices: ["ebs", "instance-store"] + arch: + description: CPU architecture + required: false + default: "amd64" + choices: ["i386", "amd64"] + region: + description: EC2 region + required: false + default: us-east-1 + choices: ["ap-northeast-1", "ap-southeast-1", "ap-southeast-2", + "eu-west-1", "sa-east-1", "us-east-1", "us-west-1", "us-west-2"] + virt: + description: virtualization type + required: false + default: paravirtual + choices: ["paravirtual", "hvm"] + +author: Lorin Hochstein +''' + +EXAMPLES = ''' +- name: Launch an Ubuntu 12.04 (Precise Pangolin) EC2 instance + hosts: 127.0.0.1 + connection: local + tasks: + - name: Get the Ubuntu precise AMI + ec2_ami_search: distro=ubuntu release=precise region=us-west-1 store=instance-store + register: ubuntu_image + - name: Start the EC2 instance + ec2: image={{ ubuntu_image.ami }} instance_type=m1.small key_name=mykey +''' + +import csv +import json +import urllib2 +import urlparse + +SUPPORTED_DISTROS = ['ubuntu'] + +AWS_REGIONS = ['ap-northeast-1', + 'ap-southeast-1', + 'ap-southeast-2', + 'eu-west-1', + 'sa-east-1', + 'us-east-1', + 'us-west-1', + 'us-west-2'] + + +def get_url(module, url): + """ Get url and return response """ + try: + r = urllib2.urlopen(url) + except (urllib2.HTTPError, urllib2.URLError), e: + code = getattr(e, 'code', -1) + module.fail_json(msg="Request failed: %s" % str(e), status_code=code) + return r + + +def ubuntu(module): + """ Get the ami for ubuntu """ + + release = module.params['release'] + stream = module.params['stream'] + store = module.params['store'] + arch = module.params['arch'] + region = module.params['region'] + virt = module.params['virt'] + + url = get_ubuntu_url(release,
stream) + + req = get_url(module, url) + reader = csv.reader(req, delimiter='\t') + try: + ami, aki, ari, tag, serial = lookup_ubuntu_ami(reader, release, stream, + store, arch, region, virt) + module.exit_json(changed=False, ami=ami, aki=aki, ari=ari, tag=tag, + serial=serial) + except KeyError: + module.fail_json(msg="No matching AMI found") + + +def lookup_ubuntu_ami(table, release, stream, store, arch, region, virt): + """ Look up the Ubuntu AMI that matches query given a table of AMIs + + table: an iterable that returns a row of + (release, stream, tag, serial, region, ami, aki, ari, virt) + release: ubuntu release name + stream: 'server' or 'desktop' + store: 'ebs' or 'instance-store' + arch: 'i386' or 'amd64' + region: EC2 region + virt: 'paravirtual' or 'hvm' + + Returns (ami, aki, ari, tag, serial)""" + expected = (release, stream, store, arch, region, virt) + + for row in table: + (actual_release, actual_stream, tag, serial, + actual_store, actual_arch, actual_region, ami, aki, ari, + actual_virt) = row + actual = (actual_release, actual_stream, actual_store, actual_arch, + actual_region, actual_virt) + if actual == expected: + # aki and ari are sometimes blank + if aki == '': + aki = None + if ari == '': + ari = None + return (ami, aki, ari, tag, serial) + + raise KeyError() + + +def get_ubuntu_url(release, stream): + url = "https://cloud-images.ubuntu.com/query/%s/%s/released.current.txt" + return url % (release, stream) + + +def main(): + arg_spec = dict( + distro=dict(required=True, choices=SUPPORTED_DISTROS), + release=dict(required=True), + stream=dict(required=False, default='server', + choices=['desktop', 'server']), + store=dict(required=False, default='ebs', + choices=['ebs', 'instance-store']), + arch=dict(required=False, default='amd64', + choices=['i386', 'amd64']), + region=dict(required=False, default='us-east-1', choices=AWS_REGIONS), + virt=dict(required=False, default='paravirtual', + choices=['paravirtual', 'hvm']) + ) + module = AnsibleModule(argument_spec=arg_spec) + distro = module.params['distro'] + + if distro == 'ubuntu': + ubuntu(module) + else: + module.fail_json(msg="Unsupported distro: %s" % distro) + + + +# this is magic, see lib/ansible/module_common.py +#<<INCLUDE_ANSIBLE_MODULE_COMMON>> + +if __name__ == '__main__': + main() diff --git a/library/cloud/ec2_asg b/library/cloud/ec2_asg new file mode 100644 index 00000000000..6528d951180 --- /dev/null +++ b/library/cloud/ec2_asg @@ -0,0 +1,219 @@ +#!/usr/bin/python +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
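To make the row matching in lookup_ubuntu_ami() above concrete, a hedged walk-through against a single illustrative row (field values borrowed from the module's documented example output):

```python
# One tab-separated row of released.current.txt, already split into fields.
row = ("precise", "server", "release", "20131024",
       "instance-store", "amd64", "us-west-1",
       "ami-69f5a900", "aki-88aa75e1", "", "paravirtual")

(release, stream, tag, serial,
 store, arch, region, ami, aki, ari, virt) = row

expected = ("precise", "server", "instance-store", "amd64",
            "us-west-1", "paravirtual")

if (release, stream, store, arch, region, virt) == expected:
    # blank aki/ari columns come back as None, matching the module docs
    print(ami, aki or None, ari or None, tag, serial)
```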
+ +DOCUMENTATION = """ +--- +module: ec2_asg +short_description: Create or delete AWS Autoscaling Groups +description: + - Can create or delete AWS Autoscaling Groups + - Works with the ec2_lc module to manage Launch Configurations +version_added: "1.6" +author: Gareth Rushgrove +options: + state: + description: + - Create or delete the autoscaling group + required: true + choices: ['present', 'absent'] + name: + description: + - Unique name for group to be created or deleted + required: true + load_balancers: + description: + - List of ELB names to use for the group + required: false + availability_zones: + description: + - List of availability zone names in which to create the group. + required: false + launch_config_name: + description: + - Name of the Launch configuration to use for the group. See the ec2_lc module for managing these. + required: false + min_size: + description: + - Minimum number of instances in group + required: false + max_size: + description: + - Maximum number of instances in group + required: false + desired_capacity: + description: + - Desired number of instances in group + required: false + region: + description: + - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. + required: false + aliases: ['aws_region', 'ec2_region'] + vpc_zone_identifier: + description: + - List of VPC subnets to use + required: false + default: None +extends_documentation_fragment: aws +""" + +EXAMPLES = ''' +- ec2_asg: + name: special + load_balancers: 'lb1,lb2' + availability_zones: 'eu-west-1a,eu-west-1b' + launch_config_name: 'lc-1' + min_size: 1 + max_size: 10 + desired_capacity: 5 + vpc_zone_identifier: 'subnet-abcd1234,subnet-1a2b3c4d' +''' + +import sys +import time + +from ansible.module_utils.basic import * +from ansible.module_utils.ec2 import * + +try: + import boto.ec2.autoscale + from boto.ec2.autoscale import AutoScaleConnection, AutoScalingGroup + from boto.exception import BotoServerError +except ImportError: + print "failed=True msg='boto required for this module'" + sys.exit(1) + + +def enforce_required_arguments(module): + ''' Many arguments are not required for autoscaling group deletion, so + they cannot be mandatory arguments for the module; we enforce + them here for create/update instead ''' + missing_args = [] + for arg in ('min_size', 'max_size', 'launch_config_name', 'availability_zones'): + if module.params[arg] is None: + missing_args.append(arg) + if missing_args: + module.fail_json(msg="Missing required arguments for autoscaling group create/update: %s" % ",".join(missing_args)) + + +def create_autoscaling_group(connection, module): + enforce_required_arguments(module) + + group_name = module.params.get('name') + load_balancers = module.params['load_balancers'] + availability_zones = module.params['availability_zones'] + launch_config_name = module.params.get('launch_config_name') + min_size = module.params['min_size'] + max_size = module.params['max_size'] + desired_capacity = module.params.get('desired_capacity') + vpc_zone_identifier = module.params.get('vpc_zone_identifier') + + launch_configs = connection.get_all_launch_configurations(names=[launch_config_name]) + + as_groups = connection.get_all_groups(names=[group_name]) + + if not as_groups: + ag = AutoScalingGroup( + group_name=group_name, + load_balancers=load_balancers, + availability_zones=availability_zones, + launch_config=launch_configs[0], + min_size=min_size, + max_size=max_size, + desired_capacity=desired_capacity, + 
vpc_zone_identifier=vpc_zone_identifier, + connection=connection) + + try: + connection.create_auto_scaling_group(ag) + module.exit_json(changed=True) + except BotoServerError, e: + module.fail_json(msg=str(e)) + else: + as_group = as_groups[0] + changed = False + for attr in ('launch_config_name', 'max_size', 'min_size', 'desired_capacity', + 'vpc_zone_identifier', 'availability_zones'): + if getattr(as_group, attr) != module.params.get(attr): + changed = True + setattr(as_group, attr, module.params.get(attr)) + # handle loadbalancers separately because None != [] + load_balancers = module.params.get('load_balancers') or [] + if as_group.load_balancers != load_balancers: + changed = True + as_group.load_balancers = module.params.get('load_balancers') + + try: + if changed: + as_group.update() + module.exit_json(changed=changed) + except BotoServerError, e: + module.fail_json(msg=str(e)) + + +def delete_autoscaling_group(connection, module): + group_name = module.params.get('name') + groups = connection.get_all_groups(names=[group_name]) + if groups: + group = groups[0] + group.shutdown_instances() + + instances = True + while instances: + groups = connection.get_all_groups(names=[group_name]) + for group in groups: + if group.name == group_name: + if not group.instances: + instances = False + time.sleep(10) + + group.delete() + module.exit_json(changed=True) + else: + module.exit_json(changed=False) + + +def main(): + argument_spec = ec2_argument_spec() + argument_spec.update( + dict( + name=dict(required=True, type='str'), + load_balancers=dict(type='list'), + availability_zones=dict(type='list'), + launch_config_name=dict(type='str'), + min_size=dict(type='int'), + max_size=dict(type='int'), + desired_capacity=dict(type='int'), + vpc_zone_identifier=dict(type='str'), + state=dict(default='present', choices=['present', 'absent']), + ) + ) + module = AnsibleModule(argument_spec=argument_spec) + + state = module.params.get('state') + + region, ec2_url, aws_connect_params = get_aws_connection_info(module) + try: + connection = connect_to_aws(boto.ec2.autoscale, region, **aws_connect_params) + except boto.exception.NoAuthHandlerFound, e: + module.fail_json(msg=str(e)) + + if state == 'present': + create_autoscaling_group(connection, module) + elif state == 'absent': + delete_autoscaling_group(connection, module) + +main() diff --git a/library/cloud/ec2_eip b/library/cloud/ec2_eip index de041f42227..e1182108097 100644 --- a/library/cloud/ec2_eip +++ b/library/cloud/ec2_eip @@ -23,24 +23,6 @@ options: required: false choices: ['present', 'absent'] default: present - ec2_url: - description: - - URL to use to connect to EC2-compatible cloud (by default the module will use EC2 endpoints) - required: false - default: null - aliases: [ EC2_URL ] - ec2_access_key: - description: - - EC2 access key. If not specified then the EC2_ACCESS_KEY environment variable is used. - required: false - default: null - aliases: [ EC2_ACCESS_KEY ] - ec2_secret_key: - description: - - EC2 secret key. If not specified then the EC2_SECRET_KEY environment variable is used. - required: false - default: null - aliases: [ EC2_SECRET_KEY ] region: description: - the EC2 region to use @@ -53,16 +35,14 @@ options: required: false default: false version_added: "1.4" - validate_certs: + reuse_existing_ip_allowed: description: - - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. + - Reuse an EIP that is not associated with an instance (when available), instead of allocating a new one. 
required: false - default: "yes" - choices: ["yes", "no"] - aliases: [] - version_added: "1.5" + default: false + version_added: "1.6" -requirements: [ "boto" ] +extends_documentation_fragment: aws author: Lorin Hochstein notes: - This module will return C(public_ip) on success, which will contain the @@ -175,13 +155,27 @@ def ip_is_associated_with_instance(ec2, public_ip, instance_id, module): return False -def allocate_address(ec2, domain, module): - """ Allocate a new elastic IP address and return it """ +def allocate_address(ec2, domain, module, reuse_existing_ip_allowed): + """ Allocate a new elastic IP address (when needed) and return it """ # If we're in check mode, nothing else to do if module.check_mode: module.exit_json(change=True) - address = ec2.allocate_address(domain=domain) + if reuse_existing_ip_allowed: + if domain: + domain_filter = { 'domain' : domain } + else: + domain_filter = { 'domain' : 'standard' } + all_addresses = ec2.get_all_addresses(filters=domain_filter) + + unassociated_addresses = filter(lambda a: a.instance_id is None, all_addresses) + if unassociated_addresses: + address = unassociated_addresses[0]; + else: + address = ec2.allocate_address(domain=domain) + else: + address = ec2.allocate_address(domain=domain) + return address @@ -224,7 +218,8 @@ def main(): public_ip = dict(required=False, aliases= ['ip']), state = dict(required=False, default='present', choices=['present', 'absent']), - in_vpc = dict(required=False, choices=BOOLEANS, default=False), + in_vpc = dict(required=False, type='bool', default=False), + reuse_existing_ip_allowed = dict(required=False, type='bool', default=False), ) ) @@ -243,18 +238,19 @@ def main(): state = module.params.get('state') in_vpc = module.params.get('in_vpc') domain = "vpc" if in_vpc else None + reuse_existing_ip_allowed = module.params.get('reuse_existing_ip_allowed'); if state == 'present': if public_ip is None: if instance_id is None: - address = allocate_address(ec2, domain, module) + address = allocate_address(ec2, domain, module, reuse_existing_ip_allowed) module.exit_json(changed=True, public_ip=address.public_ip) else: # Determine if the instance is inside a VPC or not instance = find_instance(ec2, instance_id, module) if instance.vpc_id != None: domain = "vpc" - address = allocate_address(ec2, domain, module) + address = allocate_address(ec2, domain, module, reuse_existing_ip_allowed) else: address = find_address(ec2, public_ip, module) associate_ip_and_instance(ec2, address, instance_id, module) diff --git a/library/cloud/ec2_elb b/library/cloud/ec2_elb index ebd90aeda82..e76816fbca3 100644 --- a/library/cloud/ec2_elb +++ b/library/cloud/ec2_elb @@ -25,7 +25,6 @@ description: if state=absent is passed as an argument. - Will be marked changed when called only if there are ELBs found to operate on. version_added: "1.2" -requirements: [ "boto" ] author: John Jarvis options: state: @@ -33,29 +32,15 @@ options: - register or deregister the instance required: true choices: ['present', 'absent'] - instance_id: description: - EC2 Instance ID required: true - ec2_elbs: description: - List of ELB names, required for registration. The ec2_elbs fact should be used if there was a previous de-register. required: false default: None - aws_secret_key: - description: - - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. - required: false - default: None - aliases: ['ec2_secret_key', 'secret_key' ] - aws_access_key: - description: - - AWS access key. 
If not set then the value of the AWS_ACCESS_KEY environment variable is used. - required: false - default: None - aliases: ['ec2_access_key', 'access_key' ] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. @@ -82,7 +67,13 @@ options: choices: ["yes", "no"] aliases: [] version_added: "1.5" - + wait_timeout: + description: + - Number of seconds to wait for an instance to change state. If 0 then this module may return an error if a transient error occurs. If non-zero then any transient errors are ignored until the timeout is reached. Ignored when wait=no. + required: false + default: 0 + version_added: "1.6" +extends_documentation_fragment: aws """ EXAMPLES = """ @@ -124,16 +115,15 @@ class ElbManager: """Handles EC2 instance ELB registration and de-registration""" def __init__(self, module, instance_id=None, ec2_elbs=None, - aws_access_key=None, aws_secret_key=None, region=None): - self.aws_access_key = aws_access_key - self.aws_secret_key = aws_secret_key + region=None, **aws_connect_params): self.module = module self.instance_id = instance_id self.region = region + self.aws_connect_params = aws_connect_params self.lbs = self._get_instance_lbs(ec2_elbs) self.changed = False - def deregister(self, wait): + def deregister(self, wait, timeout): """De-register the instance from all ELBs and wait for the ELB to report it out-of-service""" @@ -146,18 +136,17 @@ class ElbManager: return if wait: - self._await_elb_instance_state(lb, 'OutOfService', initial_state) + self._await_elb_instance_state(lb, 'OutOfService', initial_state, timeout) else: # We cannot assume no change was made if we don't wait # to find out self.changed = True - def register(self, wait, enable_availability_zone): + def register(self, wait, enable_availability_zone, timeout): """Register the instance for all ELBs and wait for the ELB to report the instance in-service""" for lb in self.lbs: - if wait: - initial_state = self._get_instance_health(lb) + initial_state = self._get_instance_health(lb) if enable_availability_zone: self._enable_availailability_zone(lb) @@ -165,7 +154,7 @@ class ElbManager: lb.register_instances([self.instance_id]) if wait: - self._await_elb_instance_state(lb, 'InService', initial_state) + self._await_elb_instance_state(lb, 'InService', initial_state, timeout) else: # We cannot assume no change was made if we don't wait # to find out @@ -195,10 +184,12 @@ class ElbManager: # lb.availability_zones return instance.placement in lb.availability_zones - def _await_elb_instance_state(self, lb, awaited_state, initial_state): + def _await_elb_instance_state(self, lb, awaited_state, initial_state, timeout): """Wait for an ELB to change state lb: load balancer awaited_state : state to poll for (string)""" + + wait_timeout = time.time() + timeout while True: instance_state = self._get_instance_health(lb) @@ -217,7 +208,8 @@ class ElbManager: # If it's pending, we'll skip further checks and continue waiting pass elif (awaited_state == 'InService' - and instance_state.reason_code == "Instance"): + and instance_state.reason_code == "Instance" + and time.time() >= wait_timeout): # If the reason_code for the instance being out of service is # "Instance" this indicates a failure state, e.g. 
the instance # has failed a health check or the ELB does not have the @@ -262,9 +254,8 @@ class ElbManager: are attached to self.instance_id""" try: - endpoint="elasticloadbalancing.%s.amazonaws.com" % self.region - connect_region = RegionInfo(name=self.region, endpoint=endpoint) - elb = boto.ec2.elb.ELBConnection(self.aws_access_key, self.aws_secret_key, region=connect_region) + elb = connect_to_aws(boto.ec2.elb, self.region, + **self.aws_connect_params) except boto.exception.NoAuthHandlerFound, e: self.module.fail_json(msg=str(e)) @@ -283,23 +274,22 @@ class ElbManager: def _get_instance(self): """Returns a boto.ec2.InstanceObject for self.instance_id""" try: - endpoint = "ec2.%s.amazonaws.com" % self.region - connect_region = RegionInfo(name=self.region, endpoint=endpoint) - ec2_conn = boto.ec2.EC2Connection(self.aws_access_key, self.aws_secret_key, region=connect_region) + ec2 = connect_to_aws(boto.ec2, self.region, + **self.aws_connect_params) except boto.exception.NoAuthHandlerFound, e: self.module.fail_json(msg=str(e)) - return ec2_conn.get_only_instances(instance_ids=[self.instance_id])[0] + return ec2.get_only_instances(instance_ids=[self.instance_id])[0] def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( - state={'required': True, - 'choices': ['present', 'absent']}, + state={'required': True}, instance_id={'required': True}, ec2_elbs={'default': None, 'required': False, 'type':'list'}, - enable_availability_zone={'default': True, 'required': False, 'choices': BOOLEANS, 'type': 'bool'}, - wait={'required': False, 'choices': BOOLEANS, 'default': True, 'type': 'bool'} + enable_availability_zone={'default': True, 'required': False, 'type': 'bool'}, + wait={'required': False, 'default': True, 'type': 'bool'}, + wait_timeout={'required': False, 'default': 0, 'type': 'int'} ) ) @@ -307,21 +297,22 @@ def main(): argument_spec=argument_spec, ) - # def get_ec2_creds(module): - # return ec2_url, ec2_access_key, ec2_secret_key, region - ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) + region, ec2_url, aws_connect_params = get_aws_connection_info(module) + + if not region: + module.fail_json(msg="Region must be specified as a parameter, in EC2_REGION or AWS_REGION environment variables or in boto configuration file") ec2_elbs = module.params['ec2_elbs'] - region = module.params['region'] wait = module.params['wait'] enable_availability_zone = module.params['enable_availability_zone'] + timeout = module.params['wait_timeout'] if module.params['state'] == 'present' and 'ec2_elbs' not in module.params: module.fail_json(msg="ELBs are required for registration") instance_id = module.params['instance_id'] - elb_man = ElbManager(module, instance_id, ec2_elbs, aws_access_key, - aws_secret_key, region=region) + elb_man = ElbManager(module, instance_id, ec2_elbs, + region=region, **aws_connect_params) if ec2_elbs is not None: for elb in ec2_elbs: @@ -330,9 +321,9 @@ def main(): module.fail_json(msg=msg) if module.params['state'] == 'present': - elb_man.register(wait, enable_availability_zone) + elb_man.register(wait, enable_availability_zone, timeout) elif module.params['state'] == 'absent': - elb_man.deregister(wait) + elb_man.deregister(wait, timeout) ansible_facts = {'ec2_elbs': [lb.name for lb in elb_man.lbs]} ec2_facts_result = dict(changed=elb_man.changed, ansible_facts=ansible_facts) diff --git a/library/cloud/ec2_elb_lb b/library/cloud/ec2_elb_lb index cc2c1454876..5de76cb5df0 100644 --- a/library/cloud/ec2_elb_lb +++ b/library/cloud/ec2_elb_lb 
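A note on the wait_timeout plumbing added to ec2_elb above: it follows a common deadline pattern -- compute an absolute deadline once, then poll until either the awaited state arrives or the clock passes the deadline. A minimal generic sketch of that pattern (a standalone helper, not part of the module; with timeout=0 it checks exactly once, roughly mirroring the module's documented "may return an error on a transient failure" behaviour):

    import time

    def wait_until(predicate, timeout, interval=1):
        # Poll predicate() until it returns truthy or `timeout` seconds
        # elapse. Returns True on success, False if the deadline passed.
        deadline = time.time() + timeout
        while True:
            if predicate():
                return True
            if time.time() >= deadline:
                return False
            time.sleep(interval)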
@@ -22,7 +22,6 @@ short_description: Creates or destroys Amazon ELB. - Returns information about the load balancer. - Will be marked changed when called only if state is changed. version_added: "1.5" -requirements: [ "boto" ] author: Jim Dalton options: state: @@ -51,37 +50,23 @@ options: - Purge existing availability zones on ELB that are not found in zones required: false default: false - health_check: + security_group_ids: description: - - An associative array of health check configuration settigs (see example) + - A list of security groups to apply to the ELB require: false default: None - aws_secret_key: - description: - - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. - required: false - default: None - aliases: ['ec2_secret_key', 'secret_key'] - aws_access_key: + version_added: "1.6" + health_check: description: - - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. - required: false + - An associative array of health check configuration settings (see example) + required: false default: None - aliases: ['ec2_access_key', 'access_key'] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false aliases: ['aws_region', 'ec2_region'] - validate_certs: - description: - - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. - required: false - default: "yes" - choices: ["yes", "no"] - aliases: [] - version_added: "1.5" - +extends_documentation_fragment: aws """ EXAMPLES = """ @@ -183,18 +168,18 @@ class ElbManager(object): """Handles ELB creation and destruction""" def __init__(self, module, name, listeners=None, purge_listeners=None, - zones=None, purge_zones=None, health_check=None, - aws_access_key=None, aws_secret_key=None, region=None): + zones=None, purge_zones=None, security_group_ids=None, health_check=None, + region=None, **aws_connect_params): self.module = module self.name = name self.listeners = listeners self.purge_listeners = purge_listeners self.zones = zones self.purge_zones = purge_zones + self.security_group_ids = security_group_ids self.health_check = health_check - self.aws_access_key = aws_access_key - self.aws_secret_key = aws_secret_key + self.aws_connect_params = aws_connect_params self.region = region self.changed = False @@ -209,6 +194,7 @@ class ElbManager(object): self._create_elb() else: self._set_zones() + self._set_security_groups() self._set_elb_listeners() self._set_health_check() @@ -228,6 +214,7 @@ class ElbManager(object): 'name': self.elb.name, 'dns_name': self.elb.dns_name, 'zones': self.elb.availability_zones, + 'security_group_ids': self.elb.security_groups, 'status': self.status } @@ -262,11 +249,8 @@ class ElbManager(object): def _get_elb_connection(self): try: - endpoint = "elasticloadbalancing.%s.amazonaws.com" % self.region - connect_region = RegionInfo(name=self.region, endpoint=endpoint) - return boto.ec2.elb.ELBConnection(self.aws_access_key, - self.aws_secret_key, - region=connect_region) + return connect_to_aws(boto.ec2.elb, self.region, + **self.aws_connect_params) except boto.exception.NoAuthHandlerFound, e: self.module.fail_json(msg=str(e)) @@ -281,6 +265,7 @@ class ElbManager(object): listeners = [self._listener_as_tuple(l) for l in self.listeners] self.elb = self.elb_conn.create_load_balancer(name=self.name, zones=self.zones, + security_groups=self.security_group_ids, complex_listeners=listeners) if self.elb: self.changed = 
True @@ -405,6 +390,11 @@ class ElbManager(object): if zones_to_disable: self._disable_zones(zones_to_disable) + def _set_security_groups(self): + if self.security_group_ids is not None and set(self.elb.security_groups) != set(self.security_group_ids): + self.elb_conn.apply_security_groups_to_lb(self.name, self.security_group_ids) + self.changed = True + def _set_health_check(self): """Set health check values on ELB as needed""" if self.health_check: @@ -452,11 +442,10 @@ def main(): state={'required': True, 'choices': ['present', 'absent']}, name={'required': True}, listeners={'default': None, 'required': False, 'type': 'list'}, - purge_listeners={'default': True, 'required': False, - 'choices': BOOLEANS, 'type': 'bool'}, + purge_listeners={'default': True, 'required': False, 'type': 'bool'}, zones={'default': None, 'required': False, 'type': 'list'}, - purge_zones={'default': False, 'required': False, - 'choices': BOOLEANS, 'type': 'bool'}, + purge_zones={'default': False, 'required': False, 'type': 'bool'}, + security_group_ids={'default': None, 'required': False, 'type': 'list'}, health_check={'default': None, 'required': False, 'type': 'dict'}, ) ) @@ -465,9 +454,9 @@ def main(): argument_spec=argument_spec, ) - # def get_ec2_creds(module): - # return ec2_url, ec2_access_key, ec2_secret_key, region - ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) + region, ec2_url, aws_connect_params = get_aws_connection_info(module) + if not region: + module.fail_json(msg="Region must be specified as a parameter, in EC2_REGION or AWS_REGION environment variables or in boto configuration file") name = module.params['name'] state = module.params['state'] @@ -475,6 +464,7 @@ def main(): purge_listeners = module.params['purge_listeners'] zones = module.params['zones'] purge_zones = module.params['purge_zones'] + security_group_ids = module.params['security_group_ids'] health_check = module.params['health_check'] if state == 'present' and not listeners: @@ -484,8 +474,8 @@ def main(): module.fail_json(msg="At least one availability zone is required for ELB creation") elb_man = ElbManager(module, name, listeners, purge_listeners, zones, - purge_zones, health_check, aws_access_key, - aws_secret_key, region=region) + purge_zones, security_group_ids, health_check, + region=region, **aws_connect_params) if state == 'present': elb_man.ensure_ok() diff --git a/library/cloud/ec2_facts b/library/cloud/ec2_facts index 1c17fa5b717..3fade4d1a05 100644 --- a/library/cloud/ec2_facts +++ b/library/cloud/ec2_facts @@ -21,7 +21,15 @@ DOCUMENTATION = ''' module: ec2_facts short_description: Gathers facts about remote hosts within ec2 (aws) version_added: "1.0" -options: {} +options: + validate_certs: + description: + - If C(no), SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + version_added: 1.5.1 description: - This module fetches data from the metadata servers in ec2 (aws). 
Eucalyptus cloud provides a similar service and this module should @@ -41,7 +49,6 @@ EXAMPLES = ''' when: ansible_ec2_instance_type == "t1.micro" ''' -import urllib2 import socket import re @@ -62,7 +69,8 @@ class Ec2Metadata(object): 'us-west-1', 'us-west-2') - def __init__(self, ec2_metadata_uri=None, ec2_sshdata_uri=None, ec2_userdata_uri=None): + def __init__(self, module, ec2_metadata_uri=None, ec2_sshdata_uri=None, ec2_userdata_uri=None): + self.module = module self.uri_meta = ec2_metadata_uri or self.ec2_metadata_uri self.uri_user = ec2_userdata_uri or self.ec2_userdata_uri self.uri_ssh = ec2_sshdata_uri or self.ec2_sshdata_uri @@ -70,12 +78,12 @@ class Ec2Metadata(object): self._prefix = 'ansible_ec2_%s' def _fetch(self, url): - try: - return urllib2.urlopen(url).read() - except urllib2.HTTPError: - return - except urllib2.URLError: - return + (response, info) = fetch_url(self.module, url, force=True) + if response: + data = response.read() + else: + data = None + return data def _mangle_fields(self, fields, uri, filter_patterns=['public-keys-0']): new_fields = {} @@ -150,17 +158,20 @@ class Ec2Metadata(object): return data def main(): - - ec2_facts = Ec2Metadata().run() - ec2_facts_result = dict(changed=False, ansible_facts=ec2_facts) + argument_spec = url_argument_spec() module = AnsibleModule( - argument_spec = dict(), + argument_spec = argument_spec, supports_check_mode = True, ) + + ec2_facts = Ec2Metadata(module).run() + ec2_facts_result = dict(changed=False, ansible_facts=ec2_facts) + module.exit_json(**ec2_facts_result) # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * main() diff --git a/library/cloud/ec2_group b/library/cloud/ec2_group index bbbb0fc24e0..56581ecd778 100644 --- a/library/cloud/ec2_group +++ b/library/cloud/ec2_group @@ -24,32 +24,19 @@ options: required: false rules: description: - - List of firewall rules to enforce in this group (see example). - required: true - region: - description: - - the EC2 region to use - required: false - default: null - aliases: [] - ec2_url: - description: - - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints) + - List of firewall inbound rules to enforce in this group (see example). required: false - default: null - aliases: [] - ec2_secret_key: + rules_egress: description: - - EC2 secret key + - List of firewall outbound rules to enforce in this group (see example). required: false - default: null - aliases: ['aws_secret_key'] - ec2_access_key: + version_added: "1.6" + region: description: - - EC2 access key + - the EC2 region to use required: false default: null - aliases: ['aws_access_key'] + aliases: [] state: version_added: "1.4" description: @@ -57,16 +44,13 @@ options: required: false default: 'present' aliases: [] - validate_certs: - description: - - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. - required: false - default: "yes" - choices: ["yes", "no"] - aliases: [] - version_added: "1.5" -requirements: [ "boto" ] +extends_documentation_fragment: aws + +notes: + - If a rule declares a group_name and that group doesn't exist, it will be + automatically created. In that case, group_desc should be provided as well. + The module will refuse to create a depended-on group without a description. 
''' EXAMPLES = ''' @@ -99,6 +83,13 @@ EXAMPLES = ''' - proto: all # the containing group name may be specified here group_name: example + rules_egress: + - proto: tcp + from_port: 80 + to_port: 80 + group_name: example-other + # description to use if example-other needs to be created + group_desc: other example EC2 group ''' try: @@ -114,6 +105,55 @@ def addRulesToLookup(rules, prefix, dict): dict["%s-%s-%s-%s-%s-%s" % (prefix, rule.ip_protocol, rule.from_port, rule.to_port, grant.group_id, grant.cidr_ip)] = rule + +def get_target_from_rule(rule, name, groups): + """ + Returns tuple of (group_id, ip, target_group_created) after validating rule params. + + rule: Dict describing a rule. + name: Name of the security group being managed. + groups: Dict of all available security groups. + + AWS accepts an ip range or a security group as target of a rule. This + function validates the rule specification and returns either a non-None + group_id or a non-None ip range. + """ + + group_id = None + group_name = None + ip = None + target_group_created = False + if 'group_id' in rule and 'cidr_ip' in rule: + module.fail_json(msg="Specify group_id OR cidr_ip, not both") + elif 'group_name' in rule and 'cidr_ip' in rule: + module.fail_json(msg="Specify group_name OR cidr_ip, not both") + elif 'group_id' in rule and 'group_name' in rule: + module.fail_json(msg="Specify group_id OR group_name, not both") + elif 'group_id' in rule: + group_id = rule['group_id'] + elif 'group_name' in rule: + group_name = rule['group_name'] + if group_name in groups: + group_id = groups[group_name].id + elif group_name == name: + group_id = group.id + groups[group_id] = group + groups[group_name] = group + else: + if not rule.get('group_desc', '').strip(): + module.fail_json(msg="group %s will be automatically created by rule %s and no description was provided" % (group_name, rule)) + if not module.check_mode: + auto_group = ec2.create_security_group(group_name, rule['group_desc'], vpc_id=vpc_id) + group_id = auto_group.id + groups[group_id] = auto_group + groups[group_name] = auto_group + target_group_created = True + elif 'cidr_ip' in rule: + ip = rule['cidr_ip'] + + return group_id, ip, target_group_created + + def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( @@ -121,6 +161,7 @@ def main(): description=dict(required=True), vpc_id=dict(), rules=dict(), + rules_egress=dict(), state = dict(default='present', choices=['present', 'absent']), ) ) @@ -133,6 +174,7 @@ def main(): description = module.params['description'] vpc_id = module.params['vpc_id'] rules = module.params['rules'] + rules_egress = module.params['rules_egress'] state = module.params.get('state') changed = False @@ -183,39 +225,29 @@ def main(): '''no match found, create it''' if not module.check_mode: group = ec2.create_security_group(name, description, vpc_id=vpc_id) + + # When a group is created, an egress_rule ALLOW ALL + # to 0.0.0.0/0 is added automatically but it's not + # reflected in the object returned by the AWS API + # call. We re-read the group to get an updated object + group = ec2.get_all_security_groups(group_ids=(group.id,))[0] changed = True else: module.fail_json(msg="Unsupported state requested: %s" % state) # create a lookup for all existing rules on the group if group: + + # Manage ingress rules groupRules = {} addRulesToLookup(group.rules, 'in', groupRules) # Now, go through all provided rules and ensure they are there. 
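+ # Reconciliation note (a summary of the logic below): each existing rule was keyed above as "direction-proto-from_port-to_port-group-cidr". + # Every desired rule deletes its matching key from the lookup; any keys left over at the end belong to rules that are no longer wanted, and those are revoked. 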
if rules: for rule in rules: - group_id = None - group_name = None - ip = None - if 'group_id' in rule and 'cidr_ip' in rule: - module.fail_json(msg="Specify group_id OR cidr_ip, not both") - elif 'group_name' in rule and 'cidr_ip' in rule: - module.fail_json(msg="Specify group_name OR cidr_ip, not both") - elif 'group_id' in rule and 'group_name' in rule: - module.fail_json(msg="Specify group_id OR group_name, not both") - elif 'group_id' in rule: - group_id = rule['group_id'] - elif 'group_name' in rule: - group_name = rule['group_name'] - if group_name in groups: - group_id = groups[group_name].id - elif group_name == name: - group_id = group.id - groups[group_id] = group - groups[group_name] = group - elif 'cidr_ip' in rule: - ip = rule['cidr_ip'] + group_id, ip, target_group_created = get_target_from_rule(rule, name, groups) + if target_group_created: + changed = True if rule['proto'] == 'all': rule['proto'] = -1 @@ -246,6 +278,58 @@ def main(): group.revoke(rule.ip_protocol, rule.from_port, rule.to_port, grant.cidr_ip, grantGroup) changed = True + # Manage egress rules + groupRules = {} + addRulesToLookup(group.rules_egress, 'out', groupRules) + + # Now, go through all provided rules and ensure they are there. + if rules_egress: + for rule in rules_egress: + group_id, ip, target_group_created = get_target_from_rule(rule, name, groups) + if target_group_created: + changed = True + + if rule['proto'] == 'all': + rule['proto'] = -1 + rule['from_port'] = None + rule['to_port'] = None + + # If rule already exists, don't later delete it + ruleId = "%s-%s-%s-%s-%s-%s" % ('out', rule['proto'], rule['from_port'], rule['to_port'], group_id, ip) + if ruleId in groupRules: + del groupRules[ruleId] + # Otherwise, add new rule + else: + grantGroup = None + if group_id: + grantGroup = groups[group_id].id + + if not module.check_mode: + ec2.authorize_security_group_egress( + group_id=group.id, + ip_protocol=rule['proto'], + from_port=rule['from_port'], + to_port=rule['to_port'], + src_group_id=grantGroup, + cidr_ip=ip) + changed = True + + # Finally, remove anything left in the groupRules -- these will be defunct rules + for rule in groupRules.itervalues(): + for grant in rule.grants: + grantGroup = None + if grant.group_id: + grantGroup = groups[grant.group_id].id + if not module.check_mode: + ec2.revoke_security_group_egress( + group_id=group.id, + ip_protocol=rule.ip_protocol, + from_port=rule.from_port, + to_port=rule.to_port, + src_group_id=grantGroup, + cidr_ip=grant.cidr_ip) + changed = True + if group: module.exit_json(changed=changed, group_id=group.id) else: diff --git a/library/cloud/ec2_key b/library/cloud/ec2_key index 5e6950d2c8b..9c8274f764a 100644 --- a/library/cloud/ec2_key +++ b/library/cloud/ec2_key @@ -24,40 +24,28 @@ options: required: false default: null aliases: [] - ec2_url: - description: - - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints) - required: false - default: null - aliases: [] - ec2_secret_key: - description: - - EC2 secret key - required: false - default: null - aliases: ['aws_secret_key', 'secret_key'] - ec2_access_key: - description: - - EC2 access key - required: false - default: null - aliases: ['aws_access_key', 'access_key'] state: description: - create or delete keypair required: false default: 'present' aliases: [] - validate_certs: + wait: + description: + - Wait for the specified action to complete before returning. 
+ required: false + default: false + aliases: [] + version_added: "1.6" + wait_timeout: description: - - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. + - How long before wait gives up, in seconds required: false - default: "yes" - choices: ["yes", "no"] + default: 300 aliases: [] - version_added: "1.5" + version_added: "1.6" -requirements: [ "boto" ] +extends_documentation_fragment: aws author: Vincent Viallet ''' @@ -104,12 +92,18 @@ except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) +import random +import string + + def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( name=dict(required=True), key_material=dict(required=False), state = dict(default='present', choices=['present', 'absent']), + wait = dict(type='bool', default=False), + wait_timeout = dict(default=300), ) ) module = AnsibleModule( @@ -120,6 +114,8 @@ def main(): name = module.params['name'] state = module.params.get('state') key_material = module.params.get('key_material') + wait = module.params.get('wait') + wait_timeout = int(module.params.get('wait_timeout')) changed = False @@ -134,6 +130,16 @@ def main(): '''found a match, delete it''' try: key.delete() + if wait: + start = time.time() + action_complete = False + while (time.time() - start) < wait_timeout: + if not ec2.get_key_pair(name): + action_complete = True + break + time.sleep(1) + if not action_complete: + module.fail_json(msg="timed out while waiting for the key to be removed") except Exception, e: module.fail_json(msg="Unable to delete key pair '%s' - %s" % (key, e)) else: @@ -145,10 +151,45 @@ def main(): # Ensure requested key is present elif state == 'present': if key: - '''existing key found''' - # Should check if the fingerprint is the same - but lack of info - # and different fingerprint provided (pub or private) depending if - # the key has been created of imported. + # existing key found + if key_material: + # EC2's fingerprints are non-trivial to generate, so push this key + # to a temporary name and make ec2 calculate the fingerprint for us. 
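+ # (The temporary key below exists only long enough to read the fingerprint AWS computes for it; it is deleted again before the real key is touched.) 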
+ # + # http://blog.jbrowne.com/?p=23 + # https://forums.aws.amazon.com/thread.jspa?messageID=352828 + + # find an unused name + test = 'empty' + while test: + randomchars = [random.choice(string.ascii_letters + string.digits) for x in range(0,10)] + tmpkeyname = "ansible-" + ''.join(randomchars) + test = ec2.get_key_pair(tmpkeyname) + + # create tmp key + tmpkey = ec2.import_key_pair(tmpkeyname, key_material) + # get tmp key fingerprint + tmpfingerprint = tmpkey.fingerprint + # delete tmp key + tmpkey.delete() + + if key.fingerprint != tmpfingerprint: + if not module.check_mode: + key.delete() + key = ec2.import_key_pair(name, key_material) + + if wait: + start = time.time() + action_complete = False + while (time.time() - start) < wait_timeout: + if ec2.get_key_pair(name): + action_complete = True + break + time.sleep(1) + if not action_complete: + module.fail_json(msg="timed out while waiting for the key to be re-created") + + changed = True pass # if the key doesn't exist, create it now @@ -164,6 +205,18 @@ def main(): retrieve the private key ''' key = ec2.create_key_pair(name) + + if wait: + start = time.time() + action_complete = False + while (time.time() - start) < wait_timeout: + if ec2.get_key_pair(name): + action_complete = True + break + time.sleep(1) + if not action_complete: + module.fail_json(msg="timed out while waiting for the key to be created") + changed = True if key: diff --git a/library/cloud/ec2_lc b/library/cloud/ec2_lc new file mode 100644 index 00000000000..91905a38894 --- /dev/null +++ b/library/cloud/ec2_lc @@ -0,0 +1,199 @@ +#!/usr/bin/python +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. + +DOCUMENTATION = """ +--- +module: ec2_lc +short_description: Create or delete AWS Autoscaling Launch Configurations +description: + - Can create or delete AWS Autoscaling Configurations + - Works with the ec2_asg module to manage Autoscaling Groups +version_added: "1.6" +author: Gareth Rushgrove +options: + state: + description: + - Create or delete the launch configuration + required: true + choices: ['present', 'absent'] + name: + description: + - Unique name for configuration + required: true + image_id: + description: + - The AMI unique identifier to be used for the group + required: false + key_name: + description: + - The SSH key name to be used for access to managed instances + required: false + security_groups: + description: + - A list of security groups with which to associate the instances + required: false + region: + description: + - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. + required: false + aliases: ['aws_region', 'ec2_region'] + volumes: + description: + - a list of volume dicts, each containing device name and optionally ephemeral id or snapshot id. Size and type (and number of iops for io device type) must be specified for a new volume or a root volume, and may be passed for a snapshot volume. 
For any volume, a volume size less than 1 will be interpreted as a request not to create the volume. + required: false + default: null + aliases: [] + user_data: + description: + - opaque blob of data which is made available to the ec2 instance + required: false + default: null + aliases: [] +extends_documentation_fragment: aws +""" + +EXAMPLES = ''' +- ec2_lc: + name: special + image_id: ami-XXX + key_name: default + security_groups: 'group,group2' + +''' + +import sys +import time + +from ansible.module_utils.basic import * +from ansible.module_utils.ec2 import * + +try: + from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping + import boto.ec2.autoscale + from boto.ec2.autoscale import LaunchConfiguration + from boto.exception import BotoServerError +except ImportError: + print "failed=True msg='boto required for this module'" + sys.exit(1) + + +def create_block_device(module, volume): + # Not aware of a way to determine this programmatically + # http://aws.amazon.com/about-aws/whats-new/2013/10/09/ebs-provisioned-iops-maximum-iops-gb-ratio-increased-to-30-1/ + MAX_IOPS_TO_SIZE_RATIO = 30 + if 'snapshot' not in volume and 'ephemeral' not in volume: + if 'volume_size' not in volume: + module.fail_json(msg='Size must be specified when creating a new volume or modifying the root volume') + if 'snapshot' in volume: + if 'device_type' in volume and volume.get('device_type') == 'io1' and 'iops' not in volume: + module.fail_json(msg='io1 volumes must have an iops value set') + if 'ephemeral' in volume: + if 'snapshot' in volume: + module.fail_json(msg='Cannot set both ephemeral and snapshot') + return BlockDeviceType(snapshot_id=volume.get('snapshot'), + ephemeral_name=volume.get('ephemeral'), + size=volume.get('volume_size'), + volume_type=volume.get('device_type'), + delete_on_termination=volume.get('delete_on_termination', False), + iops=volume.get('iops')) + + +def create_launch_config(connection, module): + name = module.params.get('name') + image_id = module.params.get('image_id') + key_name = module.params.get('key_name') + security_groups = module.params['security_groups'] + user_data = module.params.get('user_data') + volumes = module.params['volumes'] + instance_type = module.params.get('instance_type') + bdm = BlockDeviceMapping() + + if volumes: + for volume in volumes: + if 'device_name' not in volume: + module.fail_json(msg='Device name must be set for volume') + # Minimum volume size is 1GB. 
We'll use volume size explicitly set to 0 + # to be a signal not to create this volume + if 'volume_size' not in volume or int(volume['volume_size']) > 0: + bdm[volume['device_name']] = create_block_device(module, volume) + + lc = LaunchConfiguration( + name=name, + image_id=image_id, + key_name=key_name, + security_groups=security_groups, + user_data=user_data, + block_device_mappings=[bdm], + instance_type=instance_type) + + launch_configs = connection.get_all_launch_configurations(names=[name]) + changed = False + if not launch_configs: + try: + connection.create_launch_configuration(lc) + launch_configs = connection.get_all_launch_configurations(names=[name]) + changed = True + except BotoServerError, e: + module.fail_json(msg=str(e)) + result = launch_configs[0] + + module.exit_json(changed=changed, name=result.name, created_time=str(result.created_time), + image_id=result.image_id, arn=result.launch_configuration_arn, + security_groups=result.security_groups, instance_type=instance_type) + + +def delete_launch_config(connection, module): + name = module.params.get('name') + launch_configs = connection.get_all_launch_configurations(names=[name]) + if launch_configs: + launch_configs[0].delete() + module.exit_json(changed=True) + else: + module.exit_json(changed=False) + + +def main(): + argument_spec = ec2_argument_spec() + argument_spec.update( + dict( + name=dict(required=True, type='str'), + image_id=dict(type='str'), + key_name=dict(type='str'), + security_groups=dict(type='list'), + user_data=dict(type='str'), + volumes=dict(type='list'), + instance_type=dict(type='str'), + state=dict(default='present', choices=['present', 'absent']), + ) + ) + + module = AnsibleModule(argument_spec=argument_spec) + + region, ec2_url, aws_connect_params = get_aws_connection_info(module) + + try: + connection = connect_to_aws(boto.ec2.autoscale, region, **aws_connect_params) + except boto.exception.NoAuthHandlerFound, e: + module.fail_json(msg=str(e)) + + state = module.params.get('state') + + if state == 'present': + create_launch_config(connection, module) + elif state == 'absent': + delete_launch_config(connection, module) + +main() diff --git a/library/cloud/ec2_metric_alarm b/library/cloud/ec2_metric_alarm new file mode 100644 index 00000000000..4791330dbe2 --- /dev/null +++ b/library/cloud/ec2_metric_alarm @@ -0,0 +1,264 @@ +#!/usr/bin/python +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. + +DOCUMENTATION = """ +module: ec2_metric_alarm +short_description: "Create/update or delete AWS Cloudwatch 'metric alarms'" +description: + - Can create or delete AWS metric alarms + - Metrics you wish to alarm on must already exist +version_added: "1.6" +author: Zacharie Eakin +options: + state: + description: + - Create or delete the alarm + required: true + choices: ['present', 'absent'] + name: + description: + - Unique name for the alarm + required: true + metric: + description: + - Name of the monitored metric (e.g. 
CPUUtilization) + - Metric must already exist + required: false + namespace: + description: + - Name of the appropriate namespace, which determines the category it will appear under in cloudwatch + required: false + choices: ['AWS/AutoScaling','AWS/Billing','AWS/DynamoDB','AWS/ElastiCache','AWS/EBS','AWS/EC2','AWS/ELB','AWS/ElasticMapReduce','AWS/OpsWorks','AWS/Redshift','AWS/RDS','AWS/Route53','AWS/SNS','AWS/SQS','AWS/StorageGateway'] + statistic: + description: + - Operation applied to the metric + - Works in conjunction with period and evaluation_periods to determine the comparison value + required: false + choices: ['SampleCount','Average','Sum','Minimum','Maximum'] + comparison: + description: + - Determines how the threshold value is compared + required: false + choices: ['<=','<','>','>='] + threshold: + description: + - Sets the min/max bound for triggering the alarm + required: false + period: + description: + - The time (in seconds) between metric evaluations + required: false + evaluation_periods: + description: + - The number of times in which the metric is evaluated before final calculation + required: false + unit: + description: + - The threshold's unit of measurement + required: false + choices: ['Seconds','Microseconds','Milliseconds','Bytes','Kilobytes','Megabytes','Gigabytes','Terabytes','Bits','Kilobits','Megabits','Gigabits','Terabits','Percent','Count','Bytes/Second','Kilobytes/Second','Megabytes/Second','Gigabytes/Second','Terabytes/Second','Bits/Second','Kilobits/Second','Megabits/Second','Gigabits/Second','Terabits/Second','Count/Second','None'] + description: + description: + - A longer description of the alarm + required: false + dimensions: + description: + - Describes to what the alarm is applied + required: false + alarm_actions: + description: + - A list of the names of action(s) to take when the alarm is in the 'alarm' status + required: false + insufficient_data_actions: + description: + - A list of the names of action(s) to take when the alarm is in the 'insufficient_data' status + required: false + ok_actions: + description: + - A list of the names of action(s) to take when the alarm is in the 'ok' status + required: false +extends_documentation_fragment: aws +""" + +EXAMPLES = ''' + - name: create alarm + ec2_metric_alarm: + state: present + region: ap-southeast-2 + name: "cpu-low" + metric: "CPUUtilization" + namespace: "AWS/EC2" + statistic: Average + comparison: "<=" + threshold: 5.0 + period: 300 + evaluation_periods: 3 + unit: "Percent" + description: "This will alarm when a bamboo slave's cpu usage average is lower than 5% for 15 minutes " + dimensions: {'InstanceId':'i-XXX'} + alarm_actions: ["action1","action2"] + + +''' + +import sys + +from ansible.module_utils.basic import * +from ansible.module_utils.ec2 import * + +try: + import boto.ec2.cloudwatch + from boto.ec2.cloudwatch import CloudWatchConnection, MetricAlarm + from boto.exception import BotoServerError +except ImportError: + print "failed=True msg='boto required for this module'" + sys.exit(1) + + +def create_metric_alarm(connection, module): + + name = module.params.get('name') + metric = module.params.get('metric') + namespace = module.params.get('namespace') + statistic = module.params.get('statistic') + comparison = module.params.get('comparison') + threshold = module.params.get('threshold') + period = module.params.get('period') + evaluation_periods = module.params.get('evaluation_periods') + unit = module.params.get('unit') + description = module.params.get('description') + dimensions 
= module.params.get('dimensions') + alarm_actions = module.params.get('alarm_actions') + insufficient_data_actions = module.params.get('insufficient_data_actions') + ok_actions = module.params.get('ok_actions') + + alarms = connection.describe_alarms(alarm_names=[name]) + + if not alarms: + + alm = MetricAlarm( + name=name, + metric=metric, + namespace=namespace, + statistic=statistic, + comparison=comparison, + threshold=threshold, + period=period, + evaluation_periods=evaluation_periods, + unit=unit, + description=description, + dimensions=dimensions, + alarm_actions=alarm_actions, + insufficient_data_actions=insufficient_data_actions, + ok_actions=ok_actions + ) + try: + connection.create_alarm(alm) + module.exit_json(changed=True) + except BotoServerError, e: + module.fail_json(msg=str(e)) + + else: + alarm = alarms[0] + changed = False + + for attr in ('comparison','metric','namespace','statistic','threshold','period','evaluation_periods','unit','description'): + if getattr(alarm, attr) != module.params.get(attr): + changed = True + setattr(alarm, attr, module.params.get(attr)) + #this is to deal with a current bug where you cannot assign '<=>' to the comparator when modifying an existing alarm + comparison = alarm.comparison + comparisons = {'<=' : 'LessThanOrEqualToThreshold', '<' : 'LessThanThreshold', '>=' : 'GreaterThanOrEqualToThreshold', '>' : 'GreaterThanThreshold'} + alarm.comparison = comparisons[comparison] + + dim1 = module.params.get('dimensions') + dim2 = alarm.dimensions + + for keys in dim1: + if not isinstance(dim1[keys], list): + dim1[keys] = [dim1[keys]] + if dim1[keys] != dim2[keys]: + changed=True + setattr(alarm, 'dimensions', dim1) + + for attr in ('alarm_actions','insufficient_data_actions','ok_actions'): + action = module.params.get(attr) or [] + if getattr(alarm, attr) != action: + changed = True + setattr(alarm, attr, module.params.get(attr)) + + try: + if changed: + connection.create_alarm(alarm) + module.exit_json(changed=changed) + except BotoServerError, e: + module.fail_json(msg=str(e)) + + +def delete_metric_alarm(connection, module): + name = module.params.get('name') + + alarms = connection.describe_alarms(alarm_names=[name]) + + if alarms: + try: + connection.delete_alarms([name]) + module.exit_json(changed=True) + except BotoServerError, e: + module.fail_json(msg=str(e)) + else: + module.exit_json(changed=False) + + +def main(): + argument_spec = ec2_argument_spec() + argument_spec.update( + dict( + name=dict(required=True, type='str'), + metric=dict(type='str'), + namespace=dict(type='str', choices=['AWS/AutoScaling', 'AWS/Billing', 'AWS/DynamoDB', 'AWS/ElastiCache', 'AWS/EBS', 'AWS/EC2', + 'AWS/ELB', 'AWS/ElasticMapReduce', 'AWS/OpsWorks', 'AWS/Redshift', 'AWS/RDS', 'AWS/Route53', 'AWS/SNS', 'AWS/SQS', 'AWS/StorageGateway']), statistic=dict(type='str', choices=['SampleCount', 'Average', 'Sum', 'Minimum', 'Maximum']), + comparison=dict(type='str', choices=['<=', '<', '>', '>=']), + threshold=dict(type='float'), + period=dict(type='int'), + unit=dict(type='str', choices=['Seconds', 'Microseconds', 'Milliseconds', 'Bytes', 'Kilobytes', 'Megabytes', 'Gigabytes', 'Terabytes', 'Bits', 'Kilobits', 'Megabits', 'Gigabits', 'Terabits', 'Percent', 'Count', 'Bytes/Second', 'Kilobytes/Second', 'Megabytes/Second', 'Gigabytes/Second', 'Terabytes/Second', 'Bits/Second', 'Kilobits/Second', 'Megabits/Second', 'Gigabits/Second', 'Terabits/Second', 'Count/Second', 'None']), + evaluation_periods=dict(type='int'), + description=dict(type='str'), + 
dimensions=dict(type='dict'), + alarm_actions=dict(type='list'), + insufficient_data_actions=dict(type='list'), + ok_actions=dict(type='list'), + state=dict(default='present', choices=['present', 'absent']), + region=dict(aliases=['aws_region', 'ec2_region'], choices=AWS_REGIONS), + ) + ) + + module = AnsibleModule(argument_spec=argument_spec) + + state = module.params.get('state') + + region, ec2_url, aws_connect_params = get_aws_connection_info(module) + try: + connection = connect_to_aws(boto.ec2.cloudwatch, region, **aws_connect_params) + except boto.exception.NoAuthHandlerFound, e: + module.fail_json(msg=str(e)) + + if state == 'present': + create_metric_alarm(connection, module) + elif state == 'absent': + delete_metric_alarm(connection, module) + +main() diff --git a/library/cloud/ec2_scaling_policy b/library/cloud/ec2_scaling_policy new file mode 100755 index 00000000000..4e66f463063 --- /dev/null +++ b/library/cloud/ec2_scaling_policy @@ -0,0 +1,180 @@ +#!/usr/bin/python + +DOCUMENTATION = """ +module: ec2_scaling_policy +short_description: Create or delete AWS scaling policies for Autoscaling groups +description: + - Can create or delete scaling policies for autoscaling groups + - Referenced autoscaling groups must already exist +version_added: "1.6" +author: Zacharie Eakin +options: + state: + description: + - Create or delete the scaling policy + required: true + choices: ['present', 'absent'] + name: + description: + - Unique name for the scaling policy + required: true + asg_name: + description: + - Name of the associated autoscaling group + required: true + adjustment_type: + description: + - The type of change in capacity of the autoscaling group + required: false + choices: ['ChangeInCapacity','ExactCapacity','PercentChangeInCapacity'] + scaling_adjustment: + description: + - The amount by which the autoscaling group is adjusted by the policy + required: false + min_adjustment_step: + description: + - Minimum amount of adjustment when policy is triggered + required: false + cooldown: + description: + - The minimum period of time between autoscaling actions + required: false +extends_documentation_fragment: aws +""" + +EXAMPLES = ''' +- ec2_scaling_policy: + state: present + region: US-XXX + name: "scaledown-policy" + adjustment_type: "ChangeInCapacity" + asg_name: "slave-pool" + scaling_adjustment: -1 + min_adjustment_step: 1 + cooldown: 300 +''' + + +import sys + +from ansible.module_utils.basic import * +from ansible.module_utils.ec2 import * + +try: + import boto.ec2.autoscale + from boto.ec2.autoscale import ScalingPolicy + from boto.exception import BotoServerError + +except ImportError: + print "failed=True msg='boto required for this module'" + sys.exit(1) + + +def create_scaling_policy(connection, module): + sp_name = module.params.get('name') + adjustment_type = module.params.get('adjustment_type') + asg_name = module.params.get('asg_name') + scaling_adjustment = module.params.get('scaling_adjustment') + min_adjustment_step = module.params.get('min_adjustment_step') + cooldown = module.params.get('cooldown') + + scalingPolicies = connection.get_all_policies(as_group=asg_name,policy_names=[sp_name]) + + if not scalingPolicies: + sp = ScalingPolicy( + name=sp_name, + adjustment_type=adjustment_type, + as_name=asg_name, + scaling_adjustment=scaling_adjustment, + min_adjustment_step=min_adjustment_step, + cooldown=cooldown) + + try: + connection.create_scaling_policy(sp) + module.exit_json(changed=True) + except BotoServerError, e: + 
module.fail_json(msg=str(e)) + else: + policy = scalingPolicies[0] + changed = False + + #min_adjustment_step attribute is only relevant if the adjustment_type + #is set to percentage change in capacity, so it is a special case + if getattr(policy, 'adjustment_type') == 'PercentChangeInCapacity': + if getattr(policy, 'min_adjustment_step') != module.params.get('min_adjustment_step'): + changed = True + + #set the min adjustment step in case the user decided to change their adjustment type to percentage + setattr(policy, 'min_adjustment_step', module.params.get('min_adjustment_step')) + + #check the remaining attributes + for attr in ('adjustment_type','scaling_adjustment','cooldown'): + if getattr(policy, attr) != module.params.get(attr): + changed = True + setattr(policy, attr, module.params.get(attr)) + + try: + if changed: + connection.create_scaling_policy(policy) + policy = connection.get_all_policies(policy_names=[sp_name])[0] + module.exit_json(changed=changed, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment, cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step) + module.exit_json(changed=changed) + except BotoServerError, e: + module.fail_json(msg=str(e)) + + +def delete_scaling_policy(connection, module): + sp_name = module.params.get('name') + asg_name = module.params.get('asg_name') + + scalingPolicies = connection.get_all_policies(as_group=asg_name,policy_names=[sp_name]) + + if scalingPolicies: + try: + connection.delete_policy(sp_name, asg_name) + module.exit_json(changed=True) + except BotoServerError, e: + module.exit_json(changed=False, msg=str(e)) + else: + module.exit_json(changed=False) + + +def main(): + argument_spec = ec2_argument_spec() + argument_spec.update( + dict( + name = dict(required=True, type='str'), + adjustment_type = dict(type='str', choices=['ChangeInCapacity','ExactCapacity','PercentChangeInCapacity']), + asg_name = dict(required=True, type='str'), + scaling_adjustment = dict(type='int'), + min_adjustment_step = dict(type='int'), + cooldown = dict(type='int'), + region = dict(aliases=['aws_region', 'ec2_region'], choices=AWS_REGIONS), + state=dict(default='present', choices=['present', 'absent']), + ) + ) + + module = AnsibleModule(argument_spec=argument_spec) + + region, ec2_url, aws_connect_params = get_aws_connection_info(module) + + state = module.params.get('state') + + try: + connection = connect_to_aws(boto.ec2.autoscale, region, **aws_connect_params) + except boto.exception.NoAuthHandlerFound, e: + module.fail_json(msg = str(e)) + + if state == 'present': + create_scaling_policy(connection, module) + elif state == 'absent': + delete_scaling_policy(connection, module) + + +main() + + + + + + diff --git a/library/cloud/ec2_snapshot b/library/cloud/ec2_snapshot index b5d9df3b525..10aba7963c6 100644 --- a/library/cloud/ec2_snapshot +++ b/library/cloud/ec2_snapshot @@ -22,24 +22,6 @@ description: - creates an EC2 snapshot from an existing EBS volume version_added: "1.5" options: - ec2_secret_key: - description: - - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. - required: false - default: None - aliases: ['aws_secret_key', 'secret_key' ] - ec2_access_key: - description: - - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. 
- required: false - default: None - aliases: ['aws_access_key', 'access_key' ] - ec2_url: - description: - - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used - required: false - default: null - aliases: [] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. @@ -59,19 +41,20 @@ options: default: null aliases: [] instance_id: - description: - - instance that has a the required volume to snapshot mounted + description: + - instance that has the required volume to snapshot mounted required: false default: null aliases: [] device_name: - description: + description: - device name of a mounted volume to be snapshotted required: false default: null aliases: [] -requirements: [ "boto" ] + author: Will Thames +extends_documentation_fragment: aws ''' EXAMPLES = ''' @@ -109,6 +92,9 @@ def main(): ec2_url = dict(), ec2_secret_key = dict(aliases=['aws_secret_key', 'secret_key'], no_log=True), ec2_access_key = dict(aliases=['aws_access_key', 'access_key']), + wait = dict(type='bool', default='true'), + wait_timeout = dict(default=0), + snapshot_tags = dict(type='dict', default=dict()), ) ) @@ -116,6 +102,9 @@ def main(): description = module.params.get('description') instance_id = module.params.get('instance_id') device_name = module.params.get('device_name') + wait = module.params.get('wait') + wait_timeout = module.params.get('wait_timeout') + snapshot_tags = module.params.get('snapshot_tags') if not volume_id and not instance_id or volume_id and instance_id: module.fail_json('One and only one of volume_id or instance_id must be specified') @@ -135,10 +124,22 @@ def main(): try: snapshot = ec2.create_snapshot(volume_id, description=description) + time_waited = 0 + if wait: + snapshot.update() + while snapshot.status != 'completed': + time.sleep(3) + snapshot.update() + time_waited += 3 + if wait_timeout and time_waited > wait_timeout: + module.fail_json(msg='Timed out while creating snapshot.') + for k, v in snapshot_tags.items(): + snapshot.add_tag(k, v) except boto.exception.BotoServerError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) - module.exit_json(changed=True, snapshot_id=snapshot.id) + module.exit_json(changed=True, snapshot_id=snapshot.id, volume_id=snapshot.volume_id, + volume_size=snapshot.volume_size, tags=snapshot.tags.copy()) # import module snippets from ansible.module_utils.basic import * diff --git a/library/cloud/ec2_tag b/library/cloud/ec2_tag index ca5a337646f..6c6eb94d218 100644 --- a/library/cloud/ec2_tag +++ b/library/cloud/ec2_tag @@ -19,7 +19,7 @@ DOCUMENTATION = ''' module: ec2_tag short_description: create and remove tag(s) to ec2 resources. description: - - Creates and removes tags from any EC2 resource. The resource is referenced by its resource id (e.g. an instance being i-XXXXXXX). It is designed to be used with complex args (tags), see the examples. This module has a dependency on python-boto. + - Creates, removes and lists tags from any EC2 resource. The resource is referenced by its resource id (e.g. an instance being i-XXXXXXX). It is designed to be used with complex args (tags), see the examples. This module has a dependency on python-boto.
version_added: "1.3" options: resource: @@ -30,7 +30,7 @@ options: aliases: [] state: description: - - Whether the tags should be present or absent on the resource. + - Whether the tags should be present or absent on the resource. Use list to interrogate the tags of an instance. required: false default: present choices: ['present', 'absent'] @@ -41,35 +41,9 @@ options: required: false default: null aliases: ['aws_region', 'ec2_region'] - aws_secret_key: - description: - - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. - required: false - default: None - aliases: ['ec2_secret_key', 'secret_key' ] - aws_access_key: - description: - - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. - required: false - default: None - aliases: ['ec2_access_key', 'access_key' ] - ec2_url: - description: - - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used. - required: false - default: null - aliases: [] - validate_certs: - description: - - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. - required: false - default: "yes" - choices: ["yes", "no"] - aliases: [] - version_added: "1.5" -requirements: [ "boto" ] author: Lester Wade +extends_documentation_fragment: aws ''' EXAMPLES = ''' @@ -115,14 +89,14 @@ def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( resource = dict(required=True), - tags = dict(required=True), - state = dict(default='present', choices=['present', 'absent']), + tags = dict(), + state = dict(default='present', choices=['present', 'absent', 'list']), ) ) module = AnsibleModule(argument_spec=argument_spec) resource = module.params.get('resource') - tags = module.params['tags'] + tags = module.params.get('tags') state = module.params.get('state') ec2 = ec2_connect(module) @@ -140,6 +114,8 @@ def main(): tagdict[tag.name] = tag.value if state == 'present': + if not tags: + module.fail_json(msg="tags argument is required when state is present") if set(tags.items()).issubset(set(tagdict.items())): module.exit_json(msg="Tags already exists in %s." %resource, changed=False) else: @@ -151,6 +127,8 @@ def main(): module.exit_json(msg="Tags %s created for resource %s." % (dictadd,resource), changed=True) if state == 'absent': + if not tags: + module.fail_json(msg="tags argument is required when state is absent") for (key, value) in set(tags.items()): if (key, value) not in set(tagdict.items()): baddict[key] = value @@ -162,10 +140,9 @@ def main(): tagger = ec2.delete_tags(resource, dictremove) gettags = ec2.get_all_tags(filters=filters) module.exit_json(msg="Tags %s removed for resource %s." % (dictremove,resource), changed=True) - -# print json.dumps({ -# "current_resource_tags": gettags, -# }) + + if state == 'list': + module.exit_json(changed=False, **tagdict) sys.exit(0) # import module snippets diff --git a/library/cloud/ec2_vol b/library/cloud/ec2_vol index bdd2eae3822..152094d9b9b 100644 --- a/library/cloud/ec2_vol +++ b/library/cloud/ec2_vol @@ -22,34 +22,30 @@ description: - creates an EBS volume and optionally attaches it to an instance. If both an instance ID and a device name is given and the instance has a device at the device name, then no volume is created and no attachment is made. This module has a dependency on python-boto. 
version_added: "1.1" options: - aws_secret_key: - description: - - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. - required: false - default: None - aliases: ['ec2_secret_key', 'secret_key' ] - aws_access_key: + instance: description: - - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. + - instance ID if you wish to attach the volume. required: false - default: None - aliases: ['ec2_access_key', 'access_key' ] - ec2_url: + default: null + aliases: [] + name: description: - - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used + - volume Name tag if you wish to attach an existing volume (requires instance) required: false default: null aliases: [] - instance: + version_added: "1.6" + id: description: - - instance ID if you wish to attach the volume. + - volume id if you wish to attach an existing volume (requires instance) or remove an existing volume required: false - default: null + default: null aliases: [] + version_added: "1.6" volume_size: description: - size of volume (in GB) to create. - required: true + required: false default: null aliases: [] iops: @@ -82,6 +78,7 @@ options: - snapshot ID on which to base the volume required: false default: null + version_added: "1.5" validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. @@ -90,9 +87,15 @@ options: choices: ["yes", "no"] aliases: [] version_added: "1.5" - -requirements: [ "boto" ] + state: + description: + - whether to ensure the volume is present or absent + required: false + default: present + choices: ['absent', 'present'] + version_added: "1.6" author: Lester Wade +extends_documentation_fragment: aws ''' EXAMPLES = ''' @@ -131,6 +134,34 @@ EXAMPLES = ''' volume_size: 5 with_items: ec2.instances register: ec2_vol + +# Example: Launch an instance and then add a volue if not already present +# * Nothing will happen if the volume is already attached. +# * Volume must exist in the same zone. + +- local_action: + module: ec2 + keypair: "{{ keypair }}" + image: "{{ image }}" + zone: YYYYYY + id: my_instance + wait: yes + count: 1 + register: ec2 + +- local_action: + module: ec2_vol + instance: "{{ item.id }}" + name: my_existing_volume_Name_tag + device_name: /dev/xvdf + with_items: ec2.instances + register: ec2_vol + +# Remove a volume +- location: action + module: ec2_vol + id: vol-XXXXXXXX + state: absent ''' # Note: this module needs to be made idempotent. Possible solution is to use resource tags with the volumes. 
@@ -147,82 +178,104 @@ except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) -def main(): - argument_spec = ec2_argument_spec() - argument_spec.update(dict( - instance = dict(), - volume_size = dict(required=True), - iops = dict(), - device_name = dict(), - zone = dict(aliases=['availability_zone', 'aws_zone', 'ec2_zone']), - snapshot = dict(), - ) - ) - module = AnsibleModule(argument_spec=argument_spec) - - instance = module.params.get('instance') - volume_size = module.params.get('volume_size') - iops = module.params.get('iops') - device_name = module.params.get('device_name') +def get_volume(module, ec2): + name = module.params.get('name') + id = module.params.get('id') zone = module.params.get('zone') - snapshot = module.params.get('snapshot') - - ec2 = ec2_connect(module) + filters = {} + volume_ids = None + if zone: + filters['availability_zone'] = zone + if name: + filters['tag:Name'] = name + if id: + volume_ids = [id] + try: + vols = ec2.get_all_volumes(volume_ids=volume_ids, filters=filters) + except boto.exception.BotoServerError, e: + module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) - # Here we need to get the zone info for the instance. This covers situation where - # instance is specified but zone isn't. - # Useful for playbooks chaining instance launch with volume create + attach and where the - # zone doesn't matter to the user. + if not vols: + module.fail_json(msg="Could not find volume in zone (if specified): %s" % (name or id)) + if len(vols) > 1: + module.fail_json(msg="Found more than one volume in zone (if specified) with name: %s" % name) + return vols[0] - if instance: - reservation = ec2.get_all_instances(instance_ids=instance) - inst = reservation[0].instances[0] - zone = inst.placement - # Check if there is a volume already mounted there. - if device_name: - if device_name in inst.block_device_mapping: - module.exit_json(msg="Volume mapping for %s already exists on instance %s" % (device_name, instance), - volume_id=inst.block_device_mapping[device_name].volume_id, - device=device_name, - changed=False) +def delete_volume(module, ec2): + vol = get_volume(module, ec2) + if not vol: + module.exit_json(changed=False) + else: + if vol.attachment_state() is not None: + adata = vol.attach_data + module.fail_json(msg="Volume %s is attached to an instance %s." % (vol.id, adata.instance_id)) + ec2.delete_volume(vol.id) + module.exit_json(changed=True) - # If custom iops is defined we use volume_type "io1" rather than the default of "standard" +def create_volume(module, ec2, zone): + name = module.params.get('name') + id = module.params.get('id') + instance = module.params.get('instance') + iops = module.params.get('iops') + volume_size = module.params.get('volume_size') + snapshot = module.params.get('snapshot') + # If custom iops is defined we use volume_type "io1" rather than the default of "standard" if iops: volume_type = 'io1' else: volume_type = 'standard' # If no instance supplied, try volume creation based on module parameters.
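+    # name/id selects an existing volume to attach, so it is incompatible + # with the creation-only parameters (volume_size, iops) checked just below.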
+ if name or id: + if not instance: + module.fail_json(msg = "If name or id is specified, instance must also be specified") + if iops or volume_size: + module.fail_json(msg = "Parameters are not compatible: [id or name] and [iops or volume_size]") - try: - volume = ec2.create_volume(volume_size, zone, snapshot, volume_type, iops) - while volume.status != 'available': - time.sleep(3) - volume.update() - except boto.exception.BotoServerError, e: - module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) + volume = get_volume(module, ec2) + if volume.attachment_state() is not None: + adata = volume.attach_data + if adata.instance_id != instance: + module.fail_json(msg = "Volume %s is already attached to another instance: %s" + % (name or id, adata.instance_id)) + else: + module.exit_json(msg="Volume %s is already mapped on instance %s: %s" % + (name or id, adata.instance_id, adata.device), + volume_id=id, + device=adata.device, + changed=False) + else: + try: + volume = ec2.create_volume(volume_size, zone, snapshot, volume_type, iops) + while volume.status != 'available': + time.sleep(3) + volume.update() + except boto.exception.BotoServerError, e: + module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) + return volume - # Attach the created volume. + +def attach_volume(module, ec2, volume, instance): + device_name = module.params.get('device_name') if device_name and instance: try: - attach = volume.attach(inst.id, device_name) + attach = volume.attach(instance.id, device_name) while volume.attachment_state() != 'attached': time.sleep(3) volume.update() except boto.exception.BotoServerError, e: - module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) - + module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) + # If device_name isn't set, make a choice based on best practices here: # http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html - + # In future this needs to be more dynamic but combining block device mapping best practices # (bounds for devices, as above) with instance.block_device_mapping data would be tricky. 
For me ;) # Use password data attribute to tell whether the instance is Windows or Linux - if device_name is None and instance: try: if not ec2.get_password_data(instance.id): @@ -240,11 +293,65 @@ def main(): except boto.exception.BotoServerError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) - print json.dumps({ - "volume_id": volume.id, - "device": device_name - }) - sys.exit(0) + +def main(): + argument_spec = ec2_argument_spec() + argument_spec.update(dict( + instance = dict(), + id = dict(), + name = dict(), + volume_size = dict(), + iops = dict(), + device_name = dict(), + zone = dict(aliases=['availability_zone', 'aws_zone', 'ec2_zone']), + snapshot = dict(), + state = dict(choices=['absent', 'present'], default='present') + ) + ) + module = AnsibleModule(argument_spec=argument_spec) + + id = module.params.get('id') + name = module.params.get('name') + instance = module.params.get('instance') + volume_size = module.params.get('volume_size') + iops = module.params.get('iops') + device_name = module.params.get('device_name') + zone = module.params.get('zone') + snapshot = module.params.get('snapshot') + state = module.params.get('state') + + ec2 = ec2_connect(module) + + if id and name: + module.fail_json(msg="Both id and name cannot be specified") + + if not (id or name or volume_size): + module.fail_json(msg="You must specify either volume_size or one of id or name") + + # Here we need to get the zone info for the instance. This covers situation where + # instance is specified but zone isn't. + # Useful for playbooks chaining instance launch with volume create + attach and where the + # zone doesn't matter to the user. + if instance: + reservation = ec2.get_all_instances(instance_ids=instance) + inst = reservation[0].instances[0] + zone = inst.placement + + # Check if there is a volume already mounted there. + if device_name: + if device_name in inst.block_device_mapping: + module.exit_json(msg="Volume mapping for %s already exists on instance %s" % (device_name, instance), + volume_id=inst.block_device_mapping[device_name].volume_id, + device=device_name, + changed=False) + + if state == 'absent': + delete_volume(module, ec2) + else: + volume = create_volume(module, ec2, zone) + if instance: + attach_volume(module, ec2, volume, inst) + module.exit_json(volume_id=volume.id, device=device_name) # import module snippets from ansible.module_utils.basic import * diff --git a/library/cloud/ec2_vpc b/library/cloud/ec2_vpc index 9b9fb95a0b2..1bd569f478c 100644 --- a/library/cloud/ec2_vpc +++ b/library/cloud/ec2_vpc @@ -46,7 +46,7 @@ options: choices: [ "yes", "no" ] subnets: description: - - "A dictionary array of subnets to add of the form: { cidr: ..., az: ... }. Where az is the desired availability zone of the subnet, but it is not required. All VPC subnets not in this list will be removed." + - 'A dictionary array of subnets to add of the form: { cidr: ..., az: ... , resource_tags: ... }. Where az is the desired availability zone of the subnet, but it is not required. Tags (i.e.: resource_tags) are also optional and use the dictionary form: { "Environment":"Dev", "Tier":"Web", ...}. All VPC subnets not in this list will be removed.' required: false default: null aliases: [] @@ -56,6 +56,13 @@ options: required: false default: null aliases: [] + resource_tags: + description: + - 'A dictionary array of resource tags of the form: { tag1: value1, tag2: value2 }. Tags in this list are used in conjunction with CIDR block to uniquely identify a VPC in lieu of vpc_id.
Therefore, if CIDR/Tag combination does not exist, a new VPC will be created. VPC tags not on this list will be ignored.' + required: false + default: null + aliases: [] + version_added: "1.6" internet_gateway: description: - Toggle whether there should be an Internet gateway attached to the VPC @@ -65,7 +72,7 @@ options: aliases: [] route_tables: description: - - "A dictionary array of route tables to add of the form: { subnets: [172.22.2.0/24, 172.22.3.0/24,], routes: [{ dest: 0.0.0.0/0, gw: igw},] }. Where the subnets list is those subnets the route table should be associated with, and the routes list is a list of routes to be in the table. The special keyword for the gw of igw specifies that you should the route should go through the internet gateway attached to the VPC. gw also accepts instance-ids in addition igw. This module is currently unable to affect the 'main' route table due to some limitations in boto, so you must explicitly define the associated subnets or they will be attached to the main table implicitly." + - 'A dictionary array of route tables to add of the form: { subnets: [172.22.2.0/24, 172.22.3.0/24,], routes: [{ dest: 0.0.0.0/0, gw: igw},] }. Where the subnets list is those subnets the route table should be associated with, and the routes list is a list of routes to be in the table. The special keyword igw for gw specifies that the route should go through the internet gateway attached to the VPC. gw also accepts instance-ids in addition to igw. This module is currently unable to affect the "main" route table due to some limitations in boto, so you must explicitly define the associated subnets or they will be attached to the main table implicitly.' required: false default: null aliases: [] @@ -127,6 +134,7 @@ EXAMPLES = ''' module: ec2_vpc state: present cidr_block: 172.23.0.0/16 + resource_tags: { "Environment":"Development" } region: us-west-2 # Full creation example with subnets and optional availability zones. # The absence or presence of subnets deletes or creates them respectively.
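The resource_tags match added here is a subset test: a VPC qualifies when every requested key/value pair appears among its current tags, and any extra tags on the VPC are ignored. A small illustration of that test in plain Python (tag values are hypothetical):

```python
# Subset semantics used for resource_tags matching: a VPC qualifies when
# every requested (key, value) pair appears among its current tags.
requested = {'Environment': 'Development'}                    # hypothetical input
vpc_tags = {'Environment': 'Development', 'Name': 'dev-vpc'}  # hypothetical VPC tags

print set(requested.items()).issubset(set(vpc_tags.items()))  # True
```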
@@ -134,13 +142,17 @@ EXAMPLES = ''' module: ec2_vpc state: present cidr_block: 172.22.0.0/16 + resource_tags: { "Environment":"Development" } subnets: - cidr: 172.22.1.0/24 az: us-west-2c + resource_tags: { "Environment":"Dev", "Tier" : "Web" } - cidr: 172.22.2.0/24 az: us-west-2b + resource_tags: { "Environment":"Dev", "Tier" : "App" } - cidr: 172.22.3.0/24 az: us-west-2a + resource_tags: { "Environment":"Dev", "Tier" : "DB" } internet_gateway: True route_tables: - subnets: @@ -193,9 +205,54 @@ def get_vpc_info(vpc): 'state': vpc.state, }) +def find_vpc(module, vpc_conn, vpc_id=None, cidr=None): + """ + Finds a VPC that matches a specific id or cidr + tags + + module : AnsibleModule object + vpc_conn: authenticated VPCConnection connection object + + Returns: + A VPC object that matches either an ID or CIDR and one or more tag values + """ + + if vpc_id == None and cidr == None: + module.fail_json( + msg='You must specify either a vpc id or a cidr block + list of unique tags, aborting' + ) + + found_vpcs = [] + + resource_tags = module.params.get('resource_tags') + + # Check for existing VPC by cidr_block or id + if vpc_id is not None: + found_vpcs = vpc_conn.get_all_vpcs(None, {'vpc-id': vpc_id, 'state': 'available',}) + + else: + previous_vpcs = vpc_conn.get_all_vpcs(None, {'cidr': cidr, 'state': 'available'}) + + for vpc in previous_vpcs: + # Get all tags for each of the found VPCs + vpc_tags = dict((t.name, t.value) for t in vpc_conn.get_all_tags(filters={'resource-id': vpc.id})) + + # If the supplied resource_tags are a subset of the VPC's tags, we found our VPC + if resource_tags and set(resource_tags.items()).issubset(set(vpc_tags.items())): + found_vpcs.append(vpc) + + found_vpc = None + + if len(found_vpcs) == 1: + found_vpc = found_vpcs[0] + + if len(found_vpcs) > 1: + module.fail_json(msg='Found more than one vpc based on the supplied criteria, aborting') + + return (found_vpc) + def create_vpc(module, vpc_conn): """ - Creates a new VPC + Creates a new or modifies an existing VPC. module : AnsibleModule object vpc_conn: authenticated VPCConnection connection object @@ -217,20 +274,12 @@ def create_vpc(module, vpc_conn): wait_timeout = int(module.params.get('wait_timeout')) changed = False - # Check for existing VPC by cidr_block or id - if id != None: - filter_dict = {'vpc-id':id, 'state': 'available',} - previous_vpcs = vpc_conn.get_all_vpcs(None, filter_dict) - else: - filter_dict = {'cidr': cidr_block, 'state': 'available'} - previous_vpcs = vpc_conn.get_all_vpcs(None, filter_dict) + # Check for existing VPC by cidr_block + tags or id + previous_vpc = find_vpc(module, vpc_conn, id, cidr_block) - if len(previous_vpcs) > 1: - module.fail_json(msg='EC2 returned more than one VPC, aborting') - - if len(previous_vpcs) == 1: + if previous_vpc is not None: changed = False - vpc = previous_vpcs[0] + vpc = previous_vpc else: changed = True try: @@ -255,7 +304,21 @@ def create_vpc(module, vpc_conn): module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) # Done with base VPC, now change to attributes and features.
- + + # Add resource tags + vpc_spec_tags = module.params.get('resource_tags') + vpc_tags = dict((t.name, t.value) for t in vpc_conn.get_all_tags(filters={'resource-id': vpc.id})) + + if vpc_spec_tags and not set(vpc_spec_tags.items()).issubset(set(vpc_tags.items())): + new_tags = {} + + for (key, value) in set(vpc_spec_tags.items()): + if (key, value) not in set(vpc_tags.items()): + new_tags[key] = value + + if new_tags: + vpc_conn.create_tags(vpc.id, new_tags) + # boto doesn't appear to have a way to determine the existing # value of the dns attributes, so we just set them. @@ -269,6 +332,7 @@ def create_vpc(module, vpc_conn): module.fail_json(msg='subnets needs to be a list of cidr blocks') current_subnets = vpc_conn.get_all_subnets(filters={ 'vpc_id': vpc.id }) + # First add all new subnets for subnet in subnets: add_subnet = True @@ -277,10 +341,22 @@ def create_vpc(module, vpc_conn): add_subnet = False if add_subnet: try: - vpc_conn.create_subnet(vpc.id, subnet['cidr'], subnet.get('az', None)) + new_subnet = vpc_conn.create_subnet(vpc.id, subnet['cidr'], subnet.get('az', None)) + new_subnet_tags = subnet.get('resource_tags', None) + if new_subnet_tags: + # Sometimes AWS takes its time to create a subnet, and so using the new subnet's id + # to create tags results in an exception. + # boto doesn't seem to refresh 'state' of the newly created subnet, i.e.: it's always 'pending' + # so I resorted to polling vpc_conn.get_all_subnets with the id of the newly added subnet + while len(vpc_conn.get_all_subnets(filters={ 'subnet-id': new_subnet.id })) == 0: + time.sleep(0.1) + + vpc_conn.create_tags(new_subnet.id, new_subnet_tags) + changed = True except EC2ResponseError, e: module.fail_json(msg='Unable to create subnet {0}, error: {1}'.format(subnet['cidr'], e)) + # Now delete all absent subnets for csubnet in current_subnets: delete_subnet = True @@ -332,7 +408,7 @@ def create_vpc(module, vpc_conn): if not isinstance(route_tables, list): module.fail_json(msg='route tables need to be a list of dictionaries') - # Work through each route table and update/create to match dictionary array +    # Work through each route table and update/create to match dictionary array all_route_tables = [] for rt in route_tables: try: @@ -350,7 +426,7 @@ def create_vpc(module, vpc_conn): # Associate with subnets for sn in rt['subnets']: - rsn = vpc_conn.get_all_subnets(filters={'cidr': sn}) + rsn = vpc_conn.get_all_subnets(filters={'cidr': sn, 'vpc_id': vpc.id }) if len(rsn) != 1: module.fail_json( msg='The subnet {0} to associate with route_table {1} ' \ @@ -360,7 +436,7 @@ def create_vpc(module, vpc_conn): # Disassociate then associate since we don't have replace old_rt = vpc_conn.get_all_route_tables( - filters={'association.subnet_id': rsn.id} + filters={'association.subnet_id': rsn.id, 'vpc_id': vpc.id} ) if len(old_rt) == 1: old_rt = old_rt[0] @@ -405,14 +481,15 @@ def create_vpc(module, vpc_conn): created_vpc_id = vpc.id returned_subnets = [] current_subnets = vpc_conn.get_all_subnets(filters={ 'vpc_id': vpc.id }) + for sn in current_subnets: returned_subnets.append({ + 'resource_tags': dict((t.name, t.value) for t in vpc_conn.get_all_tags(filters={'resource-id': sn.id})), 'cidr': sn.cidr_block, 'az': sn.availability_zone, 'id': sn.id, }) - return (vpc_dict, created_vpc_id, returned_subnets, changed) def terminate_vpc(module, vpc_conn, vpc_id=None, cidr=None): @@ -434,23 +511,10 @@ def terminate_vpc(module, vpc_conn, vpc_id=None, cidr=None): vpc_dict = {} terminated_vpc_id = '' changed = False - - if vpc_id == None
and cidr == None: - module.fail_json( - msg='You must either specify a vpc id or a cidr '\ - 'block to terminate a VPC, aborting' - ) - if vpc_id is not None: - vpc_rs = vpc_conn.get_all_vpcs(vpc_id) - else: - vpc_rs = vpc_conn.get_all_vpcs(filters={'cidr': cidr}) - if len(vpc_rs) > 1: - module.fail_json( - msg='EC2 returned more than one VPC for id {0} ' \ - 'or cidr {1}, aborting'.format(vpc_id,vidr) - ) - if len(vpc_rs) == 1: - vpc = vpc_rs[0] + + vpc = find_vpc(module, vpc_conn, vpc_id, cidr) + + if vpc is not None: if vpc.state == 'available': terminated_vpc_id=vpc.id vpc_dict=get_vpc_info(vpc) @@ -491,13 +555,14 @@ def main(): argument_spec.update(dict( cidr_block = dict(), instance_tenancy = dict(choices=['default', 'dedicated'], default='default'), - wait = dict(choices=BOOLEANS, default=False), + wait = dict(type='bool', default=False), wait_timeout = dict(default=300), - dns_support = dict(choices=BOOLEANS, default=True), - dns_hostnames = dict(choices=BOOLEANS, default=True), + dns_support = dict(type='bool', default=True), + dns_hostnames = dict(type='bool', default=True), subnets = dict(type='list'), vpc_id = dict(), - internet_gateway = dict(choices=BOOLEANS, default=False), + internet_gateway = dict(type='bool', default=False), + resource_tags = dict(type='dict'), route_tables = dict(type='list'), state = dict(choices=['present', 'absent'], default='present'), ) @@ -527,11 +592,6 @@ def main(): if module.params.get('state') == 'absent': vpc_id = module.params.get('vpc_id') cidr = module.params.get('cidr_block') - if vpc_id == None and cidr == None: - module.fail_json( - msg='You must either specify a vpc id or a cidr '\ - 'block to terminate a VPC, aborting' - ) (changed, vpc_dict, new_vpc_id) = terminate_vpc(module, vpc_conn, vpc_id, cidr) subnets_changed = None elif module.params.get('state') == 'present': diff --git a/library/cloud/elasticache b/library/cloud/elasticache index 7cbd72d736d..8c82f2fcc20 100644 --- a/library/cloud/elasticache +++ b/library/cloud/elasticache @@ -58,6 +58,12 @@ options: - The port number on which each of the cache nodes will accept connections required: false default: 11211 + security_group_ids: + description: + - A list of vpc security group ids to associate with this cache cluster.
Only use if inside a vpc + required: false + default: ['default'] + version_added: "1.6" cache_security_groups: description: - A list of cache security group names to associate with this cache cluster @@ -152,7 +158,7 @@ class ElastiCacheManager(object): EXIST_STATUSES = ['available', 'creating', 'rebooting', 'modifying'] def __init__(self, module, name, engine, cache_engine_version, node_type, - num_nodes, cache_port, cache_security_groups, zone, wait, + num_nodes, cache_port, cache_security_groups, security_group_ids, zone, wait, hard_modify, aws_access_key, aws_secret_key, region): self.module = module self.name = name @@ -162,6 +168,7 @@ class ElastiCacheManager(object): self.num_nodes = num_nodes self.cache_port = cache_port self.cache_security_groups = cache_security_groups + self.security_group_ids = security_group_ids self.zone = zone self.wait = wait self.hard_modify = hard_modify @@ -217,6 +224,7 @@ class ElastiCacheManager(object): engine=self.engine, engine_version=self.cache_engine_version, cache_security_group_names=self.cache_security_groups, + security_group_ids=self.security_group_ids, preferred_availability_zone=self.zone, port=self.cache_port) except boto.exception.BotoServerError, e: @@ -291,6 +299,7 @@ class ElastiCacheManager(object): num_cache_nodes=self.num_nodes, cache_node_ids_to_remove=nodes_to_remove, cache_security_group_names=self.cache_security_groups, + security_group_ids=self.security_group_ids, apply_immediately=True, engine_version=self.cache_engine_version) except boto.exception.BotoServerError, e: @@ -377,12 +386,21 @@ class ElastiCacheManager(object): if self.data[key] != value: return True - # Check security groups + # Check cache security groups cache_security_groups = [] for sg in self.data['CacheSecurityGroups']: cache_security_groups.append(sg['CacheSecurityGroupName']) if set(cache_security_groups) - set(self.cache_security_groups): return True + + # check vpc security groups + vpc_security_groups = [] + security_groups = self.data['SecurityGroups'] or [] + for sg in security_groups: + vpc_security_groups.append(sg['SecurityGroupId']) + if set(vpc_security_groups) - set(self.security_group_ids): + return True + return False def _requires_destroy_and_create(self): @@ -469,9 +487,11 @@ def main(): cache_port={'required': False, 'default': 11211, 'type': 'int'}, cache_security_groups={'required': False, 'default': ['default'], 'type': 'list'}, + security_group_ids={'required': False, 'default': [], + 'type': 'list'}, zone={'required': False, 'default': None}, - wait={'required': False, 'choices': BOOLEANS, 'default': True}, - hard_modify={'required': False, 'choices': BOOLEANS, 'default': False} + wait={'required': False, 'type' : 'bool', 'default': True}, + hard_modify={'required': False, 'type': 'bool', 'default': False} ) ) @@ -489,6 +509,7 @@ def main(): num_nodes = module.params['num_nodes'] cache_port = module.params['cache_port'] cache_security_groups = module.params['cache_security_groups'] + security_group_ids = module.params['security_group_ids'] zone = module.params['zone'] wait = module.params['wait'] hard_modify = module.params['hard_modify'] @@ -502,7 +523,8 @@ def main(): elasticache_manager = ElastiCacheManager(module, name, engine, cache_engine_version, node_type, num_nodes, cache_port, - cache_security_groups, zone, wait, + cache_security_groups, + security_group_ids, zone, wait, hard_modify, aws_access_key, aws_secret_key, region) diff --git a/library/cloud/gc_storage b/library/cloud/gc_storage index cbf72aa8e92..8696f8e965d 
100644 --- a/library/cloud/gc_storage +++ b/library/cloud/gc_storage @@ -152,11 +152,12 @@ def key_check(module, gs, bucket, obj): def keysum(module, gs, bucket, obj): bucket = gs.lookup(bucket) key_check = bucket.get_key(obj) - if key_check: - md5_remote = key_check.etag[1:-1] - etag_multipart = md5_remote.find('-')!=-1 #Check for multipart, etag is not md5 - if etag_multipart is True: - module.fail_json(msg="Files uploaded with multipart of gs are not supported with checksum, unable to compute checksum.") + if not key_check: + return None + md5_remote = key_check.etag[1:-1] + etag_multipart = '-' in md5_remote # Check for multipart, etag is not md5 + if etag_multipart is True: + module.fail_json(msg="Files uploaded with multipart of gs are not supported with checksum, unable to compute checksum.") return md5_remote def bucket_check(module, gs, bucket): diff --git a/library/cloud/gce b/library/cloud/gce index b14ce8996da..2d95c8143bc 100755 --- a/library/cloud/gce +++ b/library/cloud/gce @@ -351,7 +351,7 @@ def main(): metadata = dict(), name = dict(), network = dict(default='default'), - persistent_boot_disk = dict(type='bool', choices=BOOLEANS, default=False), + persistent_boot_disk = dict(type='bool', default=False), state = dict(choices=['active', 'present', 'absent', 'deleted'], default='present'), tags = dict(type='list'), diff --git a/library/cloud/gce_lb b/library/cloud/gce_lb index 3e22c216998..4d7190d8752 100644 --- a/library/cloud/gce_lb +++ b/library/cloud/gce_lb @@ -111,21 +111,21 @@ options: choices: ["active", "present", "absent", "deleted"] aliases: [] service_account_email: - version_added: 1.5.1 + version_added: "1.6" description: - service account email required: false default: null aliases: [] pem_file: - version_added: 1.5.1 + version_added: "1.6" description: - path to the pem file associated with the service account email required: false default: null aliases: [] project_id: - version_added: 1.5.1 + version_added: "1.6" description: - your GCE project ID required: false diff --git a/library/cloud/gce_net b/library/cloud/gce_net index 4e731f196d3..c2c0b30452d 100644 --- a/library/cloud/gce_net +++ b/library/cloud/gce_net @@ -74,21 +74,21 @@ options: choices: ["active", "present", "absent", "deleted"] aliases: [] service_account_email: - version_added: 1.5.1 + version_added: "1.6" description: - service account email required: false default: null aliases: [] pem_file: - version_added: 1.5.1 + version_added: "1.6" description: - path to the pem file associated with the service account email required: false default: null aliases: [] project_id: - version_added: 1.5.1 + version_added: "1.6" description: - your GCE project ID required: false diff --git a/library/cloud/gce_pd b/library/cloud/gce_pd index a8e631a5522..e5ea6cc4ad8 100644 --- a/library/cloud/gce_pd +++ b/library/cloud/gce_pd @@ -76,21 +76,21 @@ options: default: "us-central1-b" aliases: [] service_account_email: - version_added: 1.5.1 + version_added: "1.6" description: - service account email required: false default: null aliases: [] pem_file: - version_added: 1.5.1 + version_added: "1.6" description: - path to the pem file associated with the service account email required: false default: null aliases: [] project_id: - version_added: 1.5.1 + version_added: "1.6" description: - your GCE project ID required: false @@ -127,10 +127,9 @@ except ImportError: def main(): module = AnsibleModule( argument_spec = dict( - detach_only = dict(choice=BOOLEANS), + detach_only = dict(type='bool'), instance_name = dict(), - 
mode = dict(default='READ_ONLY', - choices=['READ_WRITE', 'READ_ONLY']), + mode = dict(default='READ_ONLY', choices=['READ_WRITE', 'READ_ONLY']), name = dict(required=True), size_gb = dict(default=10), state = dict(default='present'), diff --git a/library/cloud/keystone_user b/library/cloud/keystone_user index 206fd68b070..d6529b537ed 100644 --- a/library/cloud/keystone_user +++ b/library/cloud/keystone_user @@ -26,6 +26,7 @@ options: - The tenant login_user belongs to required: false default: None + version_added: "1.3" token: description: - The token to be used in case the password is not specified diff --git a/library/cloud/nova_compute b/library/cloud/nova_compute index d0bc79b1a2a..049c8116bbc 100644 --- a/library/cloud/nova_compute +++ b/library/cloud/nova_compute @@ -107,6 +107,12 @@ options: - The amount of time the module should wait for the VM to get into active state required: false default: 180 + user_data: + description: + - Opaque blob of data which is made available to the instance + required: false + default: None + version_added: "1.6" requirements: ["novaclient"] ''' @@ -157,6 +163,8 @@ def _create_server(module, nova): 'meta' : module.params['meta'], 'key_name': module.params['key_name'], 'security_groups': module.params['security_groups'].split(','), + # userdata is unhyphenated in novaclient, but hyphenated here for consistency with the ec2 module: + 'userdata': module.params['user_data'], } if not module.params['key_name']: del bootkwargs['key_name'] @@ -193,7 +201,12 @@ def _get_server_state(module, nova): try: servers = nova.servers.list(True, {'name': module.params['name']}) if servers: - server = [x for x in servers if x.name == module.params['name']][0] + # the {'name': module.params['name']} will also return servers + # with names that partially match the server name, so we have to + # strictly filter here + servers = [x for x in servers if x.name == module.params['name']] + if servers: + server = servers[0] except Exception, e: module.fail_json(msg = "Error in getting the server list: %s" % e.message) if server and module.params['state'] == 'present': @@ -227,7 +240,8 @@ def main(): meta = dict(default=None), wait = dict(default='yes', choices=['yes', 'no']), wait_for = dict(default=180), - state = dict(default='present', choices=['absent', 'present']) + state = dict(default='present', choices=['absent', 'present']), + user_data = dict(default=None) ), ) diff --git a/library/cloud/nova_keypair b/library/cloud/nova_keypair index 19d3fa49b95..18674a1220a 100644 --- a/library/cloud/nova_keypair +++ b/library/cloud/nova_keypair @@ -18,7 +18,7 @@ # along with this software. If not, see . try: - from novaclient.v1_1 import client + from novaclient.v1_1 import client as nova_client from novaclient import exceptions import time except ImportError: diff --git a/library/cloud/quantum_floating_ip b/library/cloud/quantum_floating_ip index c69f2b16587..2ad761ec3b7 100644 --- a/library/cloud/quantum_floating_ip +++ b/library/cloud/quantum_floating_ip @@ -80,6 +80,7 @@ options: - The name of the network of the port to associate with the floating ip. Necessary when the VM has multiple networks.
required: false default: None + version_added: "1.5" requirements: ["novaclient", "quantumclient", "neutronclient", "keystoneclient"] ''' diff --git a/library/cloud/quantum_subnet b/library/cloud/quantum_subnet index 489ebb3440c..17f7a6a0056 100644 --- a/library/cloud/quantum_subnet +++ b/library/cloud/quantum_subnet @@ -98,6 +98,7 @@ options: - DNS nameservers for this subnet, comma-separated required: false default: None + version_added: "1.4" allocation_pool_start: description: - From the subnet pool the starting address from which the IP should be allocated @@ -259,7 +260,7 @@ def main(): tenant_name = dict(default=None), state = dict(default='present', choices=['absent', 'present']), ip_version = dict(default='4', choices=['4', '6']), - enable_dhcp = dict(default='true', choices=BOOLEANS), + enable_dhcp = dict(default='true', type='bool'), gateway_ip = dict(default=None), dns_nameservers = dict(default=None), allocation_pool_start = dict(default=None), diff --git a/library/cloud/rax b/library/cloud/rax index 230f80df5e2..af533bca126 100644 --- a/library/cloud/rax +++ b/library/cloud/rax @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax @@ -23,52 +25,6 @@ description: waits for it to be 'running'. version_added: "1.2" options: - api_key: - description: - - Rackspace API key (overrides I(credentials)) - aliases: - - password - auth_endpoint: - description: - - The URI of the authentication service - default: https://identity.api.rackspacecloud.com/v2.0/ - version_added: 1.5 - credentials: - description: - - File to find the Rackspace credentials in (ignored if I(api_key) and - I(username) are provided) - default: null - aliases: - - creds_file - env: - description: - - Environment as configured in ~/.pyrax.cfg, - see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration) - version_added: 1.5 - identity_type: - description: - - Authentication machanism to use, such as rackspace or keystone - default: rackspace - version_added: 1.5 - region: - description: - - Region to create an instance in - default: DFW - tenant_id: - description: - - The tenant ID used for authentication - version_added: 1.5 - tenant_name: - description: - - The tenant name used for authentication - version_added: 1.5 - username: - description: - - Rackspace username (overrides I(credentials)) - verify_ssl: - description: - - Whether or not to require SSL validation of API endpoints - version_added: 1.5 auto_increment: description: - Whether or not to increment a single number with the name of the @@ -89,7 +45,9 @@ options: disk_config: description: - Disk partitioning strategy - choices: ['auto', 'manual'] + choices: + - auto + - manual version_added: '1.4' default: auto exact_count: @@ -98,6 +56,17 @@ options: state=active/present default: no version_added: 1.4 + extra_client_args: + description: + - A hash of key/value pairs to be used when creating the cloudservers + client. This is considered an advanced option, use it wisely and + with caution. + version_added: 1.6 + extra_create_args: + description: + - A hash of key/value pairs to be used when creating a new server. 
+ This is considered an advanced option, use it wisely and with caution. + version_added: 1.6 files: description: - Files to insert into the instance. remotefilename:localcontent @@ -124,7 +93,8 @@ options: description: - key pair to use on the instance default: null - aliases: ['keypair'] + aliases: + - keypair meta: description: - A hash of metadata to associate with the instance @@ -138,31 +108,30 @@ options: - The network to attach to the instances. If specified, you must include ALL networks including the public and private interfaces. Can be C(id) or C(label). - default: ['public', 'private'] + default: + - public + - private version_added: 1.4 state: description: - Indicate desired state of the resource - choices: ['present', 'absent'] + choices: + - present + - absent default: present wait: description: - wait for the instance to be in state 'running' before returning default: "no" - choices: [ "yes", "no" ] + choices: + - "yes" + - "no" wait_timeout: description: - how long before wait gives up, in seconds default: 300 -requirements: [ "pyrax" ] author: Jesse Keating, Matt Martz -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace.openstack ''' EXAMPLES = ''' @@ -206,18 +175,18 @@ EXAMPLES = ''' register: rax ''' -import sys -import time import os import re +import time + from uuid import UUID from types import NoneType try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax is required for this module'") - sys.exit(1) + HAS_PYRAX = False ACTIVE_STATUSES = ('ACTIVE', 'BUILD', 'HARD_REBOOT', 'MIGRATING', 'PASSWORD', 'REBOOT', 'REBUILD', 'RESCUE', 'RESIZE', 'REVERT_RESIZE') @@ -246,7 +215,8 @@ def pyrax_object_to_dict(obj): def create(module, names, flavor, image, meta, key_name, files, - wait, wait_timeout, disk_config, group, nics): + wait, wait_timeout, disk_config, group, nics, + extra_create_args): cs = pyrax.cloudservers changed = False @@ -266,7 +236,8 @@ def create(module, names, flavor, image, meta, key_name, files, flavor=flavor, meta=meta, key_name=key_name, files=files, nics=nics, - disk_config=disk_config)) + disk_config=disk_config, + **extra_create_args)) except Exception, e: module.fail_json(msg='%s' % e.message) else: @@ -405,11 +376,19 @@ def delete(module, instance_ids, wait, wait_timeout): def cloudservers(module, state, name, flavor, image, meta, key_name, files, wait, wait_timeout, disk_config, count, group, instance_ids, exact_count, networks, count_offset, - auto_increment): + auto_increment, extra_create_args): cs = pyrax.cloudservers cnw = pyrax.cloud_networks + if not cnw: + module.fail_json(msg='Failed to instantiate client. 
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + servers = [] + for key, value in meta.items(): + meta[key] = repr(value) + # Add the group meta key if group and 'group' not in meta: meta['group'] = group @@ -602,7 +581,7 @@ def cloudservers(module, state, name, flavor, image, meta, key_name, files, names = [name] * (count - len(servers)) create(module, names, flavor, image, meta, key_name, files, - wait, wait_timeout, disk_config, group, nics) + wait, wait_timeout, disk_config, group, nics, extra_create_args) elif state == 'absent': if instance_ids is None: @@ -642,11 +621,13 @@ def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( - auto_increment=dict(choices=BOOLEANS, default=True, type='bool'), + auto_increment=dict(default=True, type='bool'), count=dict(default=1, type='int'), count_offset=dict(default=1, type='int'), disk_config=dict(choices=['auto', 'manual']), - exact_count=dict(choices=BOOLEANS, default=False, type='bool'), + exact_count=dict(default=False, type='bool'), + extra_client_args=dict(type='dict', default={}), + extra_create_args=dict(type='dict', default={}), files=dict(type='dict', default={}), flavor=dict(), group=dict(), @@ -658,7 +639,7 @@ def main(): networks=dict(type='list', default=['public', 'private']), service=dict(), state=dict(default='present', choices=['present', 'absent']), - wait=dict(choices=BOOLEANS, default=False, type='bool'), + wait=dict(default=False, type='bool'), wait_timeout=dict(default=300), ) ) @@ -668,6 +649,9 @@ def main(): required_together=rax_required_together(), ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + service = module.params.get('service') if service is not None: @@ -682,6 +666,8 @@ def main(): if disk_config: disk_config = disk_config.upper() exact_count = module.params.get('exact_count', False) + extra_client_args = module.params.get('extra_client_args') + extra_create_args = module.params.get('extra_create_args') files = module.params.get('files') flavor = module.params.get('flavor') group = module.params.get('group') @@ -697,10 +683,23 @@ def main(): setup_rax_module(module, pyrax) + if extra_client_args: + pyrax.cloudservers = pyrax.connect_to_cloudservers( + region=pyrax.cloudservers.client.region_name, + **extra_client_args) + client = pyrax.cloudservers.client + if 'bypass_url' in extra_client_args: + client.management_url = extra_client_args['bypass_url'] + + if pyrax.cloudservers is None: + module.fail_json(msg='Failed to instantiate client. This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + cloudservers(module, state, name, flavor, image, meta, key_name, files, wait, wait_timeout, disk_config, count, group, instance_ids, exact_count, networks, count_offset, - auto_increment) + auto_increment, extra_create_args) # import module snippets diff --git a/library/cloud/rax_cbs b/library/cloud/rax_cbs new file mode 100644 index 00000000000..443c833e7d0 --- /dev/null +++ b/library/cloud/rax_cbs @@ -0,0 +1,236 @@ +#!/usr/bin/python +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. 
+# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . + +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments +DOCUMENTATION = ''' +--- +module: rax_cbs +short_description: Manipulate Rackspace Cloud Block Storage Volumes +description: + - Manipulate Rackspace Cloud Block Storage Volumes +version_added: 1.6 +options: + description: + description: + - Description to give the volume being created + default: null + meta: + description: + - A hash of metadata to associate with the volume + default: null + name: + description: + - Name to give the volume being created + default: null + required: true + size: + description: + - Size of the volume to create in Gigabytes + default: 100 + required: true + snapshot_id: + description: + - The id of the snapshot to create the volume from + default: null + state: + description: + - Indicate desired state of the resource + choices: + - present + - absent + default: present + required: true + volume_type: + description: + - Type of the volume being created + choices: + - SATA + - SSD + default: SATA + required: true + wait: + description: + - wait for the volume to be in state 'available' before returning + default: "no" + choices: + - "yes" + - "no" + wait_timeout: + description: + - how long before wait gives up, in seconds + default: 300 +author: Christopher H. Laco, Matt Martz +extends_documentation_fragment: rackspace.openstack +''' + +EXAMPLES = ''' +- name: Build a Block Storage Volume + gather_facts: False + hosts: local + connection: local + tasks: + - name: Storage volume create request + local_action: + module: rax_cbs + credentials: ~/.raxpub + name: my-volume + description: My Volume + volume_type: SSD + size: 150 + region: DFW + wait: yes + state: present + meta: + app: my-cool-app + register: my_volume +''' + +import sys + +from uuid import UUID +from types import NoneType + +try: + import pyrax + HAS_PYRAX = True +except ImportError: + HAS_PYRAX = False + +NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) +VOLUME_STATUS = ('available', 'attaching', 'creating', 'deleting', 'in-use', + 'error', 'error_deleting') + + +def cloud_block_storage(module, state, name, description, meta, size, + snapshot_id, volume_type, wait, wait_timeout): + for arg_name, arg in (('state', state), ('name', name), ('size', size), + ('volume_type', volume_type)): + if not arg: + module.fail_json(msg='%s is required for rax_cbs' % arg_name) + + if size < 100: + module.fail_json(msg='"size" must be greater than or equal to 100') + + changed = False + volume = None + instance = {} + + cbs = pyrax.cloud_blockstorage + + if cbs is None: + module.fail_json(msg='Failed to instantiate client.
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + + try: + UUID(name) + volume = cbs.get(name) + except ValueError: + try: + volume = cbs.find(name=name) + except Exception, e: + module.fail_json(msg='%s' % e) + + if state == 'present': + if not volume: + try: + volume = cbs.create(name, size=size, volume_type=volume_type, + description=description, + metadata=meta, + snapshot_id=snapshot_id) + changed = True + except Exception, e: + module.fail_json(msg='%s' % e.message) + else: + if wait: + attempts = wait_timeout / 5 + pyrax.utils.wait_for_build(volume, interval=5, + attempts=attempts) + + volume.get() + for key, value in vars(volume).iteritems(): + if (isinstance(value, NON_CALLABLES) and + not key.startswith('_')): + instance[key] = value + + result = dict(changed=changed, volume=instance) + + if volume.status == 'error': + result['msg'] = '%s failed to build' % volume.id + elif wait and volume.status not in VOLUME_STATUS: + result['msg'] = 'Timeout waiting on %s' % volume.id + + if 'msg' in result: + module.fail_json(**result) + else: + module.exit_json(**result) + + elif state == 'absent': + if volume: + try: + volume.delete() + changed = True + except Exception, e: + module.fail_json(msg='%s' % e.message) + + module.exit_json(changed=changed, volume=instance) + + +def main(): + argument_spec = rax_argument_spec() + argument_spec.update( + dict( + description=dict(), + meta=dict(type='dict', default={}), + name=dict(required=True), + size=dict(type='int', default=100), + snapshot_id=dict(), + state=dict(default='present', choices=['present', 'absent']), + volume_type=dict(choices=['SSD', 'SATA'], default='SATA'), + wait=dict(type='bool', default=False), + wait_timeout=dict(type='int', default=300) + ) + ) + + module = AnsibleModule( + argument_spec=argument_spec, + required_together=rax_required_together() + ) + + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + + description = module.params.get('description') + meta = module.params.get('meta') + name = module.params.get('name') + size = module.params.get('size') + snapshot_id = module.params.get('snapshot_id') + state = module.params.get('state') + volume_type = module.params.get('volume_type') + wait = module.params.get('wait') + wait_timeout = module.params.get('wait_timeout') + + setup_rax_module(module, pyrax) + + cloud_block_storage(module, state, name, description, meta, size, + snapshot_id, volume_type, wait, wait_timeout) + +# import module snippets +from ansible.module_utils.basic import * +from ansible.module_utils.rax import * + +### invoke the module +main() diff --git a/library/cloud/rax_cbs_attachments b/library/cloud/rax_cbs_attachments new file mode 100644 index 00000000000..bc7dba9eec2 --- /dev/null +++ b/library/cloud/rax_cbs_attachments @@ -0,0 +1,268 @@ +#!/usr/bin/python +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . 
+ +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments +DOCUMENTATION = ''' +--- +module: rax_cbs_attachments +short_description: Manipulate Rackspace Cloud Block Storage Volume Attachments +description: + - Manipulate Rackspace Cloud Block Storage Volume Attachments +version_added: 1.6 +options: + device: + description: + - The device path to attach the volume to, e.g. /dev/xvde + default: null + required: true + volume: + description: + - Name or id of the volume to attach/detach + default: null + required: true + server: + description: + - Name or id of the server to attach/detach + default: null + required: true + state: + description: + - Indicate desired state of the resource + choices: + - present + - absent + default: present + required: true + wait: + description: + - wait for the volume to be in 'in-use'/'available' state before returning + default: "no" + choices: + - "yes" + - "no" + wait_timeout: + description: + - how long before wait gives up, in seconds + default: 300 +author: Christopher H. Laco, Matt Martz +extends_documentation_fragment: rackspace.openstack +''' + +EXAMPLES = ''' +- name: Attach a Block Storage Volume + gather_facts: False + hosts: local + connection: local + tasks: + - name: Storage volume attach request + local_action: + module: rax_cbs_attachments + credentials: ~/.raxpub + volume: my-volume + server: my-server + device: /dev/xvdd + region: DFW + wait: yes + state: present + register: my_volume +''' + +import sys + +from uuid import UUID +from types import NoneType + +try: + import pyrax + HAS_PYRAX = True +except ImportError: + HAS_PYRAX = False + +NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) + + +def cloud_block_storage_attachments(module, state, volume, server, device, + wait, wait_timeout): + for arg_name, arg in (('state', state), ('volume', volume), + ('server', server), ('device', device)): + if not arg: + module.fail_json(msg='%s is required for rax_cbs_attachments' % + arg_name) + + cbs = pyrax.cloud_blockstorage + cs = pyrax.cloudservers + + if cbs is None or cs is None: + module.fail_json(msg='Failed to instantiate client.
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + + changed = False + instance = {} + + try: + UUID(volume) + volume = cbs.get(volume) + except ValueError: + try: + volume = cbs.find(name=volume) + except Exception, e: + module.fail_json(msg='%s' % e) + + if not volume: + module.fail_json(msg='No matching storage volumes were found') + + if state == 'present': + try: + UUID(server) + server = cs.servers.get(server) + except ValueError: + servers = cs.servers.list(search_opts=dict(name='^%s$' % server)) + if not servers: + module.fail_json(msg='No Server was matched by name, ' + 'try using the Server ID instead') + if len(servers) > 1: + module.fail_json(msg='Multiple servers matched by name, ' + 'try using the Server ID instead') + + # We made it this far, grab the first and hopefully only server + # in the list + server = servers[0] + + if (volume.attachments and + volume.attachments[0]['server_id'] == server.id): + changed = False + elif volume.attachments: + module.fail_json(msg='Volume is attached to another server') + else: + try: + volume.attach_to_instance(server, mountpoint=device) + changed = True + except Exception, e: + module.fail_json(msg='%s' % e.message) + + volume.get() + + for key, value in vars(volume).iteritems(): + if (isinstance(value, NON_CALLABLES) and + not key.startswith('_')): + instance[key] = value + + result = dict(changed=changed, volume=instance) + + if volume.status == 'error': + result['msg'] = '%s failed to build' % volume.id + elif wait: + attempts = wait_timeout / 5 + pyrax.utils.wait_until(volume, 'status', 'in-use', + interval=5, attempts=attempts) + + if 'msg' in result: + module.fail_json(**result) + else: + module.exit_json(**result) + + elif state == 'absent': + try: + UUID(server) + server = cs.servers.get(server) + except ValueError: + servers = cs.servers.list(search_opts=dict(name='^%s$' % server)) + if not servers: + module.fail_json(msg='No Server was matched by name, ' + 'try using the Server ID instead') + if len(servers) > 1: + module.fail_json(msg='Multiple servers matched by name, ' + 'try using the Server ID instead') + + # We made it this far, grab the first and hopefully only server + # in the list + server = servers[0] + + if (volume.attachments and + volume.attachments[0]['server_id'] == server.id): + try: + volume.detach() + if wait: + pyrax.utils.wait_until(volume, 'status', 'available', + interval=3, attempts=0, + verbose=False) + changed = True + except Exception, e: + module.fail_json(msg='%s' % e.message) + + volume.get() + changed = True + elif volume.attachments: + module.fail_json(msg='Volume is attached to another server') + + for key, value in vars(volume).iteritems(): + if (isinstance(value, NON_CALLABLES) and + not key.startswith('_')): + instance[key] = value + + result = dict(changed=changed, volume=instance) + + if volume.status == 'error': + result['msg'] = '%s failed to build' % volume.id + + if 'msg' in result: + module.fail_json(**result) + else: + module.exit_json(**result) + + module.exit_json(changed=changed, volume=instance) + + +def main(): + argument_spec = rax_argument_spec() + argument_spec.update( + dict( + device=dict(required=True), + volume=dict(required=True), + server=dict(required=True), + state=dict(default='present', choices=['present', 'absent']), + wait=dict(type='bool', default=False), + wait_timeout=dict(type='int', default=300) + ) + ) + + module = AnsibleModule( + argument_spec=argument_spec, + required_together=rax_required_together() + ) + 
+ if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + + device = module.params.get('device') + volume = module.params.get('volume') + server = module.params.get('server') + state = module.params.get('state') + wait = module.params.get('wait') + wait_timeout = module.params.get('wait_timeout') + + setup_rax_module(module, pyrax) + + cloud_block_storage_attachments(module, state, volume, server, device, + wait, wait_timeout) + +# import module snippets +from ansible.module_utils.basic import * +from ansible.module_utils.rax import * + +### invoke the module +main() diff --git a/library/cloud/rax_clb b/library/cloud/rax_clb index bd653eff8e8..85700895c7c 100644 --- a/library/cloud/rax_clb +++ b/library/cloud/rax_clb @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_clb @@ -25,17 +27,13 @@ options: algorithm: description: - algorithm for the balancer being created - choices: ['RANDOM', 'LEAST_CONNECTIONS', 'ROUND_ROBIN', 'WEIGHTED_LEAST_CONNECTIONS', 'WEIGHTED_ROUND_ROBIN'] + choices: + - RANDOM + - LEAST_CONNECTIONS + - ROUND_ROBIN + - WEIGHTED_LEAST_CONNECTIONS + - WEIGHTED_ROUND_ROBIN default: LEAST_CONNECTIONS - api_key: - description: - - Rackspace API key (overrides C(credentials)) - credentials: - description: - - File to find the Rackspace credentials in (ignored if C(api_key) and - C(username) are provided) - default: null - aliases: ['creds_file'] meta: description: - A hash of metadata to associate with the instance @@ -51,16 +49,32 @@ options: protocol: description: - Protocol for the balancer being created - choices: ['DNS_TCP', 'DNS_UDP' ,'FTP', 'HTTP', 'HTTPS', 'IMAPS', 'IMAPv4', 'LDAP', 'LDAPS', 'MYSQL', 'POP3', 'POP3S', 'SMTP', 'TCP', 'TCP_CLIENT_FIRST', 'UDP', 'UDP_STREAM', 'SFTP'] + choices: + - DNS_TCP + - DNS_UDP + - FTP + - HTTP + - HTTPS + - IMAPS + - IMAPv4 + - LDAP + - LDAPS + - MYSQL + - POP3 + - POP3S + - SMTP + - TCP + - TCP_CLIENT_FIRST + - UDP + - UDP_STREAM + - SFTP default: HTTP - region: - description: - - Region to create the load balancer in - default: DFW state: description: - Indicate desired state of the resource - choices: ['present', 'absent'] + choices: + - present + - absent default: present timeout: description: @@ -69,11 +83,10 @@ options: type: description: - type of interface for the balancer being created - choices: ['PUBLIC', 'SERVICENET'] + choices: + - PUBLIC + - SERVICENET default: PUBLIC - username: - description: - - Rackspace username (overrides C(credentials)) vip_id: description: - Virtual IP ID to use when creating the load balancer for purposes of @@ -83,20 +96,15 @@ options: description: - wait for the balancer to be in state 'running' before returning default: "no" - choices: [ "yes", "no" ] + choices: + - "yes" + - "no" wait_timeout: description: - how long before wait gives up, in seconds default: 300 -requirements: [ "pyrax" ] author: Christopher H. Laco, Matt Martz -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). 
- - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace ''' EXAMPLES = ''' @@ -122,15 +130,13 @@ EXAMPLES = ''' register: my_lb ''' -import sys - from types import NoneType try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax required for this module'") - sys.exit(1) + HAS_PYRAX = False NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) ALGORITHMS = ['RANDOM', 'LEAST_CONNECTIONS', 'ROUND_ROBIN', @@ -182,6 +188,10 @@ def cloud_load_balancer(module, state, name, meta, algorithm, port, protocol, balancers = [] clb = pyrax.cloud_loadbalancers + if not clb: + module.fail_json(msg='Failed to instantiate client. This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') for balancer in clb.list(): if name != balancer.name and name != balancer.id: @@ -300,6 +310,9 @@ def main(): required_together=rax_required_together(), ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + algorithm = module.params.get('algorithm') meta = module.params.get('meta') name = module.params.get('name') diff --git a/library/cloud/rax_clb_nodes b/library/cloud/rax_clb_nodes index f34fe6dde83..dc0950dca58 100644 --- a/library/cloud/rax_clb_nodes +++ b/library/cloud/rax_clb_nodes @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_clb_nodes @@ -26,21 +28,15 @@ options: required: false description: - IP address or domain name of the node - api_key: - required: false - description: - - Rackspace API key (overrides C(credentials)) condition: required: false - choices: [ "enabled", "disabled", "draining" ] + choices: + - enabled + - disabled + - draining description: - Condition for the node, which determines its role within the load balancer - credentials: - required: false - description: - - File to find the Rackspace credentials in (ignored if C(api_key) and - C(username) are provided) load_balancer_id: required: true type: integer @@ -56,35 +52,27 @@ options: type: integer description: - Port number of the load balanced service on the node - region: - required: false - description: - - Region to authenticate in state: required: false default: "present" - choices: [ "present", "absent" ] + choices: + - present + - absent description: - Indicate desired state of the node type: required: false - choices: [ "primary", "secondary" ] + choices: + - primary + - secondary description: - Type of node - username: - required: false - description: - - Rackspace username (overrides C(credentials)) - virtualenv: - required: false - description: - - Path to a virtualenv that should be activated before doing anything. - The virtualenv has to already exist. Useful if installing pyrax - globally is not an option. 
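Every module touched by this patch replaces the old print-and-exit import failure with the guard shown in the rax_clb hunk above: the ImportError is recorded in HAS_PYRAX at import time and only reported once AnsibleModule exists, so the failure reaches the caller as well-formed module JSON instead of a bare print. Condensed to its essentials (argument details elided; rax_argument_spec and rax_required_together come from module_utils.rax exactly as in the patch):

    try:
        import pyrax
        HAS_PYRAX = True
    except ImportError:
        HAS_PYRAX = False

    def main():
        module = AnsibleModule(
            argument_spec=rax_argument_spec(),
            required_together=rax_required_together(),
        )
        # Fail through the framework, not sys.exit(), so the caller gets
        # {"failed": true, "msg": ...} back.
        if not HAS_PYRAX:
            module.fail_json(msg='pyrax is required for this module')

    # import module snippets
    from ansible.module_utils.basic import *
    from ansible.module_utils.rax import *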
wait: required: false default: "no" - choices: [ "yes", "no" ] + choices: + - "yes" + - "no" description: - Wait for the load balancer to become active before returning wait_timeout: @@ -97,11 +85,8 @@ options: required: false description: - Weight of node -requirements: [ "pyrax" ] author: Lukasz Kawczynski -notes: - - "The following environment variables can be used: C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDENTIALS) and C(RAX_REGION)." +extends_documentation_fragment: rackspace ''' EXAMPLES = ''' @@ -136,13 +121,12 @@ EXAMPLES = ''' ''' import os -import sys try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax is required for this module'") - sys.exit(1) + HAS_PYRAX = False def _activate_virtualenv(path): @@ -151,11 +135,20 @@ def _activate_virtualenv(path): execfile(activate_this, dict(__file__=activate_this)) -def _get_node(lb, node_id): - """Return a node with the given `node_id`""" - for node in lb.nodes: - if node.id == node_id: +def _get_node(lb, node_id=None, address=None, port=None): + """Return a matching node""" + for node in getattr(lb, 'nodes', []): + match_list = [] + if node_id is not None: + match_list.append(getattr(node, 'id', None) == node_id) + if address is not None: + match_list.append(getattr(node, 'address', None) == address) + if port is not None: + match_list.append(getattr(node, 'port', None) == port) + + if match_list and all(match_list): return node + return None @@ -211,6 +204,9 @@ def main(): required_together=rax_required_together(), ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + address = module.params['address'] condition = (module.params['condition'] and module.params['condition'].upper()) @@ -234,18 +230,16 @@ def main(): setup_rax_module(module, pyrax) if not pyrax.cloud_loadbalancers: - module.fail_json(msg='Failed to instantiate load balancer client ' - '(possibly incorrect region)') + module.fail_json(msg='Failed to instantiate client. This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') try: lb = pyrax.cloud_loadbalancers.get(load_balancer_id) except pyrax.exc.PyraxException, e: module.fail_json(msg='%s' % e.message) - if node_id: - node = _get_node(lb, node_id) - else: - node = None + node = _get_node(lb, node_id, address, port) result = _node_to_dict(node) @@ -284,22 +278,12 @@ def main(): except pyrax.exc.PyraxException, e: module.fail_json(msg='%s' % e.message) else: # Updating an existing node - immutable = { - 'address': address, - 'port': port, - } - mutable = { 'condition': condition, 'type': typ, 'weight': weight, } - for name, value in immutable.items(): - if value: - module.fail_json( - msg='Attribute %s cannot be modified' % name) - for name, value in mutable.items(): if value is None or value == getattr(node, name): mutable.pop(name) diff --git a/library/cloud/rax_dns b/library/cloud/rax_dns index 4c47d55fbbf..c12d09fb1ad 100644 --- a/library/cloud/rax_dns +++ b/library/cloud/rax_dns @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
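The _get_node rewrite in the rax_clb_nodes hunk above matches on any combination of node id, address, and port: each supplied criterion contributes one boolean to match_list, and a node matches only when the list is non-empty and all() of it holds, so a call with no criteria returns None rather than the first node. A trimmed, standalone version with a stand-in Node class (invented for illustration; `nodes` plays the role of lb.nodes):

    class Node(object):
        def __init__(self, id, address, port):
            self.id, self.address, self.port = id, address, port

    def get_node(nodes, node_id=None, address=None, port=None):
        for node in nodes:
            match_list = []
            if node_id is not None:
                match_list.append(getattr(node, 'id', None) == node_id)
            if address is not None:
                match_list.append(getattr(node, 'address', None) == address)
            if port is not None:
                match_list.append(getattr(node, 'port', None) == port)
            # empty match_list means no criteria were given: no match
            if match_list and all(match_list):
                return node
        return None

    nodes = [Node(1, '10.0.0.1', 80), Node(2, '10.0.0.2', 80)]
    print get_node(nodes, address='10.0.0.2', port=80).id   # 2
    print get_node(nodes)                                   # None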
+# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_dns @@ -22,18 +24,9 @@ description: - Manage domains on Rackspace Cloud DNS version_added: 1.5 options: - api_key: - description: - - Rackspace API key (overrides C(credentials)) comment: description: - Brief description of the domain. Maximum length of 160 characters - credentials: - description: - - File to find the Rackspace credentials in (ignored if C(api_key) and - C(username) are provided) - default: null - aliases: ['creds_file'] email: description: - Email address of the domain administrator @@ -43,24 +36,16 @@ state: description: - Indicate desired state of the resource - choices: ['present', 'absent'] + choices: + - present + - absent default: present ttl: description: - Time to live of domain in seconds default: 3600 - username: - description: - - Rackspace username (overrides C(credentials)) -requirements: [ "pyrax" ] author: Matt Martz -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace ''' EXAMPLES = ''' @@ -77,16 +62,13 @@ EXAMPLES = ''' register: rax_dns ''' -import sys -import os - from types import NoneType try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax required for this module'") - sys.exit(1) + HAS_PYRAX = False NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) @@ -104,6 +86,10 @@ def rax_dns(module, comment, email, name, state, ttl): changed = False dns = pyrax.cloud_dns + if not dns: + module.fail_json(msg='Failed to instantiate client. This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') if state == 'present': if not email: @@ -174,6 +160,9 @@ def main(): required_together=rax_required_together(), ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + comment = module.params.get('comment') email = module.params.get('email') name = module.params.get('name') diff --git a/library/cloud/rax_dns_record b/library/cloud/rax_dns_record index 3e7f37f0def..d1e79983604 100644 --- a/library/cloud/rax_dns_record +++ b/library/cloud/rax_dns_record @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_dns_record @@ -22,18 +24,9 @@ description: - Manage DNS records on Rackspace Cloud DNS version_added: 1.5 options: - api_key: - description: - - Rackspace API key (overrides C(credentials)) comment: description: - Brief description of the domain.
Maximum length of 160 characters - credentials: - description: - - File to find the Rackspace credentials in (ignored if C(api_key) and - C(username) are provided) - default: null - aliases: ['creds_file'] data: description: - IP address for A/AAAA record, FQDN for CNAME/MX/NS, or text data for @@ -54,7 +47,9 @@ options: state: description: - Indicate desired state of the resource - choices: ['present', 'absent'] + choices: + - present + - absent default: present ttl: description: @@ -63,20 +58,17 @@ options: type: description: - DNS record type - choices: ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SRV', 'TXT'] + choices: + - A + - AAAA + - CNAME + - MX + - NS + - SRV + - TXT default: A - username: - description: - - Rackspace username (overrides C(credentials)) -requirements: [ "pyrax" ] author: Matt Martz -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace ''' EXAMPLES = ''' @@ -95,16 +87,13 @@ EXAMPLES = ''' register: rax_dns_record ''' -import sys -import os - from types import NoneType try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax required for this module'") - sys.exit(1) + HAS_PYRAX = False NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) @@ -123,6 +112,10 @@ def rax_dns_record(module, comment, data, domain, name, priority, record_type, changed = False dns = pyrax.cloud_dns + if not dns: + module.fail_json(msg='Failed to instantiate client. This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') if state == 'present': if not priority and record_type in ['MX', 'SRV']: @@ -219,6 +212,9 @@ def main(): required_together=rax_required_together(), ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + comment = module.params.get('comment') data = module.params.get('data') domain = module.params.get('domain') diff --git a/library/cloud/rax_facts b/library/cloud/rax_facts index ca117a665a1..64711f41519 100644 --- a/library/cloud/rax_facts +++ b/library/cloud/rax_facts @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_facts @@ -22,52 +24,6 @@ description: - Gather facts for Rackspace Cloud Servers. 
version_added: "1.4" options: - api_key: - description: - - Rackspace API key (overrides I(credentials)) - aliases: - - password - auth_endpoint: - description: - - The URI of the authentication service - default: https://identity.api.rackspacecloud.com/v2.0/ - version_added: 1.5 - credentials: - description: - - File to find the Rackspace credentials in (ignored if I(api_key) and - I(username) are provided) - default: null - aliases: - - creds_file - env: - description: - - Environment as configured in ~/.pyrax.cfg, - see https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration - version_added: 1.5 - identity_type: - description: - - Authentication machanism to use, such as rackspace or keystone - default: rackspace - version_added: 1.5 - region: - description: - - Region to create an instance in - default: DFW - tenant_id: - description: - - The tenant ID used for authentication - version_added: 1.5 - tenant_name: - description: - - The tenant name used for authentication - version_added: 1.5 - username: - description: - - Rackspace username (overrides I(credentials)) - verify_ssl: - description: - - Whether or not to require SSL validation of API endpoints - version_added: 1.5 address: description: - Server IP address to retrieve facts for, will match any IP assigned to @@ -79,15 +35,8 @@ options: description: - Server name to retrieve facts for default: null -requirements: [ "pyrax" ] author: Matt Martz -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace.openstack ''' EXAMPLES = ''' @@ -106,16 +55,13 @@ EXAMPLES = ''' ansible_ssh_host: "{{ rax_accessipv4 }}" ''' -import sys -import os - from types import NoneType try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax required for this module'") - sys.exit(1) + HAS_PYRAX = False NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) @@ -138,6 +84,12 @@ def rax_facts(module, address, name, server_id): changed = False cs = pyrax.cloudservers + + if cs is None: + module.fail_json(msg='Failed to instantiate client. This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + ansible_facts = {} search_opts = {} @@ -190,6 +142,9 @@ def main(): required_one_of=[['address', 'id', 'name']], ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + address = module.params.get('address') server_id = module.params.get('id') name = module.params.get('name') diff --git a/library/cloud/rax_files b/library/cloud/rax_files index 564cdb578d6..68e28a07f74 100644 --- a/library/cloud/rax_files +++ b/library/cloud/rax_files @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # (c) 2013, Paul Durivage # @@ -17,6 +17,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
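rax_facts resolves the one server named by address/id/name (enforced by the required_one_of constraint above) and returns its attributes as Ansible facts, which is why the EXAMPLES play can read {{ rax_accessipv4 }} immediately afterwards. The flattening code itself falls outside this hunk, so the sketch below is an inference from that fact name, a rax_ prefix plus lowercasing of each server attribute; FakeServer and its values are invented for illustration:

    class FakeServer(object):              # stand-in for a pyrax server
        def __init__(self):
            self.accessIPv4 = '198.51.100.10'
            self.status = 'ACTIVE'

    def build_facts(server):
        # assumed naming scheme, inferred from the rax_accessipv4 reference
        # in the EXAMPLES above
        facts = {}
        for key, value in vars(server).iteritems():
            facts['rax_%s' % key.lower()] = value
        return facts

    print build_facts(FakeServer())
    # {'rax_accessipv4': '198.51.100.10', 'rax_status': 'ACTIVE'} (order may vary)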
+# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_files @@ -25,25 +27,18 @@ description: - Manipulate Rackspace Cloud Files Containers version_added: "1.5" options: - api_key: - description: - - Rackspace API key (overrides I(credentials)) clear_meta: description: - Optionally clear existing metadata when applying metadata to existing containers. Selecting this option is only appropriate when setting type=meta - choices: ["yes", "no"] + choices: + - "yes" + - "no" default: "no" container: description: - The container to use for container or metadata operations. required: true - credentials: - description: - - File to find the Rackspace credentials in (ignored if I(api_key) and - I(username) are provided) - default: null - aliases: ['creds_file'] meta: description: - A hash of items to set as metadata values on a container @@ -59,6 +54,11 @@ options: description: - Region to create an instance in default: DFW + state: + description: + - Indicate desired state of the resource + choices: ['present', 'absent'] + default: present ttl: description: - In seconds, set a container-wide TTL for all objects cached on CDN edge nodes. @@ -66,26 +66,18 @@ options: type: description: - Type of object to do work on, i.e. metadata object or a container object - choices: ["file", "meta"] - default: "file" - username: - description: - - Rackspace username (overrides I(credentials)) + choices: + - file + - meta + default: file web_error: description: - Sets an object to be presented as the HTTP error page when accessed by the CDN URL web_index: description: - Sets an object to be presented as the HTTP index page when accessed by the CDN URL -requirements: [ "pyrax" ] author: Paul Durivage -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) 
+extends_documentation_fragment: rackspace ''' EXAMPLES = ''' @@ -151,9 +143,9 @@ from ansible import __version__ try: import pyrax + HAS_PYRAX = True except ImportError, e: - print("failed=True msg='pyrax is required for this module'") - sys.exit(1) + HAS_PYRAX = False EXIT_DICT = dict(success=True) META_PREFIX = 'x-container-meta-' @@ -208,7 +200,8 @@ def meta(cf, module, container_, state, meta_, clear_meta): module.exit_json(**EXIT_DICT) -def container(cf, module, container_, state, meta_, clear_meta, ttl, public, private, web_index, web_error): +def container(cf, module, container_, state, meta_, clear_meta, ttl, public, + private, web_index, web_error): if public and private: module.fail_json(msg='container cannot be simultaneously ' 'set to public and private') @@ -232,6 +225,7 @@ def container(cf, module, container_, state, meta_, clear_meta, ttl, public, pri except Exception, e: module.fail_json(msg=e.message) else: + EXIT_DICT['changed'] = True EXIT_DICT['created'] = True else: module.fail_json(msg=e.message) @@ -304,11 +298,9 @@ def container(cf, module, container_, state, meta_, clear_meta, ttl, public, pri EXIT_DICT['container'] = c.name EXIT_DICT['objs_in_container'] = c.object_count EXIT_DICT['total_bytes'] = c.total_bytes - + _locals = locals().keys() - - if ('cont_created' in _locals - or 'cont_deleted' in _locals + if ('cont_deleted' in _locals or 'meta_set' in _locals or 'cont_public' in _locals or 'cont_private' in _locals @@ -319,15 +311,23 @@ def container(cf, module, container_, state, meta_, clear_meta, ttl, public, pri module.exit_json(**EXIT_DICT) -def cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public, private, web_index, web_error): - """ Dispatch from here to work with metadata or file objects """ - cf = pyrax.cloudfiles - cf.user_agent = USER_AGENT +def cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public, + private, web_index, web_error): + """ Dispatch from here to work with metadata or file objects """ + cf = pyrax.cloudfiles - if typ == "container": - container(cf, module, container_, state, meta_, clear_meta, ttl, public, private, web_index, web_error) - else: - meta(cf, module, container_, state, meta_, clear_meta) + if cf is None: + module.fail_json(msg='Failed to instantiate client. 
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + + cf.user_agent = USER_AGENT + + if typ == "container": + container(cf, module, container_, state, meta_, clear_meta, ttl, + public, private, web_index, web_error) + else: + meta(cf, module, container_, state, meta_, clear_meta) def main(): @@ -335,13 +335,14 @@ def main(): argument_spec.update( dict( container=dict(), - state=dict(choices=['present', 'absent', 'list'], default='present'), + state=dict(choices=['present', 'absent', 'list'], + default='present'), meta=dict(type='dict', default=dict()), - clear_meta=dict(choices=BOOLEANS, default=False, type='bool'), + clear_meta=dict(default=False, type='bool'), type=dict(choices=['container', 'meta'], default='container'), ttl=dict(type='int'), - public=dict(choices=BOOLEANS, default=False, type='bool'), - private=dict(choices=BOOLEANS, default=False, type='bool'), + public=dict(default=False, type='bool'), + private=dict(default=False, type='bool'), web_index=dict(), web_error=dict() ) @@ -352,6 +353,9 @@ def main(): required_together=rax_required_together() ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + container_ = module.params.get('container') state = module.params.get('state') meta_ = module.params.get('meta') @@ -366,10 +370,12 @@ def main(): if state in ['present', 'absent'] and not container_: module.fail_json(msg='please specify a container name') if clear_meta and not typ == 'meta': - module.fail_json(msg='clear_meta can only be used when setting metadata') + module.fail_json(msg='clear_meta can only be used when setting ' + 'metadata') setup_rax_module(module, pyrax) - cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public, private, web_index, web_error) + cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public, + private, web_index, web_error) from ansible.module_utils.basic import * diff --git a/library/cloud/rax_files_objects b/library/cloud/rax_files_objects index b628ff14027..d7f11900ab9 100644 --- a/library/cloud/rax_files_objects +++ b/library/cloud/rax_files_objects @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # (c) 2013, Paul Durivage # @@ -17,6 +17,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_files_objects @@ -25,26 +27,19 @@ description: - Upload, download, and delete objects in Rackspace Cloud Files version_added: "1.5" options: - api_key: - description: - - Rackspace API key (overrides I(credentials)) - default: null clear_meta: description: - Optionally clear existing metadata when applying metadata to existing objects. Selecting this option is only appropriate when setting type=meta - choices: ["yes", "no"] + choices: + - "yes" + - "no" default: "no" container: description: - The container to use for file object operations. required: true default: null - credentials: - description: - - File to find the Rackspace credentials in (ignored if I(api_key) and I(username) are provided) - default: null - aliases: ['creds_file'] dest: description: - The destination of a "get" operation; i.e. a local directory, "/home/user/myfolder". @@ -64,12 +59,11 @@ options: - The method of operation to be performed. 
For example, put to upload files to Cloud Files, get to download files from Cloud Files or delete to delete remote objects in Cloud Files - choices: ["get", "put", "delete"] - default: "get" - region: - description: - - Region in which to work. Maps to a Rackspace Cloud region, i.e. DFW, ORD, IAD, SYD, LON - default: DFW + choices: + - get + - put + - delete + default: get src: description: - Source from which to upload files. Used to specify a remote object as a source for @@ -81,27 +75,25 @@ options: - Used to specify whether to maintain nested directory structure when downloading objects from Cloud Files. Setting to false downloads the contents of a container to a single, flat directory - choices: ["yes", "no"] + choices: + - yes + - "no" default: "yes" + state: + description: + - Indicate desired state of the resource + choices: ['present', 'absent'] + default: present type: description: - Type of object to do work on - Metadata object or a file object - choices: ["file", "meta"] - default: "file" - username: - description: - - Rackspace username (overrides I(credentials)) - default: null -requirements: [ "pyrax" ] + choices: + - file + - meta + default: file author: Paul Durivage -notes: - - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), - C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate - for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace ''' EXAMPLES = ''' @@ -195,9 +187,9 @@ import os try: import pyrax -except ImportError, e: - print("failed=True msg='pyrax is required for this module'") - sys.exit(1) + HAS_PYRAX = True +except ImportError: + HAS_PYRAX = False EXIT_DICT = dict(success=False) META_PREFIX = 'x-object-meta-' @@ -441,7 +433,6 @@ def get_meta(module, cf, container, src, dest): meta_key = k.split(META_PREFIX)[-1] results[obj][meta_key] = v - EXIT_DICT['container'] = c.name if results: EXIT_DICT['meta_results'] = results @@ -538,28 +529,33 @@ def delete_meta(module, cf, container, src, dest, meta): def cloudfiles(module, container, src, dest, method, typ, meta, clear_meta, structure, expires): - """ Dispatch from here to work with metadata or file objects """ - cf = pyrax.cloudfiles + """ Dispatch from here to work with metadata or file objects """ + cf = pyrax.cloudfiles - if typ == "file": - if method == 'put': - upload(module, cf, container, src, dest, meta, expires) + if cf is None: + module.fail_json(msg='Failed to instantiate client. 
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') - elif method == 'get': - download(module, cf, container, src, dest, structure) + if typ == "file": + if method == 'put': + upload(module, cf, container, src, dest, meta, expires) - elif method == 'delete': - delete(module, cf, container, src, dest) + elif method == 'get': + download(module, cf, container, src, dest, structure) - else: - if method == 'get': - get_meta(module, cf, container, src, dest) + elif method == 'delete': + delete(module, cf, container, src, dest) - if method == 'put': - put_meta(module, cf, container, src, dest, meta, clear_meta) + else: + if method == 'get': + get_meta(module, cf, container, src, dest) + + if method == 'put': + put_meta(module, cf, container, src, dest, meta, clear_meta) - if method == 'delete': - delete_meta(module, cf, container, src, dest, meta) + if method == 'delete': + delete_meta(module, cf, container, src, dest, meta) def main(): @@ -572,8 +568,8 @@ def main(): method=dict(default='get', choices=['put', 'get', 'delete']), type=dict(default='file', choices=['file', 'meta']), meta=dict(type='dict', default=dict()), - clear_meta=dict(choices=BOOLEANS, default=False, type='bool'), - structure=dict(choices=BOOLEANS, default=True, type='bool'), + clear_meta=dict(default=False, type='bool'), + structure=dict(default=True, type='bool'), expires=dict(type='int'), ) ) @@ -583,6 +579,9 @@ def main(): required_together=rax_required_together() ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + container = module.params.get('container') src = module.params.get('src') dest = module.params.get('dest') @@ -603,4 +602,4 @@ def main(): from ansible.module_utils.basic import * from ansible.module_utils.rax import * -main() \ No newline at end of file +main() diff --git a/library/cloud/rax_identity b/library/cloud/rax_identity new file mode 100644 index 00000000000..591cd018e70 --- /dev/null +++ b/library/cloud/rax_identity @@ -0,0 +1,117 @@ +#!/usr/bin/python +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . + +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments +DOCUMENTATION = ''' +--- +module: rax_identity +short_description: Load Rackspace Cloud Identity +description: + - Verifies Rackspace Cloud credentials and returns identity information +version_added: "1.5" +options: + state: + description: + - Indicate desired state of the resource + choices: ['present', 'absent'] + default: present +author: Christopher H. 
Laco, Matt Martz +extends_documentation_fragment: rackspace.openstack +''' + +EXAMPLES = ''' +- name: Load Rackspace Cloud Identity + gather_facts: False + hosts: local + connection: local + tasks: + - name: Load Identity + local_action: + module: rax_identity + credentials: ~/.raxpub + region: DFW + register: rackspace_identity +''' + +from types import NoneType + +try: + import pyrax + HAS_PYRAX = True +except ImportError: + HAS_PYRAX = False + + +NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) + + +def cloud_identity(module, state, identity): + for arg in (state, identity): + if not arg: + module.fail_json(msg='%s is required for rax_identity' % arg) + + instance = dict( + authenticated=identity.authenticated, + credentials=identity._creds_file + ) + changed = False + + for key, value in vars(identity).iteritems(): + if (isinstance(value, NON_CALLABLES) and + not key.startswith('_')): + instance[key] = value + + if state == 'present': + if not identity.authenticated: + module.fail_json(msg='Credentials could not be verified!') + + module.exit_json(changed=changed, identity=instance) + + +def main(): + argument_spec = rax_argument_spec() + argument_spec.update( + dict( + state=dict(default='present', choices=['present', 'absent']) + ) + ) + + module = AnsibleModule( + argument_spec=argument_spec, + required_together=rax_required_together() + ) + + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + + state = module.params.get('state') + + setup_rax_module(module, pyrax) + + if pyrax.identity is None: + module.fail_json(msg='Failed to instantiate client. This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + + cloud_identity(module, state, pyrax.identity) + +# import module snippets +from ansible.module_utils.basic import * +from ansible.module_utils.rax import * + +### invoke the module +main() diff --git a/library/cloud/rax_keypair b/library/cloud/rax_keypair index bd5270b9e3d..458ec5713c4 100644 --- a/library/cloud/rax_keypair +++ b/library/cloud/rax_keypair @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
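One nit in the new rax_identity code above (also present in cloud_block_storage_attachments earlier in this patch): the required-argument loop interpolates the argument's value into the message ('%s is required' % arg), so a missing argument reports literally "None is required for rax_identity". A sketch of a name-preserving variant; this is an editorial suggestion, not code from the patch:

    # Editorial sketch: pair names with values so the failure message
    # names the missing argument instead of printing its (falsy) value.
    def check_required(module, **kwargs):
        for name, value in kwargs.iteritems():
            if not value:
                module.fail_json(msg='%s is required for rax_identity' % name)

    # usage: check_required(module, state=state, identity=identity)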
+# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_keypair @@ -22,52 +24,6 @@ description: - Create a keypair for use with Rackspace Cloud Servers version_added: 1.5 options: - api_key: - description: - - Rackspace API key (overrides I(credentials)) - aliases: - - password - auth_endpoint: - description: - - The URI of the authentication service - default: https://identity.api.rackspacecloud.com/v2.0/ - version_added: 1.5 - credentials: - description: - - File to find the Rackspace credentials in (ignored if I(api_key) and - I(username) are provided) - default: null - aliases: - - creds_file - env: - description: - - Environment as configured in ~/.pyrax.cfg, - see https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration - version_added: 1.5 - identity_type: - description: - - Authentication machanism to use, such as rackspace or keystone - default: rackspace - version_added: 1.5 - region: - description: - - Region to create an instance in - default: DFW - tenant_id: - description: - - The tenant ID used for authentication - version_added: 1.5 - tenant_name: - description: - - The tenant name used for authentication - version_added: 1.5 - username: - description: - - Rackspace username (overrides I(credentials)) - verify_ssl: - description: - - Whether or not to require SSL validation of API endpoints - version_added: 1.5 name: description: - Name of keypair @@ -79,24 +35,20 @@ options: state: description: - Indicate desired state of the resource - choices: ['present', 'absent'] + choices: + - present + - absent default: present -requirements: [ "pyrax" ] author: Matt Martz notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) - Keypairs cannot be manipulated, only created and deleted. To "update" a keypair you must first delete and then recreate. +extends_documentation_fragment: rackspace.openstack ''' EXAMPLES = ''' - name: Create a keypair - hosts: local + hosts: localhost gather_facts: False tasks: - name: keypair request @@ -116,17 +68,28 @@ EXAMPLES = ''' module: copy content: "{{ keypair.keypair.private_key }}" dest: "{{ inventory_dir }}/{{ keypair.keypair.name }}" -''' -import sys +- name: Create a keypair + hosts: localhost + gather_facts: False + tasks: + - name: keypair request + local_action: + module: rax_keypair + credentials: ~/.raxpub + name: my_keypair + public_key: "{{ lookup('file', 'authorized_keys/id_rsa.pub') }}" + region: DFW + register: keypair +''' from types import NoneType try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax required for this module'") - sys.exit(1) + HAS_PYRAX = False NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) @@ -144,6 +107,12 @@ def rax_keypair(module, name, public_key, state): changed = False cs = pyrax.cloudservers + + if cs is None: + module.fail_json(msg='Failed to instantiate client. 
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + keypair = {} if state == 'present': @@ -189,6 +158,9 @@ def main(): required_together=rax_required_together(), ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + name = module.params.get('name') public_key = module.params.get('public_key') state = module.params.get('state') diff --git a/library/cloud/rax_network b/library/cloud/rax_network index 05f3f554e36..bc4745a7a84 100644 --- a/library/cloud/rax_network +++ b/library/cloud/rax_network @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_network @@ -25,20 +27,10 @@ options: state: description: - Indicate desired state of the resource - choices: ['present', 'absent'] + choices: + - present + - absent default: present - credentials: - description: - - File to find the Rackspace credentials in (ignored if C(api_key) and - C(username) are provided) - default: null - aliases: ['creds_file'] - api_key: - description: - - Rackspace API key (overrides C(credentials)) - username: - description: - - Rackspace username (overrides C(credentials)) label: description: - Label (name) to give the network @@ -47,19 +39,8 @@ options: description: - cidr of the network being created default: null - region: - description: - - Region to create the network in - default: DFW -requirements: [ "pyrax" ] author: Christopher H. Laco, Jesse Keating -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS) points to a credentials file - appropriate for pyrax - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace.openstack ''' EXAMPLES = ''' @@ -76,16 +57,11 @@ EXAMPLES = ''' state: present ''' -import sys -import os - try: import pyrax - import pyrax.utils - from pyrax import exc + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax required for this module'") - sys.exit(1) + HAS_PYRAX = False def cloud_network(module, state, label, cidr): @@ -97,10 +73,15 @@ def cloud_network(module, state, label, cidr): network = None networks = [] + if not pyrax.cloud_networks: + module.fail_json(msg='Failed to instantiate client. 
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') + if state == 'present': try: network = pyrax.cloud_networks.find_network_by_label(label) - except exc.NetworkNotFound: + except pyrax.exceptions.NetworkNotFound: try: network = pyrax.cloud_networks.create(label, cidr=cidr) changed = True @@ -114,7 +95,7 @@ def cloud_network(module, state, label, cidr): network = pyrax.cloud_networks.find_network_by_label(label) network.delete() changed = True - except exc.NetworkNotFound: + except pyrax.exceptions.NetworkNotFound: pass except Exception, e: module.fail_json(msg='%s' % e.message) @@ -144,6 +125,9 @@ def main(): required_together=rax_required_together(), ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + state = module.params.get('state') label = module.params.get('label') cidr = module.params.get('cidr') diff --git a/library/cloud/rax_queue b/library/cloud/rax_queue index ee873739a34..d3e5ac3f81e 100644 --- a/library/cloud/rax_queue +++ b/library/cloud/rax_queue @@ -1,4 +1,4 @@ -#!/usr/bin/python -tt +#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify @@ -14,6 +14,8 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . +# This is a DOCUMENTATION stub specific to this module, it extends +# a documentation fragment located in ansible.utils.module_docs_fragments DOCUMENTATION = ''' --- module: rax_queue @@ -22,40 +24,19 @@ description: - creates / deletes a Rackspace Public Cloud queue. version_added: "1.5" options: - api_key: - description: - - Rackspace API key (overrides C(credentials)) - credentials: - description: - - File to find the Rackspace credentials in (ignored if C(api_key) and - C(username) are provided) - default: null - aliases: ['creds_file'] name: description: - Name to give the queue default: null - region: - description: - - Region to create the load balancer in - default: DFW state: description: - Indicate desired state of the resource - choices: ['present', 'absent'] + choices: + - present + - absent default: present - username: - description: - - Rackspace username (overrides C(credentials)) -requirements: [ "pyrax" ] author: Christopher H. Laco, Matt Martz -notes: - - The following environment variables can be used, C(RAX_USERNAME), - C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file - appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) +extends_documentation_fragment: rackspace ''' EXAMPLES = ''' @@ -68,22 +49,17 @@ EXAMPLES = ''' local_action: module: rax_queue credentials: ~/.raxpub - client_id: unique-client-name name: my-queue region: DFW state: present register: my_queue ''' -import sys -import os - - try: import pyrax + HAS_PYRAX = True except ImportError: - print("failed=True msg='pyrax is required for this module'") - sys.exit(1) + HAS_PYRAX = False def cloud_queue(module, state, name): @@ -96,6 +72,10 @@ def cloud_queue(module, state, name): instance = {} cq = pyrax.queues + if not cq: + module.fail_json(msg='Failed to instantiate client. 
This ' + 'typically indicates an invalid region or an ' + 'incorrectly capitalized region name.') for queue in cq.list(): if name != queue.name: @@ -146,6 +126,9 @@ def main(): required_together=rax_required_together() ) + if not HAS_PYRAX: + module.fail_json(msg='pyrax is required for this module') + name = module.params.get('name') state = module.params.get('state') diff --git a/library/cloud/rds b/library/cloud/rds index d0eeaf35ba5..cde7c5bcf20 100644 --- a/library/cloud/rds +++ b/library/cloud/rds @@ -60,7 +60,7 @@ options: required: false default: null aliases: [] - choices: [ 'db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge' ] + choices: [ 'db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge', 'db.m3.medium', 'db.m3.large', 'db.m3.xlarge', 'db.m3.2xlarge', 'db.cr1.8xlarge' ] username: description: - Master database username. Used only when command=create. @@ -131,7 +131,7 @@ options: aliases: [] port: description: - - Port number that the DB instance uses for connections. Defaults to 3306 for mysql, 1521 for Oracle, 1443 for SQL Server. Used only when command=create or command=replicate. + - Port number that the DB instance uses for connections. Defaults to 3306 for mysql. Must be changed to 1521 for Oracle, 1443 for SQL Server, 5432 for PostgreSQL. Used only when command=create or command=replicate. required: false default: null aliases: [] @@ -290,7 +290,7 @@ def main(): source_instance = dict(required=False), db_engine = dict(choices=['MySQL', 'oracle-se1', 'oracle-se', 'oracle-ee', 'sqlserver-ee', 'sqlserver-se', 'sqlserver-ex', 'sqlserver-web', 'postgres'], required=False), size = dict(required=False), - instance_type = dict(aliases=['type'], choices=['db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge'], required=False), + instance_type = dict(aliases=['type'], choices=['db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge', 'db.m3.medium', 'db.m3.large', 'db.m3.xlarge', 'db.m3.2xlarge', 'db.cr1.8xlarge'], required=False), username = dict(required=False), password = dict(no_log=True, required=False), db_name = dict(required=False), @@ -343,7 +343,7 @@ def main(): maint_window = module.params.get('maint_window') subnet = module.params.get('subnet') backup_window = module.params.get('backup_window') - backup_retention = module.params.get('module_retention') + backup_retention = module.params.get('backup_retention') region = module.params.get('region') zone = module.params.get('zone') aws_secret_key = module.params.get('aws_secret_key') diff --git a/library/cloud/route53 b/library/cloud/route53 index 2ff22ded9dc..49344ee2061 100644 --- a/library/cloud/route53 +++ b/library/cloud/route53 @@ -157,7 +157,7 @@ def commit(changes): time.sleep(500) def main(): - argument_spec = ec2_argument_keys_spec() + argument_spec = ec2_argument_spec() argument_spec.update(dict( command = dict(choices=['get', 'create', 'delete'], required=True), zone = dict(required=True), @@ -220,11 +220,16 @@ def main(): found_record = False sets = conn.get_all_rrsets(zones[zone_in]) for rset in sets: - if rset.type == type_in and rset.name == record_in: + # Due to a bug in either AWS or Boto, "special" characters are returned as octals, preventing round + # tripping of things like * and @. 
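The route53 hunk continues below with the decode itself. Note that as committed, both assignments start over from rset.name, so the second one silently discards the '*' substitution; the replacements need to chain. A corrected sketch of the decode step (an editorial fix, not the patch's code):

    def decode_rrset_name(name):
        # Route53/boto hand back some characters as octal escapes:
        # '\052' for '*' and '\100' for '@'. Chain the replacements so
        # the first substitution survives the second.
        decoded = name.replace(r'\052', '*')
        decoded = decoded.replace(r'\100', '@')
        return decoded

    print decode_rrset_name(r'\052.example.com.')   # *.example.com.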
+ decoded_name = rset.name.replace(r'\052', '*') + decoded_name = rset.name.replace(r'\100', '@') + + if rset.type == type_in and decoded_name == record_in: found_record = True record['zone'] = zone_in record['type'] = rset.type - record['record'] = rset.name + record['record'] = decoded_name record['ttl'] = rset.ttl record['value'] = ','.join(sorted(rset.resource_records)) record['values'] = sorted(rset.resource_records) diff --git a/library/cloud/s3 b/library/cloud/s3 index 6e566e4b8dc..715c0e00ab9 100644 --- a/library/cloud/s3 +++ b/library/cloud/s3 @@ -68,7 +68,7 @@ options: aliases: [] s3_url: description: - - S3 URL endpoint. If not specified then the S3_URL environment variable is used, if that variable is defined. + - "S3 URL endpoint. If not specified then the S3_URL environment variable is used, if that variable is defined. Ansible tries to guess if fakes3 (https://github.com/jubos/fake-s3) or Eucalyptus Walrus (https://github.com/eucalyptus/eucalyptus/wiki/Walrus) is used and configure connection accordingly. Current heuristic is: everything with scheme fakes3:// is fakes3, everything else not ending with amazonaws.com is Walrus." default: null aliases: [ S3_URL ] aws_secret_key: @@ -83,6 +83,13 @@ options: required: false default: null aliases: [ 'ec2_access_key', 'access_key' ] + metadata: + description: + - Metadata for PUT operation, as a dictionary of 'key=value' and 'key=value,key=value'. + required: false + default: null + version_added: "1.6" + requirements: [ "boto" ] author: Lester Wade, Ralph Tice ''' @@ -97,7 +104,11 @@ EXAMPLES = ''' # GET/download and do not overwrite local file (trust remote) - s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get force=false # PUT/upload and overwrite remote file (trust local) -- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put +- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put +# PUT/upload with metadata +- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put metadata='Content-Encoding=gzip' +# PUT/upload with multiple metadata +- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put metadata='Content-Encoding=gzip,Cache-Control=no-cache' # PUT/upload and do not overwrite remote file (trust local) - s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put force=false # Download an object as a string to use else where in your playbook @@ -134,11 +145,12 @@ def key_check(module, s3, bucket, obj): def keysum(module, s3, bucket, obj): bucket = s3.lookup(bucket) key_check = bucket.get_key(obj) - if key_check: - md5_remote = key_check.etag[1:-1] - etag_multipart = md5_remote.find('-')!=-1 #Check for multipart, etag is not md5 - if etag_multipart is True: - module.fail_json(msg="Files uploaded with multipart of s3 are not supported with checksum, unable to compute checksum.") + if not key_check: + return None + md5_remote = key_check.etag[1:-1] + etag_multipart = '-' in md5_remote # Check for multipart, etag is not md5 + if etag_multipart is True: + module.fail_json(msg="Files uploaded with multipart of s3 are not supported with checksum, unable to compute checksum.") return md5_remote def bucket_check(module, s3, bucket): @@ -201,10 +213,14 @@ def path_check(path): else: return False -def upload_s3file(module, s3, bucket, obj, src, expiry): +def upload_s3file(module, s3, bucket, obj, src, expiry, metadata): try: bucket = s3.lookup(bucket) - key = 
bucket.new_key(obj) + key = bucket.new_key(obj) + if metadata: + for meta_key in metadata.keys(): + key.set_metadata(meta_key, metadata[meta_key]) + key.set_contents_from_filename(src) url = key.generate_url(expiry) module.exit_json(msg="PUT operation complete", url=url, changed=True) @@ -238,6 +254,13 @@ def get_download_url(module, s3, bucket, obj, expiry, changed=True): except s3.provider.storage_response_error, e: module.fail_json(msg= str(e)) +def is_fakes3(s3_url): + """ Return True if s3_url has scheme fakes3:// """ + if s3_url is not None: + return urlparse.urlparse(s3_url).scheme == 'fakes3' + else: + return False + def is_walrus(s3_url): """ Return True if it's Walrus endpoint, not S3 @@ -249,7 +272,7 @@ def is_walrus(s3_url): return False def main(): - argument_spec = ec2_argument_keys_spec() + argument_spec = ec2_argument_spec() argument_spec.update(dict( bucket = dict(required=True), object = dict(), @@ -259,7 +282,8 @@ def main(): expiry = dict(default=600, aliases=['expiration']), s3_url = dict(aliases=['S3_URL']), overwrite = dict(aliases=['force'], default=True, type='bool'), - ) + metadata = dict(type='dict'), + ), ) module = AnsibleModule(argument_spec=argument_spec) @@ -272,6 +296,7 @@ def main(): expiry = int(module.params['expiry']) s3_url = module.params.get('s3_url') overwrite = module.params.get('overwrite') + metadata = module.params.get('metadata') ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) @@ -282,8 +307,22 @@ def main(): if not s3_url and 'S3_URL' in os.environ: s3_url = os.environ['S3_URL'] - # If we have an S3_URL env var set, this is likely to be Walrus, so change connection method - if is_walrus(s3_url): + # Look at s3_url and tweak connection settings + # if connecting to Walrus or fakes3 + if is_fakes3(s3_url): + try: + fakes3 = urlparse.urlparse(s3_url) + from boto.s3.connection import OrdinaryCallingFormat + s3 = boto.connect_s3( + aws_access_key, + aws_secret_key, + is_secure=False, + host=fakes3.hostname, + port=fakes3.port, + calling_format=OrdinaryCallingFormat()) + except boto.exception.NoAuthHandlerFound, e: + module.fail_json(msg = str(e)) + elif is_walrus(s3_url): try: walrus = urlparse.urlparse(s3_url).hostname s3 = boto.connect_walrus(walrus, aws_access_key, aws_secret_key) @@ -364,24 +403,24 @@ def main(): if md5_local == md5_remote: sum_matches = True if overwrite is True: - upload_s3file(module, s3, bucket, obj, src, expiry) + upload_s3file(module, s3, bucket, obj, src, expiry, metadata) else: get_download_url(module, s3, bucket, obj, expiry, changed=False) else: sum_matches = False if overwrite is True: - upload_s3file(module, s3, bucket, obj, src, expiry) + upload_s3file(module, s3, bucket, obj, src, expiry, metadata) else: module.exit_json(msg="WARNING: Checksums do not match. Use overwrite parameter to force upload.", failed=True) # If neither exist (based on bucket existence), we can create both. if bucketrtn is False and pathrtn is True: create_bucket(module, s3, bucket) - upload_s3file(module, s3, bucket, obj, src, expiry) + upload_s3file(module, s3, bucket, obj, src, expiry, metadata) # If bucket exists but key doesn't, just upload. if bucketrtn is True and pathrtn is True and keyrtn is False: - upload_s3file(module, s3, bucket, obj, src, expiry) + upload_s3file(module, s3, bucket, obj, src, expiry, metadata) # Support for deleting an object if we have both params. 
if mode == 'delete': diff --git a/library/cloud/virt b/library/cloud/virt index 42e99209b14..f1d36fc1964 100644 --- a/library/cloud/virt +++ b/library/cloud/virt @@ -36,7 +36,7 @@ options: since these refer only to VM states. After starting a guest, it may not be immediately accessible. required: false - choices: [ "running", "shutdown" ] + choices: [ "running", "shutdown", "destroyed", "paused" ] default: "no" command: description: @@ -108,18 +108,19 @@ VIRT_STATE_NAME_MAP = { 6 : "crashed" } -class VMNotFound(Exception): +class VMNotFound(Exception): pass class LibvirtConnection(object): - def __init__(self, uri): + def __init__(self, uri, module): - cmd = subprocess.Popen("uname -r", shell=True, stdout=subprocess.PIPE, - close_fds=True) - output = cmd.communicate()[0] + self.module = module - if output.find("xen") != -1: + cmd = "uname -r" + rc, stdout, stderr = self.module.run_command(cmd) + + if "xen" in stdout: conn = libvirt.open(None) else: conn = libvirt.open(uri) @@ -196,6 +197,10 @@ class LibvirtConnection(object): def get_type(self): return self.conn.getType() + def get_xml(self, vmid): + vm = self.conn.lookupByName(vmid) + return vm.XMLDesc(0) + def get_maxVcpus(self, vmid): vm = self.conn.lookupByName(vmid) return vm.maxVcpus() @@ -221,11 +226,12 @@ class LibvirtConnection(object): class Virt(object): - def __init__(self, uri): + def __init__(self, uri, module): + self.module = module self.uri = uri def __get_conn(self): - self.conn = LibvirtConnection(self.uri) + self.conn = LibvirtConnection(self.uri, self.module) return self.conn def get_vm(self, vmid): @@ -359,14 +365,8 @@ class Virt(object): Return an xml describing vm config returned by a libvirt call """ - conn = libvirt.openReadOnly(None) - if not conn: - return (-1,'Failed to open connection to the hypervisor') - try: - domV = conn.lookupByName(vmid) - except: - return (-1,'Failed to find the main domain') - return domV.XMLDesc(0) + self.__get_conn() + return self.conn.get_xml(vmid) def get_maxVcpus(self, vmid): """ @@ -399,7 +399,7 @@ def core(module): uri = module.params.get('uri', None) xml = module.params.get('xml', None) - v = Virt(uri) + v = Virt(uri, module) res = {} if state and command=='list_vms': @@ -414,13 +414,24 @@ def core(module): res['changed'] = False if state == 'running': - if v.status(guest) is not 'running': + if v.status(guest) is 'paused': + res['changed'] = True + res['msg'] = v.unpause(guest) + elif v.status(guest) is not 'running': res['changed'] = True res['msg'] = v.start(guest) elif state == 'shutdown': if v.status(guest) is not 'shutdown': res['changed'] = True res['msg'] = v.shutdown(guest) + elif state == 'destroyed': + if v.status(guest) is not 'shutdown': + res['changed'] = True + res['msg'] = v.destroy(guest) + elif state == 'paused': + if v.status(guest) is 'running': + res['changed'] = True + res['msg'] = v.pause(guest) else: module.fail_json(msg="unexpected state") @@ -459,7 +470,7 @@ def main(): module = AnsibleModule(argument_spec=dict( name = dict(aliases=['guest']), - state = dict(choices=['running', 'shutdown']), + state = dict(choices=['running', 'shutdown', 'destroyed', 'paused']), command = dict(choices=ALL_COMMANDS), uri = dict(default='qemu:///system'), xml = dict(), diff --git a/library/commands/command b/library/commands/command index 76d2f828d0c..f1a48922122 100644 --- a/library/commands/command +++ b/library/commands/command @@ -39,7 +39,8 @@ description: options: free_form: description: - - the command module takes a free form command to run + - the command module 
takes a free form command to run. There is no parameter actually named 'free form'. + See the examples! required: true default: null aliases: [] @@ -136,7 +137,7 @@ def main(): args = shlex.split(args) startd = datetime.datetime.now() - rc, out, err = module.run_command(args, executable=executable) + rc, out, err = module.run_command(args, executable=executable, use_unsafe_shell=shell) endd = datetime.datetime.now() delta = endd - startd @@ -180,7 +181,7 @@ class CommandModule(AnsibleModule): params['removes'] = None params['shell'] = False params['executable'] = None - if args.find("#USE_SHELL") != -1: + if "#USE_SHELL" in args: args = args.replace("#USE_SHELL", "") params['shell'] = True diff --git a/library/commands/shell b/library/commands/shell index 03299b967cc..639d4a14b09 100644 --- a/library/commands/shell +++ b/library/commands/shell @@ -14,7 +14,8 @@ version_added: "0.2" options: free_form: description: - - The shell module takes a free form command to run + - The shell module takes a free form command to run, as a string. There's not an actual + option named "free form". See the examples! required: true default: null creates: diff --git a/library/database/mongodb_user b/library/database/mongodb_user index 63bc6b5400d..5d7e0897b68 100644 --- a/library/database/mongodb_user +++ b/library/database/mongodb_user @@ -2,6 +2,7 @@ # (c) 2012, Elliott Foster # Sponsored by Four Kitchens http://fourkitchens.com. +# (c) 2014, Epic Games, Inc. # # This file is part of Ansible # @@ -46,6 +47,12 @@ options: - The port to connect to required: false default: 27017 + replica_set: + version_added: "1.6" + description: + - Replica set to connect to (automatically connects to primary for writes) + required: false + default: null database: description: - The name of the database to add/remove the user from @@ -92,12 +99,17 @@ EXAMPLES = ''' - mongodb_user: database=burgers name=ben password=12345 roles='read' state=present - mongodb_user: database=burgers name=jim password=12345 roles='readWrite,dbAdmin,userAdmin' state=present - mongodb_user: database=burgers name=joe password=12345 roles='readWriteAnyDatabase' state=present + +# add a user to database in a replica set, the primary server is automatically discovered and written to +- mongodb_user: database=burgers name=bob replica_set=blecher password=12345 roles='readWriteAnyDatabase' state=present ''' import ConfigParser +from distutils.version import LooseVersion try: from pymongo.errors import ConnectionFailure from pymongo.errors import OperationFailure + from pymongo import version as PyMongoVersion from pymongo import MongoClient except ImportError: try: # for older PyMongo 2.2 @@ -114,34 +126,25 @@ else: # def user_add(module, client, db_name, user, password, roles): - try: - db = client[db_name] - if roles is None: - db.add_user(user, password, False) - else: - try: - db.add_user(user, password, None, roles=roles) - except: - module.fail_json(msg='"problem adding user; you must be on mongodb 2.4+ and pymongo 2.5+ to use the roles param"') - except OperationFailure: - return False - - return True + db = client[db_name] + if roles is None: + db.add_user(user, password, False) + else: + try: + db.add_user(user, password, None, roles=roles) + except OperationFailure, e: + err_msg = str(e) + if LooseVersion(PyMongoVersion) <= LooseVersion('2.5'): + err_msg = err_msg + ' (Note: you must be on mongodb 2.4+ and pymongo 2.5+ to use the roles param)' + module.fail_json(msg=err_msg) def user_remove(client, db_name, user): - try: - db = 
client[db_name] - db.remove_user(user) - except OperationFailure: - return False - - return True + db = client[db_name] + db.remove_user(user) def load_mongocnf(): config = ConfigParser.RawConfigParser() mongocnf = os.path.expanduser('~/.mongodb.cnf') - if not os.path.exists(mongocnf): - return False try: config.readfp(open(mongocnf)) @@ -165,6 +168,7 @@ def main(): login_password=dict(default=None), login_host=dict(default='localhost'), login_port=dict(default='27017'), + replica_set=dict(default=None), database=dict(required=True, aliases=['db']), user=dict(required=True, aliases=['name']), password=dict(aliases=['pass']), @@ -180,6 +184,7 @@ def main(): login_password = module.params['login_password'] login_host = module.params['login_host'] login_port = module.params['login_port'] + replica_set = module.params['replica_set'] db_name = module.params['database'] user = module.params['user'] password = module.params['password'] @@ -187,7 +192,20 @@ def main(): state = module.params['state'] try: - client = MongoClient(login_host, int(login_port)) + if replica_set: + client = MongoClient(login_host, int(login_port), replicaset=replica_set) + else: + client = MongoClient(login_host, int(login_port)) + + # try to authenticate as a target user to check if it already exists + try: + client[db_name].authenticate(user, password) + if state == 'present': + module.exit_json(changed=False, user=user) + except OperationFailure: + if state == 'absent': + module.exit_json(changed=False, user=user) + if login_user is None and login_password is None: mongocnf_creds = load_mongocnf() if mongocnf_creds is not False: @@ -200,16 +218,22 @@ def main(): client.admin.authenticate(login_user, login_password) except ConnectionFailure, e: - module.fail_json(msg='unable to connect to database, check login_user and login_password are correct') + module.fail_json(msg='unable to connect to database: %s' % str(e)) if state == 'present': if password is None: module.fail_json(msg='password parameter required when adding a user') - if user_add(module, client, db_name, user, password, roles) is not True: - module.fail_json(msg='Unable to add or update user, check login_user and login_password are correct and that this user has access to the admin collection') + + try: + user_add(module, client, db_name, user, password, roles) + except OperationFailure, e: + module.fail_json(msg='Unable to add or update user: %s' % str(e)) + elif state == 'absent': - if user_remove(client, db_name, user) is not True: - module.fail_json(msg='Unable to remove user, check login_user and login_password are correct and that this user has access to the admin collection') + try: + user_remove(client, db_name, user) + except OperationFailure, e: + module.fail_json(msg='Unable to remove user: %s' % str(e)) module.exit_json(changed=True, user=user) diff --git a/library/database/mysql_db b/library/database/mysql_db index 622bf59a39f..8eec1005893 100644 --- a/library/database/mysql_db +++ b/library/database/mysql_db @@ -101,6 +101,7 @@ EXAMPLES = ''' import ConfigParser import os +import pipes try: import MySQLdb except ImportError: @@ -123,36 +124,36 @@ def db_delete(cursor, db): def db_dump(module, host, user, password, db_name, target, port, socket=None): cmd = module.get_bin_path('mysqldump', True) - cmd += " --quick --user=%s --password='%s'" %(user, password) + cmd += " --quick --user=%s --password=%s" % (pipes.quote(user), pipes.quote(password)) if socket is not None: - cmd += " --socket=%s" % socket + cmd += " --socket=%s" % 
pipes.quote(socket) else: - cmd += " --host=%s --port=%s" % (host, port) - cmd += " %s" % db_name + cmd += " --host=%s --port=%s" % (pipes.quote(host), pipes.quote(port)) + cmd += " %s" % pipes.quote(db_name) if os.path.splitext(target)[-1] == '.gz': - cmd = cmd + ' | gzip > ' + target + cmd = cmd + ' | gzip > ' + pipes.quote(target) elif os.path.splitext(target)[-1] == '.bz2': - cmd = cmd + ' | bzip2 > ' + target + cmd = cmd + ' | bzip2 > ' + pipes.quote(target) else: - cmd += " > %s" % target - rc, stdout, stderr = module.run_command(cmd) + cmd += " > %s" % pipes.quote(target) + rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True) return rc, stdout, stderr def db_import(module, host, user, password, db_name, target, port, socket=None): cmd = module.get_bin_path('mysql', True) - cmd += " --user=%s --password='%s'" %(user, password) + cmd += " --user=%s --password=%s" % (pipes.quote(user), pipes.quote(password)) if socket is not None: - cmd += " --socket=%s" % socket + cmd += " --socket=%s" % pipes.quote(socket) else: - cmd += " --host=%s --port=%s" % (host, port) - cmd += " -D %s" % db_name + cmd += " --host=%s --port=%s" % (pipes.quote(host), pipes.quote(port)) + cmd += " -D %s" % pipes.quote(db_name) if os.path.splitext(target)[-1] == '.gz': - cmd = 'gunzip < ' + target + ' | ' + cmd + cmd = 'gunzip < ' + pipes.quote(target) + ' | ' + cmd elif os.path.splitext(target)[-1] == '.bz2': - cmd = 'bunzip2 < ' + target + ' | ' + cmd + cmd = 'bunzip2 < ' + pipes.quote(target) + ' | ' + cmd else: - cmd += " < %s" % target - rc, stdout, stderr = module.run_command(cmd) + cmd += " < %s" % pipes.quote(target) + rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True) return rc, stdout, stderr def db_create(cursor, db, encoding, collation): diff --git a/library/database/mysql_replication b/library/database/mysql_replication index f18060e9556..fdbb379371a 100644 --- a/library/database/mysql_replication +++ b/library/database/mysql_replication @@ -325,7 +325,7 @@ def main(): if master_password: chm.append("MASTER_PASSWORD='" + master_password + "'") if master_port: - chm.append("MASTER_PORT='" + master_port + "'") + chm.append("MASTER_PORT=" + master_port) if master_connect_retry: chm.append("MASTER_CONNECT_RETRY='" + master_connect_retry + "'") if master_log_file: diff --git a/library/database/mysql_user b/library/database/mysql_user index e7fad3d77c6..b7c84fd1c3e 100644 --- a/library/database/mysql_user +++ b/library/database/mysql_user @@ -259,7 +259,7 @@ def privileges_unpack(priv): output = {} for item in priv.split('/'): pieces = item.split(':') - if pieces[0].find('.') != -1: + if '.' in pieces[0]: pieces[0] = pieces[0].split('.') for idx, piece in enumerate(pieces): if pieces[0][idx] != "*": diff --git a/library/database/mysql_variables b/library/database/mysql_variables index 720478cc005..595e0bbb55d 100644 --- a/library/database/mysql_variables +++ b/library/database/mysql_variables @@ -76,14 +76,48 @@ else: mysqldb_found = True +def typedvalue(value): + """ + Convert value to number whenever possible, return same value + otherwise. 
+ + >>> typedvalue('3') + 3 + >>> typedvalue('3.0') + 3.0 + >>> typedvalue('foobar') + 'foobar' + + """ + try: + return int(value) + except ValueError: + pass + + try: + return float(value) + except ValueError: + pass + + return value + + def getvariable(cursor, mysqlvar): cursor.execute("SHOW VARIABLES LIKE '" + mysqlvar + "'") mysqlvar_val = cursor.fetchall() return mysqlvar_val + def setvariable(cursor, mysqlvar, value): + """ Set a global mysql variable to a given value + + The DB driver will handle quoting of the given value based on its + type, thus numeric strings like '3.0' or '8' are illegal, they + should be passed as numeric literals. + + """ try: - cursor.execute("SET GLOBAL " + mysqlvar + "=" + value) + cursor.execute("SET GLOBAL " + mysqlvar + " = %s", (value,)) cursor.fetchall() result = True except Exception, e: @@ -203,11 +237,14 @@ def main(): else: if len(mysqlvar_val) < 1: module.fail_json(msg="Variable not available", changed=False) - if value == mysqlvar_val[0][1]: + # Type values before using them + value_wanted = typedvalue(value) + value_actual = typedvalue(mysqlvar_val[0][1]) + if value_wanted == value_actual: module.exit_json(msg="Variable already set to requested value", changed=False) - result = setvariable(cursor, mysqlvar, value) + result = setvariable(cursor, mysqlvar, value_wanted) if result is True: - module.exit_json(msg="Variable change succeeded", changed=True) + module.exit_json(msg="Variable change succeeded prev_value=%s" % value_actual, changed=True) else: module.fail_json(msg=result, changed=False) diff --git a/library/database/postgresql_privs b/library/database/postgresql_privs index 2f3db9a93f1..de5fa94fa48 100644 --- a/library/database/postgresql_privs +++ b/library/database/postgresql_privs @@ -597,7 +597,8 @@ def main(): except psycopg2.Error, e: conn.rollback() # psycopg2 errors come in connection encoding, reencode - msg = e.message.decode(conn.encoding).encode(errors='replace') + msg = e.message.decode(conn.encoding).encode(sys.getdefaultencoding(), + 'replace') module.fail_json(msg=msg) if module.check_mode: diff --git a/library/database/postgresql_user b/library/database/postgresql_user index b6383006cb4..1dda1a6dc57 100644 --- a/library/database/postgresql_user +++ b/library/database/postgresql_user @@ -443,9 +443,9 @@ def main(): priv=dict(default=None), db=dict(default=''), port=dict(default='5432'), - fail_on_user=dict(type='bool', choices=BOOLEANS, default='yes'), + fail_on_user=dict(type='bool', default='yes'), role_attr_flags=dict(default=''), - encrypted=dict(type='bool', choices=BOOLEANS, default='no'), + encrypted=dict(type='bool', default='no'), expires=dict(default=None) ), supports_check_mode = True diff --git a/library/database/redis b/library/database/redis index 4e3793daa09..59a1bde7277 100644 --- a/library/database/redis +++ b/library/database/redis @@ -22,8 +22,9 @@ module: redis short_description: Various redis commands, slave and flush description: - Unified utility to interact with redis instances. - 'slave' Sets a redis instance in slave or master mode. - 'flush' Flushes all the instance or a specified db. + 'slave' sets a redis instance in slave or master mode. + 'flush' flushes all the instance or a specified db. + 'config' (new in 1.6), ensures a configuration setting on an instance. 
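(Reviewer aside, not part of the patch: the new command=config path boils down to the redis-py calls below; the host, port and the maxclients key are illustrative only.)

    import redis

    r = redis.StrictRedis(host='localhost', port=6379)
    old_value = r.config_get('maxclients')['maxclients']   # config_get returns a dict
    if old_value != '10000':
        r.config_set('maxclients', '10000')                # write only when it differs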
version_added: "1.3" options: command: @@ -31,7 +32,7 @@ options: - The selected redis command required: true default: null - choices: [ "slave", "flush" ] + choices: [ "slave", "flush", "config" ] login_password: description: - The password used to authenticate with (usually not used) @@ -75,6 +76,18 @@ options: required: false default: all choices: [ "all", "db" ] + name: + version_added: 1.6 + description: + - A redis config key. + required: false + default: null + value: + version_added: 1.6 + description: + - A redis config value. + required: false + default: null notes: @@ -100,6 +113,12 @@ EXAMPLES = ''' # Flush only one db in a redis instance - redis: command=flush db=1 flush_mode=db + +# Configure local redis to have 10000 max clients +- redis: command=config name=maxclients value=10000 + +# Configure local redis to have lua time limit of 100 ms +- redis: command=config name=lua-time-limit value=100 ''' try: @@ -146,7 +165,7 @@ def flush(client, db=None): def main(): module = AnsibleModule( argument_spec = dict( - command=dict(default=None, choices=['slave', 'flush']), + command=dict(default=None, choices=['slave', 'flush', 'config']), login_password=dict(default=None), login_host=dict(default='localhost'), login_port=dict(default='6379'), @@ -155,6 +174,8 @@ def main(): slave_mode=dict(default='slave', choices=['master', 'slave']), db=dict(default=None), flush_mode=dict(default='all', choices=['all', 'db']), + name=dict(default=None), + value=dict(default=None) ), supports_check_mode = True ) @@ -272,7 +293,34 @@ def main(): module.exit_json(changed=True, flushed=True, db=db) else: # Flush never fails :) module.fail_json(msg="Unable to flush '%d' database" % db) + elif command == 'config': + name = module.params['name'] + value = module.params['value'] + r = redis.StrictRedis(host=login_host, + port=login_port, + password=login_password) + + try: + r.ping() + except Exception, e: + module.fail_json(msg="unable to connect to database: %s" % e) + + + try: + old_value = r.config_get(name)[name] + except Exception, e: + module.fail_json(msg="unable to read config: %s" % e) + changed = old_value != value + + if module.check_mode or not changed: + module.exit_json(changed=changed, name=name, value=value) + else: + try: + r.config_set(name, value) + except Exception, e: + module.fail_json(msg="unable to write config: %s" % e) + module.exit_json(changed=changed, name=name, value=value) else: module.fail_json(msg='A valid command must be provided') diff --git a/library/database/riak b/library/database/riak index 53faba6e983..b30e7dc485d 100644 --- a/library/database/riak +++ b/library/database/riak @@ -1,4 +1,4 @@ -#!/usr/bin/env python +#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, James Martin , Drew Kerrigan @@ -73,6 +73,14 @@ options: default: None aliases: [] choices: ['kv'] + validate_certs: + description: + - If C(no), SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. 
+ required: false + default: 'yes' + choices: ['yes', 'no'] + version_added: 1.5.1 ''' EXAMPLES = ''' @@ -97,7 +105,7 @@ except ImportError: def ring_check(module, riak_admin_bin): - cmd = '%s ringready 2> /dev/null' % riak_admin_bin + cmd = '%s ringready' % riak_admin_bin rc, out, err = module.run_command(cmd) if rc == 0 and 'TRUE All nodes agree on the ring' in out: return True @@ -116,8 +124,8 @@ def main(): wait_for_handoffs=dict(default=False, type='int'), wait_for_ring=dict(default=False, type='int'), wait_for_service=dict( - required=False, default=None, choices=['kv']) - ) + required=False, default=None, choices=['kv']), + validate_certs = dict(default='yes', type='bool')) ) @@ -128,6 +136,7 @@ def main(): wait_for_handoffs = module.params.get('wait_for_handoffs') wait_for_ring = module.params.get('wait_for_ring') wait_for_service = module.params.get('wait_for_service') + validate_certs = module.params.get('validate_certs') #make sure riak commands are on the path @@ -138,24 +147,13 @@ def main(): while True: if time.time() > timeout: module.fail_json(msg='Timeout, could not fetch Riak stats.') - try: - if sys.version_info<(2,6,0): - stats_raw = urllib2.urlopen( - 'http://%s/stats' % (http_conn), None).read() - else: - stats_raw = urllib2.urlopen( - 'http://%s/stats' % (http_conn), None, 5).read() + (response, info) = fetch_url(module, 'http://%s/stats' % (http_conn), force=True, timeout=5) + if info['status'] == 200: + stats_raw = response.read() break - except urllib2.HTTPError, e: - time.sleep(5) - except urllib2.URLError, e: - time.sleep(5) - except socket.timeout: - time.sleep(5) - except Exception, e: - module.fail_json(msg='Could not fetch Riak stats: %s' % e) + time.sleep(5) -# here we attempt to load those stats, + # here we attempt to load those stats, try: stats = json.loads(stats_raw) except: @@ -223,7 +221,7 @@ def main(): if wait_for_handoffs: timeout = time.time() + wait_for_handoffs while True: - cmd = '%s transfers 2> /dev/null' % riak_admin_bin + cmd = '%s transfers' % riak_admin_bin rc, out, err = module.run_command(cmd) if 'No transfers active' in out: result['handoffs'] = 'No transfers active.' @@ -233,7 +231,7 @@ def main(): module.fail_json(msg='Timeout waiting for handoffs.') if wait_for_service: - cmd = '%s wait_for_service riak_%s %s' % ( riak_admin_bin, wait_for_service, node_name) + cmd = [riak_admin_bin, 'wait_for_service', 'riak_%s' % wait_for_service, node_name ] rc, out, err = module.run_command(cmd) result['service'] = out @@ -252,5 +250,6 @@ def main(): # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * main() diff --git a/library/files/acl b/library/files/acl index b8d2b85cb65..93431ecf472 100644 --- a/library/files/acl +++ b/library/files/acl @@ -95,7 +95,7 @@ EXAMPLES = ''' - acl: name=/etc/foo.d entity=joe etype=user permissions=rw default=yes state=present # Same as previous but using entry shorthand -- acl: name=/etc/foo.d entrty="default:user:joe:rw-" state=present +- acl: name=/etc/foo.d entry="default:user:joe:rw-" state=present # Obtain the acl for a specific file - acl: name=/etc/foo.conf @@ -115,6 +115,9 @@ def split_entry(entry): print "wtf?? 
%s => %s" % (entry,a) raise e + if d: + d = True + if t.startswith("u"): t = "user" elif t.startswith("g"): @@ -215,10 +218,10 @@ def main(): if state in ['present','absent']: if not entry and not etype: - module.fail_json(msg="%s requries to have ither either etype and permissions or entry to be set" % state) + module.fail_json(msg="%s requires either etype and permissions or just entry be set" % state) if entry: - if etype or entity or permissions: + if etype or entity or permissions: module.fail_json(msg="entry and another incompatible field (entity, etype or permissions) are also set") if entry.count(":") not in [2,3]: module.fail_json(msg="Invalid entry: '%s', it requires 3 or 4 sections divided by ':'" % entry) @@ -248,7 +251,6 @@ def main(): if not old_permissions == permissions: changed = True break - break if not matched: changed=True diff --git a/library/files/assemble b/library/files/assemble index a8c78256e23..7f0a9d1e0a1 100644 --- a/library/files/assemble +++ b/library/files/assemble @@ -59,7 +59,7 @@ options: default: "no" delimiter: description: - - A delimiter to seperate the file contents. + - A delimiter to separate the file contents. version_added: "1.4" required: false default: null @@ -102,19 +102,38 @@ def assemble_from_fragments(src_path, delimiter=None, compiled_regexp=None): tmpfd, temp_path = tempfile.mkstemp() tmp = os.fdopen(tmpfd,'w') delimit_me = False + add_newline = False + for f in sorted(os.listdir(src_path)): if compiled_regexp and not compiled_regexp.search(f): continue fragment = "%s/%s" % (src_path, f) - if delimit_me and delimiter: - tmp.write(delimiter) - # always make sure there's a newline after the - # delimiter, so lines don't run together - if delimiter[-1] != '\n': - tmp.write('\n') - if os.path.isfile(fragment): - tmp.write(file(fragment).read()) + if not os.path.isfile(fragment): + continue + fragment_content = file(fragment).read() + + # always put a newline between fragments if the previous fragment didn't end with a newline. + if add_newline: + tmp.write('\n') + + # delimiters should only appear between fragments + if delimit_me: + if delimiter: + # un-escape anything like newlines + delimiter = delimiter.decode('unicode-escape') + tmp.write(delimiter) + # always make sure there's a newline after the + # delimiter, so lines don't run together + if delimiter[-1] != '\n': + tmp.write('\n') + + tmp.write(fragment_content) delimit_me = True + if fragment_content.endswith('\n'): + add_newline = False + else: + add_newline = True + tmp.close() return temp_path diff --git a/library/files/copy b/library/files/copy index dbf9c71b4f6..08aa1d71a40 100644 --- a/library/files/copy +++ b/library/files/copy @@ -73,6 +73,7 @@ options: description: - The validation command to run before copying into place. The path to the file to validate is passed in via '%s' which must be present as in the visudo example below. + The command is passed securely so shell features like expansion and pipes won't work. required: false default: "" version_added: "1.2" @@ -82,10 +83,6 @@ options: defaults. required: false version_added: "1.5" - others: - description: - - all arguments accepted by the M(file) module also work here - required: false author: Michael DeHaan notes: - The "copy" module recursively copy facility does not scale to lots (>hundreds) of files. 
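(Reviewer aside, not part of the patch: the reworked fragment loop in assemble is easier to follow outside diff form. A condensed sketch of the new rules, under the same Python 2 assumptions as the module:)

    import os

    def assemble_fragments(src_path, delimiter=None):
        # A newline is inserted between fragments that lack one; the delimiter,
        # un-escaped so a literal '\n' becomes a real newline, only ever appears
        # *between* fragments, never before the first or after the last.
        pieces = []
        delimit_me = False
        add_newline = False
        for f in sorted(os.listdir(src_path)):
            fragment = os.path.join(src_path, f)
            if not os.path.isfile(fragment):
                continue
            content = open(fragment).read()
            if add_newline:
                pieces.append('\n')
            if delimit_me and delimiter:
                d = delimiter.decode('unicode-escape')
                pieces.append(d if d.endswith('\n') else d + '\n')
            pieces.append(content)
            delimit_me = True
            add_newline = not content.endswith('\n')
        return ''.join(pieces)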
diff --git a/library/files/file b/library/files/file index 8e4e30a99b7..3b4aaa9e235 100644 --- a/library/files/file +++ b/library/files/file @@ -33,99 +33,11 @@ DOCUMENTATION = ''' module: file version_added: "historical" short_description: Sets attributes of files +extends_documentation_fragment: files description: - Sets attributes of files, symlinks, and directories, or removes files/symlinks/directories. Many other modules support the same options as the M(file) module - including M(copy), M(template), and M(assemble). -options: - path: - description: - - 'path to the file being managed. Aliases: I(dest), I(name)' - required: true - default: [] - aliases: ['dest', 'name'] - state: - description: - - If C(directory), all immediate subdirectories will be created if they - do not exist. If C(file), the file will NOT be created if it does not - exist, see the M(copy) or M(template) module if you want that behavior. - If C(link), the symbolic link will be created or changed. Use C(hard) - for hardlinks. If C(absent), directories will be recursively deleted, - and files or symlinks will be unlinked. If C(touch) (new in 1.4), an empty file will - be created if the c(dest) does not exist, while an existing file or - directory will receive updated file access and modification times (similar - to the way `touch` works from the command line). - required: false - default: file - choices: [ file, link, directory, hard, touch, absent ] - mode: - required: false - default: null - choices: [] - description: - - mode the file or directory should be, such as 0644 as would be fed to I(chmod) - owner: - required: false - default: null - choices: [] - description: - - name of the user that should own the file/directory, as would be fed to I(chown) - group: - required: false - default: null - choices: [] - description: - - name of the group that should own the file/directory, as would be fed to I(chown) - src: - required: false - default: null - choices: [] - description: - - path of the file to link to (applies only to C(state=link)). Will accept absolute, - relative and nonexisting paths. Relative paths are not expanded. - seuser: - required: false - default: null - choices: [] - description: - - user part of SELinux file context. Will default to system policy, if - applicable. If set to C(_default), it will use the C(user) portion of the - policy if available - serole: - required: false - default: null - choices: [] - description: - - role part of SELinux file context, C(_default) feature works as for I(seuser). - setype: - required: false - default: null - choices: [] - description: - - type part of SELinux file context, C(_default) feature works as for I(seuser). - selevel: - required: false - default: "s0" - choices: [] - description: - - level part of the SELinux file context. This is the MLS/MCS attribute, - sometimes known as the C(range). C(_default) feature works as for - I(seuser). - recurse: - required: false - default: "no" - choices: [ "yes", "no" ] - version_added: "1.1" - description: - - recursively set the specified file attributes (applies only to state=directory) - force: - required: false - default: "no" - choices: [ "yes", "no" ] - description: - - 'force the creation of the symlinks in two cases: the source file does - not exist (but will appear later); the destination exists and is a file (so, we need to unlink the - "path" file and create symlink to the "src" file in place of it).' 
notes: - See also M(copy), M(template), M(assemble) requirements: [ ] @@ -135,13 +47,14 @@ author: Michael DeHaan EXAMPLES = ''' - file: path=/etc/foo.conf owner=foo group=foo mode=0644 - file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link +- file: path=/tmp/{{ item.path }} dest={{ item.dest }} state=link + with_items: + - { path: 'x', dest: 'y' } + - { path: 'z', dest: 'k' } ''' def main(): - # FIXME: pass this around, should not use global - global module - module = AnsibleModule( argument_spec = dict( state = dict(choices=['file','directory','link','hard','touch','absent'], default=None), @@ -151,6 +64,7 @@ def main(): force = dict(required=False,default=False,type='bool'), diff_peek = dict(default=None), validate = dict(required=False, default=None), + src = dict(required=False, default=None), ), add_file_common_args=True, supports_check_mode=True @@ -159,23 +73,27 @@ def main(): params = module.params state = params['state'] force = params['force'] + diff_peek = params['diff_peek'] + src = params['src'] + + # modify source as we later reload and pass, specially relevant when used by other modules. params['path'] = path = os.path.expanduser(params['path']) # short-circuit for diff_peek - if params.get('diff_peek', None) is not None: + if diff_peek is not None: appears_binary = False try: f = open(path) b = f.read(8192) f.close() - if b.find("\x00") != -1: + if "\x00" in b: appears_binary = True except: pass module.exit_json(path=path, changed=False, appears_binary=appears_binary) + # Find out current state prev_state = 'absent' - if os.path.lexists(path): if os.path.islink(path): prev_state = 'link' @@ -187,76 +105,60 @@ def main(): # could be many other things, but defaulting to file prev_state = 'file' - if prev_state is not None and state is None: - # set state to current type of file - state = prev_state - elif state is None: - # set default state to file - state = 'file' + # state should default to file, but since that creates many conflicts, + # default to 'current' when it exists. + if state is None: + if prev_state != 'absent': + state = prev_state + else: + state = 'file' # source is both the source of a symlink or an informational passing of the src for a template module # or copy module, even if this module never uses it, it is needed to key off some things - - src = params.get('src', None) - if src: + if src is not None: src = os.path.expanduser(src) - if src is not None and os.path.isdir(path) and state not in ["link", "absent"]: - if params['original_basename']: - basename = params['original_basename'] - else: - basename = os.path.basename(src) - params['path'] = path = os.path.join(path, basename) + # original_basename is used by other modules that depend on file. 
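# --- Reviewer aside, not part of the patch: the diff_peek short-circuit above
# reduces to a NUL-byte probe; 8192 is simply the amount the module samples.
def appears_binary(path):
    try:
        f = open(path)
        b = f.read(8192)
        f.close()
        return "\x00" in b
    except:
        return False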
+ if os.path.isdir(path) and state not in ["link", "absent"]: + if params['original_basename']: + basename = params['original_basename'] + else: + basename = os.path.basename(src) + params['path'] = path = os.path.join(path, basename) + else: + if state in ['link','hard']: + module.fail_json(msg='src and dest are required for creating links') file_args = module.load_file_common_arguments(params) - - if state in ['link','hard'] and (src is None or path is None): - module.fail_json(msg='src and dest are required for creating links') - elif path is None: - module.fail_json(msg='path is required') - changed = False recurse = params['recurse'] + if recurse and state != 'directory': + module.fail_json(path=path, msg="recurse option requires state to be 'directory'") - if recurse and state == 'file' and prev_state == 'directory': - state = 'directory' - - if prev_state != 'absent' and state == 'absent': - try: - if prev_state == 'directory': - if os.path.islink(path): - if module.check_mode: - module.exit_json(changed=True) - os.unlink(path) - else: + if state == 'absent': + if state != prev_state: + if not module.check_mode: + if prev_state == 'directory': try: - if module.check_mode: - module.exit_json(changed=True) shutil.rmtree(path, ignore_errors=False) except Exception, e: module.fail_json(msg="rmtree failed: %s" % str(e)) - else: - if module.check_mode: - module.exit_json(changed=True) - os.unlink(path) - except Exception, e: - module.fail_json(path=path, msg=str(e)) - module.exit_json(path=path, changed=True) - - if prev_state != 'absent' and prev_state != state: - if not (force and (prev_state == 'file' or prev_state == 'hard' or prev_state == 'directory') and state == 'link') and state != 'touch': - module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, src)) - - if prev_state == 'absent' and state == 'absent': - module.exit_json(path=path, changed=False) - - if state == 'file': + else: + try: + os.unlink(path) + except Exception, e: + module.fail_json(path=path, msg="unlinking failed: %s " % str(e)) + module.exit_json(path=path, changed=True) + else: + module.exit_json(path=path, changed=False) - if prev_state != 'file': - module.fail_json(path=path, msg='file (%s) does not exist, use copy or template module to create' % path) + elif state == 'file': + if state != prev_state: + # file is not absent and any other state is a conflict + module.fail_json(path=path, msg='file (%s) is %s, cannot continue' % (path, prev_state)) - changed = module.set_file_attributes_if_different(file_args, changed) + changed = module.set_fs_attributes_if_different(file_args, changed) module.exit_json(path=path, changed=changed) elif state == 'directory': @@ -266,31 +168,33 @@ def main(): os.makedirs(path) changed = True - changed = module.set_directory_attributes_if_different(file_args, changed) + changed = module.set_fs_attributes_if_different(file_args, changed) + if recurse: for root,dirs,files in os.walk( file_args['path'] ): - for dir in dirs: - dirname=os.path.join(root,dir) - tmp_file_args = file_args.copy() - tmp_file_args['path']=dirname - changed = module.set_directory_attributes_if_different(tmp_file_args, changed) - for file in files: - filename=os.path.join(root,file) + for fsobj in dirs + files: + fsname=os.path.join(root, fsobj) tmp_file_args = file_args.copy() - tmp_file_args['path']=filename - changed = module.set_file_attributes_if_different(tmp_file_args, changed) + tmp_file_args['path']=fsname + changed = 
module.set_fs_attributes_if_different(tmp_file_args, changed) + module.exit_json(path=path, changed=changed) elif state in ['link','hard']: + absrc = src + if not os.path.isabs(absrc): + absrc = os.path.normpath('%s/%s' % (os.path.dirname(path), absrc)) + + if not os.path.exists(absrc) and not force: + module.fail_json(path=path, src=src, msg='src file does not exist, use "force=yes" if you really want to create the link: %s' % absrc) + if state == 'hard': - if os.path.isabs(src): - abs_src = src - else: + if not os.path.isabs(src): module.fail_json(msg="absolute paths are required") - if not os.path.exists(abs_src) and not force: - module.fail_json(path=path, src=src, msg='src file does not exist') + elif prev_state in ['file', 'hard', 'directory'] and not force: + module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, src)) if prev_state == 'absent': changed = True @@ -300,58 +204,63 @@ def main(): changed = True elif prev_state == 'hard': if not (state == 'hard' and os.stat(path).st_ino == os.stat(src).st_ino): + changed = True if not force: module.fail_json(dest=path, src=src, msg='Cannot link, different hard link exists at destination') - changed = True - elif prev_state == 'file': - if not force: - module.fail_json(dest=path, src=src, msg='Cannot link, file exists at destination') + elif prev_state in ['file', 'directory']: changed = True - elif prev_state == 'directory': if not force: - module.fail_json(dest=path, src=src, msg='Cannot link, directory exists at destination') - changed = True + module.fail_json(dest=path, src=src, msg='Cannot link, %s exists at destination' % prev_state) else: module.fail_json(dest=path, src=src, msg='unexpected position reached') if changed and not module.check_mode: if prev_state != 'absent': + # try to replace atomically + tmppath = '/'.join([os.path.dirname(path), ".%s.%s.tmp" % (os.getpid(),time.time())]) try: - os.unlink(path) + if state == 'hard': + os.link(src,tmppath) + else: + os.symlink(src, tmppath) + os.rename(tmppath, path) except OSError, e: - module.fail_json(path=path, msg='Error while removing existing target: %s' % str(e)) - try: - if state == 'hard': - os.link(src,path) - else: - os.symlink(src, path) - except OSError, e: - module.fail_json(path=path, msg='Error while linking: %s' % str(e)) + if os.path.exists(tmppath): + os.unlink(tmppath) + module.fail_json(path=path, msg='Error while replacing: %s' % str(e)) + else: + try: + if state == 'hard': + os.link(src,path) + else: + os.symlink(src, path) + except OSError, e: + module.fail_json(path=path, msg='Error while linking: %s' % str(e)) - changed = module.set_file_attributes_if_different(file_args, changed) + changed = module.set_fs_attributes_if_different(file_args, changed) module.exit_json(dest=path, src=src, changed=changed) elif state == 'touch': - if module.check_mode: - module.exit_json(path=path, skipped=True) + if not module.check_mode: + + if prev_state == 'absent': + try: + open(path, 'w').close() + except OSError, e: + module.fail_json(path=path, msg='Error, could not touch target: %s' % str(e)) + elif prev_state in ['file', 'directory']: + try: + os.utime(path, None) + except OSError, e: + module.fail_json(path=path, msg='Error while touching existing target: %s' % str(e)) + else: + module.fail_json(msg='Cannot touch other than files and directories') + + module.set_fs_attributes_if_different(file_args, True) - if prev_state not in ['file', 'directory', 'absent']: - module.fail_json(msg='Cannot touch other than files and 
directories') - if prev_state != 'absent': - try: - os.utime(path, None) - except OSError, e: - module.fail_json(path=path, msg='Error while touching existing target: %s' % str(e)) - else: - try: - open(path, 'w').close() - except OSError, e: - module.fail_json(path=path, msg='Error, could not touch target: %s' % str(e)) - module.set_file_attributes_if_different(file_args, True) module.exit_json(dest=path, changed=True) - else: - module.fail_json(path=path, msg='unexpected position reached') + module.fail_json(path=path, msg='unexpected position reached') # import module snippets from ansible.module_utils.basic import * diff --git a/library/files/lineinfile b/library/files/lineinfile index 73c9e88cb8c..f781911ccd1 100644 --- a/library/files/lineinfile +++ b/library/files/lineinfile @@ -110,7 +110,8 @@ options: validate: required: false description: - - validation to run before copying into place + - validation to run before copying into place. The command is passed + securely so shell features like expansion and pipes won't work. required: false default: None version_added: "1.4" @@ -137,7 +138,7 @@ EXAMPLES = r""" # Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs. - lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'" -- lineinfile: dest=/opt/jboss-as/bin/standalone.conf regexp='^(.*)Xms(\d+)m(.*)$' line='\1Xms${xms}m\3' backrefs=yes +- lineinfile: dest=/opt/jboss-as/bin/standalone.conf regexp='^(.*)Xms(\d+)m(.*)$' line='\\1Xms${xms}m\\3' backrefs=yes # Validate a the sudoers file before saving - lineinfile: dest=/etc/sudoers state=present regexp='^%ADMIN ALL\=' line='%ADMIN ALL=(ALL) NOPASSWD:ALL' validate='visudo -cf %s' diff --git a/library/files/replace b/library/files/replace new file mode 100644 index 00000000000..f4193ae9f30 --- /dev/null +++ b/library/files/replace @@ -0,0 +1,160 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2013, Evan Kaufman . + +import re +import os +import tempfile + +DOCUMENTATION = """ +--- +module: replace +author: Evan Kaufman +short_description: Replace all instances of a particular string in a + file using a back-referenced regular expression. +description: + - This module will replace all instances of a pattern within a file. + - It is up to the user to maintain idempotence by ensuring that the + same pattern would never match any replacements made. +version_added: "1.6" +options: + dest: + required: true + aliases: [ name, destfile ] + description: + - The file to modify. + regexp: + required: true + description: + - The regular expression to look for in the contents of the file. + Uses Python regular expressions; see + U(http://docs.python.org/2/library/re.html). + Uses multiline mode, which means C(^) and C($) match the beginning + and end respectively of I(each line) of the file. + replace: + required: false + description: + - The string to replace regexp matches. May contain backreferences + that will get expanded with the regexp capture groups if the regexp + matches. If not set, matches are removed entirely. + backup: + required: false + default: "no" + choices: [ "yes", "no" ] + description: + - Create a backup file including the timestamp information so you can + get the original file back if you somehow clobbered it incorrectly. + validate: + required: false + description: + - validation to run before copying into place + required: false + default: None + others: + description: + - All arguments accepted by the M(file) module also work here. 
+ required: false +""" + +EXAMPLES = r""" +- replace: dest=/etc/hosts regexp='(\s+)old\.host\.name(\s+.*)?$' replace='\1new.host.name\2' backup=yes + +- replace: dest=/home/jdoe/.ssh/known_hosts regexp='^old\.host\.name[^\n]*\n' owner=jdoe group=jdoe mode=644 + +- replace: dest=/etc/apache/ports regexp='^(NameVirtualHost|Listen)\s+80\s*$' replace='\1 127.0.0.1:8080' validate='/usr/sbin/apache2ctl -f %s -t' +""" + +def write_changes(module,contents,dest): + + tmpfd, tmpfile = tempfile.mkstemp() + f = os.fdopen(tmpfd,'wb') + f.write(contents) + f.close() + + validate = module.params.get('validate', None) + valid = not validate + if validate: + (rc, out, err) = module.run_command(validate % tmpfile) + valid = rc == 0 + if rc != 0: + module.fail_json(msg='failed to validate: ' + 'rc:%s error:%s' % (rc,err)) + if valid: + module.atomic_move(tmpfile, dest) + +def check_file_attrs(module, changed, message): + + file_args = module.load_file_common_arguments(module.params) + if module.set_file_attributes_if_different(file_args, False): + + if changed: + message += " and " + changed = True + message += "ownership, perms or SE linux context changed" + + return message, changed + +def main(): + module = AnsibleModule( + argument_spec=dict( + dest=dict(required=True, aliases=['name', 'destfile']), + regexp=dict(required=True), + replace=dict(default='', type='str'), + backup=dict(default=False, type='bool'), + validate=dict(default=None, type='str'), + ), + add_file_common_args=True, + supports_check_mode=True + ) + + params = module.params + dest = os.path.expanduser(params['dest']) + + if os.path.isdir(dest): + module.fail_json(rc=256, msg='Destination %s is a directory !' % dest) + + if not os.path.exists(dest): + module.fail_json(rc=257, msg='Destination %s does not exist !' % dest) + else: + f = open(dest, 'rb') + contents = f.read() + f.close() + + mre = re.compile(params['regexp'], re.MULTILINE) + result = re.subn(mre, params['replace'], contents, 0) + + if result[1] > 0: + msg = '%s replacements made' % result[1] + changed = True + else: + msg = '' + changed = False + + if changed and not module.check_mode: + if params['backup'] and os.path.exists(dest): + module.backup_local(dest) + write_changes(module, result[0], dest) + + msg, changed = check_file_attrs(module, changed, msg) + module.exit_json(changed=changed, msg=msg) + +# this is magic, see lib/ansible/module_common.py +#<> + +main() diff --git a/library/files/stat b/library/files/stat index 2839ca8e06f..8c717a395c4 100644 --- a/library/files/stat +++ b/library/files/stat @@ -132,8 +132,9 @@ def main(): if S_ISLNK(mode): d['lnk_source'] = os.path.realpath(path) - if S_ISREG(mode) and get_md5: - d['md5'] = module.md5(path) + if S_ISREG(mode) and get_md5 and os.access(path,os.R_OK): + d['md5'] = module.md5(path) + try: pw = pwd.getpwuid(st.st_uid) diff --git a/library/files/synchronize b/library/files/synchronize index 493322393bc..8d67ce9bac1 100644 --- a/library/files/synchronize +++ b/library/files/synchronize @@ -16,8 +16,6 @@ # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . -import subprocess - DOCUMENTATION = ''' --- module: synchronize @@ -51,6 +49,13 @@ options: choices: [ 'yes', 'no' ] default: 'yes' required: false + checksum: + description: + - Skip based on checksum, rather than mod-time & size; Note that that "archive" option is still enabled by default - the "checksum" option will not disable it. 
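(Reviewer aside, not part of the patch: stripped of the Ansible plumbing, the heart of the new replace module above is a multiline re.subn over the whole file; the validate and atomic_move handling is omitted in this sketch.)

    import re

    def replace_all(path, regexp, replacement):
        contents = open(path, 'rb').read()
        mre = re.compile(regexp, re.MULTILINE)      # ^ and $ match per line
        new_contents, count = mre.subn(replacement, contents)
        if count:
            open(path, 'wb').write(new_contents)    # the real module writes via atomic_move
        return count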
+ choices: [ 'yes', 'no' ] + default: 'no' + required: false + version_added: "1.6" existing_only: description: - Skip creating new files on receiver. @@ -60,7 +65,7 @@ options: version_added: "1.5" delete: description: - - Delete files that don't exist (after transfer, not before) in the C(src) path. + - Delete files that don't exist (after transfer, not before) in the C(src) path. This option requires C(recursive=yes). choices: [ 'yes', 'no' ] default: 'no' required: false @@ -121,6 +126,17 @@ options: - Specify a --timeout for the rsync command in seconds. default: 10 required: false + set_remote_user: + description: + - put user@ for the remote paths. If you have a custom ssh config to define the remote user for a host + that does not match the inventory user, you should set this parameter to "no". + default: yes + rsync_opts: + description: + - Specify additional rsync options by passing in an array. + default: + required: false + version_added: "1.6" notes: - Inspect the verbose output to validate the destination user/host/path are what was expected. @@ -144,6 +160,9 @@ synchronize: src=some/relative/path dest=/some/absolute/path archive=no # Synchronization with --archive options enabled except for --recursive synchronize: src=some/relative/path dest=/some/absolute/path recursive=no +# Synchronization with --archive options enabled except for --times, with --checksum option enabled +synchronize: src=some/relative/path dest=/some/absolute/path checksum=yes times=no + # Synchronization without --archive options enabled except use --links synchronize: src=some/relative/path dest=/some/absolute/path archive=no links=yes @@ -169,6 +188,9 @@ synchronize: src=some/relative/path dest=/some/absolute/path rsync_path="sudo rs - var # exclude any path whose last part is 'var' - /var # exclude any path starting with 'var' starting at the source directory + /var/conf # include /var/conf even though it was previously excluded + +# Synchronize passing in extra rsync options +synchronize: src=/tmp/helloworld dest=/var/www/helloword rsync_opts=--no-motd,--exclude=.git ''' @@ -182,6 +204,7 @@ def main(): private_key = dict(default=None), rsync_path = dict(default=None), archive = dict(default='yes', type='bool'), + checksum = dict(default='no', type='bool'), existing_only = dict(default='no', type='bool'), dirs = dict(default='no', type='bool'), recursive = dict(type='bool'), @@ -191,7 +214,9 @@ def main(): times = dict(type='bool'), owner = dict(type='bool'), group = dict(type='bool'), - rsync_timeout = dict(type='int', default=10) + set_remote_user = dict(default='yes', type='bool'), + rsync_timeout = dict(type='int', default=10), + rsync_opts = dict(type='list') ), supports_check_mode = True ) @@ -205,6 +230,7 @@ def main(): rsync = module.params.get('local_rsync_path', 'rsync') rsync_timeout = module.params.get('rsync_timeout', 'rsync_timeout') archive = module.params['archive'] + checksum = module.params['checksum'] existing_only = module.params['existing_only'] dirs = module.params['dirs'] # the default of these params depends on the value of archive @@ -215,6 +241,7 @@ def main(): times = module.params['times'] owner = module.params['owner'] group = module.params['group'] + rsync_opts = module.params['rsync_opts'] cmd = '%s --delay-updates -FF --compress --timeout=%s' % (rsync, rsync_timeout) if module.check_mode: @@ -223,6 +250,8 @@ def main(): cmd = cmd + ' --delete-after' if existing_only: cmd = cmd + ' --existing' + if checksum: + cmd = cmd + ' --checksum' if archive: cmd = cmd + ' 
--archive' if recursive is False: @@ -270,8 +299,17 @@ def main(): if rsync_path: cmd = cmd + " --rsync-path '%s'" %(rsync_path) + if rsync_opts: + cmd = cmd + " " + " ".join(rsync_opts) changed_marker = '<>' cmd = cmd + " --out-format='" + changed_marker + "%i %n%L'" + + # expand the paths + if '@' not in source: + source = os.path.expanduser(source) + if '@' not in dest: + dest = os.path.expanduser(dest) + cmd = ' '.join([cmd, source, dest]) cmdstr = cmd (rc, out, err) = module.run_command(cmd) @@ -279,8 +317,12 @@ def main(): return module.fail_json(msg=err, rc=rc, cmd=cmdstr) else: changed = changed_marker in out - return module.exit_json(changed=changed, msg=out.replace(changed_marker,''), - rc=rc, cmd=cmdstr) + out_clean=out.replace(changed_marker,'') + out_lines=out_clean.split('\n') + while '' in out_lines: + out_lines.remove('') + return module.exit_json(changed=changed, msg=out_clean, + rc=rc, cmd=cmdstr, stdout_lines=out_lines) # import module snippets from ansible.module_utils.basic import * diff --git a/library/files/template b/library/files/template index 29fa905207f..3c21f3f1170 100644 --- a/library/files/template +++ b/library/files/template @@ -17,7 +17,7 @@ description: the template's machine, C(template_uid) the owner, C(template_path) the absolute path of the template, C(template_fullpath) is the absolute path of the template, and C(template_run_date) is the date that the template was rendered. Note that including - a string that uses a date in the template will resort in the template being marked 'changed' + a string that uses a date in the template will result in the template being marked 'changed' each time." options: src: @@ -40,14 +40,13 @@ options: default: "no" validate: description: - - validation to run before copying into place + - The validation command to run before copying into place. + - The path to the file to validate is passed in via '%s' which must be present as in the visudo example below. + - validation to run before copying into place. The command is passed + securely so shell features like expansion and pipes won't work. required: false default: "" version_added: "1.2" - others: - description: - - all arguments accepted by the M(file) module also work here, as well as the M(copy) module (except the the 'content' parameter). - required: false notes: - "Since Ansible version 0.9, templates are loaded with C(trim_blocks=True)." @@ -63,6 +62,6 @@ EXAMPLES = ''' # Example from Ansible Playbooks - template: src=/mytemplates/foo.j2 dest=/etc/file.conf owner=bin group=wheel mode=0644 -# Copy a new "sudoers file into place, after passing validation with visudo -- action: template src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s' +# Copy a new "sudoers" file into place, after passing validation with visudo +- template: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s' ''' diff --git a/library/files/unarchive b/library/files/unarchive index 661f3899690..29e9ddb9e48 100644 --- a/library/files/unarchive +++ b/library/files/unarchive @@ -43,7 +43,13 @@ options: required: false choices: [ "yes", "no" ] default: "yes" -author: Dylan Martin + creates: + description: + - a filename, when it already exists, this step will B(not) be run. + required: no + default: null + version_added: "1.6" +author: Dylan Martin todo: - detect changed/unchanged for .zip files - handle common unarchive args, like preserve owner/timestamp etc... 
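(Reviewer aside, not part of the patch: pulling the flag handling out of the synchronize diff above, the command assembly reduces to roughly this; the flag subset shown is illustrative.)

    import os

    def build_rsync_cmd(source, dest, checksum=False, archive=True,
                        rsync_opts=None, rsync='rsync', timeout=10):
        cmd = '%s --delay-updates -FF --compress --timeout=%s' % (rsync, timeout)
        if checksum:
            cmd += ' --checksum'               # compare content, not mtime+size
        if archive:
            cmd += ' --archive'
        if rsync_opts:
            cmd += ' ' + ' '.join(rsync_opts)  # e.g. ['--no-motd', '--exclude=.git']
        if '@' not in source:                  # expand ~ only for local paths
            source = os.path.expanduser(source)
        if '@' not in dest:
            dest = os.path.expanduser(dest)
        return ' '.join([cmd, source, dest])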
@@ -75,17 +81,20 @@ class ZipFile(object): self.src = src self.dest = dest self.module = module + self.cmd_path = self.module.get_bin_path('unzip') def is_unarchived(self): return dict(unarchived=False) def unarchive(self): - cmd = 'unzip -o "%s" -d "%s"' % (self.src, self.dest) + cmd = '%s -o "%s" -d "%s"' % (self.cmd_path, self.src, self.dest) rc, out, err = self.module.run_command(cmd) return dict(cmd=cmd, rc=rc, out=out, err=err) def can_handle_archive(self): - cmd = 'unzip -l "%s"' % self.src + if not self.cmd_path: + return False + cmd = '%s -l "%s"' % (self.cmd_path, self.src) rc, out, err = self.module.run_command(cmd) if rc == 0: return True @@ -99,23 +108,26 @@ class TgzFile(object): self.src = src self.dest = dest self.module = module + self.cmd_path = self.module.get_bin_path('tar') self.zipflag = 'z' def is_unarchived(self): dirof = os.path.dirname(self.dest) destbase = os.path.basename(self.dest) - cmd = 'tar -v -C "%s" --diff -%sf "%s"' % (self.dest, self.zipflag, self.src) + cmd = '%s -v -C "%s" --diff -%sf "%s"' % (self.cmd_path, self.dest, self.zipflag, self.src) rc, out, err = self.module.run_command(cmd) unarchived = (rc == 0) return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd) def unarchive(self): - cmd = 'tar -C "%s" -x%sf "%s"' % (self.dest, self.zipflag, self.src) + cmd = '%s -C "%s" -x%sf "%s"' % (self.cmd_path, self.dest, self.zipflag, self.src) rc, out, err = self.module.run_command(cmd) return dict(cmd=cmd, rc=rc, out=out, err=err) def can_handle_archive(self): - cmd = 'tar -t%sf "%s"' % (self.zipflag, self.src) + if not self.cmd_path: + return False + cmd = '%s -t%sf "%s"' % (self.cmd_path, self.zipflag, self.src) rc, out, err = self.module.run_command(cmd) if rc == 0: if len(out.splitlines(True)) > 0: @@ -129,6 +141,7 @@ class TarFile(TgzFile): self.src = src self.dest = dest self.module = module + self.cmd_path = self.module.get_bin_path('tar') self.zipflag = '' @@ -138,6 +151,7 @@ class TarBzip(TgzFile): self.src = src self.dest = dest self.module = module + self.cmd_path = self.module.get_bin_path('tar') self.zipflag = 'j' @@ -147,6 +161,7 @@ class TarXz(TgzFile): self.src = src self.dest = dest self.module = module + self.cmd_path = self.module.get_bin_path('tar') self.zipflag = 'J' @@ -157,7 +172,7 @@ def pick_handler(src, dest, module): obj = handler(src, dest, module) if obj.can_handle_archive(): return obj - raise RuntimeError('Failed to find handler to unarchive "%s"' % src) + module.fail_json(msg='Failed to find handler to unarchive. Make sure the required command to extract the file is installed.') def main(): @@ -168,6 +183,7 @@ def main(): original_basename = dict(required=False), # used to handle 'dest is a directory' via template, a slight hack dest = dict(required=True), copy = dict(default=True, type='bool'), + creates = dict(required=False), ), add_file_common_args=True, ) @@ -175,6 +191,7 @@ def main(): src = os.path.expanduser(module.params['src']) dest = os.path.expanduser(module.params['dest']) copy = module.params['copy'] + creates = module.params['creates'] # did tar file arrive? if not os.path.exists(src): @@ -185,6 +202,20 @@ def main(): if not os.access(src, os.R_OK): module.fail_json(msg="Source '%s' not readable" % src) + if creates: + # do not run the command if the line contains creates=filename + # and the filename already exists. This allows idempotence + # of command executions. 
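# --- Reviewer aside, not part of the patch: the handler classes below all share
# one probe pattern -- resolve the binary once with get_bin_path(), and have
# can_handle_archive() bow out when it is absent so pick_handler() fails cleanly.
class ExampleTarHandler(object):
    def __init__(self, src, dest, module):
        self.src, self.dest, self.module = src, dest, module
        self.cmd_path = module.get_bin_path('tar')   # None when tar is missing
    def can_handle_archive(self):
        if not self.cmd_path:
            return False
        rc, out, err = self.module.run_command('%s -tf "%s"' % (self.cmd_path, self.src))
        return rc == 0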
+ v = os.path.expanduser(creates) + if os.path.exists(v): + module.exit_json( + stdout="skipped, since %s exists" % v, + skipped=True, + changed=False, + stderr=False, + rc=0 + ) + # is dest OK to receive tar file? if not os.path.exists(os.path.dirname(dest)): module.fail_json(msg="Destination directory '%s' does not exist" % (os.path.dirname(dest))) diff --git a/library/internal/async_wrapper b/library/internal/async_wrapper index 278280ef1a8..2bc2dc21823 100644 --- a/library/internal/async_wrapper +++ b/library/internal/async_wrapper @@ -72,7 +72,7 @@ if len(sys.argv) < 3: }) sys.exit(1) -jid = sys.argv[1] +jid = "%s.%d" % (sys.argv[1], os.getpid()) time_limit = sys.argv[2] wrapped_module = sys.argv[3] argsfile = sys.argv[4] diff --git a/library/messaging/rabbitmq_parameter b/library/messaging/rabbitmq_parameter index 2b540cbfdee..2f78bd4ee15 100644 --- a/library/messaging/rabbitmq_parameter +++ b/library/messaging/rabbitmq_parameter @@ -52,6 +52,7 @@ options: - erlang node name of the rabbit we wish to configure required: false default: rabbit + version_added: "1.2" state: description: - Specify if user is to be added or removed diff --git a/library/messaging/rabbitmq_user b/library/messaging/rabbitmq_user index 175bc0c1624..1cbee360dff 100644 --- a/library/messaging/rabbitmq_user +++ b/library/messaging/rabbitmq_user @@ -55,6 +55,7 @@ options: - erlang node name of the rabbit we wish to configure required: false default: rabbit + version_added: "1.2" configure_priv: description: - Regular expression to restrict configure actions on a resource diff --git a/library/messaging/rabbitmq_vhost b/library/messaging/rabbitmq_vhost index 122f84e5761..fd4b04a683f 100644 --- a/library/messaging/rabbitmq_vhost +++ b/library/messaging/rabbitmq_vhost @@ -39,6 +39,7 @@ options: - erlang node name of the rabbit we wish to configure required: false default: rabbit + version_added: "1.2" tracing: description: - Enable/disable tracing for a vhost diff --git a/library/monitoring/airbrake_deployment b/library/monitoring/airbrake_deployment index 8a4a834be7c..e1c490b881b 100644 --- a/library/monitoring/airbrake_deployment +++ b/library/monitoring/airbrake_deployment @@ -51,7 +51,15 @@ options: description: - Optional URL to submit the notification to. Use to send notifications to Airbrake-compliant tools like Errbit. required: false - default: https://airbrake.io/deploys + default: "https://airbrake.io/deploys" + version_added: "1.5" + validate_certs: + description: + - If C(no), SSL certificates for the target url will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] # informational: requirements for nodes requirements: [ urllib, urllib2 ] @@ -64,29 +72,12 @@ EXAMPLES = ''' revision=4.2 ''' -HAS_URLLIB = True -try: - import urllib -except ImportError: - HAS_URLLIB = False - -HAS_URLLIB2 = True -try: - import urllib2 -except ImportError: - HAS_URLLIB2 = False - # =========================================== # Module execution. 
#
def main():

-    if not HAS_URLLIB:
-        module.fail_json(msg="urllib is not installed")
-    if not HAS_URLLIB2:
-        module.fail_json(msg="urllib2 is not installed")
-
    module = AnsibleModule(
        argument_spec=dict(
            token=dict(required=True),
@@ -94,7 +85,8 @@ def main():
            user=dict(required=False),
            repo=dict(required=False),
            revision=dict(required=False),
-            url=dict(required=False, default='https://api.airbrake.io/deploys.txt')
+            url=dict(required=False, default='https://api.airbrake.io/deploys.txt'),
+            validate_certs=dict(default='yes', type='bool'),
        ),
        supports_check_mode=True
    )
@@ -123,18 +115,16 @@ def main():
        module.exit_json(changed=True)

    # Send the data to airbrake
-    try:
-        req = urllib2.Request(url, urllib.urlencode(params))
-        result=urllib2.urlopen(req)
-    except Exception, e:
-        module.fail_json(msg="unable to update airbrake via %s?%s : %s" % (url, urllib.urlencode(params), e))
+    data = urllib.urlencode(params)
+    response, info = fetch_url(module, url, data=data)
+    if info['status'] == 200:
+        module.exit_json(changed=True)
    else:
-        if result.code == 200:
-            module.exit_json(changed=True)
-        else:
-            module.fail_json(msg="HTTP result code: %d connecting to %s" % (result.code, url))
+        module.fail_json(msg="HTTP result code: %d connecting to %s" % (info['status'], url))

# import module snippets
from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
+
main()
diff --git a/library/monitoring/boundary_meter b/library/monitoring/boundary_meter
index 202dfd03ae3..da739d4306f 100644
--- a/library/monitoring/boundary_meter
+++ b/library/monitoring/boundary_meter
@@ -24,7 +24,6 @@ along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import json
import datetime
-import urllib2
import base64
import os
@@ -59,6 +58,14 @@ options:
    description:
      - Organization's Boundary API key
    required: true
+  validate_certs:
+    description:
+      - If C(no), SSL certificates will not be validated. This should only be used
+        on personally controlled sites using self-signed certificates.
+    required: false
+    default: 'yes'
+    choices: ['yes', 'no']
+    version_added: 1.5.1
notes:
  - This module does not yet support boundary tags.
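A pattern repeated throughout this patch (airbrake above; boundary_meter, datadog and newrelic below) is swapping hand-rolled urllib/urllib2 calls for module_utils.urls.fetch_url, which honors the new validate_certs option and returns a (response, info) pair instead of raising on HTTP errors. A minimal sketch of the shape, assuming an AnsibleModule instance is in scope (notify is a hypothetical helper, not code from the patch):

    # import module snippets (as the modules below do)
    from ansible.module_utils.basic import *
    from ansible.module_utils.urls import *  # provides fetch_url

    def notify(module, url, payload):
        headers = {'Content-Type': 'application/json'}
        # info['status'] carries the HTTP status code, or -1 if the
        # connection itself failed; info['msg'] carries the error text
        response, info = fetch_url(module, url, data=payload, headers=headers)
        if info['status'] != 200:
            module.fail_json(msg="request to %s failed: %s" % (url, info.get('msg', '')))
        return response.read()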
@@ -74,12 +81,6 @@ EXAMPLES='''

'''

-try:
-    import urllib2
-    HAS_URLLIB2 = True
-except ImportError:
-    HAS_URLLIB2 = False
-
api_host = "api.boundary.com"
config_directory = "/etc/bprobe"

@@ -101,7 +102,7 @@ def build_url(name, apiid, action, meter_id=None, cert_type=None):
    elif action == "delete":
        return "https://%s/%s/meters/%s" % (api_host, apiid, meter_id)

-def http_request(name, apiid, apikey, action, meter_id=None, cert_type=None):
+def http_request(module, name, apiid, apikey, action, data=None, meter_id=None, cert_type=None):

    if meter_id is None:
        url = build_url(name, apiid, action)
@@ -111,11 +112,11 @@ def http_request(name, apiid, apikey, action, meter_id=None, cert_type=None):
    else:
        url = build_url(name, apiid, action, meter_id, cert_type)

-    auth = auth_encode(apikey)
-    request = urllib2.Request(url)
-    request.add_header("Authorization", "Basic %s" % (auth))
-    request.add_header("Content-Type", "application/json")
-    return request
+    headers = dict()
+    headers["Authorization"] = "Basic %s" % auth_encode(apikey)
+    headers["Content-Type"] = "application/json"
+
+    return fetch_url(module, url, data=data, headers=headers)

def create_meter(module, name, apiid, apikey):

@@ -126,14 +127,10 @@ def create_meter(module, name, apiid, apikey):
        module.exit_json(status="Meter " + name + " already exists",changed=False)
    else:
        # If it doesn't exist, create it
-        request = http_request(name, apiid, apikey, action="create")
-        # A create request seems to need a json body with the name of the meter in it
        body = '{"name":"' + name + '"}'
-        request.add_data(body)
+        response, info = http_request(module, name, apiid, apikey, data=body, action="create")

-        try:
-            result = urllib2.urlopen(request)
-        except urllib2.URLError, e:
+        if info['status'] != 200:
            module.fail_json(msg="Failed to connect to api host to create meter")

        # If the config directory doesn't exist, create it
@@ -160,15 +157,13 @@ def create_meter(module, name, apiid, apikey):

def search_meter(module, name, apiid, apikey):

-    request = http_request(name, apiid, apikey, action="search")
+    response, info = http_request(module, name, apiid, apikey, action="search")

-    try:
-        result = urllib2.urlopen(request)
-    except urllib2.URLError, e:
+    if info['status'] != 200:
        module.fail_json(msg="Failed to connect to api host to search for meter")

    # Return meters
-    return json.loads(result.read())
+    return json.loads(response.read())

def get_meter_id(module, name, apiid, apikey):
    # In order to delete the meter we need its id
@@ -186,16 +181,9 @@ def delete_meter(module, name, apiid, apikey):
    if meter_id is None:
        return 1, "Meter does not exist, so can't delete it"
    else:
-        action = "delete"
-        request = http_request(name, apiid, apikey, action, meter_id)
-        # See http://stackoverflow.com/questions/4511598/how-to-make-http-delete-method-using-urllib2
-        # urllib2 only does GET or POST I believe, but here we need delete
-        request.get_method = lambda: 'DELETE'
-
-        try:
-            result = urllib2.urlopen(request)
-        except urllib2.URLError, e:
-            module.fail_json("Failed to connect to api host to delete meter")
+        response, info = http_request(module, name, apiid, apikey, "delete", meter_id=meter_id)
+        if info['status'] != 200:
+            module.fail_json(msg="Failed to delete meter")

    # Each new meter gets a new key.pem and ca.pem file, so they should be deleted
    types = ['cert', 'key']
@@ -214,17 +202,14 @@ def download_request(module, name, apiid, apikey, cert_type):

    if meter_id is not None:
        action = "certificates"
-        request = http_request(name, apiid, apikey, action, meter_id, cert_type)
-
-        try:
-            result = urllib2.urlopen(request)
-        except urllib2.URLError, e:
+        response, info = http_request(module, name, apiid, apikey, action, meter_id=meter_id, cert_type=cert_type)
+        if info['status'] != 200:
            module.fail_json(msg="Failed to connect to api host to download certificate")

        if response:
            try:
                cert_file_path = '%s/%s.pem' % (config_directory,cert_type)
-                body = result.read()
+                body = response.read()
                cert_file = open(cert_file_path, 'w')
                cert_file.write(body)
                cert_file.close()
@@ -238,15 +223,13 @@ def download_request(module, name, apiid, apikey, cert_type):

def main():

-    if not HAS_URLLIB2:
-        module.fail_json(msg="urllib2 is not installed")
-
    module = AnsibleModule(
        argument_spec=dict(
            state=dict(required=True, choices=['present', 'absent']),
            name=dict(required=False),
            apikey=dict(required=True),
            apiid=dict(required=True),
+            validate_certs = dict(default='yes', type='bool'),
        )
    )

@@ -268,5 +251,6 @@ def main():

# import module snippets
from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *

main()
diff --git a/library/monitoring/datadog_event b/library/monitoring/datadog_event
index 629e86e98ab..5d38dd4c31d 100644
--- a/library/monitoring/datadog_event
+++ b/library/monitoring/datadog_event
@@ -54,6 +54,14 @@ options:
    description: ["An arbitrary string to use for aggregation."]
    required: false
    default: null
+  validate_certs:
+    description:
+      - If C(no), SSL certificates will not be validated. This should only be used
+        on personally controlled sites using self-signed certificates.
+    required: false
+    default: 'yes'
+    choices: ['yes', 'no']
+    version_added: 1.5.1
'''

EXAMPLES = '''
@@ -67,7 +75,6 @@ datadog_event: title="Testing from ansible" text="Test!"
'''

import socket
-from urllib2 import urlopen, Request, URLError

def main():
    module = AnsibleModule(
@@ -90,15 +97,15 @@ def main():
                         choices=['nagios', 'hudson', 'jenkins', 'user', 'my apps',
                                  'feed', 'chef', 'puppet', 'git', 'bitbucket', 'fabric',
                                  'capistrano']
-            )
+            ),
+            validate_certs = dict(default='yes', type='bool'),
        )
    )

    post_event(module)

def post_event(module):
-    uri = "https://app.datadoghq.com/api/v1/events?api_key=" + \
-        module.params['api_key']
+    uri = "https://app.datadoghq.com/api/v1/events?api_key=%s" % module.params['api_key']

    body = dict(
        title=module.params['title'],
@@ -117,22 +124,20 @@ def main():
    json_body = module.jsonify(body)
    headers = {"Content-Type": "application/json"}

-    request = Request(uri, json_body, headers, unverifiable=True)
-    try:
-        response = urlopen(request)
+    (response, info) = fetch_url(module, uri, data=json_body, headers=headers)
+    if info['status'] == 200:
        response_body = response.read()
        response_json = module.from_json(response_body)
        if response_json['status'] == 'ok':
            module.exit_json(changed=True)
        else:
            module.fail_json(msg=response_body)

-    except URLError, e:
-        module.fail_json(msg="URL error: %s."
% e) - except socket.error, e: - module.fail_json(msg="Socket error: %s to %s" % (e, uri)) + else: + module.fail_json(**info) # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * + main() diff --git a/library/monitoring/librato_annotation b/library/monitoring/librato_annotation new file mode 100644 index 00000000000..63979f41bfb --- /dev/null +++ b/library/monitoring/librato_annotation @@ -0,0 +1,169 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- +# +# (C) Seth Edwards, 2014 +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . +# + + +import base64 + +DOCUMENTATION = ''' +--- +module: librato_annotation +short_description: create an annotation in librato +description: + - Create an annotation event on the given annotation stream :name. If the annotation stream does not exist, it will be created automatically +version_added: "1.6" +author: Seth Edwards +requirements: + - urllib2 + - base64 +options: + user: + description: + - Librato account username + required: true + api_key: + description: + - Librato account api key + required: true + name: + description: + - The annotation stream name + - If the annotation stream does not exist, it will be created automatically + required: false + title: + description: + - The title of an annotation is a string and may contain spaces + - The title should be a short, high-level summary of the annotation e.g. v45 Deployment + required: true + source: + description: + - A string which describes the originating source of an annotation when that annotation is tracked across multiple members of a population + required: false + description: + description: + - The description contains extra meta-data about a particular annotation + - The description should contain specifics on the individual annotation e.g. Deployed 9b562b2 shipped new feature foo! 
+    required: false
+  start_time:
+    description:
+      - The unix timestamp indicating the time at which the event referenced by this annotation started
+    required: false
+  end_time:
+    description:
+      - The unix timestamp indicating the time at which the event referenced by this annotation ended
+      - For events that have a duration, this is a useful way to annotate the duration of the event
+    required: false
+  links:
+    description:
+      - See examples
+    required: true
+'''

+EXAMPLES = '''
+# Create a simple annotation event with a source
+- librato_annotation:
+    user: user@example.com
+    api_key: XXXXXXXXXXXXXXXXX
+    title: 'App Config Change'
+    source: 'foo.bar'
+    description: 'This is a detailed description of the config change'
+
+# Create an annotation that includes a link
+- librato_annotation:
+    user: user@example.com
+    api_key: XXXXXXXXXXXXXXXXXX
+    name: 'code.deploy'
+    title: 'app code deploy'
+    description: 'this is a detailed description of a deployment'
+    links:
+      - { rel: 'example', href: 'http://www.example.com/deploy' }
+
+# Create an annotation with a start_time and end_time
+- librato_annotation:
+    user: user@example.com
+    api_key: XXXXXXXXXXXXXXXXXX
+    name: 'maintenance'
+    title: 'Maintenance window'
+    description: 'This is a detailed description of maintenance'
+    start_time: 1395940006
+    end_time: 1395954406
+'''

+try:
+    import urllib2
+    HAS_URLLIB2 = True
+except ImportError:
+    HAS_URLLIB2 = False
+
+def post_annotation(module):
+    user = module.params['user']
+    api_key = module.params['api_key']
+    name = module.params['name']
+    title = module.params['title']
+
+    url = 'https://metrics-api.librato.com/v1/annotations/%s' % name
+    params = {}
+    params['title'] = title
+
+    if module.params['source'] != None:
+        params['source'] = module.params['source']
+    if module.params['description'] != None:
+        params['description'] = module.params['description']
+    if module.params['start_time'] != None:
+        params['start_time'] = module.params['start_time']
+    if module.params['end_time'] != None:
+        params['end_time'] = module.params['end_time']
+    if module.params['links'] != None:
+        params['links'] = module.params['links']
+
+    json_body = module.jsonify(params)
+
+    headers = {}
+    headers['Content-Type'] = 'application/json'
+    headers['Authorization'] = b"Basic " + base64.b64encode(user + b":" + api_key).strip()
+    req = urllib2.Request(url, json_body, headers)
+    try:
+        response = urllib2.urlopen(req)
+    except urllib2.HTTPError as e:
+        module.fail_json(msg="Request Failed", reason=e.reason)
+    response = response.read()
+    module.exit_json(changed=True, annotation=response)
+
+def main():
+
+    module = AnsibleModule(
+        argument_spec = dict(
+            user = dict(required=True),
+            api_key = dict(required=True),
+            name = dict(required=False),
+            title = dict(required=True),
+            source = dict(required=False),
+            description = dict(required=False),
+            start_time = dict(required=False, default=None, type='int'),
+            end_time = dict(required=False, default=None, type='int'),
+            links = dict(type='list')
+        )
+    )
+
+    post_annotation(module)
+
+from ansible.module_utils.basic import *
+main()
diff --git a/library/monitoring/logentries b/library/monitoring/logentries
new file mode 100644
index 00000000000..373f4f777ff
--- /dev/null
+++ b/library/monitoring/logentries
@@ -0,0 +1,130 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# (c) 2013, Ivan Vanderbyl
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software. If not, see <http://www.gnu.org/licenses/>.

+DOCUMENTATION = '''
+---
+module: logentries
+author: Ivan Vanderbyl
+short_description: Module for tracking logs via logentries.com
+description:
+    - Sends logs to LogEntries in realtime
+version_added: "1.6"
+options:
+    path:
+        description:
+            - path to a log file
+        required: true
+    state:
+        description:
+            - following state of the log
+        choices: [ 'present', 'absent' ]
+        required: false
+        default: present
+notes:
+    - Requires the LogEntries agent which can be installed following the instructions at logentries.com
+'''
+EXAMPLES = '''
+- logentries: path=/var/log/nginx/access.log state=present
+- logentries: path=/var/log/nginx/error.log state=absent
+'''
+
+def query_log_status(module, le_path, path, state="present"):
+    """ Returns whether a log is followed or not. """
+
+    if state == "present":
+        rc, out, err = module.run_command("%s followed %s" % (le_path, path))
+        if rc == 0:
+            return True
+
+    return False
+
+def follow_log(module, le_path, logs):
+    """ Follows one or more logs if not already followed. """
+
+    followed_count = 0
+
+    for log in logs:
+        if query_log_status(module, le_path, log):
+            continue
+
+        if module.check_mode:
+            module.exit_json(changed=True)
+        rc, out, err = module.run_command([le_path, 'follow', log])
+
+        if not query_log_status(module, le_path, log):
+            module.fail_json(msg="failed to follow '%s': %s" % (log, err.strip()))
+
+        followed_count += 1
+
+    if followed_count > 0:
+        module.exit_json(changed=True, msg="followed %d log(s)" % (followed_count,))
+
+    module.exit_json(changed=False, msg="log(s) already followed")
+
+def unfollow_log(module, le_path, logs):
+    """ Unfollows one or more logs if followed. """
+
+    removed_count = 0
+
+    # Using a for loop so that, in case of error, we can report the log that failed
+    for log in logs:
+        # Query the log first, to see if we even need to remove.
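+        # (Checking first keeps the module idempotent: a log that is not
+        # followed is skipped instead of being reported as a change.)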
+        if not query_log_status(module, le_path, log):
+            continue
+
+        if module.check_mode:
+            module.exit_json(changed=True)
+        rc, out, err = module.run_command([le_path, 'rm', log])
+
+        if query_log_status(module, le_path, log):
+            module.fail_json(msg="failed to remove '%s': %s" % (log, err.strip()))
+
+        removed_count += 1
+
+    if removed_count > 0:
+        module.exit_json(changed=True, msg="removed %d log(s)" % removed_count)
+
+    module.exit_json(changed=False, msg="log(s) already unfollowed")
+
+def main():
+    module = AnsibleModule(
+        argument_spec = dict(
+            path = dict(aliases=["name"], required=True),
+            state = dict(default="present", choices=["present", "followed", "absent", "unfollowed"])
+        ),
+        supports_check_mode=True
+    )
+
+    le_path = module.get_bin_path('le', True, ['/usr/local/bin'])
+
+    p = module.params
+
+    # Handle multiple log files
+    logs = p["path"].split(",")
+    logs = filter(None, logs)
+
+    if p["state"] in ["present", "followed"]:
+        follow_log(module, le_path, logs)
+
+    elif p["state"] in ["absent", "unfollowed"]:
+        unfollow_log(module, le_path, logs)
+
+# import module snippets
+from ansible.module_utils.basic import *
+
+main()
diff --git a/library/monitoring/monit b/library/monitoring/monit
index 32e3e058121..0705b714315 100644
--- a/library/monitoring/monit
+++ b/library/monitoring/monit
@@ -47,6 +47,7 @@ EXAMPLES = '''
- monit: name=httpd state=started
'''

+import pipes

def main():
    arg_spec = dict(
@@ -67,7 +68,7 @@ def main():
        rc, out, err = module.run_command('%s reload' % MONIT)
        module.exit_json(changed=True, name=name, state=state)

-    rc, out, err = module.run_command('%s summary | grep "Process \'%s\'"' % (MONIT, name))
+    rc, out, err = module.run_command('%s summary | grep "Process \'%s\'"' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
    present = name in out

    if not present and not state == 'present':
@@ -78,7 +79,7 @@ def main():
            if module.check_mode:
                module.exit_json(changed=True)
            module.run_command('%s reload' % MONIT, check_rc=True)
-            rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, name))
+            rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
            if name in out:
                module.exit_json(changed=True, name=name, state=state)
            else:
@@ -86,7 +87,7 @@ def main():

        module.exit_json(changed=False, name=name, state=state)

-    rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, name))
+    rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
    running = 'running' in out.lower()

    if running and (state == 'started' or state == 'monitored'):
@@ -99,7 +100,7 @@ def main():
        if module.check_mode:
            module.exit_json(changed=True)
        module.run_command('%s stop %s' % (MONIT, name))
-        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, name))
+        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
        if 'not monitored' in out.lower() or 'stop pending' in out.lower():
            module.exit_json(changed=True, name=name, state=state)
        module.fail_json(msg=out)
@@ -108,7 +109,8 @@ def main():
        if module.check_mode:
            module.exit_json(changed=True)
        module.run_command('%s unmonitor %s' % (MONIT, name))
-        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, name))
+        # FIXME: DRY FOLKS!
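+        # As in the earlier hunks: pipes.quote() shell-escapes the name,
+        # which is what makes use_unsafe_shell=True (required for the
+        # pipe to grep) safe to use here.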
+ rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True) if 'not monitored' in out.lower(): module.exit_json(changed=True, name=name, state=state) module.fail_json(msg=out) diff --git a/library/monitoring/newrelic_deployment b/library/monitoring/newrelic_deployment index de64651969c..93d55832fd3 100644 --- a/library/monitoring/newrelic_deployment +++ b/library/monitoring/newrelic_deployment @@ -63,6 +63,14 @@ options: description: - The environment for this deployment required: false + validate_certs: + description: + - If C(no), SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + version_added: 1.5.1 # informational: requirements for nodes requirements: [ urllib, urllib2 ] @@ -75,29 +83,12 @@ EXAMPLES = ''' revision=1.0 ''' -HAS_URLLIB = True -try: - import urllib -except ImportError: - HAS_URLLIB = False - -HAS_URLLIB2 = True -try: - import urllib2 -except ImportError: - HAS_URLLIB2 = False - # =========================================== # Module execution. # def main(): - if not HAS_URLLIB: - module.fail_json(msg="urllib is not installed") - if not HAS_URLLIB2: - module.fail_json(msg="urllib2 is not installed") - module = AnsibleModule( argument_spec=dict( token=dict(required=True), @@ -109,6 +100,7 @@ def main(): user=dict(required=False), appname=dict(required=False), environment=dict(required=False), + validate_certs = dict(default='yes', type='bool'), ), supports_check_mode=True ) @@ -134,29 +126,20 @@ def main(): module.exit_json(changed=True) # Send the data to NewRelic - try: - req = urllib2.Request("https://rpm.newrelic.com/deployments.xml", urllib.urlencode(params)) - req.add_header('x-api-key',module.params["token"]) - result=urllib2.urlopen(req) - # urlopen behaves differently in python 2.4 and 2.6 so we handle - # both cases here. In python 2.4 it throws an exception if the - # return code is anything other than a 200. In python 2.6 it - # doesn't throw an exception for any 2xx return codes. In both - # cases we expect newrelic should return a 201 on success. So - # to handle both cases, both the except & else cases below are - # effectively identical. - except Exception, e: - if e.code == 201: - module.exit_json(changed=True) - else: - module.fail_json(msg="unable to update newrelic: %s" % e) + url = "https://rpm.newrelic.com/deployments.xml" + data = urllib.urlencode(params) + headers = { + 'x-api-key': module.params["token"], + } + response, info = fetch_url(module, url, data=data, headers=headers) + if info['status'] in (200, 201): + module.exit_json(changed=True) else: - if result.code == 201: - module.exit_json(changed=True) - else: - module.fail_json(msg="result code: %d" % result.code) + module.fail_json(msg="unable to update newrelic: %s" % info['msg']) # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * + main() diff --git a/library/monitoring/pagerduty b/library/monitoring/pagerduty index d2f630ae82a..90771a818bd 100644 --- a/library/monitoring/pagerduty +++ b/library/monitoring/pagerduty @@ -85,6 +85,15 @@ options: default: Created by Ansible choices: [] aliases: [] + validate_certs: + description: + - If C(no), SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. 
+ required: false + default: 'yes' + choices: ['yes', 'no'] + version_added: 1.5.1 + notes: - This module does not yet have support to end maintenance windows. ''' @@ -124,9 +133,15 @@ EXAMPLES=''' import json import datetime -import urllib2 import base64 +def auth_header(user, passwd, token): + if token: + return "Token token=%s" % token + + auth = base64.encodestring('%s:%s' % (user, passwd)).replace('\n', '') + return "Basic %s" % auth + def create_req(url, data, name, user, passwd, token): req = urllib2.Request(url, data) if token: @@ -134,39 +149,42 @@ def create_req(url, data, name, user, passwd, token): else: auth = base64.encodestring('%s:%s' % (user, passwd)).replace('\n', '') req.add_header("Authorization", "Basic %s" % auth) - return req -def ongoing(name, user, passwd, token): +def ongoing(module, name, user, passwd, token): url = "https://" + name + ".pagerduty.com/api/v1/maintenance_windows/ongoing" - req = create_req(url, None, name, user, passwd, token) - res = urllib2.urlopen(req) - out = res.read() + headers = {"Authorization": auth_header(user, passwd, token)} - return False, out + response, info = fetch_url(module, url, headers=headers) + if info['status'] != 200: + module.fail_json(msg="failed to lookup the ongoing window: %s" % info['msg']) + return False, response.read() -def create(name, user, passwd, token, requester_id, service, hours, minutes, desc): +def create(module, name, user, passwd, token, requester_id, service, hours, minutes, desc): now = datetime.datetime.utcnow() later = now + datetime.timedelta(hours=int(hours), minutes=int(minutes)) start = now.strftime("%Y-%m-%dT%H:%M:%SZ") end = later.strftime("%Y-%m-%dT%H:%M:%SZ") url = "https://" + name + ".pagerduty.com/api/v1/maintenance_windows" + auth = base64.encodestring('%s:%s' % (user, passwd)).replace('\n', '') + headers = { + 'Authorization': auth_header(user, passwd, token), + 'Content-Type' : 'application/json', + } request_data = {'maintenance_window': {'start_time': start, 'end_time': end, 'description': desc, 'service_ids': [service]}} if requester_id: request_data['requester_id'] = requester_id data = json.dumps(request_data) - req = create_req(url, data, name, user, passwd, token) - req.add_header('Content-Type', 'application/json') + response, info = fetch_url(module, url, data=data, headers=headers, method='POST') + if info['status'] != 200: + module.fail_json(msg="failed to create the window: %s" % info['msg']) - res = urllib2.urlopen(req) - out = res.read() - - return False, out + return False, response.read() def main(): @@ -182,7 +200,8 @@ def main(): requester_id=dict(required=False), hours=dict(default='1', required=False), minutes=dict(default='0', required=False), - desc=dict(default='Created by Ansible', required=False) + desc=dict(default='Created by Ansible', required=False), + validate_certs = dict(default='yes', type='bool'), ) ) @@ -204,10 +223,10 @@ def main(): if state == "running" or state == "started": if not service: module.fail_json(msg="service not specified") - (rc, out) = create(name, user, passwd, token, requester_id, service, hours, minutes, desc) + (rc, out) = create(module, name, user, passwd, token, requester_id, service, hours, minutes, desc) if state == "ongoing": - (rc, out) = ongoing(name, user, passwd, token) + (rc, out) = ongoing(module, name, user, passwd, token) if rc != 0: module.fail_json(msg="failed", result=out) @@ -216,4 +235,6 @@ def main(): # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * + 
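+# fetch_url from module_utils.urls (imported above) now performs the HTTP
+# calls, so TLS certificate checking follows the validate_certs parameter.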
main() diff --git a/library/monitoring/rollbar_deployment b/library/monitoring/rollbar_deployment new file mode 100644 index 00000000000..772e78fc5c2 --- /dev/null +++ b/library/monitoring/rollbar_deployment @@ -0,0 +1,133 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# Copyright 2014, Max Riveiro, +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . + +DOCUMENTATION = ''' +--- +module: rollbar_deployment +version_added: 1.6 +author: Max Riveiro +short_description: Notify Rollbar about app deployments +description: + - Notify Rollbar about app deployments + (see https://rollbar.com/docs/deploys_other/) +options: + token: + description: + - Your project access token. + required: true + environment: + description: + - Name of the environment being deployed, e.g. 'production'. + required: true + revision: + description: + - Revision number/sha being deployed. + required: true + user: + description: + - User who deployed. + required: false + rollbar_user: + description: + - Rollbar username of the user who deployed. + required: false + comment: + description: + - Deploy comment (e.g. what is being deployed). + required: false + url: + description: + - Optional URL to submit the notification to. + required: false + default: 'https://api.rollbar.com/api/1/deploy/' + validate_certs: + description: + - If C(no), SSL certificates for the target url will not be validated. + This should only be used on personally controlled sites using + self-signed certificates. 
+    required: false
+    default: 'yes'
+    choices: ['yes', 'no']
+'''

+EXAMPLES = '''
+- rollbar_deployment: token=AAAAAA
+                      environment='staging'
+                      user='ansible'
+                      revision=4.2
+                      rollbar_user='admin'
+                      comment='Test Deploy'
+'''
+
+
+def main():
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            token=dict(required=True),
+            environment=dict(required=True),
+            revision=dict(required=True),
+            user=dict(required=False),
+            rollbar_user=dict(required=False),
+            comment=dict(required=False),
+            url=dict(
+                required=False,
+                default='https://api.rollbar.com/api/1/deploy/'
+            ),
+            validate_certs=dict(default='yes', type='bool'),
+        ),
+        supports_check_mode=True
+    )
+
+    if module.check_mode:
+        module.exit_json(changed=True)
+
+    params = dict(
+        access_token=module.params['token'],
+        environment=module.params['environment'],
+        revision=module.params['revision']
+    )
+
+    if module.params['user']:
+        params['local_username'] = module.params['user']
+
+    if module.params['rollbar_user']:
+        params['rollbar_username'] = module.params['rollbar_user']
+
+    if module.params['comment']:
+        params['comment'] = module.params['comment']
+
+    url = module.params.get('url')
+
+    try:
+        data = urllib.urlencode(params)
+        response, info = fetch_url(module, url, data=data)
+    except Exception, e:
+        module.fail_json(msg='Unable to notify Rollbar: %s' % e)
+    else:
+        if info['status'] == 200:
+            module.exit_json(changed=True)
+        else:
+            module.fail_json(msg='HTTP result code: %d connecting to %s' % (info['status'], url))
+
+from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
+
+main()
diff --git a/library/net_infrastructure/bigip_facts b/library/net_infrastructure/bigip_facts
new file mode 100644
index 00000000000..3a7a4533f69
--- /dev/null
+++ b/library/net_infrastructure/bigip_facts
@@ -0,0 +1,1670 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# (c) 2013, Matt Hite
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

+DOCUMENTATION = '''
+---
+module: bigip_facts
+short_description: "Collect facts from F5 BIG-IP devices"
+description:
+    - "Collect facts from F5 BIG-IP devices via iControl SOAP API"
+version_added: "1.6"
+author: Matt Hite
+notes:
+    - "Requires BIG-IP software version >= 11.4"
+    - "F5 developed module 'bigsuds' required (see http://devcentral.f5.com)"
+    - "Best run as a local_action in your playbook"
+    - "Tested with manager and above account privilege level"
+
+requirements:
+    - bigsuds
+options:
+    server:
+        description:
+            - BIG-IP host
+        required: true
+        default: null
+        choices: []
+        aliases: []
+    user:
+        description:
+            - BIG-IP username
+        required: true
+        default: null
+        choices: []
+        aliases: []
+    password:
+        description:
+            - BIG-IP password
+        required: true
+        default: null
+        choices: []
+        aliases: []
+    session:
+        description:
+            - BIG-IP session support; may be useful to avoid concurrency
+              issues in certain circumstances.
+ required: false + default: true + choices: [] + aliases: [] + include: + description: + - Fact category or list of categories to collect + required: true + default: null + choices: ['address_class', 'certificate', 'client_ssl_profile', + 'device_group', 'interface', 'key', 'node', 'pool', 'rule', + 'self_ip', 'software', 'system_info', 'traffic_group', + 'trunk', 'virtual_address', 'virtual_server', 'vlan'] + aliases: [] + filter: + description: + - Shell-style glob matching string used to filter fact keys. Not + applicable for software and system_info fact categories. + required: false + default: null + choices: [] + aliases: [] +''' + +EXAMPLES = ''' + +## playbook task examples: + +--- +# file bigip-test.yml +# ... +- hosts: bigip-test + tasks: + - name: Collect BIG-IP facts + local_action: > + bigip_facts + server=lb.mydomain.com + user=admin + password=mysecret + include=interface,vlan + +''' + +try: + import bigsuds +except ImportError: + bigsuds_found = False +else: + bigsuds_found = True + +import fnmatch +import traceback +import re +from suds import MethodNotFound + +# =========================================== +# bigip_facts module specific support methods. +# + +class F5(object): + """F5 iControl class. + + F5 BIG-IP iControl API class. + + Attributes: + api: iControl API instance. + """ + + def __init__(self, host, user, password, session=False): + self.api = bigsuds.BIGIP(hostname=host, username=user, password=password) + if session: + self.start_session() + + def start_session(self): + self.api = self.api.with_session_id() + + def get_api(self): + return self.api + + def set_recursive_query_state(self, state): + self.api.System.Session.set_recursive_query_state(state) + + def get_recursive_query_state(self): + return self.api.System.Session.get_recursive_query_state() + + def enable_recursive_query_state(self): + self.set_recursive_query_state('STATE_ENABLED') + + def disable_recursive_query_state(self): + self.set_recursive_query_state('STATE_DISABLED') + + def set_active_folder(self, folder): + self.api.System.Session.set_active_folder(folder=folder) + + def get_active_folder(self): + return self.api.System.Session.get_active_folder() + + +class Interfaces(object): + """Interfaces class. + + F5 BIG-IP interfaces class. + + Attributes: + api: iControl API instance. + interfaces: A list of BIG-IP interface names. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.interfaces = api.Networking.Interfaces.get_list() + if regex: + re_filter = re.compile(regex) + self.interfaces = filter(re_filter.search, self.interfaces) + + def get_list(self): + return self.interfaces + + def get_active_media(self): + return self.api.Networking.Interfaces.get_active_media(self.interfaces) + + def get_actual_flow_control(self): + return self.api.Networking.Interfaces.get_actual_flow_control(self.interfaces) + + def get_bundle_state(self): + return self.api.Networking.Interfaces.get_bundle_state(self.interfaces) + + def get_description(self): + return self.api.Networking.Interfaces.get_description(self.interfaces) + + def get_dual_media_state(self): + return self.api.Networking.Interfaces.get_dual_media_state(self.interfaces) + + def get_enabled_state(self): + return self.api.Networking.Interfaces.get_enabled_state(self.interfaces) + + def get_if_index(self): + return self.api.Networking.Interfaces.get_if_index(self.interfaces) + + def get_learning_mode(self): + return self.api.Networking.Interfaces.get_learning_mode(self.interfaces) + + def get_lldp_admin_status(self): + return self.api.Networking.Interfaces.get_lldp_admin_status(self.interfaces) + + def get_lldp_tlvmap(self): + return self.api.Networking.Interfaces.get_lldp_tlvmap(self.interfaces) + + def get_mac_address(self): + return self.api.Networking.Interfaces.get_mac_address(self.interfaces) + + def get_media(self): + return self.api.Networking.Interfaces.get_media(self.interfaces) + + def get_media_option(self): + return self.api.Networking.Interfaces.get_media_option(self.interfaces) + + def get_media_option_sfp(self): + return self.api.Networking.Interfaces.get_media_option_sfp(self.interfaces) + + def get_media_sfp(self): + return self.api.Networking.Interfaces.get_media_sfp(self.interfaces) + + def get_media_speed(self): + return self.api.Networking.Interfaces.get_media_speed(self.interfaces) + + def get_media_status(self): + return self.api.Networking.Interfaces.get_media_status(self.interfaces) + + def get_mtu(self): + return self.api.Networking.Interfaces.get_mtu(self.interfaces) + + def get_phy_master_slave_mode(self): + return self.api.Networking.Interfaces.get_phy_master_slave_mode(self.interfaces) + + def get_prefer_sfp_state(self): + return self.api.Networking.Interfaces.get_prefer_sfp_state(self.interfaces) + + def get_flow_control(self): + return self.api.Networking.Interfaces.get_requested_flow_control(self.interfaces) + + def get_sflow_poll_interval(self): + return self.api.Networking.Interfaces.get_sflow_poll_interval(self.interfaces) + + def get_sflow_poll_interval_global(self): + return self.api.Networking.Interfaces.get_sflow_poll_interval_global(self.interfaces) + + def get_sfp_media_state(self): + return self.api.Networking.Interfaces.get_sfp_media_state(self.interfaces) + + def get_stp_active_edge_port_state(self): + return self.api.Networking.Interfaces.get_stp_active_edge_port_state(self.interfaces) + + def get_stp_enabled_state(self): + return self.api.Networking.Interfaces.get_stp_enabled_state(self.interfaces) + + def get_stp_link_type(self): + return self.api.Networking.Interfaces.get_stp_link_type(self.interfaces) + + def get_stp_protocol_detection_reset_state(self): + return self.api.Networking.Interfaces.get_stp_protocol_detection_reset_state(self.interfaces) + + +class SelfIPs(object): + """Self IPs class. + + F5 BIG-IP Self IPs class. + + Attributes: + api: iControl API instance. 
+ self_ips: List of self IPs. + """ + + def __init__(self, api, regex=None): + self.api = api + self.self_ips = api.Networking.SelfIPV2.get_list() + if regex: + re_filter = re.compile(regex) + self.self_ips = filter(re_filter.search, self.self_ips) + + def get_list(self): + return self.self_ips + + def get_address(self): + return self.api.Networking.SelfIPV2.get_address(self.self_ips) + + def get_allow_access_list(self): + return self.api.Networking.SelfIPV2.get_allow_access_list(self.self_ips) + + def get_description(self): + return self.api.Networking.SelfIPV2.get_description(self.self_ips) + + def get_enforced_firewall_policy(self): + return self.api.Networking.SelfIPV2.get_enforced_firewall_policy(self.self_ips) + + def get_floating_state(self): + return self.api.Networking.SelfIPV2.get_floating_state(self.self_ips) + + def get_fw_rule(self): + return self.api.Networking.SelfIPV2.get_fw_rule(self.self_ips) + + def get_netmask(self): + return self.api.Networking.SelfIPV2.get_netmask(self.self_ips) + + def get_staged_firewall_policy(self): + return self.api.Networking.SelfIPV2.get_staged_firewall_policy(self.self_ips) + + def get_traffic_group(self): + return self.api.Networking.SelfIPV2.get_traffic_group(self.self_ips) + + def get_vlan(self): + return self.api.Networking.SelfIPV2.get_vlan(self.self_ips) + + def get_is_traffic_group_inherited(self): + return self.api.Networking.SelfIPV2.is_traffic_group_inherited(self.self_ips) + + +class Trunks(object): + """Trunks class. + + F5 BIG-IP trunks class. + + Attributes: + api: iControl API instance. + trunks: List of trunks. + """ + + def __init__(self, api, regex=None): + self.api = api + self.trunks = api.Networking.Trunk.get_list() + if regex: + re_filter = re.compile(regex) + self.trunks = filter(re_filter.search, self.trunks) + + def get_list(self): + return self.trunks + + def get_active_lacp_state(self): + return self.api.Networking.Trunk.get_active_lacp_state(self.trunks) + + def get_configured_member_count(self): + return self.api.Networking.Trunk.get_configured_member_count(self.trunks) + + def get_description(self): + return self.api.Networking.Trunk.get_description(self.trunks) + + def get_distribution_hash_option(self): + return self.api.Networking.Trunk.get_distribution_hash_option(self.trunks) + + def get_interface(self): + return self.api.Networking.Trunk.get_interface(self.trunks) + + def get_lacp_enabled_state(self): + return self.api.Networking.Trunk.get_lacp_enabled_state(self.trunks) + + def get_lacp_timeout_option(self): + return self.api.Networking.Trunk.get_lacp_timeout_option(self.trunks) + + def get_link_selection_policy(self): + return self.api.Networking.Trunk.get_link_selection_policy(self.trunks) + + def get_media_speed(self): + return self.api.Networking.Trunk.get_media_speed(self.trunks) + + def get_media_status(self): + return self.api.Networking.Trunk.get_media_status(self.trunks) + + def get_operational_member_count(self): + return self.api.Networking.Trunk.get_operational_member_count(self.trunks) + + def get_stp_enabled_state(self): + return self.api.Networking.Trunk.get_stp_enabled_state(self.trunks) + + def get_stp_protocol_detection_reset_state(self): + return self.api.Networking.Trunk.get_stp_protocol_detection_reset_state(self.trunks) + + +class Vlans(object): + """Vlans class. + + F5 BIG-IP Vlans class. + + Attributes: + api: iControl API instance. + vlans: List of VLANs. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.vlans = api.Networking.VLAN.get_list() + if regex: + re_filter = re.compile(regex) + self.vlans = filter(re_filter.search, self.vlans) + + def get_list(self): + return self.vlans + + def get_auto_lasthop(self): + return self.api.Networking.VLAN.get_auto_lasthop(self.vlans) + + def get_cmp_hash_algorithm(self): + return self.api.Networking.VLAN.get_cmp_hash_algorithm(self.vlans) + + def get_description(self): + return self.api.Networking.VLAN.get_description(self.vlans) + + def get_dynamic_forwarding(self): + return self.api.Networking.VLAN.get_dynamic_forwarding(self.vlans) + + def get_failsafe_action(self): + return self.api.Networking.VLAN.get_failsafe_action(self.vlans) + + def get_failsafe_state(self): + return self.api.Networking.VLAN.get_failsafe_state(self.vlans) + + def get_failsafe_timeout(self): + return self.api.Networking.VLAN.get_failsafe_timeout(self.vlans) + + def get_if_index(self): + return self.api.Networking.VLAN.get_if_index(self.vlans) + + def get_learning_mode(self): + return self.api.Networking.VLAN.get_learning_mode(self.vlans) + + def get_mac_masquerade_address(self): + return self.api.Networking.VLAN.get_mac_masquerade_address(self.vlans) + + def get_member(self): + return self.api.Networking.VLAN.get_member(self.vlans) + + def get_mtu(self): + return self.api.Networking.VLAN.get_mtu(self.vlans) + + def get_sflow_poll_interval(self): + return self.api.Networking.VLAN.get_sflow_poll_interval(self.vlans) + + def get_sflow_poll_interval_global(self): + return self.api.Networking.VLAN.get_sflow_poll_interval_global(self.vlans) + + def get_sflow_sampling_rate(self): + return self.api.Networking.VLAN.get_sflow_sampling_rate(self.vlans) + + def get_sflow_sampling_rate_global(self): + return self.api.Networking.VLAN.get_sflow_sampling_rate_global(self.vlans) + + def get_source_check_state(self): + return self.api.Networking.VLAN.get_source_check_state(self.vlans) + + def get_true_mac_address(self): + return self.api.Networking.VLAN.get_true_mac_address(self.vlans) + + def get_vlan_id(self): + return self.api.Networking.VLAN.get_vlan_id(self.vlans) + + +class Software(object): + """Software class. + + F5 BIG-IP software class. + + Attributes: + api: iControl API instance. + """ + + def __init__(self, api): + self.api = api + + def get_all_software_status(self): + return self.api.System.SoftwareManagement.get_all_software_status() + + +class VirtualServers(object): + """Virtual servers class. + + F5 BIG-IP virtual servers class. + + Attributes: + api: iControl API instance. + virtual_servers: List of virtual servers. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.virtual_servers = api.LocalLB.VirtualServer.get_list() + if regex: + re_filter = re.compile(regex) + self.virtual_servers = filter(re_filter.search, self.virtual_servers) + + def get_list(self): + return self.virtual_servers + + def get_actual_hardware_acceleration(self): + return self.api.LocalLB.VirtualServer.get_actual_hardware_acceleration(self.virtual_servers) + + def get_authentication_profile(self): + return self.api.LocalLB.VirtualServer.get_authentication_profile(self.virtual_servers) + + def get_auto_lasthop(self): + return self.api.LocalLB.VirtualServer.get_auto_lasthop(self.virtual_servers) + + def get_bw_controller_policy(self): + return self.api.LocalLB.VirtualServer.get_bw_controller_policy(self.virtual_servers) + + def get_clone_pool(self): + return self.api.LocalLB.VirtualServer.get_clone_pool(self.virtual_servers) + + def get_cmp_enable_mode(self): + return self.api.LocalLB.VirtualServer.get_cmp_enable_mode(self.virtual_servers) + + def get_connection_limit(self): + return self.api.LocalLB.VirtualServer.get_connection_limit(self.virtual_servers) + + def get_connection_mirror_state(self): + return self.api.LocalLB.VirtualServer.get_connection_mirror_state(self.virtual_servers) + + def get_default_pool_name(self): + return self.api.LocalLB.VirtualServer.get_default_pool_name(self.virtual_servers) + + def get_description(self): + return self.api.LocalLB.VirtualServer.get_description(self.virtual_servers) + + def get_destination(self): + return self.api.LocalLB.VirtualServer.get_destination_v2(self.virtual_servers) + + def get_enabled_state(self): + return self.api.LocalLB.VirtualServer.get_enabled_state(self.virtual_servers) + + def get_enforced_firewall_policy(self): + return self.api.LocalLB.VirtualServer.get_enforced_firewall_policy(self.virtual_servers) + + def get_fallback_persistence_profile(self): + return self.api.LocalLB.VirtualServer.get_fallback_persistence_profile(self.virtual_servers) + + def get_fw_rule(self): + return self.api.LocalLB.VirtualServer.get_fw_rule(self.virtual_servers) + + def get_gtm_score(self): + return self.api.LocalLB.VirtualServer.get_gtm_score(self.virtual_servers) + + def get_last_hop_pool(self): + return self.api.LocalLB.VirtualServer.get_last_hop_pool(self.virtual_servers) + + def get_nat64_state(self): + return self.api.LocalLB.VirtualServer.get_nat64_state(self.virtual_servers) + + def get_object_status(self): + return self.api.LocalLB.VirtualServer.get_object_status(self.virtual_servers) + + def get_persistence_profile(self): + return self.api.LocalLB.VirtualServer.get_persistence_profile(self.virtual_servers) + + def get_profile(self): + return self.api.LocalLB.VirtualServer.get_profile(self.virtual_servers) + + def get_protocol(self): + return self.api.LocalLB.VirtualServer.get_protocol(self.virtual_servers) + + def get_rate_class(self): + return self.api.LocalLB.VirtualServer.get_rate_class(self.virtual_servers) + + def get_rate_limit(self): + return self.api.LocalLB.VirtualServer.get_rate_limit(self.virtual_servers) + + def get_rate_limit_destination_mask(self): + return self.api.LocalLB.VirtualServer.get_rate_limit_destination_mask(self.virtual_servers) + + def get_rate_limit_mode(self): + return self.api.LocalLB.VirtualServer.get_rate_limit_mode(self.virtual_servers) + + def get_rate_limit_source_mask(self): + return self.api.LocalLB.VirtualServer.get_rate_limit_source_mask(self.virtual_servers) + + def get_related_rule(self): + return 
self.api.LocalLB.VirtualServer.get_related_rule(self.virtual_servers) + + def get_rule(self): + return self.api.LocalLB.VirtualServer.get_rule(self.virtual_servers) + + def get_security_log_profile(self): + return self.api.LocalLB.VirtualServer.get_security_log_profile(self.virtual_servers) + + def get_snat_pool(self): + return self.api.LocalLB.VirtualServer.get_snat_pool(self.virtual_servers) + + def get_snat_type(self): + return self.api.LocalLB.VirtualServer.get_snat_type(self.virtual_servers) + + def get_source_address(self): + return self.api.LocalLB.VirtualServer.get_source_address(self.virtual_servers) + + def get_source_address_translation_lsn_pool(self): + return self.api.LocalLB.VirtualServer.get_source_address_translation_lsn_pool(self.virtual_servers) + + def get_source_address_translation_snat_pool(self): + return self.api.LocalLB.VirtualServer.get_source_address_translation_snat_pool(self.virtual_servers) + + def get_source_address_translation_type(self): + return self.api.LocalLB.VirtualServer.get_source_address_translation_type(self.virtual_servers) + + def get_source_port_behavior(self): + return self.api.LocalLB.VirtualServer.get_source_port_behavior(self.virtual_servers) + + def get_staged_firewall_policy(self): + return self.api.LocalLB.VirtualServer.get_staged_firewall_policy(self.virtual_servers) + + def get_translate_address_state(self): + return self.api.LocalLB.VirtualServer.get_translate_address_state(self.virtual_servers) + + def get_translate_port_state(self): + return self.api.LocalLB.VirtualServer.get_translate_port_state(self.virtual_servers) + + def get_type(self): + return self.api.LocalLB.VirtualServer.get_type(self.virtual_servers) + + def get_vlan(self): + return self.api.LocalLB.VirtualServer.get_vlan(self.virtual_servers) + + def get_wildmask(self): + return self.api.LocalLB.VirtualServer.get_wildmask(self.virtual_servers) + + +class Pools(object): + """Pools class. + + F5 BIG-IP pools class. + + Attributes: + api: iControl API instance. + pool_names: List of pool names. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.pool_names = api.LocalLB.Pool.get_list() + if regex: + re_filter = re.compile(regex) + self.pool_names = filter(re_filter.search, self.pool_names) + + def get_list(self): + return self.pool_names + + def get_action_on_service_down(self): + return self.api.LocalLB.Pool.get_action_on_service_down(self.pool_names) + + def get_active_member_count(self): + return self.api.LocalLB.Pool.get_active_member_count(self.pool_names) + + def get_aggregate_dynamic_ratio(self): + return self.api.LocalLB.Pool.get_aggregate_dynamic_ratio(self.pool_names) + + def get_allow_nat_state(self): + return self.api.LocalLB.Pool.get_allow_nat_state(self.pool_names) + + def get_allow_snat_state(self): + return self.api.LocalLB.Pool.get_allow_snat_state(self.pool_names) + + def get_client_ip_tos(self): + return self.api.LocalLB.Pool.get_client_ip_tos(self.pool_names) + + def get_client_link_qos(self): + return self.api.LocalLB.Pool.get_client_link_qos(self.pool_names) + + def get_description(self): + return self.api.LocalLB.Pool.get_description(self.pool_names) + + def get_gateway_failsafe_device(self): + return self.api.LocalLB.Pool.get_gateway_failsafe_device(self.pool_names) + + def get_ignore_persisted_weight_state(self): + return self.api.LocalLB.Pool.get_ignore_persisted_weight_state(self.pool_names) + + def get_lb_method(self): + return self.api.LocalLB.Pool.get_lb_method(self.pool_names) + + def get_member(self): + return self.api.LocalLB.Pool.get_member_v2(self.pool_names) + + def get_minimum_active_member(self): + return self.api.LocalLB.Pool.get_minimum_active_member(self.pool_names) + + def get_minimum_up_member(self): + return self.api.LocalLB.Pool.get_minimum_up_member(self.pool_names) + + def get_minimum_up_member_action(self): + return self.api.LocalLB.Pool.get_minimum_up_member_action(self.pool_names) + + def get_minimum_up_member_enabled_state(self): + return self.api.LocalLB.Pool.get_minimum_up_member_enabled_state(self.pool_names) + + def get_monitor_association(self): + return self.api.LocalLB.Pool.get_monitor_association(self.pool_names) + + def get_monitor_instance(self): + return self.api.LocalLB.Pool.get_monitor_instance(self.pool_names) + + def get_object_status(self): + return self.api.LocalLB.Pool.get_object_status(self.pool_names) + + def get_profile(self): + return self.api.LocalLB.Pool.get_profile(self.pool_names) + + def get_queue_depth_limit(self): + return self.api.LocalLB.Pool.get_queue_depth_limit(self.pool_names) + + def get_queue_on_connection_limit_state(self): + return self.api.LocalLB.Pool.get_queue_on_connection_limit_state(self.pool_names) + + def get_queue_time_limit(self): + return self.api.LocalLB.Pool.get_queue_time_limit(self.pool_names) + + def get_reselect_tries(self): + return self.api.LocalLB.Pool.get_reselect_tries(self.pool_names) + + def get_server_ip_tos(self): + return self.api.LocalLB.Pool.get_server_ip_tos(self.pool_names) + + def get_server_link_qos(self): + return self.api.LocalLB.Pool.get_server_link_qos(self.pool_names) + + def get_simple_timeout(self): + return self.api.LocalLB.Pool.get_simple_timeout(self.pool_names) + + def get_slow_ramp_time(self): + return self.api.LocalLB.Pool.get_slow_ramp_time(self.pool_names) + + +class Devices(object): + """Devices class. + + F5 BIG-IP devices class. + + Attributes: + api: iControl API instance. + devices: List of devices. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.devices = api.Management.Device.get_list() + if regex: + re_filter = re.compile(regex) + self.devices = filter(re_filter.search, self.devices) + + def get_list(self): + return self.devices + + def get_active_modules(self): + return self.api.Management.Device.get_active_modules(self.devices) + + def get_base_mac_address(self): + return self.api.Management.Device.get_base_mac_address(self.devices) + + def get_blade_addresses(self): + return self.api.Management.Device.get_blade_addresses(self.devices) + + def get_build(self): + return self.api.Management.Device.get_build(self.devices) + + def get_chassis_id(self): + return self.api.Management.Device.get_chassis_id(self.devices) + + def get_chassis_type(self): + return self.api.Management.Device.get_chassis_type(self.devices) + + def get_comment(self): + return self.api.Management.Device.get_comment(self.devices) + + def get_configsync_address(self): + return self.api.Management.Device.get_configsync_address(self.devices) + + def get_contact(self): + return self.api.Management.Device.get_contact(self.devices) + + def get_description(self): + return self.api.Management.Device.get_description(self.devices) + + def get_edition(self): + return self.api.Management.Device.get_edition(self.devices) + + def get_failover_state(self): + return self.api.Management.Device.get_failover_state(self.devices) + + def get_local_device(self): + return self.api.Management.Device.get_local_device() + + def get_hostname(self): + return self.api.Management.Device.get_hostname(self.devices) + + def get_inactive_modules(self): + return self.api.Management.Device.get_inactive_modules(self.devices) + + def get_location(self): + return self.api.Management.Device.get_location(self.devices) + + def get_management_address(self): + return self.api.Management.Device.get_management_address(self.devices) + + def get_marketing_name(self): + return self.api.Management.Device.get_marketing_name(self.devices) + + def get_multicast_address(self): + return self.api.Management.Device.get_multicast_address(self.devices) + + def get_optional_modules(self): + return self.api.Management.Device.get_optional_modules(self.devices) + + def get_platform_id(self): + return self.api.Management.Device.get_platform_id(self.devices) + + def get_primary_mirror_address(self): + return self.api.Management.Device.get_primary_mirror_address(self.devices) + + def get_product(self): + return self.api.Management.Device.get_product(self.devices) + + def get_secondary_mirror_address(self): + return self.api.Management.Device.get_secondary_mirror_address(self.devices) + + def get_software_version(self): + return self.api.Management.Device.get_software_version(self.devices) + + def get_timelimited_modules(self): + return self.api.Management.Device.get_timelimited_modules(self.devices) + + def get_timezone(self): + return self.api.Management.Device.get_timezone(self.devices) + + def get_unicast_addresses(self): + return self.api.Management.Device.get_unicast_addresses(self.devices) + + +class DeviceGroups(object): + """Device groups class. + + F5 BIG-IP device groups class. + + Attributes: + api: iControl API instance. + device_groups: List of device groups. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.device_groups = api.Management.DeviceGroup.get_list() + if regex: + re_filter = re.compile(regex) + self.device_groups = filter(re_filter.search, self.device_groups) + + def get_list(self): + return self.device_groups + + def get_all_preferred_active(self): + return self.api.Management.DeviceGroup.get_all_preferred_active(self.device_groups) + + def get_autosync_enabled_state(self): + return self.api.Management.DeviceGroup.get_autosync_enabled_state(self.device_groups) + + def get_description(self): + return self.api.Management.DeviceGroup.get_description(self.device_groups) + + def get_device(self): + return self.api.Management.DeviceGroup.get_device(self.device_groups) + + def get_full_load_on_sync_state(self): + return self.api.Management.DeviceGroup.get_full_load_on_sync_state(self.device_groups) + + def get_incremental_config_sync_size_maximum(self): + return self.api.Management.DeviceGroup.get_incremental_config_sync_size_maximum(self.device_groups) + + def get_network_failover_enabled_state(self): + return self.api.Management.DeviceGroup.get_network_failover_enabled_state(self.device_groups) + + def get_sync_status(self): + return self.api.Management.DeviceGroup.get_sync_status(self.device_groups) + + def get_type(self): + return self.api.Management.DeviceGroup.get_type(self.device_groups) + + +class TrafficGroups(object): + """Traffic groups class. + + F5 BIG-IP traffic groups class. + + Attributes: + api: iControl API instance. + traffic_groups: List of traffic groups. + """ + + def __init__(self, api, regex=None): + self.api = api + self.traffic_groups = api.Management.TrafficGroup.get_list() + if regex: + re_filter = re.compile(regex) + self.traffic_groups = filter(re_filter.search, self.traffic_groups) + + def get_list(self): + return self.traffic_groups + + def get_auto_failback_enabled_state(self): + return self.api.Management.TrafficGroup.get_auto_failback_enabled_state(self.traffic_groups) + + def get_auto_failback_time(self): + return self.api.Management.TrafficGroup.get_auto_failback_time(self.traffic_groups) + + def get_default_device(self): + return self.api.Management.TrafficGroup.get_default_device(self.traffic_groups) + + def get_description(self): + return self.api.Management.TrafficGroup.get_description(self.traffic_groups) + + def get_ha_load_factor(self): + return self.api.Management.TrafficGroup.get_ha_load_factor(self.traffic_groups) + + def get_ha_order(self): + return self.api.Management.TrafficGroup.get_ha_order(self.traffic_groups) + + def get_is_floating(self): + return self.api.Management.TrafficGroup.get_is_floating(self.traffic_groups) + + def get_mac_masquerade_address(self): + return self.api.Management.TrafficGroup.get_mac_masquerade_address(self.traffic_groups) + + def get_unit_id(self): + return self.api.Management.TrafficGroup.get_unit_id(self.traffic_groups) + + +class Rules(object): + """Rules class. + + F5 BIG-IP iRules class. + + Attributes: + api: iControl API instance. + rules: List of iRules. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.rules = api.LocalLB.Rule.get_list() + if regex: + re_filter = re.compile(regex) + self.traffic_groups = filter(re_filter.search, self.rules) + + def get_list(self): + return self.rules + + def get_description(self): + return self.api.LocalLB.Rule.get_description(rule_names=self.rules) + + def get_ignore_vertification(self): + return self.api.LocalLB.Rule.get_ignore_vertification(rule_names=self.rules) + + def get_verification_status(self): + return self.api.LocalLB.Rule.get_verification_status_v2(rule_names=self.rules) + + def get_definition(self): + return [x['rule_definition'] for x in self.api.LocalLB.Rule.query_rule(rule_names=self.rules)] + +class Nodes(object): + """Nodes class. + + F5 BIG-IP nodes class. + + Attributes: + api: iControl API instance. + nodes: List of nodes. + """ + + def __init__(self, api, regex=None): + self.api = api + self.nodes = api.LocalLB.NodeAddressV2.get_list() + if regex: + re_filter = re.compile(regex) + self.nodes = filter(re_filter.search, self.nodes) + + def get_list(self): + return self.nodes + + def get_address(self): + return self.api.LocalLB.NodeAddressV2.get_address(nodes=self.nodes) + + def get_connection_limit(self): + return self.api.LocalLB.NodeAddressV2.get_connection_limit(nodes=self.nodes) + + def get_description(self): + return self.api.LocalLB.NodeAddressV2.get_description(nodes=self.nodes) + + def get_dynamic_ratio(self): + return self.api.LocalLB.NodeAddressV2.get_dynamic_ratio_v2(nodes=self.nodes) + + def get_monitor_instance(self): + return self.api.LocalLB.NodeAddressV2.get_monitor_instance(nodes=self.nodes) + + def get_monitor_rule(self): + return self.api.LocalLB.NodeAddressV2.get_monitor_rule(nodes=self.nodes) + + def get_monitor_status(self): + return self.api.LocalLB.NodeAddressV2.get_monitor_status(nodes=self.nodes) + + def get_object_status(self): + return self.api.LocalLB.NodeAddressV2.get_object_status(nodes=self.nodes) + + def get_rate_limit(self): + return self.api.LocalLB.NodeAddressV2.get_rate_limit(nodes=self.nodes) + + def get_ratio(self): + return self.api.LocalLB.NodeAddressV2.get_ratio(nodes=self.nodes) + + def get_session_status(self): + return self.api.LocalLB.NodeAddressV2.get_session_status(nodes=self.nodes) + + +class VirtualAddresses(object): + """Virtual addresses class. + + F5 BIG-IP virtual addresses class. + + Attributes: + api: iControl API instance. + virtual_addresses: List of virtual addresses. 
+ """ + + def __init__(self, api, regex=None): + self.api = api + self.virtual_addresses = api.LocalLB.VirtualAddressV2.get_list() + if regex: + re_filter = re.compile(regex) + self.virtual_addresses = filter(re_filter.search, self.virtual_addresses) + + def get_list(self): + return self.virtual_addresses + + def get_address(self): + return self.api.LocalLB.VirtualAddressV2.get_address(self.virtual_addresses) + + def get_arp_state(self): + return self.api.LocalLB.VirtualAddressV2.get_arp_state(self.virtual_addresses) + + def get_auto_delete_state(self): + return self.api.LocalLB.VirtualAddressV2.get_auto_delete_state(self.virtual_addresses) + + def get_connection_limit(self): + return self.api.LocalLB.VirtualAddressV2.get_connection_limit(self.virtual_addresses) + + def get_description(self): + return self.api.LocalLB.VirtualAddressV2.get_description(self.virtual_addresses) + + def get_enabled_state(self): + return self.api.LocalLB.VirtualAddressV2.get_enabled_state(self.virtual_addresses) + + def get_icmp_echo_state(self): + return self.api.LocalLB.VirtualAddressV2.get_icmp_echo_state(self.virtual_addresses) + + def get_is_floating_state(self): + return self.api.LocalLB.VirtualAddressV2.get_is_floating_state(self.virtual_addresses) + + def get_netmask(self): + return self.api.LocalLB.VirtualAddressV2.get_netmask(self.virtual_addresses) + + def get_object_status(self): + return self.api.LocalLB.VirtualAddressV2.get_object_status(self.virtual_addresses) + + def get_route_advertisement_state(self): + return self.api.LocalLB.VirtualAddressV2.get_route_advertisement_state(self.virtual_addresses) + + def get_traffic_group(self): + return self.api.LocalLB.VirtualAddressV2.get_traffic_group(self.virtual_addresses) + + +class AddressClasses(object): + """Address group/class class. + + F5 BIG-IP address group/class class. + + Attributes: + api: iControl API instance. + address_classes: List of address classes. + """ + + def __init__(self, api, regex=None): + self.api = api + self.address_classes = api.LocalLB.Class.get_address_class_list() + if regex: + re_filter = re.compile(regex) + self.address_classes = filter(re_filter.search, self.address_classes) + + def get_list(self): + return self.address_classes + + def get_address_class(self): + key = self.api.LocalLB.Class.get_address_class(self.address_classes) + value = self.api.LocalLB.Class.get_address_class_member_data_value(key) + result = map(zip, [x['members'] for x in key], value) + return result + + def get_description(self): + return self.api.LocalLB.Class.get_description(self.address_classes) + + +class Certificates(object): + """Certificates class. + + F5 BIG-IP certificates class. + + Attributes: + api: iControl API instance. + certificates: List of certificate identifiers. + certificate_list: List of certificate information structures. + """ + + def __init__(self, api, regex=None, mode="MANAGEMENT_MODE_DEFAULT"): + self.api = api + self.certificate_list = api.Management.KeyCertificate.get_certificate_list(mode=mode) + self.certificates = [x['certificate']['cert_info']['id'] for x in self.certificate_list] + if regex: + re_filter = re.compile(regex) + self.certificates = filter(re_filter.search, self.certificates) + self.certificate_list = [x for x in self.certificate_list if x['certificate']['cert_info']['id'] in self.certificates] + + def get_list(self): + return self.certificates + + def get_certificate_list(self): + return self.certificate_list + + +class Keys(object): + """Keys class. + + F5 BIG-IP keys class. 
+ + Attributes: + api: iControl API instance. + keys: List of key identifiers. + key_list: List of key information structures. + """ + + def __init__(self, api, regex=None, mode="MANAGEMENT_MODE_DEFAULT"): + self.api = api + self.key_list = api.Management.KeyCertificate.get_key_list(mode=mode) + self.keys = [x['key_info']['id'] for x in self.key_list] + if regex: + re_filter = re.compile(regex) + self.keys = filter(re_filter.search, self.keys) + self.key_list = [x for x in self.key_list if x['key_info']['id'] in self.keys] + + def get_list(self): + return self.keys + + def get_key_list(self): + return self.key_list + + +class ProfileClientSSL(object): + """Client SSL profiles class. + + F5 BIG-IP client SSL profiles class. + + Attributes: + api: iControl API instance. + profiles: List of client SSL profiles. + """ + + def __init__(self, api, regex=None): + self.api = api + self.profiles = api.LocalLB.ProfileClientSSL.get_list() + if regex: + re_filter = re.compile(regex) + self.profiles = filter(re_filter.search, self.profiles) + + def get_list(self): + return self.profiles + + def get_alert_timeout(self): + return self.api.LocalLB.ProfileClientSSL.get_alert_timeout(self.profiles) + + def get_allow_nonssl_state(self): + return self.api.LocalLB.ProfileClientSSL.get_allow_nonssl_state(self.profiles) + + def get_authenticate_depth(self): + return self.api.LocalLB.ProfileClientSSL.get_authenticate_depth(self.profiles) + + def get_authenticate_once_state(self): + return self.api.LocalLB.ProfileClientSSL.get_authenticate_once_state(self.profiles) + + def get_ca_file(self): + return self.api.LocalLB.ProfileClientSSL.get_ca_file_v2(self.profiles) + + def get_cache_size(self): + return self.api.LocalLB.ProfileClientSSL.get_cache_size(self.profiles) + + def get_cache_timeout(self): + return self.api.LocalLB.ProfileClientSSL.get_cache_timeout(self.profiles) + + def get_certificate_file(self): + return self.api.LocalLB.ProfileClientSSL.get_certificate_file_v2(self.profiles) + + def get_chain_file(self): + return self.api.LocalLB.ProfileClientSSL.get_chain_file_v2(self.profiles) + + def get_cipher_list(self): + return self.api.LocalLB.ProfileClientSSL.get_cipher_list(self.profiles) + + def get_client_certificate_ca_file(self): + return self.api.LocalLB.ProfileClientSSL.get_client_certificate_ca_file_v2(self.profiles) + + def get_crl_file(self): + return self.api.LocalLB.ProfileClientSSL.get_crl_file_v2(self.profiles) + + def get_default_profile(self): + return self.api.LocalLB.ProfileClientSSL.get_default_profile(self.profiles) + + def get_description(self): + return self.api.LocalLB.ProfileClientSSL.get_description(self.profiles) + + def get_forward_proxy_ca_certificate_file(self): + return self.api.LocalLB.ProfileClientSSL.get_forward_proxy_ca_certificate_file(self.profiles) + + def get_forward_proxy_ca_key_file(self): + return self.api.LocalLB.ProfileClientSSL.get_forward_proxy_ca_key_file(self.profiles) + + def get_forward_proxy_ca_passphrase(self): + return self.api.LocalLB.ProfileClientSSL.get_forward_proxy_ca_passphrase(self.profiles) + + def get_forward_proxy_certificate_extension_include(self): + return self.api.LocalLB.ProfileClientSSL.get_forward_proxy_certificate_extension_include(self.profiles) + + def get_forward_proxy_certificate_lifespan(self): + return self.api.LocalLB.ProfileClientSSL.get_forward_proxy_certificate_lifespan(self.profiles) + + def get_forward_proxy_enabled_state(self): + return self.api.LocalLB.ProfileClientSSL.get_forward_proxy_enabled_state(self.profiles) + + def 
get_forward_proxy_lookup_by_ipaddr_port_state(self): + return self.api.LocalLB.ProfileClientSSL.get_forward_proxy_lookup_by_ipaddr_port_state(self.profiles) + + def get_handshake_timeout(self): + return self.api.LocalLB.ProfileClientSSL.get_handshake_timeout(self.profiles) + + def get_key_file(self): + return self.api.LocalLB.ProfileClientSSL.get_key_file_v2(self.profiles) + + def get_modssl_emulation_state(self): + return self.api.LocalLB.ProfileClientSSL.get_modssl_emulation_state(self.profiles) + + def get_passphrase(self): + return self.api.LocalLB.ProfileClientSSL.get_passphrase(self.profiles) + + def get_peer_certification_mode(self): + return self.api.LocalLB.ProfileClientSSL.get_peer_certification_mode(self.profiles) + + def get_profile_mode(self): + return self.api.LocalLB.ProfileClientSSL.get_profile_mode(self.profiles) + + def get_renegotiation_maximum_record_delay(self): + return self.api.LocalLB.ProfileClientSSL.get_renegotiation_maximum_record_delay(self.profiles) + + def get_renegotiation_period(self): + return self.api.LocalLB.ProfileClientSSL.get_renegotiation_period(self.profiles) + + def get_renegotiation_state(self): + return self.api.LocalLB.ProfileClientSSL.get_renegotiation_state(self.profiles) + + def get_renegotiation_throughput(self): + return self.api.LocalLB.ProfileClientSSL.get_renegotiation_throughput(self.profiles) + + def get_retain_certificate_state(self): + return self.api.LocalLB.ProfileClientSSL.get_retain_certificate_state(self.profiles) + + def get_secure_renegotiation_mode(self): + return self.api.LocalLB.ProfileClientSSL.get_secure_renegotiation_mode(self.profiles) + + def get_server_name(self): + return self.api.LocalLB.ProfileClientSSL.get_server_name(self.profiles) + + def get_session_ticket_state(self): + return self.api.LocalLB.ProfileClientSSL.get_session_ticket_state(self.profiles) + + def get_sni_default_state(self): + return self.api.LocalLB.ProfileClientSSL.get_sni_default_state(self.profiles) + + def get_sni_require_state(self): + return self.api.LocalLB.ProfileClientSSL.get_sni_require_state(self.profiles) + + def get_ssl_option(self): + return self.api.LocalLB.ProfileClientSSL.get_ssl_option(self.profiles) + + def get_strict_resume_state(self): + return self.api.LocalLB.ProfileClientSSL.get_strict_resume_state(self.profiles) + + def get_unclean_shutdown_state(self): + return self.api.LocalLB.ProfileClientSSL.get_unclean_shutdown_state(self.profiles) + + def get_is_base_profile(self): + return self.api.LocalLB.ProfileClientSSL.is_base_profile(self.profiles) + + def get_is_system_profile(self): + return self.api.LocalLB.ProfileClientSSL.is_system_profile(self.profiles) + + +class SystemInfo(object): + """System information class. + + F5 BIG-IP system information class. + + Attributes: + api: iControl API instance. 
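+
+    Hypothetical usage sketch (assumes ``api`` is an authenticated
+    bigsuds connection; SystemInfo takes no regex filter):
+
+        info = SystemInfo(api)
+        uptime = info.get_uptime()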
+ """ + + def __init__(self, api): + self.api = api + + def get_base_mac_address(self): + return self.api.System.SystemInfo.get_base_mac_address() + + def get_blade_temperature(self): + return self.api.System.SystemInfo.get_blade_temperature() + + def get_chassis_slot_information(self): + return self.api.System.SystemInfo.get_chassis_slot_information() + + def get_globally_unique_identifier(self): + return self.api.System.SystemInfo.get_globally_unique_identifier() + + def get_group_id(self): + return self.api.System.SystemInfo.get_group_id() + + def get_hardware_information(self): + return self.api.System.SystemInfo.get_hardware_information() + + def get_marketing_name(self): + return self.api.System.SystemInfo.get_marketing_name() + + def get_product_information(self): + return self.api.System.SystemInfo.get_product_information() + + def get_pva_version(self): + return self.api.System.SystemInfo.get_pva_version() + + def get_system_id(self): + return self.api.System.SystemInfo.get_system_id() + + def get_system_information(self): + return self.api.System.SystemInfo.get_system_information() + + def get_time(self): + return self.api.System.SystemInfo.get_time() + + def get_time_zone(self): + return self.api.System.SystemInfo.get_time_zone() + + def get_uptime(self): + return self.api.System.SystemInfo.get_uptime() + + +def generate_dict(api_obj, fields): + result_dict = {} + lists = [] + supported_fields = [] + if api_obj.get_list(): + for field in fields: + try: + api_response = getattr(api_obj, "get_" + field)() + except MethodNotFound: + pass + else: + lists.append(api_response) + supported_fields.append(field) + for i, j in enumerate(api_obj.get_list()): + temp = {} + temp.update([(item[0], item[1][i]) for item in zip(supported_fields, lists)]) + result_dict[j] = temp + return result_dict + +def generate_simple_dict(api_obj, fields): + result_dict = {} + for field in fields: + try: + api_response = getattr(api_obj, "get_" + field)() + except MethodNotFound: + pass + else: + result_dict[field] = api_response + return result_dict + +def generate_interface_dict(f5, regex): + interfaces = Interfaces(f5.get_api(), regex) + fields = ['active_media', 'actual_flow_control', 'bundle_state', + 'description', 'dual_media_state', 'enabled_state', 'if_index', + 'learning_mode', 'lldp_admin_status', 'lldp_tlvmap', + 'mac_address', 'media', 'media_option', 'media_option_sfp', + 'media_sfp', 'media_speed', 'media_status', 'mtu', + 'phy_master_slave_mode', 'prefer_sfp_state', 'flow_control', + 'sflow_poll_interval', 'sflow_poll_interval_global', + 'sfp_media_state', 'stp_active_edge_port_state', + 'stp_enabled_state', 'stp_link_type', + 'stp_protocol_detection_reset_state'] + return generate_dict(interfaces, fields) + +def generate_self_ip_dict(f5, regex): + self_ips = SelfIPs(f5.get_api(), regex) + fields = ['address', 'allow_access_list', 'description', + 'enforced_firewall_policy', 'floating_state', 'fw_rule', + 'netmask', 'staged_firewall_policy', 'traffic_group', + 'vlan', 'is_traffic_group_inherited'] + return generate_dict(self_ips, fields) + +def generate_trunk_dict(f5, regex): + trunks = Trunks(f5.get_api(), regex) + fields = ['active_lacp_state', 'configured_member_count', 'description', + 'distribution_hash_option', 'interface', 'lacp_enabled_state', + 'lacp_timeout_option', 'link_selection_policy', 'media_speed', + 'media_status', 'operational_member_count', 'stp_enabled_state', + 'stp_protocol_detection_reset_state'] + return generate_dict(trunks, fields) + +def generate_vlan_dict(f5, 
regex): + vlans = Vlans(f5.get_api(), regex) + fields = ['auto_lasthop', 'cmp_hash_algorithm', 'description', + 'dynamic_forwarding', 'failsafe_action', 'failsafe_state', + 'failsafe_timeout', 'if_index', 'learning_mode', + 'mac_masquerade_address', 'member', 'mtu', + 'sflow_poll_interval', 'sflow_poll_interval_global', + 'sflow_sampling_rate', 'sflow_sampling_rate_global', + 'source_check_state', 'true_mac_address', 'vlan_id'] + return generate_dict(vlans, fields) + +def generate_vs_dict(f5, regex): + virtual_servers = VirtualServers(f5.get_api(), regex) + fields = ['actual_hardware_acceleration', 'authentication_profile', + 'auto_lasthop', 'bw_controller_policy', 'clone_pool', + 'cmp_enable_mode', 'connection_limit', 'connection_mirror_state', + 'default_pool_name', 'description', 'destination', + 'enabled_state', 'enforced_firewall_policy', + 'fallback_persistence_profile', 'fw_rule', 'gtm_score', + 'last_hop_pool', 'nat64_state', 'object_status', + 'persistence_profile', 'profile', 'protocol', + 'rate_class', 'rate_limit', 'rate_limit_destination_mask', + 'rate_limit_mode', 'rate_limit_source_mask', 'related_rule', + 'rule', 'security_log_profile', 'snat_pool', 'snat_type', + 'source_address', 'source_address_translation_lsn_pool', + 'source_address_translation_snat_pool', + 'source_address_translation_type', 'source_port_behavior', + 'staged_firewall_policy', 'translate_address_state', + 'translate_port_state', 'type', 'vlan', 'wildmask'] + return generate_dict(virtual_servers, fields) + +def generate_pool_dict(f5, regex): + pools = Pools(f5.get_api(), regex) + fields = ['action_on_service_down', 'active_member_count', + 'aggregate_dynamic_ratio', 'allow_nat_state', + 'allow_snat_state', 'client_ip_tos', 'client_link_qos', + 'description', 'gateway_failsafe_device', + 'ignore_persisted_weight_state', 'lb_method', 'member', + 'minimum_active_member', 'minimum_up_member', + 'minimum_up_member_action', 'minimum_up_member_enabled_state', + 'monitor_association', 'monitor_instance', 'object_status', + 'profile', 'queue_depth_limit', + 'queue_on_connection_limit_state', 'queue_time_limit', + 'reselect_tries', 'server_ip_tos', 'server_link_qos', + 'simple_timeout', 'slow_ramp_time'] + return generate_dict(pools, fields) + +def generate_device_dict(f5, regex): + devices = Devices(f5.get_api(), regex) + fields = ['active_modules', 'base_mac_address', 'blade_addresses', + 'build', 'chassis_id', 'chassis_type', 'comment', + 'configsync_address', 'contact', 'description', 'edition', + 'failover_state', 'hostname', 'inactive_modules', 'location', + 'management_address', 'marketing_name', 'multicast_address', + 'optional_modules', 'platform_id', 'primary_mirror_address', + 'product', 'secondary_mirror_address', 'software_version', + 'timelimited_modules', 'timezone', 'unicast_addresses'] + return generate_dict(devices, fields) + +def generate_device_group_dict(f5, regex): + device_groups = DeviceGroups(f5.get_api(), regex) + fields = ['all_preferred_active', 'autosync_enabled_state','description', + 'device', 'full_load_on_sync_state', + 'incremental_config_sync_size_maximum', + 'network_failover_enabled_state', 'sync_status', 'type'] + return generate_dict(device_groups, fields) + +def generate_traffic_group_dict(f5, regex): + traffic_groups = TrafficGroups(f5.get_api(), regex) + fields = ['auto_failback_enabled_state', 'auto_failback_time', + 'default_device', 'description', 'ha_load_factor', + 'ha_order', 'is_floating', 'mac_masquerade_address', + 'unit_id'] + return 
generate_dict(traffic_groups, fields) + +def generate_rule_dict(f5, regex): + rules = Rules(f5.get_api(), regex) + fields = ['definition', 'description', 'ignore_vertification', + 'verification_status'] + return generate_dict(rules, fields) + +def generate_node_dict(f5, regex): + nodes = Nodes(f5.get_api(), regex) + fields = ['address', 'connection_limit', 'description', 'dynamic_ratio', + 'monitor_instance', 'monitor_rule', 'monitor_status', + 'object_status', 'rate_limit', 'ratio', 'session_status'] + return generate_dict(nodes, fields) + +def generate_virtual_address_dict(f5, regex): + virtual_addresses = VirtualAddresses(f5.get_api(), regex) + fields = ['address', 'arp_state', 'auto_delete_state', 'connection_limit', + 'description', 'enabled_state', 'icmp_echo_state', + 'is_floating_state', 'netmask', 'object_status', + 'route_advertisement_state', 'traffic_group'] + return generate_dict(virtual_addresses, fields) + +def generate_address_class_dict(f5, regex): + address_classes = AddressClasses(f5.get_api(), regex) + fields = ['address_class', 'description'] + return generate_dict(address_classes, fields) + +def generate_certificate_dict(f5, regex): + certificates = Certificates(f5.get_api(), regex) + return dict(zip(certificates.get_list(), certificates.get_certificate_list())) + +def generate_key_dict(f5, regex): + keys = Keys(f5.get_api(), regex) + return dict(zip(keys.get_list(), keys.get_key_list())) + +def generate_client_ssl_profile_dict(f5, regex): + profiles = ProfileClientSSL(f5.get_api(), regex) + fields = ['alert_timeout', 'allow_nonssl_state', 'authenticate_depth', + 'authenticate_once_state', 'ca_file', 'cache_size', + 'cache_timeout', 'certificate_file', 'chain_file', + 'cipher_list', 'client_certificate_ca_file', 'crl_file', + 'default_profile', 'description', + 'forward_proxy_ca_certificate_file', 'forward_proxy_ca_key_file', + 'forward_proxy_ca_passphrase', + 'forward_proxy_certificate_extension_include', + 'forward_proxy_certificate_lifespan', + 'forward_proxy_enabled_state', + 'forward_proxy_lookup_by_ipaddr_port_state', 'handshake_timeout', + 'key_file', 'modssl_emulation_state', 'passphrase', + 'peer_certification_mode', 'profile_mode', + 'renegotiation_maximum_record_delay', 'renegotiation_period', + 'renegotiation_state', 'renegotiation_throughput', + 'retain_certificate_state', 'secure_renegotiation_mode', + 'server_name', 'session_ticket_state', 'sni_default_state', + 'sni_require_state', 'ssl_option', 'strict_resume_state', + 'unclean_shutdown_state', 'is_base_profile', 'is_system_profile'] + return generate_dict(profiles, fields) + +def generate_system_info_dict(f5): + system_info = SystemInfo(f5.get_api()) + fields = ['base_mac_address', + 'blade_temperature', 'chassis_slot_information', + 'globally_unique_identifier', 'group_id', + 'hardware_information', + 'marketing_name', + 'product_information', 'pva_version', 'system_id', + 'system_information', 'time', + 'time_zone', 'uptime'] + return generate_simple_dict(system_info, fields) + +def generate_software_list(f5): + software = Software(f5.get_api()) + software_list = software.get_all_software_status() + return software_list + + +def main(): + module = AnsibleModule( + argument_spec = dict( + server = dict(type='str', required=True), + user = dict(type='str', required=True), + password = dict(type='str', required=True), + session = dict(type='bool', default=False), + include = dict(type='list', required=True), + filter = dict(type='str', required=False), + ) + ) + + if not bigsuds_found: + 
module.fail_json(msg="the python bigsuds module is required") + + server = module.params['server'] + user = module.params['user'] + password = module.params['password'] + session = module.params['session'] + fact_filter = module.params['filter'] + if fact_filter: + regex = fnmatch.translate(fact_filter) + else: + regex = None + include = map(lambda x: x.lower(), module.params['include']) + valid_includes = ('address_class', 'certificate', 'client_ssl_profile', + 'device_group', 'interface', 'key', 'node', 'pool', + 'rule', 'self_ip', 'software', 'system_info', + 'traffic_group', 'trunk', 'virtual_address', + 'virtual_server', 'vlan') + include_test = map(lambda x: x in valid_includes, include) + if not all(include_test): + module.fail_json(msg="value of include must be one or more of: %s, got: %s" % (",".join(valid_includes), ",".join(include))) + + try: + facts = {} + + if len(include) > 0: + f5 = F5(server, user, password, session) + saved_active_folder = f5.get_active_folder() + saved_recursive_query_state = f5.get_recursive_query_state() + if saved_active_folder != "/": + f5.set_active_folder("/") + if saved_recursive_query_state != "STATE_ENABLED": + f5.enable_recursive_query_state() + + if 'interface' in include: + facts['interface'] = generate_interface_dict(f5, regex) + if 'self_ip' in include: + facts['self_ip'] = generate_self_ip_dict(f5, regex) + if 'trunk' in include: + facts['trunk'] = generate_trunk_dict(f5, regex) + if 'vlan' in include: + facts['vlan'] = generate_vlan_dict(f5, regex) + if 'virtual_server' in include: + facts['virtual_server'] = generate_vs_dict(f5, regex) + if 'pool' in include: + facts['pool'] = generate_pool_dict(f5, regex) + if 'device' in include: + facts['device'] = generate_device_dict(f5, regex) + if 'device_group' in include: + facts['device_group'] = generate_device_group_dict(f5, regex) + if 'traffic_group' in include: + facts['traffic_group'] = generate_traffic_group_dict(f5, regex) + if 'rule' in include: + facts['rule'] = generate_rule_dict(f5, regex) + if 'node' in include: + facts['node'] = generate_node_dict(f5, regex) + if 'virtual_address' in include: + facts['virtual_address'] = generate_virtual_address_dict(f5, regex) + if 'address_class' in include: + facts['address_class'] = generate_address_class_dict(f5, regex) + if 'software' in include: + facts['software'] = generate_software_list(f5) + if 'certificate' in include: + facts['certificate'] = generate_certificate_dict(f5, regex) + if 'key' in include: + facts['key'] = generate_key_dict(f5, regex) + if 'client_ssl_profile' in include: + facts['client_ssl_profile'] = generate_client_ssl_profile_dict(f5, regex) + if 'system_info' in include: + facts['system_info'] = generate_system_info_dict(f5) + + # restore saved state + if saved_active_folder and saved_active_folder != "/": + f5.set_active_folder(saved_active_folder) + if saved_recursive_query_state and \ + saved_recursive_query_state != "STATE_ENABLED": + f5.set_recursive_query_state(saved_recursive_query_state) + + result = {'ansible_facts': facts} + + except Exception, e: + module.fail_json(msg="received exception: %s\ntraceback: %s" % (e, traceback.format_exc())) + + module.exit_json(**result) + +# include magic from lib/ansible/module_common.py +#<> +main() + diff --git a/library/net_infrastructure/dnsimple b/library/net_infrastructure/dnsimple new file mode 100755 index 00000000000..5bb53198945 --- /dev/null +++ b/library/net_infrastructure/dnsimple @@ -0,0 +1,302 @@ +#!/usr/bin/python +# This file is part of Ansible +# +# 
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+
+DOCUMENTATION = '''
+---
+module: dnsimple
+version_added: "1.6"
+short_description: Interface with dnsimple.com (a DNS hosting service).
+description:
+   - "Manages domains and records via the DNSimple API, see the docs: U(http://developer.dnsimple.com/)"
+options:
+  account_email:
+    description:
+      - "Account email. If omitted, the env variables DNSIMPLE_EMAIL and DNSIMPLE_API_TOKEN will be looked for. If those aren't found, a C(.dnsimple) file will be looked for, see: U(https://github.com/mikemaccana/dnsimple-python#getting-started)"
+    required: false
+    default: null
+
+  account_api_token:
+    description:
+      - Account API token. See I(account_email) for info.
+    required: false
+    default: null
+
+  domain:
+    description:
+      - Domain to work with. Can be the domain name (e.g. "mydomain.com") or the numeric ID of the domain in DNSimple. If omitted, a list of domains will be returned.
+      - If domain is present but the domain doesn't exist, it will be created.
+    required: false
+    default: null
+
+  record:
+    description:
+      - Record to add, if blank a record for the domain will be created, supports the wildcard (*)
+    required: false
+    default: null
+
+  record_ids:
+    description:
+      - List of records to ensure they either exist or don't exist
+    required: false
+    default: null
+
+  type:
+    description:
+      - The type of DNS record to create
+    required: false
+    choices: [ 'A', 'ALIAS', 'CNAME', 'MX', 'SPF', 'URL', 'TXT', 'NS', 'SRV', 'NAPTR', 'PTR', 'AAAA', 'SSHFP', 'HINFO', 'POOL' ]
+    default: null
+
+  ttl:
+    description:
+      - The TTL to give the new record
+    required: false
+    default: 3600 (one hour)
+
+  value:
+    description:
+      - Record value
+      - "Must be specified when trying to ensure a record exists"
+    required: false
+    default: null
+
+  priority:
+    description:
+      - Record priority
+    required: false
+    default: null
+
+  state:
+    description:
+      - whether the record should exist or not
+    required: false
+    choices: [ 'present', 'absent' ]
+    default: null
+
+  solo:
+    description:
+      - Whether the record should be the only one for that record type and record name. Only use with state=present on a record
+    required: false
+    default: null
+
+requirements: [ dnsimple ]
+author: Alex Coomans
+'''
+
+EXAMPLES = '''
+# authenticate using email and API token
+- local_action: dnsimple account_email=test@example.com account_api_token=dummyapitoken
+
+# fetch all domains
+- local_action: dnsimple
+  register: domains
+
+# fetch my.com domain records
+- local_action: dnsimple domain=my.com state=present
+  register: records
+
+# delete a domain
+- local_action: dnsimple domain=my.com state=absent
+
+# create a test.my.com A record to point to 127.0.0.1
+- local_action: dnsimple domain=my.com record=test type=A value=127.0.0.1
+  register: record
+
+# and then delete it
+- local_action: dnsimple domain=my.com record_ids={{ record['id'] }} state=absent
+
+# create a my.com CNAME record to example.com
+- local_action: dnsimple domain=my.com record= type=CNAME value=example.com state=present
+
+# change its ttl
+- local_action: dnsimple domain=my.com record= type=CNAME value=example.com ttl=600 state=present
+
+# and delete the record
+- local_action: dnsimple domain=my.com record= type=CNAME value=example.com state=absent
+
+'''
+
+import os
+import sys
+try:
+    from dnsimple import DNSimple
+    from dnsimple.dnsimple import DNSimpleException
+except ImportError:
+    print "failed=True msg='dnsimple required for this module'"
+    sys.exit(1)
+
+def main():
+    module = AnsibleModule(
+        argument_spec = dict(
+            account_email = dict(required=False),
+            account_api_token = dict(required=False, no_log=True),
+            domain = dict(required=False),
+            record = dict(required=False),
+            record_ids = dict(required=False, type='list'),
+            type = dict(required=False, choices=['A', 'ALIAS', 'CNAME', 'MX', 'SPF', 'URL', 'TXT', 'NS', 'SRV', 'NAPTR', 'PTR', 'AAAA', 'SSHFP', 'HINFO', 'POOL']),
+            ttl = dict(required=False, default=3600, type='int'),
+            value = dict(required=False),
+            priority = dict(required=False, type='int'),
+            state = dict(required=False, choices=['present', 'absent']),
+            solo = dict(required=False, type='bool'),
+        ),
+        required_together = [
+            ['record', 'value'],
+        ],
+        supports_check_mode = True,
+    )
+
+    account_email = module.params.get('account_email')
+    account_api_token = module.params.get('account_api_token')
+    domain = module.params.get('domain')
+    record = module.params.get('record')
+    record_ids = module.params.get('record_ids')
+    record_type = module.params.get('type')
+    ttl = module.params.get('ttl')
+    value = module.params.get('value')
+    priority = module.params.get('priority')
+    state = module.params.get('state')
+    is_solo = module.params.get('solo')
+
+    if account_email and account_api_token:
+        client = DNSimple(email=account_email, api_token=account_api_token)
+    elif os.environ.get('DNSIMPLE_EMAIL') and os.environ.get('DNSIMPLE_API_TOKEN'):
+        client = DNSimple(email=os.environ.get('DNSIMPLE_EMAIL'), api_token=os.environ.get('DNSIMPLE_API_TOKEN'))
+    else:
+        client = DNSimple()
+
+    try:
+        # Let's figure out what operation we want to do
+
+        # No domain, return a list
+        if not domain:
+            domains = client.domains()
+            module.exit_json(changed=False, result=[d['domain'] for d in domains])
+
+        # Domain & No record
+        if domain and record is None and not record_ids:
+            domains = [d['domain'] for d in client.domains()]
+            if domain.isdigit():
+                dr = next((d for d in domains if d['id'] == int(domain)), None)
+            else:
+                dr = next((d for d in domains if d['name'] == domain), None)
+            if state == 'present':
+                if dr:
+                    module.exit_json(changed=False, result=dr)
+                else:
+                    if module.check_mode:
+                        module.exit_json(changed=True)
+                    else:
+                        module.exit_json(changed=True, result=client.add_domain(domain)['domain'])
+            elif state == 'absent':
+                if dr:
+                    if not module.check_mode:
+                        client.delete(domain)
+                    module.exit_json(changed=True)
+                else:
+                    module.exit_json(changed=False)
+            else:
+                module.fail_json(msg="'%s' is an unknown value for the state argument" % state)
+
+        # need the not none check since record could be an empty string
+        if domain and record is not None:
+            records = [r['record'] for r in client.records(str(domain))]
+
+            if not record_type:
+                module.fail_json(msg="Missing the record type")
+
+            if not value:
+                module.fail_json(msg="Missing the record value")
+
+            rr = next((r for r in records if r['name'] == record and r['record_type'] == record_type and r['content'] == value), None)
+
+            if state == 'present':
+                changed = False
+                if is_solo:
+                    # delete any records that have the same name and record type
+                    same_type = [r['id'] for r in records if r['name'] == record and r['record_type'] == record_type]
+                    if rr:
+                        same_type = [rid for rid in same_type if rid != rr['id']]
+                    if same_type:
+                        if not module.check_mode:
+                            for rid in same_type:
+                                client.delete_record(str(domain), rid)
+                        changed = True
+                if rr:
+                    # check if we need to update
+                    if rr['ttl'] != ttl or rr['prio'] != priority:
+                        data = {}
+                        if ttl:      data['ttl'] = ttl
+                        if priority: data['prio'] = priority
+                        if module.check_mode:
+                            module.exit_json(changed=True)
+                        else:
+                            module.exit_json(changed=True, result=client.update_record(str(domain), str(rr['id']), data)['record'])
+                    else:
+                        module.exit_json(changed=changed, result=rr)
+                else:
+                    # create it
+                    data = {
+                        'name': record,
+                        'record_type': record_type,
+                        'content': value,
+                    }
+                    if ttl:      data['ttl'] = ttl
+                    if priority: data['prio'] = priority
+                    if module.check_mode:
+                        module.exit_json(changed=True)
+                    else:
+                        module.exit_json(changed=True, result=client.add_record(str(domain), data)['record'])
+            elif state == 'absent':
+                if rr:
+                    if not module.check_mode:
+                        client.delete_record(str(domain), rr['id'])
+                    module.exit_json(changed=True)
+                else:
+                    module.exit_json(changed=False)
+            else:
+                module.fail_json(msg="'%s' is an unknown value for the state argument" % state)
+
+        # Make sure these record_ids either all exist or none
+        if domain and record_ids:
+            current_records = [str(r['record']['id']) for r in client.records(str(domain))]
+            wanted_records = [str(r) for r in record_ids]
+            if state == 'present':
+                difference = list(set(wanted_records) - set(current_records))
+                if difference:
+                    module.fail_json(msg="Missing the following records: %s" % difference)
+                else:
+                    module.exit_json(changed=False)
+            elif state == 'absent':
+                difference = list(set(wanted_records) & set(current_records))
+                if difference:
+                    if not module.check_mode:
+                        for rid in difference:
+                            client.delete_record(str(domain), rid)
+                    module.exit_json(changed=True)
+                else:
+                    module.exit_json(changed=False)
+            else:
+                module.fail_json(msg="'%s' is an unknown value for the state argument" % state)
+
+    except DNSimpleException, e:
+        module.fail_json(msg="Unable to contact DNSimple: %s" % e.message)
+
+    module.fail_json(msg="Unable to determine what you wanted me to do")
+
+# import module snippets
+from ansible.module_utils.basic import *
+
+main()
diff --git a/library/net_infrastructure/dnsmadeeasy b/library/net_infrastructure/dnsmadeeasy
index d4af13e884a..148e25a5011 100644
--- a/library/net_infrastructure/dnsmadeeasy
+++ b/library/net_infrastructure/dnsmadeeasy
@@ -73,6 +73,15 @@ options:
     choices: [ 'present', 'absent' ]
     default: null
+  validate_certs:
+    description:
+      - If C(no), SSL certificates will not be validated. This should only be used
+        on personally controlled sites using self-signed certificates.
+    required: false
+    default: 'yes'
+    choices: ['yes', 'no']
+    version_added: 1.5.1
+
 notes:
   - The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone set. Be sure you are within a few seconds of actual time by using NTP.
  - This module returns record(s) in the "result" element when 'state' is set to 'present'. This value can be registered and used in your playbooks.

@@ -106,8 +115,6 @@ EXAMPLES = '''

 IMPORT_ERROR = None
 try:
-    import urllib
-    import urllib2
     import json
     from time import strftime, gmtime
     import hashlib
@@ -115,22 +122,6 @@ try:
 except ImportError, e:
     IMPORT_ERROR = str(e)

-
-class RequestWithMethod(urllib2.Request):
-
-    """Workaround for using DELETE/PUT/etc with urllib2"""
-
-    def __init__(self, url, method, data=None, headers={}):
-        self._method = method
-        urllib2.Request.__init__(self, url, data, headers)
-
-    def get_method(self):
-        if self._method:
-            return self._method
-        else:
-            return urllib2.Request.get_method(self)
-
-
 class DME2:

     def __init__(self, apikey, secret, domain, module):
@@ -138,7 +129,7 @@ class DME2:

         self.api = apikey
         self.secret = secret
-        self.baseurl = 'http://api.dnsmadeeasy.com/V2.0/'
+        self.baseurl = 'https://api.dnsmadeeasy.com/V2.0/'
         self.domain = str(domain)
         self.domain_map = None      # ["domain_name"] => ID
         self.record_map = None      # ["record_name"] => ID
@@ -169,21 +160,15 @@ class DME2:
         url = self.baseurl + resource
         if data and not isinstance(data, basestring):
             data = urllib.urlencode(data)

-        request = RequestWithMethod(url, method, data, self._headers())
-        try:
-            response = urllib2.urlopen(request)
-        except urllib2.HTTPError, e:
-            self.module.fail_json(
-                msg="%s returned %s, with body: %s" % (url, e.code, e.read()))
-        except Exception, e:
-            self.module.fail_json(
-                msg="Failed contacting: %s : Exception %s" % (url, e.message()))
+        response, info = fetch_url(self.module, url, data=data, method=method, headers=self._headers())
+        if info['status'] not in (200, 201, 204):
+            self.module.fail_json(msg="%s returned %s, with body: %s" % (url, info['status'], info['msg']))

         try:
             return json.load(response)
         except Exception, e:
-            return False
+            return {}

     def getDomain(self, domain_id):
         if not self.domain_map:
@@ -263,6 +248,7 @@ def main():
                 'A', 'AAAA', 'CNAME', 'HTTPRED', 'MX', 'NS', 'PTR', 'SRV', 'TXT']),
             record_value=dict(required=False),
             record_ttl=dict(required=False, default=1800, type='int'),
+            validate_certs = dict(default='yes', type='bool'),
         ),
         required_together=(
             ['record_value', 'record_ttl', 'record_type']
@@ -282,7 +268,7 @@ def main():
         domain_records = DME.getRecords()
         if not domain_records:
             module.fail_json(
-                msg="The %s domain name is not accessible with this api_key; try using its ID if known." % domain)
+                msg="The requested domain name is not accessible with this api_key; try using its ID if known.")
         module.exit_json(changed=False, result=domain_records)

     # Fetch existing record + Build new one
@@ -338,4 +324,6 @@ def main():

 # import module snippets
 from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
+
 main()
diff --git a/library/net_infrastructure/lldp b/library/net_infrastructure/lldp
new file mode 100755
index 00000000000..6b8836852f6
--- /dev/null
+++ b/library/net_infrastructure/lldp
@@ -0,0 +1,83 @@
+#!/usr/bin/python -tt
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+
+import subprocess
+
+DOCUMENTATION = '''
+---
+module: lldp
+version_added: 1.6
+short_description: get details reported by lldp
+description:
+    - Reads data out of lldpctl
+options: {}
+author: Andy Hill
+notes:
+    - Requires lldpd running and lldp enabled on switches
+'''
+
+EXAMPLES = '''
+# Retrieve switch/port information
+ - name: Gather information from lldp
+   lldp:
+
+ - name: Print each switch/port
+   debug: msg="{{ lldp[item]['chassis']['name'] }} / {{ lldp[item]['port']['ifalias'] }}"
+   with_items: lldp.keys()
+
+# TASK: [Print each switch/port] ***********************************************************
+# ok: [10.13.0.22] => (item=eth2) => {"item": "eth2", "msg": "switch1.example.com / Gi0/24"}
+# ok: [10.13.0.22] => (item=eth1) => {"item": "eth1", "msg": "switch2.example.com / Gi0/3"}
+# ok: [10.13.0.22] => (item=eth0) => {"item": "eth0", "msg": "switch3.example.com / Gi0/3"}
+
+'''
+
+def gather_lldp():
+    cmd = ['lldpctl', '-f', 'keyvalue']
+    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
+    (output, err) = proc.communicate()
+    if output:
+        output_dict = {}
+        lldp_entries = output.split("\n")
+
+        for entry in lldp_entries:
+            if entry:
+                path, value = entry.strip().split("=", 1)
+                path = path.split(".")
+                path_components, final = path[:-1], path[-1]
+
+                current_dict = output_dict
+                for path_component in path_components:
+                    current_dict[path_component] = current_dict.get(path_component, {})
+                    current_dict = current_dict[path_component]
+                current_dict[final] = value
+        return output_dict
+
+
+def main():
+    module = AnsibleModule({})
+
+    lldp_output = gather_lldp()
+    try:
+        data = {'lldp': lldp_output['lldp']}
+        module.exit_json(ansible_facts=data)
+    except TypeError:
+        module.fail_json(msg="lldpctl command failed. Is lldpd running?")
+
+# import module snippets
+from ansible.module_utils.basic import *
+main()
+
diff --git a/library/net_infrastructure/netscaler b/library/net_infrastructure/netscaler
index 1aa370895d5..2a8881cf56f 100644
--- a/library/net_infrastructure/netscaler
+++ b/library/net_infrastructure/netscaler
@@ -73,6 +73,14 @@ options:
     default: server
     choices: ["server", "service"]
     aliases: []
+  validate_certs:
+    description:
+      - If C(no), SSL certificates for the target url will not be validated. This should only be used
+        on personally controlled sites using self-signed certificates.
+    required: false
+    default: 'yes'
+    choices: ['yes', 'no']
+
 requirements: [ "urllib", "urllib2" ]
 author: Nandor Sivok
 '''
@@ -90,8 +98,6 @@ ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass"

 import json
-import urllib
-import urllib2
 import base64
 import socket

@@ -100,23 +106,25 @@ class netscaler(object):

     _nitro_base_url = '/nitro/v1/'

+    def __init__(self, module):
+        self.module = module
+
     def http_request(self, api_endpoint, data_json={}):
         request_url = self._nsc_protocol + '://' + self._nsc_host + self._nitro_base_url + api_endpoint

-        data_json = urllib.urlencode(data_json)
-
-        if len(data_json):
-            req = urllib2.Request(request_url, data_json)
-            req.add_header('Content-Type', 'application/x-www-form-urlencoded')
-        else:
-            req = urllib2.Request(request_url)
+        data_json = urllib.urlencode(data_json)
+        if not len(data_json):
+            data_json = None

-        base64string = base64.encodestring('%s:%s' % (self._nsc_user, self._nsc_pass)).replace('\n', '').strip()
-        req.add_header('Authorization', "Basic %s" % base64string)
+        auth = base64.encodestring('%s:%s' % (self._nsc_user, self._nsc_pass)).replace('\n', '').strip()
+        headers = {
+            'Authorization': 'Basic %s' % auth,
+            'Content-Type' : 'application/x-www-form-urlencoded',
+        }

-        resp = urllib2.urlopen(req)
-        resp = json.load(resp)
+        response, info = fetch_url(self.module, request_url, data=data_json, headers=headers)

-        return resp
+        return json.load(response)

     def prepare_request(self, action):
         resp = self.http_request(
@@ -134,7 +142,7 @@ class netscaler(object):

 def core(module):
-    n = netscaler()
+    n = netscaler(module)
     n._nsc_host = module.params.get('nsc_host')
     n._nsc_user = module.params.get('user')
     n._nsc_pass = module.params.get('password')
@@ -158,7 +166,8 @@ def main():
         password = dict(required=True),
         action = dict(default='enable', choices=['enable','disable']),
         name = dict(default=socket.gethostname()),
-        type = dict(default='server', choices=['service', 'server'])
+        type = dict(default='server', choices=['service', 'server']),
+        validate_certs=dict(default='yes', type='bool'),
     )
 )

@@ -177,4 +186,5 @@ def main():

 # import module snippets
 from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
 main()
diff --git a/library/network/get_url b/library/network/get_url
index 9704b8dbadb..74cc5479f4a 100644
--- a/library/network/get_url
+++ b/library/network/get_url
@@ -83,6 +83,13 @@ options:
     required: false
     default: 'yes'
    choices: ['yes', 'no']
+  validate_certs:
+    description:
+      - If C(no), SSL certificates will not be validated. This should only be used
+        on personally controlled sites using self-signed certificates.
+ required: false + default: 'yes' + choices: ['yes', 'no'] others: description: - all arguments accepted by the M(file) module also work here @@ -108,19 +115,6 @@ try: except ImportError: HAS_HASHLIB=False -try: - import urllib2 - HAS_URLLIB2 = True -except ImportError: - HAS_URLLIB2 = False - -try: - import urlparse - import socket - HAS_URLPARSE = True -except ImportError: - HAS_URLPARSE=False - # ============================================================== # url handling @@ -130,72 +124,6 @@ def url_filename(url): return 'index.html' return fn -def url_do_get(module, url, dest, use_proxy, last_mod_time, force): - """ - Get url and return request and info - Credits: http://stackoverflow.com/questions/7006574/how-to-download-file-from-ftp - """ - - USERAGENT = 'ansible-httpget' - info = dict(url=url, dest=dest) - r = None - handlers = [] - - parsed = urlparse.urlparse(url) - - if '@' in parsed[1]: - credentials, netloc = parsed[1].split('@', 1) - if ':' in credentials: - username, password = credentials.split(':', 1) - else: - username = credentials - password = '' - parsed = list(parsed) - parsed[1] = netloc - - passman = urllib2.HTTPPasswordMgrWithDefaultRealm() - # this creates a password manager - passman.add_password(None, netloc, username, password) - # because we have put None at the start it will always - # use this username/password combination for urls - # for which `theurl` is a super-url - - authhandler = urllib2.HTTPBasicAuthHandler(passman) - # create the AuthHandler - handlers.append(authhandler) - - #reconstruct url without credentials - url = urlparse.urlunparse(parsed) - - if not use_proxy: - proxyhandler = urllib2.ProxyHandler({}) - handlers.append(proxyhandler) - - opener = urllib2.build_opener(*handlers) - urllib2.install_opener(opener) - request = urllib2.Request(url) - request.add_header('User-agent', USERAGENT) - - if last_mod_time and not force: - tstamp = last_mod_time.strftime('%a, %d %b %Y %H:%M:%S +0000') - request.add_header('If-Modified-Since', tstamp) - else: - request.add_header('cache-control', 'no-cache') - - try: - r = urllib2.urlopen(request) - info.update(r.info()) - info['url'] = r.geturl() # The URL goes in too, because of redirects. - info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), status=200)) - except urllib2.HTTPError, e: - # Must not fail_json() here so caller can handle HTTP 304 unmodified - info.update(dict(msg=str(e), status=e.code)) - except urllib2.URLError, e: - code = getattr(e, 'code', -1) - module.fail_json(msg="Request failed: %s" % str(e), status_code=code) - - return r, info - def url_get(module, url, dest, use_proxy, last_mod_time, force): """ Download data from the url and store in a temporary file. 
@@ -203,7 +131,7 @@ def url_get(module, url, dest, use_proxy, last_mod_time, force): Return (tempfile, info about the request) """ - req, info = url_do_get(module, url, dest, use_proxy, last_mod_time, force) + rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time) if info['status'] == 304: module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', '')) @@ -215,12 +143,12 @@ def url_get(module, url, dest, use_proxy, last_mod_time, force): fd, tempname = tempfile.mkstemp() f = os.fdopen(fd, 'wb') try: - shutil.copyfileobj(req, f) + shutil.copyfileobj(rsp, f) except Exception, err: os.remove(tempname) module.fail_json(msg="failed to create temporary content file: %s" % str(err)) f.close() - req.close() + rsp.close() return tempname, info def extract_filename_from_headers(headers): @@ -247,21 +175,16 @@ def extract_filename_from_headers(headers): def main(): - # does this really happen on non-ancient python? - if not HAS_URLLIB2: - module.fail_json(msg="urllib2 is not installed") - if not HAS_URLPARSE: - module.fail_json(msg="urlparse is not installed") + argument_spec = url_argument_spec() + argument_spec.update( + url = dict(required=True), + dest = dict(required=True), + sha256sum = dict(default=''), + ) module = AnsibleModule( # not checking because of daisy chain to file module - argument_spec = dict( - url = dict(required=True), - dest = dict(required=True), - force = dict(default='no', aliases=['thirsty'], type='bool'), - sha256sum = dict(default=''), - use_proxy = dict(default='yes', type='bool') - ), + argument_spec = argument_spec, add_file_common_args=True ) @@ -366,4 +289,5 @@ def main(): # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * main() diff --git a/library/network/uri b/library/network/uri index 0060c1fdc90..b8b9b04ab9c 100644 --- a/library/network/uri +++ b/library/network/uri @@ -106,7 +106,7 @@ options: required: false status_code: description: - - A valid, numeric, HTTP status code that signifies success of the request. + - A valid, numeric, HTTP status code that signifies success of the request. Can also be comma separated list of status codes. required: false default: 200 timeout: @@ -143,23 +143,29 @@ EXAMPLES = ''' when: 'AWESOME' not in "{{ webpage.content }}" -# Create a JIRA issue. -- action: > - uri url=https://your.jira.example.com/rest/api/2/issue/ - method=POST user=your_username password=your_pass - body="{{ lookup('file','issue.json') }}" force_basic_auth=yes - status_code=201 HEADER_Content-Type="application/json" +# Create a JIRA issue -- action: > - uri url=https://your.form.based.auth.examle.com/index.php - method=POST body="name=your_username&password=your_password&enter=Sign%20in" - status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded" - register: login +- uri: url=https://your.jira.example.com/rest/api/2/issue/ + method=POST user=your_username password=your_pass + body="{{ lookup('file','issue.json') }}" force_basic_auth=yes + status_code=201 HEADER_Content-Type="application/json" # Login to a form based webpage, then use the returned cookie to -# access the app in later tasks. 
-- action: uri url=https://your.form.based.auth.example.com/dashboard.php - method=GET return_content=yes HEADER_Cookie="{{login.set_cookie}}" +# access the app in later tasks + +- uri: url=https://your.form.based.auth.examle.com/index.php + method=POST body="name=your_username&password=your_password&enter=Sign%20in" + status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded" + register: login + +- uri: url=https://your.form.based.auth.example.com/dashboard.php + method=GET return_content=yes HEADER_Cookie="{{login.set_cookie}}" + +# Queue build of a project in Jenkins: + +- uri: url=http://{{jenkins.host}}/job/{{jenkins.job}}/build?token={{jenkins.token}} + method=GET user={{jenkins.user}} password={{jenkins.password}} force_basic_auth=yes status_code=201 + ''' HAS_HTTPLIB2 = True @@ -335,7 +341,7 @@ def main(): follow_redirects = dict(required=False, default='safe', choices=['all', 'safe', 'none', 'yes', 'no']), creates = dict(required=False, default=None), removes = dict(required=False, default=None), - status_code = dict(required=False, default=200, type='int'), + status_code = dict(required=False, default=[200], type='list'), timeout = dict(required=False, default=30, type='int'), ), check_invalid_arguments=False, @@ -358,7 +364,7 @@ def main(): redirects = module.params['follow_redirects'] creates = module.params['creates'] removes = module.params['removes'] - status_code = int(module.params['status_code']) + status_code = [int(x) for x in list(module.params['status_code'])] socket_timeout = module.params['timeout'] # Grab all the http headers. Need this hack since passing multi-values is currently a bit ugly. (e.g. headers='{"Content-Type":"application/json"}') @@ -427,7 +433,7 @@ def main(): uresp['json'] = js except: pass - if resp['status'] != status_code: + if resp['status'] not in status_code: module.fail_json(msg="Status code was not " + str(status_code), content=content, **uresp) elif return_content: module.exit_json(changed=changed, content=content, **uresp) diff --git a/library/notification/flowdock b/library/notification/flowdock index a5be40d1f10..009487fb438 100644 --- a/library/notification/flowdock +++ b/library/notification/flowdock @@ -76,6 +76,14 @@ options: description: - (inbox only) Link associated with the message. This will be used to link the message subject in Team Inbox. required: false + validate_certs: + description: + - If C(no), SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + version_added: 1.5.1 # informational: requirements for nodes requirements: [ urllib, urllib2 ] @@ -96,31 +104,12 @@ EXAMPLES = ''' tags=tag1,tag2,tag3 ''' -HAS_URLLIB = True -try: - import urllib -except ImportError: - HAS_URLLIB = False - -HAS_URLLIB2 = True -try: - import urllib2 -except ImportError: - HAS_URLLIB2 = False - - - # =========================================== # Module execution. 
# def main(): - if not HAS_URLLIB: - module.fail_json(msg="urllib is not installed") - if not HAS_URLLIB2: - module.fail_json(msg="urllib2 is not installed") - module = AnsibleModule( argument_spec=dict( token=dict(required=True), @@ -135,6 +124,7 @@ def main(): project=dict(required=False), tags=dict(required=False), link=dict(required=False), + validate_certs = dict(default='yes', type='bool'), ), supports_check_mode=True ) @@ -187,14 +177,16 @@ def main(): module.exit_json(changed=False) # Send the data to Flowdock - try: - response = urllib2.urlopen(url, urllib.urlencode(params)) - except Exception, e: - module.fail_json(msg="unable to send msg: %s" % e) + data = urllib.urlencode(params) + response, info = fetch_url(module, url, data=data) + if info['status'] != 200: + module.fail_json(msg="unable to send msg: %s" % info['msg']) - module.exit_json(changed=False, msg=module.params["msg"]) + module.exit_json(changed=True, msg=module.params["msg"]) # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * + main() diff --git a/library/notification/grove b/library/notification/grove index b759f025e29..e6bf241bdaa 100644 --- a/library/notification/grove +++ b/library/notification/grove @@ -31,6 +31,14 @@ options: description: - Icon for the service required: false + validate_certs: + description: + - If C(no), SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + version_added: 1.5.1 author: Jonas Pfenniger ''' @@ -41,8 +49,6 @@ EXAMPLES = ''' message=deployed {{ target }} ''' -import urllib - BASE_URL = 'https://grove.io/api/notice/%s/' # ============================================================== @@ -57,7 +63,10 @@ def do_notify_grove(module, channel_token, service, message, url=None, icon_url= if icon_url is not None: my_data['icon_url'] = icon_url - urllib.urlopen(my_url, urllib.urlencode(my_data)) + data = urllib.urlencode(my_data) + response, info = fetch_url(module, my_url, data=data) + if info['status'] != 200: + module.fail_json(msg="failed to send notification: %s" % info['msg']) # ============================================================== # main @@ -70,6 +79,7 @@ def main(): service = dict(type='str', default='ansible'), url = dict(type='str', default=None), icon_url = dict(type='str', default=None), + validate_certs = dict(default='yes', type='bool'), ) ) diff --git a/library/notification/hipchat b/library/notification/hipchat index eec2b8c3618..4ff95b32bf6 100644 --- a/library/notification/hipchat +++ b/library/notification/hipchat @@ -46,6 +46,21 @@ options: required: false default: 'yes' choices: [ "yes", "no" ] + validate_certs: + description: + - If C(no), SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + version_added: 1.5.1 + api: + description: + - API url if using a self-hosted hipchat server + required: false + default: 'https://api.hipchat.com/v1/rooms/message' + version_added: 1.6.0 + # informational: requirements for nodes requirements: [ urllib, urllib2 ] @@ -60,23 +75,10 @@ EXAMPLES = ''' # HipChat module specific support methods. 
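The flowdock and grove conversions above follow the same pattern throughout this patch: build the POST body with urllib.urlencode, call fetch_url from module_utils/urls, and fail on a non-200 status. A minimal sketch of that pattern outside any particular module follows; the endpoint and the notify helper are illustrative only, not part of the patch:

import urllib

EXAMPLE_URL = 'https://service.example.com/notify'  # hypothetical endpoint

def notify(module, params):
    data = urllib.urlencode(params)
    # fetch_url returns (response, info); info always carries 'status' and 'msg',
    # and SSL validation is governed by the module's validate_certs parameter.
    response, info = fetch_url(module, EXAMPLE_URL, data=data)
    if info['status'] != 200:
        module.fail_json(msg="unable to send msg: %s" % info['msg'])
    return response.read()

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *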
 #
 
-HAS_URLLIB = True
-try:
-    import urllib
-except ImportError:
-    HAS_URLLIB = False
+MSG_URI = "https://api.hipchat.com/v1/rooms/message"
 
-HAS_URLLIB2 = True
-try:
-    import urllib2
-except ImportError:
-    HAS_URLLIB2 = False
-
-MSG_URI = "https://api.hipchat.com/v1/rooms/message?"
-
-
-def send_msg(token, room, msg_from, msg, msg_format='text',
-             color='yellow', notify=False):
+def send_msg(module, token, room, msg_from, msg, msg_format='text',
+             color='yellow', notify=False, api=MSG_URI):
     '''sending message to hipchat'''
 
     params = {}
@@ -85,15 +87,20 @@ def send_msg(token, room, msg_from, msg, msg_format='text',
     params['message'] = msg
     params['message_format'] = msg_format
     params['color'] = color
+    params['api'] = api
 
     if notify:
         params['notify'] = 1
     else:
         params['notify'] = 0
 
-    url = MSG_URI + "auth_token=%s" % (token)
-    response = urllib2.urlopen(url, urllib.urlencode(params))
-    return response.read()
+    url = api + "?auth_token=%s" % (token)
+    data = urllib.urlencode(params)
+    response, info = fetch_url(module, url, data=data)
+    if info['status'] == 200:
+        return response.read()
+    else:
+        module.fail_json(msg="failed to send message, return status=%s" % str(info['status']))
 
 
 # ===========================================
@@ -102,11 +109,6 @@ def send_msg(token, room, msg_from, msg, msg_format='text',
 
 def main():
 
-    if not HAS_URLLIB:
-        module.fail_json(msg="urllib is not installed")
-    if not HAS_URLLIB2:
-        module.fail_json(msg="urllib2 is not installed")
-
     module = AnsibleModule(
         argument_spec=dict(
             token=dict(required=True),
@@ -117,6 +119,8 @@ def main():
                                            "purple", "gray", "random"]),
             msg_format=dict(default="text", choices=["text", "html"]),
             notify=dict(default=True, type='bool'),
+            validate_certs = dict(default='yes', type='bool'),
+            api = dict(default=MSG_URI),
         ),
         supports_check_mode=True
     )
@@ -128,17 +132,18 @@ def main():
     color = module.params["color"]
     msg_format = module.params["msg_format"]
     notify = module.params["notify"]
+    api = module.params["api"]
 
     try:
-        send_msg(token, room, msg_from, msg, msg_format,
-                 color, notify)
+        send_msg(module, token, room, msg_from, msg, msg_format, color, notify, api)
     except Exception, e:
         module.fail_json(msg="unable to send msg: %s" % e)
 
     changed = True
-    module.exit_json(changed=changed, room=room, msg_from=msg_from,
-                     msg=msg)
+    module.exit_json(changed=changed, room=room, msg_from=msg_from, msg=msg)
 
 # import module snippets
 from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
+
 main()
diff --git a/library/notification/irc b/library/notification/irc
index 11bdc4a95ec..bba7319a083 100644
--- a/library/notification/irc
+++ b/library/notification/irc
@@ -39,7 +39,7 @@ options:
     default: 6667
   nick:
     description:
-      - Nickname
+      - Nickname. May be shortened, depending on server's NICKLEN setting.
     required: false
     default: ansible
   msg:
@@ -49,10 +49,10 @@ options:
     default: null
   color:
     description:
-      - Text color for the message. Default is black.
+      - Text color for the message. ("none" is a valid option in 1.6 or later; in versions prior to 1.6 the default color is black, not "none".)
     required: false
-    default: black
-    choices: [ "yellow", "red", "green", "blue", "black" ]
+    default: "none"
+    choices: [ "none", "yellow", "red", "green", "blue", "black" ]
   channel:
     description:
       - Channel name
@@ -94,7 +94,7 @@ from time import sleep
 
 
 def send_msg(channel, msg, server='localhost', port='6667',
-             nick="ansible", color='black', passwd=False, timeout=30):
+             nick="ansible", color='none', passwd=False, timeout=30):
     '''send message to IRC'''
 
     colornumbers = {
@@ -107,10 +107,11 @@ def send_msg(channel, msg, server='localhost', port='6667',
 
     try:
         colornumber = colornumbers[color]
+        colortext = "\x03" + colornumber
     except:
-        colornumber = "01"  # black
+        colortext = ""
 
-    message = "\x03" + colornumber + msg
+    message = colortext + msg
 
     irc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     irc.connect((server, int(port)))
@@ -122,11 +123,15 @@ def send_msg(channel, msg, server='localhost', port='6667',
     start = time.time()
     while 1:
         motd += irc.recv(1024)
-        if re.search('^:\S+ 00[1-4] %s :' % nick, motd, flags=re.M):
+        # The server might send back a shorter nick than we specified (due to NICKLEN),
+        # so grab that and use it from now on (assuming we find the 00[1-4] response).
+        match = re.search('^:\S+ 00[1-4] (?P<nick>\S+) :', motd, flags=re.M)
+        if match:
+            nick = match.group('nick')
             break
         elif time.time() - start > timeout:
             raise Exception('Timeout waiting for IRC server welcome response')
-        time.sleep(0.5)
+        sleep(0.5)
 
     irc.send('JOIN %s\r\n' % channel)
     join = ''
@@ -137,13 +142,13 @@ def send_msg(channel, msg, server='localhost', port='6667',
             break
         elif time.time() - start > timeout:
             raise Exception('Timeout waiting for IRC JOIN response')
-        time.sleep(0.5)
+        sleep(0.5)
 
     irc.send('PRIVMSG %s :%s\r\n' % (channel, message))
-    time.sleep(1)
+    sleep(1)
     irc.send('PART %s\r\n' % channel)
     irc.send('QUIT\r\n')
-    time.sleep(1)
+    sleep(1)
     irc.close()
 
 # ===========================================
@@ -158,8 +163,8 @@ def main():
             port=dict(default=6667),
             nick=dict(default='ansible'),
             msg=dict(required=True),
-            color=dict(default="black", choices=["yellow", "red", "green",
-                                                 "blue", "black"]),
+            color=dict(default="none", choices=["yellow", "red", "green",
+                                                "blue", "black", "none"]),
             channel=dict(required=True),
             passwd=dict(),
             timeout=dict(type='int', default=30)
diff --git a/library/notification/mqtt b/library/notification/mqtt
index d00307018dc..d701bd9348a 100644
--- a/library/notification/mqtt
+++ b/library/notification/mqtt
@@ -1,7 +1,7 @@
 #!/usr/bin/python
 # -*- coding: utf-8 -*-
 
-# (c) 2013, Jan-Piet Mens
+# (c) 2013, 2014, Jan-Piet Mens
 #
 # This file is part of Ansible
 #
@@ -80,7 +80,7 @@ options:
 requirements: [ mosquitto ]
 notes:
  - This module requires a connection to an MQTT broker such as Mosquitto
-   U(http://mosquitto.org) and the C(mosquitto) Python module (U(http://mosquitto.org/python)).
+   U(http://mosquitto.org) and the I(Paho) C(mqtt) Python client (U(https://pypi.python.org/pypi/paho-mqtt)).
 author: Jan-Piet Mens
 '''
 
@@ -97,34 +97,12 @@ EXAMPLES = '''
 # MQTT module support methods.
 #
 
-HAS_MOSQUITTO = True
+HAS_PAHOMQTT = True
 try:
     import socket
-    import mosquitto
+    import paho.mqtt.publish as mqtt
 except ImportError:
-    HAS_MOSQUITTO = False
-import os
-
-def publish(module, topic, payload, server='localhost', port='1883', qos='0',
-        client_id='', retain=False, username=None, password=None):
-    '''Open connection to MQTT broker and publish the topic'''
-
-    mqttc = mosquitto.Mosquitto(client_id, clean_session=True)
-
-    if username is not None and password is not None:
-        mqttc.username_pw_set(username, password)
-
-    rc = mqttc.connect(server, int(port), 5)
-    if rc != 0:
-        module.fail_json(msg="unable to connect to MQTT broker")
-
-    mqttc.publish(topic, payload, int(qos), retain)
-    rc = mqttc.loop()
-    if rc != 0:
-        module.fail_json(msg="unable to send to MQTT broker")
-
-    mqttc.disconnect()
-
+    HAS_PAHOMQTT = False
 
 # ===========================================
 # Main
@@ -132,10 +110,6 @@ def publish(module, topic, payload, server='localhost', port='1883', qos='0',
 
 def main():
 
-    if not HAS_MOSQUITTO:
-        module.fail_json(msg="mosquitto is not installed")
-
-
     module = AnsibleModule(
         argument_spec=dict(
             server = dict(default = 'localhost'),
@@ -151,15 +125,18 @@ def main():
         supports_check_mode=True
     )
 
-    server = module.params["server"]
-    port = module.params["port"]
-    topic = module.params["topic"]
-    payload = module.params["payload"]
-    client_id = module.params["client_id"]
-    qos = module.params["qos"]
-    retain = module.params["retain"]
-    username = module.params["username"]
-    password = module.params["password"]
+    if not HAS_PAHOMQTT:
+        module.fail_json(msg="Paho MQTT is not installed")
+
+    server = module.params.get("server", 'localhost')
+    port = module.params.get("port", 1883)
+    topic = module.params.get("topic")
+    payload = module.params.get("payload")
+    client_id = module.params.get("client_id", '')
+    qos = int(module.params.get("qos", 0))
+    retain = module.params.get("retain")
+    username = module.params.get("username", None)
+    password = module.params.get("password", None)
 
     if client_id is None:
         client_id = "%s_%s" % (socket.getfqdn(), os.getpid())
@@ -167,9 +144,18 @@ def main():
     if payload and payload == 'None':
         payload = None
 
+    auth=None
+    if username is not None:
+        auth = { 'username' : username, 'password' : password }
+
     try:
-        publish(module, topic, payload, server, port, qos, client_id, retain,
-                username, password)
+        rc = mqtt.single(topic, payload,
+                         qos=qos,
+                         retain=retain,
+                         client_id=client_id,
+                         hostname=server,
+                         port=port,
+                         auth=auth)
     except Exception, e:
         module.fail_json(msg="unable to publish to MQTT broker %s" % (e))
 
diff --git a/library/notification/nexmo b/library/notification/nexmo
new file mode 100644
index 00000000000..d4898c40cdb
--- /dev/null
+++ b/library/notification/nexmo
@@ -0,0 +1,140 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# (c) 2014, Matt Martz
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
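Before the new nexmo module continues below, a note on the mqtt rewrite above: it replaces a hand-rolled connect/publish/loop/disconnect sequence with paho-mqtt's one-shot helper. The same call works standalone; the broker, topic, and credentials here are placeholder values, not part of the patch:

import paho.mqtt.publish as mqtt

# Connects, publishes a single message, and disconnects.
mqtt.single('ansible/deploys',            # topic (placeholder)
            'deploy finished',            # payload (placeholder)
            qos=1,
            retain=False,
            hostname='localhost',         # broker address (placeholder)
            port=1883,
            client_id='ansible-example',
            auth={'username': 'user', 'password': 'secret'})  # omit for anonymous brokers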
+
+DOCUMENTATION = """
+module: nexmo
+short_description: Send a SMS via nexmo
+description:
+    - Send a SMS message via nexmo
+version_added: 1.6
+author: Matt Martz
+options:
+  api_key:
+    description:
+      - Nexmo API Key
+    required: true
+  api_secret:
+    description:
+      - Nexmo API Secret
+    required: true
+  src:
+    description:
+      - Nexmo Number to send from
+    required: true
+  dest:
+    description:
+      - Phone number(s) to send SMS message to
+    required: true
+  msg:
+    description:
+      - Message to text to send. Messages longer than 160 characters will be
+        split into multiple messages
+    required: true
+  validate_certs:
+    description:
+      - If C(no), SSL certificates will not be validated. This should only be used
+        on personally controlled sites using self-signed certificates.
+    required: false
+    default: 'yes'
+    choices:
+      - 'yes'
+      - 'no'
+"""
+
+EXAMPLES = """
+- name: Send notification message via Nexmo
+  local_action:
+    module: nexmo
+    api_key: 640c8a53
+    api_secret: 0ce239a6
+    src: 12345678901
+    dest:
+      - 10987654321
+      - 16789012345
+    msg: "{{ inventory_hostname }} completed"
+"""
+
+
+NEXMO_API = 'https://rest.nexmo.com/sms/json'
+
+
+def send_msg(module):
+    failed = list()
+    responses = dict()
+    msg = {
+        'api_key': module.params.get('api_key'),
+        'api_secret': module.params.get('api_secret'),
+        'from': module.params.get('src'),
+        'text': module.params.get('msg')
+    }
+    for number in module.params.get('dest'):
+        msg['to'] = number
+        url = "%s?%s" % (NEXMO_API, urllib.urlencode(msg))
+
+        headers = dict(Accept='application/json')
+        response, info = fetch_url(module, url, headers=headers)
+        if info['status'] != 200:
+            failed.append(number)
+            responses[number] = dict(failed=True)
+
+        try:
+            responses[number] = json.load(response)
+        except:
+            failed.append(number)
+            responses[number] = dict(failed=True)
+        else:
+            for message in responses[number]['messages']:
+                if int(message['status']) != 0:
+                    failed.append(number)
+                    responses[number] = dict(failed=True, **responses[number])
+
+    if failed:
+        msg = 'One or more messages failed to send'
+    else:
+        msg = ''
+
+    module.exit_json(failed=bool(failed), msg=msg, changed=False,
+                     responses=responses)
+
+
+def main():
+    argument_spec = url_argument_spec()
+    argument_spec.update(
+        dict(
+            api_key=dict(required=True, no_log=True),
+            api_secret=dict(required=True, no_log=True),
+            src=dict(required=True, type='int'),
+            dest=dict(required=True, type='list'),
+            msg=dict(required=True),
+        ),
+    )
+
+    module = AnsibleModule(
+        argument_spec=argument_spec
+    )
+
+    send_msg(module)
+
+
+# import module snippets
+from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
+
+main()
diff --git a/library/notification/osx_say b/library/notification/osx_say
index de5d1917c5f..39e3da88c19 100644
--- a/library/notification/osx_say
+++ b/library/notification/osx_say
@@ -44,8 +44,6 @@ EXAMPLES = '''
 - local_action: osx_say msg="{{inventory_hostname}} is all done" voice=Zarvox
 '''
 
-import subprocess
-
 DEFAULT_VOICE='Trinoids'
 
 def say(module, msg, voice):
diff --git a/library/notification/slack b/library/notification/slack
new file mode 100644
index 00000000000..176d6b338fb
--- /dev/null
+++ b/library/notification/slack
@@ -0,0 +1,173 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# (c) 2014, Ramon de la Fuente
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+DOCUMENTATION = """
+module: slack
+short_description: Send Slack notifications
+description:
+    - The M(slack) module sends notifications to U(http://slack.com) via the Incoming WebHook integration
+version_added: 1.6
+author: Ramon de la Fuente
+options:
+  domain:
+    description:
+      - Slack (sub)domain for your environment without protocol.
+        (i.e. C(future500.slack.com))
+    required: true
+  token:
+    description:
+      - Slack integration token
+    required: true
+  msg:
+    description:
+      - Message to send.
+    required: true
+  channel:
+    description:
+      - Channel to send the message to. If absent, the message goes to the channel selected for the I(token).
+    required: false
+  username:
+    description:
+      - This is the sender of the message.
+    required: false
+    default: Ansible
+  icon_url:
+    description:
+      - URL for the message sender's icon (default C(http://www.ansible.com/favicon.ico))
+    required: false
+  icon_emoji:
+    description:
+      - Emoji for the message sender. See Slack documentation for options.
+        (if I(icon_emoji) is set, I(icon_url) will not be used)
+    required: false
+  link_names:
+    description:
+      - Automatically create links for channels and usernames in I(msg).
+    required: false
+    default: 1
+    choices:
+      - 1
+      - 0
+  parse:
+    description:
+      - Setting for the message parser at Slack
+    required: false
+    choices:
+      - 'full'
+      - 'none'
+  validate_certs:
+    description:
+      - If C(no), SSL certificates will not be validated. This should only be used
+        on personally controlled sites using self-signed certificates.
+ required: false + default: 'yes' + choices: + - 'yes' + - 'no' +""" + +EXAMPLES = """ +- name: Send notification message via Slack + local_action: + module: slack + domain: future500.slack.com + token: thetokengeneratedbyslack + msg: "{{ inventory_hostname }} completed" + +- name: Send notification message via Slack all options + local_action: + module: slack + domain: future500.slack.com + token: thetokengeneratedbyslack + msg: "{{ inventory_hostname }} completed" + channel: "#ansible" + username: "Ansible on {{ inventory_hostname }}" + icon_url: "http://www.example.com/some-image-file.png" + link_names: 0 + parse: 'none' + +""" + + +SLACK_INCOMING_WEBHOOK = 'https://%s/services/hooks/incoming-webhook?token=%s' + +def build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse): + payload = dict(text=text) + + if channel is not None: + payload['channel'] = channel if (channel[0] == '#') else '#'+channel + if username is not None: + payload['username'] = username + if icon_emoji is not None: + payload['icon_emoji'] = icon_emoji + else: + payload['icon_url'] = icon_url + if link_names is not None: + payload['link_names'] = link_names + if parse is not None: + payload['parse'] = parse + + payload="payload=" + module.jsonify(payload) + return payload + +def do_notify_slack(module, domain, token, payload): + slack_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, token) + + response, info = fetch_url(module, slack_incoming_webhook, data=payload) + if info['status'] != 200: + obscured_incoming_webhook = SLACK_INCOMING_WEBHOOK % (domain, '[obscured]') + module.fail_json(msg=" failed to send %s to %s: %s" % (payload, obscured_incoming_webhook, info['msg'])) + +def main(): + module = AnsibleModule( + argument_spec = dict( + domain = dict(type='str', required=True), + token = dict(type='str', required=True), + msg = dict(type='str', required=True), + channel = dict(type='str', default=None), + username = dict(type='str', default='Ansible'), + icon_url = dict(type='str', default='http://www.ansible.com/favicon.ico'), + icon_emoji = dict(type='str', default=None), + link_names = dict(type='int', default=1, choices=[0,1]), + parse = dict(type='str', default=None, choices=['none', 'full']), + + validate_certs = dict(default='yes', type='bool'), + ) + ) + + domain = module.params['domain'] + token = module.params['token'] + text = module.params['msg'] + channel = module.params['channel'] + username = module.params['username'] + icon_url = module.params['icon_url'] + icon_emoji = module.params['icon_emoji'] + link_names = module.params['link_names'] + parse = module.params['parse'] + + payload = build_payload_for_slack(module, text, channel, username, icon_url, icon_emoji, link_names, parse) + do_notify_slack(module, domain, token, payload) + + module.exit_json(msg="OK") + +# import module snippets +from ansible.module_utils.basic import * +from ansible.module_utils.urls import * +main() \ No newline at end of file diff --git a/library/notification/sns b/library/notification/sns new file mode 100644 index 00000000000..f2ed178554e --- /dev/null +++ b/library/notification/sns @@ -0,0 +1,190 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2014, Michael J. Schultz +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. 
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+DOCUMENTATION = """
+module: sns
+short_description: Send Amazon Simple Notification Service (SNS) messages
+description:
+    - The M(sns) module sends notifications to a topic on your Amazon SNS account
+version_added: 1.6
+author: Michael J. Schultz
+options:
+  msg:
+    description:
+      - Default message to send.
+    required: true
+    aliases: [ "default" ]
+  subject:
+    description:
+      - Subject line for email delivery.
+    required: false
+  topic:
+    description:
+      - The topic you want to publish to.
+    required: true
+  email:
+    description:
+      - Message to send to email-only subscription
+    required: false
+  sqs:
+    description:
+      - Message to send to SQS-only subscription
+    required: false
+  sms:
+    description:
+      - Message to send to SMS-only subscription
+    required: false
+  http:
+    description:
+      - Message to send to HTTP-only subscription
+    required: false
+  https:
+    description:
+      - Message to send to HTTPS-only subscription
+    required: false
+  aws_secret_key:
+    description:
+      - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
+    required: false
+    default: None
+    aliases: ['ec2_secret_key', 'secret_key']
+  aws_access_key:
+    description:
+      - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used.
+    required: false
+    default: None
+    aliases: ['ec2_access_key', 'access_key']
+  region:
+    description:
+      - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used.
+    required: false
+    aliases: ['aws_region', 'ec2_region']
+
+requirements: [ "boto" ]
+"""
+
+EXAMPLES = """
+- name: Send default notification message via SNS
+  local_action:
+    module: sns
+    msg: "{{ inventory_hostname }} has completed the play."
+    subject: "Deploy complete!"
+    topic: "deploy"
+
+- name: Send notification messages via SNS with short message for SMS
+  local_action:
+    module: sns
+    msg: "{{ inventory_hostname }} has completed the play."
+    sms: "deployed!"
+    subject: "Deploy complete!"
+    topic: "deploy"
+"""
+
+import sys
+
+from ansible.module_utils.basic import *
+from ansible.module_utils.ec2 import *
+
+try:
+    import boto
+    import boto.sns
+except ImportError:
+    print "failed=True msg='boto required for this module'"
+    sys.exit(1)
+
+
+def arn_topic_lookup(connection, short_topic):
+    response = connection.get_all_topics()
+    result = response[u'ListTopicsResponse'][u'ListTopicsResult']
+    # topic names cannot have colons, so this captures the full topic name
+    lookup_topic = ':{}'.format(short_topic)
+    for topic in result[u'Topics']:
+        if topic[u'TopicArn'].endswith(lookup_topic):
+            return topic[u'TopicArn']
+    return None
+
+
+def main():
+    argument_spec = ec2_argument_spec()
+    argument_spec.update(
+        dict(
+            msg=dict(type='str', required=True, aliases=['default']),
+            subject=dict(type='str', default=None),
+            topic=dict(type='str', required=True),
+            email=dict(type='str', default=None),
+            sqs=dict(type='str', default=None),
+            sms=dict(type='str', default=None),
+            http=dict(type='str', default=None),
+            https=dict(type='str', default=None),
+        )
+    )
+
+    module = AnsibleModule(argument_spec=argument_spec)
+
+    msg = module.params['msg']
+    subject = module.params['subject']
+    topic = module.params['topic']
+    email = module.params['email']
+    sqs = module.params['sqs']
+    sms = module.params['sms']
+    http = module.params['http']
+    https = module.params['https']
+
+    region, ec2_url, aws_connect_params = get_aws_connection_info(module)
+    if not region:
+        module.fail_json(msg="region must be specified")
+    try:
+        connection = connect_to_aws(boto.sns, region, **aws_connect_params)
+    except boto.exception.NoAuthHandlerFound, e:
+        module.fail_json(msg=str(e))
+
+    # .publish() takes a full ARN topic id, but I'm lazy and type shortnames
+    # so do a lookup (topics cannot contain ':', so that's the decider)
+    if ':' in topic:
+        arn_topic = topic
+    else:
+        arn_topic = arn_topic_lookup(connection, topic)
+
+    if not arn_topic:
+        module.fail_json(msg='Could not find topic: {}'.format(topic))
+
+    dict_msg = {'default': msg}
+    if email:
+        dict_msg.update(email=email)
+    if sqs:
+        dict_msg.update(sqs=sqs)
+    if sms:
+        dict_msg.update(sms=sms)
+    if http:
+        dict_msg.update(http=http)
+    if https:
+        dict_msg.update(https=https)
+
+    json_msg = json.dumps(dict_msg)
+    try:
+        connection.publish(topic=arn_topic, subject=subject,
+                           message_structure='json', message=json_msg)
+    except boto.exception.BotoServerError, e:
+        module.fail_json(msg=str(e))
+
+    module.exit_json(msg="OK")
+
+main()
diff --git a/library/notification/twilio b/library/notification/twilio
new file mode 100644
index 00000000000..8969c28aa50
--- /dev/null
+++ b/library/notification/twilio
@@ -0,0 +1,135 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# (c) 2014, Matt Makai
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+DOCUMENTATION = '''
+---
+version_added: "1.6"
+module: twilio
+short_description: Sends a text message to a mobile phone through Twilio.
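Stepping back to the sns module above for a moment, before the twilio documentation continues: the per-protocol payload only works because publish() is called with message_structure='json', which tells SNS to pick the right message for each subscription type. A standalone sketch with placeholder region, ARN, and messages (not part of the patch):

import json
import boto.sns

conn = boto.sns.connect_to_region('us-east-1')   # placeholder region
dict_msg = {'default': 'deploy finished',        # required fallback for every protocol
            'sms': 'deployed!'}                  # shorter text for SMS subscribers
conn.publish(topic='arn:aws:sns:us-east-1:123456789012:deploy',  # placeholder ARN
             subject='Deploy complete!',
             message_structure='json',
             message=json.dumps(dict_msg))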
+description:
+    - Sends a text message to a phone number through the Twilio SMS service.
+notes:
+    - Like the other notification modules, this one requires an external
+      dependency to work. In this case, you'll need a Twilio account with
+      a purchased or verified phone number to send the text message.
+options:
+  account_sid:
+    description:
+      user's account id for Twilio found on the account page
+    required: true
+  auth_token:
+    description: user's authentication token for Twilio found on the account page
+    required: true
+  msg:
+    description:
+      the body of the text message
+    required: true
+  to_number:
+    description:
+      what phone number to send the text message to, format +15551112222
+    required: true
+  from_number:
+    description:
+      what phone number to send the text message from, format +15551112222
+    required: true
+
+requirements: [ urllib, urllib2 ]
+author: Matt Makai
+'''
+
+EXAMPLES = '''
+# send a text message from the local server about the build status to (555) 303 5681
+# note: you have to have purchased the 'from_number' on your Twilio account
+- local_action: text msg="All servers with webserver role are now configured."
+  account_sid={{ twilio_account_sid }}
+  auth_token={{ twilio_auth_token }}
+  from_number=+15552014545 to_number=+15553035681
+
+# send a text message from a server to (555) 111 3232
+# note: you have to have purchased the 'from_number' on your Twilio account
+- text: msg="This server's configuration is now complete."
+  account_sid={{ twilio_account_sid }}
+  auth_token={{ twilio_auth_token }}
+  from_number=+15553258899 to_number=+15551113232
+
+'''
+
+# =======================================
+# text module support methods
+#
+try:
+    import urllib, urllib2
+except ImportError:
+    module.fail_json(msg="urllib and urllib2 are required")
+
+import base64
+
+
+def post_text(module, account_sid, auth_token, msg, from_number, to_number):
+    URI = "https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json" \
+        % (account_sid,)
+    AGENT = "Ansible/1.5"
+
+    data = {'From':from_number, 'To':to_number, 'Body':msg}
+    encoded_data = urllib.urlencode(data)
+    request = urllib2.Request(URI)
+    base64string = base64.encodestring('%s:%s' % \
+        (account_sid, auth_token)).replace('\n', '')
+    request.add_header('User-Agent', AGENT)
+    request.add_header('Content-type', 'application/x-www-form-urlencoded')
+    request.add_header('Accept', 'application/ansible')
+    request.add_header('Authorization', 'Basic %s' % base64string)
+    return urllib2.urlopen(request, encoded_data)
+
+
+# =======================================
+# Main
+#
+
+def main():
+
+    module = AnsibleModule(
+        argument_spec=dict(
+            account_sid=dict(required=True),
+            auth_token=dict(required=True),
+            msg=dict(required=True),
+            from_number=dict(required=True),
+            to_number=dict(required=True),
+        ),
+        supports_check_mode=True
+    )
+
+    account_sid = module.params['account_sid']
+    auth_token = module.params['auth_token']
+    msg = module.params['msg']
+    from_number = module.params['from_number']
+    to_number = module.params['to_number']
+
+    try:
+        response = post_text(module, account_sid, auth_token, msg,
+                             from_number, to_number)
+    except Exception, e:
+        module.fail_json(msg="unable to send text message to %s" % to_number)
+
+    module.exit_json(msg=msg, changed=False)
+
+# import module snippets
+from ansible.module_utils.basic import *
+main()
diff --git a/library/notification/typetalk b/library/notification/typetalk
new file mode 100644
index 00000000000..b987acbe837
--- /dev/null
+++ b/library/notification/typetalk
@@ -0,0
+1,116 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +DOCUMENTATION = ''' +--- +module: typetalk +version_added: "1.6" +short_description: Send a message to typetalk +description: + - Send a message to typetalk using typetalk API ( http://developers.typetalk.in/ ) +options: + client_id: + description: + - OAuth2 client ID + required: true + client_secret: + description: + - OAuth2 client secret + required: true + topic: + description: + - topic id to post message + required: true + msg: + description: + - message body + required: true +requirements: [ urllib, urllib2, json ] +author: Takashi Someda +''' + +EXAMPLES = ''' +- typetalk: client_id=12345 client_secret=12345 topic=1 msg="install completed" +''' + +try: + import urllib +except ImportError: + urllib = None + +try: + import urllib2 +except ImportError: + urllib2 = None + +try: + import json +except ImportError: + json = None + + +def do_request(url, params, headers={}): + data = urllib.urlencode(params) + headers = dict(headers, **{ + 'User-Agent': 'Ansible/typetalk module', + }) + return urllib2.urlopen(urllib2.Request(url, data, headers)) + + +def get_access_token(client_id, client_secret): + params = { + 'client_id': client_id, + 'client_secret': client_secret, + 'grant_type': 'client_credentials', + 'scope': 'topic.post' + } + res = do_request('https://typetalk.in/oauth2/access_token', params) + return json.load(res)['access_token'] + + +def send_message(client_id, client_secret, topic, msg): + """ + send message to typetalk + """ + try: + access_token = get_access_token(client_id, client_secret) + url = 'https://typetalk.in/api/v1/topics/%d' % topic + headers = { + 'Authorization': 'Bearer %s' % access_token, + } + do_request(url, {'message': msg}, headers) + return True, {'access_token': access_token} + except urllib2.HTTPError, e: + return False, e + + +def main(): + + module = AnsibleModule( + argument_spec=dict( + client_id=dict(required=True), + client_secret=dict(required=True), + topic=dict(required=True, type='int'), + msg=dict(required=True), + ), + supports_check_mode=False + ) + + if not (urllib and urllib2 and json): + module.fail_json(msg="urllib, urllib2 and json modules are required") + + client_id = module.params["client_id"] + client_secret = module.params["client_secret"] + topic = module.params["topic"] + msg = module.params["msg"] + + res, error = send_message(client_id, client_secret, topic, msg) + if not res: + module.fail_json(msg='fail to send message with response code %s' % error.code) + + module.exit_json(changed=True, topic=topic, msg=msg) + + +# import module snippets +from ansible.module_utils.basic import * +main() diff --git a/library/packaging/apt b/library/packaging/apt old mode 100644 new mode 100755 index f143c8f7b73..6bd19177f2d --- a/library/packaging/apt +++ b/library/packaging/apt @@ -29,18 +29,18 @@ version_added: "0.0.2" options: pkg: description: - - A package name or package specifier with version, like C(foo) or C(foo=1.0). Shell like wildcards (fnmatch) like apt* are also supported. + - A package name, like C(foo), or package specifier with version, like C(foo=1.0). Wildcards (fnmatch) like apt* are also supported. required: false default: null state: description: - - Indicates the desired package state + - Indicates the desired package state. C(latest) ensures that the latest version is installed. required: false default: present choices: [ "latest", "absent", "present" ] update_cache: description: - - Run the equivalent of C(apt-get update) before the operation. 
Can be run as part of the package installation or as a separate step + - Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step. required: false default: no choices: [ "yes", "no" ] @@ -62,7 +62,7 @@ options: default: null install_recommends: description: - - Corresponds to the C(--no-install-recommends) option for I(apt), default behavior works as apt's default behavior, C(no) does not install recommended packages. Suggested packages are never installed. + - Corresponds to the C(--no-install-recommends) option for I(apt). Default behavior (C(yes)) replicates apt's default behavior; C(no) does not install recommended packages. Suggested packages are never installed. required: false default: yes choices: [ "yes", "no" ] @@ -88,6 +88,11 @@ options: - Options should be supplied as comma separated list required: false default: 'force-confdef,force-confold' + deb: + description: + - Path to a local .deb package file to install. + required: false + version_added: "1.6" requirements: [ python-apt, aptitude ] author: Matthew Williams notes: @@ -125,6 +130,9 @@ EXAMPLES = ''' # Pass options to dpkg on run - apt: upgrade=dist update_cache=yes dpkg_options='force-confold,force-confdef' + +# Install a .deb package +- apt: deb=/tmp/mypackage.deb ''' @@ -138,7 +146,11 @@ import datetime import fnmatch # APT related constants -APT_ENVVARS = "DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical" +APT_ENV_VARS = dict( + DEBIAN_FRONTEND = 'noninteractive', + DEBIAN_PRIORITY = 'critical' +) + DPKG_OPTIONS = 'force-confdef,force-confold' APT_GET_ZERO = "0 upgraded, 0 newly installed" APTITUDE_ZERO = "0 packages upgraded, 0 newly installed" @@ -148,8 +160,9 @@ APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp" HAS_PYTHON_APT = True try: import apt + import apt.debfile import apt_pkg -except: +except ImportError: HAS_PYTHON_APT = False def package_split(pkgspec): @@ -182,7 +195,7 @@ def package_status(m, pkgname, version, cache, state): has_files = False # older python-apt cannot be used to determine non-purged try: - package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED + package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED except AttributeError: # python-apt 0.7.X has very weak low-level object try: # might not be necessary as python-apt post-0.7.X should have current_state property @@ -260,7 +273,10 @@ def install(m, pkgspec, cache, upgrade=False, default_release=None, else: check_arg = '' - cmd = "%s %s -y %s %s %s install %s" % (APT_ENVVARS, APT_GET_CMD, dpkg_options, force_yes, check_arg, packages) + for (k,v) in APT_ENV_VARS.iteritems(): + os.environ[k] = v + + cmd = "%s -y %s %s %s install %s" % (APT_GET_CMD, dpkg_options, force_yes, check_arg, packages) if default_release: cmd += " -t '%s'" % (default_release,) @@ -269,12 +285,57 @@ def install(m, pkgspec, cache, upgrade=False, default_release=None, rc, out, err = m.run_command(cmd) if rc: - m.fail_json(msg="'apt-get install %s' failed: %s" % (packages, err), stdout=out, stderr=err) + return (False, dict(msg="'apt-get install %s' failed: %s" % (packages, err), stdout=out, stderr=err)) else: - m.exit_json(changed=True, stdout=out, stderr=err) + return (True, dict(changed=True, stdout=out, stderr=err)) else: + return (True, dict(changed=False)) + +def install_deb(m, debfile, cache, force, install_recommends, dpkg_options): + changed=False + pkg = apt.debfile.DebPackage(debfile) + + # Check if it's already 
installed + if pkg.compare_to_version_in_cache() == pkg.VERSION_SAME: m.exit_json(changed=False) + # Check if package is installable + if not pkg.check(): + m.fail_json(msg=pkg._failure_string) + + (success, retvals) = install(m=m, pkgspec=pkg.missing_deps, + cache=cache, + install_recommends=install_recommends, + dpkg_options=expand_dpkg_options(dpkg_options)) + if not success: + m.fail_json(**retvals) + changed = retvals['changed'] + + + options = ' '.join(["--%s"% x for x in dpkg_options.split(",")]) + + if m.check_mode: + options += " --simulate" + if force: + options += " --force-yes" + + + cmd = "dpkg %s -i %s" % (options, debfile) + rc, out, err = m.run_command(cmd) + + if "stdout" in retvals: + stdout = retvals["stdout"] + out + else: + stdout = out + if "stderr" in retvals: + stderr = retvals["stderr"] + err + else: + stderr = err + if rc == 0: + m.exit_json(changed=True, stdout=stdout, stderr=stderr) + else: + m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr) + def remove(m, pkgspec, cache, purge=False, dpkg_options=expand_dpkg_options(DPKG_OPTIONS)): packages = "" @@ -292,7 +353,11 @@ def remove(m, pkgspec, cache, purge=False, purge = '--purge' else: purge = '' - cmd = "%s %s -q -y %s %s remove %s" % (APT_ENVVARS, APT_GET_CMD, dpkg_options, purge, packages) + + for (k,v) in APT_ENV_VARS.iteritems(): + os.environ[k] = v + + cmd = "%s -q -y %s %s remove %s" % (APT_GET_CMD, dpkg_options, purge, packages) if m.check_mode: m.exit_json(changed=True) @@ -332,7 +397,11 @@ def upgrade(m, mode="yes", force=False, force_yes = '' apt_cmd_path = m.get_bin_path(apt_cmd, required=True) - cmd = '%s %s -y %s %s %s %s' % (APT_ENVVARS, apt_cmd_path, dpkg_options, + + for (k,v) in APT_ENV_VARS.iteritems(): + os.environ[k] = v + + cmd = '%s -y %s %s %s %s' % (apt_cmd_path, dpkg_options, force_yes, check_arg, upgrade_command) rc, out, err = m.run_command(cmd) if rc: @@ -349,20 +418,21 @@ def main(): cache_valid_time = dict(type='int'), purge = dict(default=False, type='bool'), package = dict(default=None, aliases=['pkg', 'name']), + deb = dict(default=None), default_release = dict(default=None, aliases=['default-release']), install_recommends = dict(default='yes', aliases=['install-recommends'], type='bool'), force = dict(default='no', type='bool'), upgrade = dict(choices=['yes', 'safe', 'full', 'dist']), dpkg_options = dict(default=DPKG_OPTIONS) ), - mutually_exclusive = [['package', 'upgrade']], - required_one_of = [['package', 'upgrade', 'update_cache']], + mutually_exclusive = [['package', 'upgrade', 'deb']], + required_one_of = [['package', 'upgrade', 'update_cache', 'deb']], supports_check_mode = True ) if not HAS_PYTHON_APT: try: - module.run_command('apt-get update && apt-get install python-apt -y -q') + module.run_command('apt-get update && apt-get install python-apt -y -q', use_unsafe_shell=True) global apt, apt_pkg import apt import apt_pkg @@ -421,7 +491,7 @@ def main(): if cache_valid is not True: cache.update() cache.open(progress=None) - if not p['package'] and not p['upgrade']: + if not p['package'] and not p['upgrade'] and not p['deb']: module.exit_json(changed=False) force_yes = p['force'] @@ -429,6 +499,13 @@ def main(): if p['upgrade']: upgrade(module, p['upgrade'], force_yes, dpkg_options) + if p['deb']: + if p['state'] != "installed": + module.fail_json(msg="deb only supports state=installed") + install_deb(module, p['deb'], cache, + install_recommends=install_recommends, + force=force_yes, dpkg_options=p['dpkg_options']) + packages = p['package'].split(',') 
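The install_deb path above leans on python-apt's DebPackage class for its idempotence checks before ever shelling out to dpkg. A minimal sketch of those calls, with a placeholder .deb path (illustrative only, not part of the patch):

import apt
import apt.debfile

pkg = apt.debfile.DebPackage('/tmp/mypackage.deb', apt.Cache())  # placeholder path
if pkg.compare_to_version_in_cache() == pkg.VERSION_SAME:
    print 'already installed, nothing to do'
elif not pkg.check():
    # check() verifies dependencies/conflicts against the cache
    print 'not installable: %s' % pkg._failure_string
else:
    print 'installable; missing deps: %s' % ', '.join(pkg.missing_deps)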
latest = p['state'] == 'latest' for package in packages: @@ -438,14 +515,24 @@ def main(): module.fail_json(msg='version number inconsistent with state=latest: %s' % package) if p['state'] == 'latest': - install(module, packages, cache, upgrade=True, + result = install(module, packages, cache, upgrade=True, default_release=p['default_release'], install_recommends=install_recommends, force=force_yes, dpkg_options=dpkg_options) + (success, retvals) = result + if success: + module.exit_json(**retvals) + else: + module.fail_json(**retvals) elif p['state'] in [ 'installed', 'present' ]: - install(module, packages, cache, default_release=p['default_release'], + result = install(module, packages, cache, default_release=p['default_release'], install_recommends=install_recommends,force=force_yes, dpkg_options=dpkg_options) + (success, retvals) = result + if success: + module.exit_json(**retvals) + else: + module.fail_json(**retvals) elif p['state'] in [ 'removed', 'absent' ]: remove(module, packages, cache, p['purge'], dpkg_options) diff --git a/library/packaging/apt_key b/library/packaging/apt_key index eee86337020..2308d34329f 100644 --- a/library/packaging/apt_key +++ b/library/packaging/apt_key @@ -58,12 +58,26 @@ options: default: none description: - url to retrieve key from. + keyserver: + version_added: "1.6" + required: false + default: none + description: + - keyserver to retrieve key from. state: required: false choices: [ absent, present ] default: present description: - used to specify if key is being added or revoked + validate_certs: + description: + - If C(no), SSL certificates for the target url will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + ''' EXAMPLES = ''' @@ -88,7 +102,6 @@ EXAMPLES = ''' # FIXME: standardize into module_common -from urllib2 import urlopen, URLError from traceback import format_exc from re import compile as re_compile # FIXME: standardize into module_common @@ -105,7 +118,7 @@ REQUIRED_EXECUTABLES=['gpg', 'grep', 'apt-key'] def check_missing_binaries(module): missing = [e for e in REQUIRED_EXECUTABLES if not find_executable(e)] if len(missing): - module.fail_json(msg="binaries are missing", names=all) + module.fail_json(msg="binaries are missing", names=missing) def all_keys(module, keyring): if keyring: @@ -124,7 +137,7 @@ def all_keys(module, keyring): return results def key_present(module, key_id): - (rc, out, err) = module.run_command("apt-key list | 2>&1 grep -i -q %s" % key_id) + (rc, out, err) = module.run_command("apt-key list | 2>&1 grep -i -q %s" % pipes.quote(key_id), use_unsafe_shell=True) return rc == 0 def download_key(module, url): @@ -133,14 +146,15 @@ def download_key(module, url): if url is None: module.fail_json(msg="needed a URL but was not specified") try: - connection = urlopen(url) - if connection is None: - module.fail_json("error connecting to download key from url") - data = connection.read() - return data + rsp, info = fetch_url(module, url) + return rsp.read() except Exception: - module.fail_json(msg="error getting key id from url", traceback=format_exc()) + module.fail_json(msg="error getting key id from url: %s" % url, traceback=format_exc()) +def import_key(module, keyserver, key_id): + cmd = "apt-key adv --keyserver %s --recv %s" % (keyserver, key_id) + (rc, out, err) = module.run_command(cmd, check_rc=True) + return True def add_key(module, keyfile, keyring, data=None): if data is not None: @@ -175,6 
+189,8 @@ def main():
             file=dict(required=False),
             key=dict(required=False),
             keyring=dict(required=False),
+            validate_certs=dict(default='yes', type='bool'),
+            keyserver=dict(required=False),
             state=dict(required=False, choices=['present', 'absent'], default='present')
         ),
         supports_check_mode=True
@@ -186,6 +202,7 @@ def main():
     filename = module.params['file']
     keyring = module.params['keyring']
     state = module.params['state']
+    keyserver = module.params['keyserver']
     changed = False
 
     if key_id:
@@ -194,7 +211,7 @@ def main():
             if key_id.startswith('0x'):
                 key_id = key_id[2:]
         except ValueError:
-            module.fail_json("Invalid key_id")
+            module.fail_json(msg="Invalid key_id", id=key_id)
 
     # FIXME: I think we have a common facility for this, if not, want
     check_missing_binaries(module)
@@ -206,7 +223,7 @@ def main():
         if key_id and key_id in keys:
             module.exit_json(changed=False)
         else:
-            if not filename and not data:
+            if not filename and not data and not keyserver:
                 data = download_key(module, url)
             if key_id and key_id in keys:
                 module.exit_json(changed=False)
@@ -215,6 +232,8 @@ def main():
                     module.exit_json(changed=True)
                 if filename:
                     add_key(module, filename, keyring)
+                elif keyserver:
+                    import_key(module, keyserver, key_id)
                 else:
                     add_key(module, "-", keyring, data)
                 changed=False
@@ -240,4 +259,5 @@ def main():
 
 # import module snippets
 from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
 main()
diff --git a/library/packaging/apt_repository b/library/packaging/apt_repository
index 4587d90ba78..a0d3b89e739 100644
--- a/library/packaging/apt_repository
+++ b/library/packaging/apt_repository
@@ -28,7 +28,7 @@ short_description: Add and remove APT repositories
 description:
     - Add or remove an APT repository in Ubuntu and Debian.
 notes:
-    - This module works on Debian and Ubuntu and requires C(python-apt) and C(python-pycurl) packages.
+    - This module works on Debian and Ubuntu and requires C(python-apt).
     - This module supports Debian Squeeze (version 6) as well as its successors.
     - This module treats Debian and Ubuntu distributions separately. So PPA could be installed only on Ubuntu machines.
 options:
@@ -43,15 +43,21 @@ options:
         default: "present"
         description:
             - A source string state.
+    mode:
+        required: false
+        default: 0644
+        description:
+            - The octal mode for newly created files in sources.list.d
+        version_added: "1.6"
     update_cache:
         description:
-            - Run the equivalent of C(apt-get update) if has changed.
+            - Run the equivalent of C(apt-get update) when a change occurs. Cache updates are run after making changes.
required: false default: "yes" choices: [ "yes", "no" ] author: Alexander Saltanov version_added: "0.7" -requirements: [ python-apt, python-pycurl ] +requirements: [ python-apt ] ''' EXAMPLES = ''' @@ -70,10 +76,6 @@ apt_repository: repo='ppa:nginx/stable' ''' import glob -try: - import json -except ImportError: - import simplejson as json import os import re import tempfile @@ -87,22 +89,19 @@ try: except ImportError: HAVE_PYTHON_APT = False -try: - import pycurl - HAVE_PYCURL = True -except ImportError: - HAVE_PYCURL = False VALID_SOURCE_TYPES = ('deb', 'deb-src') +def install_python_apt(module): -class CurlCallback: - def __init__(self): - self.contents = '' - - def body_callback(self, buf): - self.contents = self.contents + buf - + if not module.check_mode: + apt_get_path = module.get_bin_path('apt-get') + if apt_get_path: + rc, so, se = module.run_command('%s update && %s install python-apt -y -q' % (apt_get_path, apt_get_path)) + if rc == 0: + global apt, apt_pkg + import apt + import apt_pkg class InvalidSource(Exception): pass @@ -140,12 +139,22 @@ class SourcesList(object): def _suggest_filename(self, line): def _cleanup_filename(s): return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', s).split()) + def _strip_username_password(s): + if '@' in s: + s = s.split('@', 1) + s = s[-1] + return s # Drop options and protocols. line = re.sub('\[[^\]]+\]', '', line) line = re.sub('\w+://', '', line) + # split line into valid keywords parts = [part for part in line.split() if part not in VALID_SOURCE_TYPES] + + # Drop usernames and passwords + parts[0] = _strip_username_password(parts[0]) + return '%s.list' % _cleanup_filename(' '.join(parts[:1])) def _parse(self, line, raise_if_invalid_or_disabled=False): @@ -214,7 +223,10 @@ class SourcesList(object): if sources: d, fn = os.path.split(filename) fd, tmp_path = tempfile.mkstemp(prefix=".%s-" % fn, dir=d) - os.chmod(os.path.join(fd, tmp_path), 0644) + + # allow the user to override the default mode + this_mode = module.params['mode'] + module.set_mode_if_different(tmp_path, this_mode, False) f = os.fdopen(fd, 'w') for n, valid, enabled, source, comment in sources: @@ -290,29 +302,19 @@ class SourcesList(object): class UbuntuSourcesList(SourcesList): - LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s' + LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s' - def __init__(self, add_ppa_signing_keys_callback=None): + def __init__(self, module, add_ppa_signing_keys_callback=None): + self.module = module self.add_ppa_signing_keys_callback = add_ppa_signing_keys_callback super(UbuntuSourcesList, self).__init__() def _get_ppa_info(self, owner_name, ppa_name): - # we can not use urllib2 here as it does not do cert verification lp_api = self.LP_API % (owner_name, ppa_name) - return self._get_ppa_info_curl(lp_api) - - def _get_ppa_info_curl(self, lp_api): - callback = CurlCallback() - curl = pycurl.Curl() - curl.setopt(pycurl.SSL_VERIFYPEER, 1) - curl.setopt(pycurl.SSL_VERIFYHOST, 2) - curl.setopt(pycurl.WRITEFUNCTION, callback.body_callback) - curl.setopt(pycurl.URL, str(lp_api)) - curl.setopt(pycurl.HTTPHEADER, ["Accept: application/json"]) - curl.perform() - curl.close() - lp_page = callback.contents - return json.loads(lp_page) + + headers = dict(Accept='application/json') + response, info = fetch_url(self.module, lp_api, headers=headers) + return json.load(response) def _expand_ppa(self, path): ppa = path.split(':')[1] @@ -352,7 +354,10 @@ def get_add_ppa_signing_key_callback(module): def _run_command(command): module.run_command(command, 
check_rc=True) - return _run_command if not module.check_mode else None + if module.check_mode: + return None + else: + return _run_command def main(): @@ -360,16 +365,17 @@ def main(): argument_spec=dict( repo=dict(required=True), state=dict(choices=['present', 'absent'], default='present'), + mode=dict(required=False, default=0644), update_cache = dict(aliases=['update-cache'], type='bool', default='yes'), + # this should not be needed, but exists as a failsafe + install_python_apt=dict(required=False, default="yes", type='bool'), ), supports_check_mode=True, ) - if not HAVE_PYTHON_APT: - module.fail_json(msg='Could not import python modules: apt_pkg. Please install python-apt package.') - - if not HAVE_PYCURL: - module.fail_json(msg='Could not import python modules: pycurl. Please install python-pycurl package.') + params = module.params + if params['install_python_apt'] and not HAVE_PYTHON_APT and not module.check_mode: + install_python_apt(module) repo = module.params['repo'] state = module.params['state'] @@ -377,7 +383,8 @@ def main(): sourceslist = None if isinstance(distro, aptsources.distro.UbuntuDistribution): - sourceslist = UbuntuSourcesList(add_ppa_signing_keys_callback=get_add_ppa_signing_key_callback(module)) + sourceslist = UbuntuSourcesList(module, + add_ppa_signing_keys_callback=get_add_ppa_signing_key_callback(module)) elif isinstance(distro, aptsources.distro.DebianDistribution) or \ isinstance(distro, aptsources.distro.Distribution): sourceslist = SourcesList() @@ -410,5 +417,6 @@ def main(): # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * main() diff --git a/library/packaging/apt_rpm b/library/packaging/apt_rpm new file mode 100755 index 00000000000..0eca3132224 --- /dev/null +++ b/library/packaging/apt_rpm @@ -0,0 +1,168 @@ +#!/usr/bin/python -tt +# -*- coding: utf-8 -*- + +# (c) 2013, Evgenii Terechkov +# Written by Evgenii Terechkov +# Based on urpmi module written by Philippe Makowski +# +# This module is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This software is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this software. If not, see . + + +DOCUMENTATION = ''' +--- +module: apt_rpm +short_description: apt_rpm package manager +description: + - Manages packages with I(apt-rpm). Both low-level (I(rpm)) and high-level (I(apt-get)) package manager binaries required. +version_added: "1.5" +options: + pkg: + description: + - name of package to install, upgrade or remove. + required: true + default: null + state: + description: + - Indicates the desired package state + required: false + default: present + choices: [ "absent", "present" ] + update_cache: + description: + - update the package database first C(apt-get update). 
+ required: false + default: no + choices: [ "yes", "no" ] +author: Evgenii Terechkov +notes: [] +''' + +EXAMPLES = ''' +# install package foo +- apt_rpm: pkg=foo state=present +# remove package foo +- apt_rpm: pkg=foo state=absent +# description: remove packages foo and bar +- apt_rpm: pkg=foo,bar state=absent +# description: update the package database and install bar (bar will be the updated if a newer version exists) +- apt_rpm: name=bar state=present update_cache=yes +''' + + +import json +import shlex +import os +import sys + +APT_PATH="/usr/bin/apt-get" +RPM_PATH="/usr/bin/rpm" + +def query_package(module, name): + # rpm -q returns 0 if the package is installed, + # 1 if it is not installed + rc = os.system("%s -q %s" % (RPM_PATH,name)) + if rc == 0: + return True + else: + return False + +def query_package_provides(module, name): + # rpm -q returns 0 if the package is installed, + # 1 if it is not installed + rc = os.system("%s -q --provides %s >/dev/null" % (RPM_PATH,name)) + return rc == 0 + +def update_package_db(module): + rc = os.system("%s update" % APT_PATH) + + if rc != 0: + module.fail_json(msg="could not update package db") + +def remove_packages(module, packages): + + remove_c = 0 + # Using a for loop incase of error, we can report the package that failed + for package in packages: + # Query the package first, to see if we even need to remove + if not query_package(module, package): + continue + + rc = os.system("%s -y remove %s > /dev/null" % (APT_PATH,package)) + + if rc != 0: + module.fail_json(msg="failed to remove %s" % (package)) + + remove_c += 1 + + if remove_c > 0: + module.exit_json(changed=True, msg="removed %s package(s)" % remove_c) + + module.exit_json(changed=False, msg="package(s) already absent") + + +def install_packages(module, pkgspec): + + packages = "" + for package in pkgspec: + if not query_package_provides(module, package): + packages += "'%s' " % package + + if len(packages) != 0: + + cmd = ("%s -y install %s > /dev/null" % (APT_PATH, packages)) + + rc, out, err = module.run_command(cmd) + + installed = True + for packages in pkgspec: + if not query_package_provides(module, package): + installed = False + + # apt-rpm always have 0 for exit code if --force is used + if rc or not installed: + module.fail_json(msg="'apt-get -y install %s' failed: %s" % (packages, err)) + else: + module.exit_json(changed=True, msg="%s present(s)" % packages) + else: + module.exit_json(changed=False) + + +def main(): + module = AnsibleModule( + argument_spec = dict( + state = dict(default='installed', choices=['installed', 'removed', 'absent', 'present']), + update_cache = dict(default=False, aliases=['update-cache'], type='bool'), + package = dict(aliases=['pkg', 'name'], required=True))) + + + if not os.path.exists(APT_PATH) or not os.path.exists(RPM_PATH): + module.fail_json(msg="cannot find /usr/bin/apt-get and/or /usr/bin/rpm") + + p = module.params + + if p['update_cache']: + update_package_db(module) + + packages = p['package'].split(',') + + if p['state'] in [ 'installed', 'present' ]: + install_packages(module, packages) + + elif p['state'] in [ 'removed', 'absent' ]: + remove_packages(module, packages) + +# this is magic, see lib/ansible/module_common.py +from ansible.module_utils.basic import * + +main() diff --git a/library/packaging/composer b/library/packaging/composer new file mode 100644 index 00000000000..983a38dec64 --- /dev/null +++ b/library/packaging/composer @@ -0,0 +1,153 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2014, Dimitrios 
Tydeas Mengidis + +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. +# + +DOCUMENTATION = ''' +--- +module: composer +author: Dimitrios Tydeas Mengidis +short_description: Dependency Manager for PHP +version_added: "1.6" +description: + - Composer is a tool for dependency management in PHP. It allows you to declare the dependent libraries your project needs, and it will install them in your project for you. +options: + working_dir: + description: + - Directory of your project ( see --working-dir ) + required: true + default: null + aliases: [ "working-dir" ] + prefer_source: + description: + - Forces installation from package sources when possible ( see --prefer-source ) + required: false + default: "no" + choices: [ "yes", "no" ] + aliases: [ "prefer-source" ] + prefer_dist: + description: + - Forces installation from package dist even for dev versions ( see --prefer-dist ) + required: false + default: "no" + choices: [ "yes", "no" ] + aliases: [ "prefer-dist" ] + no_dev: + description: + - Disables installation of require-dev packages ( see --no-dev ) + required: false + default: "yes" + choices: [ "yes", "no" ] + aliases: [ "no-dev" ] + no_scripts: + description: + - Skips the execution of all scripts defined in composer.json ( see --no-scripts ) + required: false + default: "no" + choices: [ "yes", "no" ] + aliases: [ "no-scripts" ] + no_plugins: + description: + - Disables all plugins ( see --no-plugins ) + required: false + default: "no" + choices: [ "yes", "no" ] + aliases: [ "no-plugins" ] + optimize_autoloader: + description: + - Optimize autoloader during autoloader dump ( see --optimize-autoloader ). Converts PSR-0/4 autoloading to classmap to get a faster autoloader. This is recommended especially for production, but can take a bit of time to run, so it is currently not done by default.
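+      # Hedged sketch of the command line these options produce (flag names are
+      # assumed to mirror the option names above; binary paths come from
+      # get_bin_path, and the option order is unspecified since a set is used):
+      #   php /usr/local/bin/composer install --no-ansi --no-progress \
+      #       --no-interaction --no-dev --optimize-autoloader \
+      #       --working-dir=/path/to/project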
+ required: false + default: "yes" + choices: [ "yes", "no" ] + aliases: [ "optimize-autoloader" ] +requirements: + - php + - composer installed in bin path (recommended /usr/local/bin) +notes: + - Default options that are always appended in each execution are --no-ansi, --no-progress, and --no-interaction +''' + +EXAMPLES = ''' +# Downloads and installs all the libs and dependencies outlined in the /path/to/project/composer.lock +- composer: command=install working_dir=/path/to/project +''' + +import os +import re + +def parse_out(string): + return re.sub("\s+", " ", string).strip() + +def has_changed(string): + # "Nothing to install or update" means composer made no changes + return (re.match("Nothing to install or update", string) == None) + +def composer_install(module, options): + php_path = module.get_bin_path("php", True, ["/usr/local/bin"]) + composer_path = module.get_bin_path("composer", True, ["/usr/local/bin"]) + cmd = "%s %s install %s" % (php_path, composer_path, " ".join(options)) + + return module.run_command(cmd) + +def main(): + module = AnsibleModule( + argument_spec = dict( + working_dir = dict(aliases=["working-dir"], required=True), + prefer_source = dict(default="no", type="bool", aliases=["prefer-source"]), + prefer_dist = dict(default="no", type="bool", aliases=["prefer-dist"]), + no_dev = dict(default="yes", type="bool", aliases=["no-dev"]), + no_scripts = dict(default="no", type="bool", aliases=["no-scripts"]), + no_plugins = dict(default="no", type="bool", aliases=["no-plugins"]), + optimize_autoloader = dict(default="yes", type="bool", aliases=["optimize-autoloader"]), + ), + supports_check_mode=True + ) + + module.params["working_dir"] = os.path.abspath(module.params["working_dir"]) + + options = set([]) + # Default options + options.add("--no-ansi") + options.add("--no-progress") + options.add("--no-interaction") + + if module.check_mode: + options.add("--dry-run") + + # Prepare options + for i in module.params: + opt = "--%s" % i.replace("_","-") + p = module.params[i] + if isinstance(p, (bool)) and p: + options.add(opt) + elif isinstance(p, (str)): + options.add("%s=%s" % (opt, p)) + + rc, out, err = composer_install(module, options) + + if rc != 0: + output = parse_out(err) + module.fail_json(msg=output) + else: + output = parse_out(out) + module.exit_json(changed=has_changed(output), msg=output) + +# import module snippets +from ansible.module_utils.basic import * + +main() diff --git a/library/packaging/cpanm b/library/packaging/cpanm index 5f5ae98022f..5b1a9878d21 100644 --- a/library/packaging/cpanm +++ b/library/packaging/cpanm @@ -25,7 +25,7 @@ module: cpanm short_description: Manages Perl library dependencies. description: - Manage Perl library dependencies. -version_added: "1.0" +version_added: "1.6" options: name: description: @@ -72,14 +72,17 @@ author: Franck Cuny def _is_package_installed(module, name, locallib, cpanm): cmd = "" if locallib: - cmd = "PERL5LIB={locallib}/lib/perl5".format(locallib=locallib) - cmd = "{cmd} perl -M{name} -e '1'".format(cmd=cmd, name=name) + os.environ["PERL5LIB"] = "%s/lib/perl5" % locallib + cmd = "%s perl -M%s -e '1'" % (cmd, name) res, stdout, stderr = module.run_command(cmd, check_rc=False) - installed = True if res == 0 else False - return installed - + if res == 0: + return True + else: + return False def _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm): + # this code should use "%s" like everything else and just return early but not fixing all of it now.
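+    # A minimal sketch (hypothetical, not this module's code) of the "%s"
+    # early-return style the comments here recommend:
+    #
+    #   def _build_cmd_line(name, from_path, cpanm):
+    #       if from_path:
+    #           return "%s %s" % (cpanm, from_path)
+    #       return "%s %s" % (cpanm, name)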
+ # don't copy stuff like this if from_path: cmd = "{cpanm} {path}".format(cpanm=cpanm, path=from_path) else: @@ -111,21 +114,20 @@ def main(): required_one_of=[['name', 'from_path']], ) - cpanm = module.get_bin_path('cpanm', True) - - name = module.params['name'] + cpanm = module.get_bin_path('cpanm', True) + name = module.params['name'] from_path = module.params['from_path'] - notest = module.boolean(module.params.get('notest', False)) - locallib = module.params['locallib'] - mirror = module.params['mirror'] + notest = module.boolean(module.params.get('notest', False)) + locallib = module.params['locallib'] + mirror = module.params['mirror'] - changed = False + changed = False installed = _is_package_installed(module, name, locallib, cpanm) if not installed: out_cpanm = err_cpanm = '' - cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm) + cmd = _build_cmd_line(name, from_path, notest, locallib, mirror, cpanm) rc_cpanm, out_cpanm, err_cpanm = module.run_command(cmd, check_rc=False) @@ -137,7 +139,6 @@ def main(): module.exit_json(changed=changed, binary=cpanm, name=name) - # import module snippets from ansible.module_utils.basic import * diff --git a/library/packaging/easy_install b/library/packaging/easy_install index bdacf8e464b..889a81f025a 100644 --- a/library/packaging/easy_install +++ b/library/packaging/easy_install @@ -151,8 +151,8 @@ def main(): command = '%s %s' % (virtualenv, env) if site_packages: command += ' --system-site-packages' - os.chdir(tempfile.gettempdir()) - rc_venv, out_venv, err_venv = module.run_command(command) + cwd = tempfile.gettempdir() + rc_venv, out_venv, err_venv = module.run_command(command, cwd=cwd) rc += rc_venv out += out_venv diff --git a/library/packaging/gem b/library/packaging/gem index 25fc337e14e..0d1a157a1f4 100644 --- a/library/packaging/gem +++ b/library/packaging/gem @@ -34,8 +34,9 @@ options: state: description: - The desired state of the gem. C(latest) ensures that the latest version is installed. - required: true + required: false choices: [present, absent, latest] + default: present gem_source: description: - The path to a local gem used as installation source. @@ -66,6 +67,12 @@ options: description: - Version of the gem to be installed/removed. required: false + pre_release: + description: + - Allow installation of pre-release versions of the gem. 
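+      # Hedged example (the gem name is only illustrative):
+      #   - gem: name=rails pre_release=yes state=present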
+ required: false + default: "no" + version_added: "1.6" author: Johan Wiren ''' @@ -89,7 +96,7 @@ def get_rubygems_path(module): return module.get_bin_path('gem', True) def get_rubygems_version(module): - cmd = [get_rubygems_path(module), '--version'] + cmd = [ get_rubygems_path(module), '--version' ] (rc, out, err) = module.run_command(cmd, check_rc=True) match = re.match(r'^(\d+)\.(\d+)\.(\d+)', out) @@ -173,6 +180,8 @@ def install(module): cmd.append('--user-install') else: cmd.append('--no-user-install') + if module.params['pre_release']: + cmd.append('--pre') cmd.append('--no-rdoc') cmd.append('--no-ri') cmd.append(module.params['gem_source']) @@ -187,8 +196,9 @@ def main(): include_dependencies = dict(required=False, default=True, type='bool'), name = dict(required=True, type='str'), repository = dict(required=False, aliases=['source'], type='str'), - state = dict(required=False, choices=['present','absent','latest'], type='str'), + state = dict(required=False, default='present', choices=['present','absent','latest'], type='str'), user_install = dict(required=False, default=True, type='bool'), + pre_release = dict(required=False, default=False, type='bool'), version = dict(required=False, type='str'), ), supports_check_mode = True, diff --git a/library/packaging/homebrew b/library/packaging/homebrew index ab1362acf1d..0dfc86096ff 100644 --- a/library/packaging/homebrew +++ b/library/packaging/homebrew @@ -2,6 +2,8 @@ # -*- coding: utf-8 -*- # (c) 2013, Andrew Dunham +# (c) 2013, Daniel Jaouen +# # Based on macports (Jimmy Tang ) # # This module is free software: you can redistribute it and/or modify @@ -20,11 +22,11 @@ DOCUMENTATION = ''' --- module: homebrew -author: Andrew Dunham +author: Andrew Dunham and Daniel Jaouen short_description: Package manager for Homebrew description: - Manages Homebrew packages -version_added: "1.4" +version_added: "1.1" options: name: description: @@ -33,7 +35,7 @@ options: state: description: - state of the package - choices: [ 'present', 'absent' ] + choices: [ 'head', 'latest', 'present', 'absent', 'linked', 'unlinked' ] required: false default: present update_homebrew: @@ -47,130 +49,743 @@ options: - options flags to install a package required: false default: null + version_added: "1.4" notes: [] ''' EXAMPLES = ''' - homebrew: name=foo state=present - homebrew: name=foo state=present update_homebrew=yes +- homebrew: name=foo state=latest update_homebrew=yes +- homebrew: update_homebrew=yes upgrade=yes +- homebrew: name=foo state=head +- homebrew: name=foo state=linked - homebrew: name=foo state=absent - homebrew: name=foo,bar state=absent - homebrew: name=foo state=present install_options=with-baz,enable-debug ''' +import os.path +import re + + +# exceptions -------------------------------------------------------------- {{{ +class HomebrewException(Exception): + pass +# /exceptions ------------------------------------------------------------- }}} + + +# utils ------------------------------------------------------------------- {{{ +def _create_regex_group(s): + lines = (line.strip() for line in s.split('\n') if line.strip()) + chars = filter(None, (line.split('#')[0].strip() for line in lines)) + group = r'[^' + r''.join(chars) + r']' + return re.compile(group) +# /utils ------------------------------------------------------------------ }}} + + +class Homebrew(object): + '''A class to manage Homebrew packages.''' + + # class regexes ------------------------------------------------ {{{ + VALID_PATH_CHARS = r''' + \w # alphanumeric characters 
(i.e., [a-zA-Z0-9_]) + \s # spaces + : # colons + {sep} # the OS-specific path separator + - # dashes + '''.format(sep=os.path.sep) + + VALID_BREW_PATH_CHARS = r''' + \w # alphanumeric characters (i.e., [a-zA-Z0-9_]) + \s # spaces + {sep} # the OS-specific path separator + - # dashes + '''.format(sep=os.path.sep) + + VALID_PACKAGE_CHARS = r''' + \w # alphanumeric characters (i.e., [a-zA-Z0-9_]) + - # dashes + ''' + + INVALID_PATH_REGEX = _create_regex_group(VALID_PATH_CHARS) + INVALID_BREW_PATH_REGEX = _create_regex_group(VALID_BREW_PATH_CHARS) + INVALID_PACKAGE_REGEX = _create_regex_group(VALID_PACKAGE_CHARS) + # /class regexes ----------------------------------------------- }}} + + # class validations -------------------------------------------- {{{ + @classmethod + def valid_path(cls, path): + ''' + `path` must be one of: + - list of paths + - a string containing only: + - alphanumeric characters + - dashes + - spaces + - colons + - os.path.sep + ''' + + if isinstance(path, basestring): + return not cls.INVALID_PATH_REGEX.search(path) + + try: + iter(path) + except TypeError: + return False + else: + paths = path + return all(cls.valid_brew_path(path_) for path_ in paths) + + @classmethod + def valid_brew_path(cls, brew_path): + ''' + `brew_path` must be one of: + - None + - a string containing only: + - alphanumeric characters + - dashes + - spaces + - os.path.sep + ''' + + if brew_path is None: + return True -def update_homebrew(module, brew_path): - """ Updates packages list. """ - - rc, out, err = module.run_command("%s update" % brew_path) - - if rc != 0: - module.fail_json(msg="could not update homebrew") + return ( + isinstance(brew_path, basestring) + and not cls.INVALID_BREW_PATH_REGEX.search(brew_path) + ) + @classmethod + def valid_package(cls, package): + '''A valid package is either None or alphanumeric.''' -def query_package(module, brew_path, name, state="present"): - """ Returns whether a package is installed or not. 
""" + if package is None: + return True - if state == "present": - rc, out, err = module.run_command("%s list %s" % (brew_path, name)) - if rc == 0: + return ( + isinstance(package, basestring) + and not cls.INVALID_PACKAGE_REGEX.search(package) + ) + + @classmethod + def valid_state(cls, state): + ''' + A valid state is one of: + - None + - installed + - upgraded + - head + - linked + - unlinked + - absent + ''' + + if state is None: return True + else: + return ( + isinstance(state, basestring) + and state.lower() in ( + 'installed', + 'upgraded', + 'head', + 'linked', + 'unlinked', + 'absent', + ) + ) + + @classmethod + def valid_module(cls, module): + '''A valid module is an instance of AnsibleModule.''' + + return isinstance(module, AnsibleModule) + + # /class validations ------------------------------------------- }}} + + # class properties --------------------------------------------- {{{ + @property + def module(self): + return self._module + + @module.setter + def module(self, module): + if not self.valid_module(module): + self._module = None + self.failed = True + self.message = 'Invalid module: {0}.'.format(module) + raise HomebrewException(self.message) + + else: + self._module = module + return module + + @property + def path(self): + return self._path + + @path.setter + def path(self, path): + if not self.valid_path(path): + self._path = [] + self.failed = True + self.message = 'Invalid path: {0}.'.format(path) + raise HomebrewException(self.message) + + else: + if isinstance(path, basestring): + self._path = path.split(':') + else: + self._path = path + + return path + + @property + def brew_path(self): + return self._brew_path + + @brew_path.setter + def brew_path(self, brew_path): + if not self.valid_brew_path(brew_path): + self._brew_path = None + self.failed = True + self.message = 'Invalid brew_path: {0}.'.format(brew_path) + raise HomebrewException(self.message) + + else: + self._brew_path = brew_path + return brew_path + + @property + def params(self): + return self._params + + @params.setter + def params(self, params): + self._params = self.module.params + return self._params + + @property + def current_package(self): + return self._current_package + + @current_package.setter + def current_package(self, package): + if not self.valid_package(package): + self._current_package = None + self.failed = True + self.message = 'Invalid package: {0}.'.format(package) + raise HomebrewException(self.message) + + else: + self._current_package = package + return package + # /class properties -------------------------------------------- }}} + + def __init__(self, module, path=None, packages=None, state=None, + update_homebrew=False, install_options=None): + if not install_options: + install_options = list() + self._setup_status_vars() + self._setup_instance_vars(module=module, path=path, packages=packages, + state=state, update_homebrew=update_homebrew, + install_options=install_options, ) + + self._prep() + + # prep --------------------------------------------------------- {{{ + def _setup_status_vars(self): + self.failed = False + self.changed = False + self.changed_count = 0 + self.unchanged_count = 0 + self.message = '' + + def _setup_instance_vars(self, **kwargs): + for key, val in kwargs.iteritems(): + setattr(self, key, val) + + def _prep(self): + self._prep_path() + self._prep_brew_path() + + def _prep_path(self): + if not self.path: + self.path = ['/usr/local/bin'] + + def _prep_brew_path(self): + if not self.module: + self.brew_path = None + self.failed = True + 
self.message = 'AnsibleModule not set.' + raise HomebrewException(self.message) + + self.brew_path = self.module.get_bin_path( + 'brew', + required=True, + opt_dirs=self.path, + ) + if not self.brew_path: + self.brew_path = None + self.failed = True + self.message = 'Unable to locate homebrew executable.' + raise HomebrewException('Unable to locate homebrew executable.') + + return self.brew_path + + def _status(self): + return (self.failed, self.changed, self.message) + # /prep -------------------------------------------------------- }}} + + def run(self): + try: + self._run() + except HomebrewException: + pass + + if not self.failed and (self.changed_count + self.unchanged_count > 1): + self.message = "Changed: %d, Unchanged: %d" % ( + self.changed_count, + self.unchanged_count, + ) + (failed, changed, message) = self._status() + + return (failed, changed, message) + + # checks ------------------------------------------------------- {{{ + def _current_package_is_installed(self): + if not self.valid_package(self.current_package): + self.failed = True + self.message = 'Invalid package: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + cmd = [ + "{brew_path}".format(brew_path=self.brew_path), + "info", + self.current_package, + ] + rc, out, err = self.module.run_command(cmd) + for line in out.split('\n'): + if ( + re.search(r'Built from source', line) + or re.search(r'Poured from bottle', line) + ): + return True return False + def _outdated_packages(self): + rc, out, err = self.module.run_command([ + self.brew_path, + 'outdated', + ]) + return [line.split(' ')[0].strip() for line in out.split('\n') if line] + + def _current_package_is_outdated(self): + if not self.valid_package(self.current_package): + return False + + return self.current_package in self._outdated_packages() + + def _current_package_is_installed_from_head(self): + if not Homebrew.valid_package(self.current_package): + return False + elif not self._current_package_is_installed(): + return False + + rc, out, err = self.module.run_command([ + self.brew_path, + 'info', + self.current_package, + ]) + + try: + version_info = [line for line in out.split('\n') if line][0] + except IndexError: + return False + + return version_info.split(' ')[-1] == 'HEAD' + # /checks ------------------------------------------------------ }}} + + # commands ----------------------------------------------------- {{{ + def _run(self): + if self.update_homebrew: + self._update_homebrew() + + if self.packages: + if self.state == 'installed': + return self._install_packages() + elif self.state == 'upgraded': + return self._upgrade_packages() + elif self.state == 'head': + return self._install_packages() + elif self.state == 'linked': + return self._link_packages() + elif self.state == 'unlinked': + return self._unlink_packages() + elif self.state == 'absent': + return self._uninstall_packages() + + # updated -------------------------------- {{{ + def _update_homebrew(self): + rc, out, err = self.module.run_command([ + self.brew_path, + 'update', + ]) + if rc == 0: + if out and isinstance(out, basestring): + already_updated = any( + re.search(r'Already up-to-date.', s.strip(), re.IGNORECASE) + for s in out.split('\n') + if s + ) + if not already_updated: + self.changed = True + self.message = 'Homebrew updated successfully.' + else: + self.message = 'Homebrew already up-to-date.' -def remove_packages(module, brew_path, packages): - """ Uninstalls one or more packages if installed. 
""" - - removed_count = 0 - - # Using a for loop incase of error, we can report the package that failed - for package in packages: - # Query the package first, to see if we even need to remove. - if not query_package(module, brew_path, package): - continue - - if module.check_mode: - module.exit_json(changed=True) - rc, out, err = module.run_command([brew_path, 'remove', package]) - - if query_package(module, brew_path, package): - module.fail_json(msg="failed to remove %s: %s" % (package, out.strip())) - - removed_count += 1 - - if removed_count > 0: - module.exit_json(changed=True, msg="removed %d package(s)" % removed_count) - - module.exit_json(changed=False, msg="package(s) already absent") - - -def install_packages(module, brew_path, packages, options): - """ Installs one or more packages if not already installed. """ - - installed_count = 0 - - for package in packages: - if query_package(module, brew_path, package): - continue + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewException(self.message) + # /updated ------------------------------- }}} + + # installed ------------------------------ {{{ + def _install_current_package(self): + if not self.valid_package(self.current_package): + self.failed = True + self.message = 'Invalid package: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + if self._current_package_is_installed(): + self.unchanged_count += 1 + self.message = 'Package already installed: {0}'.format( + self.current_package, + ) + return True - if module.check_mode: - module.exit_json(changed=True) + if self.module.check_mode: + self.changed = True + self.message = 'Package would be installed: {0}'.format( + self.current_package + ) + raise HomebrewException(self.message) + + if self.state == 'head': + head = '--HEAD' + else: + head = None + + opts = ( + [self.brew_path, 'install'] + + self.install_options + + [self.current_package, head] + ) + cmd = [opt for opt in opts if opt] + rc, out, err = self.module.run_command(cmd) + + if self._current_package_is_installed(): + self.changed_count += 1 + self.changed = True + self.message = 'Package installed: {0}'.format(self.current_package) + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewException(self.message) + + def _install_packages(self): + for package in self.packages: + self.current_package = package + self._install_current_package() + + return True + # /installed ----------------------------- }}} + + # upgraded ------------------------------- {{{ + def _upgrade_current_package(self): + command = 'upgrade' + + if not self.valid_package(self.current_package): + self.failed = True + self.message = 'Invalid package: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + if not self._current_package_is_installed(): + command = 'install' + + if self._current_package_is_installed() and not self._current_package_is_outdated(): + self.message = 'Package is already upgraded: {0}'.format( + self.current_package, + ) + self.unchanged_count += 1 + return True - cmd = [brew_path, 'install', package] - if options: - cmd.extend(options) - rc, out, err = module.run_command(cmd) + if self.module.check_mode: + self.changed = True + self.message = 'Package would be upgraded: {0}'.format( + self.current_package + ) + raise HomebrewException(self.message) + + opts = ( + [self.brew_path, command] + + self.install_options + + [self.current_package] + ) + cmd = [opt for opt in opts if opt] + rc, out, err = 
self.module.run_command(cmd) + + if self._current_package_is_installed() and not self._current_package_is_outdated(): + self.changed_count += 1 + self.changed = True + self.message = 'Package upgraded: {0}'.format(self.current_package) + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewException(self.message) + + def _upgrade_all_packages(self): + opts = ( + [self.brew_path, 'upgrade'] + + self.install_options + ) + cmd = [opt for opt in opts if opt] + rc, out, err = self.module.run_command(cmd) - if not query_package(module, brew_path, package): - module.fail_json(msg="failed to install %s: '%s' %s" % (package, cmd, out.strip())) + if rc == 0: + self.changed = True + self.message = 'All packages upgraded.' + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewException(self.message) + + def _upgrade_packages(self): + if not self.packages: + self._upgrade_all_packages() + else: + for package in self.packages: + self.current_package = package + self._upgrade_current_package() + return True + # /upgraded ------------------------------ }}} + + # uninstalled ---------------------------- {{{ + def _uninstall_current_package(self): + if not self.valid_package(self.current_package): + self.failed = True + self.message = 'Invalid package: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + if not self._current_package_is_installed(): + self.unchanged_count += 1 + self.message = 'Package already uninstalled: {0}'.format( + self.current_package, + ) + return True - installed_count += 1 + if self.module.check_mode: + self.changed = True + self.message = 'Package would be uninstalled: {0}'.format( + self.current_package + ) + raise HomebrewException(self.message) + + opts = ( + [self.brew_path, 'uninstall'] + + self.install_options + + [self.current_package] + ) + cmd = [opt for opt in opts if opt] + rc, out, err = self.module.run_command(cmd) + + if not self._current_package_is_installed(): + self.changed_count += 1 + self.changed = True + self.message = 'Package uninstalled: {0}'.format(self.current_package) + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewException(self.message) + + def _uninstall_packages(self): + for package in self.packages: + self.current_package = package + self._uninstall_current_package() + + return True + # /uninstalled ----------------------------- }}} + + # linked --------------------------------- {{{ + def _link_current_package(self): + if not self.valid_package(self.current_package): + self.failed = True + self.message = 'Invalid package: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + if not self._current_package_is_installed(): + self.failed = True + self.message = 'Package not installed: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + if self.module.check_mode: + self.changed = True + self.message = 'Package would be linked: {0}'.format( + self.current_package + ) + raise HomebrewException(self.message) + + opts = ( + [self.brew_path, 'link'] + + self.install_options + + [self.current_package] + ) + cmd = [opt for opt in opts if opt] + rc, out, err = self.module.run_command(cmd) - if installed_count > 0: - module.exit_json(changed=True, msg="installed %d package(s)" % (installed_count,)) + if rc == 0: + self.changed_count += 1 + self.changed = True + self.message = 'Package linked: {0}'.format(self.current_package) - module.exit_json(changed=False, msg="package(s) already 
present") + return True + else: + self.failed = True + self.message = 'Package could not be linked: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + def _link_packages(self): + for package in self.packages: + self.current_package = package + self._link_current_package() + + return True + # /linked -------------------------------- }}} + + # unlinked ------------------------------- {{{ + def _unlink_current_package(self): + if not self.valid_package(self.current_package): + self.failed = True + self.message = 'Invalid package: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + if not self._current_package_is_installed(): + self.failed = True + self.message = 'Package not installed: {0}.'.format(self.current_package) + raise HomebrewException(self.message) + + if self.module.check_mode: + self.changed = True + self.message = 'Package would be unlinked: {0}'.format( + self.current_package + ) + raise HomebrewException(self.message) + + opts = ( + [self.brew_path, 'unlink'] + + self.install_options + + [self.current_package] + ) + cmd = [opt for opt in opts if opt] + rc, out, err = self.module.run_command(cmd) -def generate_options_string(install_options): - if install_options is None: - return None + if rc == 0: + self.changed_count += 1 + self.changed = True + self.message = 'Package unlinked: {0}'.format(self.current_package) - options = [] + return True + else: + self.failed = True + self.message = 'Package could not be unlinked: {0}.'.format(self.current_package) + raise HomebrewException(self.message) - for option in install_options: - options.append('--%s' % option) + def _unlink_packages(self): + for package in self.packages: + self.current_package = package + self._unlink_current_package() - return options + return True + # /unlinked ------------------------------ }}} + # /commands ---------------------------------------------------- }}} def main(): module = AnsibleModule( - argument_spec = dict( - name = dict(aliases=["pkg"], required=True), - state = dict(default="present", choices=["present", "installed", "absent", "removed"]), - update_homebrew = dict(default="no", aliases=["update-brew"], type='bool'), - install_options = dict(default=None, aliases=["options"], type='list') + argument_spec=dict( + name=dict(aliases=["pkg"], required=False), + path=dict(required=False), + state=dict( + default="present", + choices=[ + "present", "installed", + "latest", "upgraded", "head", + "linked", "unlinked", + "absent", "removed", "uninstalled", + ], + ), + update_homebrew=dict( + default="no", + aliases=["update-brew"], + type='bool', + ), + install_options=dict( + default=None, + aliases=['options'], + type='list', + ) ), - supports_check_mode=True + supports_check_mode=True, ) - - brew_path = module.get_bin_path('brew', True, ['/usr/local/bin']) - p = module.params - if p["update_homebrew"]: - update_homebrew(module, brew_path) - - pkgs = p["name"].split(",") - - if p["state"] in ["present", "installed"]: - opt = generate_options_string(p["install_options"]) - install_packages(module, brew_path, pkgs, opt) - - elif p["state"] in ["absent", "removed"]: - remove_packages(module, brew_path, pkgs) - -# import module snippets -from ansible.module_utils.basic import * - + if p['name']: + packages = p['name'].split(',') + else: + packages = None + + path = p['path'] + if path: + path = path.split(':') + else: + path = ['/usr/local/bin'] + + state = p['state'] + if state in ('present', 'installed'): + state = 'installed' + if state in 
('head',): + state = 'head' + if state in ('latest', 'upgraded'): + state = 'upgraded' + if state in ('absent', 'removed', 'uninstalled'): + state = 'absent' + + update_homebrew = p['update_homebrew'] + p['install_options'] = p['install_options'] or [] + install_options = ['--{0}'.format(install_option) + for install_option in p['install_options']] + + brew = Homebrew(module=module, path=path, packages=packages, + state=state, update_homebrew=update_homebrew, + install_options=install_options) + (failed, changed, message) = brew.run() + if failed: + module.fail_json(msg=message) + else: + module.exit_json(changed=changed, msg=message) + +# this is magic, see lib/ansible/module_common.py +#<<INCLUDE_ANSIBLE_MODULE_COMMON>> main() diff --git a/library/packaging/homebrew_cask b/library/packaging/homebrew_cask new file mode 100644 index 00000000000..fa85931afc9 --- /dev/null +++ b/library/packaging/homebrew_cask @@ -0,0 +1,513 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2013, Daniel Jaouen +# +# This module is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This software is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this software. If not, see <http://www.gnu.org/licenses/>. + +DOCUMENTATION = ''' +--- +module: homebrew_cask +author: Daniel Jaouen +short_description: Install/uninstall homebrew casks. +description: + - Manages Homebrew casks.
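+  # (Hedged note: casks are prebuilt OS X applications; the module manages
+  #  them by shelling out to `brew cask install <name>` and
+  #  `brew cask uninstall <name>`, as the code below does.)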
+version_added: "1.6" +options: + name: + description: + - name of cask to install/remove + required: true + state: + description: + - state of the cask + choices: [ 'present', 'absent' ] + required: false + default: present +''' +EXAMPLES = ''' +- homebrew_cask: name=alfred state=present +- homebrew_cask: name=alfred state=absent +''' + +import os.path +import re + + +# exceptions -------------------------------------------------------------- {{{ +class HomebrewCaskException(Exception): + pass +# /exceptions ------------------------------------------------------------- }}} + + +# utils ------------------------------------------------------------------- {{{ +def _create_regex_group(s): + lines = (line.strip() for line in s.split('\n') if line.strip()) + chars = filter(None, (line.split('#')[0].strip() for line in lines)) + group = r'[^' + r''.join(chars) + r']' + return re.compile(group) +# /utils ------------------------------------------------------------------ }}} + + +class HomebrewCask(object): + '''A class to manage Homebrew casks.''' + + # class regexes ------------------------------------------------ {{{ + VALID_PATH_CHARS = r''' + \w # alphanumeric characters (i.e., [a-zA-Z0-9_]) + \s # spaces + : # colons + {sep} # the OS-specific path separator + - # dashes + '''.format(sep=os.path.sep) + + VALID_BREW_PATH_CHARS = r''' + \w # alphanumeric characters (i.e., [a-zA-Z0-9_]) + \s # spaces + {sep} # the OS-specific path separator + - # dashes + '''.format(sep=os.path.sep) + + VALID_CASK_CHARS = r''' + \w # alphanumeric characters (i.e., [a-zA-Z0-9_]) + - # dashes + ''' + + INVALID_PATH_REGEX = _create_regex_group(VALID_PATH_CHARS) + INVALID_BREW_PATH_REGEX = _create_regex_group(VALID_BREW_PATH_CHARS) + INVALID_CASK_REGEX = _create_regex_group(VALID_CASK_CHARS) + # /class regexes ----------------------------------------------- }}} + + # class validations -------------------------------------------- {{{ + @classmethod + def valid_path(cls, path): + ''' + `path` must be one of: + - list of paths + - a string containing only: + - alphanumeric characters + - dashes + - spaces + - colons + - os.path.sep + ''' + + if isinstance(path, basestring): + return not cls.INVALID_PATH_REGEX.search(path) + + try: + iter(path) + except TypeError: + return False + else: + paths = path + return all(cls.valid_brew_path(path_) for path_ in paths) + + @classmethod + def valid_brew_path(cls, brew_path): + ''' + `brew_path` must be one of: + - None + - a string containing only: + - alphanumeric characters + - dashes + - spaces + - os.path.sep + ''' + + if brew_path is None: + return True + + return ( + isinstance(brew_path, basestring) + and not cls.INVALID_BREW_PATH_REGEX.search(brew_path) + ) + + @classmethod + def valid_cask(cls, cask): + '''A valid cask is either None or a string of alphanumeric characters and dashes.''' + + if cask is None: + return True + + return ( + isinstance(cask, basestring) + and not cls.INVALID_CASK_REGEX.search(cask) + ) + + @classmethod + def valid_state(cls, state): + ''' + A valid state is one of: + - installed + - absent + ''' + + if state is None: + return True + else: + return ( + isinstance(state, basestring) + and state.lower() in ( + 'installed', + 'absent', + ) + ) + + @classmethod + def valid_module(cls, module): + '''A valid module is an instance of AnsibleModule.''' + + return isinstance(module, AnsibleModule) + # /class validations ------------------------------------------- }}} + + # class properties --------------------------------------------- {{{ + @property + def
module(self): + return self._module + + @module.setter + def module(self, module): + if not self.valid_module(module): + self._module = None + self.failed = True + self.message = 'Invalid module: {0}.'.format(module) + raise HomebrewCaskException(self.message) + + else: + self._module = module + return module + + @property + def path(self): + return self._path + + @path.setter + def path(self, path): + if not self.valid_path(path): + self._path = [] + self.failed = True + self.message = 'Invalid path: {0}.'.format(path) + raise HomebrewCaskException(self.message) + + else: + if isinstance(path, basestring): + self._path = path.split(':') + else: + self._path = path + + return path + + @property + def brew_path(self): + return self._brew_path + + @brew_path.setter + def brew_path(self, brew_path): + if not self.valid_brew_path(brew_path): + self._brew_path = None + self.failed = True + self.message = 'Invalid brew_path: {0}.'.format(brew_path) + raise HomebrewCaskException(self.message) + + else: + self._brew_path = brew_path + return brew_path + + @property + def params(self): + return self._params + + @params.setter + def params(self, params): + self._params = self.module.params + return self._params + + @property + def current_cask(self): + return self._current_cask + + @current_cask.setter + def current_cask(self, cask): + if not self.valid_cask(cask): + self._current_cask = None + self.failed = True + self.message = 'Invalid cask: {0}.'.format(cask) + raise HomebrewCaskException(self.message) + + else: + self._current_cask = cask + return cask + # /class properties -------------------------------------------- }}} + + def __init__(self, module, path=None, casks=None, state=None): + self._setup_status_vars() + self._setup_instance_vars(module=module, path=path, casks=casks, + state=state) + + self._prep() + + # prep --------------------------------------------------------- {{{ + def _setup_status_vars(self): + self.failed = False + self.changed = False + self.changed_count = 0 + self.unchanged_count = 0 + self.message = '' + + def _setup_instance_vars(self, **kwargs): + for key, val in kwargs.iteritems(): + setattr(self, key, val) + + def _prep(self): + self._prep_path() + self._prep_brew_path() + + def _prep_path(self): + if not self.path: + self.path = ['/usr/local/bin'] + + def _prep_brew_path(self): + if not self.module: + self.brew_path = None + self.failed = True + self.message = 'AnsibleModule not set.' + raise HomebrewCaskException(self.message) + + self.brew_path = self.module.get_bin_path( + 'brew', + required=True, + opt_dirs=self.path, + ) + if not self.brew_path: + self.brew_path = None + self.failed = True + self.message = 'Unable to locate homebrew executable.' 
+ raise HomebrewCaskException('Unable to locate homebrew executable.') + + return self.brew_path + + def _status(self): + return (self.failed, self.changed, self.message) + # /prep -------------------------------------------------------- }}} + + def run(self): + try: + self._run() + except HomebrewCaskException: + pass + + if not self.failed and (self.changed_count + self.unchanged_count > 1): + self.message = "Changed: %d, Unchanged: %d" % ( + self.changed_count, + self.unchanged_count, + ) + (failed, changed, message) = self._status() + + return (failed, changed, message) + + # checks ------------------------------------------------------- {{{ + def _current_cask_is_installed(self): + if not self.valid_cask(self.current_cask): + self.failed = True + self.message = 'Invalid cask: {0}.'.format(self.current_cask) + raise HomebrewCaskException(self.message) + + cmd = [self.brew_path, 'cask', 'list'] + rc, out, err = self.module.run_command(cmd) + + if 'nothing to list' in err: + return False + elif rc == 0: + casks = [cask_.strip() for cask_ in out.split('\n') if cask_.strip()] + return self.current_cask in casks + else: + self.failed = True + self.message = err.strip() + raise HomebrewCaskException(self.message) + # /checks ------------------------------------------------------ }}} + + # commands ----------------------------------------------------- {{{ + def _run(self): + if self.state == 'installed': + return self._install_casks() + elif self.state == 'absent': + return self._uninstall_casks() + + # updated -------------------------------- {{{ + def _update_homebrew(self): + rc, out, err = self.module.run_command([ + self.brew_path, + 'update', + ]) + if rc == 0: + if out and isinstance(out, basestring): + already_updated = any( + re.search(r'Already up-to-date.', s.strip(), re.IGNORECASE) + for s in out.split('\n') + if s + ) + if not already_updated: + self.changed = True + self.message = 'Homebrew updated successfully.' + else: + self.message = 'Homebrew already up-to-date.'
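+            # Added note: `brew update` exits 0 whether or not anything changed,
+            # so changed-ness is inferred by scanning stdout for
+            # "Already up-to-date." above.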
+ + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewCaskException(self.message) + # /updated ------------------------------- }}} + + # installed ------------------------------ {{{ + def _install_current_cask(self): + if not self.valid_cask(self.current_cask): + self.failed = True + self.message = 'Invalid cask: {0}.'.format(self.current_cask) + raise HomebrewCaskException(self.message) + + if self._current_cask_is_installed(): + self.unchanged_count += 1 + self.message = 'Cask already installed: {0}'.format( + self.current_cask, + ) + return True + + if self.module.check_mode: + self.changed = True + self.message = 'Cask would be installed: {0}'.format( + self.current_cask + ) + raise HomebrewCaskException(self.message) + + cmd = [opt + for opt in (self.brew_path, 'cask', 'install', self.current_cask) + if opt] + + rc, out, err = self.module.run_command(cmd) + + if self._current_cask_is_installed(): + self.changed_count += 1 + self.changed = True + self.message = 'Cask installed: {0}'.format(self.current_cask) + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewCaskException(self.message) + + def _install_casks(self): + for cask in self.casks: + self.current_cask = cask + self._install_current_cask() + + return True + # /installed ----------------------------- }}} + + # uninstalled ---------------------------- {{{ + def _uninstall_current_cask(self): + if not self.valid_cask(self.current_cask): + self.failed = True + self.message = 'Invalid cask: {0}.'.format(self.current_cask) + raise HomebrewCaskException(self.message) + + if not self._current_cask_is_installed(): + self.unchanged_count += 1 + self.message = 'Cask already uninstalled: {0}'.format( + self.current_cask, + ) + return True + + if self.module.check_mode: + self.changed = True + self.message = 'Cask would be uninstalled: {0}'.format( + self.current_cask + ) + raise HomebrewCaskException(self.message) + + cmd = [opt + for opt in (self.brew_path, 'cask', 'uninstall', self.current_cask) + if opt] + + rc, out, err = self.module.run_command(cmd) + + if not self._current_cask_is_installed(): + self.changed_count += 1 + self.changed = True + self.message = 'Cask uninstalled: {0}'.format(self.current_cask) + return True + else: + self.failed = True + self.message = err.strip() + raise HomebrewCaskException(self.message) + + def _uninstall_casks(self): + for cask in self.casks: + self.current_cask = cask + self._uninstall_current_cask() + + return True + # /uninstalled ----------------------------- }}} + # /commands ---------------------------------------------------- }}} + + +def main(): + module = AnsibleModule( + argument_spec=dict( + name=dict(aliases=["cask"], required=False), + path=dict(required=False), + state=dict( + default="present", + choices=[ + "present", "installed", + "absent", "removed", "uninstalled", + ], + ), + ), + supports_check_mode=True, + ) + p = module.params + + if p['name']: + casks = p['name'].split(',') + else: + casks = None + + path = p['path'] + if path: + path = path.split(':') + else: + path = ['/usr/local/bin'] + + state = p['state'] + if state in ('present', 'installed'): + state = 'installed' + if state in ('absent', 'removed', 'uninstalled'): + state = 'absent' + + brew_cask = HomebrewCask(module=module, path=path, casks=casks, + state=state) + (failed, changed, message) = brew_cask.run() + if failed: + module.fail_json(msg=message) + else: + module.exit_json(changed=changed, msg=message) + +# this is magic, see 
lib/ansible/module_common.py +#<<INCLUDE_ANSIBLE_MODULE_COMMON>> +main() diff --git a/library/packaging/homebrew_tap b/library/packaging/homebrew_tap new file mode 100644 index 00000000000..a79ba076a8a --- /dev/null +++ b/library/packaging/homebrew_tap @@ -0,0 +1,215 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2013, Daniel Jaouen +# Based on homebrew (Andrew Dunham ) +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. + +import re + +DOCUMENTATION = ''' +--- +module: homebrew_tap +author: Daniel Jaouen +short_description: Tap a Homebrew repository. +description: + - Tap external Homebrew repositories. +version_added: "1.6" +options: + tap: + description: + - The repository to tap. + required: true + state: + description: + - state of the repository. + choices: [ 'present', 'absent' ] + required: false + default: 'present' +requirements: [ homebrew ] +''' + +EXAMPLES = ''' +homebrew_tap: tap=homebrew/dupes state=present +homebrew_tap: tap=homebrew/dupes state=absent +homebrew_tap: tap=homebrew/dupes,homebrew/science state=present +''' + + +def a_valid_tap(tap):
 + '''Returns True if the tap is valid.''' + regex = re.compile(r'^(\S+)/(homebrew-)?(\w+)$') + return regex.match(tap) + + +def already_tapped(module, brew_path, tap): + '''Returns True if already tapped.''' + + rc, out, err = module.run_command([ + brew_path, + 'tap', + ]) + taps = [tap_.strip().lower() for tap_ in out.split('\n') if tap_] + return tap.lower() in taps + + +def add_tap(module, brew_path, tap): + '''Adds a single tap.''' + failed, changed, msg = False, False, '' + + if not a_valid_tap(tap): + failed = True + msg = 'not a valid tap: %s' % tap + + elif not already_tapped(module, brew_path, tap): + if module.check_mode: + module.exit_json(changed=True) + + rc, out, err = module.run_command([ + brew_path, + 'tap', + tap, + ]) + if already_tapped(module, brew_path, tap): + changed = True + msg = 'successfully tapped: %s' % tap + else: + failed = True + msg = 'failed to tap: %s' % tap + + else: + msg = 'already tapped: %s' % tap + + return (failed, changed, msg) + + +def add_taps(module, brew_path, taps): + '''Adds one or more taps.''' + failed, unchanged, added, msg = False, 0, 0, '' + + for tap in taps: + (failed, changed, msg) = add_tap(module, brew_path, tap) + if failed: + break + if changed: + added += 1 + else: + unchanged += 1 + + if failed: + msg = 'added: %d, unchanged: %d, error: ' + msg + msg = msg % (added, unchanged) + elif added: + changed = True + msg = 'added: %d, unchanged: %d' % (added, unchanged) + else: + msg = 'added: %d, unchanged: %d' % (added, unchanged) + + return (failed, changed, msg) + + +def remove_tap(module, brew_path, tap): + '''Removes a single tap.''' + failed, changed, msg = False, False, '' + + if not a_valid_tap(tap): + failed = True + msg = 'not a valid tap: %s' % tap + + elif already_tapped(module, brew_path, tap): + if module.check_mode: + module.exit_json(changed=True) + + rc, out, err = module.run_command([ + brew_path, +
'untap', + tap, + ]) + if not already_tapped(module, brew_path, tap): + changed = True + msg = 'successfully untapped: %s' % tap + else: + failed = True + msg = 'failed to untap: %s' % tap + + else: + msg = 'already untapped: %s' % tap + + return (failed, changed, msg) + + +def remove_taps(module, brew_path, taps): + '''Removes one or more taps.''' + failed, unchanged, removed, msg = False, 0, 0, '' + + for tap in taps: + (failed, changed, msg) = remove_tap(module, brew_path, tap) + if failed: + break + if changed: + removed += 1 + else: + unchanged += 1 + + if failed: + msg = 'removed: %d, unchanged: %d, error: ' + msg + msg = msg % (removed, unchanged) + elif removed: + changed = True + msg = 'removed: %d, unchanged: %d' % (removed, unchanged) + else: + msg = 'removed: %d, unchanged: %d' % (removed, unchanged) + + return (failed, changed, msg) + + +def main(): + module = AnsibleModule( + argument_spec=dict( + name=dict(aliases=['tap'], required=True), + state=dict(default='present', choices=['present', 'absent']), + ), + supports_check_mode=True, + ) + + brew_path = module.get_bin_path( + 'brew', + required=True, + opt_dirs=['/usr/local/bin'] + ) + + taps = module.params['name'].split(',') + + if module.params['state'] == 'present': + failed, changed, msg = add_taps(module, brew_path, taps) + + if failed: + module.fail_json(msg=msg) + else: + module.exit_json(changed=changed, msg=msg) + + elif module.params['state'] == 'absent': + failed, changed, msg = remove_taps(module, brew_path, taps) + + if failed: + module.fail_json(msg=msg) + else: + module.exit_json(changed=changed, msg=msg) + +# this is magic, see lib/ansible/module_common.py +#<<INCLUDE_ANSIBLE_MODULE_COMMON>> +main() diff --git a/library/packaging/layman b/library/packaging/layman new file mode 100644 index 00000000000..a0b12202812 --- /dev/null +++ b/library/packaging/layman @@ -0,0 +1,236 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2014, Jakub Jirutka +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>. + +import shutil +from os import path +from urllib2 import Request, urlopen, URLError + +DOCUMENTATION = ''' +--- +module: layman +author: Jakub Jirutka +version_added: "1.6" +short_description: Manage Gentoo overlays +description: + - Uses Layman to manage additional repositories for the Portage package manager on Gentoo Linux. + Please note that Layman must be installed on a managed node prior to using this module. +options: + name: + description: + - The overlay id to install, synchronize, or uninstall. + Use 'ALL' to sync all of the installed overlays (can be used only when C(state=updated)). + required: true + list_url: + description: + - A URL of an alternative overlays list that defines the overlay to install. + This list will be fetched and saved under C(${overlay_defs}/${name}.xml), where + C(overlay_defs) is read from Layman's configuration.
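+      # Hedged illustration: with overlay_defs=/etc/layman/overlays (an assumed
+      # example path) and name=cvut, the fetched list would be saved as
+      # /etc/layman/overlays/cvut.xml.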
+ required: false + state: + description: + - Whether to install (C(present)), sync (C(updated)), or uninstall (C(absent)) the overlay. + required: false + default: present + choices: [present, absent, updated] +''' + +EXAMPLES = ''' +# Install the overlay 'mozilla' which is on the central overlays list. +- layman: name=mozilla + +# Install the overlay 'cvut' from the specified alternative list. +- layman: name=cvut list_url=http://raw.github.com/cvut/gentoo-overlay/master/overlay.xml + +# Update (sync) the overlay 'cvut', or install if not installed yet. +- layman: name=cvut list_url=http://raw.github.com/cvut/gentoo-overlay/master/overlay.xml state=updated + +# Update (sync) all of the installed overlays. +- layman: name=ALL state=updated + +# Uninstall the overlay 'cvut'. +- layman: name=cvut state=absent +''' + +USERAGENT = 'ansible-httpget' + +try: + from layman.api import LaymanAPI + from layman.config import BareConfig + HAS_LAYMAN_API = True +except ImportError: + HAS_LAYMAN_API = False + + +class ModuleError(Exception): pass + + +def init_layman(config=None): + '''Returns the initialized ``LaymanAPI``. + + :param config: the layman's configuration to use (optional) + ''' + if config is None: config = BareConfig(read_configfile=True, quietness=1) + return LaymanAPI(config) + + +def download_url(url, dest): + ''' + :param url: the URL to download + :param dest: the absolute path of where to save the downloaded content to; + it must be writable and not a directory + + :raises ModuleError + ''' + request = Request(url) + request.add_header('User-agent', USERAGENT) + + try: + response = urlopen(request) + except URLError, e: + raise ModuleError("Failed to get %s: %s" % (url, str(e))) + + try: + with open(dest, 'w') as f: + shutil.copyfileobj(response, f) + except IOError, e: + raise ModuleError("Failed to write: %s" % str(e)) + + +def install_overlay(name, list_url=None): + '''Installs the overlay repository. If not on the central overlays list, + then :list_url of an alternative list must be provided. The list will be + fetched and saved under ``%(overlay_defs)/%(name.xml)`` (location of the + ``overlay_defs`` is read from the Layman's configuration). + + :param name: the overlay id + :param list_url: the URL of the remote repositories list to look for the overlay + definition (optional, default: None) + + :returns: True if the overlay was installed, or False if already exists + (i.e. nothing has changed) + :raises ModuleError + ''' + # read Layman configuration + layman_conf = BareConfig(read_configfile=True) + layman = init_layman(layman_conf) + + if layman.is_installed(name): + return False + + if not layman.is_repo(name): + if not list_url: raise ModuleError("Overlay '%s' is not on the list of known " \ + "overlays and URL of the remote list was not provided." % name) + + overlay_defs = layman_conf.get_option('overlay_defs') + dest = path.join(overlay_defs, name + '.xml') + + download_url(list_url, dest) + + # reload config + layman = init_layman() + + if not layman.add_repos(name): raise ModuleError(layman.get_errors()) + + return True + + +def uninstall_overlay(name): + '''Uninstalls the given overlay repository from the system. + + :param name: the overlay id to uninstall + + :returns: True if the overlay was uninstalled, or False if doesn't exist + (i.e. 
nothing has changed) + :raises ModuleError + ''' + layman = init_layman() + + if not layman.is_installed(name): + return False + + layman.delete_repos(name) + if layman.get_errors(): raise ModuleError(layman.get_errors()) + + return True + + +def sync_overlay(name): + '''Synchronizes the specified overlay repository. + + :param name: the overlay repository id to sync + :raises ModuleError + ''' + layman = init_layman() + + if not layman.sync(name): + messages = [ str(item[1]) for item in layman.sync_results[2] ] + raise ModuleError(messages) + + +def sync_overlays(): + '''Synchronizes all of the installed overlays. + + :raises ModuleError + ''' + layman = init_layman() + + for name in layman.get_installed(): + sync_overlay(name) + + +def main(): + # define module + module = AnsibleModule( + argument_spec = { + 'name': { 'required': True }, + 'list_url': { 'aliases': ['url'] }, + 'state': { 'default': "present", 'choices': ['present', 'absent', 'updated'] }, + } + ) + + if not HAS_LAYMAN_API: + module.fail_json(msg='Layman is not installed') + + state, name, url = (module.params[key] for key in ['state', 'name', 'list_url']) + + changed = False + try: + if state == 'present': + changed = install_overlay(name, url) + + elif state == 'updated': + if name == 'ALL': + sync_overlays() + elif install_overlay(name, url): + changed = True + else: + sync_overlay(name) + else: + changed = uninstall_overlay(name) + + except ModuleError, e: + module.fail_json(msg=e.message) + else: + module.exit_json(changed=changed, name=name) + + +# import module snippets +from ansible.module_utils.basic import * +main() diff --git a/library/packaging/macports b/library/packaging/macports index b58224b63fe..ae7010b1cbd 100644 --- a/library/packaging/macports +++ b/library/packaging/macports @@ -53,6 +53,7 @@ EXAMPLES = ''' - macports: name=foo state=inactive ''' +import pipes def update_package_db(module, port_path): """ Updates packages list. """ @@ -68,7 +69,7 @@ def query_package(module, port_path, name, state="present"): if state == "present": - rc, out, err = module.run_command("%s installed | grep -q ^.*%s" % (port_path, name)) + rc, out, err = module.run_command("%s installed | grep -q ^.*%s" % (pipes.quote(port_path), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: return True @@ -76,7 +77,8 @@ def query_package(module, port_path, name, state="present"): elif state == "active": - rc, out, err = module.run_command("%s installed %s | grep -q active" % (port_path, name)) + rc, out, err = module.run_command("%s installed %s | grep -q active" % (pipes.quote(port_path), pipes.quote(name)), use_unsafe_shell=True) + if rc == 0: return True diff --git a/library/packaging/npm b/library/packaging/npm index 62179c373aa..7034c7f9964 100644 --- a/library/packaging/npm +++ b/library/packaging/npm @@ -56,6 +56,11 @@ options: required: false choices: [ "yes", "no" ] default: no + registry: + description: + - The registry to install modules from. + required: false + version_added: "1.6" state: description: - The state of the node.js library @@ -77,6 +82,9 @@ description: Install "coffee-script" node.js package globally. description: Remove the globally installed package "coffee-script". - npm: name=coffee-script global=yes state=absent +description: Install "coffee-script" node.js package from a custom registry. +- npm: name=coffee-script registry=http://registry.mysite.com +
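+# hedged example combining two documented options; the version shown is illustrative
+description: Install "coffee-script" at a pinned version from the custom registry.
+- npm: name=coffee-script version=1.7.1 registry=http://registry.mysite.com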
- npm: path=/app/location @@ -101,6 +109,7 @@ class Npm(object): self.name = kwargs['name'] self.version = kwargs['version'] self.path = kwargs['path'] + self.registry = kwargs['registry'] self.production = kwargs['production'] if kwargs['executable']: @@ -123,12 +132,20 @@ class Npm(object): cmd.append('--production') if self.name: cmd.append(self.name_version) + if self.registry: + cmd.append('--registry') + cmd.append(self.registry) #If path is specified, cd into that path and run the command. + cwd = None if self.path: - os.chdir(self.path) + if not os.path.exists(self.path): + os.makedirs(self.path) + if not os.path.isdir(self.path): + self.module.fail_json(msg="path %s is not a directory" % self.path) + cwd = self.path - rc, out, err = self.module.run_command(cmd, check_rc=check_rc) + rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd) return out return '' @@ -142,6 +159,8 @@ class Npm(object): for dep in data['dependencies']: if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']: missing.append(dep) + elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']: + missing.append(dep) else: installed.append(dep) #Named dependency not installed @@ -179,6 +198,7 @@ def main(): version=dict(default=None), production=dict(default='no', type='bool'), executable=dict(default=None), + registry=dict(default=None), state=dict(default='present', choices=['present', 'absent', 'latest']) ) arg_spec['global'] = dict(default='no', type='bool') @@ -193,6 +213,7 @@ def main(): glbl = module.params['global'] production = module.params['production'] executable = module.params['executable'] + registry = module.params['registry'] state = module.params['state'] if not path and not glbl: @@ -201,7 +222,7 @@ def main(): module.fail_json(msg='uninstalling a package is only available for named packages') npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production, \ - executable=executable) + executable=executable, registry=registry) changed = False if state == 'present': @@ -215,7 +236,6 @@ def main(): if len(missing) or len(outdated): changed = True npm.install() - npm.update() else: #absent installed, missing = npm.list() if name in installed: diff --git a/library/packaging/opkg b/library/packaging/opkg index 4a834cf1a39..0187abe56a8 100644 --- a/library/packaging/opkg +++ b/library/packaging/opkg @@ -51,6 +51,7 @@ EXAMPLES = ''' - opkg: name=foo,bar state=absent ''' +import pipes def update_package_db(module, opkg_path): """ Updates packages list. 
""" @@ -66,7 +67,7 @@ def query_package(module, opkg_path, name, state="present"): if state == "present": - rc, out, err = module.run_command("%s list-installed | grep -q ^%s" % (opkg_path, name)) + rc, out, err = module.run_command("%s list-installed | grep -q ^%s" % (pipes.quote(opkg_path), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: return True diff --git a/library/packaging/pacman b/library/packaging/pacman index 3080cb4a607..5bf2d931e6e 100644 --- a/library/packaging/pacman +++ b/library/packaging/pacman @@ -1,82 +1,82 @@ #!/usr/bin/python -tt # -*- coding: utf-8 -*- -# (c) 2012, Afterburn -# Written by Afterburn -# Based on apt module written by Matthew Williams +# (c) 2012, Afterburn +# (c) 2013, Aaron Bull Schaefer # -# This module is free software: you can redistribute it and/or modify +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # -# This software is distributed in the hope that it will be useful, +# Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License -# along with this software. If not, see . - +# along with Ansible. If not, see . DOCUMENTATION = ''' --- module: pacman -short_description: Package manager for Archlinux +short_description: Manage packages with I(pacman) description: - - Manages Archlinux packages - + - Manage packages with the I(pacman) package manager, which is used by + Arch Linux and its variants. version_added: "1.0" +author: Afterburn +notes: [] +requirements: [] options: name: description: - - name of package to install, upgrade or remove. - required: true + - Name of the package to install, upgrade, or remove. + required: false + default: null state: description: - - desired state of the package. + - Desired state of the package. required: false - choices: [ "installed", "absent" ] + default: "present" + choices: ["present", "absent"] - update_cache: + recurse: description: - - update the package database first (pacman -Syy). + - When removing a package, also remove its dependencies, provided + that they are not required by other packages and were not + explicitly installed by a user. required: false default: "no" - choices: [ "yes", "no" ] + choices: ["yes", "no"] + version_added: "1.3" - recurse: + update_cache: description: - - remove all not explicitly installed dependencies not required - by other packages of the package to remove + - Whether or not to refresh the master package lists. This can be + run as part of a package installation or as a separate step. 
required: false default: "no" - choices: [ "yes", "no" ] - version_added: "1.3" - -author: Afterburn -notes: [] + choices: ["yes", "no"] ''' EXAMPLES = ''' # Install package foo -- pacman: name=foo state=installed - -# Remove package foo -- pacman: name=foo state=absent +- pacman: name=foo state=present -# Remove packages foo and bar +# Remove packages foo and bar - pacman: name=foo,bar state=absent # Recursively remove package baz - pacman: name=baz state=absent recurse=yes -# Update the package database (pacman -Syy) and install bar (bar will be the updated if a newer version exists) -- pacman: name=bar, state=installed, update_cache=yes +# Run the equivalent of "pacman -Syy" as a separate step +- pacman: update_cache=yes ''' - import json import shlex import os @@ -85,12 +85,12 @@ import sys PACMAN_PATH = "/usr/bin/pacman" -def query_package(module, name, state="installed"): - +def query_package(module, name, state="present"): # pacman -Q returns 0 if the package is installed, # 1 if it is not installed - if state == "installed": - rc = os.system("pacman -Q %s" % (name)) + if state == "present": + cmd = "pacman -Q %s" % (name) + rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc == 0: return True @@ -99,18 +99,21 @@ def query_package(module, name, state="installed"): def update_package_db(module): - rc = os.system("pacman -Syy > /dev/null") + cmd = "pacman -Syy" + rc, stdout, stderr = module.run_command(cmd, check_rc=False) - if rc != 0: + if rc == 0: + return True + else: module.fail_json(msg="could not update package db") - + def remove_packages(module, packages): if module.params["recurse"]: args = "Rs" else: args = "R" - + remove_c = 0 # Using a for loop incase of error, we can report the package that failed for package in packages: @@ -118,11 +121,12 @@ def remove_packages(module, packages): if not query_package(module, package): continue - rc = os.system("pacman -%s %s --noconfirm > /dev/null" % (args, package)) + cmd = "pacman -%s %s --noconfirm" % (args, package) + rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc != 0: module.fail_json(msg="failed to remove %s" % (package)) - + remove_c += 1 if remove_c > 0: @@ -133,7 +137,6 @@ def remove_packages(module, packages): def install_packages(module, packages, package_files): - install_c = 0 for i, package in enumerate(packages): @@ -145,13 +148,14 @@ def install_packages(module, packages, package_files): else: params = '-S %s' % package - rc = os.system("pacman %s --noconfirm > /dev/null" % (params)) + cmd = "pacman %s --noconfirm" % (params) + rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc != 0: module.fail_json(msg="failed to install %s" % (package)) install_c += 1 - + if install_c > 0: module.exit_json(changed=True, msg="installed %s package(s)" % (install_c)) @@ -162,7 +166,7 @@ def check_packages(module, packages, state): would_be_changed = [] for package in packages: installed = query_package(module, package) - if ((state == "installed" and not installed) or + if ((state == "present" and not installed) or (state == "absent" and installed)): would_be_changed.append(package) if would_be_changed: @@ -176,42 +180,50 @@ def check_packages(module, packages, state): def main(): module = AnsibleModule( - argument_spec = dict( - state = dict(default="installed", choices=["installed","absent"]), - update_cache = dict(default="no", aliases=["update-cache"], type='bool'), - recurse = dict(default="no", type='bool'), - name = dict(aliases=["pkg"], required=True)), - supports_check_mode = 
True) - + argument_spec = dict( + name = dict(aliases=['pkg']), + state = dict(default='present', choices=['present', 'installed', 'absent', 'removed']), + recurse = dict(default='no', choices=BOOLEANS, type='bool'), + update_cache = dict(default='no', aliases=['update-cache'], choices=BOOLEANS, type='bool')), + required_one_of = [['name', 'update_cache']], + supports_check_mode = True) if not os.path.exists(PACMAN_PATH): module.fail_json(msg="cannot find pacman, looking for %s" % (PACMAN_PATH)) p = module.params + # normalize the state parameter + if p['state'] in ['present', 'installed']: + p['state'] = 'present' + elif p['state'] in ['absent', 'removed']: + p['state'] = 'absent' + if p["update_cache"] and not module.check_mode: update_package_db(module) - - pkgs = p["name"].split(",") - - pkg_files = [] - for i, pkg in enumerate(pkgs): - if pkg.endswith('.pkg.tar.xz'): - # The package given is a filename, extract the raw pkg name from - # it and store the filename - pkg_files.append(pkg) - pkgs[i] = re.sub('-[0-9].*$', '', pkgs[i].split('/')[-1]) - else: - pkg_files.append(None) - - if module.check_mode: - check_packages(module, pkgs, p['state']) - - if p["state"] == "installed": - install_packages(module, pkgs, pkg_files) - - elif p["state"] == "absent": - remove_packages(module, pkgs) + if not p['name']: + module.exit_json(changed=True, msg='updated the package master lists') + + if p['name']: + pkgs = p['name'].split(',') + + pkg_files = [] + for i, pkg in enumerate(pkgs): + if pkg.endswith('.pkg.tar.xz'): + # The package given is a filename, extract the raw pkg name from + # it and store the filename + pkg_files.append(pkg) + pkgs[i] = re.sub('-[0-9].*$', '', pkgs[i].split('/')[-1]) + else: + pkg_files.append(None) + + if module.check_mode: + check_packages(module, pkgs, p['state']) + + if p['state'] == 'present': + install_packages(module, pkgs, pkg_files) + elif p['state'] == 'absent': + remove_packages(module, pkgs) # import module snippets from ansible.module_utils.basic import * diff --git a/library/packaging/pip b/library/packaging/pip index 35487c32963..aa55bf8ba0b 100644 --- a/library/packaging/pip +++ b/library/packaging/pip @@ -253,10 +253,10 @@ def main(): cmd = '%s --no-site-packages %s' % (virtualenv, env) else: cmd = '%s %s' % (virtualenv, env) - os.chdir(tempfile.gettempdir()) + this_dir = tempfile.gettempdir() if chdir: - os.chdir(chdir) - rc, out_venv, err_venv = module.run_command(cmd) + this_dir = os.path.join(this_dir, chdir) + rc, out_venv, err_venv = module.run_command(cmd, cwd=this_dir) out += out_venv err += err_venv if rc != 0: @@ -298,10 +298,11 @@ def main(): if module.check_mode: module.exit_json(changed=True) - os.chdir(tempfile.gettempdir()) + this_dir = tempfile.gettempdir() if chdir: - os.chdir(chdir) - rc, out_pip, err_pip = module.run_command(cmd, path_prefix=path_prefix) + this_dir = os.path.join(this_dir, chdir) + + rc, out_pip, err_pip = module.run_command(cmd, path_prefix=path_prefix, cwd=this_dir) out += out_pip err += err_pip if rc == 1 and state == 'absent' and 'not installed' in out_pip: diff --git a/library/packaging/pkgin b/library/packaging/pkgin index 0554cf9a216..866c9f76a4c 100755 --- a/library/packaging/pkgin +++ b/library/packaging/pkgin @@ -58,13 +58,13 @@ import json import shlex import os import sys - +import pipes def query_package(module, pkgin_path, name, state="present"): if state == "present": - rc, out, err = module.run_command("%s -y list | grep ^%s" % (pkgin_path, name)) + rc, out, err = module.run_command("%s -y list | 
grep ^%s" % (pipes.quote(pkgin_path), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: # At least one package with a package name that starts with ``name`` diff --git a/library/packaging/pkgng b/library/packaging/pkgng index 7b0468a7cbd..a1f443fd4e1 100644 --- a/library/packaging/pkgng +++ b/library/packaging/pkgng @@ -46,10 +46,22 @@ options: choices: [ 'yes', 'no' ] required: false default: no + annotation: + description: + - a comma-separated list of keyvalue-pairs of the form + <+/-/:>[=]. A '+' denotes adding an annotation, a + '-' denotes removing an annotation, and ':' denotes modifying an + annotation. + If setting or modifying annotations, a value must be provided. + required: false + version_added: "1.6" pkgsite: description: - - specify packagesite to use for downloading packages, if - not specified, use settings from /usr/local/etc/pkg.conf + - for pkgng versions before 1.1.4, specify packagesite to use + for downloading packages, if not specified, use settings from + /usr/local/etc/pkg.conf + for newer pkgng versions, specify a the name of a repository + configured in /usr/local/etc/pkg/repos required: false author: bleader notes: @@ -60,6 +72,9 @@ EXAMPLES = ''' # Install package foo - pkgng: name=foo state=present +# Annotate package foo and bar +- pkgng: name=foo,bar annotation=+test1=baz,-test2,:test3=foobar + # Remove packages foo and bar - pkgng: name=foo,bar state=absent ''' @@ -68,92 +83,217 @@ EXAMPLES = ''' import json import shlex import os +import re import sys -def query_package(module, pkgin_path, name): +def query_package(module, pkgng_path, name): - rc, out, err = module.run_command("%s info -g -e %s" % (pkgin_path, name)) + rc, out, err = module.run_command("%s info -g -e %s" % (pkgng_path, name)) if rc == 0: return True return False +def pkgng_older_than(module, pkgng_path, compare_version): + + rc, out, err = module.run_command("%s -v" % pkgng_path) + version = map(lambda x: int(x), re.split(r'[\._]', out)) -def remove_packages(module, pkgin_path, packages): + i = 0 + new_pkgng = True + while compare_version[i] == version[i]: + i += 1 + if i == min(len(compare_version), len(version)): + break + else: + if compare_version[i] > version[i]: + new_pkgng = False + return not new_pkgng + + +def remove_packages(module, pkgng_path, packages): remove_c = 0 # Using a for loop incase of error, we can report the package that failed for package in packages: # Query the package first, to see if we even need to remove - if not query_package(module, pkgin_path, package): + if not query_package(module, pkgng_path, package): continue if not module.check_mode: - rc, out, err = module.run_command("%s delete -y %s" % (pkgin_path, package)) + rc, out, err = module.run_command("%s delete -y %s" % (pkgng_path, package)) - if not module.check_mode and query_package(module, pkgin_path, package): + if not module.check_mode and query_package(module, pkgng_path, package): module.fail_json(msg="failed to remove %s: %s" % (package, out)) remove_c += 1 if remove_c > 0: - module.exit_json(changed=True, msg="removed %s package(s)" % remove_c) + return (True, "removed %s package(s)" % remove_c) - module.exit_json(changed=False, msg="package(s) already absent") + return (False, "package(s) already absent") -def install_packages(module, pkgin_path, packages, cached, pkgsite): +def install_packages(module, pkgng_path, packages, cached, pkgsite): install_c = 0 + # as of pkg-1.1.4, PACKAGESITE is deprecated in favor of repository definitions + # in /usr/local/etc/pkg/repos + old_pkgng = 
pkgng_older_than(module, pkgng_path, [1, 1, 4]) if pkgsite != "": - pkgsite="PACKAGESITE=%s" % (pkgsite) - - if not module.check_mode and cached == "no": - rc, out, err = module.run_command("%s %s update" % (pkgsite, pkgin_path)) + if old_pkgng: + pkgsite = "PACKAGESITE=%s" % (pkgsite) + else: + pkgsite = "-r %s" % (pkgsite) + + if not module.check_mode and not cached: + if old_pkgng: + rc, out, err = module.run_command("%s %s update" % (pkgsite, pkgng_path)) + else: + rc, out, err = module.run_command("%s update" % (pkgng_path)) if rc != 0: module.fail_json(msg="Could not update catalogue") for package in packages: - if query_package(module, pkgin_path, package): + if query_package(module, pkgng_path, package): continue if not module.check_mode: - rc, out, err = module.run_command("%s %s install -g -U -y %s" % (pkgsite, pkgin_path, package)) + if old_pkgng: + rc, out, err = module.run_command("%s %s install -g -U -y %s" % (pkgsite, pkgng_path, package)) + else: + rc, out, err = module.run_command("%s install %s -g -U -y %s" % (pkgng_path, pkgsite, package)) - if not module.check_mode and not query_package(module, pkgin_path, package): + if not module.check_mode and not query_package(module, pkgng_path, package): module.fail_json(msg="failed to install %s: %s" % (package, out), stderr=err) install_c += 1 if install_c > 0: - module.exit_json(changed=True, msg="present %s package(s)" % (install_c)) + return (True, "added %s package(s)" % (install_c)) - module.exit_json(changed=False, msg="package(s) already present") + return (False, "package(s) already present") +def annotation_query(module, pkgng_path, package, tag): + rc, out, err = module.run_command("%s info -g -A %s" % (pkgng_path, package)) + match = re.search(r'^\s*(?P%s)\s*:\s*(?P\w+)' % tag, out, flags=re.MULTILINE) + if match: + return match.group('value') + return False + + +def annotation_add(module, pkgng_path, package, tag, value): + _value = annotation_query(module, pkgng_path, package, tag) + if not _value: + # Annotation does not exist, add it. 
+ rc, out, err = module.run_command('%s annotate -y -A %s %s "%s"' + % (pkgng_path, package, tag, value)) + if rc != 0: + module.fail_json("could not annotate %s: %s" + % (package, out), stderr=err) + return True + elif _value != value: + # Annotation exists, but value differs + module.fail_json( + mgs="failed to annotate %s, because %s is already set to %s, but should be set to %s" + % (package, tag, _value, value)) + return False + else: + # Annotation exists, nothing to do + return False + +def annotation_delete(module, pkgng_path, package, tag, value): + _value = annotation_query(module, pkgng_path, package, tag) + if _value: + rc, out, err = module.run_command('%s annotate -y -D %s %s' + % (pkgng_path, package, tag)) + if rc != 0: + module.fail_json("could not delete annotation to %s: %s" + % (package, out), stderr=err) + return True + return False + +def annotation_modify(module, pkgng_path, package, tag, value): + _value = annotation_query(module, pkgng_path, package, tag) + if not value: + # No such tag + module.fail_json("could not change annotation to %s: tag %s does not exist" + % (package, tag)) + elif _value == value: + # No change in value + return False + else: + rc,out,err = module.run_command('%s annotate -y -M %s %s "%s"' + % (pkgng_path, package, tag, value)) + if rc != 0: + module.fail_json("could not change annotation annotation to %s: %s" + % (package, out), stderr=err) + return True + + +def annotate_packages(module, pkgng_path, packages, annotation): + annotate_c = 0 + annotations = map(lambda _annotation: + re.match(r'(?P[\+-:])(?P\w+)(=(?P\w+))?', + _annotation).groupdict(), + re.split(r',', annotation)) + + operation = { + '+': annotation_add, + '-': annotation_delete, + ':': annotation_modify + } + + for package in packages: + for _annotation in annotations: + annotate_c += ( 1 if operation[_annotation['operation']]( + module, pkgng_path, package, + _annotation['tag'], _annotation['value']) else 0 ) + + if annotate_c > 0: + return (True, "added %s annotations." 
% annotate_c) + return (False, "changed no annotations") def main(): module = AnsibleModule( argument_spec = dict( - state = dict(default="present", choices=["present","absent"]), + state = dict(default="present", choices=["present","absent"], required=False), name = dict(aliases=["pkg"], required=True), cached = dict(default=False, type='bool'), + annotation = dict(default="", required=False), pkgsite = dict(default="", required=False)), supports_check_mode = True) - pkgin_path = module.get_bin_path('pkg', True) + pkgng_path = module.get_bin_path('pkg', True) p = module.params pkgs = p["name"].split(",") + changed = False + msgs = [] + if p["state"] == "present": - install_packages(module, pkgin_path, pkgs, p["cached"], p["pkgsite"]) + _changed, _msg = install_packages(module, pkgng_path, pkgs, p["cached"], p["pkgsite"]) + changed = changed or _changed + msgs.append(_msg) elif p["state"] == "absent": - remove_packages(module, pkgin_path, pkgs) + _changed, _msg = remove_packages(module, pkgng_path, pkgs) + changed = changed or _changed + msgs.append(_msg) + + if p["annotation"]: + _changed, _msg = annotate_packages(module, pkgng_path, pkgs, p["annotation"]) + changed = changed or _changed + msgs.append(_msg) + + module.exit_json(changed=changed, msg=", ".join(msgs)) + + # import module snippets from ansible.module_utils.basic import * diff --git a/library/packaging/pkgutil b/library/packaging/pkgutil index d6c4f536c5a..e7d1ce7a0d6 100644 --- a/library/packaging/pkgutil +++ b/library/packaging/pkgutil @@ -58,13 +58,14 @@ pkgutil: name=CSWcommon state=present # Install a package from a specific repository pkgutil: name=CSWnrpe site='ftp://myinternal.repo/opencsw/kiel state=latest' ''' + import os +import pipes def package_installed(module, name): cmd = [module.get_bin_path('pkginfo', True)] cmd.append('-q') cmd.append(name) - #rc, out, err = module.run_command(' '.join(cmd), shell=False) rc, out, err = module.run_command(' '.join(cmd)) if rc == 0: return True @@ -73,12 +74,14 @@ def package_installed(module, name): def package_latest(module, name, site): # Only supports one package + name = pipes.quote(name) + site = pipes.quote(site) cmd = [ 'pkgutil', '--single', '-c' ] if site is not None: cmd += [ '-t', site ] cmd.append(name) cmd += [ '| tail -1 | grep -v SAME' ] - rc, out, err = module.run_command(' '.join(cmd)) + rc, out, err = module.run_command(' '.join(cmd), use_unsafe_shell=True) if rc == 1: return True else: diff --git a/library/packaging/portage b/library/packaging/portage new file mode 100644 index 00000000000..2cce4b41d1e --- /dev/null +++ b/library/packaging/portage @@ -0,0 +1,387 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2013, Yap Sok Ann +# Written by Yap Sok Ann +# Based on apt module written by Matthew Williams +# +# This module is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This software is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this software. If not, see . 
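The annotation mini-grammar that the pkgng changes above introduce is easiest to check in isolation. Here is a minimal, runnable sketch of the same `<+/-/:><tag>[=<value>]` parsing; the `parse_annotations` helper is illustrative only, not part of the module:

```python
import re

def parse_annotations(annotation):
    # Each comma-separated item is <operation><tag>[=<value>], where the
    # operation is '+' (add), '-' (delete) or ':' (modify), mirroring the
    # regex used by annotate_packages() above.
    items = []
    for item in annotation.split(','):
        m = re.match(r'(?P<operation>[\+-:])(?P<tag>\w+)(=(?P<value>\w+))?', item)
        if m is None:
            raise ValueError('bad annotation: %s' % item)
        items.append(m.groupdict())
    return items

print(parse_annotations('+test1=baz,-test2,:test3=foobar'))
# [{'operation': '+', 'tag': 'test1', 'value': 'baz'},
#  {'operation': '-', 'tag': 'test2', 'value': None},
#  {'operation': ':', 'tag': 'test3', 'value': 'foobar'}]
```

Each parsed dict is then dispatched through an operation table to the matching add/delete/modify handler, which is how `annotate_packages` keeps the per-operation logic in separate functions.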
+ + +DOCUMENTATION = ''' +--- +module: portage +short_description: Package manager for Gentoo +description: + - Manages Gentoo packages +version_added: "1.6" + +options: + package: + description: + - Package atom or set, e.g. C(sys-apps/foo) or C(>foo-2.13) or C(@world) + required: false + default: null + + state: + description: + - State of the package atom + required: false + default: "present" + choices: [ "present", "installed", "emerged", "absent", "removed", "unmerged" ] + + update: + description: + - Update packages to the best version available (--update) + required: false + default: null + choices: [ "yes" ] + + deep: + description: + - Consider the entire dependency tree of packages (--deep) + required: false + default: null + choices: [ "yes" ] + + newuse: + description: + - Include installed packages where USE flags have changed (--newuse) + required: false + default: null + choices: [ "yes" ] + + oneshot: + description: + - Do not add the packages to the world file (--oneshot) + required: false + default: null + choices: [ "yes" ] + + noreplace: + description: + - Do not re-emerge installed packages (--noreplace) + required: false + default: null + choices: [ "yes" ] + + nodeps: + description: + - Only merge packages but not their dependencies (--nodeps) + required: false + default: null + choices: [ "yes" ] + + onlydeps: + description: + - Only merge packages' dependencies but not the packages (--onlydeps) + required: false + default: null + choices: [ "yes" ] + + depclean: + description: + - Remove packages not needed by explicitly merged packages (--depclean) + - If no package is specified, clean up the world's dependencies + - Otherwise, --depclean serves as a dependency aware version of --unmerge + required: false + default: null + choices: [ "yes" ] + + quiet: + description: + - Run emerge in quiet mode (--quiet) + required: false + default: null + choices: [ "yes" ] + + verbose: + description: + - Run emerge in verbose mode (--verbose) + required: false + default: null + choices: [ "yes" ] + + sync: + description: + - Sync package repositories first + - If yes, perform "emerge --sync" + - If web, perform "emerge-webrsync" + required: false + default: null + choices: [ "yes", "web" ] + +requirements: [ gentoolkit ] +author: Yap Sok Ann +notes: [] +''' + +EXAMPLES = ''' +# Make sure package foo is installed +- portage: package=foo state=present + +# Make sure package foo is not installed +- portage: package=foo state=absent + +# Update package foo to the "best" version +- portage: package=foo update=yes + +# Sync repositories and update world +- portage: package=@world update=yes deep=yes sync=yes + +# Remove unneeded packages +- portage: depclean=yes + +# Remove package foo if it is not explicitly needed +- portage: package=foo state=absent depclean=yes +''' + + +import os +import pipes + + +def query_package(module, package, action): + if package.startswith('@'): + return query_set(module, package, action) + return query_atom(module, package, action) + + +def query_atom(module, atom, action): + cmd = '%s list %s' % (module.equery_path, atom) + + rc, out, err = module.run_command(cmd) + return rc == 0 + + +def query_set(module, set, action): + system_sets = [ + '@live-rebuild', + '@module-rebuild', + '@preserved-rebuild', + '@security', + '@selected', + '@system', + '@world', + '@x11-module-rebuild', + ] + + if set in system_sets: + if action == 'unmerge': + module.fail_json(msg='set %s cannot be removed' % set) + return False + + world_sets_path = 
'/var/lib/portage/world_sets' + if not os.path.exists(world_sets_path): + return False + + cmd = 'grep %s %s' % (set, world_sets_path) + + rc, out, err = module.run_command(cmd) + return rc == 0 + + +def sync_repositories(module, webrsync=False): + if module.check_mode: + module.fail_json(msg='check mode not supported by sync') + + if webrsync: + webrsync_path = module.get_bin_path('emerge-webrsync', required=True) + cmd = '%s --quiet' % webrsync_path + else: + cmd = '%s --sync --quiet' % module.emerge_path + + rc, out, err = module.run_command(cmd) + if rc != 0: + module.fail_json(msg='could not sync package repositories') + + +# Note: In the 3 functions below, equery is done one-by-one, but emerge is done +# in one go. If that is not desirable, split the packages into multiple tasks +# instead of joining them together with comma. + + +def emerge_packages(module, packages): + p = module.params + + if not (p['update'] or p['noreplace']): + for package in packages: + if not query_package(module, package, 'emerge'): + break + else: + module.exit_json(changed=False, msg='Packages already present.') + + args = [] + for flag in [ + 'update', 'deep', 'newuse', + 'oneshot', 'noreplace', + 'nodeps', 'onlydeps', + 'quiet', 'verbose', + ]: + if p[flag]: + args.append('--%s' % flag) + + cmd, (rc, out, err) = run_emerge(module, packages, *args) + if rc != 0: + module.fail_json( + cmd=cmd, rc=rc, stdout=out, stderr=err, + msg='Packages not installed.', + ) + + changed = True + for line in out.splitlines(): + if line.startswith('>>> Emerging (1 of'): + break + else: + changed = False + + module.exit_json( + changed=changed, cmd=cmd, rc=rc, stdout=out, stderr=err, + msg='Packages installed.', + ) + + +def unmerge_packages(module, packages): + p = module.params + + for package in packages: + if query_package(module, package, 'unmerge'): + break + else: + module.exit_json(changed=False, msg='Packages already absent.') + + args = ['--unmerge'] + + for flag in ['quiet', 'verbose']: + if p[flag]: + args.append('--%s' % flag) + + cmd, (rc, out, err) = run_emerge(module, packages, *args) + + if rc != 0: + module.fail_json( + cmd=cmd, rc=rc, stdout=out, stderr=err, + msg='Packages not removed.', + ) + + module.exit_json( + changed=True, cmd=cmd, rc=rc, stdout=out, stderr=err, + msg='Packages removed.', + ) + + +def cleanup_packages(module, packages): + p = module.params + + if packages: + for package in packages: + if query_package(module, package, 'unmerge'): + break + else: + module.exit_json(changed=False, msg='Packages already absent.') + + args = ['--depclean'] + + for flag in ['quiet', 'verbose']: + if p[flag]: + args.append('--%s' % flag) + + cmd, (rc, out, err) = run_emerge(module, packages, *args) + if rc != 0: + module.fail_json(cmd=cmd, rc=rc, stdout=out, stderr=err) + + removed = 0 + for line in out.splitlines(): + if not line.startswith('Number removed:'): + continue + parts = line.split(':') + removed = int(parts[1].strip()) + changed = removed > 0 + + module.exit_json( + changed=changed, cmd=cmd, rc=rc, stdout=out, stderr=err, + msg='Depclean completed.', + ) + + +def run_emerge(module, packages, *args): + args = list(args) + + if module.check_mode: + args.append('--pretend') + + cmd = [module.emerge_path] + args + packages + return cmd, module.run_command(cmd) + + +portage_present_states = ['present', 'emerged', 'installed'] +portage_absent_states = ['absent', 'unmerged', 'removed'] + + +def main(): + module = AnsibleModule( + argument_spec=dict( + package=dict(default=None, aliases=['name']), + 
state=dict( + default=portage_present_states[0], + choices=portage_present_states + portage_absent_states, + ), + update=dict(default=None, choices=['yes']), + deep=dict(default=None, choices=['yes']), + newuse=dict(default=None, choices=['yes']), + oneshot=dict(default=None, choices=['yes']), + noreplace=dict(default=None, choices=['yes']), + nodeps=dict(default=None, choices=['yes']), + onlydeps=dict(default=None, choices=['yes']), + depclean=dict(default=None, choices=['yes']), + quiet=dict(default=None, choices=['yes']), + verbose=dict(default=None, choices=['yes']), + sync=dict(default=None, choices=['yes', 'web']), + ), + required_one_of=[['package', 'sync', 'depclean']], + mutually_exclusive=[['nodeps', 'onlydeps'], ['quiet', 'verbose']], + supports_check_mode=True, + ) + + module.emerge_path = module.get_bin_path('emerge', required=True) + module.equery_path = module.get_bin_path('equery', required=True) + + p = module.params + + if p['sync']: + sync_repositories(module, webrsync=(p['sync'] == 'web')) + if not p['package']: + return + + packages = p['package'].split(',') if p['package'] else [] + + if p['depclean']: + if packages and p['state'] not in portage_absent_states: + module.fail_json( + msg='Depclean can only be used with package when the state is ' + 'one of: %s' % portage_absent_states, + ) + + cleanup_packages(module, packages) + + elif p['state'] in portage_present_states: + emerge_packages(module, packages) + + elif p['state'] in portage_absent_states: + unmerge_packages(module, packages) + +# import module snippets +from ansible.module_utils.basic import * + +main() diff --git a/library/packaging/portinstall b/library/packaging/portinstall index 4bef8035be3..88e654b8db4 100644 --- a/library/packaging/portinstall +++ b/library/packaging/portinstall @@ -71,7 +71,7 @@ def query_package(module, name): if pkg_info_path: pkgng = False pkg_glob_path = module.get_bin_path('pkg_glob', True) - rc, out, err = module.run_command("%s -e `pkg_glob %s`" % (pkg_info_path, name)) + rc, out, err = module.run_command("%s -e `pkg_glob %s`" % (pkg_info_path, pipes.quote(name)), use_unsafe_shell=True) else: pkgng = True pkg_info_path = module.get_bin_path('pkg', True) @@ -128,11 +128,11 @@ def remove_packages(module, packages): if not query_package(module, package): continue - rc, out, err = module.run_command("%s `%s %s`" % (pkg_delete_path, pkg_glob_path, package)) + rc, out, err = module.run_command("%s `%s %s`" % (pkg_delete_path, pkg_glob_path, pipes.quote(package)), use_unsafe_shell=True) if query_package(module, package): name_without_digits = re.sub('[0-9]', '', package) - rc, out, err = module.run_command("%s `%s %s`" % (pkg_delete_path, pkg_glob_path, name_without_digits)) + rc, out, err = module.run_command("%s `%s %s`" % (pkg_delete_path, pkg_glob_path, pipes.quote(name_without_digits)),use_unsafe_shell=True) if query_package(module, package): module.fail_json(msg="failed to remove %s: %s" % (package, out)) diff --git a/library/packaging/redhat_subscription b/library/packaging/redhat_subscription index e363aa0946a..f9918ada4b0 100644 --- a/library/packaging/redhat_subscription +++ b/library/packaging/redhat_subscription @@ -75,39 +75,13 @@ EXAMPLES = ''' import os import re import types -import subprocess import ConfigParser import shlex -class CommandException(Exception): - pass - - -def run_command(args): - ''' - Convenience method to run a command, specified as a list of arguments. 
- Returns: - * tuple - (stdout, stder, retcode) - ''' - - # Coerce into a string - if isinstance(args, str): - args = shlex.split(args) - - # Run desired command - proc = subprocess.Popen(args, stdout=subprocess.PIPE, - stderr=subprocess.STDOUT) - (stdout, stderr) = proc.communicate() - returncode = proc.poll() - if returncode != 0: - cmd = ' '.join(args) - raise CommandException("Command failed (%s): %s\n%s" % (returncode, cmd, stdout)) - return (stdout, stderr, returncode) - - -class RegistrationBase (object): - def __init__(self, username=None, password=None): +class RegistrationBase(object): + def __init__(self, module, username=None, password=None): + self.module = module self.username = username self.password = password @@ -147,9 +121,10 @@ class RegistrationBase (object): class Rhsm(RegistrationBase): - def __init__(self, username=None, password=None): - RegistrationBase.__init__(self, username, password) + def __init__(self, module, username=None, password=None): + RegistrationBase.__init__(self, module, username, password) self.config = self._read_config() + self.module = module def _read_config(self, rhsm_conf='/etc/rhsm/rhsm.conf'): ''' @@ -199,8 +174,8 @@ class Rhsm(RegistrationBase): for k,v in kwargs.items(): if re.search(r'^(system|rhsm)_', k): args.append('--%s=%s' % (k.replace('_','.'), v)) - - run_command(args) + + self.module.run_command(args, check_rc=True) @property def is_registered(self): @@ -216,13 +191,11 @@ class Rhsm(RegistrationBase): os.path.isfile('/etc/pki/consumer/key.pem') args = ['subscription-manager', 'identity'] - try: - (stdout, stderr, retcode) = run_command(args) - except CommandException, e: - return False - else: - # Display some debug output + rc, stdout, stderr = self.module.run_command(args, check_rc=False) + if rc == 0: return True + else: + return False def register(self, username, password, autosubscribe, activationkey): ''' @@ -243,8 +216,7 @@ class Rhsm(RegistrationBase): if password: args.extend(['--password', password]) - # Do the needful... 
- run_command(args) + rc, stderr, stdout = self.module.run_command(args, check_rc=True) def unsubscribe(self): ''' @@ -253,7 +225,7 @@ class Rhsm(RegistrationBase): * Exception - if error occurs while running command ''' args = ['subscription-manager', 'unsubscribe', '--all'] - run_command(args) + rc, stderr, stdout = self.module.run_command(args, check_rc=True) def unregister(self): ''' @@ -262,7 +234,7 @@ class Rhsm(RegistrationBase): * Exception - if error occurs while running command ''' args = ['subscription-manager', 'unregister'] - run_command(args) + rc, stderr, stdout = self.module.run_command(args, check_rc=True) def subscribe(self, regexp): ''' @@ -273,7 +245,7 @@ class Rhsm(RegistrationBase): ''' # Available pools ready for subscription - available_pools = RhsmPools() + available_pools = RhsmPools(self.module) for pool in available_pools.filter(regexp): pool.subscribe() @@ -284,7 +256,8 @@ class RhsmPool(object): Convenience class for housing subscription information ''' - def __init__(self, **kwargs): + def __init__(self, module, **kwargs): + self.module = module for k,v in kwargs.items(): setattr(self, k, v) @@ -292,15 +265,20 @@ class RhsmPool(object): return str(self.__getattribute__('_name')) def subscribe(self): - (stdout, stderr, retcode) = run_command("subscription-manager subscribe --pool %s" % self.PoolId) - return True + args = "subscription-manager subscribe --pool %s" % self.PoolId + rc, stdout, stderr = self.module.run_command(args, check_rc=True) + if rc == 0: + return True + else: + return False class RhsmPools(object): """ This class is used for manipulating pools subscriptions with RHSM """ - def __init__(self): + def __init__(self, module): + self.module = module self.products = self._load_product_list() def __iter__(self): @@ -310,7 +288,8 @@ class RhsmPools(object): """ Loads list of all availaible pools for system in data structure """ - (stdout, stderr, retval) = run_command("subscription-manager list --available") + args = "subscription-manager list --available" + rc, stdout, stderr = self.module.run_command(args, check_rc=True) products = [] for line in stdout.split('\n'): @@ -326,7 +305,7 @@ class RhsmPools(object): value = value.strip() if key in ['ProductName', 'SubscriptionName']: # Remember the name for later processing - products.append(RhsmPool(_name=value, key=value)) + products.append(RhsmPool(self.module, _name=value, key=value)) elif products: # Associate value with most recently recorded product products[-1].__setattr__(key, value) @@ -348,7 +327,7 @@ class RhsmPools(object): def main(): # Load RHSM configuration from file - rhn = Rhsm() + rhn = Rhsm(None) module = AnsibleModule( argument_spec = dict( @@ -364,6 +343,7 @@ def main(): ) ) + rhn.module = module state = module.params['state'] username = module.params['username'] password = module.params['password'] diff --git a/library/packaging/rhn_register b/library/packaging/rhn_register index 5e8c3718f98..552dfcc580a 100644 --- a/library/packaging/rhn_register +++ b/library/packaging/rhn_register @@ -58,7 +58,7 @@ EXAMPLES = ''' # Register as user (joe_user) with password (somepass) against a satellite # server specified by (server_url). -- rhn_register: +- rhn_register: > state=present username=joe_user password=somepass @@ -72,12 +72,7 @@ EXAMPLES = ''' ''' import sys -import os -import re import types -import subprocess -import ConfigParser -import shlex import xmlrpclib import urlparse @@ -89,72 +84,10 @@ try: except ImportError, e: module.fail_json(msg="Unable to import up2date_client. 
Is 'rhn-client-tools' installed?\n%s" % e) - -class CommandException(Exception): - pass - - -def run_command(args): - ''' - Convenience method to run a command, specified as a list of arguments. - Returns: - * tuple - (stdout, stder, retcode) - ''' - - # Coerce into a string - if isinstance(args, str): - args = shlex.split(args) - - # Run desired command - proc = subprocess.Popen(args, stdout=subprocess.PIPE, - stderr=subprocess.STDOUT) - (stdout, stderr) = proc.communicate() - returncode = proc.poll() - if returncode != 0: - cmd = ' '.join(args) - raise CommandException("Command failed (%s): %s\n%s" % (returncode, cmd, stdout)) - return (stdout, stderr, returncode) - - -class RegistrationBase (object): - def __init__(self, username=None, password=None): - self.username = username - self.password = password - - def configure(self): - raise NotImplementedError("Must be implemented by a sub-class") - - def enable(self): - # Remove any existing redhat.repo - redhat_repo = '/etc/yum.repos.d/redhat.repo' - if os.path.isfile(redhat_repo): - os.unlink(redhat_repo) - - def register(self): - raise NotImplementedError("Must be implemented by a sub-class") - - def unregister(self): - raise NotImplementedError("Must be implemented by a sub-class") - - def unsubscribe(self): - raise NotImplementedError("Must be implemented by a sub-class") - - def update_plugin_conf(self, plugin, enabled=True): - plugin_conf = '/etc/yum/pluginconf.d/%s.conf' % plugin - if os.path.isfile(plugin_conf): - cfg = ConfigParser.ConfigParser() - cfg.read([plugin_conf]) - if enabled: - cfg.set('main', 'enabled', 1) - else: - cfg.set('main', 'enabled', 0) - fd = open(plugin_conf, 'rwa+') - cfg.write(fd) - fd.close() - - def subscribe(self, **kwargs): - raise NotImplementedError("Must be implemented by a sub-class") - +# INSERT REDHAT SNIPPETS +from ansible.module_utils.redhat import * +# INSERT COMMON SNIPPETS +from ansible.module_utils.basic import * class Rhn(RegistrationBase): @@ -264,21 +197,26 @@ class Rhn(RegistrationBase): Register system to RHN. If enable_eus=True, extended update support will be requested. 
'''
-        register_cmd = "/usr/sbin/rhnreg_ks --username '%s' --password '%s' --force" % (self.username, self.password)
+        register_cmd = "/usr/sbin/rhnreg_ks --username='%s' --password='%s' --force" % (self.username, self.password)
+        if self.module.params.get('server_url', None):
+            register_cmd += " --serverUrl=%s" % self.module.params.get('server_url')
         if enable_eus:
             register_cmd += " --use-eus-channel"
         if activationkey is not None:
             register_cmd += " --activationkey '%s'" % activationkey
         # FIXME - support --profilename
         # FIXME - support --systemorgid
-        run_command(register_cmd)
+        rc, stdout, stderr = self.module.run_command(register_cmd, check_rc=True, use_unsafe_shell=True)

     def api(self, method, *args):
         '''
             Convenience RPC wrapper
         '''
         if not hasattr(self, 'server') or self.server is None:
-            url = "https://xmlrpc.%s/rpc/api" % self.hostname
+            if self.hostname != 'rhn.redhat.com':
+                url = "https://%s/rpc/api" % self.hostname
+            else:
+                url = "https://xmlrpc.%s/rpc/api" % self.hostname
             self.server = xmlrpclib.Server(url, verbose=0)
             self.session = self.server.auth.login(self.username, self.password)

@@ -309,14 +247,14 @@ class Rhn(RegistrationBase):
             Subscribe to requested yum repositories using 'rhn-channel' command
         '''
         rhn_channel_cmd = "rhn-channel --user='%s' --password='%s'" % (self.username, self.password)
-        (stdout, stderr, rc) = run_command(rhn_channel_cmd + " --available-channels")
+        rc, stdout, stderr = self.module.run_command(rhn_channel_cmd + " --available-channels", check_rc=True)

         # Enable requested repoids
         for wanted_channel in channels:
             # Each inserted repo regexp will be matched. If no match, no success.
             for available_channel in stdout.rstrip().split('\n'): # .rstrip() because of \n at the end -> empty string at the end
                 if re.search(wanted_channel, available_channel):
-                    run_command(rhn_channel_cmd + " --add --channel=%s" % available_channel)
+                    rc, stdout, stderr = self.module.run_command(rhn_channel_cmd + " --add --channel=%s" % available_channel, check_rc=True)

 def main():
@@ -341,6 +279,7 @@ def main():
     rhn.configure(module.params['server_url'])
     activationkey = module.params['activationkey']
     channels = module.params['channels']
+    rhn.module = module

     # Ensure system is registered
     if state == 'present':
@@ -359,10 +298,10 @@ def main():
             rhn.enable()
             rhn.register(module.params['enable_eus'] == True, activationkey)
             rhn.subscribe(channels)
-        except CommandException, e:
+        except Exception, e:
             module.fail_json(msg="Failed to register with '%s': %s" % (rhn.hostname, e))
-        else:
-            module.exit_json(changed=True, msg="System successfully registered to '%s'." % rhn.hostname)
+
+        module.exit_json(changed=True, msg="System successfully registered to '%s'." % rhn.hostname)

     # Ensure system is *not* registered
     if state == 'absent':
@@ -371,12 +310,10 @@ def main():
     else:
         try:
             rhn.unregister()
-        except CommandException, e:
+        except Exception, e:
             module.fail_json(msg="Failed to unregister: %s" % e)
-        else:
-            module.exit_json(changed=True, msg="System successfully unregistered from %s." % rhn.hostname)
+
+        module.exit_json(changed=True, msg="System successfully unregistered from %s." % rhn.hostname)

-# import module snippets
-from ansible.module_utils.basic import *
 main()
diff --git a/library/packaging/rpm_key b/library/packaging/rpm_key
index 82532477348..d60706b157d 100644
--- a/library/packaging/rpm_key
+++ b/library/packaging/rpm_key
@@ -42,6 +42,14 @@ options:
     choices: [present, absent]
     description:
       - Whether the key will be imported or removed from the rpm db.
+ validate_certs: + description: + - If C(no) and the C(key) is a url starting with https, SSL certificates will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + ''' EXAMPLES = ''' @@ -57,7 +65,6 @@ EXAMPLES = ''' import syslog import os.path import re -import urllib2 import tempfile # Attempt to download at most 8192 bytes. @@ -116,8 +123,8 @@ class RpmKey: def fetch_key(self, url, maxbytes=MAXBYTES): """Downloads a key from url, returns a valid path to a gpg key""" try: - fd = urllib2.urlopen(url) - key = fd.read(maxbytes) + rsp, info = fetch_url(self.module, url) + key = rsp.read(maxbytes) if not is_pubkey(key): self.module.fail_json(msg="Not a public key: %s" % url) tmpfd, tmpname = tempfile.mkstemp() @@ -131,7 +138,9 @@ class RpmKey: def normalize_keyid(self, keyid): """Ensure a keyid doesn't have a leading 0x, has leading or trailing whitespace, and make sure is lowercase""" ret = keyid.strip().lower() - if ret.startswith(('0x', '0X')): + if ret.startswith('0x'): + return ret[2:] + elif ret.startswith('0X'): return ret[2:] else: return ret @@ -141,9 +150,9 @@ class RpmKey: stdout, stderr = self.execute_command([gpg, '--no-tty', '--batch', '--with-colons', '--fixed-list-mode', '--list-packets', keyfile]) for line in stdout.splitlines(): line = line.strip() - if line.startswith('keyid:'): + if line.startswith(':signature packet:'): # We want just the last 8 characters of the keyid - keyid = line.split(':')[1].strip()[8:] + keyid = line.split()[-1].strip()[8:] return keyid self.json_fail(msg="Unexpected gpg output") @@ -161,7 +170,7 @@ class RpmKey: return stdout, stderr def is_key_imported(self, keyid): - stdout, stderr = self.execute_command([self.rpm, '-q', 'gpg-pubkey']) + stdout, stderr = self.execute_command([self.rpm, '-qa', 'gpg-pubkey']) for line in stdout.splitlines(): line = line.strip() if not line: @@ -187,7 +196,8 @@ def main(): module = AnsibleModule( argument_spec = dict( state=dict(default='present', choices=['present', 'absent'], type='str'), - key=dict(required=True, type='str') + key=dict(required=True, type='str'), + validate_certs=dict(default='yes', type='bool'), ), supports_check_mode=True ) @@ -198,4 +208,5 @@ def main(): # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * main() diff --git a/library/packaging/svr4pkg b/library/packaging/svr4pkg index 485e7ebcbfe..4e790b46c52 100644 --- a/library/packaging/svr4pkg +++ b/library/packaging/svr4pkg @@ -57,6 +57,20 @@ options: description: - Specifies the location of a response file to be used if package expects input on install. (added in Ansible 1.4) required: false + zone: + description: + - Whether to install the package only in the current zone, or install it into all zones. + - The installation into all zones works only if you are working with the global zone. + required: false + default: "all" + choices: ["current", "all"] + version_added: "1.6" + category: + description: + - Install/Remove category instead of a single package. 
+        required: false
+        choices: ["true", "false"]
+        version_added: "1.6"
'''

EXAMPLES = '''
 # Install a package from an already copied file
 - svr4pkg: name=CSWcommon src=/tmp/cswpkgs.pkg state=present

 # Install a package directly from an http site
-- svr4pkg: name=CSWpkgutil src=http://get.opencsw.org/now state=present
+- svr4pkg: name=CSWpkgutil src=http://get.opencsw.org/now state=present zone=current

 # Install a package with a response file
 - svr4pkg: name=CSWggrep src=/tmp/third-party.pkg response_file=/tmp/ggrep.response state=present

 # Ensure that a package is not installed.
 - svr4pkg: name=SUNWgnome-sound-recorder state=absent
+
+# Ensure that a category is not installed.
+- svr4pkg: name=FIREFOX state=absent category=true
 '''

 import os
 import tempfile

-def package_installed(module, name):
+def package_installed(module, name, category):
     cmd = [module.get_bin_path('pkginfo', True)]
     cmd.append('-q')
+    if category:
+        cmd.append('-c')
     cmd.append(name)
     rc, out, err = module.run_command(' '.join(cmd))
     if rc == 0:
@@ -116,13 +135,18 @@ def run_command(module, cmd):
     cmd[0] = module.get_bin_path(progname, True)
     return module.run_command(cmd)

-def package_install(module, name, src, proxy, response_file):
+def package_install(module, name, src, proxy, response_file, zone, category):
     adminfile = create_admin_file()
-    cmd = [ 'pkgadd', '-na', adminfile, '-d', src ]
+    cmd = [ 'pkgadd', '-n']
+    if zone == 'current':
+        cmd += [ '-G' ]
+    cmd += [ '-a', adminfile, '-d', src ]
     if proxy is not None:
         cmd += [ '-x', proxy ]
     if response_file is not None:
         cmd += [ '-r', response_file ]
+    if category:
+        cmd += [ '-Y' ]
     cmd.append(name)
     (rc, out, err) = run_command(module, cmd)
     os.unlink(adminfile)
@@ -130,7 +154,10 @@
-def package_uninstall(module, name, src):
+def package_uninstall(module, name, src, category):
     adminfile = create_admin_file()
-    cmd = [ 'pkgrm', '-na', adminfile, name]
+    if category:
+        cmd = [ 'pkgrm', '-na', adminfile, '-Y', name ]
+    else:
+        cmd = [ 'pkgrm', '-na', adminfile, name]
     (rc, out, err) = run_command(module, cmd)
     os.unlink(adminfile)
     return (rc, out, err)
@@ -142,7 +169,9 @@ def main():
         state = dict(required = True, choices=['present', 'absent']),
         src = dict(default = None),
         proxy = dict(default = None),
-        response_file = dict(default = None)
+        response_file = dict(default = None),
+        zone = dict(required=False, default = 'all', choices=['current','all']),
+        category = dict(default=False, type='bool')
     ),
     supports_check_mode=True
 )
@@ -151,6 +180,8 @@ def main():
     src = module.params['src']
     proxy = module.params['proxy']
     response_file = module.params['response_file']
+    zone = module.params['zone']
+    category = module.params['category']
     rc = None
     out = ''
     err = ''
@@ -162,20 +193,20 @@ def main():
         if src is None:
             module.fail_json(name=name, msg="src is required when state=present")

-        if not package_installed(module, name):
+        if not package_installed(module, name, category):
             if module.check_mode:
                 module.exit_json(changed=True)
-            (rc, out, err) = package_install(module, name, src, proxy, response_file)
+            (rc, out, err) = package_install(module, name, src, proxy, response_file, zone, category)
             # Stdout is normally empty but for some packages can be
             # very long and is not often useful
             if len(out) > 75:
                 out = out[:75] + '...'
elif state == 'absent': - if package_installed(module, name): + if package_installed(module, name, category): if module.check_mode: module.exit_json(changed=True) - (rc, out, err) = package_uninstall(module, name, src) + (rc, out, err) = package_uninstall(module, name, src, category) out = out[:75] if rc is None: diff --git a/library/packaging/swdepot b/library/packaging/swdepot index 6fd89088cc0..b41a860531f 100644 --- a/library/packaging/swdepot +++ b/library/packaging/swdepot @@ -19,6 +19,7 @@ # along with this software. If not, see . import re +import pipes DOCUMENTATION = ''' --- @@ -78,9 +79,9 @@ def query_package(module, name, depot=None): cmd_list = '/usr/sbin/swlist -a revision -l product' if depot: - rc, stdout, stderr = module.run_command("%s -s %s %s | grep %s" % (cmd_list, depot, name, name)) + rc, stdout, stderr = module.run_command("%s -s %s %s | grep %s" % (cmd_list, pipes.quote(depot), pipes.quote(name), pipes.quote(name)), use_unsafe_shell=True) else: - rc, stdout, stderr = module.run_command("%s %s | grep %s" % (cmd_list, name, name)) + rc, stdout, stderr = module.run_command("%s %s | grep %s" % (cmd_list, pipes.quote(name), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: version = re.sub("\s\s+|\t" , " ", stdout).strip().split()[1] else: diff --git a/library/packaging/urpmi b/library/packaging/urpmi index b001ed94dee..be49dfd2648 100644 --- a/library/packaging/urpmi +++ b/library/packaging/urpmi @@ -91,7 +91,8 @@ def query_package(module, name): # rpm -q returns 0 if the package is installed, # 1 if it is not installed - rc = os.system("rpm -q %s" % (name)) + cmd = "rpm -q %s" % (name) + rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc == 0: return True else: @@ -103,13 +104,14 @@ def query_package_provides(module, name): # rpm -q returns 0 if the package is installed, # 1 if it is not installed - rc = os.system("rpm -q --provides %s >/dev/null" % (name)) + cmd = "rpm -q --provides %s" % (name) + rc, stdout, stderr = module.run_command(cmd, check_rc=False) return rc == 0 def update_package_db(module): - rc = os.system("urpmi.update -a -q") - + cmd = "urpmi.update -a -q" + rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc != 0: module.fail_json(msg="could not update package db") @@ -123,7 +125,8 @@ def remove_packages(module, packages): if not query_package(module, package): continue - rc = os.system("%s --auto %s > /dev/null" % (URPME_PATH, package)) + cmd = "%s --auto %s" % (URPME_PATH, package) + rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc != 0: module.fail_json(msg="failed to remove %s" % (package)) @@ -155,7 +158,7 @@ def install_packages(module, pkgspec, force=True, no_suggests=True): else: force_yes = '' - cmd = ("%s --auto %s --quiet %s %s > /dev/null" % (URPMI_PATH, force_yes, no_suggests_yes, packages)) + cmd = ("%s --auto %s --quiet %s %s" % (URPMI_PATH, force_yes, no_suggests_yes, packages)) rc, out, err = module.run_command(cmd) diff --git a/library/packaging/yum b/library/packaging/yum index 61bb836b43a..aded7abbb63 100644 --- a/library/packaging/yum +++ b/library/packaging/yum @@ -3,6 +3,7 @@ # (c) 2012, Red Hat, Inc # Written by Seth Vidal +# (c) 2014, Epic Games, Inc. 
# # This file is part of Ansible # @@ -108,7 +109,7 @@ EXAMPLES = ''' - name: remove the Apache package yum: name=httpd state=removed -- name: install the latest version of Apche from the testing repo +- name: install the latest version of Apache from the testing repo yum: name=httpd enablerepo=testing state=installed - name: upgrade all packages @@ -535,6 +536,7 @@ def install(module, items, repoq, yum_basecmd, conf_file, en_repos, dis_repos): if found: continue + # if not - then pass in the spec as what to install # we could get here if nothing provides it but that's not # the error we're catching here diff --git a/library/source_control/bzr b/library/source_control/bzr index bc2dfc3089f..996150a39af 100644 --- a/library/source_control/bzr +++ b/library/source_control/bzr @@ -75,16 +75,16 @@ class Bzr(object): self.version = version self.bzr_path = bzr_path - def _command(self, args_list, **kwargs): - (rc, out, err) = self.module.run_command( - [self.bzr_path] + args_list, **kwargs) + def _command(self, args_list, cwd=None, **kwargs): + (rc, out, err) = self.module.run_command([self.bzr_path] + args_list, cwd=cwd, **kwargs) return (rc, out, err) def get_version(self): '''samples the version of the bzr branch''' - os.chdir(self.dest) + cmd = "%s revno" % self.bzr_path - revno = os.popen(cmd).read().strip() + rc, stdout, stderr = self.module.run_command(cmd, cwd=self.dest) + revno = stdout.strip() return revno def clone(self): @@ -94,17 +94,18 @@ class Bzr(object): os.makedirs(dest_dirname) except: pass - os.chdir(dest_dirname) if self.version.lower() != 'head': args_list = ["branch", "-r", self.version, self.parent, self.dest] else: args_list = ["branch", self.parent, self.dest] - return self._command(args_list, check_rc=True) + return self._command(args_list, check_rc=True, cwd=dest_dirname) def has_local_mods(self): - os.chdir(self.dest) + cmd = "%s status -S" % self.bzr_path - lines = os.popen(cmd).read().splitlines() + rc, stdout, stderr = self.module.run_command(cmd, cwd=self.dest) + lines = stdout.splitlines() + lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines) return len(lines) > 0 @@ -114,30 +115,27 @@ class Bzr(object): Discards any changes to tracked files in the working tree since that commit. 
''' - os.chdir(self.dest) if not force and self.has_local_mods(): self.module.fail_json(msg="Local modifications exist in branch (force=no).") - return self._command(["revert"], check_rc=True) + return self._command(["revert"], check_rc=True, cwd=self.dest) def fetch(self): '''updates branch from remote sources''' - os.chdir(self.dest) if self.version.lower() != 'head': - (rc, out, err) = self._command(["pull", "-r", self.version]) + (rc, out, err) = self._command(["pull", "-r", self.version], cwd=self.dest) else: - (rc, out, err) = self._command(["pull"]) + (rc, out, err) = self._command(["pull"], cwd=self.dest) if rc != 0: self.module.fail_json(msg="Failed to pull") return (rc, out, err) def switch_version(self): '''once pulled, switch to a particular revno or revid''' - os.chdir(self.dest) if self.version.lower() != 'head': args_list = ["revert", "-r", self.version] else: args_list = ["revert"] - return self._command(args_list, check_rc=True) + return self._command(args_list, check_rc=True, cwd=self.dest) # =========================================== diff --git a/library/source_control/git b/library/source_control/git index ca876c666b5..968b763b1a4 100644 --- a/library/source_control/git +++ b/library/source_control/git @@ -45,12 +45,13 @@ options: branch name, or a tag name. accept_hostkey: required: false - default: false + default: "no" + choices: [ "yes", "no" ] version_added: "1.5" description: - - Add the hostkey for the repo url if not already added. - If ssh_args contains "-o StrictHostKeyChecking=no", this - parameter is ignored. + - if C(yes), adds the hostkey for the repo url if not already + added. If ssh_args contains "-o StrictHostKeyChecking=no", + this parameter is ignored. ssh_opts: required: false default: None @@ -118,11 +119,20 @@ options: description: - if C(yes), repository will be created as a bare repo, otherwise it will be a standard repo with a workspace. + + recursive: + required: false + default: "yes" + choices: [ "yes", "no" ] + version_added: "1.6" + description: + - if C(no), repository will be cloned without the --recursive + option, skipping sub-modules. notes: - "If the task seems to be hanging, first verify remote host is in C(known_hosts). SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling - the git module, with the following command: ssh-keyscan remote_host.com >> /etc/ssh/ssh_known_hosts." + the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts." ''' EXAMPLES = ''' @@ -141,8 +151,37 @@ EXAMPLES = ''' import re import tempfile +def get_submodule_update_params(module, git_path, cwd): + + #or: git submodule [--quiet] update [--init] [-N|--no-fetch] + #[-f|--force] [--rebase] [--reference ] [--merge] + #[--recursive] [--] [...] 
+ + params = [] + + # run a bad submodule command to get valid params + cmd = "%s submodule update --help" % (git_path) + rc, stdout, stderr = module.run_command(cmd, cwd=cwd) + lines = stderr.split('\n') + update_line = None + for line in lines: + if 'git submodule [--quiet] update ' in line: + update_line = line + if update_line: + update_line = update_line.replace('[','') + update_line = update_line.replace(']','') + update_line = update_line.replace('|',' ') + parts = shlex.split(update_line) + for part in parts: + if part.startswith('--'): + part = part.replace('--', '') + params.append(part) + + return params + def write_ssh_wrapper(): - fd, wrapper_path = tempfile.mkstemp() + module_dir = get_module_path() + fd, wrapper_path = tempfile.mkstemp(prefix=module_dir + '/') fh = os.fdopen(fd, 'w+b') template = """#!/bin/sh if [ -z "$GIT_SSH_OPTS" ]; then @@ -181,26 +220,29 @@ def set_git_ssh(ssh_wrapper, key_file, ssh_opts): if ssh_opts: os.environ["GIT_SSH_OPTS"] = ssh_opts -def get_version(git_path, dest, ref="HEAD"): +def get_version(module, git_path, dest, ref="HEAD"): ''' samples the version of the git repo ''' - os.chdir(dest) + cmd = "%s rev-parse %s" % (git_path, ref) - sha = os.popen(cmd).read().rstrip("\n") + rc, stdout, stderr = module.run_command(cmd, cwd=dest) + sha = stdout.rstrip('\n') return sha -def clone(git_path, module, repo, dest, remote, depth, version, bare, reference): +def clone(git_path, module, repo, dest, remote, depth, version, bare, + reference, recursive): ''' makes a new git repo if it does not already exist ''' dest_dirname = os.path.dirname(dest) try: os.makedirs(dest_dirname) except: pass - os.chdir(dest_dirname) cmd = [ git_path, 'clone' ] if bare: cmd.append('--bare') else: - cmd.extend([ '--origin', remote, '--recursive' ]) + cmd.extend([ '--origin', remote ]) + if recursive: + cmd.extend([ '--recursive' ]) if is_remote_branch(git_path, module, dest, repo, version) \ or is_remote_tag(git_path, module, dest, repo, version): cmd.extend([ '--branch', version ]) @@ -209,19 +251,20 @@ def clone(git_path, module, repo, dest, remote, depth, version, bare, reference) if reference: cmd.extend([ '--reference', str(reference) ]) cmd.extend([ repo, dest ]) - module.run_command(cmd, check_rc=True) + module.run_command(cmd, check_rc=True, cwd=dest_dirname) if bare: - os.chdir(dest) if remote != 'origin': - module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True) + module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest) -def has_local_mods(git_path, dest, bare): +def has_local_mods(module, git_path, dest, bare): if bare: return False - os.chdir(dest) - cmd = "%s status -s" % (git_path,) - lines = os.popen(cmd).read().splitlines() + + cmd = "%s status -s" % (git_path) + rc, stdout, stderr = module.run_command(cmd, cwd=dest) + lines = stdout.splitlines() lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines) + return len(lines) > 0 def reset(git_path, module, dest): @@ -230,16 +273,16 @@ def reset(git_path, module, dest): Discards any changes to tracked files in working tree since that commit. 
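# The bzr and git hunks in this patch all retire process-wide os.chdir calls
# in favor of run_command's cwd= keyword, so only the spawned child runs in
# the repository directory and the module process's own cwd is untouched.
# A hedged before/after sketch (module and dest assumed to exist):

# before: os.chdir(dest); sha = os.popen("git rev-parse HEAD").read()
rc, out, err = module.run_command("git rev-parse HEAD", cwd=dest)
sha = out.rstrip('\n')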
''' - os.chdir(dest) cmd = "%s reset --hard HEAD" % (git_path,) - return module.run_command(cmd, check_rc=True) + return module.run_command(cmd, check_rc=True, cwd=dest) def get_remote_head(git_path, module, dest, version, remote, bare): cloning = False + cwd = None if remote == module.params['repo']: cloning = True else: - os.chdir(dest) + cwd = dest if version == 'HEAD': if cloning: # cloning the repo, just get the remote's HEAD version @@ -255,7 +298,7 @@ def get_remote_head(git_path, module, dest, version, remote, bare): # appears to be a sha1. return as-is since it appears # cannot check for a specific sha1 on remote return version - (rc, out, err) = module.run_command(cmd, check_rc=True ) + (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd) if len(out) < 1: module.fail_json(msg="Could not determine remote revision for %s" % version) rev = out.split()[0] @@ -263,17 +306,16 @@ def get_remote_head(git_path, module, dest, version, remote, bare): def is_remote_tag(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version) - (rc, out, err) = module.run_command(cmd, check_rc=True) + (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if version in out: return True else: return False def get_branches(git_path, module, dest): - os.chdir(dest) branches = [] cmd = '%s branch -a' % (git_path,) - (rc, out, err) = module.run_command(cmd) + (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Could not determine branch data - received %s" % out) for line in out.split('\n'): @@ -281,10 +323,9 @@ def get_branches(git_path, module, dest): return branches def get_tags(git_path, module, dest): - os.chdir(dest) tags = [] cmd = '%s tag' % (git_path,) - (rc, out, err) = module.run_command(cmd) + (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Could not determine tag data - received %s" % out) for line in out.split('\n'): @@ -293,7 +334,7 @@ def get_tags(git_path, module, dest): def is_remote_branch(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version) - (rc, out, err) = module.run_command(cmd, check_rc=True) + (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) if version in out: return True else: @@ -352,18 +393,17 @@ def get_head_branch(git_path, module, dest, remote, bare=False): def fetch(git_path, module, repo, dest, version, remote, bare): ''' updates repo from remote sources ''' - os.chdir(dest) if bare: - (rc, out1, err1) = module.run_command([git_path, 'fetch', remote, '+refs/heads/*:refs/heads/*']) + (rc, out1, err1) = module.run_command([git_path, 'fetch', remote, '+refs/heads/*:refs/heads/*'], cwd=dest) else: - (rc, out1, err1) = module.run_command("%s fetch %s" % (git_path, remote)) + (rc, out1, err1) = module.run_command("%s fetch %s" % (git_path, remote), cwd=dest) if rc != 0: module.fail_json(msg="Failed to download remote objects and refs") if bare: - (rc, out2, err2) = module.run_command([git_path, 'fetch', remote, '+refs/tags/*:refs/tags/*']) + (rc, out2, err2) = module.run_command([git_path, 'fetch', remote, '+refs/tags/*:refs/tags/*'], cwd=dest) else: - (rc, out2, err2) = module.run_command("%s fetch --tags %s" % (git_path, remote)) + (rc, out2, err2) = module.run_command("%s fetch --tags %s" % (git_path, remote), cwd=dest) if rc != 0: module.fail_json(msg="Failed to download remote objects and refs") (rc, out3, err3) = submodule_update(git_path, module, dest) @@ 
-371,28 +411,33 @@ def fetch(git_path, module, repo, dest, version, remote, bare): def submodule_update(git_path, module, dest): ''' init and update any submodules ''' - os.chdir(dest) + + # get the valid submodule params + params = get_submodule_update_params(module, git_path, dest) + # skip submodule commands if .gitmodules is not present if not os.path.exists(os.path.join(dest, '.gitmodules')): return (0, '', '') cmd = [ git_path, 'submodule', 'sync' ] - (rc, out, err) = module.run_command(cmd, check_rc=True) - cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ] - (rc, out, err) = module.run_command(cmd) + (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) + if 'remote' in params: + cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ,'--remote' ] + else: + cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ] + (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: - module.fail_json(msg="Failed to init/update submodules") + module.fail_json(msg="Failed to init/update submodules: %s" % out + err) return (rc, out, err) def switch_version(git_path, module, dest, remote, version): ''' once pulled, switch to a particular SHA, tag, or branch ''' - os.chdir(dest) cmd = '' if version != 'HEAD': if is_remote_branch(git_path, module, dest, remote, version): if not is_local_branch(git_path, module, dest, version): cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version) else: - (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version)) + (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % version) cmd = "%s reset --hard %s/%s" % (git_path, remote, version) @@ -400,11 +445,11 @@ def switch_version(git_path, module, dest, remote, version): cmd = "%s checkout --force %s" % (git_path, version) else: branch = get_head_branch(git_path, module, dest, remote) - (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch)) + (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % branch) cmd = "%s reset --hard %s" % (git_path, remote) - (rc, out1, err1) = module.run_command(cmd) + (rc, out1, err1) = module.run_command(cmd, cwd=dest) if rc != 0: if version != 'HEAD': module.fail_json(msg="Failed to checkout %s" % (version)) @@ -431,6 +476,7 @@ def main(): ssh_opts=dict(default=None, required=False), executable=dict(default=None), bare=dict(default='no', type='bool'), + recursive=dict(default='yes', type='bool'), ), supports_check_mode=True ) @@ -464,6 +510,8 @@ def main(): else: add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey']) + recursive = module.params['recursive'] + if bare: gitconfig = os.path.join(dest, 'config') else: @@ -479,17 +527,18 @@ def main(): if module.check_mode: remote_head = get_remote_head(git_path, module, dest, version, repo, bare) module.exit_json(changed=True, before=before, after=remote_head) - clone(git_path, module, repo, dest, remote, depth, version, bare, reference) + clone(git_path, module, repo, dest, remote, depth, version, bare, + reference, recursive) elif not update: # Just return having found a repo already in the dest path # this does no checking that the repo is the actual repo # requested. 
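# get_submodule_update_params above feature-detects flags such as --remote by
# scraping the usage line printed by `git submodule update --help`, and
# submodule_update only appends --remote when it was advertised. A rough
# standalone sketch of the same parsing idea (the sample usage line is
# illustrative, not captured from a real git build):

import shlex

def parse_usage_flags(usage_line):
    # drop the [ ] decoration, treat | alternations as whitespace,
    # then keep only the long options
    for ch in '[]':
        usage_line = usage_line.replace(ch, '')
    usage_line = usage_line.replace('|', ' ')
    return [p.lstrip('-') for p in shlex.split(usage_line) if p.startswith('--')]

# parse_usage_flags("git submodule [--quiet] update [--init] [--remote]")
# returns ['quiet', 'init', 'remote']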
- before = get_version(git_path, dest) + before = get_version(module, git_path, dest) module.exit_json(changed=False, before=before, after=before) else: # else do a pull - local_mods = has_local_mods(git_path, dest, bare) - before = get_version(git_path, dest) + local_mods = has_local_mods(module, git_path, dest, bare) + before = get_version(module, git_path, dest) if local_mods: # failure should happen regardless of check mode if not force: @@ -519,7 +568,7 @@ def main(): switch_version(git_path, module, dest, remote, version) # determine if we changed anything - after = get_version(git_path, dest) + after = get_version(module, git_path, dest) changed = False if before != after or local_mods: diff --git a/library/source_control/github_hooks b/library/source_control/github_hooks index 55eb8d3c8d3..6a8d1ced935 100644 --- a/library/source_control/github_hooks +++ b/library/source_control/github_hooks @@ -19,7 +19,6 @@ # along with Ansible. If not, see . import json -import urllib2 import base64 DOCUMENTATION = ''' @@ -51,6 +50,14 @@ options: - This tells the githooks module what you want it to do. required: true choices: [ "create", "cleanall" ] + validate_certs: + description: + - If C(no), SSL certificates for the target repo will not be validated. This should only be used + on personally controlled sites using self-signed certificates. + required: false + default: 'yes' + choices: ['yes', 'no'] + author: Phillip Gentry, CX Inc ''' @@ -62,16 +69,19 @@ EXAMPLES = ''' - local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }} ''' -def list(hookurl, oauthkey, repo, user): +def list(module, hookurl, oauthkey, repo, user): url = "%s/hooks" % repo auth = base64.encodestring('%s:%s' % (user, oauthkey)).replace('\n', '') - req = urllib2.Request(url) - req.add_header("Authorization", "Basic %s" % auth) - res = urllib2.urlopen(req) - out = res.read() - return False, out - -def clean504(hookurl, oauthkey, repo, user): + headers = { + 'Authorization': 'Basic %s' % auth, + } + response, info = fetch_url(module, url, headers=headers) + if info['status'] != 200: + return False, '' + else: + return False, response.read() + +def clean504(module, hookurl, oauthkey, repo, user): current_hooks = list(hookurl, oauthkey, repo, user)[1] decoded = json.loads(current_hooks) @@ -79,11 +89,11 @@ def clean504(hookurl, oauthkey, repo, user): if hook['last_response']['code'] == 504: # print "Last response was an ERROR for hook:" # print hook['id'] - delete(hookurl, oauthkey, repo, user, hook['id']) + delete(module, hookurl, oauthkey, repo, user, hook['id']) return 0, current_hooks -def cleanall(hookurl, oauthkey, repo, user): +def cleanall(module, hookurl, oauthkey, repo, user): current_hooks = list(hookurl, oauthkey, repo, user)[1] decoded = json.loads(current_hooks) @@ -91,11 +101,11 @@ def cleanall(hookurl, oauthkey, repo, user): if hook['last_response']['code'] != 200: # print "Last response was an ERROR for hook:" # print hook['id'] - delete(hookurl, oauthkey, repo, user, hook['id']) + delete(module, hookurl, oauthkey, repo, user, hook['id']) return 0, current_hooks -def create(hookurl, oauthkey, repo, user): +def create(module, hookurl, oauthkey, repo, user): url = "%s/hooks" % repo values = { "active": True, @@ -107,29 +117,23 @@ def create(hookurl, oauthkey, repo, user): } data = json.dumps(values) auth = base64.encodestring('%s:%s' % (user, oauthkey)).replace('\n', '') - out='[]' - try : - req = urllib2.Request(url) - req.add_data(data) - 
req.add_header("Authorization", "Basic %s" % auth) - res = urllib2.urlopen(req) - out = res.read() - return 0, out - except urllib2.HTTPError, e : - if e.code == 422 : - return 0, out - -def delete(hookurl, oauthkey, repo, user, hookid): + headers = { + 'Authorization': 'Basic %s' % auth, + } + response, info = fetch_url(module, url, data=data, headers=headers) + if info['status'] != 200: + return 0, '[]' + else: + return 0, response.read() + +def delete(module, hookurl, oauthkey, repo, user, hookid): url = "%s/hooks/%s" % (repo, hookid) auth = base64.encodestring('%s:%s' % (user, oauthkey)).replace('\n', '') - req = urllib2.Request(url) - req.get_method = lambda: 'DELETE' - req.add_header("Authorization", "Basic %s" % auth) - # req.add_header('Content-Type', 'application/xml') - # req.add_header('Accept', 'application/xml') - res = urllib2.urlopen(req) - out = res.read() - return out + headers = { + 'Authorization': 'Basic %s' % auth, + } + response, info = fetch_url(module, url, headers=headers, method='DELETE') + return response.read() def main(): module = AnsibleModule( @@ -139,6 +143,7 @@ def main(): oauthkey=dict(required=True), repo=dict(required=True), user=dict(required=True), + validate_certs=dict(default='yes', type='bool'), ) ) @@ -149,16 +154,16 @@ def main(): user = module.params['user'] if action == "list": - (rc, out) = list(hookurl, oauthkey, repo, user) + (rc, out) = list(module, hookurl, oauthkey, repo, user) if action == "clean504": - (rc, out) = clean504(hookurl, oauthkey, repo, user) + (rc, out) = clean504(module, hookurl, oauthkey, repo, user) if action == "cleanall": - (rc, out) = cleanall(hookurl, oauthkey, repo, user) + (rc, out) = cleanall(module, hookurl, oauthkey, repo, user) if action == "create": - (rc, out) = create(hookurl, oauthkey, repo, user) + (rc, out) = create(module, hookurl, oauthkey, repo, user) if rc != 0: module.fail_json(msg="failed", result=out) @@ -168,4 +173,6 @@ def main(): # import module snippets from ansible.module_utils.basic import * +from ansible.module_utils.urls import * + main() diff --git a/library/source_control/subversion b/library/source_control/subversion index 497052af005..29d62240af3 100644 --- a/library/source_control/subversion +++ b/library/source_control/subversion @@ -27,7 +27,7 @@ description: version_added: "0.7" author: Dane Summers, njharman@gmail.com notes: - - Requres I(svn) to be installed on the client. + - Requires I(svn) to be installed on the client. requirements: [] options: repo: @@ -70,11 +70,20 @@ options: description: - Path to svn executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. + export: + required: false + default: False + version_added: "1.6" + description: + - If C(True), do an export instead of a checkout/update. ''' EXAMPLES = ''' # Checkout subversion repository to specified folder.
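# The github_hooks rewrite above swaps raw urllib2 plumbing for
# module_utils.urls.fetch_url, which returns a (response, info) pair and
# honors the new validate_certs option. A minimal sketch of the call shape
# the new helpers share (URL and auth values are placeholders):

from ansible.module_utils.urls import fetch_url

def get_hooks(module, repo, auth):
    headers = {'Authorization': 'Basic %s' % auth}
    response, info = fetch_url(module, "%s/hooks" % repo, headers=headers)
    if info['status'] != 200:
        return ''
    return response.read()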
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout + +# Export subversion directory to folder +- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/export export=True ''' import re @@ -110,6 +119,10 @@ class Subversion(object): def checkout(self): '''Creates new svn working directory if it does not already exist.''' self._exec(["checkout", "-r", self.revision, self.repo, self.dest]) + + def export(self, force=False): + '''Export svn repo to directory''' + self._exec(["export", "-r", self.revision, self.repo, self.dest]) def switch(self): '''Change working directory's repo.''' @@ -163,6 +176,7 @@ def main(): username=dict(required=False), password=dict(required=False), executable=dict(default=None), + export=dict(default=False, required=False), ), supports_check_mode=True ) @@ -174,6 +188,7 @@ def main(): username = module.params['username'] password = module.params['password'] svn_path = module.params['executable'] or module.get_bin_path('svn', True) + export = module.params['export'] os.environ['LANG'] = 'C' svn = Subversion(module, dest, repo, revision, username, password, svn_path) @@ -183,7 +198,10 @@ def main(): local_mods = False if module.check_mode: module.exit_json(changed=True) - svn.checkout() + if not export: + svn.checkout() + else: + svn.export() elif os.path.exists("%s/.svn" % (dest, )): # Order matters. Need to get local mods before switch to avoid false # positives. Need to switch before revert to ensure we are reverting to diff --git a/library/system/alternatives b/library/system/alternatives new file mode 100755 index 00000000000..503f9745f12 --- /dev/null +++ b/library/system/alternatives @@ -0,0 +1,137 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +""" +Ansible module to manage symbolic link alternatives. +(c) 2014, Gabe Mulley + +This file is part of Ansible + +Ansible is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +Ansible is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with Ansible. If not, see . +""" + +DOCUMENTATION = ''' +--- +module: alternatives +short_description: Manages alternative programs for common commands +description: + - Manages symbolic links using the 'update-alternatives' tool provided on debian-like systems. + - Useful when multiple programs are installed but provide similar functionality (e.g. different editors). +version_added: "1.6" +options: + name: + description: + - The generic name of the link. + required: true + path: + description: + - The path to the real executable that the link should point to. + required: true + link: + description: + - The path to the symbolic link that should point to the real executable. 
+ required: false +requirements: [ update-alternatives ] +''' + +EXAMPLES = ''' +- name: correct java version selected + alternatives: name=java path=/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java + +- name: alternatives link created + alternatives: name=hadoop-conf link=/etc/hadoop/conf path=/etc/hadoop/conf.ansible +''' + +UPDATE_ALTERNATIVES = '/usr/sbin/update-alternatives' +DEFAULT_LINK_PRIORITY = 50 + +def main(): + + module = AnsibleModule( + argument_spec = dict( + name = dict(required=True), + path = dict(required=True), + link = dict(required=False), + ) + ) + + params = module.params + name = params['name'] + path = params['path'] + link = params['link'] + + current_path = None + all_alternatives = [] + + (rc, query_output, query_error) = module.run_command( + [UPDATE_ALTERNATIVES, '--query', name] + ) + + # Gather the current setting and all alternatives from the query output. + # Query output should look something like this: + + # Name: java + # Link: /usr/bin/java + # Slaves: + # java.1.gz /usr/share/man/man1/java.1.gz + # Status: manual + # Best: /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java + # Value: /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java + + # Alternative: /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java + # Priority: 1061 + # Slaves: + # java.1.gz /usr/lib/jvm/java-6-openjdk-amd64/jre/man/man1/java.1.gz + + # Alternative: /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java + # Priority: 1071 + # Slaves: + # java.1.gz /usr/lib/jvm/java-7-openjdk-amd64/jre/man/man1/java.1.gz + + if rc == 0: + for line in query_output.splitlines(): + split_line = line.split(':') + if len(split_line) == 2: + key = split_line[0] + value = split_line[1].strip() + if key == 'Value': + current_path = value + elif key == 'Alternative': + all_alternatives.append(value) + + if current_path != path: + try: + # install the requested path if necessary + if path not in all_alternatives: + module.run_command( + [UPDATE_ALTERNATIVES, '--install', link, name, path, str(DEFAULT_LINK_PRIORITY)], + check_rc=True + ) + + # select the requested path + module.run_command( + [UPDATE_ALTERNATIVES, '--set', name, path], + check_rc=True + ) + + module.exit_json(changed=True) + except subprocess.CalledProcessError as cpe: + module.fail_json(msg=str(dir(cpe))) + else: + module.exit_json(changed=False) + + +# import module snippets +from ansible.module_utils.basic import * +main() diff --git a/library/system/at b/library/system/at index ffac9d1d535..c63527563fd 100644 --- a/library/system/at +++ b/library/system/at @@ -21,17 +21,12 @@ DOCUMENTATION = ''' --- module: at -short_description: Schedule the execution of a command or scripts via the at command. +short_description: Schedule the execution of a command or script file via the at command. description: - - Use this module to schedule a command or script to run once in the future. - - All jobs are executed in the a queue. -version_added: "0.0" + - Use this module to schedule a command or script file to run once in the future. + - All jobs are executed in the 'a' queue. +version_added: "1.5" options: - user: - description: - - The user to execute the at command as. - required: false - default: null command: description: - A command to be executed in the future. @@ -39,25 +34,29 @@ options: default: null script_file: description: - - An existing script to be executed in the future. + - An existing script file to be executed in the future. 
required: false default: null - unit_count: + count: description: - - The count of units in the future to execute the command or script. + - The count of units in the future to execute the command or script file. required: true - unit_type: + units: description: - - The type of units in the future to execute the command or script. + - The type of units in the future to execute the command or script file. required: true choices: ["minutes", "hours", "days", "weeks"] - action: + state: description: - - The action to take for the job defaulting to add. Unique will verify that there is only one entry in the queue. - - Delete will remove all existing queued jobs. - required: true - choices: ["add", "delete", "unique"] - default: add + - The state dictates if the command or script file should be evaluated as present(added) or absent(deleted). + required: false + choices: ["present", "absent"] + default: "present" + unique: + description: + - If a matching job is present a new job will not be added. + required: false + default: false requirements: - at author: Richard Isaacson @@ -65,33 +64,45 @@ author: Richard Isaacson EXAMPLES = ''' # Schedule a command to execute in 20 minutes as root. -- at: command="ls -d / > /dev/null" unit_count=20 unit_type="minutes" - -# Schedule a script to execute in 1 hour as the neo user. -- at: script_file="/some/script.sh" user="neo" unit_count=1 unit_type="hours" +- at: command="ls -d / > /dev/null" count=20 units="minutes" # Match a command to an existing job and delete the job. -- at: command="ls -d / > /dev/null" action="delete" +- at: command="ls -d / > /dev/null" state="absent" # Schedule a command to execute in 20 minutes making sure it is unique in the queue. -- at: command="ls -d / > /dev/null" action="unique" unit_count=20 unit_type="minutes" +- at: command="ls -d / > /dev/null" unique=true count=20 units="minutes" ''' import os import tempfile -def matching_jobs(module, at_cmd, script_file, user=None): + +def add_job(module, result, at_cmd, count, units, command, script_file): + at_command = "%s now + %s %s -f %s" % (at_cmd, count, units, script_file) + rc, out, err = module.run_command(at_command, check_rc=True) + if command: + os.unlink(script_file) + result['changed'] = True + + +def delete_job(module, result, at_cmd, command, script_file): + for matching_job in get_matching_jobs(module, at_cmd, script_file): + at_command = "%s -d %s" % (at_cmd, matching_job) + rc, out, err = module.run_command(at_command, check_rc=True) + result['changed'] = True + if command: + os.unlink(script_file) + module.exit_json(**result) + + +def get_matching_jobs(module, at_cmd, script_file): matching_jobs = [] atq_cmd = module.get_bin_path('atq', True) # Get list of job numbers for the user. 
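# The reworked at module in this hunk funnels everything through a temporary
# script file: a raw command is written out, scheduled with
# "at now + <count> <units> -f <file>", and duplicates are found by replaying
# queued jobs with "at -c <job>". A hedged usage sketch of the helpers this
# hunk defines (module, result, at_cmd and unique assumed to exist):

command = "ls -d / > /dev/null"
script_file = create_tempfile(command)
if unique and get_matching_jobs(module, at_cmd, script_file):
    module.exit_json(**result)   # an equivalent job is already queued
add_job(module, result, at_cmd, 20, 'minutes', command, script_file)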
- atq_command = "%s" % (atq_cmd) - if user: - atq_command = "su '%s' -c '%s'" % (user, atq_command) - rc, out, err = module.run_command(atq_command) - if rc != 0: - module.fail_json(msg=err) + atq_command = "%s" % atq_cmd + rc, out, err = module.run_command(atq_command, check_rc=True) current_jobs = out.splitlines() if len(current_jobs) == 0: return matching_jobs @@ -104,100 +115,83 @@ def matching_jobs(module, at_cmd, script_file, user=None): for current_job in current_jobs: split_current_job = current_job.split() at_command = "%s -c %s" % (at_cmd, split_current_job[0]) - if user: - at_command = "su '%s' -c '%s'" % (user, at_command) - rc, out, err = module.run_command(at_command) - if rc != 0: - module.fail_json(msg=err) + rc, out, err = module.run_command(at_command, check_rc=True) if script_file_string in out: matching_jobs.append(split_current_job[0]) # Return the list. return matching_jobs -#================================================ + +def create_tempfile(command): + filed, script_file = tempfile.mkstemp(prefix='at') + fileh = os.fdopen(filed, 'w') + fileh.write(command) + fileh.close() + return script_file + def main(): module = AnsibleModule( argument_spec = dict( - user=dict(required=False), - command=dict(required=False), - script_file=dict(required=False), - unit_count=dict(required=False, - type='int'), - unit_type=dict(required=False, - default=None, - choices=["minutes", "hours", "days", "weeks"], - type="str"), - action=dict(required=False, - default="add", - choices=["add", "delete", "unique"], - type="str") + command=dict(required=False, + type='str'), + script_file=dict(required=False, + type='str'), + count=dict(required=False, + type='int'), + units=dict(required=False, + default=None, + choices=['minutes', 'hours', 'days', 'weeks'], + type='str'), + state=dict(required=False, + default='present', + choices=['present', 'absent'], + type='str'), + unique=dict(required=False, + default=False, + type='bool') ), - supports_check_mode = False, + mutually_exclusive=[['command', 'script_file']], + required_one_of=[['command', 'script_file']], + supports_check_mode=False ) at_cmd = module.get_bin_path('at', True) - user = module.params['user'] command = module.params['command'] script_file = module.params['script_file'] - unit_count = module.params['unit_count'] - unit_type = module.params['unit_type'] - action = module.params['action'] + count = module.params['count'] + units = module.params['units'] + state = module.params['state'] + unique = module.params['unique'] - if ((action == 'add') and (not unit_count or not unit_type)): - module.fail_json(msg="add action requires unit_count and unit_type") + if (state == 'present') and (not count or not units): + module.fail_json(msg="present state requires count and units") - if (not command) and (not script_file): - module.fail_json(msg="command or script_file not specified") - - if command and script_file: - module.fail_json(msg="command and script_file are mutually exclusive") - - result = {} - result['action'] = action - result['changed'] = False + result = {'state': state, 'changed': False} # If command transform it into a script_file if command: - filed, script_file = tempfile.mkstemp(prefix='at') - fileh = os.fdopen(filed, 'w') - fileh.write(command) - fileh.close() - - # if delete then return - if action == 'delete': - for matching_job in matching_jobs(module, at_cmd, script_file, user): - at_command = "%s -d %s" % (at_cmd, matching_job) - if user: - at_command = "su '%s' -c '%s'" % (user, at_ccommand) - rc, out, err 
= module.run_command(at_command) - if rc != 0: - module.fail_json(msg=err) - result['changed'] = True - module.exit_json(**result) + script_file = create_tempfile(command) + + # if absent remove existing and return + if state == 'absent': + delete_job(module, result, at_cmd, command, script_file) # if unique if existing return unchanged - if action == 'unique': - if len(matching_jobs(module, at_cmd, script_file, user)) != 0: + if unique: + if len(get_matching_jobs(module, at_cmd, script_file)) != 0: + if command: + os.unlink(script_file) module.exit_json(**result) result['script_file'] = script_file - result['unit_count'] = unit_count - result['unit_type'] = unit_type - - at_command = "%s now + %s %s -f %s" % (at_cmd, unit_count, unit_type, script_file) - if user: - # We expect that if this is an installed the permissions are already correct for the user to execute it. - at_command = "su '%s' -c '%s'" % (user, at_command) - rc, out, err = module.run_command(at_command) - if rc != 0: - module.fail_json(msg=err) - if command: - os.unlink(script_file) - result['changed'] = True + result['count'] = count + result['units'] = units + + add_job(module, result, at_cmd, count, units, command, script_file) module.exit_json(**result) diff --git a/library/system/authorized_key b/library/system/authorized_key index 1a7c8b97b0e..c40edb1f162 100644 --- a/library/system/authorized_key +++ b/library/system/authorized_key @@ -48,7 +48,12 @@ options: version_added: "1.2" manage_dir: description: - - Whether this module should manage the directory of the authorized_keys file. Make sure to set C(manage_dir=no) if you are using an alternate directory for authorized_keys set with C(path), since you could lock yourself out of SSH access. See the example below. + - Whether this module should manage the directory of the authorized key file. If + set, the module will create the directory, as well as set the owner and permissions + of an existing directory. Be sure to + set C(manage_dir=no) if you are using an alternate directory for + authorized_keys, as set with C(path), since you could lock yourself out of + SSH access. See the example below. required: false choices: [ "yes", "no" ] default: "yes" @@ -165,7 +170,7 @@ def keyfile(module, user, write=False, path=None, manage_dir=True): uid = user_entry.pw_uid gid = user_entry.pw_gid - if manage_dir in BOOLEANS_TRUE: + if manage_dir: if not os.path.exists(sshdir): os.mkdir(sshdir, 0700) if module.selinux_enabled(): @@ -199,33 +204,19 @@ def parseoptions(module, options): ''' options_dict = keydict() #ordered dict if options: - token_exp = [ - # matches separator - (r',+', False), - # matches option with value, e.g. from="x,y" - (r'([a-z0-9-]+)="((?:[^"\\]|\\.)*)"', True), - # matches single option, e.g. 
no-agent-forwarding - (r'[a-z0-9-]+', True) - ] - - pos = 0 - while pos < len(options): - match = None - for pattern, is_valid_option in token_exp: - regex = re.compile(pattern, re.IGNORECASE) - match = regex.match(options, pos) - if match: - text = match.group(0) - if is_valid_option: - if len(match.groups()) == 2: - options_dict[match.group(1)] = match.group(2) - else: - options_dict[text] = None - break - if not match: - module.fail_json(msg="invalid option string: %s" % options) - else: - pos = match.end(0) + try: + # the following regex will split on commas while + # ignoring those commas that fall within quotes + regex = re.compile(r'''((?:[^,"']|"[^"]*"|'[^']*')+)''') + parts = regex.split(options)[1:-1] + for part in parts: + if "=" in part: + (key, value) = part.split("=", 1) + options_dict[key] = value + elif part != ",": + options_dict[part] = None + except: + module.fail_json(msg="invalid option string: %s" % options) return options_dict @@ -254,7 +245,7 @@ def parsekey(module, raw_key): # split key safely lex = shlex.shlex(raw_key) - lex.quotes = ["'", '"'] + lex.quotes = [] lex.commenters = '' #keep comment hashes lex.whitespace_split = True key_parts = list(lex) @@ -315,7 +306,7 @@ def writekeys(module, filename, keys): option_strings = [] for option_key in options.keys(): if options[option_key]: - option_strings.append("%s=\"%s\"" % (option_key, options[option_key])) + option_strings.append("%s=%s" % (option_key, options[option_key])) else: option_strings.append("%s" % option_key) diff --git a/library/system/capabilities b/library/system/capabilities new file mode 100644 index 00000000000..f4a9f62c0d0 --- /dev/null +++ b/library/system/capabilities @@ -0,0 +1,187 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2014, Nate Coraor +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . +# + +DOCUMENTATION = ''' +--- +module: capabilities +short_description: Manage Linux capabilities +description: + - This module manipulates file privileges using the Linux capabilities(7) system. +version_added: "1.6" +options: + path: + description: + - Specifies the path to the file to be managed. + required: true + default: null + capability: + description: + - Desired capability to set (with operator and flags, if state is C(present)) or remove (if state is C(absent)) + required: true + default: null + aliases: [ 'cap' ] + state: + description: + - Whether the entry should be present or absent in the file's capabilities. + choices: [ "present", "absent" ] + default: present +notes: + - The capabilities system will automatically transform operators and flags + into the effective set, so (for example, cap_foo=ep will probably become + cap_foo+ep). This module does not attempt to determine the final operator + and flags to compare, so you will want to ensure that your capabilities + argument matches the final capabilities.
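# The capabilities module below compares entries as (name, operator, flags)
# tuples, splitting strings like "cap_sys_chroot+ep" on the first of the
# OPS characters '=', '-' and '+'. A rough standalone sketch of that parse,
# mirroring _parse_cap without the fail_json plumbing:

def parse_cap(cap):
    for op in ('=', '-', '+'):
        if op in cap:
            name, flags = cap.split(op, 1)
            return (name, op, flags)
    return (cap, None, None)

# parse_cap('cap_sys_chroot+ep') returns ('cap_sys_chroot', '+', 'ep')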
+requirements: [] +author: Nate Coraor + +EXAMPLES = ''' +# Set cap_sys_chroot+ep on /foo +- capabilities: path=/foo capability=cap_sys_chroot+ep state=present + +# Remove cap_net_bind_service from /bar +- capabilities: path=/bar capability=cap_net_bind_service state=absent +''' + + +OPS = ( '=', '-', '+' ) + +# ============================================================== + +import os +import tempfile +import re + +class CapabilitiesModule(object): + + platform = 'Linux' + distribution = None + + def __init__(self, module): + self.module = module + self.path = module.params['path'].strip() + self.capability = module.params['capability'].strip().lower() + self.state = module.params['state'] + self.getcap_cmd = module.get_bin_path('getcap', required=True) + self.setcap_cmd = module.get_bin_path('setcap', required=True) + self.capability_tup = self._parse_cap(self.capability, op_required=self.state=='present') + + self.run() + + def run(self): + + current = self.getcap(self.path) + caps = [ cap[0] for cap in current ] + + if self.state == 'present' and self.capability_tup not in current: + # need to add capability + if self.module.check_mode: + self.module.exit_json(changed=True, msg='capabilities changed') + else: + # remove from current cap list if it's already set (but op/flags differ) + current = filter(lambda x: x[0] != self.capability_tup[0], current) + # add new cap with correct op/flags + current.append( self.capability_tup ) + self.module.exit_json(changed=True, state=self.state, msg='capabilities changed', stdout=self.setcap(self.path, current)) + elif self.state == 'absent' and self.capability_tup[0] in caps: + # need to remove capability + if self.module.check_mode: + self.module.exit_json(changed=True, msg='capabilities changed') + else: + # remove from current cap list and then set current list + current = filter(lambda x: x[0] != self.capability_tup[0], current) + self.module.exit_json(changed=True, state=self.state, msg='capabilities changed', stdout=self.setcap(self.path, current)) + self.module.exit_json(changed=False, state=self.state) + + def getcap(self, path): + rval = [] + cmd = "%s -v %s" % (self.getcap_cmd, path) + rc, stdout, stderr = self.module.run_command(cmd) + # If file xattrs are set but no caps are set the output will be: + # '/foo =' + # If file xattrs are unset the output will be: + # '/foo' + # If the file does not exist the output will be (with rc == 0...): + # '/foo (No such file or directory)' + if rc != 0 or (stdout.strip() != path and stdout.count(' =') != 1): + self.module.fail_json(msg="Unable to get capabilities of %s" % path, stdout=stdout.strip(), stderr=stderr) + if stdout.strip() != path: + caps = stdout.split(' =')[1].strip().split() + for cap in caps: + cap = cap.lower() + # getcap condenses capabilities with the same op/flags into a + # comma-separated list, so we have to parse that + if ',' in cap: + cap_group = cap.split(',') + cap_group[-1], op, flags = self._parse_cap(cap_group[-1]) + for subcap in cap_group: + rval.append( ( subcap, op, flags ) ) + else: + rval.append(self._parse_cap(cap)) + return rval + + def setcap(self, path, caps): + caps = ' '.join([ ''.join(cap) for cap in caps ]) + cmd = "%s '%s' %s" % (self.setcap_cmd, caps, path) + rc, stdout, stderr = self.module.run_command(cmd) + if rc != 0: + self.module.fail_json(msg="Unable to set capabilities of %s" % path, stdout=stdout, stderr=stderr) + else: + return stdout + + def _parse_cap(self, cap, op_required=True): + opind = -1 + try: + i = 0 + while opind == -1: + opind
= cap.find(OPS[i]) + i += 1 + except: + if op_required: + self.module.fail_json(msg="Couldn't find operator (one of: %s)" % str(OPS)) + else: + return (cap, None, None) + op = cap[opind] + cap, flags = cap.split(op) + return (cap, op, flags) + +# ============================================================== +# main + +def main(): + + # defining module + module = AnsibleModule( + argument_spec = dict( + path = dict(aliases=['key'], required=True), + capability = dict(aliases=['cap'], required=True), + state = dict(default='present', choices=['present', 'absent']), + ), + supports_check_mode=True + ) + + CapabilitiesModule(module) + + sys.exit(0) + +# import module snippets +from ansible.module_utils.basic import * +main() diff --git a/library/system/cron b/library/system/cron index 39727b4c769..32e7e872f06 100644 --- a/library/system/cron +++ b/library/system/cron @@ -44,7 +44,6 @@ options: name: description: - Description of a crontab entry. - required: false default: null user: description: @@ -145,6 +144,7 @@ import os import re import tempfile import platform +import pipes CRONCMD = "/usr/bin/crontab" @@ -190,7 +190,8 @@ class CronTab(object): except: raise CronTabError("Unexpected error:", sys.exc_info()[0]) else: - (rc, out, err) = self.module.run_command(self._read_user_execute()) + # using safely quoted shell for now, but this really should be two non-shell calls instead. FIXME + (rc, out, err) = self.module.run_command(self._read_user_execute(), use_unsafe_shell=True) if rc != 0 and rc != 1: # 1 can mean that there are no jobs. raise CronTabError("Unable to read crontab") @@ -235,8 +236,8 @@ class CronTab(object): # Add the entire crontab back to the user crontab if not self.cron_file: - # os.system(self._write_execute(path)) - (rc, out, err) = self.module.run_command(self._write_execute(path)) + # quoting shell args for now but really this should be two non-shell calls. FIXME + (rc, out, err) = self.module.run_command(self._write_execute(path), use_unsafe_shell=True) os.unlink(path) if rc != 0: @@ -350,9 +351,11 @@ class CronTab(object): user = '' if self.user: if platform.system() == 'SunOS': - return "su '%s' -c '%s -l'" % (self.user, CRONCMD) + return "su %s -c '%s -l'" % (pipes.quote(self.user), pipes.quote(CRONCMD)) + elif platform.system() == 'AIX': + return "%s -l %s" % (pipes.quote(CRONCMD), pipes.quote(self.user)) else: - user = '-u %s' % self.user + user = '-u %s' % pipes.quote(self.user) return "%s %s %s" % (CRONCMD , user, '-l') def _write_execute(self, path): @@ -361,11 +364,11 @@ class CronTab(object): """ user = '' if self.user: - if platform.system() == 'SunOS': - return "chown %s %s ; su '%s' -c '%s %s'" % (self.user, path, self.user, CRONCMD, path) + if platform.system() in [ 'SunOS', 'AIX' ]: + return "chown %s %s ; su '%s' -c '%s %s'" % (pipes.quote(self.user), pipes.quote(path), pipes.quote(self.user), CRONCMD, pipes.quote(path)) else: - user = '-u %s' % self.user - return "%s %s %s" % (CRONCMD , user, path) + user = '-u %s' % pipes.quote(self.user) + return "%s %s %s" % (CRONCMD , user, pipes.quote(path)) diff --git a/library/system/debconf b/library/system/debconf index 5b47d6b2b18..5cb0ba1e8fc 100644 --- a/library/system/debconf +++ b/library/system/debconf @@ -34,6 +34,7 @@ notes: - A number of questions have to be answered (depending on the package). Use 'debconf-show ' on any Debian or derivative with the package installed to see questions/settings available. 
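# The cron hunks above apply the same quoting discipline as the packaging
# modules: every user-controlled fragment is passed through pipes.quote
# before the su/crontab command line is composed for the shell. A minimal
# sketch (the crontab path mirrors CRONCMD; the user value is untrusted):

import pipes

def read_crontab_cmd(user, croncmd='/usr/bin/crontab'):
    # each interpolated piece is quoted individually
    return "%s -u %s -l" % (croncmd, pipes.quote(user))

# read_crontab_cmd("alice; rm -rf /") returns
# "/usr/bin/crontab -u 'alice; rm -rf /' -l"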
+requirements: [ debconf, debconf-utils ] options: name: description: @@ -75,7 +76,7 @@ EXAMPLES = ''' debconf: name=locales question='locales/default_environment_locale' value=fr_FR.UTF-8 # set to generate locales: -debconf: name=locales question='locales/locales_to_be_generated value='en_US.UTF-8 UTF-8, fr_FR.UTF-8 UTF-8' +debconf: name=locales question='locales/locales_to_be_generated' value='en_US.UTF-8 UTF-8, fr_FR.UTF-8 UTF-8' # Accept oracle license debconf: name='oracle-java7-installer' question='shared/accepted-oracle-license-v1-1' value='true' vtype='select' @@ -84,6 +85,8 @@ debconf: name='oracle-java7-installer' question='shared/accepted-oracle-license- debconf: name='tzdata' ''' +import pipes + def get_selections(module, pkg): cmd = [module.get_bin_path('debconf-show', True), pkg] rc, out, err = module.run_command(' '.join(cmd)) @@ -94,7 +97,7 @@ def get_selections(module, pkg): selections = {} for line in out.splitlines(): - (key, value) = line.split(':') + (key, value) = line.split(':', 1) selections[ key.strip('*').strip() ] = value.strip() return selections @@ -105,11 +108,11 @@ def set_selection(module, pkg, question, vtype, value, unseen): data = ' '.join([ question, vtype, value ]) setsel = module.get_bin_path('debconf-set-selections', True) - cmd = ["echo '%s %s' |" % (pkg, data), setsel] + cmd = ["echo %s %s |" % (pipes.quote(pkg), pipes.quote(data)), setsel] if unseen: cmd.append('-u') - return module.run_command(' '.join(cmd)) + return module.run_command(' '.join(cmd), use_unsafe_shell=True) def main(): @@ -125,10 +128,10 @@ def main(): supports_check_mode=True, ) - #TODO: enable passing array of optionas and/or debconf file from get-selections dump + #TODO: enable passing array of options and/or debconf file from get-selections dump pkg = module.params["name"] question = module.params["question"] - vtype = module.params["vtype"] + vtype = module.params["vtype"] value = module.params["value"] unseen = module.params["unseen"] @@ -140,7 +143,7 @@ def main(): if question is not None: if vtype is None or value is None: - module.fail_json(msg="when supliying a question you must supply a valide vtype and value") + module.fail_json(msg="when supplying a question you must supply a valid vtype and value") if not question in prev or prev[question] != value: changed = True diff --git a/library/system/filesystem b/library/system/filesystem index 698c71d4534..46e798f6e81 100644 --- a/library/system/filesystem +++ b/library/system/filesystem @@ -79,7 +79,7 @@ def main(): cmd = module.get_bin_path('blkid', required=True) - rc,raw_fs,err = module.run_command("%s -o value -s TYPE %s" % (cmd, dev)) + rc,raw_fs,err = module.run_command("%s -c /dev/null -o value -s TYPE %s" % (cmd, dev)) fs = raw_fs.strip() diff --git a/library/system/firewalld b/library/system/firewalld index 62c90d0656c..22db165aad3 100644 --- a/library/system/firewalld +++ b/library/system/firewalld @@ -85,8 +85,13 @@ try: from firewall.client import FirewallClient fw = FirewallClient() + if not fw.connected: + raise Exception('failed to connect to the firewalld daemon') except ImportError: - print "fail=True msg='firewalld required for this module'" + print "failed=True msg='firewalld required for this module'" + sys.exit(1) +except Exception, e: + print "failed=True msg='%s'" % str(e) sys.exit(1) ################ diff --git a/library/system/hostname b/library/system/hostname index 781bdcd08aa..c6d1f819451 100644 --- a/library/system/hostname +++ b/library/system/hostname @@ -285,8 +285,8 @@ class 
FedoraStrategy(GenericStrategy): (rc, out, err)) def get_permanent_hostname(self): - cmd = 'hostnamectl status | awk \'/^ *Static hostname:/{printf("%s", $3)}\'' - rc, out, err = self.module.run_command(cmd) + cmd = 'hostnamectl --static status' + rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) diff --git a/library/system/locale_gen b/library/system/locale_gen new file mode 100644 index 00000000000..6225ce236dc --- /dev/null +++ b/library/system/locale_gen @@ -0,0 +1,151 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +import os +import os.path +from subprocess import Popen, PIPE, call + +DOCUMENTATION = ''' +--- +module: locale_gen +short_description: Creates or removes locales. +description: + - Manages locales by editing /etc/locale.gen and invoking locale-gen. +version_added: "1.6" +options: + name: + description: + - Name and encoding of the locale, such as "en_GB.UTF-8". + required: true + default: null + aliases: [] + state: + description: + - Whether the locale shall be present. + required: false + choices: ["present", "absent"] + default: "present" +''' + +EXAMPLES = ''' +# Ensure a locale exists. +- locale_gen: name=de_CH.UTF-8 state=present +''' + +# =========================================== +# locale module specific support methods. +# + +def is_present(name): + """Checks if the given locale is currently installed.""" + output = Popen(["locale", "-a"], stdout=PIPE).communicate()[0] + return any(fix_case(name) == fix_case(line) for line in output.splitlines()) + +def fix_case(name): + """locale -a might return the encoding in either lower or upper case. + Passing through this function makes them uniform for comparisons.""" + return name.replace(".utf8", ".UTF-8") + +def replace_line(existing_line, new_line): + """Replaces lines in /etc/locale.gen""" + with open("/etc/locale.gen", "r") as f: + lines = [line.replace(existing_line, new_line) for line in f] + with open("/etc/locale.gen", "w") as f: + f.write("".join(lines)) + +def apply_change(targetState, name, encoding): + """Create or remove locale. + + Keyword arguments: + targetState -- Desired state, either present or absent. + name -- Name including encoding such as de_CH.UTF-8. + encoding -- Encoding such as UTF-8. + """ + if targetState=="present": + # Create locale. + replace_line("# "+name+" "+encoding, name+" "+encoding) + else: + # Delete locale. + replace_line(name+" "+encoding, "# "+name+" "+encoding) + + localeGenExitValue = call("locale-gen") + if localeGenExitValue!=0: + raise EnvironmentError(localeGenExitValue, "locale.gen failed to execute, it returned "+str(localeGenExitValue)) + +def apply_change_ubuntu(targetState, name, encoding): + """Create or remove locale. + + Keyword arguments: + targetState -- Desired state, either present or absent. + name -- Name including encoding such as de_CH.UTF-8. + encoding -- Encoding such as UTF-8. + """ + if targetState=="present": + # Create locale. + # Ubuntu's patched locale-gen automatically adds the new locale to /var/lib/locales/supported.d/local + localeGenExitValue = call(["locale-gen", name]) + else: + # Delete locale involves discarding the locale from /var/lib/locales/supported.d/local and regenerating all locales.
+ with open("/var/lib/locales/supported.d/local", "r") as f: + content = f.readlines() + with open("/var/lib/locales/supported.d/local", "w") as f: + for line in content: + if line!=(name+" "+encoding+"\n"): + f.write(line) + # Purge locales and regenerate. + # Please provide a patch if you know how to avoid regenerating the locales to keep! + localeGenExitValue = call(["locale-gen", "--purge"]) + + if localeGenExitValue!=0: + raise EnvironmentError(localeGenExitValue, "locale.gen failed to execute, it returned "+str(localeGenExitValue)) + +# ============================================================== +# main + +def main(): + + module = AnsibleModule( + argument_spec = dict( + name = dict(required=True), + state = dict(choices=['present','absent'], required=True), + ), + supports_check_mode=True + ) + + name = module.params['name'] + if not "." in name: + module.fail_json(msg="Locale does not match pattern. Did you specify the encoding?") + state = module.params['state'] + + if not os.path.exists("/etc/locale.gen"): + if os.path.exists("/var/lib/locales/supported.d/local"): + # Ubuntu created its own system to manage locales. + ubuntuMode = True + else: + module.fail_json(msg="/etc/locale.gen and /var/lib/locales/supported.d/local are missing. Is the package “locales” installed?") + else: + # We found the common way to manage locales. + ubuntuMode = False + + prev_state = "present" if is_present(name) else "absent" + changed = (prev_state!=state) + + if module.check_mode: + module.exit_json(changed=changed) + else: + encoding = name.split(".")[1] + if changed: + try: + if ubuntuMode==False: + apply_change(state, name, encoding) + else: + apply_change_ubuntu(state, name, encoding) + except EnvironmentError as e: + module.fail_json(msg=e.strerror, exitValue=e.errno) + + module.exit_json(name=name, changed=changed, msg="OK") + +# import module snippets +from ansible.module_utils.basic import * + +main() diff --git a/library/system/lvg b/library/system/lvg index 4e24b25a5c9..906e13d6469 100644 --- a/library/system/lvg +++ b/library/system/lvg @@ -41,6 +41,12 @@ options: - The size of the physical extent in megabytes. Must be a power of 2. default: 4 required: false + vg_options: + description: + - Additional options to pass to C(vgcreate) when creating the volume group. 
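# The lvg change below takes the other escape route from shell quoting:
# user-supplied vg_options are split into separate argv elements and
# run_command is handed a list, which executes without any shell at all.
# A hedged sketch (module and vg assumed; the device path is a placeholder):

vgoptions = module.params.get('vg_options', '').split()
vgcreate_cmd = module.get_bin_path('vgcreate')
cmd = [vgcreate_cmd] + vgoptions + ['-s', '4', vg, '/dev/sdb1']
rc, out, err = module.run_command(cmd)   # list form: no shell involved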
+ default: null + required: false + version_added: "1.6" state: choices: [ "present", "absent" ] default: present @@ -99,6 +105,7 @@ def main(): vg=dict(required=True), pvs=dict(type='list'), pesize=dict(type='int', default=4), + vg_options=dict(), state=dict(choices=["absent", "present"], default='present'), force=dict(type='bool', default='no'), ), @@ -109,6 +116,7 @@ def main(): state = module.params['state'] force = module.boolean(module.params['force']) pesize = module.params['pesize'] + vgoptions = module.params.get('vg_options', '').split() if module.params['pvs']: dev_string = ' '.join(module.params['pvs']) @@ -162,13 +170,13 @@ def main(): ### create PV pvcreate_cmd = module.get_bin_path('pvcreate', True) for current_dev in dev_list: - rc,_,err = module.run_command("%s %s"%(pvcreate_cmd,current_dev)) + rc,_,err = module.run_command("%s %s" % (pvcreate_cmd,current_dev)) if rc == 0: changed = True else: - module.fail_json(msg="Creating physical volume '%s' failed"%current_dev, rc=rc, err=err) + module.fail_json(msg="Creating physical volume '%s' failed" % current_dev, rc=rc, err=err) vgcreate_cmd = module.get_bin_path('vgcreate') - rc,_,err = module.run_command("%s -s %s %s %s"%(vgcreate_cmd, pesize, vg, dev_string)) + rc,_,err = module.run_command([vgcreate_cmd] + vgoptions + ['-s', str(pesize), vg, dev_string]) if rc == 0: changed = True else: @@ -210,7 +218,7 @@ def main(): module.fail_json(msg="Creating physical volume '%s' failed"%current_dev, rc=rc, err=err) ### add PV to our VG vgextend_cmd = module.get_bin_path('vgextend', True) - rc,_,err = module.run_command("%s %s %s"%(vgextend_cmd, vg, devs_to_add_string)) + rc,_,err = module.run_command("%s %s %s" % (vgextend_cmd, vg, devs_to_add_string)) if rc == 0: changed = True else: diff --git a/library/system/modprobe b/library/system/modprobe index 82ca86b9bd5..73e2c827f41 100644 --- a/library/system/modprobe +++ b/library/system/modprobe @@ -34,11 +34,19 @@ options: choices: [ present, absent ] description: - Whether the module should be present or absent. + params: + required: false + default: "" + version_added: "1.6" + description: + - Module parameters.
''' EXAMPLES = ''' # Add the 802.1q module - modprobe: name=8021q state=present +# Add the dummy module +- modprobe: name=dummy state=present params="numdummies=2" ''' def main(): @@ -46,6 +54,7 @@ def main(): argument_spec={ 'name': {'required': True}, 'state': {'default': 'present', 'choices': ['present', 'absent']}, + 'params': {'default': ''}, }, supports_check_mode=True, ) @@ -54,14 +63,16 @@ def main(): 'failed': False, 'name': module.params['name'], 'state': module.params['state'], + 'params': module.params['params'], } # Check if module is present try: modules = open('/proc/modules') present = False + module_name = args['name'].replace('-', '_') + ' ' for line in modules: - if line.startswith(args['name'] + ' '): + if line.startswith(module_name): present = True break modules.close() @@ -81,7 +92,7 @@ def main(): # Add/remove module as needed if args['state'] == 'present': if not present: - rc, _, err = module.run_command(['modprobe', args['name']]) + rc, _, err = module.run_command(['modprobe', args['name'], args['params']]) if rc != 0: module.fail_json(msg=err, **args) args['changed'] = True diff --git a/library/system/open_iscsi b/library/system/open_iscsi index 2e57727cf59..3fd2b1a5a21 100644 --- a/library/system/open_iscsi +++ b/library/system/open_iscsi @@ -138,7 +138,7 @@ def iscsi_get_cached_nodes(module, portal=None): # older versions of scsiadm don't have nice return codes # for newer versions see iscsiadm(8); also usr/iscsiadm.c for details # err can contain [N|n]o records... - elif rc == 21 or (rc == 255 and err.find("o records found") != -1): + elif rc == 21 or (rc == 255 and "o records found" in err): nodes = [] else: module.fail_json(cmd=cmd, rc=rc, msg=err) diff --git a/library/system/service b/library/system/service index 2e26a47b636..a694d8d92b8 100644 --- a/library/system/service +++ b/library/system/service @@ -37,8 +37,8 @@ options: description: - C(started)/C(stopped) are idempotent actions that will not run commands unless necessary. C(restarted) will always bounce the - service. C(reloaded) will always reload. At least one of state - and enabled are required. + service. C(reloaded) will always reload. B(At least one of state + and enabled are required.) sleep: required: false version_added: "1.3" @@ -59,8 +59,8 @@ options: required: false choices: [ "yes", "no" ] description: - - Whether the service should start on boot. At least one of state and - enabled are required. + - Whether the service should start on boot. B(At least one of state and + enabled are required.) 
runlevel:
 required: false
@@ -207,7 +207,9 @@ class Service(object):
 os._exit(0)

 # Start the command
- p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=lambda: os.close(pipe[1]))
+ if isinstance(cmd, basestring):
+ cmd = shlex.split(cmd)
+ p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=lambda: os.close(pipe[1]))
 stdout = ""
 stderr = ""
 fds = [p.stdout, p.stderr]
@@ -410,11 +412,13 @@ class LinuxService(Service):
 # adjust the service name to account for template service unit files
 index = name.find('@')
 if index != -1:
- name = name[:index+1]
+ template_name = name[:index+1]
+ else:
+ template_name = name
 self.__systemd_unit = None
 for line in out.splitlines():
- if line.startswith(name):
+ if line.startswith(template_name):
 self.__systemd_unit = name
 return True
 return False
@@ -473,7 +477,34 @@ class LinuxService(Service):
 if location.get('initctl', None):
 self.svc_initctl = location['initctl']

+ def get_systemd_status_dict(self):
+ (rc, out, err) = self.execute_command("%s show %s" % (self.enable_cmd, self.__systemd_unit,))
+ if rc != 0:
+ self.module.fail_json(msg='failure %d running systemctl show for %r: %s' % (rc, self.__systemd_unit, err))
+ return dict(line.split('=', 1) for line in out.splitlines())
+
+ def get_systemd_service_status(self):
+ d = self.get_systemd_status_dict()
+ if d.get('ActiveState') == 'active':
+ # run-once services (for which a single successful exit indicates
+ # that they are running as designed) should not be restarted here.
+ # Thus, we are not checking d['SubState'].
+ self.running = True
+ self.crashed = False
+ elif d.get('ActiveState') == 'failed':
+ self.running = False
+ self.crashed = True
+ elif d.get('ActiveState') is None:
+ self.module.fail_json(msg='No ActiveState value in systemctl show output for %r' % (self.__systemd_unit,))
+ else:
+ self.running = False
+ self.crashed = False
+ return self.running
+
 def get_service_status(self):
+ if self.svc_cmd and self.svc_cmd.endswith('systemctl'):
+ return self.get_systemd_service_status()
+
 self.action = "status"
 rc, status_stdout, status_stderr = self.service_control()

@@ -481,9 +512,9 @@
 if self.svc_initctl and self.running is None:
 # check the job status by upstart response
 initctl_rc, initctl_status_stdout, initctl_status_stderr = self.execute_command("%s status %s" % (self.svc_initctl, self.name))
- if initctl_status_stdout.find("stop/waiting") != -1:
+ if "stop/waiting" in initctl_status_stdout:
 self.running = False
- elif initctl_status_stdout.find("start/running") != -1:
+ elif "start/running" in initctl_status_stdout:
 self.running = True

 if self.svc_cmd and self.svc_cmd.endswith("rc-service") and self.running is None:
@@ -523,7 +554,7 @@
 # if the job status is still not known check it by special conditions
 if self.running is None:
- if self.name == 'iptables' and status_stdout.find("ACCEPT") != -1:
+ if self.name == 'iptables' and "ACCEPT" in status_stdout:
 # iptables status command output is lame
 # TODO: lookup if we can use a return code for this instead?
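The get_systemd_status_dict() helper added above relies on `systemctl show <unit>` emitting one Key=Value pair per line, which splits directly into a dict; get_systemd_service_status() then maps ActiveState onto the running/crashed flags. A standalone sketch of that parsing, with illustrative sample output:

sample_show_output = """\
Id=nginx.service
ActiveState=active
SubState=running
UnitFileState=enabled"""

status = dict(line.split('=', 1) for line in sample_show_output.splitlines())
running = status.get('ActiveState') == 'active'
crashed = status.get('ActiveState') == 'failed'
print(status['UnitFileState'], running, crashed)  # enabled True False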
self.running = True
@@ -534,7 +565,7 @@

 def service_enable(self):
 if self.enable_cmd is None:
- self.module.fail_json(msg='service name not recognized')
+ self.module.fail_json(msg='unknown init system, cannot enable service')

 # FIXME: we use chkconfig or systemctl
 # to decide whether to run the command here but need something
@@ -577,7 +608,7 @@
 self.execute_command("%s --add %s" % (self.enable_cmd, self.name))
 (rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name))
 if not self.name in out:
- self.module.fail_json(msg="unknown service name")
+ self.module.fail_json(msg="service %s does not support chkconfig" % self.name)
 state = out.split()[-1]
 if self.enable and ( "3:on" in out and "5:on" in out ):
 return
@@ -585,9 +616,7 @@
 return

 if self.enable_cmd.endswith("systemctl"):
- (rc, out, err) = self.execute_command("%s show %s" % (self.enable_cmd, self.__systemd_unit))
-
- d = dict(line.split('=', 1) for line in out.splitlines())
+ d = self.get_systemd_status_dict()
 if "UnitFileState" in d:
 if self.enable and d["UnitFileState"] == "enabled":
 return
@@ -629,16 +658,16 @@
 if line.startswith('rename'):
 self.changed = True
 break
- elif self.enable and line.find('do not exist') != -1:
+ elif self.enable and 'do not exist' in line:
 self.changed = True
 break
- elif not self.enable and line.find('already exist') != -1:
+ elif not self.enable and 'already exist' in line:
 self.changed = True
 break

 # Debian compatibility
 for line in err.splitlines():
- if self.enable and line.find('no runlevel symlinks to modify') != -1:
+ if self.enable and 'no runlevel symlinks to modify' in line:
 self.changed = True
 break
@@ -658,7 +687,8 @@
 return self.execute_command("%s %s enable" % (self.enable_cmd, self.name))
 else:
- return self.execute_command("%s -f %s remove" % (self.enable_cmd, self.name))
+ return self.execute_command("%s %s disable" % (self.enable_cmd,
+ self.name))

 # we change argument depending on real binary used:
 # - update-rc.d and systemctl wants enable/disable
@@ -979,10 +1009,10 @@ class SunOSService(Service):
 # enabled true
 # enabled false
 for line in stdout.split("\n"):
- if line.find("enabled") == 0:
- if line.find("true") != -1:
+ if line.startswith("enabled"):
+ if "true" in line:
 enabled = True
- if line.find("temporary") != -1:
+ if "temporary" in line:
 temporary = True

 startup_enabled = (enabled and not temporary) or (not enabled and temporary)
@@ -1174,7 +1204,7 @@ def main():
 (rc, out, err) = service.modify_service_state()

 if rc != 0:
- if err and err.find("is already") != -1:
+ if err and "is already" in err:
 # upstart got confused, one such possibility is MySQL on Ubuntu 12.04
 # where status may report it has no start/stop links and we could
 # not get accurate status
diff --git a/library/system/setup b/library/system/setup
index f140991dc27..cc3a5855f1e 100644
--- a/library/system/setup
+++ b/library/system/setup
@@ -18,22 +18,6 @@
 # You should have received a copy of the GNU General Public License
 # along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
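The systemctl branch of service_enable() above compares the requested state against UnitFileState from `systemctl show` and returns early when nothing needs to change. A standalone sketch of that idempotence check (the function name and sample states are illustrative, not part of the module):

def systemd_enable_action(want_enabled, unit_file_state):
    # Return the systemctl verb to run, or None when the unit is
    # already in the desired state (the idempotent case).
    if want_enabled and unit_file_state == 'enabled':
        return None
    if not want_enabled and unit_file_state == 'disabled':
        return None
    return 'enable' if want_enabled else 'disable'

print(systemd_enable_action(True, 'enabled'))   # None, so no change is reported
print(systemd_enable_action(True, 'disabled'))  # 'enable'
print(systemd_enable_action(False, 'static'))   # 'disable'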
-import os -import array -import fcntl -import fnmatch -import glob -import platform -import re -import socket -import struct -import datetime -import getpass -import subprocess -import ConfigParser -import StringIO - - DOCUMENTATION = ''' --- module: setup @@ -87,2229 +71,22 @@ ansible all -m setup -a 'filter=facter_*' ansible all -m setup -a 'filter=ansible_eth[0-2]' """ -try: - import selinux - HAVE_SELINUX=True -except ImportError: - HAVE_SELINUX=False - -try: - import json -except ImportError: - import simplejson as json - -class Facts(object): - """ - This class should only attempt to populate those facts that - are mostly generic to all systems. This includes platform facts, - service facts (eg. ssh keys or selinux), and distribution facts. - Anything that requires extensive code or may have more than one - possible implementation to establish facts for a given topic should - subclass Facts. - """ - - _I386RE = re.compile(r'i[3456]86') - # For the most part, we assume that platform.dist() will tell the truth. - # This is the fallback to handle unknowns or exceptions - OSDIST_DICT = { '/etc/redhat-release': 'RedHat', - '/etc/vmware-release': 'VMwareESX', - '/etc/openwrt_release': 'OpenWrt', - '/etc/system-release': 'OtherLinux', - '/etc/alpine-release': 'Alpine', - '/etc/release': 'Solaris', - '/etc/arch-release': 'Archlinux', - '/etc/SuSE-release': 'SuSE', - '/etc/gentoo-release': 'Gentoo', - '/etc/os-release': 'Debian' } - SELINUX_MODE_DICT = { 1: 'enforcing', 0: 'permissive', -1: 'disabled' } - - # A list of dicts. If there is a platform with more than one - # package manager, put the preferred one last. If there is an - # ansible module, use that as the value for the 'name' key. - PKG_MGRS = [ { 'path' : '/usr/bin/yum', 'name' : 'yum' }, - { 'path' : '/usr/bin/apt-get', 'name' : 'apt' }, - { 'path' : '/usr/bin/zypper', 'name' : 'zypper' }, - { 'path' : '/usr/sbin/urpmi', 'name' : 'urpmi' }, - { 'path' : '/usr/bin/pacman', 'name' : 'pacman' }, - { 'path' : '/bin/opkg', 'name' : 'opkg' }, - { 'path' : '/opt/local/bin/pkgin', 'name' : 'pkgin' }, - { 'path' : '/opt/local/bin/port', 'name' : 'macports' }, - { 'path' : '/sbin/apk', 'name' : 'apk' }, - { 'path' : '/usr/sbin/pkg', 'name' : 'pkgng' }, - { 'path' : '/usr/sbin/swlist', 'name' : 'SD-UX' }, - { 'path' : '/usr/bin/emerge', 'name' : 'portage' }, - ] - - def __init__(self): - self.facts = {} - self.get_platform_facts() - self.get_distribution_facts() - self.get_cmdline() - self.get_public_ssh_host_keys() - self.get_selinux_facts() - self.get_pkg_mgr_facts() - self.get_lsb_facts() - self.get_date_time_facts() - self.get_user_facts() - self.get_local_facts() - self.get_env_facts() - - def populate(self): - return self.facts - - # Platform - # platform.system() can be Linux, Darwin, Java, or Windows - def get_platform_facts(self): - self.facts['system'] = platform.system() - self.facts['kernel'] = platform.release() - self.facts['machine'] = platform.machine() - self.facts['python_version'] = platform.python_version() - self.facts['fqdn'] = socket.getfqdn() - self.facts['hostname'] = platform.node().split('.')[0] - self.facts['domain'] = '.'.join(self.facts['fqdn'].split('.')[1:]) - arch_bits = platform.architecture()[0] - self.facts['userspace_bits'] = arch_bits.replace('bit', '') - if self.facts['machine'] == 'x86_64': - self.facts['architecture'] = self.facts['machine'] - if self.facts['userspace_bits'] == '64': - self.facts['userspace_architecture'] = 'x86_64' - elif self.facts['userspace_bits'] == '32': - 
self.facts['userspace_architecture'] = 'i386' - elif Facts._I386RE.search(self.facts['machine']): - self.facts['architecture'] = 'i386' - if self.facts['userspace_bits'] == '64': - self.facts['userspace_architecture'] = 'x86_64' - elif self.facts['userspace_bits'] == '32': - self.facts['userspace_architecture'] = 'i386' - else: - self.facts['architecture'] = self.facts['machine'] - if self.facts['system'] == 'Linux': - self.get_distribution_facts() - elif self.facts['system'] == 'AIX': - rc, out, err = module.run_command("/usr/sbin/bootinfo -p") - data = out.split('\n') - self.facts['architecture'] = data[0] - - - def get_local_facts(self): - - fact_path = module.params.get('fact_path', None) - if not fact_path or not os.path.exists(fact_path): - return - - local = {} - for fn in sorted(glob.glob(fact_path + '/*.fact')): - # where it will sit under local facts - fact_base = os.path.basename(fn).replace('.fact','') - if os.access(fn, os.X_OK): - # run it - # try to read it as json first - # if that fails read it with ConfigParser - # if that fails, skip it - rc, out, err = module.run_command(fn) - else: - out = open(fn).read() - - # load raw json - fact = 'loading %s' % fact_base - try: - fact = json.loads(out) - except ValueError, e: - # load raw ini - cp = ConfigParser.ConfigParser() - try: - cp.readfp(StringIO.StringIO(out)) - except ConfigParser.Error, e: - fact="error loading fact - please check content" - else: - fact = {} - #print cp.sections() - for sect in cp.sections(): - if sect not in fact: - fact[sect] = {} - for opt in cp.options(sect): - val = cp.get(sect, opt) - fact[sect][opt]=val - - local[fact_base] = fact - if not local: - return - self.facts['local'] = local - - # platform.dist() is deprecated in 2.6 - # in 2.6 and newer, you should use platform.linux_distribution() - def get_distribution_facts(self): - - # A list with OS Family members - OS_FAMILY = dict( - RedHat = 'RedHat', Fedora = 'RedHat', CentOS = 'RedHat', Scientific = 'RedHat', - SLC = 'RedHat', Ascendos = 'RedHat', CloudLinux = 'RedHat', PSBM = 'RedHat', - OracleLinux = 'RedHat', OVS = 'RedHat', OEL = 'RedHat', Amazon = 'RedHat', - XenServer = 'RedHat', Ubuntu = 'Debian', Debian = 'Debian', SLES = 'Suse', - SLED = 'Suse', OpenSuSE = 'Suse', SuSE = 'Suse', Gentoo = 'Gentoo', Funtoo = 'Gentoo', - Archlinux = 'Archlinux', Mandriva = 'Mandrake', Mandrake = 'Mandrake', - Solaris = 'Solaris', Nexenta = 'Solaris', OmniOS = 'Solaris', OpenIndiana = 'Solaris', - SmartOS = 'Solaris', AIX = 'AIX', Alpine = 'Alpine', MacOSX = 'Darwin', - FreeBSD = 'FreeBSD', HPUX = 'HP-UX' - ) - - if self.facts['system'] == 'AIX': - self.facts['distribution'] = 'AIX' - rc, out, err = module.run_command("/usr/bin/oslevel") - data = out.split('.') - self.facts['distribution_version'] = data[0] - self.facts['distribution_release'] = data[1] - elif self.facts['system'] == 'HP-UX': - self.facts['distribution'] = 'HP-UX' - rc, out, err = module.run_command("/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'") - data = re.search('HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out) - if data: - self.facts['distribution_version'] = data.groups()[0] - self.facts['distribution_release'] = data.groups()[1] - elif self.facts['system'] == 'Darwin': - self.facts['distribution'] = 'MacOSX' - rc, out, err = module.run_command("/usr/bin/sw_vers -productVersion") - data = out.split()[-1] - self.facts['distribution_version'] = data - elif self.facts['system'] == 'FreeBSD': - self.facts['distribution'] = 'FreeBSD' - self.facts['distribution_release'] = 
platform.release() - self.facts['distribution_version'] = platform.version() - elif self.facts['system'] == 'OpenBSD': - self.facts['distribution'] = 'OpenBSD' - self.facts['distribution_release'] = platform.release() - rc, out, err = module.run_command("/sbin/sysctl -n kern.version") - match = re.match('OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out) - if match: - self.facts['distribution_version'] = match.groups()[0] - else: - self.facts['distribution_version'] = 'release' - else: - dist = platform.dist() - self.facts['distribution'] = dist[0].capitalize() or 'NA' - self.facts['distribution_version'] = dist[1] or 'NA' - self.facts['distribution_release'] = dist[2] or 'NA' - # Try to handle the exceptions now ... - for (path, name) in Facts.OSDIST_DICT.items(): - if os.path.exists(path): - if self.facts['distribution'] == 'Fedora': - pass - elif name == 'RedHat': - data = get_file_content(path) - if 'Red Hat' in data: - self.facts['distribution'] = name - else: - self.facts['distribution'] = data.split()[0] - elif name == 'OtherLinux': - data = get_file_content(path) - if 'Amazon' in data: - self.facts['distribution'] = 'Amazon' - self.facts['distribution_version'] = data.split()[-1] - elif name == 'OpenWrt': - data = get_file_content(path) - if 'OpenWrt' in data: - self.facts['distribution'] = name - version = re.search('DISTRIB_RELEASE="(.*)"', data) - if version: - self.facts['distribution_version'] = version.groups()[0] - release = re.search('DISTRIB_CODENAME="(.*)"', data) - if release: - self.facts['distribution_release'] = release.groups()[0] - elif name == 'Alpine': - data = get_file_content(path) - self.facts['distribution'] = 'Alpine' - self.facts['distribution_version'] = data - elif name == 'Solaris': - data = get_file_content(path).split('\n')[0] - ora_prefix = '' - if 'Oracle Solaris' in data: - data = data.replace('Oracle ','') - ora_prefix = 'Oracle ' - self.facts['distribution'] = data.split()[0] - self.facts['distribution_version'] = data.split()[1] - self.facts['distribution_release'] = ora_prefix + data - elif name == 'SuSE': - data = get_file_content(path).splitlines() - self.facts['distribution_release'] = data[2].split('=')[1].strip() - elif name == 'Debian': - data = get_file_content(path).split('\n')[0] - release = re.search("PRETTY_NAME.+ \(?([^ ]+?)\)?\"", data) - if release: - self.facts['distribution_release'] = release.groups()[0] - else: - self.facts['distribution'] = name - - self.facts['os_family'] = self.facts['distribution'] - if self.facts['distribution'] in OS_FAMILY: - self.facts['os_family'] = OS_FAMILY[self.facts['distribution']] - - def get_cmdline(self): - data = get_file_content('/proc/cmdline') - if data: - self.facts['cmdline'] = {} - for piece in shlex.split(data): - item = piece.split('=', 1) - if len(item) == 1: - self.facts['cmdline'][item[0]] = True - else: - self.facts['cmdline'][item[0]] = item[1] - - def get_public_ssh_host_keys(self): - dsa_filename = '/etc/ssh/ssh_host_dsa_key.pub' - rsa_filename = '/etc/ssh/ssh_host_rsa_key.pub' - ecdsa_filename = '/etc/ssh/ssh_host_ecdsa_key.pub' - - if self.facts['system'] == 'Darwin': - dsa_filename = '/etc/ssh_host_dsa_key.pub' - rsa_filename = '/etc/ssh_host_rsa_key.pub' - ecdsa_filename = '/etc/ssh_host_ecdsa_key.pub' - dsa = get_file_content(dsa_filename) - rsa = get_file_content(rsa_filename) - ecdsa = get_file_content(ecdsa_filename) - if dsa is None: - dsa = 'NA' - else: - self.facts['ssh_host_key_dsa_public'] = dsa.split()[1] - if rsa is None: - rsa = 'NA' - else: - 
self.facts['ssh_host_key_rsa_public'] = rsa.split()[1] - if ecdsa is None: - ecdsa = 'NA' - else: - self.facts['ssh_host_key_ecdsa_public'] = ecdsa.split()[1] - - def get_pkg_mgr_facts(self): - self.facts['pkg_mgr'] = 'unknown' - for pkg in Facts.PKG_MGRS: - if os.path.exists(pkg['path']): - self.facts['pkg_mgr'] = pkg['name'] - if self.facts['system'] == 'OpenBSD': - self.facts['pkg_mgr'] = 'openbsd_pkg' - - def get_lsb_facts(self): - lsb_path = module.get_bin_path('lsb_release') - if lsb_path: - rc, out, err = module.run_command([lsb_path, "-a"]) - if rc == 0: - self.facts['lsb'] = {} - for line in out.split('\n'): - if len(line) < 1: - continue - value = line.split(':', 1)[1].strip() - if 'LSB Version:' in line: - self.facts['lsb']['release'] = value - elif 'Distributor ID:' in line: - self.facts['lsb']['id'] = value - elif 'Description:' in line: - self.facts['lsb']['description'] = value - elif 'Release:' in line: - self.facts['lsb']['release'] = value - elif 'Codename:' in line: - self.facts['lsb']['codename'] = value - if 'lsb' in self.facts and 'release' in self.facts['lsb']: - self.facts['lsb']['major_release'] = self.facts['lsb']['release'].split('.')[0] - elif lsb_path is None and os.path.exists('/etc/lsb-release'): - self.facts['lsb'] = {} - f = open('/etc/lsb-release', 'r') - try: - for line in f.readlines(): - value = line.split('=',1)[1].strip() - if 'DISTRIB_ID' in line: - self.facts['lsb']['id'] = value - elif 'DISTRIB_RELEASE' in line: - self.facts['lsb']['release'] = value - elif 'DISTRIB_DESCRIPTION' in line: - self.facts['lsb']['description'] = value - elif 'DISTRIB_CODENAME' in line: - self.facts['lsb']['codename'] = value - finally: - f.close() - else: - return self.facts - - if 'lsb' in self.facts and 'release' in self.facts['lsb']: - self.facts['lsb']['major_release'] = self.facts['lsb']['release'].split('.')[0] - - - def get_selinux_facts(self): - if not HAVE_SELINUX: - self.facts['selinux'] = False - return - self.facts['selinux'] = {} - if not selinux.is_selinux_enabled(): - self.facts['selinux']['status'] = 'disabled' - else: - self.facts['selinux']['status'] = 'enabled' - try: - self.facts['selinux']['policyvers'] = selinux.security_policyvers() - except OSError, e: - self.facts['selinux']['policyvers'] = 'unknown' - try: - (rc, configmode) = selinux.selinux_getenforcemode() - if rc == 0: - self.facts['selinux']['config_mode'] = Facts.SELINUX_MODE_DICT.get(configmode, 'unknown') - else: - self.facts['selinux']['config_mode'] = 'unknown' - except OSError, e: - self.facts['selinux']['config_mode'] = 'unknown' - try: - mode = selinux.security_getenforce() - self.facts['selinux']['mode'] = Facts.SELINUX_MODE_DICT.get(mode, 'unknown') - except OSError, e: - self.facts['selinux']['mode'] = 'unknown' - try: - (rc, policytype) = selinux.selinux_getpolicytype() - if rc == 0: - self.facts['selinux']['type'] = policytype - else: - self.facts['selinux']['type'] = 'unknown' - except OSError, e: - self.facts['selinux']['type'] = 'unknown' - - - def get_date_time_facts(self): - self.facts['date_time'] = {} - - now = datetime.datetime.now() - self.facts['date_time']['year'] = now.strftime('%Y') - self.facts['date_time']['month'] = now.strftime('%m') - self.facts['date_time']['day'] = now.strftime('%d') - self.facts['date_time']['hour'] = now.strftime('%H') - self.facts['date_time']['minute'] = now.strftime('%M') - self.facts['date_time']['second'] = now.strftime('%S') - self.facts['date_time']['epoch'] = now.strftime('%s') - if self.facts['date_time']['epoch'] == '' or 
self.facts['date_time']['epoch'][0] == '%': - self.facts['date_time']['epoch'] = str(int(time.time())) - self.facts['date_time']['date'] = now.strftime('%Y-%m-%d') - self.facts['date_time']['time'] = now.strftime('%H:%M:%S') - self.facts['date_time']['iso8601_micro'] = now.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%fZ") - self.facts['date_time']['iso8601'] = now.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ") - self.facts['date_time']['tz'] = time.strftime("%Z") - self.facts['date_time']['tz_offset'] = time.strftime("%z") - - - # User - def get_user_facts(self): - self.facts['user_id'] = getpass.getuser() - - def get_env_facts(self): - self.facts['env'] = {} - for k,v in os.environ.iteritems(): - self.facts['env'][k] = v - -class Hardware(Facts): - """ - This is a generic Hardware subclass of Facts. This should be further - subclassed to implement per platform. If you subclass this, it - should define: - - memfree_mb - - memtotal_mb - - swapfree_mb - - swaptotal_mb - - processor (a list) - - processor_cores - - processor_count - - All subclasses MUST define platform. - """ - platform = 'Generic' - - def __new__(cls, *arguments, **keyword): - subclass = cls - for sc in Hardware.__subclasses__(): - if sc.platform == platform.system(): - subclass = sc - return super(cls, subclass).__new__(subclass, *arguments, **keyword) - - def __init__(self): - Facts.__init__(self) - - def populate(self): - return self.facts - -class LinuxHardware(Hardware): - """ - Linux-specific subclass of Hardware. Defines memory and CPU facts: - - memfree_mb - - memtotal_mb - - swapfree_mb - - swaptotal_mb - - processor (a list) - - processor_cores - - processor_count - - In addition, it also defines number of DMI facts and device facts. - """ - - platform = 'Linux' - MEMORY_FACTS = ['MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'] - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.get_cpu_facts() - self.get_memory_facts() - self.get_dmi_facts() - self.get_device_facts() - self.get_mount_facts() - return self.facts - - def get_memory_facts(self): - if not os.access("/proc/meminfo", os.R_OK): - return - for line in open("/proc/meminfo").readlines(): - data = line.split(":", 1) - key = data[0] - if key in LinuxHardware.MEMORY_FACTS: - val = data[1].strip().split(' ')[0] - self.facts["%s_mb" % key.lower()] = long(val) / 1024 - - def get_cpu_facts(self): - i = 0 - physid = 0 - coreid = 0 - sockets = {} - cores = {} - if not os.access("/proc/cpuinfo", os.R_OK): - return - self.facts['processor'] = [] - for line in open("/proc/cpuinfo").readlines(): - data = line.split(":", 1) - key = data[0].strip() - # model name is for Intel arch, Processor (mind the uppercase P) - # works for some ARM devices, like the Sheevaplug. 
- if key == 'model name' or key == 'Processor': - if 'processor' not in self.facts: - self.facts['processor'] = [] - self.facts['processor'].append(data[1].strip()) - i += 1 - elif key == 'physical id': - physid = data[1].strip() - if physid not in sockets: - sockets[physid] = 1 - elif key == 'core id': - coreid = data[1].strip() - if coreid not in sockets: - cores[coreid] = 1 - elif key == 'cpu cores': - sockets[physid] = int(data[1].strip()) - elif key == 'siblings': - cores[coreid] = int(data[1].strip()) - self.facts['processor_count'] = sockets and len(sockets) or i - self.facts['processor_cores'] = sockets.values() and sockets.values()[0] or 1 - self.facts['processor_threads_per_core'] = ((cores.values() and - cores.values()[0] or 1) / self.facts['processor_cores']) - self.facts['processor_vcpus'] = (self.facts['processor_threads_per_core'] * - self.facts['processor_count'] * self.facts['processor_cores']) - - def get_dmi_facts(self): - ''' learn dmi facts from system - - Try /sys first for dmi related facts. - If that is not available, fall back to dmidecode executable ''' - - if os.path.exists('/sys/devices/virtual/dmi/id/product_name'): - # Use kernel DMI info, if available - - # DMI SPEC -- http://www.dmtf.org/sites/default/files/standards/documents/DSP0134_2.7.0.pdf - FORM_FACTOR = [ "Unknown", "Other", "Unknown", "Desktop", - "Low Profile Desktop", "Pizza Box", "Mini Tower", "Tower", - "Portable", "Laptop", "Notebook", "Hand Held", "Docking Station", - "All In One", "Sub Notebook", "Space-saving", "Lunch Box", - "Main Server Chassis", "Expansion Chassis", "Sub Chassis", - "Bus Expansion Chassis", "Peripheral Chassis", "RAID Chassis", - "Rack Mount Chassis", "Sealed-case PC", "Multi-system", - "CompactPCI", "AdvancedTCA", "Blade" ] - - DMI_DICT = { - 'bios_date': '/sys/devices/virtual/dmi/id/bios_date', - 'bios_version': '/sys/devices/virtual/dmi/id/bios_version', - 'form_factor': '/sys/devices/virtual/dmi/id/chassis_type', - 'product_name': '/sys/devices/virtual/dmi/id/product_name', - 'product_serial': '/sys/devices/virtual/dmi/id/product_serial', - 'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid', - 'product_version': '/sys/devices/virtual/dmi/id/product_version', - 'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor' - } - - for (key,path) in DMI_DICT.items(): - data = get_file_content(path) - if data is not None: - if key == 'form_factor': - try: - self.facts['form_factor'] = FORM_FACTOR[int(data)] - except IndexError, e: - self.facts['form_factor'] = 'unknown (%s)' % data - else: - self.facts[key] = data - else: - self.facts[key] = 'NA' - - else: - # Fall back to using dmidecode, if available - dmi_bin = module.get_bin_path('dmidecode') - DMI_DICT = { - 'bios_date': 'bios-release-date', - 'bios_version': 'bios-version', - 'form_factor': 'chassis-type', - 'product_name': 'system-product-name', - 'product_serial': 'system-serial-number', - 'product_uuid': 'system-uuid', - 'product_version': 'system-version', - 'system_vendor': 'system-manufacturer' - } - for (k, v) in DMI_DICT.items(): - if dmi_bin is not None: - (rc, out, err) = module.run_command('%s -s %s' % (dmi_bin, v)) - if rc == 0: - # Strip out commented lines (specific dmidecode output) - thisvalue = ''.join([ line for line in out.split('\n') if not line.startswith('#') ]) - try: - json.dumps(thisvalue) - except UnicodeDecodeError: - thisvalue = "NA" - - self.facts[k] = thisvalue - else: - self.facts[k] = 'NA' - else: - self.facts[k] = 'NA' - - def get_mount_facts(self): - self.facts['mounts'] = [] - 
mtab = get_file_content('/etc/mtab', '') - for line in mtab.split('\n'): - if line.startswith('/'): - fields = line.rstrip('\n').split() - if(fields[2] != 'none'): - size_total = None - size_available = None - try: - statvfs_result = os.statvfs(fields[1]) - size_total = statvfs_result.f_bsize * statvfs_result.f_blocks - size_available = statvfs_result.f_bsize * (statvfs_result.f_bavail) - except OSError, e: - continue - - self.facts['mounts'].append( - {'mount': fields[1], - 'device':fields[0], - 'fstype': fields[2], - 'options': fields[3], - # statvfs data - 'size_total': size_total, - 'size_available': size_available, - }) - - def get_device_facts(self): - self.facts['devices'] = {} - lspci = module.get_bin_path('lspci') - if lspci: - rc, pcidata, err = module.run_command([lspci, '-D']) - else: - pcidata = None - - try: - block_devs = os.listdir("/sys/block") - except OSError: - return - - for block in block_devs: - virtual = 1 - sysfs_no_links = 0 - try: - path = os.readlink(os.path.join("/sys/block/", block)) - except OSError, e: - if e.errno == errno.EINVAL: - path = block - sysfs_no_links = 1 - else: - continue - if "virtual" in path: - continue - sysdir = os.path.join("/sys/block", path) - if sysfs_no_links == 1: - for folder in os.listdir(sysdir): - if "device" in folder: - virtual = 0 - break - if virtual: - continue - d = {} - diskname = os.path.basename(sysdir) - for key in ['vendor', 'model']: - d[key] = get_file_content(sysdir + "/device/" + key) - - for key,test in [ ('removable','/removable'), \ - ('support_discard','/queue/discard_granularity'), - ]: - d[key] = get_file_content(sysdir + test) - - d['partitions'] = {} - for folder in os.listdir(sysdir): - m = re.search("(" + diskname + "\d+)", folder) - if m: - part = {} - partname = m.group(1) - part_sysdir = sysdir + "/" + partname - - part['start'] = get_file_content(part_sysdir + "/start",0) - part['sectors'] = get_file_content(part_sysdir + "/size",0) - part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size",512) - part['size'] = module.pretty_bytes((float(part['sectors']) * float(part['sectorsize']))) - d['partitions'][partname] = part - - d['rotational'] = get_file_content(sysdir + "/queue/rotational") - d['scheduler_mode'] = "" - scheduler = get_file_content(sysdir + "/queue/scheduler") - if scheduler is not None: - m = re.match(".*?(\[(.*)\])", scheduler) - if m: - d['scheduler_mode'] = m.group(2) - - d['sectors'] = get_file_content(sysdir + "/size") - if not d['sectors']: - d['sectors'] = 0 - d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size") - if not d['sectorsize']: - d['sectorsize'] = 512 - d['size'] = module.pretty_bytes(float(d['sectors']) * float(d['sectorsize'])) - - d['host'] = "" - - # domains are numbered (0 to ffff), bus (0 to ff), slot (0 to 1f), and function (0 to 7). 
- m = re.match(".+/([a-f0-9]{4}:[a-f0-9]{2}:[0|1][a-f0-9]\.[0-7])/", sysdir) - if m and pcidata: - pciid = m.group(1) - did = re.escape(pciid) - m = re.search("^" + did + "\s(.*)$", pcidata, re.MULTILINE) - d['host'] = m.group(1) - - d['holders'] = [] - if os.path.isdir(sysdir + "/holders"): - for folder in os.listdir(sysdir + "/holders"): - if not folder.startswith("dm-"): - continue - name = get_file_content(sysdir + "/holders/" + folder + "/dm/name") - if name: - d['holders'].append(name) - else: - d['holders'].append(folder) - - self.facts['devices'][diskname] = d - - -class SunOSHardware(Hardware): - """ - In addition to the generic memory and cpu facts, this also sets - swap_reserved_mb and swap_allocated_mb that is available from *swap -s*. - """ - platform = 'SunOS' - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.get_cpu_facts() - self.get_memory_facts() - return self.facts - - def get_cpu_facts(self): - physid = 0 - sockets = {} - rc, out, err = module.run_command("/usr/bin/kstat cpu_info") - self.facts['processor'] = [] - for line in out.split('\n'): - if len(line) < 1: - continue - data = line.split(None, 1) - key = data[0].strip() - # "brand" works on Solaris 10 & 11. "implementation" for Solaris 9. - if key == 'module:': - brand = '' - elif key == 'brand': - brand = data[1].strip() - elif key == 'clock_MHz': - clock_mhz = data[1].strip() - elif key == 'implementation': - processor = brand or data[1].strip() - # Add clock speed to description for SPARC CPU - if self.facts['machine'] != 'i86pc': - processor += " @ " + clock_mhz + "MHz" - if 'processor' not in self.facts: - self.facts['processor'] = [] - self.facts['processor'].append(processor) - elif key == 'chip_id': - physid = data[1].strip() - if physid not in sockets: - sockets[physid] = 1 - else: - sockets[physid] += 1 - # Counting cores on Solaris can be complicated. - # https://blogs.oracle.com/mandalika/entry/solaris_show_me_the_cpu - # Treat 'processor_count' as physical sockets and 'processor_cores' as - # virtual CPUs visisble to Solaris. Not a true count of cores for modern SPARC as - # these processors have: sockets -> cores -> threads/virtual CPU. - if len(sockets) > 0: - self.facts['processor_count'] = len(sockets) - self.facts['processor_cores'] = reduce(lambda x, y: x + y, sockets.values()) - else: - self.facts['processor_cores'] = 'NA' - self.facts['processor_count'] = len(self.facts['processor']) - - def get_memory_facts(self): - rc, out, err = module.run_command(["/usr/sbin/prtconf"]) - for line in out.split('\n'): - if 'Memory size' in line: - self.facts['memtotal_mb'] = line.split()[2] - rc, out, err = module.run_command("/usr/sbin/swap -s") - allocated = long(out.split()[1][:-1]) - reserved = long(out.split()[5][:-1]) - used = long(out.split()[8][:-1]) - free = long(out.split()[10][:-1]) - self.facts['swapfree_mb'] = free / 1024 - self.facts['swaptotal_mb'] = (free + used) / 1024 - self.facts['swap_allocated_mb'] = allocated / 1024 - self.facts['swap_reserved_mb'] = reserved / 1024 - -class OpenBSDHardware(Hardware): - """ - OpenBSD-specific subclass of Hardware. 
Defines memory, CPU and device facts: - - memfree_mb - - memtotal_mb - - swapfree_mb - - swaptotal_mb - - processor (a list) - - processor_cores - - processor_count - - processor_speed - - devices - """ - platform = 'OpenBSD' - DMESG_BOOT = '/var/run/dmesg.boot' - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.sysctl = self.get_sysctl() - self.get_memory_facts() - self.get_processor_facts() - self.get_device_facts() - return self.facts - - def get_sysctl(self): - rc, out, err = module.run_command(["/sbin/sysctl", "hw"]) - if rc != 0: - return dict() - sysctl = dict() - for line in out.splitlines(): - (key, value) = line.split('=') - sysctl[key] = value.strip() - return sysctl - - def get_memory_facts(self): - # Get free memory. vmstat output looks like: - # procs memory page disks traps cpu - # r b w avm fre flt re pi po fr sr wd0 fd0 int sys cs us sy id - # 0 0 0 47512 28160 51 0 0 0 0 0 1 0 116 89 17 0 1 99 - rc, out, err = module.run_command("/usr/bin/vmstat") - if rc == 0: - self.facts['memfree_mb'] = long(out.splitlines()[-1].split()[4]) / 1024 - self.facts['memtotal_mb'] = long(self.sysctl['hw.usermem']) / 1024 / 1024 - - # Get swapctl info. swapctl output looks like: - # total: 69268 1K-blocks allocated, 0 used, 69268 available - # And for older OpenBSD: - # total: 69268k bytes allocated = 0k used, 69268k available - rc, out, err = module.run_command("/sbin/swapctl -sk") - if rc == 0: - data = out.split() - self.facts['swapfree_mb'] = long(data[-2].translate(None, "kmg")) / 1024 - self.facts['swaptotal_mb'] = long(data[1].translate(None, "kmg")) / 1024 - - def get_processor_facts(self): - processor = [] - dmesg_boot = get_file_content(OpenBSDHardware.DMESG_BOOT) - if not dmesg_boot: - rc, dmesg_boot, err = module.run_command("/sbin/dmesg") - i = 0 - for line in dmesg_boot.splitlines(): - if line.split(' ', 1)[0] == 'cpu%i:' % i: - processor.append(line.split(' ', 1)[1]) - i = i + 1 - processor_count = i - self.facts['processor'] = processor - self.facts['processor_count'] = processor_count - # I found no way to figure out the number of Cores per CPU in OpenBSD - self.facts['processor_cores'] = 'NA' - - def get_device_facts(self): - devices = [] - devices.extend(self.sysctl['hw.disknames'].split(',')) - self.facts['devices'] = devices - -class FreeBSDHardware(Hardware): - """ - FreeBSD-specific subclass of Hardware. 
Defines memory and CPU facts: - - memfree_mb - - memtotal_mb - - swapfree_mb - - swaptotal_mb - - processor (a list) - - processor_cores - - processor_count - - devices - """ - platform = 'FreeBSD' - DMESG_BOOT = '/var/run/dmesg.boot' - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.get_cpu_facts() - self.get_memory_facts() - self.get_dmi_facts() - self.get_device_facts() - self.get_mount_facts() - return self.facts - - def get_cpu_facts(self): - self.facts['processor'] = [] - rc, out, err = module.run_command("/sbin/sysctl -n hw.ncpu") - self.facts['processor_count'] = out.strip() - - dmesg_boot = get_file_content(FreeBSDHardware.DMESG_BOOT) - if not dmesg_boot: - rc, dmesg_boot, err = module.run_command("/sbin/dmesg") - for line in dmesg_boot.split('\n'): - if 'CPU:' in line: - cpu = re.sub(r'CPU:\s+', r"", line) - self.facts['processor'].append(cpu.strip()) - if 'Logical CPUs per core' in line: - self.facts['processor_cores'] = line.split()[4] - - - def get_memory_facts(self): - rc, out, err = module.run_command("/sbin/sysctl vm.stats") - for line in out.split('\n'): - data = line.split() - if 'vm.stats.vm.v_page_size' in line: - pagesize = long(data[1]) - if 'vm.stats.vm.v_page_count' in line: - pagecount = long(data[1]) - if 'vm.stats.vm.v_free_count' in line: - freecount = long(data[1]) - self.facts['memtotal_mb'] = pagesize * pagecount / 1024 / 1024 - self.facts['memfree_mb'] = pagesize * freecount / 1024 / 1024 - # Get swapinfo. swapinfo output looks like: - # Device 1M-blocks Used Avail Capacity - # /dev/ada0p3 314368 0 314368 0% - # - rc, out, err = module.run_command("/usr/sbin/swapinfo -m") - lines = out.split('\n') - if len(lines[-1]) == 0: - lines.pop() - data = lines[-1].split() - self.facts['swaptotal_mb'] = data[1] - self.facts['swapfree_mb'] = data[3] - - def get_mount_facts(self): - self.facts['mounts'] = [] - fstab = get_file_content('/etc/fstab') - if fstab: - for line in fstab.split('\n'): - if line.startswith('#') or line.strip() == '': - continue - fields = re.sub(r'\s+',' ',line.rstrip('\n')).split() - self.facts['mounts'].append({'mount': fields[1] , 'device': fields[0], 'fstype' : fields[2], 'options': fields[3]}) - - def get_device_facts(self): - sysdir = '/dev' - self.facts['devices'] = {} - drives = re.compile('(ada?\d+|da\d+|a?cd\d+)') #TODO: rc, disks, err = module.run_command("/sbin/sysctl kern.disks") - slices = re.compile('(ada?\d+s\d+\w*|da\d+s\d+\w*)') - if os.path.isdir(sysdir): - dirlist = sorted(os.listdir(sysdir)) - for device in dirlist: - d = drives.match(device) - if d: - self.facts['devices'][d.group(1)] = [] - s = slices.match(device) - if s: - self.facts['devices'][d.group(1)].append(s.group(1)) - - def get_dmi_facts(self): - ''' learn dmi facts from system - - Use dmidecode executable if available''' - - # Fall back to using dmidecode, if available - dmi_bin = module.get_bin_path('dmidecode') - DMI_DICT = dict( - bios_date='bios-release-date', - bios_version='bios-version', - form_factor='chassis-type', - product_name='system-product-name', - product_serial='system-serial-number', - product_uuid='system-uuid', - product_version='system-version', - system_vendor='system-manufacturer' - ) - for (k, v) in DMI_DICT.items(): - if dmi_bin is not None: - (rc, out, err) = module.run_command('%s -s %s' % (dmi_bin, v)) - if rc == 0: - # Strip out commented lines (specific dmidecode output) - self.facts[k] = ''.join([ line for line in out.split('\n') if not line.startswith('#') ]) - try: - json.dumps(self.facts[k]) - 
except UnicodeDecodeError: - self.facts[k] = 'NA' - else: - self.facts[k] = 'NA' - else: - self.facts[k] = 'NA' - - -class NetBSDHardware(Hardware): - """ - NetBSD-specific subclass of Hardware. Defines memory and CPU facts: - - memfree_mb - - memtotal_mb - - swapfree_mb - - swaptotal_mb - - processor (a list) - - processor_cores - - processor_count - - devices - """ - platform = 'NetBSD' - MEMORY_FACTS = ['MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'] - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.get_cpu_facts() - self.get_memory_facts() - self.get_mount_facts() - return self.facts - - def get_cpu_facts(self): - - i = 0 - physid = 0 - sockets = {} - if not os.access("/proc/cpuinfo", os.R_OK): - return - self.facts['processor'] = [] - for line in open("/proc/cpuinfo").readlines(): - data = line.split(":", 1) - key = data[0].strip() - # model name is for Intel arch, Processor (mind the uppercase P) - # works for some ARM devices, like the Sheevaplug. - if key == 'model name' or key == 'Processor': - if 'processor' not in self.facts: - self.facts['processor'] = [] - self.facts['processor'].append(data[1].strip()) - i += 1 - elif key == 'physical id': - physid = data[1].strip() - if physid not in sockets: - sockets[physid] = 1 - elif key == 'cpu cores': - sockets[physid] = int(data[1].strip()) - if len(sockets) > 0: - self.facts['processor_count'] = len(sockets) - self.facts['processor_cores'] = reduce(lambda x, y: x + y, sockets.values()) - else: - self.facts['processor_count'] = i - self.facts['processor_cores'] = 'NA' - - def get_memory_facts(self): - if not os.access("/proc/meminfo", os.R_OK): - return - for line in open("/proc/meminfo").readlines(): - data = line.split(":", 1) - key = data[0] - if key in NetBSDHardware.MEMORY_FACTS: - val = data[1].strip().split(' ')[0] - self.facts["%s_mb" % key.lower()] = long(val) / 1024 - - def get_mount_facts(self): - self.facts['mounts'] = [] - fstab = get_file_content('/etc/fstab') - if fstab: - for line in fstab.split('\n'): - if line.startswith('#') or line.strip() == '': - continue - fields = re.sub(r'\s+',' ',line.rstrip('\n')).split() - self.facts['mounts'].append({'mount': fields[1] , 'device': fields[0], 'fstype' : fields[2], 'options': fields[3]}) - -class AIX(Hardware): - """ - AIX-specific subclass of Hardware. 
Defines memory and CPU facts: - - memfree_mb - - memtotal_mb - - swapfree_mb - - swaptotal_mb - - processor (a list) - - processor_cores - - processor_count - """ - platform = 'AIX' - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.get_cpu_facts() - self.get_memory_facts() - self.get_dmi_facts() - return self.facts - - def get_cpu_facts(self): - self.facts['processor'] = [] - - - rc, out, err = module.run_command("/usr/sbin/lsdev -Cc processor") - if out: - i = 0 - for line in out.split('\n'): - - if 'Available' in line: - if i == 0: - data = line.split(' ') - cpudev = data[0] - - i += 1 - self.facts['processor_count'] = int(i) - - rc, out, err = module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a type") - - data = out.split(' ') - self.facts['processor'] = data[1] - - rc, out, err = module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a smt_threads") - - data = out.split(' ') - self.facts['processor_cores'] = int(data[1]) - - def get_memory_facts(self): - pagesize = 4096 - rc, out, err = module.run_command("/usr/bin/vmstat -v") - for line in out.split('\n'): - data = line.split() - if 'memory pages' in line: - pagecount = long(data[0]) - if 'free pages' in line: - freecount = long(data[0]) - self.facts['memtotal_mb'] = pagesize * pagecount / 1024 / 1024 - self.facts['memfree_mb'] = pagesize * freecount / 1024 / 1024 - # Get swapinfo. swapinfo output looks like: - # Device 1M-blocks Used Avail Capacity - # /dev/ada0p3 314368 0 314368 0% - # - rc, out, err = module.run_command("/usr/sbin/lsps -s") - if out: - lines = out.split('\n') - data = lines[1].split() - swaptotal_mb = long(data[0].rstrip('MB')) - percused = int(data[1].rstrip('%')) - self.facts['swaptotal_mb'] = swaptotal_mb - self.facts['swapfree_mb'] = long(swaptotal_mb * ( 100 - percused ) / 100) - - def get_dmi_facts(self): - rc, out, err = module.run_command("/usr/sbin/lsattr -El sys0 -a fwversion") - data = out.split() - self.facts['firmware_version'] = data[1].strip('IBM,') - -class HPUX(Hardware): - """ - HP-UX-specifig subclass of Hardware. 
Defines memory and CPU facts: - - memfree_mb - - memtotal_mb - - swapfree_mb - - swaptotal_mb - - processor - - processor_cores - - processor_count - - model - - firmware - """ - - platform = 'HP-UX' - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.get_cpu_facts() - self.get_memory_facts() - self.get_hw_facts() - return self.facts - - def get_cpu_facts(self): - if self.facts['architecture'] == '9000/800': - rc, out, err = module.run_command("ioscan -FkCprocessor|wc -l") - self.facts['processor_count'] = int(out.strip()) - #Working with machinfo mess - elif self.facts['architecture'] == 'ia64': - if self.facts['distribution_version'] == "B.11.23": - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep 'Number of CPUs'") - self.facts['processor_count'] = int(out.strip().split('=')[1]) - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep 'processor family'") - self.facts['processor'] = re.search('.*(Intel.*)', out).groups()[0].strip() - rc, out, err = module.run_command("ioscan -FkCprocessor|wc -l") - self.facts['processor_cores'] = int(out.strip()) - if self.facts['distribution_version'] == "B.11.31": - #if machinfo return cores strings release B.11.31 > 1204 - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep core|wc -l") - if out.strip()== '0': - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Intel") - self.facts['processor_count'] = int(out.strip().split(" ")[0]) - #If hyperthreading is active divide cores by 2 - rc, out, err = module.run_command("/usr/sbin/psrset |grep LCPU") - data = re.sub(' +',' ',out).strip().split(' ') - if len(data) == 1: - hyperthreading = 'OFF' - else: - hyperthreading = data[1] - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep logical") - data = out.strip().split(" ") - if hyperthreading == 'ON': - self.facts['processor_cores'] = int(data[0])/2 - else: - if len(data) == 1: - self.facts['processor_cores'] = self.facts['processor_count'] - else: - self.facts['processor_cores'] = int(data[0]) - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Intel |cut -d' ' -f4-") - self.facts['processor'] = out.strip() - else: - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |egrep 'socket[s]?$' | tail -1") - self.facts['processor_count'] = int(out.strip().split(" ")[0]) - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep -e '[0-9] core' |tail -1") - self.facts['processor_cores'] = int(out.strip().split(" ")[0]) - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Intel") - self.facts['processor'] = out.strip() - - def get_memory_facts(self): - pagesize = 4096 - rc, out, err = module.run_command("/usr/bin/vmstat|tail -1") - data = int(re.sub(' +',' ',out).split(' ')[5].strip()) - self.facts['memfree_mb'] = pagesize * data / 1024 / 1024 - if self.facts['architecture'] == '9000/800': - rc, out, err = module.run_command("grep Physical /var/adm/syslog/syslog.log") - data = re.search('.*Physical: ([0-9]*) Kbytes.*',out).groups()[0].strip() - self.facts['memtotal_mb'] = int(data) / 1024 - else: - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Memory") - data = re.search('Memory[\ :=]*([0-9]*).*MB.*',out).groups()[0].strip() - self.facts['memtotal_mb'] = int(data) - rc, out, err = module.run_command("/usr/sbin/swapinfo -m -d -f -q") - self.facts['swaptotal_mb'] = int(out.strip()) - rc, out, err = module.run_command("/usr/sbin/swapinfo -m -d -f |egrep '^dev|^fs'") - swap = 0 - for line 
in out.strip().split('\n'): - swap += int(re.sub(' +',' ',line).split(' ')[3].strip()) - self.facts['swapfree_mb'] = swap - - def get_hw_facts(self): - rc, out, err = module.run_command("model") - self.facts['model'] = out.strip() - if self.facts['architecture'] == 'ia64': - rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep -i 'Firmware revision' |grep -v BMC") - self.facts['firmware_version'] = out.split(':')[1].strip() - - -class Darwin(Hardware): - """ - Darwin-specific subclass of Hardware. Defines memory and CPU facts: - - processor - - processor_cores - - memtotal_mb - - memfree_mb - - model - - osversion - - osrevision - """ - platform = 'Darwin' - - def __init__(self): - Hardware.__init__(self) - - def populate(self): - self.sysctl = self.get_sysctl() - self.get_mac_facts() - self.get_cpu_facts() - self.get_memory_facts() - return self.facts - - def get_sysctl(self): - rc, out, err = module.run_command(["/usr/sbin/sysctl", "hw", "machdep", "kern"]) - if rc != 0: - return dict() - sysctl = dict() - for line in out.splitlines(): - if line.rstrip("\n"): - (key, value) = re.split(' = |: ', line, maxsplit=1) - sysctl[key] = value.strip() - return sysctl - - def get_system_profile(self): - rc, out, err = module.run_command(["/usr/sbin/system_profiler", "SPHardwareDataType"]) - if rc != 0: - return dict() - system_profile = dict() - for line in out.splitlines(): - if ': ' in line: - (key, value) = line.split(': ', 1) - system_profile[key.strip()] = ' '.join(value.strip().split()) - return system_profile - - def get_mac_facts(self): - self.facts['model'] = self.sysctl['hw.model'] - self.facts['osversion'] = self.sysctl['kern.osversion'] - self.facts['osrevision'] = self.sysctl['kern.osrevision'] - - def get_cpu_facts(self): - if 'machdep.cpu.brand_string' in self.sysctl: # Intel - self.facts['processor'] = self.sysctl['machdep.cpu.brand_string'] - self.facts['processor_cores'] = self.sysctl['machdep.cpu.core_count'] - else: # PowerPC - system_profile = self.get_system_profile() - self.facts['processor'] = '%s @ %s' % (system_profile['Processor Name'], system_profile['Processor Speed']) - self.facts['processor_cores'] = self.sysctl['hw.physicalcpu'] - - def get_memory_facts(self): - self.facts['memtotal_mb'] = long(self.sysctl['hw.memsize']) / 1024 / 1024 - self.facts['memfree_mb'] = long(self.sysctl['hw.usermem']) / 1024 / 1024 - -class Network(Facts): - """ - This is a generic Network subclass of Facts. This should be further - subclassed to implement per platform. If you subclass this, - you must define: - - interfaces (a list of interface names) - - interface_ dictionary of ipv4, ipv6, and mac address information. - - All subclasses MUST define platform. - """ - platform = 'Generic' - - IPV6_SCOPE = { '0' : 'global', - '10' : 'host', - '20' : 'link', - '40' : 'admin', - '50' : 'site', - '80' : 'organization' } - - def __new__(cls, *arguments, **keyword): - subclass = cls - for sc in Network.__subclasses__(): - if sc.platform == platform.system(): - subclass = sc - return super(cls, subclass).__new__(subclass, *arguments, **keyword) - - def __init__(self): - Facts.__init__(self) - - def populate(self): - return self.facts - -class LinuxNetwork(Network): - """ - This is a Linux-specific subclass of Network. It defines - - interfaces (a list of interface names) - - interface_ dictionary of ipv4, ipv6, and mac address information. - - all_ipv4_addresses and all_ipv6_addresses: lists of all configured addresses. 
- - ipv4_address and ipv6_address: the first non-local address for each family. - """ - platform = 'Linux' - - def __init__(self): - Network.__init__(self) - - def populate(self): - ip_path = module.get_bin_path('ip') - if ip_path is None: - return self.facts - default_ipv4, default_ipv6 = self.get_default_interfaces(ip_path) - interfaces, ips = self.get_interfaces_info(ip_path, default_ipv4, default_ipv6) - self.facts['interfaces'] = interfaces.keys() - for iface in interfaces: - self.facts[iface] = interfaces[iface] - self.facts['default_ipv4'] = default_ipv4 - self.facts['default_ipv6'] = default_ipv6 - self.facts['all_ipv4_addresses'] = ips['all_ipv4_addresses'] - self.facts['all_ipv6_addresses'] = ips['all_ipv6_addresses'] - return self.facts - - def get_default_interfaces(self, ip_path): - # Use the commands: - # ip -4 route get 8.8.8.8 -> Google public DNS - # ip -6 route get 2404:6800:400a:800::1012 -> ipv6.google.com - # to find out the default outgoing interface, address, and gateway - command = dict( - v4 = [ip_path, '-4', 'route', 'get', '8.8.8.8'], - v6 = [ip_path, '-6', 'route', 'get', '2404:6800:400a:800::1012'] - ) - interface = dict(v4 = {}, v6 = {}) - for v in 'v4', 'v6': - if v == 'v6' and self.facts['os_family'] == 'RedHat' \ - and self.facts['distribution_version'].startswith('4.'): - continue - if v == 'v6' and not socket.has_ipv6: - continue - rc, out, err = module.run_command(command[v]) - if not out: - # v6 routing may result in - # RTNETLINK answers: Invalid argument - continue - words = out.split('\n')[0].split() - # A valid output starts with the queried address on the first line - if len(words) > 0 and words[0] == command[v][-1]: - for i in range(len(words) - 1): - if words[i] == 'dev': - interface[v]['interface'] = words[i+1] - elif words[i] == 'src': - interface[v]['address'] = words[i+1] - elif words[i] == 'via' and words[i+1] != command[v][-1]: - interface[v]['gateway'] = words[i+1] - return interface['v4'], interface['v6'] - - def get_interfaces_info(self, ip_path, default_ipv4, default_ipv6): - interfaces = {} - ips = dict( - all_ipv4_addresses = [], - all_ipv6_addresses = [], - ) - - for path in glob.glob('/sys/class/net/*'): - if not os.path.isdir(path): - continue - device = os.path.basename(path) - interfaces[device] = { 'device': device } - if os.path.exists(os.path.join(path, 'address')): - macaddress = open(os.path.join(path, 'address')).read().strip() - if macaddress and macaddress != '00:00:00:00:00:00': - interfaces[device]['macaddress'] = macaddress - if os.path.exists(os.path.join(path, 'mtu')): - interfaces[device]['mtu'] = int(open(os.path.join(path, 'mtu')).read().strip()) - if os.path.exists(os.path.join(path, 'operstate')): - interfaces[device]['active'] = open(os.path.join(path, 'operstate')).read().strip() != 'down' -# if os.path.exists(os.path.join(path, 'carrier')): -# interfaces[device]['link'] = open(os.path.join(path, 'carrier')).read().strip() == '1' - if os.path.exists(os.path.join(path, 'device','driver', 'module')): - interfaces[device]['module'] = os.path.basename(os.path.realpath(os.path.join(path, 'device', 'driver', 'module'))) - if os.path.exists(os.path.join(path, 'type')): - type = open(os.path.join(path, 'type')).read().strip() - if type == '1': - interfaces[device]['type'] = 'ether' - elif type == '512': - interfaces[device]['type'] = 'ppp' - elif type == '772': - interfaces[device]['type'] = 'loopback' - if os.path.exists(os.path.join(path, 'bridge')): - interfaces[device]['type'] = 'bridge' - 
interfaces[device]['interfaces'] = [ os.path.basename(b) for b in glob.glob(os.path.join(path, 'brif', '*')) ] - if os.path.exists(os.path.join(path, 'bridge', 'bridge_id')): - interfaces[device]['id'] = open(os.path.join(path, 'bridge', 'bridge_id')).read().strip() - if os.path.exists(os.path.join(path, 'bridge', 'stp_state')): - interfaces[device]['stp'] = open(os.path.join(path, 'bridge', 'stp_state')).read().strip() == '1' - if os.path.exists(os.path.join(path, 'bonding')): - interfaces[device]['type'] = 'bonding' - interfaces[device]['slaves'] = open(os.path.join(path, 'bonding', 'slaves')).read().split() - interfaces[device]['mode'] = open(os.path.join(path, 'bonding', 'mode')).read().split()[0] - interfaces[device]['miimon'] = open(os.path.join(path, 'bonding', 'miimon')).read().split()[0] - interfaces[device]['lacp_rate'] = open(os.path.join(path, 'bonding', 'lacp_rate')).read().split()[0] - primary = open(os.path.join(path, 'bonding', 'primary')).read() - if primary: - interfaces[device]['primary'] = primary - path = os.path.join(path, 'bonding', 'all_slaves_active') - if os.path.exists(path): - interfaces[device]['all_slaves_active'] = open(path).read() == '1' - - # Check whether a interface is in promiscuous mode - if os.path.exists(os.path.join(path,'flags')): - promisc_mode = False - # The second byte indicates whether the interface is in promiscuous mode. - # 1 = promisc - # 0 = no promisc - data = int(open(os.path.join(path, 'flags')).read().strip(),16) - promisc_mode = (data & 0x0100 > 0) - interfaces[device]['promisc'] = promisc_mode - - def parse_ip_output(output, secondary=False): - for line in output.split('\n'): - if not line: - continue - words = line.split() - if words[0] == 'inet': - if '/' in words[1]: - address, netmask_length = words[1].split('/') - else: - # pointopoint interfaces do not have a prefix - address = words[1] - netmask_length = "32" - address_bin = struct.unpack('!L', socket.inet_aton(address))[0] - netmask_bin = (1<<32) - (1<<32>>int(netmask_length)) - netmask = socket.inet_ntoa(struct.pack('!L', netmask_bin)) - network = socket.inet_ntoa(struct.pack('!L', address_bin & netmask_bin)) - iface = words[-1] - if iface != device: - interfaces[iface] = {} - if not secondary or "ipv4" not in interfaces[iface]: - interfaces[iface]['ipv4'] = {'address': address, - 'netmask': netmask, - 'network': network} - else: - if "ipv4_secondaries" not in interfaces[iface]: - interfaces[iface]["ipv4_secondaries"] = [] - interfaces[iface]["ipv4_secondaries"].append({ - 'address': address, - 'netmask': netmask, - 'network': network, - }) - - # add this secondary IP to the main device - if secondary: - if "ipv4_secondaries" not in interfaces[device]: - interfaces[device]["ipv4_secondaries"] = [] - interfaces[device]["ipv4_secondaries"].append({ - 'address': address, - 'netmask': netmask, - 'network': network, - }) - - # If this is the default address, update default_ipv4 - if 'address' in default_ipv4 and default_ipv4['address'] == address: - default_ipv4['netmask'] = netmask - default_ipv4['network'] = network - default_ipv4['macaddress'] = macaddress - default_ipv4['mtu'] = interfaces[device]['mtu'] - default_ipv4['type'] = interfaces[device].get("type", "unknown") - default_ipv4['alias'] = words[-1] - if not address.startswith('127.'): - ips['all_ipv4_addresses'].append(address) - elif words[0] == 'inet6': - address, prefix = words[1].split('/') - scope = words[3] - if 'ipv6' not in interfaces[device]: - interfaces[device]['ipv6'] = [] - 
interfaces[device]['ipv6'].append({ - 'address' : address, - 'prefix' : prefix, - 'scope' : scope - }) - # If this is the default address, update default_ipv6 - if 'address' in default_ipv6 and default_ipv6['address'] == address: - default_ipv6['prefix'] = prefix - default_ipv6['scope'] = scope - default_ipv6['macaddress'] = macaddress - default_ipv6['mtu'] = interfaces[device]['mtu'] - default_ipv6['type'] = interfaces[device].get("type", "unknown") - if not address == '::1': - ips['all_ipv6_addresses'].append(address) - - ip_path = module.get_bin_path("ip") - primary_data = subprocess.Popen( - [ip_path, 'addr', 'show', 'primary', device], - stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0] - secondary_data = subprocess.Popen( - [ip_path, 'addr', 'show', 'secondary', device], - stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0] - parse_ip_output(primary_data) - parse_ip_output(secondary_data, secondary=True) - - # replace : by _ in interface name since they are hard to use in template - new_interfaces = {} - for i in interfaces: - if ':' in i: - new_interfaces[i.replace(':','_')] = interfaces[i] - else: - new_interfaces[i] = interfaces[i] - return new_interfaces, ips - -class GenericBsdIfconfigNetwork(Network): - """ - This is a generic BSD subclass of Network using the ifconfig command. - It defines - - interfaces (a list of interface names) - - interface_ dictionary of ipv4, ipv6, and mac address information. - - all_ipv4_addresses and all_ipv6_addresses: lists of all configured addresses. - It currently does not define - - default_ipv4 and default_ipv6 - - type, mtu and network on interfaces - """ - platform = 'Generic_BSD_Ifconfig' - - def __init__(self): - Network.__init__(self) - - def populate(self): - - ifconfig_path = module.get_bin_path('ifconfig') - - if ifconfig_path is None: - return self.facts - route_path = module.get_bin_path('route') - - if route_path is None: - return self.facts - - default_ipv4, default_ipv6 = self.get_default_interfaces(route_path) - interfaces, ips = self.get_interfaces_info(ifconfig_path) - self.merge_default_interface(default_ipv4, interfaces, 'ipv4') - self.merge_default_interface(default_ipv6, interfaces, 'ipv6') - self.facts['interfaces'] = interfaces.keys() - - for iface in interfaces: - self.facts[iface] = interfaces[iface] - - self.facts['default_ipv4'] = default_ipv4 - self.facts['default_ipv6'] = default_ipv6 - self.facts['all_ipv4_addresses'] = ips['all_ipv4_addresses'] - self.facts['all_ipv6_addresses'] = ips['all_ipv6_addresses'] - - return self.facts - - def get_default_interfaces(self, route_path): - - # Use the commands: - # route -n get 8.8.8.8 -> Google public DNS - # route -n get -inet6 2404:6800:400a:800::1012 -> ipv6.google.com - # to find out the default outgoing interface, address, and gateway - - command = dict( - v4 = [route_path, '-n', 'get', '8.8.8.8'], - v6 = [route_path, '-n', 'get', '-inet6', '2404:6800:400a:800::1012'] - ) - - interface = dict(v4 = {}, v6 = {}) - - for v in 'v4', 'v6': - - if v == 'v6' and not socket.has_ipv6: - continue - rc, out, err = module.run_command(command[v]) - if not out: - # v6 routing may result in - # RTNETLINK answers: Invalid argument - continue - lines = out.split('\n') - for line in lines: - words = line.split() - # Collect output from route command - if len(words) > 1: - if words[0] == 'interface:': - interface[v]['interface'] = words[1] - if words[0] == 'gateway:': - interface[v]['gateway'] = words[1] - - return interface['v4'], interface['v6'] - - def 
get_interfaces_info(self, ifconfig_path): - interfaces = {} - current_if = {} - ips = dict( - all_ipv4_addresses = [], - all_ipv6_addresses = [], - ) - # FreeBSD, DragonflyBSD, NetBSD, OpenBSD and OS X all implicitly add '-a' - # when running the command 'ifconfig'. - # Solaris must explicitly run the command 'ifconfig -a'. - rc, out, err = module.run_command([ifconfig_path, '-a']) - - for line in out.split('\n'): - - if line: - words = line.split() - - if re.match('^\S', line) and len(words) > 3: - current_if = self.parse_interface_line(words) - interfaces[ current_if['device'] ] = current_if - elif words[0].startswith('options='): - self.parse_options_line(words, current_if, ips) - elif words[0] == 'nd6': - self.parse_nd6_line(words, current_if, ips) - elif words[0] == 'ether': - self.parse_ether_line(words, current_if, ips) - elif words[0] == 'media:': - self.parse_media_line(words, current_if, ips) - elif words[0] == 'status:': - self.parse_status_line(words, current_if, ips) - elif words[0] == 'lladdr': - self.parse_lladdr_line(words, current_if, ips) - elif words[0] == 'inet': - self.parse_inet_line(words, current_if, ips) - elif words[0] == 'inet6': - self.parse_inet6_line(words, current_if, ips) - else: - self.parse_unknown_line(words, current_if, ips) - - return interfaces, ips - - def parse_interface_line(self, words): - device = words[0][0:-1] - current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} - current_if['flags'] = self.get_options(words[1]) - current_if['mtu'] = words[3] - current_if['macaddress'] = 'unknown' # will be overwritten later - return current_if - - def parse_options_line(self, words, current_if, ips): - # Mac has options like this... - current_if['options'] = self.get_options(words[0]) - - def parse_nd6_line(self, words, current_if, ips): - # FreBSD has options like this... 
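  # (illustrative: a FreeBSD nd6 line looks like
  #  "nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>"; get_options() extracts the
  #  comma-separated flags between the angle brackets)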
- current_if['options'] = self.get_options(words[1]) - - def parse_ether_line(self, words, current_if, ips): - current_if['macaddress'] = words[1] - - def parse_media_line(self, words, current_if, ips): - # not sure if this is useful - we also drop information - current_if['media'] = words[1] - if len(words) > 2: - current_if['media_select'] = words[2] - if len(words) > 3: - current_if['media_type'] = words[3][1:] - if len(words) > 4: - current_if['media_options'] = self.get_options(words[4]) - - def parse_status_line(self, words, current_if, ips): - current_if['status'] = words[1] - - def parse_lladdr_line(self, words, current_if, ips): - current_if['lladdr'] = words[1] - - def parse_inet_line(self, words, current_if, ips): - address = {'address': words[1]} - # deal with hex netmask - if re.match('([0-9a-f]){8}', words[3]) and len(words[3]) == 8: - words[3] = '0x' + words[3] - if words[3].startswith('0x'): - address['netmask'] = socket.inet_ntoa(struct.pack('!L', int(words[3], base=16))) - else: - # otherwise assume this is a dotted quad - address['netmask'] = words[3] - # calculate the network - address_bin = struct.unpack('!L', socket.inet_aton(address['address']))[0] - netmask_bin = struct.unpack('!L', socket.inet_aton(address['netmask']))[0] - address['network'] = socket.inet_ntoa(struct.pack('!L', address_bin & netmask_bin)) - # broadcast may be given or we need to calculate - if len(words) > 5: - address['broadcast'] = words[5] - else: - address['broadcast'] = socket.inet_ntoa(struct.pack('!L', address_bin | (~netmask_bin & 0xffffffff))) - # add to our list of addresses - if not words[1].startswith('127.'): - ips['all_ipv4_addresses'].append(address['address']) - current_if['ipv4'].append(address) - - def parse_inet6_line(self, words, current_if, ips): - address = {'address': words[1]} - if (len(words) >= 4) and (words[2] == 'prefixlen'): - address['prefix'] = words[3] - if (len(words) >= 6) and (words[4] == 'scopeid'): - address['scope'] = words[5] - localhost6 = ['::1', '::1/128', 'fe80::1%lo0'] - if address['address'] not in localhost6: - ips['all_ipv6_addresses'].append(address['address']) - current_if['ipv6'].append(address) - - def parse_unknown_line(self, words, current_if, ips): - # we are going to ignore unknown lines here - this may be - # a bad idea - but you can override it in your subclass - pass - - def get_options(self, option_string): - start = option_string.find('<') + 1 - end = option_string.rfind('>') - if (start > 0) and (end > 0) and (end > start + 1): - option_csv = option_string[start:end] - return option_csv.split(',') - else: - return [] - - def merge_default_interface(self, defaults, interfaces, ip_type): - if not 'interface' in defaults.keys(): - return - if not defaults['interface'] in interfaces: - return - ifinfo = interfaces[defaults['interface']] - # copy all the interface values across except addresses - for item in ifinfo.keys(): - if item != 'ipv4' and item != 'ipv6': - defaults[item] = ifinfo[item] - if len(ifinfo[ip_type]) > 0: - for item in ifinfo[ip_type][0].keys(): - defaults[item] = ifinfo[ip_type][0][item] - -class DarwinNetwork(GenericBsdIfconfigNetwork, Network): - """ - This is the Mac OS X/Darwin Network Class. 
- It uses the GenericBsdIfconfigNetwork unchanged - """ - platform = 'Darwin' - - # media line is different to the default FreeBSD one - def parse_media_line(self, words, current_if, ips): - # not sure if this is useful - we also drop information - current_if['media'] = 'Unknown' # Mac does not give us this - current_if['media_select'] = words[1] - if len(words) > 2: - current_if['media_type'] = words[2][1:] - if len(words) > 3: - current_if['media_options'] = self.get_options(words[3]) - - -class FreeBSDNetwork(GenericBsdIfconfigNetwork, Network): - """ - This is the FreeBSD Network Class. - It uses the GenericBsdIfconfigNetwork unchanged. - """ - platform = 'FreeBSD' - -class AIXNetwork(GenericBsdIfconfigNetwork, Network): - """ - This is the AIX Network Class. - It uses the GenericBsdIfconfigNetwork unchanged. - """ - platform = 'AIX' - - # AIX 'ifconfig -a' does not have three words in the interface line - def get_interfaces_info(self, ifconfig_path): - interfaces = {} - current_if = {} - ips = dict( - all_ipv4_addresses = [], - all_ipv6_addresses = [], - ) - rc, out, err = module.run_command([ifconfig_path, '-a']) - - for line in out.split('\n'): - - if line: - words = line.split() - - # only this condition differs from GenericBsdIfconfigNetwork - if re.match('^\w*\d*:', line): - current_if = self.parse_interface_line(words) - interfaces[ current_if['device'] ] = current_if - elif words[0].startswith('options='): - self.parse_options_line(words, current_if, ips) - elif words[0] == 'nd6': - self.parse_nd6_line(words, current_if, ips) - elif words[0] == 'ether': - self.parse_ether_line(words, current_if, ips) - elif words[0] == 'media:': - self.parse_media_line(words, current_if, ips) - elif words[0] == 'status:': - self.parse_status_line(words, current_if, ips) - elif words[0] == 'lladdr': - self.parse_lladdr_line(words, current_if, ips) - elif words[0] == 'inet': - self.parse_inet_line(words, current_if, ips) - elif words[0] == 'inet6': - self.parse_inet6_line(words, current_if, ips) - else: - self.parse_unknown_line(words, current_if, ips) - - return interfaces, ips - - # AIX 'ifconfig -a' does not inform about MTU, so remove current_if['mtu'] here - def parse_interface_line(self, words): - device = words[0][0:-1] - current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} - current_if['flags'] = self.get_options(words[1]) - current_if['macaddress'] = 'unknown' # will be overwritten later - return current_if - -class OpenBSDNetwork(GenericBsdIfconfigNetwork, Network): - """ - This is the OpenBSD Network Class. - It uses the GenericBsdIfconfigNetwork. - """ - platform = 'OpenBSD' - - # Return macaddress instead of lladdr - def parse_lladdr_line(self, words, current_if, ips): - current_if['macaddress'] = words[1] - -class SunOSNetwork(GenericBsdIfconfigNetwork, Network): - """ - This is the SunOS Network Class. - It uses the GenericBsdIfconfigNetwork. - - Solaris can have different FLAGS and MTU for IPv4 and IPv6 on the same interface - so these facts have been moved inside the 'ipv4' and 'ipv6' lists. - """ - platform = 'SunOS' - - # Solaris 'ifconfig -a' will print interfaces twice, once for IPv4 and again for IPv6. - # MTU and FLAGS also may differ between IPv4 and IPv6 on the same interface. - # 'parse_interface_line()' checks for previously seen interfaces before defining - # 'current_if' so that IPv6 facts don't clobber IPv4 facts (or vice versa). 
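  # (illustrative: Solaris prints the same device once per address family, e.g.
  #  "lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232" and later
  #  "lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252",
  #  which is why the per-family flags/mtu end up inside the ipv4/ipv6 lists)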
- def get_interfaces_info(self, ifconfig_path): - interfaces = {} - current_if = {} - ips = dict( - all_ipv4_addresses = [], - all_ipv6_addresses = [], - ) - rc, out, err = module.run_command([ifconfig_path, '-a']) - - for line in out.split('\n'): - - if line: - words = line.split() - - if re.match('^\S', line) and len(words) > 3: - current_if = self.parse_interface_line(words, current_if, interfaces) - interfaces[ current_if['device'] ] = current_if - elif words[0].startswith('options='): - self.parse_options_line(words, current_if, ips) - elif words[0] == 'nd6': - self.parse_nd6_line(words, current_if, ips) - elif words[0] == 'ether': - self.parse_ether_line(words, current_if, ips) - elif words[0] == 'media:': - self.parse_media_line(words, current_if, ips) - elif words[0] == 'status:': - self.parse_status_line(words, current_if, ips) - elif words[0] == 'lladdr': - self.parse_lladdr_line(words, current_if, ips) - elif words[0] == 'inet': - self.parse_inet_line(words, current_if, ips) - elif words[0] == 'inet6': - self.parse_inet6_line(words, current_if, ips) - else: - self.parse_unknown_line(words, current_if, ips) - - # 'parse_interface_line' and 'parse_inet*_line' leave two dicts in the - # ipv4/ipv6 lists which is ugly and hard to read. - # This quick hack merges the dictionaries. Purely cosmetic. - for iface in interfaces: - for v in 'ipv4', 'ipv6': - combined_facts = {} - for facts in interfaces[iface][v]: - combined_facts.update(facts) - if len(combined_facts.keys()) > 0: - interfaces[iface][v] = [combined_facts] - - return interfaces, ips - - def parse_interface_line(self, words, current_if, interfaces): - device = words[0][0:-1] - if device not in interfaces.keys(): - current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} - else: - current_if = interfaces[device] - flags = self.get_options(words[1]) - if 'IPv4' in flags: - v = 'ipv4' - if 'IPv6' in flags: - v = 'ipv6' - current_if[v].append({'flags': flags, 'mtu': words[3]}) - current_if['macaddress'] = 'unknown' # will be overwritten later - return current_if - - # Solaris displays single digit octets in MAC addresses e.g. 0:1:2:d:e:f - # Add leading zero to each octet where needed. - def parse_ether_line(self, words, current_if, ips): - macaddress = '' - for octet in words[1].split(':'): - octet = ('0' + octet)[-2:None] - macaddress += (octet + ':') - current_if['macaddress'] = macaddress[0:-1] - -class Virtual(Facts): - """ - This is a generic Virtual subclass of Facts. This should be further - subclassed to implement per platform. If you subclass this, - you should define: - - virtualization_type - - virtualization_role - - container (e.g. solaris zones, freebsd jails, linux containers) - - All subclasses MUST define platform. - """ - - def __new__(cls, *arguments, **keyword): - subclass = cls - for sc in Virtual.__subclasses__(): - if sc.platform == platform.system(): - subclass = sc - return super(cls, subclass).__new__(subclass, *arguments, **keyword) - - def __init__(self): - Facts.__init__(self) - - def populate(self): - return self.facts - -class LinuxVirtual(Virtual): - """ - This is a Linux-specific subclass of Virtual. 
It defines - - virtualization_type - - virtualization_role - """ - platform = 'Linux' - - def __init__(self): - Virtual.__init__(self) - - def populate(self): - self.get_virtual_facts() - return self.facts - - # For more information, check: http://people.redhat.com/~rjones/virt-what/ - def get_virtual_facts(self): - if os.path.exists("/proc/xen"): - self.facts['virtualization_type'] = 'xen' - self.facts['virtualization_role'] = 'guest' - try: - for line in open('/proc/xen/capabilities'): - if "control_d" in line: - self.facts['virtualization_role'] = 'host' - except IOError: - pass - return - - if os.path.exists('/proc/vz'): - self.facts['virtualization_type'] = 'openvz' - if os.path.exists('/proc/bc'): - self.facts['virtualization_role'] = 'host' - else: - self.facts['virtualization_role'] = 'guest' - return - - if os.path.exists('/proc/1/cgroup'): - for line in open('/proc/1/cgroup').readlines(): - if re.search('/lxc/', line): - self.facts['virtualization_type'] = 'lxc' - self.facts['virtualization_role'] = 'guest' - return - - product_name = get_file_content('/sys/devices/virtual/dmi/id/product_name') - - if product_name in ['KVM', 'Bochs']: - self.facts['virtualization_type'] = 'kvm' - self.facts['virtualization_role'] = 'guest' - return - - if product_name == 'RHEV Hypervisor': - self.facts['virtualization_type'] = 'RHEV' - self.facts['virtualization_role'] = 'guest' - return - - if product_name == 'VMware Virtual Platform': - self.facts['virtualization_type'] = 'VMware' - self.facts['virtualization_role'] = 'guest' - return - - bios_vendor = get_file_content('/sys/devices/virtual/dmi/id/bios_vendor') - - if bios_vendor == 'Xen': - self.facts['virtualization_type'] = 'xen' - self.facts['virtualization_role'] = 'guest' - return - - if bios_vendor == 'innotek GmbH': - self.facts['virtualization_type'] = 'virtualbox' - self.facts['virtualization_role'] = 'guest' - return - - sys_vendor = get_file_content('/sys/devices/virtual/dmi/id/sys_vendor') - - # FIXME: This does also match hyperv - if sys_vendor == 'Microsoft Corporation': - self.facts['virtualization_type'] = 'VirtualPC' - self.facts['virtualization_role'] = 'guest' - return - - if sys_vendor == 'Parallels Software International Inc.': - self.facts['virtualization_type'] = 'parallels' - self.facts['virtualization_role'] = 'guest' - return - - if os.path.exists('/proc/self/status'): - for line in open('/proc/self/status').readlines(): - if re.match('^VxID: \d+', line): - self.facts['virtualization_type'] = 'linux_vserver' - if re.match('^VxID: 0', line): - self.facts['virtualization_role'] = 'host' - else: - self.facts['virtualization_role'] = 'guest' - return - - if os.path.exists('/proc/cpuinfo'): - for line in open('/proc/cpuinfo').readlines(): - if re.match('^model name.*QEMU Virtual CPU', line): - self.facts['virtualization_type'] = 'kvm' - elif re.match('^vendor_id.*User Mode Linux', line): - self.facts['virtualization_type'] = 'uml' - elif re.match('^model name.*UML', line): - self.facts['virtualization_type'] = 'uml' - elif re.match('^vendor_id.*PowerVM Lx86', line): - self.facts['virtualization_type'] = 'powervm_lx86' - elif re.match('^vendor_id.*IBM/S390', line): - self.facts['virtualization_type'] = 'ibm_systemz' - else: - continue - self.facts['virtualization_role'] = 'guest' - return - - # Beware that we can have both kvm and virtualbox running on a single system - if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK): - modules = [] - for line in open("/proc/modules").readlines(): - data = 
line.split(" ", 1) - modules.append(data[0]) - - if 'kvm' in modules: - self.facts['virtualization_type'] = 'kvm' - self.facts['virtualization_role'] = 'host' - return - - if 'vboxdrv' in modules: - self.facts['virtualization_type'] = 'virtualbox' - self.facts['virtualization_role'] = 'host' - return - -class HPUXVirtual(Virtual): - """ - This is a HP-UX specific subclass of Virtual. It defines - - virtualization_type - - virtualization_role - """ - platform = 'HP-UX' - - def __init__(self): - Virtual.__init__(self) - - def populate(self): - self.get_virtual_facts() - return self.facts - - def get_virtual_facts(self): - if os.path.exists('/usr/sbin/vecheck'): - rc, out, err = module.run_command("/usr/sbin/vecheck") - if rc == 0: - self.facts['virtualization_type'] = 'guest' - self.facts['virtualization_role'] = 'HP vPar' - if os.path.exists('/opt/hpvm/bin/hpvminfo'): - rc, out, err = module.run_command("/opt/hpvm/bin/hpvminfo") - if rc == 0 and re.match('.*Running.*HPVM vPar.*', out): - self.facts['virtualization_type'] = 'guest' - self.facts['virtualization_role'] = 'HPVM vPar' - elif rc == 0 and re.match('.*Running.*HPVM guest.*', out): - self.facts['virtualization_type'] = 'guest' - self.facts['virtualization_role'] = 'HPVM IVM' - elif rc == 0 and re.match('.*Running.*HPVM host.*', out): - self.facts['virtualization_type'] = 'host' - self.facts['virtualization_role'] = 'HPVM' - if os.path.exists('/usr/sbin/parstatus'): - rc, out, err = module.run_command("/usr/sbin/parstatus") - if rc == 0: - self.facts['virtualization_type'] = 'guest' - self.facts['virtualization_role'] = 'HP nPar' - - -class SunOSVirtual(Virtual): - """ - This is a SunOS-specific subclass of Virtual. It defines - - virtualization_type - - virtualization_role - - container - """ - platform = 'SunOS' - - def __init__(self): - Virtual.__init__(self) - - def populate(self): - self.get_virtual_facts() - return self.facts - - def get_virtual_facts(self): - rc, out, err = module.run_command("/usr/sbin/prtdiag") - for line in out.split('\n'): - if 'VMware' in line: - self.facts['virtualization_type'] = 'vmware' - self.facts['virtualization_role'] = 'guest' - if 'Parallels' in line: - self.facts['virtualization_type'] = 'parallels' - self.facts['virtualization_role'] = 'guest' - if 'VirtualBox' in line: - self.facts['virtualization_type'] = 'virtualbox' - self.facts['virtualization_role'] = 'guest' - if 'HVM domU' in line: - self.facts['virtualization_type'] = 'xen' - self.facts['virtualization_role'] = 'guest' - # Check if it's a zone - if os.path.exists("/usr/bin/zonename"): - rc, out, err = module.run_command("/usr/bin/zonename") - if out.rstrip() != "global": - self.facts['container'] = 'zone' - # Check if it's a branded zone (i.e. Solaris 8/9 zone) - if os.path.isdir('/.SUNWnative'): - self.facts['container'] = 'zone' - # If it's a zone check if we can detect if our global zone is itself virtualized. - # Relies on the "guest tools" (e.g. 
vmware tools) to be installed - if 'container' in self.facts and self.facts['container'] == 'zone': - rc, out, err = module.run_command("/usr/sbin/modinfo") - for line in out.split('\n'): - if 'VMware' in line: - self.facts['virtualization_type'] = 'vmware' - self.facts['virtualization_role'] = 'guest' - if 'VirtualBox' in line: - self.facts['virtualization_type'] = 'virtualbox' - self.facts['virtualization_role'] = 'guest' - -def get_file_content(path, default=None): - data = default - if os.path.exists(path) and os.access(path, os.R_OK): - data = open(path).read().strip() - if len(data) == 0: - data = default - return data - -def ansible_facts(): - facts = {} - facts.update(Facts().populate()) - facts.update(Hardware().populate()) - facts.update(Network().populate()) - facts.update(Virtual().populate()) - return facts - -# =========================================== def run_setup(module): - setup_options = {} - facts = ansible_facts() + setup_options = dict(module_setup=True) + facts = ansible_facts(module) for (k, v) in facts.items(): setup_options["ansible_%s" % k.replace('-', '_')] = v # Look for the path to the facter and ohai binary and set # the variable to that path. - facter_path = module.get_bin_path('facter') ohai_path = module.get_bin_path('ohai') # if facter is installed, and we can use --json because # ruby-json is ALSO installed, include facter data in the JSON - if facter_path is not None: rc, out, err = module.run_command(facter_path + " --json") facter = True @@ -2322,7 +99,6 @@ def run_setup(module): setup_options["facter_%s" % k] = v # ditto for ohai - if ohai_path is not None: rc, out, err = module.run_command(ohai_path) ohai = True @@ -2359,5 +135,9 @@ def main(): module.exit_json(**data) # import module snippets + from ansible.module_utils.basic import * + +from ansible.module_utils.facts import * + main() diff --git a/library/system/sysctl b/library/system/sysctl index 97e5bc5e6c1..ab1da5e0959 100644 --- a/library/system/sysctl +++ b/library/system/sysctl @@ -144,9 +144,13 @@ class SysctlModule(object): if self.file_values[thisname] is None and self.args['state'] == "present": self.changed = True self.write_file = True + elif self.file_values[thisname] is None and self.args['state'] == "absent": + self.changed = False elif self.file_values[thisname] != self.args['value']: self.changed = True self.write_file = True + + # use the sysctl command or not? 
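+        # (sysctl_set=yes also verifies the live value via the sysctl binary
+        #  and sets it with -w if it differs, rather than only editing the
+        #  sysctl.conf file)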
if self.args['sysctl_set']: if self.proc_value is None: self.changed = True @@ -235,7 +239,16 @@ class SysctlModule(object): # Get the token value from the sysctl file def read_sysctl_file(self): - lines = open(self.sysctl_file, "r").readlines() + + lines = [] + if os.path.isfile(self.sysctl_file): + try: + f = open(self.sysctl_file, "r") + lines = f.readlines() + f.close() + except IOError, e: + self.module.fail_json(msg="Failed to open %s: %s" % (self.sysctl_file, str(e))) + for line in lines: line = line.strip() self.file_lines.append(line) diff --git a/library/system/ufw b/library/system/ufw new file mode 100644 index 00000000000..8496997b279 --- /dev/null +++ b/library/system/ufw @@ -0,0 +1,261 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +# (c) 2014, Ahti Kitsik +# (c) 2014, Jarno Keskikangas +# (c) 2013, Aleksey Ovcharenko +# (c) 2013, James Martin +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . + +DOCUMENTATION = ''' +--- +module: ufw +short_description: Manage firewall with UFW +description: + - Manage firewall with UFW. +version_added: 1.6 +author: Aleksey Ovcharenko, Jarno Keskikangas, Ahti Kitsik +notes: + - See C(man ufw) for more examples. +requirements: + - C(ufw) package +options: + state: + description: + - C(enabled) reloads firewall and enables firewall on boot. + - C(disabled) unloads firewall and disables firewall on boot. + - C(reloaded) reloads firewall. + - C(reset) disables and resets firewall to installation defaults. + required: false + choices: ['enabled', 'disabled', 'reloaded', 'reset'] + policy: + description: + - Change the default policy for incoming or outgoing traffic. + required: false + alias: default + choices: ['allow', 'deny', 'reject'] + direction: + description: + - Select direction for a rule or default policy command. + required: false + choices: ['in', 'out', 'incoming', 'outgoing'] + logging: + description: + - Toggles logging. Logged packets use the LOG_KERN syslog facility. + choices: ['on', 'off', 'low', 'medium', 'high', 'full'] + required: false + insert: + description: + - Insert the corresponding rule as rule number NUM + required: false + rule: + description: + - Add firewall rule + required: false + choices: ['allow', 'deny', 'reject', 'limit'] + log: + description: + - Log new connections matched to this rule + required: false + choices: ['yes', 'no'] + from_ip: + description: + - Source IP address. + required: false + aliases: ['from', 'src'] + default: 'any' + from_port: + description: + - Source port. + required: false + to_ip: + description: + - Destination IP address. + required: false + aliases: ['to', 'dest'] + default: 'any' + to_port: + description: + - Destination port. + required: false + aliases: ['port'] + proto: + description: + - TCP/IP protocol. 
+ choices: ['any', 'tcp', 'udp', 'ipv6', 'esp', 'ah'] + required: false + name: + description: + - Use profile located in C(/etc/ufw/applications.d) + required: false + aliases: ['app'] + delete: + description: + - Delete rule. + required: false + choices: ['yes', 'no'] +''' + +EXAMPLES = ''' +# Allow everything and enable UFW +ufw: state=enabled policy=allow + +# Set logging +ufw: logging=on + +# Sometimes it is desirable to let the sender know when traffic is +# being denied, rather than simply ignoring it. In these cases, use +# reject instead of deny. In addition, log rejected connections: +ufw: rule=reject port=auth log=yes + +# ufw supports connection rate limiting, which is useful for protecting +# against brute-force login attacks. ufw will deny connections if an IP +# address has attempted to initiate 6 or more connections in the last +# 30 seconds. See http://www.debian-administration.org/articles/187 +# for details. Typical usage is: +ufw: rule=limit port=ssh proto=tcp + +# Allow OpenSSH +ufw: rule=allow name=OpenSSH + +# Delete OpenSSH rule +ufw: rule=allow name=OpenSSH delete=yes + +# Deny all access to port 53: +ufw: rule=deny port=53 + +# Allow all access to tcp port 80: +ufw: rule=allow port=80 proto=tcp + +# Allow all access from RFC1918 networks to this host: +ufw: rule=allow src={{ item }} +with_items: +- 10.0.0.0/8 +- 172.16.0.0/12 +- 192.168.0.0/16 + +# Deny access to udp port 514 from host 1.2.3.4: +ufw: rule=deny proto=udp src=1.2.3.4 port=514 + +# Allow incoming access to eth0 from 1.2.3.5 port 5469 to 1.2.3.4 port 5469 +ufw: rule=allow interface=eth0 direction=in proto=udp src=1.2.3.5 from_port=5469 dest=1.2.3.4 to_port=5469 + +# Deny all traffic from the IPv6 2001:db8::/32 to tcp port 25 on this host. +# Note that IPv6 must be enabled in /etc/default/ufw for IPv6 firewalling to work. 
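+# (2001:db8::/32 in the example below is the RFC 3849 documentation prefix,
+# used here purely as a placeholder.)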
+ufw: rule=deny proto=tcp src=2001:db8::/32 port=25
+'''
+
+from operator import itemgetter
+
+
+def main():
+    module = AnsibleModule(
+        argument_spec = dict(
+            state     = dict(default=None, choices=['enabled', 'disabled', 'reloaded', 'reset']),
+            default   = dict(default=None, aliases=['policy'], choices=['allow', 'deny', 'reject']),
+            logging   = dict(default=None, choices=['on', 'off', 'low', 'medium', 'high', 'full']),
+            direction = dict(default=None, choices=['in', 'incoming', 'out', 'outgoing']),
+            delete    = dict(default=False, type='bool'),
+            insert    = dict(default=None),
+            rule      = dict(default=None, choices=['allow', 'deny', 'reject', 'limit']),
+            interface = dict(default=None, aliases=['if']),
+            log       = dict(default=False, type='bool'),
+            from_ip   = dict(default='any', aliases=['src', 'from']),
+            from_port = dict(default=None),
+            to_ip     = dict(default='any', aliases=['dest', 'to']),
+            to_port   = dict(default=None, aliases=['port']),
+            proto     = dict(default=None, aliases=['protocol'], choices=['any', 'tcp', 'udp', 'ipv6', 'esp', 'ah']),
+            app       = dict(default=None, aliases=['name'])
+        ),
+        supports_check_mode = True,
+        mutually_exclusive = [['app', 'proto', 'logging']]
+    )
+
+    cmds = []
+
+    def execute(cmd):
+        # each element of cmd is a [condition, text] pair (or a bare [text]);
+        # keep the pairs whose first item is truthy and join their last items
+        cmd = ' '.join(map(itemgetter(-1), filter(itemgetter(0), cmd)))
+
+        cmds.append(cmd)
+        (rc, out, err) = module.run_command(cmd)
+
+        if rc != 0:
+            module.fail_json(msg=err or out)
+
+    params = module.params
+
+    # Ensure at least one of the command arguments is given
+    command_keys = ['state', 'default', 'rule', 'logging']
+    commands = dict((key, params[key]) for key in command_keys if params[key])
+
+    if len(commands) < 1:
+        module.fail_json(msg="none of the command arguments %s were given" % command_keys)
+
+    # Ensure ufw is available
+    ufw_bin = module.get_bin_path('ufw', True)
+
+    # Save the pre state and rules in order to recognize changes
+    (_, pre_state, _) = module.run_command(ufw_bin + ' status verbose')
+    (_, pre_rules, _) = module.run_command("grep '^### tuple' /lib/ufw/user*.rules")
+
+    # Execute commands
+    for (command, value) in commands.iteritems():
+        cmd = [[ufw_bin], [module.check_mode, '--dry-run']]
+
+        if command == 'state':
+            states = { 'enabled': 'enable',  'disabled': 'disable',
+                       'reloaded': 'reload', 'reset':    'reset' }
+            execute(cmd + [['-f'], [states[value]]])
+
+        elif command == 'logging':
+            execute(cmd + [[command], [value]])
+
+        elif command == 'default':
+            execute(cmd + [[command], [value], [params['direction']]])
+
+        elif command == 'rule':
+            # Rules are constructed according to the long format
+            #
+            # ufw [--dry-run] [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] \
+            #     [from ADDRESS [port PORT]] [to ADDRESS [port PORT]] \
+            #     [proto protocol] [app application]
+            cmd.append([module.boolean(params['delete']), 'delete'])
+            cmd.append([params['insert'], "insert %s" % params['insert']])
+            cmd.append([value])
+            cmd.append([module.boolean(params['log']), 'log'])
+
+            for (key, template) in [('direction', "%s" ), ('interface', "on %s" ),
+                                    ('from_ip',   "from %s" ), ('from_port', "port %s" ),
+                                    ('to_ip',     "to %s" ),   ('to_port',   "port %s" ),
+                                    ('proto',     "proto %s"), ('app',       "app '%s'")]:
+
+                value = params[key]
+                cmd.append([value, template % (value)])
+
+            execute(cmd)
+
+    # Get the new state
+    (_, post_state, _) = module.run_command(ufw_bin + ' status verbose')
+    (_, post_rules, _) = module.run_command("grep '^### tuple' /lib/ufw/user*.rules")
+    changed = (pre_state != post_state) or (pre_rules != post_rules)
+
+    return
module.exit_json(changed=changed, commands=cmds, msg=post_state.rstrip()) + +# import module snippets +from ansible.module_utils.basic import * + +main() diff --git a/library/system/user b/library/system/user index a6d3a0ec32d..8c649c0607c 100644 --- a/library/system/user +++ b/library/system/user @@ -61,6 +61,8 @@ options: except the primary group. append: required: false + default: "no" + choices: [ "yes", "no" ] description: - If C(yes), will only add groups, not set them to just the list in I(groups). @@ -181,6 +183,9 @@ EXAMPLES = ''' # Add the user 'johnd' with a specific uid and a primary group of 'admin' - user: name=johnd comment="John Doe" uid=1040 +# Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups +- user: name=james shell=/bin/bash groups=admins,developers append=yes + # Remove the user 'johnd' - user: name=johnd state=absent remove=yes @@ -1186,6 +1191,7 @@ class SunOS(User): lines.append(line) continue fields[1] = self.password + fields[2] = str(int(time.time() / 86400)) line = ':'.join(fields) lines.append('%s\n' % line) open(self.SHADOWFILE, 'w+').writelines(lines) @@ -1272,6 +1278,7 @@ class SunOS(User): lines.append(line) continue fields[1] = self.password + fields[2] = str(int(time.time() / 86400)) line = ':'.join(fields) lines.append('%s\n' % line) open(self.SHADOWFILE, 'w+').writelines(lines) diff --git a/library/utilities/accelerate b/library/utilities/accelerate index a6e84e32376..5a8c96c64a9 100644 --- a/library/utilities/accelerate +++ b/library/utilities/accelerate @@ -53,6 +53,14 @@ options: if this parameter is set to true. required: false default: false + multi_key: + description: + - When enabled, the daemon will open a local socket file which can be used by future daemon executions to + upload a new key to the already running daemon, so that multiple users can connect using different keys. + This access still requires an ssh connection as the uid for which the daemon is currently running. + required: false + default: no + version_added: "1.6" notes: - See the advanced playbooks chapter for more about using accelerated mode. 
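+  - With C(multi_key) enabled, later invocations hand their key to the
+    already running daemon over a local unix socket; the daemon replies C(OK)
+    for a newly accepted key, C(EXISTS) for an already-known key, or C(BADKEY)
+    if the key cannot be loaded.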
requirements: [ "python-keyczar" ] @@ -71,6 +79,7 @@ EXAMPLES = ''' ''' import base64 +import errno import getpass import json import os @@ -88,10 +97,13 @@ import traceback import SocketServer from datetime import datetime -from threading import Thread +from threading import Thread, Lock + +# import module snippets +# we must import this here at the top so we can use get_module_path() +from ansible.module_utils.basic import * syslog.openlog('ansible-%s' % os.path.basename(__file__)) -PIDFILE = os.path.expanduser("~/.accelerate.pid") # the chunk size to read and send, assuming mtu 1500 and # leaving room for base64 (+33%) encoding and header (100 bytes) @@ -107,6 +119,9 @@ def log(msg, cap=0): if DEBUG_LEVEL >= cap: syslog.syslog(syslog.LOG_NOTICE|syslog.LOG_DAEMON, msg) +def v(msg): + log(msg, cap=1) + def vv(msg): log(msg, cap=2) @@ -116,16 +131,6 @@ def vvv(msg): def vvvv(msg): log(msg, cap=4) -if os.path.exists(PIDFILE): - try: - data = int(open(PIDFILE).read()) - try: - os.kill(data, signal.SIGKILL) - except OSError: - pass - except ValueError: - pass - os.unlink(PIDFILE) HAS_KEYCZAR = False try: @@ -134,10 +139,26 @@ try: except ImportError: pass +SOCKET_FILE = os.path.join(get_module_path(), '.ansible-accelerate', ".local.socket") + +def get_pid_location(module): + """ + Try to find a pid directory in the common locations, falling + back to the user's home directory if no others exist + """ + for dir in ['/var/run', '/var/lib/run', '/run', os.path.expanduser("~/")]: + try: + if os.path.isdir(dir) and os.access(dir, os.R_OK|os.W_OK): + return os.path.join(dir, '.accelerate.pid') + except: + pass + module.fail_json(msg="couldn't find any valid directory to use for the accelerate pid file") + + # NOTE: this shares a fair amount of code in common with async_wrapper, if async_wrapper were a new module we could move # this into utils.module_common and probably should anyway -def daemonize_self(module, password, port, minutes): +def daemonize_self(module, password, port, minutes, pid_file): # daemonizing code: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012 try: pid = os.fork() @@ -158,11 +179,11 @@ def daemonize_self(module, password, port, minutes): try: pid = os.fork() if pid > 0: - log("daemon pid %s, writing %s" % (pid, PIDFILE)) - pid_file = open(PIDFILE, "w") + log("daemon pid %s, writing %s" % (pid, pid_file)) + pid_file = open(pid_file, "w") pid_file.write("%s" % pid) pid_file.close() - vvv("pidfile written") + vvv("pid file written") sys.exit(0) except OSError, e: log("fork #2 failed: %d (%s)" % (e.errno, e.strerror)) @@ -174,8 +195,85 @@ def daemonize_self(module, password, port, minutes): os.dup2(dev_null.fileno(), sys.stderr.fileno()) log("daemonizing successful") -class ThreadWithReturnValue(Thread): +class LocalSocketThread(Thread): + server = None + terminated = False + + def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, Verbose=None): + self.server = kwargs.get('server') + Thread.__init__(self, group, target, name, args, kwargs, Verbose) + + def run(self): + try: + if os.path.exists(SOCKET_FILE): + os.remove(SOCKET_FILE) + else: + dir = os.path.dirname(SOCKET_FILE) + if os.path.exists(dir): + if not os.path.isdir(dir): + log("The socket file path (%s) exists, but is not a directory. 
No local connections will be available" % dir)
+                    return
+                else:
+                    # make sure the directory is accessible only to this
+                    # user, as socket files derive their permissions from
+                    # the directory that contains them
+                    os.chmod(dir, 0700)
+            elif not os.path.exists(dir):
+                os.makedirs(dir, 0700)
+        except OSError:
+            pass
+        self.s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+        self.s.bind(SOCKET_FILE)
+        self.s.listen(5)
+        while not self.terminated:
+            try:
+                conn, addr = self.s.accept()
+                vv("received local connection")
+                data = ""
+                while "\n" not in data:
+                    data += conn.recv(2048)
+                try:
+                    new_key = AesKey.Read(data.strip())
+                    found = False
+                    for key in self.server.key_list:
+                        try:
+                            # probe for an equivalent key by round-tripping a test string
+                            new_key.Decrypt(key.Encrypt("foo"))
+                            found = True
+                            break
+                        except:
+                            pass
+                    if not found:
+                        vv("adding new key to the key list")
+                        self.server.key_list.append(new_key)
+                        conn.sendall("OK\n")
+                    else:
+                        vv("key already exists in the key list, ignoring")
+                        conn.sendall("EXISTS\n")
+
+                    # update the last event time so the server doesn't
+                    # shut down sooner than expected for new clients
+                    try:
+                        self.server.last_event_lock.acquire()
+                        self.server.last_event = datetime.now()
+                    finally:
+                        self.server.last_event_lock.release()
+                except Exception, e:
+                    vv("key loaded locally was invalid, ignoring (%s)" % e)
+                    conn.sendall("BADKEY\n")
+                finally:
+                    try:
+                        conn.close()
+                    except:
+                        pass
+            except:
+                pass
+
+    def terminate(self):
+        self.terminated = True
+        self.s.shutdown(socket.SHUT_RDWR)
+        self.s.close()
+
+class ThreadWithReturnValue(Thread):
     def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, Verbose=None):
         Thread.__init__(self, group, target, name, args, kwargs, Verbose)
         self._return = None
@@ -190,24 +288,41 @@ class ThreadedTCPServer(SocketServer.ThreadingTCPServer):
-    def __init__(self, server_address, RequestHandlerClass, module, password, timeout):
+    key_list = []
+    last_event = datetime.now()
+    last_event_lock = Lock()
+
+    def __init__(self, server_address, RequestHandlerClass, module, password, timeout, use_ipv6=False):
         self.module = module
-        self.key = AesKey.Read(password)
+        self.key_list.append(AesKey.Read(password))
         self.allow_reuse_address = True
         self.timeout = timeout
-        SocketServer.ThreadingTCPServer.__init__(self, server_address, RequestHandlerClass)
 
-class ThreadedTCPV6Server(SocketServer.ThreadingTCPServer):
-    def __init__(self, server_address, RequestHandlerClass, module, password, timeout):
-        self.module = module
-        self.address_family = socket.AF_INET6
-        self.key = AesKey.Read(password)
-        self.allow_reuse_address = True
-        self.timeout = timeout
+        if use_ipv6:
+            self.address_family = socket.AF_INET6
+
+        if self.module.params.get('multi_key', False):
+            vv("starting thread to handle local connections for multiple keys")
+            self.local_thread = LocalSocketThread(kwargs=dict(server=self))
+            self.local_thread.start()
+
         SocketServer.ThreadingTCPServer.__init__(self, server_address, RequestHandlerClass)
 
+    def shutdown(self):
+        self.local_thread.terminate()
+        self.running = False
+        SocketServer.ThreadingTCPServer.shutdown(self)
+
 class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler):
+    # the key to use for this connection
+    active_key = None
+
     def send_data(self, data):
+        try:
+            self.server.last_event_lock.acquire()
+            self.server.last_event = datetime.now()
+        finally:
+            self.server.last_event_lock.release()
+
         packed_len = struct.pack('!Q', len(data))
         return self.request.sendall(packed_len + data)
 
@@ -216,23 +331,40 @@ class
ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): data = "" vvvv("in recv_data(), waiting for the header") while len(data) < header_len: - d = self.request.recv(header_len - len(data)) - if not d: - vvv("received nothing, bailing out") + try: + d = self.request.recv(header_len - len(data)) + if not d: + vvv("received nothing, bailing out") + return None + data += d + except: + # probably got a connection reset + vvvv("exception received while waiting for recv(), returning None") return None - data += d vvvv("in recv_data(), got the header, unpacking") data_len = struct.unpack('!Q',data[:header_len])[0] data = data[header_len:] vvvv("data received so far (expecting %d): %d" % (data_len,len(data))) while len(data) < data_len: - d = self.request.recv(data_len - len(data)) - if not d: - vvv("received nothing, bailing out") + try: + d = self.request.recv(data_len - len(data)) + if not d: + vvv("received nothing, bailing out") + return None + data += d + vvvv("data received so far (expecting %d): %d" % (data_len,len(data))) + except: + # probably got a connection reset + vvvv("exception received while waiting for recv(), returning None") return None - data += d - vvvv("data received so far (expecting %d): %d" % (data_len,len(data))) vvvv("received all of the data, returning") + + try: + self.server.last_event_lock.acquire() + self.server.last_event = datetime.now() + finally: + self.server.last_event_lock.release() + return data def handle(self): @@ -243,18 +375,26 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): if not data: vvvv("received nothing back from recv_data(), breaking out") break - try: - vvvv("got data, decrypting") - data = self.server.key.Decrypt(data) - vvvv("decryption done") - except: - vv("bad decrypt, skipping...") - data2 = json.dumps(dict(rc=1)) - data2 = self.server.key.Encrypt(data2) - self.send_data(data2) - return + vvvv("got data, decrypting") + if not self.active_key: + for key in self.server.key_list: + try: + data = key.Decrypt(data) + self.active_key = key + break + except: + pass + else: + vv("bad decrypt, exiting the connection handler") + return + else: + try: + data = self.active_key.Decrypt(data) + except: + vv("bad decrypt, exiting the connection handler") + return - vvvv("loading json from the data") + vvvv("decryption done, loading json from the data") data = json.loads(data) mode = data['mode'] @@ -270,7 +410,7 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): last_pong = datetime.now() vvvv("command still running, sending keepalive packet") data2 = json.dumps(dict(pong=True)) - data2 = self.server.key.Encrypt(data2) + data2 = self.active_key.Encrypt(data2) self.send_data(data2) time.sleep(0.1) response = twrv._return @@ -286,8 +426,9 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): response = self.validate_user(data) vvvv("response result is %s" % str(response)) - data2 = json.dumps(response) - data2 = self.server.key.Encrypt(data2) + json_response = json.dumps(response) + vvvv("dumped json is %s" % json_response) + data2 = self.active_key.Encrypt(json_response) vvvv("sending the response back to the controller") self.send_data(data2) vvvv("done sending the response") @@ -299,9 +440,10 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): tb = traceback.format_exc() log("encountered an unhandled exception in the handle() function") log("error was:\n%s" % tb) - data2 = json.dumps(dict(rc=1, failed=True, msg="unhandled error in the handle() function")) - data2 = 
self.server.key.Encrypt(data2) - self.send_data(data2) + if self.active_key: + data2 = json.dumps(dict(rc=1, failed=True, msg="unhandled error in the handle() function")) + data2 = self.active_key.Encrypt(data2) + self.send_data(data2) def validate_user(self, data): if 'username' not in data: @@ -329,11 +471,15 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): return dict(failed=True, msg='internal error: cmd is required') if 'tmp_path' not in data: return dict(failed=True, msg='internal error: tmp_path is required') - if 'executable' not in data: - return dict(failed=True, msg='internal error: executable is required') vvvv("executing: %s" % data['cmd']) - rc, stdout, stderr = self.server.module.run_command(data['cmd'], executable=data['executable'], close_fds=True) + + use_unsafe_shell = False + executable = data.get('executable') + if executable: + use_unsafe_shell = True + + rc, stdout, stderr = self.server.module.run_command(data['cmd'], executable=executable, use_unsafe_shell=use_unsafe_shell) if stdout is None: stdout = '' if stderr is None: @@ -358,7 +504,7 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): last = True data = dict(data=base64.b64encode(data), last=last) data = json.dumps(data) - data = self.server.key.Encrypt(data) + data = self.active_key.Encrypt(data) if self.send_data(data): return dict(failed=True, stderr="failed to send data") @@ -367,7 +513,7 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): if not response: log("failed to get a response, aborting") return dict(failed=True, stderr="Failed to get a response from %s" % self.host) - response = self.server.key.Decrypt(response) + response = self.active_key.Decrypt(response) response = json.loads(response) if response.get('failed',False): @@ -390,8 +536,14 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): final_path = None if 'user' in data and data.get('user') != getpass.getuser(): - vv("the target user doesn't match this user, we'll move the file into place via sudo") - (fd,out_path) = tempfile.mkstemp(prefix='ansible.', dir=os.path.expanduser('~/.ansible/tmp/')) + vvv("the target user doesn't match this user, we'll move the file into place via sudo") + tmp_path = os.path.expanduser('~/.ansible/tmp/') + if not os.path.exists(tmp_path): + try: + os.makedirs(tmp_path, 0700) + except: + return dict(failed=True, msg='could not create a temporary directory at %s' % tmp_path) + (fd,out_path) = tempfile.mkstemp(prefix='ansible.', dir=tmp_path) out_fd = os.fdopen(fd, 'w', 0) final_path = data['out_path'] else: @@ -405,14 +557,14 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): bytes += len(out) out_fd.write(out) response = json.dumps(dict()) - response = self.server.key.Encrypt(response) + response = self.active_key.Encrypt(response) self.send_data(response) if data['last']: break data = self.recv_data() if not data: raise "" - data = self.server.key.Decrypt(data) + data = self.active_key.Decrypt(data) data = json.loads(data) except: out_fd.close() @@ -428,27 +580,45 @@ class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): self.server.module.atomic_move(out_path, final_path) return dict() -def daemonize(module, password, port, timeout, minutes, ipv6): +def daemonize(module, password, port, timeout, minutes, use_ipv6, pid_file): try: - daemonize_self(module, password, port, minutes) + daemonize_self(module, password, port, minutes, pid_file) - def catcher(signum, _): - module.exit_json(msg='timer expired') + def 
timer_handler(signum, _): + try: + server.last_event_lock.acquire() + td = datetime.now() - server.last_event + # older python timedelta objects don't have total_seconds(), + # so we use the formula from the docs to calculate it + total_seconds = (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6 + if total_seconds >= minutes * 60: + log("server has been idle longer than the timeout, shutting down") + server.running = False + server.shutdown() + else: + # reschedule the check + vvvv("daemon idle for %d seconds (timeout=%d)" % (total_seconds,minutes*60)) + signal.alarm(30) + except: + pass + finally: + server.last_event_lock.release() - signal.signal(signal.SIGALRM, catcher) - signal.setitimer(signal.ITIMER_REAL, 60 * minutes) + signal.signal(signal.SIGALRM, timer_handler) + signal.alarm(30) tries = 5 while tries > 0: try: - if ipv6: - server = ThreadedTCPV6Server(("::", port), ThreadedTCPRequestHandler, module, password, timeout) + if use_ipv6: + address = ("::", port) else: - server = ThreadedTCPServer(("0.0.0.0", port), ThreadedTCPRequestHandler, module, password, timeout) + address = ("0.0.0.0", port) + server = ThreadedTCPServer(address, ThreadedTCPRequestHandler, module, password, timeout, use_ipv6=use_ipv6) server.allow_reuse_address = True break - except: - vv("Failed to create the TCP server (tries left = %d)" % tries) + except Exception, e: + vv("Failed to create the TCP server (tries left = %d) (error: %s) " % (tries,e)) tries -= 1 time.sleep(0.2) @@ -456,8 +626,20 @@ def daemonize(module, password, port, timeout, minutes, ipv6): vv("Maximum number of attempts to create the TCP server reached, bailing out") raise Exception("max # of attempts to serve reached") - vv("serving!") - server.serve_forever(poll_interval=0.1) + # run the server in a separate thread to make signal handling work + server_thread = Thread(target=server.serve_forever, kwargs=dict(poll_interval=0.1)) + server_thread.start() + server.running = True + + v("serving!") + while server.running: + time.sleep(1) + + # wait for the thread to exit fully + server_thread.join() + + v("server thread terminated, exiting!") + sys.exit(0) except Exception, e: tb = traceback.format_exc() log("exception caught, exiting accelerated mode: %s\n%s" % (e, tb)) @@ -469,6 +651,7 @@ def main(): argument_spec = dict( port=dict(required=False, default=5099), ipv6=dict(required=False, default=False, type='bool'), + multi_key=dict(required=False, default=False, type='bool'), timeout=dict(required=False, default=300), password=dict(required=True), minutes=dict(required=False, default=30), @@ -483,14 +666,62 @@ def main(): minutes = int(module.params['minutes']) debug = int(module.params['debug']) ipv6 = module.params['ipv6'] + multi_key = module.params['multi_key'] if not HAS_KEYCZAR: module.fail_json(msg="keyczar is not installed (on the remote side)") DEBUG_LEVEL=debug + pid_file = get_pid_location(module) - daemonize(module, password, port, timeout, minutes, ipv6) + daemon_pid = None + daemon_running = False + if os.path.exists(pid_file): + try: + daemon_pid = int(open(pid_file).read()) + try: + # sending signal 0 doesn't do anything to the + # process, other than tell the calling program + # whether other signals can be sent + os.kill(daemon_pid, 0) + except OSError, e: + if e.errno == errno.EPERM: + # no permissions means the pid is probably + # running, but as a different user, so fail + module.fail_json(msg="the accelerate daemon appears to be running as a different user that this user cannot access (pid=%d)" % 
daemon_pid)
+            else:
+                daemon_running = True
+        except ValueError:
+            # invalid pid file, unlink it - otherwise we don't care
+            try:
+                os.unlink(pid_file)
+            except:
+                pass
+
+    if daemon_running and multi_key:
+        # try to connect to the file socket for the daemon if it exists
+        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+        try:
+            s.connect(SOCKET_FILE)
+            s.sendall(password + '\n')
+            data = ""
+            while '\n' not in data:
+                data += s.recv(2048)
+            res = data.strip()
+        except:
+            module.fail_json(msg="failed to connect to the local socket file")
+        finally:
+            try:
+                s.close()
+            except:
+                pass
+
+        if res in ("OK", "EXISTS"):
+            module.exit_json(msg="transferred new key to the existing daemon")
+        else:
+            module.fail_json(msg="could not transfer new key: %s" % data.strip())
+    else:
+        # try to start up the daemon
+        daemonize(module, password, port, timeout, minutes, ipv6, pid_file)
 
-# import module snippets
-from ansible.module_utils.basic import *
 main()
diff --git a/library/utilities/wait_for b/library/utilities/wait_for
index faf821e2749..3a381f06944 100644
--- a/library/utilities/wait_for
+++ b/library/utilities/wait_for
@@ -33,9 +33,11 @@ description:
     are not immediately available after their init scripts return
     - which is true of certain Java application servers. It is
     also useful when starting guests with the M(virt) module and
-    needing to pause until they are ready. This module can
-    also be used to wait for a file to be available on the filesystem
-    or with a regex match a string to be present in a file.
+    needing to pause until they are ready.
+  - This module can also be used to wait, with a regex match, for a string to be present in a file.
+  - In 1.6 and later, this module can
+    also be used to wait for a file to be available or absent on the
+    filesystem.
version_added: "0.7" options: host: @@ -60,10 +62,10 @@ options: required: false state: description: - - either C(present), C(started), or C(stopped) + - either C(present), C(started), or C(stopped), C(absent) - When checking a port C(started) will ensure the port is open, C(stopped) will check that it is closed - - When checking for a file or a search string C(present) or C(started) will ensure that the file or string is present before continuing - choices: [ "present", "started", "stopped" ] + - When checking for a file or a search string C(present) or C(started) will ensure that the file or string is present before continuing, C(absent) will check that file is absent or removed + choices: [ "present", "started", "stopped", "absent" ] default: "started" path: version_added: "1.4" @@ -78,7 +80,7 @@ options: notes: [] requirements: [] -author: Jeroen Hoekx, John Jarvis +author: Jeroen Hoekx, John Jarvis, Andrii Radyk ''' EXAMPLES = ''' @@ -92,6 +94,12 @@ EXAMPLES = ''' # wait until the string "completed" is in the file /tmp/foo before continuing - wait_for: path=/tmp/foo search_regex=completed +# wait until the lock file is removed +- wait_for: path=/var/lock/file.lock state=absent + +# wait until the process is finished and pid was destroyed +- wait_for: path=/proc/3466/status state=absent + ''' def main(): @@ -105,7 +113,7 @@ def main(): port=dict(default=None), path=dict(default=None), search_regex=dict(default=None), - state=dict(default='started', choices=['started', 'stopped', 'present']), + state=dict(default='started', choices=['started', 'stopped', 'present', 'absent']), ), ) @@ -133,23 +141,35 @@ def main(): if delay: time.sleep(delay) - if state == 'stopped': + if state in [ 'stopped', 'absent' ]: ### first wait for the stop condition end = start + datetime.timedelta(seconds=timeout) while datetime.datetime.now() < end: - s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - s.settimeout(connect_timeout) - try: - s.connect( (host, port) ) - s.shutdown(socket.SHUT_RDWR) - s.close() - time.sleep(1) - except: - break + if path: + try: + f = open(path) + f.close() + time.sleep(1) + pass + except IOError: + break + elif port: + s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + s.settimeout(connect_timeout) + try: + s.connect( (host, port) ) + s.shutdown(socket.SHUT_RDWR) + s.close() + time.sleep(1) + except: + break else: elapsed = datetime.datetime.now() - start - module.fail_json(msg="Timeout when waiting for %s:%s to stop." % (host, port), elapsed=elapsed.seconds) + if port: + module.fail_json(msg="Timeout when waiting for %s:%s to stop." % (host, port), elapsed=elapsed.seconds) + elif path: + module.fail_json(msg="Timeout when waiting for %s to be absent." % (path), elapsed=elapsed.seconds) elif state in ['started', 'present']: ### wait for start condition diff --git a/library/web_infrastructure/apache2_module b/library/web_infrastructure/apache2_module new file mode 100644 index 00000000000..73a92f40434 --- /dev/null +++ b/library/web_infrastructure/apache2_module @@ -0,0 +1,98 @@ +#!/usr/bin/python +#coding: utf-8 -*- + +# (c) 2013, Christian Berendt +# +# This module is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. 
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software. If not, see <http://www.gnu.org/licenses/>.
+
+DOCUMENTATION = '''
+---
+module: apache2_module
+version_added: 1.6
+short_description: enables/disables a module of the Apache2 webserver
+description:
+   - Enables or disables a specified module of the Apache2 webserver.
+options:
+   name:
+     description:
+        - name of the module to enable/disable
+     required: true
+   state:
+     description:
+        - indicate the desired state of the resource
+     choices: ['present', 'absent']
+     default: present
+
+'''
+
+EXAMPLES = '''
+# enables the Apache2 module "wsgi"
+- apache2_module: state=present name=wsgi
+
+# disables the Apache2 module "wsgi"
+- apache2_module: state=absent name=wsgi
+'''
+
+def _module_is_enabled(module):
+    name = module.params['name']
+    a2enmod_binary = module.get_bin_path("a2enmod")
+    result, stdout, stderr = module.run_command("%s -q %s" % (a2enmod_binary, name))
+    return result == 0
+
+def _module_is_disabled(module):
+    return _module_is_enabled(module) == False
+
+def _disable_module(module):
+    name = module.params['name']
+
+    if _module_is_disabled(module):
+        module.exit_json(changed = False, result = "Success")
+
+    a2dismod_binary = module.get_bin_path("a2dismod")
+    result, stdout, stderr = module.run_command("%s %s" % (a2dismod_binary, name))
+    if result != 0:
+        module.fail_json(msg="Failed to disable module %s: %s" % (name, stdout))
+
+    module.exit_json(changed = True, result = "Disabled")
+
+def _enable_module(module):
+    name = module.params['name']
+
+    if _module_is_enabled(module):
+        module.exit_json(changed = False, result = "Success")
+
+    a2enmod_binary = module.get_bin_path("a2enmod")
+    result, stdout, stderr = module.run_command("%s %s" % (a2enmod_binary, name))
+    if result != 0:
+        module.fail_json(msg="Failed to enable module %s: %s" % (name, stdout))
+
+    module.exit_json(changed = True, result = "Enabled")
+
+def main():
+    module = AnsibleModule(
+        argument_spec = dict(
+            name = dict(required=True),
+            state = dict(default='present', choices=['absent', 'present'])
+        ),
+    )
+
+    if module.params['state'] == 'present':
+        _enable_module(module)
+
+    if module.params['state'] == 'absent':
+        _disable_module(module)
+
+# import module snippets
+from ansible.module_utils.basic import *
+main()
+
diff --git a/library/web_infrastructure/django_manage b/library/web_infrastructure/django_manage
index 68eb92c1bfe..42ce3781fda 100644
--- a/library/web_infrastructure/django_manage
+++ b/library/web_infrastructure/django_manage
@@ -74,14 +74,17 @@ options:
     description:
       - Will skip over out-of-order missing migrations, you can only use this parameter with I(migrate)
     required: false
+    version_added: "1.3"
   merge:
     description:
      - Will run out-of-order or missing migrations as they are not rollback migrations, you can only use this parameter with 'migrate' command
     required: false
+    version_added: "1.3"
   link:
     description:
      - Will create links to the files instead of copying them, you can only use this parameter with 'collectstatic' command
     required: false
+    version_added: "1.3"
 notes:
   - I(virtualenv) (U(http://www.virtualenv.org)) must be installed on the remote host if the virtualenv parameter is specified.
   - This module will create a virtualenv if the virtualenv parameter is specified and a virtualenv does not already exist at the given location.
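An aside before the next hunk: the django_manage change below swaps a process-wide os.chdir for run_command's cwd argument. The benefit is the same one subprocess gives in plain Python, where the directory change is scoped to the child process alone. A short sketch (the path and command are made up):

    import subprocess

    # Scoped: only the child sees /srv/app as its working directory.
    subprocess.call(['python', 'manage.py', 'migrate'], cwd='/srv/app')

    # By contrast, os.chdir('/srv/app') would silently redirect every later
    # relative-path operation in the calling process as well.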
@@ -203,13 +206,13 @@ def main():
             apps = dict(default=None, required=False),
             cache_table = dict(default=None, required=False),
             database = dict(default=None, required=False),
-            failfast = dict(default='no', required=False, choices=BOOLEANS, aliases=['fail_fast']),
+            failfast = dict(default='no', required=False, type='bool', aliases=['fail_fast']),
             fixtures = dict(default=None, required=False),
             liveserver = dict(default=None, required=False, aliases=['live_server']),
             testrunner = dict(default=None, required=False, aliases=['test_runner']),
-            skip = dict(default=None, required=False, choices=BOOLEANS),
-            merge = dict(default=None, required=False, choices=BOOLEANS),
-            link = dict(default=None, required=False, choices=BOOLEANS),
+            skip = dict(default=None, required=False, type='bool'),
+            merge = dict(default=None, required=False, type='bool'),
+            link = dict(default=None, required=False, type='bool'),
         ),
     )
 
@@ -232,7 +235,6 @@ def main():
 
     _ensure_virtualenv(module)
 
-    os.chdir(app_path)
     cmd = "python manage.py %s" % (command, )
 
     if command in noinput_commands:
@@ -251,7 +253,7 @@ def main():
         if module.params[param]:
             cmd = '%s %s' % (cmd, module.params[param])
 
-    rc, out, err = module.run_command(cmd)
+    rc, out, err = module.run_command(cmd, cwd=app_path)
     if rc != 0:
         if command == 'createcachetable' and 'table' in err and 'already exists' in err:
             out = 'Already exists.'
diff --git a/library/web_infrastructure/jira b/library/web_infrastructure/jira
new file mode 100644
index 00000000000..950fc3dbfcf
--- /dev/null
+++ b/library/web_infrastructure/jira
@@ -0,0 +1,347 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# (c) 2014, Steve Smith
+# Atlassian open-source approval reference OSR-76.
+#
+# This file is part of Ansible.
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+#
+
+DOCUMENTATION = """
+module: jira
+version_added: "1.6"
+short_description: create and modify issues in a JIRA instance
+description:
+  - Create and modify issues in a JIRA instance.
+
+options:
+  uri:
+    required: true
+    description:
+      - Base URI for the JIRA instance
+
+  operation:
+    required: true
+    aliases: [ command ]
+    choices: [ create, comment, edit, fetch, transition ]
+    description:
+      - The operation to perform.
+
+  username:
+    required: true
+    description:
+      - The username to log in with.
+
+  password:
+    required: true
+    description:
+      - The password to log in with.
+
+  project:
+    aliases: [ prj ]
+    required: false
+    description:
+      - The project for this operation. Required for issue creation.
+
+  summary:
+    required: false
+    description:
+      - The issue summary, where appropriate.
+
+  description:
+    required: false
+    description:
+      - The issue description, where appropriate.
+
+  issuetype:
+    required: false
+    description:
+      - The issue type, for issue creation.
+
+  issue:
+    required: false
+    description:
+      - An existing issue key to operate on.
+
+  comment:
+    required: false
+    description:
+      - The comment text to add.
+
+  status:
+    required: false
+    description:
+      - The desired status; only relevant for the transition operation.
+
+  assignee:
+    required: false
+    description:
+      - Sets the assignee on create or transition operations. Note that not all transitions will allow this.
+
+  fields:
+    required: false
+    description:
+      - This is a free-form data structure that can contain arbitrary data. This is passed directly to the JIRA REST API (possibly after merging with other required data, as when passed to create). See examples for more information, and the JIRA REST API for the structure required for various fields.
+
+notes:
+  - "Currently this only works with basic-auth."
+
+author: Steve Smith
+"""
+
+EXAMPLES = """
+# Create a new issue and add a comment to it:
+- name: Create an issue
+  jira: uri={{server}} username={{user}} password={{pass}}
+        project=ANS operation=create
+        summary="Example Issue" description="Created using Ansible" issuetype=Task
+  register: issue
+
+- name: Comment on issue
+  jira: uri={{server}} username={{user}} password={{pass}}
+        issue={{issue.meta.key}} operation=comment
+        comment="A comment added by Ansible"
+
+# Assign an existing issue using edit
+- name: Assign an issue using free-form fields
+  jira: uri={{server}} username={{user}} password={{pass}}
+        issue={{issue.meta.key}} operation=edit
+        assignee=ssmith
+
+# Create an issue with an existing assignee
+- name: Create an assigned issue
+  jira: uri={{server}} username={{user}} password={{pass}}
+        project=ANS operation=create
+        summary="Assigned issue" description="Created and assigned using Ansible"
+        issuetype=Task assignee=ssmith
+
+# Edit an issue using free-form fields
+- name: Set the labels on an issue using free-form fields
+  jira: uri={{server}} username={{user}} password={{pass}}
+        issue={{issue.meta.key}} operation=edit
+  args: { fields: {labels: ["autocreated", "ansible"]}}
+
+- name: Set the labels on an issue, YAML version
+  jira: uri={{server}} username={{user}} password={{pass}}
+        issue={{issue.meta.key}} operation=edit
+  args:
+    fields:
+      labels:
+        - "autocreated"
+        - "ansible"
+        - "yaml"
+
+# Retrieve metadata for an issue and use it to create an account
+- name: Get an issue
+  jira: uri={{server}} username={{user}} password={{pass}}
+        project=ANS operation=fetch issue="ANS-63"
+  register: issue
+
+- name: Create a unix account for the reporter
+  sudo: true
+  user: name="{{issue.meta.fields.creator.name}}" comment="{{issue.meta.fields.creator.displayName}}"
+
+# Transition an issue by target status
+- name: Close the issue
+  jira: uri={{server}} username={{user}} password={{pass}}
+        issue={{issue.meta.key}} operation=transition status="Done"
+"""
+
+import json
+import base64
+
+def request(url, user, passwd, data=None, method=None):
+    if data:
+        data = json.dumps(data)
+
+    # NOTE: fetch_url uses a password manager, which follows the
+    # standard request-then-challenge basic-auth semantics. However as
+    # JIRA allows some unauthorized operations it doesn't necessarily
+    # send the challenge, so the request occurs as the anonymous user,
+    # resulting in unexpected results. To work around this we manually
+    # inject the basic-auth header up-front to ensure that JIRA treats
+    # the requests as authorized for this user.
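An aside, not part of the module: the same preemptive Basic-auth technique in bare Python 2 stdlib terms, against a hypothetical endpoint. fetch_url wraps this generation's urllib2, so the mechanics are equivalent:

    import base64
    import urllib2

    req = urllib2.Request('https://jira.example.com/rest/api/2/myself')
    creds = base64.encodestring('user:secret').replace('\n', '')  # drop trailing newline
    req.add_header('Authorization', 'Basic %s' % creds)
    req.add_header('Content-Type', 'application/json')
    print urllib2.urlopen(req).read()  # authenticated on the first request, no challenge needed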
+    auth = base64.encodestring('%s:%s' % (user, passwd)).replace('\n', '')
+    response, info = fetch_url(module, url, data=data, method=method,
+                               headers={'Content-Type':'application/json',
+                                        'Authorization':"Basic %s" % auth})
+
+    if info['status'] not in (200, 204):
+        module.fail_json(msg=info['msg'])
+
+    body = response.read()
+
+    if body:
+        return json.loads(body)
+    else:
+        return {}
+
+def post(url, user, passwd, data):
+    return request(url, user, passwd, data=data, method='POST')
+
+def put(url, user, passwd, data):
+    return request(url, user, passwd, data=data, method='PUT')
+
+def get(url, user, passwd):
+    return request(url, user, passwd)
+
+
+def create(restbase, user, passwd, params):
+    createfields = {
+        'project': { 'key': params['project'] },
+        'summary': params['summary'],
+        'description': params['description'],
+        'issuetype': { 'name': params['issuetype'] }}
+
+    # Merge in any additional or overridden fields
+    if params['fields']:
+        createfields.update(params['fields'])
+
+    data = {'fields': createfields}
+
+    url = restbase + '/issue/'
+
+    ret = post(url, user, passwd, data)
+
+    return ret
+
+
+def comment(restbase, user, passwd, params):
+    data = {
+        'body': params['comment']
+    }
+
+    url = restbase + '/issue/' + params['issue'] + '/comment'
+
+    ret = post(url, user, passwd, data)
+
+    return ret
+
+
+def edit(restbase, user, passwd, params):
+    data = {
+        'fields': params['fields']
+    }
+
+    url = restbase + '/issue/' + params['issue']
+
+    ret = put(url, user, passwd, data)
+
+    return ret
+
+
+def fetch(restbase, user, passwd, params):
+    url = restbase + '/issue/' + params['issue']
+    ret = get(url, user, passwd)
+    return ret
+
+
+def transition(restbase, user, passwd, params):
+    # Find the transition id
+    turl = restbase + '/issue/' + params['issue'] + "/transitions"
+    tmeta = get(turl, user, passwd)
+
+    target = params['status']
+    tid = None
+    for t in tmeta['transitions']:
+        if t['name'] == target:
+            tid = t['id']
+            break
+
+    if not tid:
+        raise ValueError("Failed to find a valid transition for '%s'" % target)
+
+    # Perform it
+    url = restbase + '/issue/' + params['issue'] + "/transitions"
+    data = { 'transition': { "id" : tid },
+             'fields': params['fields']}
+
+    ret = post(url, user, passwd, data)
+
+    return ret
+
+
+# Some parameters are required depending on the operation:
+OP_REQUIRED = dict(create=['project', 'issuetype', 'summary', 'description'],
+                   comment=['issue', 'comment'],
+                   edit=[],
+                   fetch=['issue'],
+                   transition=['status'])
+
+def main():
+
+    global module
+    module = AnsibleModule(
+        argument_spec=dict(
+            uri=dict(required=True),
+            operation=dict(choices=['create', 'comment', 'edit', 'fetch', 'transition'],
+                           aliases=['command'], required=True),
+            username=dict(required=True),
+            password=dict(required=True),
+            project=dict(),
+            summary=dict(),
+            description=dict(),
+            issuetype=dict(),
+            issue=dict(aliases=['ticket']),
+            comment=dict(),
+            status=dict(),
+            assignee=dict(),
+            fields=dict(default={})
+        ),
+        supports_check_mode=False
+    )
+
+    op = module.params['operation']
+
+    # Check we have the necessary per-operation parameters
+    missing = []
+    for parm in OP_REQUIRED[op]:
+        if not module.params[parm]:
+            missing.append(parm)
+    if missing:
+        module.fail_json(msg="Operation %s requires the following missing parameters: %s" % (op, ",".join(missing)))
+
+    # Handle rest of parameters
+    uri = module.params['uri']
+    user = module.params['username']
+    passwd = module.params['password']
+    if module.params['assignee']:
+        module.params['fields']['assignee'] = { 'name':
module.params['assignee'] }
+
+    if not uri.endswith('/'):
+        uri = uri+'/'
+    restbase = uri + 'rest/api/2'
+
+    # Dispatch
+    try:
+
+        # Look up the corresponding method for this operation. This is
+        # safe as the AnsibleModule should remove any unknown operations.
+        thismod = sys.modules[__name__]
+        method = getattr(thismod, op)
+
+        ret = method(restbase, user, passwd, module.params)
+
+    except Exception as e:
+        return module.fail_json(msg=e.message)
+
+
+    module.exit_json(changed=True, meta=ret)
+
+
+from ansible.module_utils.basic import *
+from ansible.module_utils.urls import *
+main()
diff --git a/library/web_infrastructure/supervisorctl b/library/web_infrastructure/supervisorctl
index 564368af5f4..2d458169e76 100644
--- a/library/web_infrastructure/supervisorctl
+++ b/library/web_infrastructure/supervisorctl
@@ -23,70 +23,76 @@ import os
 DOCUMENTATION = '''
 ---
 module: supervisorctl
-short_description: Manage the state of a program or group of programs running via Supervisord
+short_description: Manage the state of a program or group of programs running via supervisord
 description:
-     - Manage the state of a program or group of programs running via I(Supervisord)
+     - Manage the state of a program or group of programs running via supervisord
 version_added: "0.7"
 options:
   name:
     description:
-      - The name of the I(supervisord) program/process to manage
+      - The name of the supervisord program or group to manage.
+      - The name will be taken as a group name when it ends with a colon I(:).
+      - Group support is only available in Ansible version 1.6 or later.
    required: true
    default: null
  config:
    description:
-      - configuration file path, passed as -c to supervisorctl
+      - The supervisor configuration file path
    required: false
    default: null
    version_added: "1.3"
  server_url:
    description:
-      - URL on which supervisord server is listening, passed as -s to supervisorctl
+      - URL on which supervisord server is listening
    required: false
    default: null
    version_added: "1.3"
  username:
    description:
-      - username to use for authentication with server, passed as -u to supervisorctl
+      - username to use for authentication
    required: false
    default: null
    version_added: "1.3"
  password:
    description:
-      - password to use for authentication with server, passed as -p to supervisorctl
+      - password to use for authentication
    required: false
    default: null
    version_added: "1.3"
  state:
    description:
-      - The state of service
+      - The desired state of program/group.
    required: true
    default: null
    choices: [ "present", "started", "stopped", "restarted" ]
  supervisorctl_path:
    description:
-      - Path to supervisorctl executable to use
+      - path to supervisorctl executable
    required: false
    default: null
    version_added: "1.4"
-requirements:
-  - supervisorctl
-requirements: [ ]
-author: Matt Wright
+notes:
+  - When C(state) = I(present), the module will call C(supervisorctl reread) then C(supervisorctl add) if the program/group does not exist.
+  - When C(state) = I(restarted), the module will call C(supervisorctl update) and then C(supervisorctl restart).
+requirements: [ "supervisorctl" ]
+author: Matt Wright, Aaron Wang
 '''
 
 EXAMPLES = '''
 # Manage the state of program to be in 'started' state.
 - supervisorctl: name=my_app state=started
 
+# Manage the state of program group to be in 'started' state.
+- supervisorctl: name='my_apps:' state=started
+
 # Restart my_app, reading supervisorctl configuration from a specified file.
- supervisorctl: name=my_app state=restarted config=/var/opt/my_project/supervisord.conf # Restart my_app, connecting to supervisord with credentials and server URL. - supervisorctl: name=my_app state=restarted username=test password=testpass server_url=http://localhost:9001 - ''' + def main(): arg_spec = dict( name=dict(required=True), @@ -101,6 +107,10 @@ def main(): module = AnsibleModule(argument_spec=arg_spec, supports_check_mode=True) name = module.params['name'] + is_group = False + if name.endswith(':'): + is_group = True + name = name.rstrip(':') state = module.params['state'] config = module.params.get('config') server_url = module.params.get('server_url') @@ -111,11 +121,12 @@ def main(): if supervisorctl_path: supervisorctl_path = os.path.expanduser(supervisorctl_path) if os.path.exists(supervisorctl_path) and module.is_executable(supervisorctl_path): - supervisorctl_args = [ supervisorctl_path ] + supervisorctl_args = [supervisorctl_path] else: - module.fail_json(msg="Provided path to supervisorctl does not exist or isn't executable: %s" % supervisorctl_path) + module.fail_json( + msg="Provided path to supervisorctl does not exist or isn't executable: %s" % supervisorctl_path) else: - supervisorctl_args = [ module.get_bin_path('supervisorctl', True) ] + supervisorctl_args = [module.get_bin_path('supervisorctl', True)] if config: supervisorctl_args.extend(['-c', os.path.expanduser(config)]) @@ -133,61 +144,76 @@ def main(): args.append(name) return module.run_command(args, **kwargs) - rc, out, err = run_supervisorctl('status') - present = name in out - - if state == 'present': - if not present: - if module.check_mode: - module.exit_json(changed=True) - run_supervisorctl('reread', check_rc=True) - rc, out, err = run_supervisorctl('add', name) - - if '%s: added process group' % name in out: - module.exit_json(changed=True, name=name, state=state) + def get_matched_processes(): + matched = [] + rc, out, err = run_supervisorctl('status') + for line in out.splitlines(): + # One status line may look like one of these two: + # process not in group: + # echo_date_lonely RUNNING pid 7680, uptime 13:22:18 + # process in group: + # echo_date_group:echo_date_00 RUNNING pid 7681, uptime 13:22:18 + fields = [field for field in line.split(' ') if field != ''] + process_name = fields[0] + status = fields[1] + + if is_group: + # If there is ':', this process must be in a group. 
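(Aside, not module code: a `supervisorctl status` line looks like `echo_date_group:echo_date_00 RUNNING pid 7681, uptime 13:22:18`, so after the whitespace split above the interesting pieces fall out as:

    fields = 'echo_date_group:echo_date_00 RUNNING pid 7681'.split()
    fields[0].split(':')[0]   # -> 'echo_date_group', the group key
    fields[1]                 # -> 'RUNNING', the status

which is exactly what the group check just below relies on.)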
+            if ':' in process_name:
+                group = process_name.split(':')[0]
+                if group != name:
+                    continue
+            else:
+                continue
         else:
-            module.fail_json(msg=out, name=name, state=state)
-
-        module.exit_json(changed=False, name=name, state=state)
+            if process_name != name:
+                continue
 
-    rc, out, err = run_supervisorctl('status', name)
-    running = 'RUNNING' in out or '(already running)' in out
+        matched.append((process_name, status))
+    return matched
 
-    if running and state == 'started':
-        module.exit_json(changed=False, name=name, state=state)
+    def take_action_on_processes(processes, status_filter, action, expected_result):
+        to_take_action_on = []
+        for process_name, status in processes:
+            if status_filter(status):
+                to_take_action_on.append(process_name)
 
-    if running and state == 'stopped':
+        if len(to_take_action_on) == 0:
+            module.exit_json(changed=False, name=name, state=state)
         if module.check_mode:
             module.exit_json(changed=True)
-        rc, out, err = run_supervisorctl('stop', name)
-
-        if '%s: stopped' % name in out:
-            module.exit_json(changed=True, name=name, state=state)
+        for process_name in to_take_action_on:
+            rc, out, err = run_supervisorctl(action, process_name)
+            if '%s: %s' % (process_name, expected_result) not in out:
+                module.fail_json(msg=out)
 
-        module.fail_json(msg=out)
+        module.exit_json(changed=True, name=name, state=state, affected=to_take_action_on)
 
-    elif state == 'restarted':
-        if module.check_mode:
-            module.exit_json(changed=True)
-        rc, out, err = run_supervisorctl('update', name)
-        rc, out, err = run_supervisorctl('restart', name)
+    if state == 'restarted':
+        rc, out, err = run_supervisorctl('update')
+        processes = get_matched_processes()
+        take_action_on_processes(processes, lambda s: True, 'restart', 'started')
 
-        if '%s: started' % name in out:
-            module.exit_json(changed=True, name=name, state=state)
+    processes = get_matched_processes()
 
-        module.fail_json(msg=out)
+    if state == 'present':
+        if len(processes) > 0:
+            module.exit_json(changed=False, name=name, state=state)
 
-    elif not running and state == 'started':
         if module.check_mode:
             module.exit_json(changed=True)
-        rc, out, err = run_supervisorctl('start',name)
-
-        if '%s: started' % name in out:
+        run_supervisorctl('reread', check_rc=True)
+        rc, out, err = run_supervisorctl('add', name)
+        if '%s: added process group' % name in out:
             module.exit_json(changed=True, name=name, state=state)
+        else:
+            module.fail_json(msg=out, name=name, state=state)
 
-        module.fail_json(msg=out)
+    if state == 'started':
+        take_action_on_processes(processes, lambda s: s != 'RUNNING', 'start', 'started')
 
-    module.exit_json(changed=False, name=name, state=state)
+    if state == 'stopped':
+        take_action_on_processes(processes, lambda s: s == 'RUNNING', 'stop', 'stopped')
 
 # import module snippets
 from ansible.module_utils.basic import *
diff --git a/packaging/arch/PKGBUILD b/packaging/arch/PKGBUILD
index 05c4cc0c835..edca7aaecd6 100644
--- a/packaging/arch/PKGBUILD
+++ b/packaging/arch/PKGBUILD
@@ -31,19 +31,19 @@ build() {
 package() {
   cd $pkgname
 
-  mkdir -p "$pkgdir/usr/share/ansible"
+  install -dm755 $pkgdir/usr/share/ansible
   cp -dpr --no-preserve=ownership ./library/* "$pkgdir/usr/share/ansible/"
   cp -dpr --no-preserve=ownership ./examples "$pkgdir/usr/share/ansible"
 
   python2 setup.py install -O1 --root="$pkgdir"
 
-  install -D examples/ansible.cfg "$pkgdir/etc/ansible/ansible.cfg"
+  install -Dm644 examples/ansible.cfg $pkgdir/etc/ansible/ansible.cfg
 
-  install -D README.md "$pkgdir/usr/share/doc/ansible/README.md"
-  install -D COPYING "$pkgdir/usr/share/doc/ansible/COPYING"
-  install -D CHANGELOG.md "$pkgdir/usr/share/doc/ansible/CHANGELOG.md"
+  install -Dm644 README.md $pkgdir/usr/share/doc/ansible/README.md
+  install -Dm644 COPYING $pkgdir/usr/share/doc/ansible/COPYING
+  install -Dm644 CHANGELOG.md $pkgdir/usr/share/doc/ansible/CHANGELOG.md
 
-  mkdir -p "$pkgdir/usr/share/man/man{1,3}"
+  install -dm755 ${pkgdir}/usr/share/man/man{1,3}
   cp -dpr --no-preserve=ownership docs/man/man1/*.1 "$pkgdir/usr/share/man/man1"
   cp -dpr --no-preserve=ownership docs/man/man3/*.3 "$pkgdir/usr/share/man/man3"
 }
diff --git a/packaging/debian/README.md b/packaging/debian/README.md
index efd8677f400..9aa54060bb8 100644
--- a/packaging/debian/README.md
+++ b/packaging/debian/README.md
@@ -11,4 +11,9 @@ To create an Ansible DEB package:
 
 The debian package file will be placed in the `../` directory. This can then be added to an APT repository or installed with `dpkg -i `.
 
-Note that `dpkg -i` does not resolve dependencies
+Note that `dpkg -i` does not resolve dependencies.
+
+To install the Ansible DEB package and resolve dependencies:
+
+    sudo dpkg -i 
+    sudo apt-get -fy install
\ No newline at end of file
diff --git a/packaging/debian/changelog b/packaging/debian/changelog
index 9a6fb7c58ca..446287cd52b 100644
--- a/packaging/debian/changelog
+++ b/packaging/debian/changelog
@@ -4,12 +4,33 @@ ansible (1.6) unstable; urgency=low
 
 -- Michael DeHaan  Fri, 28 February 2014 15:00:03 -0500
 
+ansible (1.5.3) unstable; urgency=low
+
+  * 1.5.3 release
+
+ -- Michael DeHaan  Thu, 13 March 2014 08:46:00 -0500
+
+ansible (1.5.2) unstable; urgency=low
+
+  * 1.5.2 release
+
+ -- Michael DeHaan  Tue, 11 March 2014 08:46:00 -0500
+
+ansible (1.5.1) unstable; urgency=low
+
+  * 1.5.1 release
+
+ -- Michael DeHaan  Mon, 10 March 2014 17:33:44 -0500
 
 ansible (1.5) unstable; urgency=low
 
   * 1.5 release
 
 -- Michael DeHaan  Fri, 28 February 2014 15:00:02 -0500
 
 ansible (1.4.5) unstable; urgency=low
diff --git a/packaging/rpm/ansible.spec b/packaging/rpm/ansible.spec
index d3783c1e9ce..298450d9647 100644
--- a/packaging/rpm/ansible.spec
+++ b/packaging/rpm/ansible.spec
@@ -102,9 +102,21 @@ rm -rf %{buildroot}
 
 %changelog
-* Thu Feb 28 2014 Michael DeHaan - 1.6-0
+* Thu Mar 14 2014 Michael DeHaan - 1.6-0
 * (PENDING)
 
+* Thu Mar 13 2014 Michael DeHaan - 1.5.3
+- Release 1.5.3
+
+* Tue Mar 11 2014 Michael DeHaan - 1.5.2
+- Release 1.5.2
+
+* Mon Mar 10 2014 Michael DeHaan - 1.5.1
+- Release 1.5.1
+
+* Fri Feb 28 2014 Michael DeHaan - 1.5.0
+- Release 1.5.0
+
 * Thu Feb 28 2014 Michael DeHaan - 1.5-0
 * Release 1.5
diff --git a/plugins/callbacks/hipchat.py b/plugins/callbacks/hipchat.py
new file mode 100644
index 00000000000..a5acf9194ea
--- /dev/null
+++ b/plugins/callbacks/hipchat.py
@@ -0,0 +1,206 @@
+# (C) 2014, Matt Martz
+
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+ +import os +import urllib +import urllib2 + +from ansible import utils + +try: + import prettytable + HAS_PRETTYTABLE = True +except ImportError: + HAS_PRETTYTABLE = False + + +class CallbackModule(object): + """This is an example ansible callback plugin that sends status + updates to a HipChat channel during playbook execution. + + This plugin makes use of the following environment variables: + HIPCHAT_TOKEN (required): HipChat API token + HIPCHAT_ROOM (optional): HipChat room to post in. Default: ansible + HIPCHAT_FROM (optional): Name to post as. Default: ansible + HIPCHAT_NOTIFY (optional): Add notify flag to important messages ("true" or "false"). Default: true + + Requires: + prettytable + + """ + + def __init__(self): + if not HAS_PRETTYTABLE: + self.disabled = True + utils.warning('The `prettytable` python module is not installed. ' + 'Disabling the HipChat callback plugin.') + + self.msg_uri = 'https://api.hipchat.com/v1/rooms/message' + self.token = os.getenv('HIPCHAT_TOKEN') + self.room = os.getenv('HIPCHAT_ROOM', 'ansible') + self.from_name = os.getenv('HIPCHAT_FROM', 'ansible') + self.allow_notify = (os.getenv('HIPCHAT_NOTIFY') != 'false') + + if self.token is None: + self.disabled = True + utils.warning('HipChat token could not be loaded. The HipChat ' + 'token can be provided using the `HIPCHAT_TOKEN` ' + 'environment variable.') + + self.printed_playbook = False + self.playbook_name = None + + def send_msg(self, msg, msg_format='text', color='yellow', notify=False): + """Method for sending a message to HipChat""" + + params = {} + params['room_id'] = self.room + params['from'] = self.from_name[:15] # max length is 15 + params['message'] = msg + params['message_format'] = msg_format + params['color'] = color + params['notify'] = int(self.allow_notify and notify) + + url = ('%s?auth_token=%s' % (self.msg_uri, self.token)) + try: + response = urllib2.urlopen(url, urllib.urlencode(params)) + return response.read() + except: + utils.warning('Could not submit message to hipchat') + + def on_any(self, *args, **kwargs): + pass + + def runner_on_failed(self, host, res, ignore_errors=False): + pass + + def runner_on_ok(self, host, res): + pass + + def runner_on_error(self, host, msg): + pass + + def runner_on_skipped(self, host, item=None): + pass + + def runner_on_unreachable(self, host, res): + pass + + def runner_on_no_hosts(self): + pass + + def runner_on_async_poll(self, host, res, jid, clock): + pass + + def runner_on_async_ok(self, host, res, jid): + pass + + def runner_on_async_failed(self, host, res, jid): + pass + + def playbook_on_start(self): + pass + + def playbook_on_notify(self, host, handler): + pass + + def playbook_on_no_hosts_matched(self): + pass + + def playbook_on_no_hosts_remaining(self): + pass + + def playbook_on_task_start(self, name, is_conditional): + pass + + def playbook_on_vars_prompt(self, varname, private=True, prompt=None, + encrypt=None, confirm=False, salt_size=None, + salt=None, default=None): + pass + + def playbook_on_setup(self): + pass + + def playbook_on_import_for_host(self, host, imported_file): + pass + + def playbook_on_not_import_for_host(self, host, missing_file): + pass + + def playbook_on_play_start(self, pattern): + """Display Playbook and play start messages""" + + # This block sends information about a playbook when it starts + # The playbook object is not immediately available at + # playbook_on_start so we grab it via the play + # + # Displays info about playbook being started by a person on an + # inventory, as well as Tags, 
Skip Tags and Limits + if not self.printed_playbook: + self.playbook_name, _ = os.path.splitext( + os.path.basename(self.play.playbook.filename)) + host_list = self.play.playbook.inventory.host_list + inventory = os.path.basename(os.path.realpath(host_list)) + self.send_msg("%s: Playbook initiated by %s against %s" % + (self.playbook_name, + self.play.playbook.remote_user, + inventory), notify=True) + self.printed_playbook = True + subset = self.play.playbook.inventory._subset + skip_tags = self.play.playbook.skip_tags + self.send_msg("%s:\nTags: %s\nSkip Tags: %s\nLimit: %s" % + (self.playbook_name, + ', '.join(self.play.playbook.only_tags), + ', '.join(skip_tags) if skip_tags else None, + ', '.join(subset) if subset else subset)) + + # This is where we actually say we are starting a play + self.send_msg("%s: Starting play: %s" % + (self.playbook_name, pattern)) + + def playbook_on_stats(self, stats): + """Display info about playbook statistics""" + hosts = sorted(stats.processed.keys()) + + t = prettytable.PrettyTable(['Host', 'Ok', 'Changed', 'Unreachable', + 'Failures']) + + failures = False + unreachable = False + + for h in hosts: + s = stats.summarize(h) + + if s['failures'] > 0: + failures = True + if s['unreachable'] > 0: + unreachable = True + + t.add_row([h] + [s[k] for k in ['ok', 'changed', 'unreachable', + 'failures']]) + + self.send_msg("%s: Playbook complete" % self.playbook_name, + notify=True) + + if failures or unreachable: + color = 'red' + self.send_msg("%s: Failures detected" % self.playbook_name, + color=color, notify=True) + else: + color = 'green' + + self.send_msg("/code %s:\n%s" % (self.playbook_name, t), color=color) diff --git a/plugins/inventory/ec2.ini b/plugins/inventory/ec2.ini index 9d05dfad031..b931c4a7da9 100644 --- a/plugins/inventory/ec2.ini +++ b/plugins/inventory/ec2.ini @@ -39,7 +39,7 @@ vpc_destination_variable = ip_address route53 = False # Additionally, you can specify the list of zones to exclude looking up in -# 'route53_excluded_zones' as a comma-seperated list. +# 'route53_excluded_zones' as a comma-separated list. # route53_excluded_zones = samplezone1.com, samplezone2.com # API calls to EC2 are slow. For this reason, we cache the results of an API diff --git a/plugins/inventory/ec2.py b/plugins/inventory/ec2.py index 4ec4abd36d7..84841d3f09a 100755 --- a/plugins/inventory/ec2.py +++ b/plugins/inventory/ec2.py @@ -510,6 +510,8 @@ class Ec2Inventory(object): instance_vars[key] = '' elif key == 'ec2_region': instance_vars[key] = value.name + elif key == 'ec2__placement': + instance_vars['ec2_placement'] = value.zone elif key == 'ec2_tags': for k, v in value.iteritems(): key = self.to_safe('ec2_tag_' + k) diff --git a/plugins/inventory/libvirt_lxc.py b/plugins/inventory/libvirt_lxc.py new file mode 100755 index 00000000000..f588a671faf --- /dev/null +++ b/plugins/inventory/libvirt_lxc.py @@ -0,0 +1,37 @@ +#!/usr/bin/env python + +# (c) 2013, Michael Scherer +# +# This file is part of Ansible, +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. 
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+from subprocess import Popen,PIPE
+import sys
+import json
+
+result = {}
+result['all'] = {}
+
+pipe = Popen(['virsh', '-q', '-c', 'lxc:///', 'list', '--name', '--all'], stdout=PIPE, universal_newlines=True)
+result['all']['hosts'] = [x[:-1] for x in pipe.stdout.readlines()]
+result['all']['vars'] = {}
+result['all']['vars']['ansible_connection'] = 'lxc'
+
+if len(sys.argv) == 2 and sys.argv[1] == '--list':
+    print json.dumps(result)
+elif len(sys.argv) == 3 and sys.argv[1] == '--host':
+    print json.dumps({'ansible_connection': 'lxc'})
+else:
+    print "Need an argument, either --list or --host <host>"
diff --git a/plugins/inventory/linode.py b/plugins/inventory/linode.py
index b4bcb1fad61..e68bf5d8b31 100755
--- a/plugins/inventory/linode.py
+++ b/plugins/inventory/linode.py
@@ -5,7 +5,7 @@ Linode external inventory script
 =================================
 
 Generates inventory that Ansible can understand by making API request to
-AWS Linode using the Chube library.
+Linode using the Chube library.
 
 NOTE: This script assumes Ansible is being executed where Chube is already
 installed and has a valid config at ~/.chube. If not, run:
@@ -71,21 +71,39 @@ just adapted that for Linode.
 ######################################################################
 
 # Standard imports
+import os
 import re
 import sys
 import argparse
 from time import time
+
 try:
     import json
 except ImportError:
     import simplejson as json
 
-# chube imports 'yaml', which is also the name of an inventory plugin,
-# so we remove the plugins dir from sys.path before importing it.
-old_path = sys.path
-sys.path = [d for d in sys.path if "ansible/plugins" not in d]
-from chube import *
-sys.path = old_path
+try:
+    from chube import load_chube_config
+    from chube import api as chube_api
+    from chube.datacenter import Datacenter
+    from chube.linode_obj import Linode
+except:
+    try:
+        # remove local paths and other stuff that may
+        # cause an import conflict, as chube is sensitive
+        # to name collisions on importing
+        old_path = sys.path
+        sys.path = [d for d in sys.path if d not in ('', os.getcwd(), os.path.dirname(os.path.realpath(__file__)))]
+
+        from chube import load_chube_config
+        from chube import api as chube_api
+        from chube.datacenter import Datacenter
+        from chube.linode_obj import Linode
+
+        sys.path = old_path
+    except Exception, e:
+        raise Exception("could not import chube")
 
 load_chube_config()
 
 # Imports for ansible
@@ -166,7 +184,7 @@ class LinodeInventory(object):
         try:
             for node in Linode.search(status=Linode.STATUS_RUNNING):
                 self.add_node(node)
-        except api.linode_api.ApiError, e:
+        except chube_api.linode_api.ApiError, e:
             print "Looks like Linode's API is down:"
             print
             print e
@@ -176,7 +194,7 @@ class LinodeInventory(object):
         """Gets details about a specific node."""
         try:
             return Linode.find(api_id=linode_id)
-        except api.linode_api.ApiError, e:
+        except chube_api.linode_api.ApiError, e:
             print "Looks like Linode's API is down:"
             print
             print e
diff --git a/plugins/inventory/rax.py b/plugins/inventory/rax.py
index 6836db61f66..457c20962a6 100755
--- a/plugins/inventory/rax.py
+++ b/plugins/inventory/rax.py
@@ -22,9 +22,11 @@ DOCUMENTATION = '''
 inventory: rax
 short_description: Rackspace Public Cloud external inventory script
 description:
-    - Generates inventory that Ansible can understand by making API request to
+    - Generates inventory that Ansible can understand by making an API request to
Rackspace Public Cloud API - | - When run against a specific host, this script returns the following variables: + When run against a specific host, this script returns the following + variables: rax_os-ext-sts_task_state rax_addresses rax_links @@ -65,12 +67,23 @@ options: authors: - Jesse Keating - Paul Durivage + - Matt Martz notes: - - RAX_CREDS_FILE is an optional environment variable that points to a pyrax-compatible credentials file. - - If RAX_CREDS_FILE is not supplied, rax.py will look for a credentials file at ~/.rackspace_cloud_credentials. + - RAX_CREDS_FILE is an optional environment variable that points to a + pyrax-compatible credentials file. + - If RAX_CREDS_FILE is not supplied, rax.py will look for a credentials file + at ~/.rackspace_cloud_credentials. - See https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating - - RAX_REGION is an optional environment variable to narrow inventory search scope - - RAX_REGION, if used, needs a value like ORD, DFW, SYD (a Rackspace datacenter) and optionally accepts a comma-separated list + - RAX_REGION is an optional environment variable to narrow inventory search + scope + - RAX_REGION, if used, needs a value like ORD, DFW, SYD (a Rackspace + datacenter) and optionally accepts a comma-separated list + - RAX_ENV is an environment variable that will use an environment as + configured in ~/.pyrax.cfg, see + https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration + - RAX_META_PREFIX is an environment variable that changes the prefix used + for meta key/value groups. For compatibility with ec2.py set to + RAX_META_PREFIX=tag requirements: [ "pyrax" ] examples: - description: List server instances @@ -83,13 +96,14 @@ examples: code: RAX_CREDS_FILE=~/.raxpub rax.py --host server.example.com ''' -import sys -import re import os - +import re +import sys import argparse import collections +from types import NoneType + try: import json except: @@ -98,9 +112,26 @@ except: try: import pyrax except ImportError: - print('pyrax required for this module') + print('pyrax is required for this module') sys.exit(1) +NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) + + +def rax_slugify(value): + return 'rax_%s' % (re.sub('[^\w-]', '_', value).lower().lstrip('_')) + + +def to_dict(obj): + instance = {} + for key in dir(obj): + value = getattr(obj, key) + if (isinstance(value, NON_CALLABLES) and not key.startswith('_')): + key = rax_slugify(key) + instance[key] = value + + return instance + def host(regions, hostname): hostvars = {} @@ -110,15 +141,7 @@ def host(regions, hostname): cs = pyrax.connect_to_cloudservers(region=region) for server in cs.servers.list(): if server.name == hostname: - keys = [key for key in vars(server) if key not in ('manager', '_info')] - for key in keys: - # Extract value - value = getattr(server, key) - - # Generate sanitized key - key = 'rax_' + (re.sub("[^A-Za-z0-9\-]", "_", key) - .lower() - .lstrip("_")) + for key, value in to_dict(server).items(): hostvars[key] = value # And finally, add an IP address @@ -129,6 +152,7 @@ def host(regions, hostname): def _list(regions): groups = collections.defaultdict(list) hostvars = collections.defaultdict(dict) + images = {} # Go through all the regions looking for servers for region in regions: @@ -139,26 +163,40 @@ def _list(regions): groups[region].append(server.name) # Check if group metadata key in servers' metadata - try: - group = server.metadata['group'] - except KeyError: - pass - else: - # Create group 
if not exist and add the server + group = server.metadata.get('group') + if group: groups[group].append(server.name) + for extra_group in server.metadata.get('groups', '').split(','): + if extra_group: + groups[extra_group].append(server.name) + # Add host metadata - keys = [key for key in vars(server) if key not in ('manager', '_info')] - for key in keys: - # Extract value - value = getattr(server, key) - - # Generate sanitized key - key = 'rax_' + (re.sub("[^A-Za-z0-9\-]", "_", key) - .lower() - .lstrip('_')) + for key, value in to_dict(server).items(): hostvars[server.name][key] = value + hostvars[server.name]['rax_region'] = region + + for key, value in server.metadata.iteritems(): + prefix = os.getenv('RAX_META_PREFIX', 'meta') + groups['%s_%s_%s' % (prefix, key, value)].append(server.name) + + groups['instance-%s' % server.id].append(server.name) + groups['flavor-%s' % server.flavor['id']].append(server.name) + try: + imagegroup = 'image-%s' % images[server.image['id']] + groups[imagegroup].append(server.name) + groups['image-%s' % server.image['id']].append(server.name) + except KeyError: + try: + image = cs.images.get(server.image['id']) + except cs.exceptions.NotFound: + groups['image-%s' % server.image['id']].append(server.name) + else: + images[image.id] = image.human_id + groups['image-%s' % image.human_id].append(server.name) + groups['image-%s' % server.image['id']].append(server.name) + # And finally, add an IP address hostvars[server.name]['ansible_ssh_host'] = server.accessIPv4 @@ -172,7 +210,7 @@ def parse_args(): 'inventory module') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--list', action='store_true', - help='List active servers') + help='List active servers') group.add_argument('--host', help='List details about the specific host') return parser.parse_args() @@ -180,38 +218,54 @@ def parse_args(): def setup(): default_creds_file = os.path.expanduser('~/.rackspace_cloud_credentials') + env = os.getenv('RAX_ENV', None) + if env: + pyrax.set_environment(env) + + keyring_username = pyrax.get_setting('keyring_username') + # Attempt to grab credentials from environment first try: - creds_file = os.environ['RAX_CREDS_FILE'] + creds_file = os.path.expanduser(os.environ['RAX_CREDS_FILE']) except KeyError, e: - # But if that fails, use the default location of ~/.rackspace_cloud_credentials + # But if that fails, use the default location of + # ~/.rackspace_cloud_credentials if os.path.isfile(default_creds_file): creds_file = default_creds_file - else: + elif not keyring_username: sys.stderr.write('No value in environment variable %s and/or no ' 'credentials file at %s\n' % (e.message, default_creds_file)) sys.exit(1) - pyrax.set_setting('identity_type', 'rackspace') + identity_type = pyrax.get_setting('identity_type') + pyrax.set_setting('identity_type', identity_type or 'rackspace') + + region = pyrax.get_setting('region') try: - pyrax.set_credential_file(os.path.expanduser(creds_file)) + if keyring_username: + pyrax.keyring_auth(keyring_username, region=region) + else: + pyrax.set_credential_file(creds_file, region=region) except Exception, e: sys.stderr.write("%s: %s\n" % (e, e.message)) sys.exit(1) regions = [] - for region in os.getenv('RAX_REGION', 'all').split(','): - region = region.strip().upper() - if region == 'ALL': - regions = pyrax.regions - break - elif region not in pyrax.regions: - sys.stderr.write('Unsupported region %s' % region) - sys.exit(1) - elif region not in regions: - regions.append(region) + if region: + 
regions.append(region)
+    else:
+        for region in os.getenv('RAX_REGION', 'all').split(','):
+            region = region.strip().upper()
+            if region == 'ALL':
+                regions = pyrax.regions
+                break
+            elif region not in pyrax.regions:
+                sys.stderr.write('Unsupported region %s' % region)
+                sys.exit(1)
+            elif region not in regions:
+                regions.append(region)
 
     return regions
 
@@ -225,5 +279,6 @@ def main():
         host(regions, args.host)
     sys.exit(0)
 
+
 if __name__ == '__main__':
-    main()
\ No newline at end of file
+    main()
diff --git a/plugins/inventory/ssh_config.py b/plugins/inventory/ssh_config.py
new file mode 100755
index 00000000000..7c04c8cc6da
--- /dev/null
+++ b/plugins/inventory/ssh_config.py
@@ -0,0 +1,111 @@
+#!/usr/bin/env python
+
+# (c) 2014, Tomas Karasek
+#
+# This file is part of Ansible.
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+# Dynamic inventory script which lets you use aliases from ~/.ssh/config.
+#
+# It prints inventory based on parsed ~/.ssh/config. You can refer to hosts
+# with their alias, rather than with the IP or hostname. It takes advantage
+# of the ansible_ssh_{host,port,user,private_key_file} variables.
+#
+# If you have in your .ssh/config:
+#    Host git
+#        HostName git.domain.org
+#        User tkarasek
+#        IdentityFile /home/tomk/keys/thekey
+#
+# You can do
+#    $ ansible git -m ping
+#
+# Example invocation:
+#    ssh_config.py --list
+#    ssh_config.py --host
+
+import argparse
+import os.path
+import sys
+
+import paramiko
+
+try:
+    import json
+except ImportError:
+    import simplejson as json
+
+_key = 'ssh_config'
+
+_ssh_to_ansible = [('user', 'ansible_ssh_user'),
+                   ('hostname', 'ansible_ssh_host'),
+                   ('identityfile', 'ansible_ssh_private_key_file'),
+                   ('port', 'ansible_ssh_port')]
+
+
+def get_config():
+    with open(os.path.expanduser('~/.ssh/config')) as f:
+        cfg = paramiko.SSHConfig()
+        cfg.parse(f)
+        ret_dict = {}
+        for d in cfg._config:
+            _copy = dict(d)
+            del _copy['host']
+            for host in d['host']:
+                ret_dict[host] = _copy['config']
+        return ret_dict
+
+
+def print_list():
+    cfg = get_config()
+    meta = {'hostvars': {}}
+    for alias, attributes in cfg.items():
+        tmp_dict = {}
+        for ssh_opt, ans_opt in _ssh_to_ansible:
+            if ssh_opt in attributes:
+                tmp_dict[ans_opt] = attributes[ssh_opt]
+        if tmp_dict:
+            meta['hostvars'][alias] = tmp_dict
+
+    print json.dumps({_key: list(set(meta['hostvars'].keys())), '_meta': meta})
+
+
+def print_host(host):
+    cfg = get_config()
+    print json.dumps(cfg[host])
+
+
+def get_args(args_list):
+    parser = argparse.ArgumentParser(
+        description='ansible inventory script parsing .ssh/config')
+    mutex_group = parser.add_mutually_exclusive_group(required=True)
+    help_list = 'list all hosts from .ssh/config inventory'
+    mutex_group.add_argument('--list', action='store_true', help=help_list)
+    help_host = 'display variables for a host'
+    mutex_group.add_argument('--host', help=help_host)
+    return parser.parse_args(args_list)
+
+
+def main(args_list):
+
+    args = get_args(args_list)
+    if args.list:
+        print_list()
+    if args.host:
+        print_host(args.host)
+
+
+if __name__ == '__main__':
+    main(sys.argv[1:])
diff --git a/plugins/inventory/vmware.ini b/plugins/inventory/vmware.ini
new file mode 100644
index 00000000000..13b8384bf6d
--- /dev/null
+++ b/plugins/inventory/vmware.ini
@@ -0,0 +1,15 @@
+# Ansible vmware external inventory script settings
+#
+[defaults]
+guests_only = True
+#vm_group =
+#hw_group =
+
+[cache]
+cache_max_age = 3600
+cache_dir = /var/tmp
+
+[auth]
+host = vcenter.example.com
+user = ihasaccess
+password = ssshverysecret
diff --git a/plugins/inventory/vmware.py b/plugins/inventory/vmware.py
new file mode 100755
index 00000000000..6ed73865899
--- /dev/null
+++ b/plugins/inventory/vmware.py
@@ -0,0 +1,205 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+'''
+VMware external inventory script
+================================
+
+Shamelessly copied from existing inventory scripts.
+
+This script and its ini file can be used more than once,
+e.g. vmware.py/vmware_colo.ini and vmware_idf.py/vmware_idf.ini
+(the script can be a symlink).
+
+So if you don't have a clustered vCenter, but multiple ESX machines or
+just different clusters, you can have an inventory for each and
+automatically group hosts based on file name, or specify a group in the ini.
+'''
+
+import os
+import sys
+import time
+import ConfigParser
+from psphere.client import Client
+from psphere.managedobjects import HostSystem
+
+try:
+    import json
+except ImportError:
+    import simplejson as json
+
+
+def save_cache(cache_item, data, config):
+    ''' saves item to cache '''
+    dpath = config.get('defaults', 'cache_dir')
+    try:
+        cache = open('/'.join([dpath,cache_item]), 'w')
+        cache.write(json.dumps(data))
+        cache.close()
+    except IOError, e:
+        pass # not really sure what to do here
+
+
+def get_cache(cache_item, config):
+    ''' returns cached item '''
+    dpath = config.get('defaults', 'cache_dir')
+    inv = {}
+    try:
+        cache = open('/'.join([dpath,cache_item]), 'r')
+        inv = json.loads(cache.read())
+        cache.close()
+    except IOError, e:
+        pass # not really sure what to do here
+
+    return inv
+
+def cache_available(cache_item, config):
+    ''' checks if we have a 'fresh' cache available for item requested '''
+
+    if config.has_option('defaults', 'cache_dir'):
+        dpath = config.get('defaults', 'cache_dir')
+
+        try:
+            existing = os.stat( '/'.join([dpath,cache_item]))
+        except:
+            # cache doesn't exist or isn't accessible
+            return False
+
+        if config.has_option('defaults', 'cache_max_age'):
+            # getint, since comparing the age against a raw string would
+            # never behave as intended
+            maxage = config.getint('defaults', 'cache_max_age')
+
+            if (int(time.time()) - existing.st_mtime) <= maxage:
+                return True
+
+    return False
+
+def get_host_info(host):
+    ''' Get variables about a specific host '''
+
+    hostinfo = {
+                'vmware_name' : host.name,
+                'vmware_tag' : host.tag,
+                'vmware_parent': host.parent.name,
+               }
+    for k in host.capability.__dict__.keys():
+        if k.startswith('_'):
+            continue
+        try:
+            hostinfo['vmware_' + k] = str(host.capability[k])
+        except:
+            continue
+
+    return hostinfo
+
+
+def get_inventory(client, config):
+    ''' Reads the inventory from cache or vmware api '''
+
+    if cache_available('inventory', config):
+        inv = get_cache('inventory',config)
+    else:
+        inv = { 'all': {'hosts': []}, '_meta': { 'hostvars': {} } }
+        # splitext, since rstrip('.py') would also strip trailing 'p'/'y' letters
+        default_group = os.path.splitext(os.path.basename(sys.argv[0]))[0]
+
+        if config.has_option('defaults', 'guests_only'):
+            guests_only = config.getboolean('defaults', 'guests_only')
+        else:
+            guests_only = True
+
+        if not guests_only:
+            if config.has_option('defaults','hw_group'):
+                hw_group = config.get('defaults','hw_group')
+            else:
+                hw_group = default_group + '_hw'
+            inv[hw_group] = []
+
+        if config.has_option('defaults','vm_group'):
+            vm_group = config.get('defaults','vm_group')
+        else:
+            vm_group = default_group + '_vm'
+        inv[vm_group] = []
+
+        # Loop through physical hosts:
+        hosts = HostSystem.all(client)
+        for host in hosts:
+            if not guests_only:
+                inv['all']['hosts'].append(host.name)
+                inv[hw_group].append(host.name)
+                if host.tag:
+                    taggroup = 'vmware_' + host.tag
+                    if taggroup in inv:
+                        inv[taggroup].append(host.name)
+                    else:
+                        inv[taggroup] = [ host.name ]
+
+                inv['_meta']['hostvars'][host.name] = get_host_info(host)
+                save_cache(host.name, inv['_meta']['hostvars'][host.name], config)
+
+            for vm in host.vm:
+                inv['all']['hosts'].append(vm.name)
+                inv[vm_group].append(vm.name)
+                if vm.tag:
+                    taggroup = 'vmware_' + vm.tag
+                    if taggroup in inv:
+                        inv[taggroup].append(vm.name)
+                    else:
+                        inv[taggroup] = [ vm.name ]
+
+                inv['_meta']['hostvars'][vm.name] = get_host_info(host)
+                save_cache(vm.name, inv['_meta']['hostvars'][vm.name], config)
+
+        save_cache('inventory', inv, config)
+    return json.dumps(inv)
+
+def get_single_host(client, config, hostname):
+
+    inv = {}
+
+    if cache_available(hostname, config):
+        inv = get_cache(hostname,config)
+    else:
+        hosts = HostSystem.all(client) #TODO: figure out single host getter
+        for host in hosts:
+            if hostname == host.name:
+                inv = get_host_info(host)
+                break
+            for vm in host.vm:
+                if hostname == vm.name:
+                    inv = get_host_info(host)
+                    break
+        save_cache(hostname,inv,config)
+
+    return json.dumps(inv)
+
+if __name__ == '__main__':
+    inventory = {}
+    hostname = None
+
+    if len(sys.argv) > 1:
+        if sys.argv[1] == "--host":
+            hostname = sys.argv[2]
+
+    # Read config
+    config = ConfigParser.SafeConfigParser()
+    for configfilename in [os.path.splitext(os.path.abspath(sys.argv[0]))[0] + '.ini', 'vmware.ini']:
+        if os.path.exists(configfilename):
+            config.read(configfilename)
+            break
+
+    try:
+        client = Client( config.get('auth','host'),
+                         config.get('auth','user'),
+                         config.get('auth','password'),
+                        )
+    except Exception, e:
+        client = None
+        #print >> sys.stderr, "Unable to login (only cache available): %s" % str(e)
+
+    # actually do the work
+    if hostname is None:
+        inventory = get_inventory(client, config)
+    else:
+        inventory = get_single_host(client, config, hostname)
+
+    # return to ansible
+    print inventory
diff --git a/test/README.md b/test/README.md
index e5339acc625..3e746062cd1 100644
--- a/test/README.md
+++ b/test/README.md
@@ -2,6 +2,7 @@ Ansible Test System
 ===================
 
 Folders
+=======
 
 unit
 ----
@@ -11,12 +12,14 @@ mock interfaces rather than producing side effects.
 Playbook engine code is better suited for integration tests.
 
+Requirements: sudo pip install paramiko PyYAML jinja2 httplib2 passlib
+
 integration
 -----------
 
 Integration test layer, constructed using playbooks.
 
-Some tests may require cloud credentials, others will not, and destructive tests are seperated from non-destructive so a subset
+Some tests may require cloud credentials, others will not, and destructive tests are separated from non-destructive so a subset
 can be run on development machines.
learn more diff --git a/test/integration/Makefile b/test/integration/Makefile index 7cdae607df0..da2758c1406 100644 --- a/test/integration/Makefile +++ b/test/integration/Makefile @@ -1,28 +1,60 @@ -all: non_destructive destructive check_mode test_hash +INVENTORY ?= inventory +VARS_FILE ?= integration_config.yml + +# Create a semi-random string for use when testing cloud-based resources +ifndef CLOUD_RESOURCE_PREFIX +CLOUD_RESOURCE_PREFIX := $(shell python -c "import string,random; print 'ansible-testing-' + ''.join(random.choice(string.ascii_letters + string.digits) for _ in xrange(8));") +endif + +CREDENTIALS_FILE = credentials.yml +# If credentials.yml exists, use it +ifneq ("$(wildcard $(CREDENTIALS_FILE))","") +CREDENTIALS_ARG = -e @$(CREDENTIALS_FILE) +else +CREDENTIALS_ARG = +endif + +all: non_destructive destructive check_mode test_hash test_handlers non_destructive: - ansible-playbook non_destructive.yml -i inventory -e @integration_config.yml -v $(TEST_FLAGS) + ansible-playbook non_destructive.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) destructive: - ansible-playbook destructive.yml -i inventory -e @integration_config.yml -v $(TEST_FLAGS) + ansible-playbook destructive.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) check_mode: - ansible-playbook check_mode.yml -i inventory -e @integration_config.yml -v --check $(TEST_FLAGS) + ansible-playbook check_mode.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v --check $(TEST_FLAGS) + +test_handlers: + ansible-playbook test_handlers.yml -i inventory.handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) test_hash: - ANSIBLE_HASH_BEHAVIOUR=replace ansible-playbook test_hash.yml -i inventory -v -e '{"test_hash":{"extra_args":"this is an extra arg"}}' - ANSIBLE_HASH_BEHAVIOUR=merge ansible-playbook test_hash.yml -i inventory -v -e '{"test_hash":{"extra_args":"this is an extra arg"}}' + ANSIBLE_HASH_BEHAVIOUR=replace ansible-playbook test_hash.yml -i $(INVENTORY) $(CREDENTIALS_ARG) -v -e '{"test_hash":{"extra_args":"this is an extra arg"}}' + ANSIBLE_HASH_BEHAVIOUR=merge ansible-playbook test_hash.yml -i $(INVENTORY) $(CREDENTIALS_ARG) -v -e '{"test_hash":{"extra_args":"this is an extra arg"}}' cloud: amazon rackspace -credentials.yml: - @echo "No credentials.yml file found. A file named 'credentials.yml' is needed to provide credentials needed to run cloud tests." - @exit 1 +cloud_cleanup: amazon_cleanup rackspace_cleanup + +amazon_cleanup: + python cleanup_ec2.py -y --match="^$(CLOUD_RESOURCE_PREFIX)" -amazon: credentials.yml - ansible-playbook amazon.yml -i inventory -e @integration_config.yml -e @credentials.yml -v $(TEST_FLAGS) - @# FIXME - Cleanup won't run if the previous tests fail - python cleanup_ec2.py -y +rackspace_cleanup: + @echo "FIXME - cleanup_rax.py not yet implemented" + @# python cleanup_rax.py -y --match="^$(CLOUD_RESOURCE_PREFIX)" + +$(CREDENTIALS_FILE): + @echo "No credentials file found. A file named '$(CREDENTIALS_FILE)' is needed to provide credentials needed to run cloud tests. See sample 'credentials.template' file." + @exit 1 -rackspace: credentials.yml - ansible-playbook rackspace.yml -i inventory -e @integration_config.yml -e @credentials.yml -v $(TEST_FLAGS) +amazon: $(CREDENTIALS_FILE) + ansible-playbook amazon.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \ + RC=$$? 
; \
+	CLOUD_RESOURCE_PREFIX="$(CLOUD_RESOURCE_PREFIX)" make amazon_cleanup ; \
+	exit $$RC;
+
+rackspace: $(CREDENTIALS_FILE)
+	ansible-playbook rackspace.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \
+	RC=$$? ; \
+	CLOUD_RESOURCE_PREFIX="$(CLOUD_RESOURCE_PREFIX)" make rackspace_cleanup ; \
+	exit $$RC;
diff --git a/test/integration/README.md b/test/integration/README.md
index 1bdc099cd1d..e05f843ac2f 100644
--- a/test/integration/README.md
+++ b/test/integration/README.md
@@ -5,15 +5,17 @@ The ansible integration system.
 
 Tests for playbooks, by playbooks.
 
-Some tests may require cloud credentials.
+Some tests may require credentials.  Credentials may be specified with `credentials.yml`.
 
 Tests should be run as root.
 
 Configuration
 =============
 
-Making your own version of integration_config.yml can allow for setting some tunable parameters to help run
-the tests better in your environment.
+Making your own version of `integration_config.yml` lets you set some tunable
+parameters that help the tests run better in your environment.  Some tests
+(e.g. cloud) will only run when access credentials are provided.  For more
+information about supported credentials, refer to `credentials.template`.
 
 Prerequisites
 =============
@@ -41,12 +43,30 @@ Destructive Tests
 These tests are allowed to install and remove some trivial packages.  You
 will likely want to devote these to a virtual environment. They won't reformat
 your filesystem, however :)
-
+
     make destructive
 
 Cloud Tests
 ===========
 
-Details pending, but these require cloud credentials.  These are not 'tests run in the cloud' so much as tests
-that leverage the cloud modules and are organized by cloud provider.
+Cloud tests exercise capabilities of cloud modules (e.g. ec2_key).  These are
+not 'tests run in the cloud' so much as tests that leverage the cloud modules
+and are organized by cloud provider.
+
+In order to run cloud tests, you must provide access credentials in a file
+named `credentials.yml`.  A sample credentials file named
+`credentials.template` is available for syntax help.
+
+Provide cloud credentials:
+
+    cp credentials.template credentials.yml
+    ${EDITOR:-vi} credentials.yml
+
+Run the tests:
+
+    make cloud
+
+*WARNING* running cloud integration tests will create and destroy cloud
+resources.  Running these tests may result in additional fees associated with
+your cloud account.  Care is taken to ensure that created resources are
+removed.  However, it is advisable to inspect your AWS console to ensure no
+unexpected resources are running.
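As a concrete sketch of the cleanup flow the Makefile above wires up (the
prefix value here is illustrative), the same prefix is used to create and
then reap cloud resources:

    CLOUD_RESOURCE_PREFIX=ansible-testing-dev01 make cloud
    CLOUD_RESOURCE_PREFIX=ansible-testing-dev01 make cloud_cleanup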
diff --git a/test/integration/cleanup_ec2.py b/test/integration/cleanup_ec2.py
index 08d54751aaf..d82dc4f340b 100644
--- a/test/integration/cleanup_ec2.py
+++ b/test/integration/cleanup_ec2.py
@@ -15,12 +15,15 @@ def delete_aws_resources(get_func, attr, opts):
     for item in get_func():
         val = getattr(item, attr)
         if re.search(opts.match_re, val):
-            prompt_and_delete("Delete object with %s=%s? [y/n]: " % (attr, val), opts.assumeyes)
+            prompt_and_delete(item, "Delete matching %s? [y/n]: " % (item,), opts.assumeyes)
 
-def prompt_and_delete(prompt, assumeyes):
-    while not assumeyes:
-        assumeyes = raw_input(prompt)
-    obj.delete()
+def prompt_and_delete(item, prompt, assumeyes):
+    if not assumeyes:
+        assumeyes = raw_input(prompt).lower() == 'y'
+    assert hasattr(item, 'delete'), "Class <%s> has no delete attribute" % item.__class__
+    if assumeyes:
+        item.delete()
+        print ("Deleted %s" % item)
 
 def parse_args():
     # Load details from credentials.yml
@@ -72,8 +75,11 @@ if __name__ == '__main__':
     aws = boto.connect_ec2(aws_access_key_id=opts.ec2_access_key,
                            aws_secret_access_key=opts.ec2_secret_key)
 
-    # Delete matching keys
-    delete_aws_resources(aws.get_all_key_pairs, 'name', opts)
+    try:
+        # Delete matching keys
+        delete_aws_resources(aws.get_all_key_pairs, 'name', opts)
 
-    # Delete matching groups
-    delete_aws_resources(aws.get_all_security_groups, 'name', opts)
+        # Delete matching groups
+        delete_aws_resources(aws.get_all_security_groups, 'name', opts)
+    except KeyboardInterrupt, e:
+        print "\nExiting on user command."
diff --git a/test/integration/credentials.template b/test/integration/credentials.template
new file mode 100644
index 00000000000..f21100405fc
--- /dev/null
+++ b/test/integration/credentials.template
@@ -0,0 +1,7 @@
+---
+# AWS Credentials
+ec2_access_key:
+ec2_secret_key:
+
+# GITHUB SSH private key - a path to an SSH private key for use with github.com
+github_ssh_private_key: "{{ lookup('env','HOME') }}/.ssh/id_rsa"
diff --git a/test/integration/destructive.yml b/test/integration/destructive.yml
index 8d0b11c6acc..406db63906b 100644
--- a/test/integration/destructive.yml
+++ b/test/integration/destructive.yml
@@ -1,9 +1,9 @@
 - hosts: testhost
   gather_facts: True
-  roles: 
+  roles:
     - { role: test_service, tags: test_service }
     - { role: test_pip, tags: test_pip }
     - { role: test_gem, tags: test_gem }
     - { role: test_yum, tags: test_yum }
     - { role: test_apt, tags: test_apt }
-
+    - { role: test_apt_repository, tags: test_apt_repository }
diff --git a/test/integration/host_vars/testhost b/test/integration/host_vars/testhost
index facd519959b..6e1d11307f9 100644
--- a/test/integration/host_vars/testhost
+++ b/test/integration/host_vars/testhost
@@ -7,4 +7,4 @@ test_hash:
   host_vars_testhost: "this is in host_vars/testhost"
 
 # Support execution from within a virtualenv
-ansible_python_interpreter: ${VIRTUAL_ENV-/usr}/bin/python
+ansible_python_interpreter: '/usr/bin/env python'
diff --git a/test/integration/inventory.handlers b/test/integration/inventory.handlers
new file mode 100644
index 00000000000..905026f12ef
--- /dev/null
+++ b/test/integration/inventory.handlers
@@ -0,0 +1,6 @@
+[testgroup]
+A
+B
+C
+D
+E
diff --git a/test/integration/non_destructive.yml b/test/integration/non_destructive.yml
index f8c6772ee9f..c8d836896aa 100644
--- a/test/integration/non_destructive.yml
+++ b/test/integration/non_destructive.yml
@@ -36,3 +36,4 @@
     - { role: test_command_shell, tags: test_command_shell }
     - { role: test_failed_when, tags: test_failed_when }
     - { role: test_script, tags: test_script }
+    - { role: test_authorized_key, tags: test_authorized_key }
diff --git a/test/integration/roles/setup_ec2/defaults/main.yml b/test/integration/roles/setup_ec2/defaults/main.yml
new file mode 100644
index 00000000000..fb1f88b1ecb
--- /dev/null
+++ b/test/integration/roles/setup_ec2/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+resource_prefix: 'ansible-testing-'
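The `resource_prefix` default above is the hook the Makefile overrides with
its generated `CLOUD_RESOURCE_PREFIX`; running a cloud playbook by hand looks
roughly like this (the prefix value is illustrative):

    ansible-playbook amazon.yml -i inventory -e @integration_config.yml -e @credentials.yml -e "resource_prefix=ansible-testing-dev01" -v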
diff --git a/test/integration/roles/test_apt/tasks/apt.yml b/test/integration/roles/test_apt/tasks/apt.yml
index 151f5313595..be0facdf098 100644
--- a/test/integration/roles/test_apt/tasks/apt.yml
+++ b/test/integration/roles/test_apt/tasks/apt.yml
@@ -1,4 +1,19 @@
-# UNINSTALL
+# UNINSTALL 'python-apt'
+#  The `apt` module has the smarts to auto-install `python-apt`.  To test, we
+#  will first uninstall `python-apt`.
+- name: check python-apt with dpkg
+  shell: dpkg -s python-apt
+  register: dpkg_result
+  ignore_errors: true
+
+- name: uninstall python-apt with apt
+  apt: pkg=python-apt state=absent purge=yes
+  register: apt_result
+  when: dpkg_result|success
+
+# UNINSTALL 'hello'
+#   With 'python-apt' uninstalled, the first call to 'apt' should install
+#   python-apt.
 - name: uninstall hello with apt
   apt: pkg=hello state=absent purge=yes
   register: apt_result
@@ -8,9 +23,6 @@
   failed_when: False
   register: dpkg_result
 
-- debug: var=apt_result
-- debug: var=dpkg_result
-
 - name: verify uninstallation of hello
   assert:
     that:
diff --git a/test/integration/roles/test_apt_repository/meta/main.yml b/test/integration/roles/test_apt_repository/meta/main.yml
new file mode 100644
index 00000000000..07faa217762
--- /dev/null
+++ b/test/integration/roles/test_apt_repository/meta/main.yml
@@ -0,0 +1,2 @@
+dependencies:
+  - prepare_tests
diff --git a/test/integration/roles/test_apt_repository/tasks/apt.yml b/test/integration/roles/test_apt_repository/tasks/apt.yml
new file mode 100644
index 00000000000..7cbc9d2128a
--- /dev/null
+++ b/test/integration/roles/test_apt_repository/tasks/apt.yml
@@ -0,0 +1,137 @@
+---
+
+- set_fact:
+    test_ppa_name: 'ppa:menulibre-dev/devel'
+    test_ppa_spec: 'deb http://ppa.launchpad.net/menulibre-dev/devel/ubuntu {{ansible_distribution_release}} main'
+    test_ppa_key: 'A7AD98A1'   # http://keyserver.ubuntu.com:11371/pks/lookup?search=0xD06AAF4C11DAB86DF421421EFE6B20ECA7AD98A1&op=index
+
+#
+# TEST: apt_repository: repo=<name>
+#
+- include: 'cleanup.yml'
+
+- name: 'record apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_before
+
+- name: 'name=<name> (expect: pass)'
+  apt_repository: repo='{{test_ppa_name}}' state=present
+  register: result
+
+- name: 'assert the repo was added'
+  assert:
+    that:
+      - 'result.changed'
+      - 'result.state == "present"'
+      - 'result.repo == "{{test_ppa_name}}"'
+
+- name: 'examine apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_after
+
+- name: 'assert the apt cache did change'
+  assert:
+    that:
+      - 'cache_before.stat.mtime != cache_after.stat.mtime'
+
+- name: 'ensure ppa key is installed (expect: pass)'
+  apt_key: id='{{test_ppa_key}}' state=present
+
+#
+# TEST: apt_repository: repo=<name> update_cache=no
+#
+- include: 'cleanup.yml'
+
+- name: 'record apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_before
+
+- name: 'name=<name> update_cache=no (expect: pass)'
+  apt_repository: repo='{{test_ppa_name}}' state=present update_cache=no
+  register: result
+
+- assert:
+    that:
+      - 'result.changed'
+      - 'result.state == "present"'
+      - 'result.repo == "{{test_ppa_name}}"'
+
+- name: 'examine apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_after
+
+- name: 'assert the apt cache did *NOT* change'
+  assert:
+    that:
+      - 'cache_before.stat.mtime == cache_after.stat.mtime'
+
+- name: 'ensure ppa key is installed (expect: pass)'
+  apt_key: id='{{test_ppa_key}}' state=present
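+# (Editor's note, not part of the original patch: throughout these tests the
+# mtime of /var/cache/apt/pkgcache.bin serves as a cheap proxy for "the apt
+# cache was rebuilt" -- update_cache=yes should touch it, update_cache=no
+# should leave it alone.)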
+#
+# TEST: apt_repository: repo=<name> update_cache=yes
+#
+- include: 'cleanup.yml'
+
+- name: 'record apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_before
+
+- name: 'name=<name> update_cache=yes (expect: pass)'
+  apt_repository: repo='{{test_ppa_name}}' state=present update_cache=yes
+  register: result
+
+- assert:
+    that:
+      - 'result.changed'
+      - 'result.state == "present"'
+      - 'result.repo == "{{test_ppa_name}}"'
+
+- name: 'examine apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_after
+
+- name: 'assert the apt cache did change'
+  assert:
+    that:
+      - 'cache_before.stat.mtime != cache_after.stat.mtime'
+
+- name: 'ensure ppa key is installed (expect: pass)'
+  apt_key: id='{{test_ppa_key}}' state=present
+
+#
+# TEST: apt_repository: repo=<spec>
+#
+- include: 'cleanup.yml'
+
+- name: 'record apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_before
+
+- name: 'name=<spec> (expect: pass)'
+  apt_repository: repo='{{test_ppa_spec}}' state=present
+  register: result
+
+- assert:
+    that:
+      - 'result.changed'
+      - 'result.state == "present"'
+      - 'result.repo == "{{test_ppa_spec}}"'
+
+- name: 'examine apt cache mtime'
+  stat: path='/var/cache/apt/pkgcache.bin'
+  register: cache_after
+
+- name: 'assert the apt cache did change'
+  assert:
+    that:
+      - 'cache_before.stat.mtime != cache_after.stat.mtime'
+
+# When installing a repo with the spec, the key is *NOT* added
+- name: 'ensure ppa key is absent (expect: pass)'
+  apt_key: id='{{test_ppa_key}}' state=absent
+
+#
+# TEARDOWN
+#
+- include: 'cleanup.yml'
diff --git a/test/integration/roles/test_apt_repository/tasks/cleanup.yml b/test/integration/roles/test_apt_repository/tasks/cleanup.yml
new file mode 100644
index 00000000000..86a09dd5aec
--- /dev/null
+++ b/test/integration/roles/test_apt_repository/tasks/cleanup.yml
@@ -0,0 +1,18 @@
+---
+# tasks to cleanup a repo and assert it is gone
+
+- name: remove existing ppa
+  apt_repository: repo={{test_ppa_name}} state=absent
+  ignore_errors: true
+
+- name: test that ppa does not exist (expect pass)
+  shell: cat /etc/apt/sources.list /etc/apt/sources.list.d/* | grep "{{test_ppa_spec}}"
+  register: command
+  failed_when: command.rc == 0
+  changed_when: false
+
+# Should this use apt-key, maybe?
+- name: remove ppa key
+  apt_key: id={{test_ppa_key}} state=absent
+  ignore_errors: true
+
diff --git a/test/integration/roles/test_apt_repository/tasks/main.yml b/test/integration/roles/test_apt_repository/tasks/main.yml
new file mode 100644
index 00000000000..8a16a061bd9
--- /dev/null
+++ b/test/integration/roles/test_apt_repository/tasks/main.yml
@@ -0,0 +1,21 @@
+# test code for the apt_repository module
+# (c) 2014, James Laska
+
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+ +- include: 'apt.yml' + when: ansible_distribution in ('Ubuntu', 'Debian') + diff --git a/test/integration/roles/test_assemble/tasks/main.yml b/test/integration/roles/test_assemble/tasks/main.yml index b20551f8866..f06cee6ace8 100644 --- a/test/integration/roles/test_assemble/tasks/main.yml +++ b/test/integration/roles/test_assemble/tasks/main.yml @@ -69,3 +69,13 @@ - "result.state == 'file'" - "result.md5sum == '96905702a2ece40de6bf3a94b5062513'" +- name: test assemble with remote_src=False and a delimiter + assemble: src="./" dest="{{output_dir}}/assembled5" remote_src=no delimiter="#--- delimiter ---#" + register: result + +- name: assert the fragments were assembled without remote + assert: + that: + - "result.state == 'file'" + - "result.md5sum == '4773eac67aba3f0be745876331c8a450'" + diff --git a/test/integration/roles/test_async/tasks/main.yml b/test/integration/roles/test_async/tasks/main.yml index 350d5ef4701..502140599fc 100644 --- a/test/integration/roles/test_async/tasks/main.yml +++ b/test/integration/roles/test_async/tasks/main.yml @@ -43,3 +43,17 @@ - "'stdout_lines' in async_result" - "async_result.rc == 0" +- name: test async without polling + command: sleep 5 + async: 30 + poll: 0 + register: async_result + +- debug: var=async_result + +- name: validate async without polling returns + assert: + that: + - "'ansible_job_id' in async_result" + - "'started' in async_result" + - "'finished' not in async_result" diff --git a/test/integration/roles/test_authorized_key/defaults/main.yml b/test/integration/roles/test_authorized_key/defaults/main.yml new file mode 100644 index 00000000000..e3a7606e01b --- /dev/null +++ b/test/integration/roles/test_authorized_key/defaults/main.yml @@ -0,0 +1,15 @@ +--- +dss_key_basic: > + ssh-dss DATA_BASIC root@testing +dss_key_unquoted_option: > + idle-timeout=5m ssh-dss DATA_UNQUOTED_OPTION root@testing +dss_key_command: > + command="/bin/true" ssh-dss DATA_COMMAND root@testing +dss_key_complex_command: > + command="echo foo 'bar baz'" ssh-dss DATA_COMPLEX_COMMAND root@testing +dss_key_command_single_option: > + no-port-forwarding,command="/bin/true" ssh-dss DATA_COMMAND_SINGLE_OPTIONS root@testing +dss_key_command_multiple_options: > + no-port-forwarding,idle-timeout=5m,command="/bin/true" ssh-dss DATA_COMMAND_MULTIPLE_OPTIONS root@testing +dss_key_trailing: > + ssh-dss DATA_TRAILING root@testing foo bar baz diff --git a/test/integration/roles/test_authorized_key/meta/main.yml b/test/integration/roles/test_authorized_key/meta/main.yml new file mode 100644 index 00000000000..145d4f7ca1f --- /dev/null +++ b/test/integration/roles/test_authorized_key/meta/main.yml @@ -0,0 +1,2 @@ +dependencies: + - prepare_tests diff --git a/test/integration/roles/test_authorized_key/tasks/main.yml b/test/integration/roles/test_authorized_key/tasks/main.yml new file mode 100644 index 00000000000..20f369e509c --- /dev/null +++ b/test/integration/roles/test_authorized_key/tasks/main.yml @@ -0,0 +1,244 @@ +# test code for the authorized_key module +# (c) 2014, James Cammarata + +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+
+
+# -------------------------------------------------------------
+# Setup steps
+
+- name: touch the authorized_keys file
+  file: dest="{{output_dir}}/authorized_keys" state=touch
+  register: result
+
+- name: assert that the authorized_keys file was created
+  assert:
+    that:
+      - ['result.changed == True']
+      - ['result.state == "file"']
+
+# -------------------------------------------------------------
+# basic ssh-dss key
+
+- name: add basic ssh-dss key
+  authorized_key: user=root key="{{ dss_key_basic }}" state=present path="{{output_dir|expanduser}}/authorized_keys"
+  register: result
+
+- name: assert that the key was added
+  assert:
+    that:
+      - ['result.changed == True']
+      - ['result.key == dss_key_basic']
+      - ['result.key_options == None']
+
+- name: re-add basic ssh-dss key
+  authorized_key: user=root key="{{ dss_key_basic }}" state=present path="{{output_dir|expanduser}}/authorized_keys"
+  register: result
+
+- name: assert that nothing changed
+  assert:
+    that:
+      - ['result.changed == False']
+
+# -------------------------------------------------------------
+# ssh-dss key with an unquoted option
+
+- name: add ssh-dss key with an unquoted option
+  authorized_key:
+    user: root
+    key: "{{ dss_key_unquoted_option }}"
+    state: present
+    path: "{{output_dir|expanduser}}/authorized_keys"
+  register: result
+
+- name: assert that the key was added
+  assert:
+    that:
+      - ['result.changed == True']
+      - ['result.key == dss_key_unquoted_option']
+      - ['result.key_options == None']
+
+- name: re-add ssh-dss key with an unquoted option
+  authorized_key:
+    user: root
+    key: "{{ dss_key_unquoted_option }}"
+    state: present
+    path: "{{output_dir|expanduser}}/authorized_keys"
+  register: result
+
+- name: assert that nothing changed
+  assert:
+    that:
+      - ['result.changed == False']
+
+# -------------------------------------------------------------
+# ssh-dss key with a leading command="/bin/foo"
+
+- name: add ssh-dss key with a leading command
+  authorized_key:
+    user: root
+    key: "{{ dss_key_command }}"
+    state: present
+    path: "{{output_dir|expanduser}}/authorized_keys"
+  register: result
+
+- name: assert that the key was added
+  assert:
+    that:
+      - ['result.changed == True']
+      - ['result.key == dss_key_command']
+      - ['result.key_options == None']
+
+- name: re-add ssh-dss key with a leading command
+  authorized_key:
+    user: root
+    key: "{{ dss_key_command }}"
+    state: present
+    path: "{{output_dir|expanduser}}/authorized_keys"
+  register: result
+
+- name: assert that nothing changed
+  assert:
+    that:
+      - ['result.changed == False']
+
+# -------------------------------------------------------------
+# ssh-dss key with a complex quoted leading command
+#   ie.
command="/bin/echo foo 'bar baz'" + +- name: add ssh-dss key with a complex quoted leading command + authorized_key: + user: root + key: "{{ dss_key_complex_command }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that the key was added + assert: + that: + - ['result.changed == True'] + - ['result.key == dss_key_complex_command'] + - ['result.key_options == None'] + +- name: re-add ssh-dss key with a complex quoted leading command + authorized_key: + user: root + key: "{{ dss_key_complex_command }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that nothing changed + assert: + that: + - ['result.changed == False'] + +# ------------------------------------------------------------- +# ssh-dss key with a command and a single option, which are +# in a comma-separated list + +- name: add ssh-dss key with a command and a single option + authorized_key: + user: root + key: "{{ dss_key_command_single_option }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that the key was added + assert: + that: + - ['result.changed == True'] + - ['result.key == dss_key_command_single_option'] + - ['result.key_options == None'] + +- name: re-add ssh-dss key with a command and a single option + authorized_key: + user: root + key: "{{ dss_key_command_single_option }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that nothing changed + assert: + that: + - ['result.changed == False'] + +# ------------------------------------------------------------- +# ssh-dss key with a command and multiple other options + +- name: add ssh-dss key with a command and multiple options + authorized_key: + user: root + key: "{{ dss_key_command_multiple_options }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that the key was added + assert: + that: + - ['result.changed == True'] + - ['result.key == dss_key_command_multiple_options'] + - ['result.key_options == None'] + +- name: re-add ssh-dss key with a command and multiple options + authorized_key: + user: root + key: "{{ dss_key_command_multiple_options }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that nothing changed + assert: + that: + - ['result.changed == False'] + +# ------------------------------------------------------------- +# ssh-dss key with multiple trailing parts, which are space- +# separated and not quoted in any way + +- name: add ssh-dss key with trailing parts + authorized_key: + user: root + key: "{{ dss_key_trailing }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that the key was added + assert: + that: + - ['result.changed == True'] + - ['result.key == dss_key_trailing'] + - ['result.key_options == None'] + +- name: re-add ssh-dss key with trailing parts + authorized_key: + user: root + key: "{{ dss_key_trailing }}" + state: present + path: "{{output_dir|expanduser}}/authorized_keys" + register: result + +- name: assert that nothing changed + assert: + that: + - ['result.changed == False'] + diff --git a/test/integration/roles/test_ec2/meta/main.yml b/test/integration/roles/test_ec2/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2/meta/main.yml +++ b/test/integration/roles/test_ec2/meta/main.yml @@ 
-1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_ami/meta/main.yml b/test/integration/roles/test_ec2_ami/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2_ami/meta/main.yml +++ b/test/integration/roles/test_ec2_ami/meta/main.yml @@ -1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_eip/meta/main.yml b/test/integration/roles/test_ec2_eip/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2_eip/meta/main.yml +++ b/test/integration/roles/test_ec2_eip/meta/main.yml @@ -1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_elb/meta/main.yml b/test/integration/roles/test_ec2_elb/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2_elb/meta/main.yml +++ b/test/integration/roles/test_ec2_elb/meta/main.yml @@ -1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_elb_lb/meta/main.yml b/test/integration/roles/test_ec2_elb_lb/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2_elb_lb/meta/main.yml +++ b/test/integration/roles/test_ec2_elb_lb/meta/main.yml @@ -1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_facts/meta/main.yml b/test/integration/roles/test_ec2_facts/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2_facts/meta/main.yml +++ b/test/integration/roles/test_ec2_facts/meta/main.yml @@ -1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_group/defaults/main.yml b/test/integration/roles/test_ec2_group/defaults/main.yml index e10da44d847..4063791af4b 100644 --- a/test/integration/roles/test_ec2_group/defaults/main.yml +++ b/test/integration/roles/test_ec2_group/defaults/main.yml @@ -1,5 +1,5 @@ --- # defaults file for test_ec2_group -ec2_group_name: 'ansible-testing-{{ random_string }}' +ec2_group_name: '{{resource_prefix}}' ec2_group_description: 'Created by ansible integration tests' diff --git a/test/integration/roles/test_ec2_key/defaults/main.yml b/test/integration/roles/test_ec2_key/defaults/main.yml index 2242ea07093..df0082d999b 100644 --- a/test/integration/roles/test_ec2_key/defaults/main.yml +++ b/test/integration/roles/test_ec2_key/defaults/main.yml @@ -1,3 +1,3 @@ --- # defaults file for test_ec2_key -ec2_key_name: 'ansible-testing-{{ random_string }}' +ec2_key_name: '{{resource_prefix}}' diff --git a/test/integration/roles/test_ec2_tag/meta/main.yml b/test/integration/roles/test_ec2_tag/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2_tag/meta/main.yml +++ b/test/integration/roles/test_ec2_tag/meta/main.yml @@ -1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_vol/meta/main.yml b/test/integration/roles/test_ec2_vol/meta/main.yml index 1050c23ce30..1f64f1169a9 100644 --- a/test/integration/roles/test_ec2_vol/meta/main.yml +++ b/test/integration/roles/test_ec2_vol/meta/main.yml @@ -1,3 +1,3 @@ -dependencies: +dependencies: - prepare_tests - + - setup_ec2 diff --git a/test/integration/roles/test_ec2_vpc/meta/main.yml b/test/integration/roles/test_ec2_vpc/meta/main.yml index 
1050c23ce30..1f64f1169a9 100644
--- a/test/integration/roles/test_ec2_vpc/meta/main.yml
+++ b/test/integration/roles/test_ec2_vpc/meta/main.yml
@@ -1,3 +1,3 @@
-dependencies: 
+dependencies:
   - prepare_tests
-
+  - setup_ec2
diff --git a/test/integration/roles/test_file/tasks/main.yml b/test/integration/roles/test_file/tasks/main.yml
index 174f66a9fba..588c1b6747b 100644
--- a/test/integration/roles/test_file/tasks/main.yml
+++ b/test/integration/roles/test_file/tasks/main.yml
@@ -164,5 +164,24 @@
     that:
       - "file11_result.uid == 1235"
 
+- name: fail to create soft link to non-existent file
+  file: src=/nonexistent dest={{output_dir}}/soft2.txt state=link force=no
+  register: file12_result
+  ignore_errors: true
+
+- name: verify that link was not created
+  assert:
+    that:
+      - "file12_result.failed == true"
+
+- name: force creation of soft link to non-existent file
+  file: src=/nonexistent dest={{output_dir}}/soft2.txt state=link force=yes
+  register: file13_result
+
+- name: verify that link was created
+  assert:
+    that:
+      - "file13_result.changed == true"
+
 - name: remove directory foobar
   file: path={{output_dir}}/foobar state=absent
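An ad-hoc sketch of the link behavior asserted above (paths illustrative):

    ansible localhost -c local -m file -a "src=/nonexistent dest=/tmp/soft2.txt state=link force=yes"

With force=no the file module refuses to create a symlink whose src is
missing; with force=yes it creates the dangling link and reports changed.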
diff --git a/test/integration/roles/test_git/tasks/main.yml b/test/integration/roles/test_git/tasks/main.yml
index a7072d1ab52..d5b92c8366c 100644
--- a/test/integration/roles/test_git/tasks/main.yml
+++ b/test/integration/roles/test_git/tasks/main.yml
@@ -16,11 +16,15 @@
 # You should have received a copy of the GNU General Public License
 # along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
 
-- name: set where to extract the repo
-  set_fact: checkout_dir={{ output_dir }}/git
-
-- name: set what repo to use
-  set_fact: repo=https://github.com/jimi-c/test_role
+- name: set role facts
+  set_fact:
+    checkout_dir: '{{ output_dir }}/git'
+    repo_format1: 'https://github.com/jimi-c/test_role'
+    repo_format2: 'git@github.com:jimi-c/test_role.git'
+    repo_format3: 'ssh://git@github.com/jimi-c/test_role.git'
+    known_host_files:
+      - "{{ lookup('env','HOME') }}/.ssh/known_hosts"
+      - '/etc/ssh/ssh_known_hosts'
 
 - name: clean out the output_dir
   shell: rm -rf {{ output_dir }}/*
@@ -28,28 +32,26 @@
 - name: verify that git is installed so this test can continue
   shell: which git
 
+#
+# Test repo=https://github.com/...
+#
+
 - name: initial checkout
-  git: repo={{ repo }} dest={{ checkout_dir }}
+  git: repo={{ repo_format1 }} dest={{ checkout_dir }}
   register: git_result
 
-- debug: var=git_result
-
-- shell: ls ~/ansible_testing/git
-
 - name: verify information about the initial clone
   assert:
     that:
       - "'before' in git_result"
       - "'after' in git_result"
       - "not git_result.before"
-      - "git_result.changed" 
+      - "git_result.changed"
 
 - name: repeated checkout
-  git: repo={{ repo }} dest={{ checkout_dir }}
+  git: repo={{ repo_format1 }} dest={{ checkout_dir }}
   register: git_result2
 
-- debug: var=git_result2
-
 - name: check for tags
   stat: path={{ checkout_dir }}/.git/refs/tags
   register: tags
@@ -74,6 +76,61 @@
     that:
       - "not git_result2.changed"
 
+#
+# Test repo=git@github.com:/...
+# Requires variable: github_ssh_private_key
+#
+
+- name: clear checkout_dir
+  file: state=absent path={{ checkout_dir }}
+- name: remove known_host files
+  file: state=absent path={{ item }}
+  with_items: known_host_files
+- name: checkout ssh://git@github.com repo without accept_hostkey (expected fail)
+  git: repo={{ repo_format2 }} dest={{ checkout_dir }}
+  register: git_result
+  ignore_errors: true
+- assert:
+    that:
+      - 'git_result.failed'
+      - 'git_result.msg == "github.com has an unknown hostkey. Set accept_hostkey to True or manually add the hostkey prior to running the git module"'
+
+- name: checkout git@github.com repo with accept_hostkey (expected pass)
+  git:
+    repo: '{{ repo_format2 }}'
+    dest: '{{ checkout_dir }}'
+    accept_hostkey: true
+    key_file: '{{ github_ssh_private_key }}'
+  register: git_result
+  when: github_ssh_private_key is defined
+
+- assert:
+    that:
+      - 'git_result.changed'
+  when: not git_result|skipped
+
+#
+# Test repo=ssh://git@github.com/...
+# Requires variable: github_ssh_private_key
+#
+
+- name: clear checkout_dir
+  file: state=absent path={{ checkout_dir }}
+
+- name: checkout ssh://git@github.com repo with accept_hostkey (expected pass)
+  git:
+    repo: '{{ repo_format3 }}'
+    dest: '{{ checkout_dir }}'
+    version: 'master'
+    accept_hostkey: false # should already have been accepted
+    key_file: '{{ github_ssh_private_key }}'
+  register: git_result
+  when: github_ssh_private_key is defined
+
+- assert:
+    that:
+      - 'git_result.changed'
+  when: not git_result|skipped
diff --git a/test/integration/roles/test_handlers_meta/handlers/main.yml b/test/integration/roles/test_handlers_meta/handlers/main.yml
new file mode 100644
index 00000000000..634e6eca2ad
--- /dev/null
+++ b/test/integration/roles/test_handlers_meta/handlers/main.yml
@@ -0,0 +1,7 @@
+- name: set_handler_fact_1
+  set_fact:
+    handler1_called: True
+
+- name: set_handler_fact_2
+  set_fact:
+    handler2_called: True
diff --git a/test/integration/roles/test_handlers_meta/meta/main.yml b/test/integration/roles/test_handlers_meta/meta/main.yml
new file mode 100644
index 00000000000..1050c23ce30
--- /dev/null
+++ b/test/integration/roles/test_handlers_meta/meta/main.yml
@@ -0,0 +1,3 @@
+dependencies:
+  - prepare_tests
+
diff --git a/test/integration/roles/test_handlers_meta/tasks/main.yml b/test/integration/roles/test_handlers_meta/tasks/main.yml
new file mode 100644
index 00000000000..047b61ce886
--- /dev/null
+++ b/test/integration/roles/test_handlers_meta/tasks/main.yml
@@ -0,0 +1,41 @@
+# test code for the meta: flush_handlers keyword
+# (c) 2014, James Tanner
+
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
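+# The behavior under test: 'meta: flush_handlers' forces any handlers that
+# have been notified so far to run immediately, rather than waiting for the
+# end of the play.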
+ +- name: notify the first handler + shell: echo + notify: + - set_handler_fact_1 + +- name: force handler execution now + meta: "flush_handlers" + +- name: assert handler1 ran and not handler2 + assert: + that: + - "handler1_called is defined" + - "handler2_called is not defined" + +- name: reset handler1_called + set_fact: + handler1_called: False + +- name: notify the second handler + shell: echo + notify: + - set_handler_fact_2 + diff --git a/test/integration/roles/test_lookups/tasks/main.yml b/test/integration/roles/test_lookups/tasks/main.yml index d54b769ecb9..0340a12c74e 100644 --- a/test/integration/roles/test_lookups/tasks/main.yml +++ b/test/integration/roles/test_lookups/tasks/main.yml @@ -82,3 +82,17 @@ assert: that: - "test_val == known_var_value.stdout" + + +# PIPE LOOKUP + +# https://github.com/ansible/ansible/issues/6550 +- name: confirm pipe lookup works with a single positional arg + debug: msg="{{ lookup('pipe', 'ls') }}" + +# https://github.com/ansible/ansible/issues/6550 +- name: confirm pipe lookup works with multiple positional args + debug: msg="{{ lookup('pipe', 'ls /tmp /') }}" + + + diff --git a/test/integration/roles/test_service/tasks/main.yml b/test/integration/roles/test_service/tasks/main.yml index a9da5d951a8..749d164724e 100644 --- a/test/integration/roles/test_service/tasks/main.yml +++ b/test/integration/roles/test_service/tasks/main.yml @@ -11,7 +11,7 @@ - "install_result.mode == '0755'" - include: 'sysv_setup.yml' - when: ansible_distribution in ('RHEL', 'CentOS', 'ScientificLinux') + when: ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] - include: 'systemd_setup.yml' when: ansible_distribution == 'Fedora' - include: 'upstart_setup.yml' @@ -101,7 +101,7 @@ - "remove_result.state == 'absent'" - include: 'sysv_cleanup.yml' - when: ansible_distribution in ('RHEL', 'CentOS', 'ScientificLinux') + when: ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] - include: 'systemd_cleanup.yml' when: ansible_distribution == 'Fedora' - include: 'upstart_cleanup.yml' diff --git a/test/integration/roles/test_service/tasks/upstart_cleanup.yml b/test/integration/roles/test_service/tasks/upstart_cleanup.yml index 3c4e4e50477..c99446bf652 100644 --- a/test/integration/roles/test_service/tasks/upstart_cleanup.yml +++ b/test/integration/roles/test_service/tasks/upstart_cleanup.yml @@ -1,10 +1,10 @@ - name: remove the upstart init file - file: path=/etc/init/ansible_test state=absent + file: path=/etc/init/ansible_test.conf state=absent register: remove_upstart_result - name: assert that the upstart init file was removed assert: that: - - "remove_upstart_result.path == '/etc/init/ansible_test'" + - "remove_upstart_result.path == '/etc/init/ansible_test.conf'" - "remove_upstart_result.state == 'absent'" diff --git a/test/integration/roles/test_service/tasks/upstart_setup.yml b/test/integration/roles/test_service/tasks/upstart_setup.yml index 70fbee26d05..e889ef2789d 100644 --- a/test/integration/roles/test_service/tasks/upstart_setup.yml +++ b/test/integration/roles/test_service/tasks/upstart_setup.yml @@ -1,12 +1,12 @@ - name: install the upstart init file - copy: src=ansible.upstart dest=/etc/init/ansible_test mode=0755 + copy: src=ansible.upstart dest=/etc/init/ansible_test.conf mode=0644 register: install_upstart_result - name: assert that the upstart init file was installed assert: that: - - "install_upstart_result.dest == '/etc/init/ansible_test'" + - "install_upstart_result.dest == '/etc/init/ansible_test.conf'" - 
"install_upstart_result.state == 'file'" - - "install_upstart_result.mode == '0755'" + - "install_upstart_result.mode == '0644'" - "install_upstart_result.md5sum == 'ab3900ea4de8423add764c12aeb90c01'" diff --git a/test/integration/roles/test_subversion/tasks/main.yml b/test/integration/roles/test_subversion/tasks/main.yml index 22503de35c8..1b2d26529da 100644 --- a/test/integration/roles/test_subversion/tasks/main.yml +++ b/test/integration/roles/test_subversion/tasks/main.yml @@ -90,6 +90,10 @@ - debug: var=subverted3 +- name: checkout with export + subversion: repo={{ repo }} dest={{ checkout_dir }} export=True + register: subverted4 + # FIXME: this needs to be fixed in the code see GitHub 6079 #- name: verify on a reclone things are marked unchanged diff --git a/test/integration/roles/test_unarchive/tasks/main.yml b/test/integration/roles/test_unarchive/tasks/main.yml index 817096617bf..56b31e6b2d0 100644 --- a/test/integration/roles/test_unarchive/tasks/main.yml +++ b/test/integration/roles/test_unarchive/tasks/main.yml @@ -64,6 +64,33 @@ - name: remove our tar.gz unarchive destination file: path={{output_dir}}/test-unarchive-tar-gz state=absent +- name: create our tar.gz unarchive destination for creates + file: path={{output_dir}}/test-unarchive-tar-gz state=directory + +- name: unarchive a tar.gz file with creates set + unarchive: src={{output_dir}}/test-unarchive.tar.gz dest={{output_dir | expanduser}}/test-unarchive-tar-gz copy=no creates={{output_dir}}/test-unarchive-tar-gz/foo-unarchive.txt + register: unarchive02b + +- name: verify that the file was marked as changed + assert: + that: + - "unarchive02b.changed == true" + +- name: verify that the file was unarchived + file: path={{output_dir}}/test-unarchive-tar-gz/foo-unarchive.txt state=file + +- name: unarchive a tar.gz file with creates over an existing file + unarchive: src={{output_dir}}/test-unarchive.tar.gz dest={{output_dir | expanduser}}/test-unarchive-tar-gz copy=no creates={{output_dir}}/test-unarchive-tar-gz/foo-unarchive.txt + register: unarchive02c + +- name: verify that the file was not marked as changed + assert: + that: + - "unarchive02c.changed == false" + +- name: remove our tar.gz unarchive destination + file: path={{output_dir}}/test-unarchive-tar-gz state=absent + - name: create our zip unarchive destination file: path={{output_dir}}/test-unarchive-zip state=directory diff --git a/test/integration/roles/test_yum/tasks/main.yml b/test/integration/roles/test_yum/tasks/main.yml index 472dfff8e81..5df887ae9f9 100644 --- a/test/integration/roles/test_yum/tasks/main.yml +++ b/test/integration/roles/test_yum/tasks/main.yml @@ -17,5 +17,5 @@ # along with Ansible. If not, see . - include: 'yum.yml' - when: ansible_distribution in ('RHEL', 'CentOS', 'ScientificLinux', 'Fedora') + when: ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora'] diff --git a/test/integration/roles/test_yum/tasks/yum.yml b/test/integration/roles/test_yum/tasks/yum.yml index 7c0b089ace5..316b8b3a77f 100644 --- a/test/integration/roles/test_yum/tasks/yum.yml +++ b/test/integration/roles/test_yum/tasks/yum.yml @@ -1,4 +1,21 @@ +# UNINSTALL 'yum-utils' +# The `yum` module has the smarts to auto-install `yum-utils`. To test, we +# will first uninstall `yum-utils`. +- name: check yum-utils with rpm + shell: rpm -q yum-utils + register: rpm_result + ignore_errors: true + +# Don't uninstall yum-utils with the `yum` module, it would be bad. The `yum` +# module does some `repoquery` magic after removing a package. 
+# remove `yum-utils`.
+- name: uninstall yum-utils with shell
+  shell: yum -y remove yum-utils
+  when: rpm_result|success
+
 # UNINSTALL
+#   With 'yum-utils' uninstalled, the first call to 'yum' should install
+#   yum-utils.
 - name: uninstall sos
   yum: name=sos state=removed
   register: yum_result
diff --git a/test/integration/test_handlers.yml b/test/integration/test_handlers.yml
new file mode 100644
index 00000000000..dd766a9deaf
--- /dev/null
+++ b/test/integration/test_handlers.yml
@@ -0,0 +1,24 @@
+---
+- name: run handlers
+  hosts: A
+  gather_facts: False
+  connection: local
+  roles:
+    - { role: test_handlers_meta }
+
+- name: verify final handler was run
+  hosts: A
+  gather_facts: False
+  connection: local
+  tasks:
+    - name: verify handler2 ran
+      assert:
+        that:
+          - "not hostvars[inventory_hostname]['handler1_called']"
+          - "'handler2_called' in hostvars[inventory_hostname]"
+
+#- hosts: testgroup
+#  gather_facts: False
+#  connection: local
+#  roles:
+#    - { role: test_handlers_meta }
diff --git a/test/units/TestFilters.py b/test/units/TestFilters.py
index d850db4c3a3..9389147516c 100644
--- a/test/units/TestFilters.py
+++ b/test/units/TestFilters.py
@@ -116,6 +116,21 @@ class TestFilters(unittest.TestCase):
             True)
         assert a == True
 
+    def test_regex_replace_case_sensitive(self):
+        a = ansible.runner.filter_plugins.core.regex_replace('ansible', '^a.*i(.*)$',
+                'a\\1')
+        assert a == 'able'
+
+    def test_regex_replace_case_insensitive(self):
+        a = ansible.runner.filter_plugins.core.regex_replace('ansible', '^A.*I(.*)$',
+                'a\\1', True)
+        assert a == 'able'
+
+    def test_regex_replace_no_match(self):
+        a = ansible.runner.filter_plugins.core.regex_replace('ansible', '^b.*i(.*)$',
+                'a\\1')
+        assert a == 'ansible'
+
     #def test_filters(self):
     # this test is pretty low level using a playbook, hence I am disabling it for now -- MPD.
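For reference, the template-side usage these unit tests exercise (a sketch in
Jinja2 filter syntax, matching the arguments asserted above):

    {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}   # renders 'able'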
@@ -137,3 +152,26 @@ class TestFilters(unittest.TestCase): #out = open(dest).read() #self.assertEqual(DEST, out) + def test_version_compare(self): + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(0, 1.1, 'lt', False)) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.1, 1.2, '<')) + + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.2, 1.2, '==')) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.2, 1.2, '=')) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.2, 1.2, 'eq')) + + + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.3, 1.2, 'gt')) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.3, 1.2, '>')) + + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.3, 1.2, 'ne')) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.3, 1.2, '!=')) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.3, 1.2, '<>')) + + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.1, 1.1, 'ge')) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.2, 1.1, '>=')) + + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.1, 1.1, 'le')) + self.assertTrue(ansible.runner.filter_plugins.core.version_compare(1.0, 1.1, '<=')) + + self.assertTrue(ansible.runner.filter_plugins.core.version_compare('12.04', 12, 'ge')) diff --git a/test/units/TestInventory.py b/test/units/TestInventory.py index 2ae6256e62b..4aae739a233 100644 --- a/test/units/TestInventory.py +++ b/test/units/TestInventory.py @@ -236,9 +236,10 @@ class TestInventory(unittest.TestCase): print vars expected = dict( - a='1', b='2', c='3', d='10002', e='10003', f='10004 != 10005', + a=1, b=2, c=3, d=10002, e=10003, f='10004 != 10005', g=' g ', h=' h ', i="' i \"", j='" j', - rga='1', rgb='2', rgc='3', + k=[ 'k1', 'k2' ], + rga=1, rgb=2, rgc=3, inventory_hostname='rtp_a', inventory_hostname_short='rtp_a', group_names=[ 'eastcoast', 'nc', 'redundantgroup', 'redundantgroup2', 'redundantgroup3', 'rtp', 'us' ] ) @@ -417,15 +418,24 @@ class TestInventory(unittest.TestCase): auth = inventory.get_variables('neptun')['auth'] assert auth == 'YWRtaW46YWRtaW4=' - # test disabled as needs to be updated to model desired behavior - # - #def test_dir_inventory(self): - # inventory = self.dir_inventory() - # vars = inventory.get_variables('zeus') - # - # print "VARS=%s" % vars - # - # assert vars == {'inventory_hostname': 'zeus', - # 'inventory_hostname_short': 'zeus', - # 'group_names': ['greek', 'major-god', 'ungrouped'], - # 'var_a': '1#2'} + def test_dir_inventory(self): + inventory = self.dir_inventory() + + host_vars = inventory.get_variables('zeus') + + expected_vars = {'inventory_hostname': 'zeus', + 'inventory_hostname_short': 'zeus', + 'group_names': ['greek', 'major-god', 'ungrouped'], + 'var_a': '3#4'} + + print "HOST VARS=%s" % host_vars + print "EXPECTED VARS=%s" % expected_vars + + assert host_vars == expected_vars + + def test_dir_inventory_multiple_groups(self): + inventory = self.dir_inventory() + group_greek = inventory.get_hosts('greek') + actual_host_names = [host.name for host in group_greek] + print "greek : %s " % actual_host_names + assert actual_host_names == ['zeus', 'morpheus'] diff --git a/test/units/TestModuleUtilsBasic.py b/test/units/TestModuleUtilsBasic.py new file mode 100644 index 00000000000..d1ee95a1a0e --- /dev/null +++ b/test/units/TestModuleUtilsBasic.py @@ -0,0 +1,161 @@ +import os +import 
tempfile + +import unittest +from nose.tools import raises + +from ansible import errors +from ansible.module_common import ModuleReplacer +from ansible.utils import md5 as utils_md5 + +TEST_MODULE_DATA = """ +from ansible.module_utils.basic import * + +def get_module(): + return AnsibleModule( + argument_spec = dict(), + supports_check_mode = True, + no_log = True, + ) + +get_module() + +""" + +class TestModuleUtilsBasic(unittest.TestCase): + + def cleanup_temp_file(self, fd, path): + try: + os.close(fd) + os.remove(path) + except: + pass + + def cleanup_temp_dir(self, path): + try: + os.rmdir(path) + except: + pass + + def setUp(self): + # create a temporary file for the test module + # we're about to generate + self.tmp_fd, self.tmp_path = tempfile.mkstemp() + os.write(self.tmp_fd, TEST_MODULE_DATA) + + # template the module code and eval it + module_data, module_style, shebang = ModuleReplacer().modify_module(self.tmp_path, {}, "", {}) + + d = {} + exec(module_data, d, d) + self.module = d['get_module']() + + # module_utils/basic.py screws with CWD, let's save it and reset + self.cwd = os.getcwd() + + def tearDown(self): + self.cleanup_temp_file(self.tmp_fd, self.tmp_path) + # Reset CWD back to what it was before basic.py changed it + os.chdir(self.cwd) + + ################################################################################# + # run_command() tests + + # test run_command with a string command + def test_run_command_string(self): + (rc, out, err) = self.module.run_command("/bin/echo -n 'foo bar'") + self.assertEqual(rc, 0) + self.assertEqual(out, 'foo bar') + (rc, out, err) = self.module.run_command("/bin/echo -n 'foo bar'", use_unsafe_shell=True) + self.assertEqual(rc, 0) + self.assertEqual(out, 'foo bar') + + # test run_command with an array of args (with both use_unsafe_shell=True|False) + def test_run_command_args(self): + (rc, out, err) = self.module.run_command(['/bin/echo', '-n', "foo bar"]) + self.assertEqual(rc, 0) + self.assertEqual(out, 'foo bar') + (rc, out, err) = self.module.run_command(['/bin/echo', '-n', "foo bar"], use_unsafe_shell=True) + self.assertEqual(rc, 0) + self.assertEqual(out, 'foo bar') + + # test run_command with leading environment variables + @raises(SystemExit) + def test_run_command_string_with_env_variables(self): + self.module.run_command('FOO=bar /bin/echo -n "foo bar"') + + @raises(SystemExit) + def test_run_command_args_with_env_variables(self): + self.module.run_command(['FOO=bar', '/bin/echo', '-n', 'foo bar']) + + def test_run_command_string_unsafe_with_env_variables(self): + (rc, out, err) = self.module.run_command('FOO=bar /bin/echo -n "foo bar"', use_unsafe_shell=True) + self.assertEqual(rc, 0) + self.assertEqual(out, 'foo bar') + + # test run_command with a command pipe (with both use_unsafe_shell=True|False) + def test_run_command_string_unsafe_with_pipe(self): + (rc, out, err) = self.module.run_command('echo "foo bar" | cat', use_unsafe_shell=True) + self.assertEqual(rc, 0) + self.assertEqual(out, 'foo bar\n') + + # test run_command with a shell redirect in (with both use_unsafe_shell=True|False) + def test_run_command_string_unsafe_with_redirect_in(self): + (rc, out, err) = self.module.run_command('cat << EOF\nfoo bar\nEOF', use_unsafe_shell=True) + self.assertEqual(rc, 0) + self.assertEqual(out, 'foo bar\n') + + # test run_command with a shell redirect out (with both use_unsafe_shell=True|False) + def test_run_command_string_unsafe_with_redirect_out(self): + tmp_fd, tmp_path = tempfile.mkstemp() + try: + (rc, out, err) = 
self.module.run_command('echo "foo bar" > %s' % tmp_path, use_unsafe_shell=True)
+            self.assertEqual(rc, 0)
+            self.assertTrue(os.path.exists(tmp_path))
+            md5sum = utils_md5(tmp_path)
+            self.assertEqual(md5sum, '5ceaa7ed396ccb8e959c02753cb4bd18')
+        except:
+            raise
+        finally:
+            self.cleanup_temp_file(tmp_fd, tmp_path)
+
+    # test run_command with a double shell redirect out (append) (with both use_unsafe_shell=True|False)
+    def test_run_command_string_unsafe_with_double_redirect_out(self):
+        tmp_fd, tmp_path = tempfile.mkstemp()
+        try:
+            (rc, out, err) = self.module.run_command('echo "foo bar" >> %s' % tmp_path, use_unsafe_shell=True)
+            self.assertEqual(rc, 0)
+            self.assertTrue(os.path.exists(tmp_path))
+            md5sum = utils_md5(tmp_path)
+            self.assertEqual(md5sum, '5ceaa7ed396ccb8e959c02753cb4bd18')
+        except:
+            raise
+        finally:
+            self.cleanup_temp_file(tmp_fd, tmp_path)
+
+    # test run_command with data
+    def test_run_command_string_with_data(self):
+        (rc, out, err) = self.module.run_command('cat', data='foo bar')
+        self.assertEqual(rc, 0)
+        self.assertEqual(out, 'foo bar\n')
+
+    # test run_command with binary data
+    def test_run_command_string_with_binary_data(self):
+        (rc, out, err) = self.module.run_command('cat', data='\x41\x42\x43\x44', binary_data=True)
+        self.assertEqual(rc, 0)
+        self.assertEqual(out, 'ABCD')
+
+    # test run_command with a cwd set
+    def test_run_command_string_with_cwd(self):
+        tmp_path = tempfile.mkdtemp()
+        try:
+            (rc, out, err) = self.module.run_command('pwd', cwd=tmp_path)
+            self.assertEqual(rc, 0)
+            self.assertTrue(os.path.exists(tmp_path))
+            self.assertEqual(out.strip(), os.path.realpath(tmp_path))
+        except:
+            raise
+        finally:
+            self.cleanup_temp_dir(tmp_path)
+
+
diff --git a/test/units/TestModules.py b/test/units/TestModules.py
new file mode 100644
index 00000000000..54e3ec3213f
--- /dev/null
+++ b/test/units/TestModules.py
@@ -0,0 +1,30 @@
+# -*- coding: utf-8 -*-
+
+import os
+import ast
+import unittest
+from ansible import utils
+
+
+class TestModules(unittest.TestCase):
+
+    def list_all_modules(self):
+        paths = utils.plugins.module_finder._get_paths()
+        paths = [x for x in paths if os.path.isdir(x)]
+        module_list = []
+        for path in paths:
+            for (dirpath, dirnames, filenames) in os.walk(path):
+                for filename in filenames:
+                    module_list.append(os.path.join(dirpath, filename))
+        return module_list
+
+    def test_ast_parse(self):
+        module_list = self.list_all_modules()
+        ERRORS = []
+        # attempt to parse each module with ast
+        for m in module_list:
+            try:
+                ast.parse(''.join(open(m)))
+            except Exception, e:
+                ERRORS.append((m, e))
+        assert len(ERRORS) == 0, "ast.parse errors: %s" % ERRORS
diff --git a/test/units/TestPlayVarsFiles.py b/test/units/TestPlayVarsFiles.py
new file mode 100644
index 00000000000..cdfa48fe557
--- /dev/null
+++ b/test/units/TestPlayVarsFiles.py
@@ -0,0 +1,415 @@
+#!/usr/bin/env python
+
+import os
+import shutil
+from tempfile import mkstemp
+from tempfile import mkdtemp
+from ansible.playbook.play import Play
+import ansible
+
+import unittest
+from nose.plugins.skip import SkipTest
+
+
+class FakeCallBacks(object):
+    def __init__(self):
+        pass
+    def on_vars_prompt(self):
+        pass
+    def on_import_for_host(self, host, filename):
+        pass
+
+class FakeInventory(object):
+    def __init__(self):
+        self.hosts = {}
+    def basedir(self):
+        return "."
+ def get_variables(self, host, vault_password=None): + if host in self.hosts: + return self.hosts[host] + else: + return {} + +class FakePlayBook(object): + def __init__(self): + self.extra_vars = {} + self.remote_user = None + self.remote_port = None + self.sudo = None + self.sudo_user = None + self.su = None + self.su_user = None + self.transport = None + self.only_tags = None + self.skip_tags = None + self.VARS_CACHE = {} + self.SETUP_CACHE = {} + self.inventory = FakeInventory() + self.callbacks = FakeCallBacks() + + self.VARS_CACHE['localhost'] = {} + + +class TestMe(unittest.TestCase): + + ######################################## + # BASIC FILE LOADING BEHAVIOR TESTS + ######################################## + + def test_play_constructor(self): + # __init__(self, playbook, ds, basedir, vault_password=None) + playbook = FakePlayBook() + ds = { "hosts": "localhost"} + basedir = "." + play = Play(playbook, ds, basedir) + + def test_vars_file(self): + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # create a play with a vars_file + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": [temp_path]} + basedir = "." + play = Play(playbook, ds, basedir) + os.remove(temp_path) + + # make sure the variable was loaded + assert 'foo' in play.vars, "vars_file was not loaded into play.vars" + assert play.vars['foo'] == 'bar', "foo was not set to bar in play.vars" + + def test_vars_file_nonlist_error(self): + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # create a play with a string for vars_files + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": temp_path} + basedir = "." + error_hit = False + try: + play = Play(playbook, ds, basedir) + except: + error_hit = True + os.remove(temp_path) + + assert error_hit == True, "no error was thrown when vars_files was not a list" + + + def test_multiple_vars_files(self): + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # make a second vars file + fd, temp_path2 = mkstemp() + f = open(temp_path2, "wb") + f.write("baz: bang\n") + f.close() + + + # create a play with two vars_files + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": [temp_path, temp_path2]} + basedir = "." + play = Play(playbook, ds, basedir) + os.remove(temp_path) + os.remove(temp_path2) + + # make sure the variables were loaded + assert 'foo' in play.vars, "vars_file was not loaded into play.vars" + assert play.vars['foo'] == 'bar', "foo was not set to bar in play.vars" + assert 'baz' in play.vars, "vars_file2 was not loaded into play.vars" + assert play.vars['baz'] == 'bang', "baz was not set to bang in play.vars" + + def test_vars_files_first_found(self): + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # get a random file path + fd, temp_path2 = mkstemp() + # make sure this file doesn't exist + os.remove(temp_path2) + + # create a play + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": [[temp_path2, temp_path]]} + basedir = "." 
+ play = Play(playbook, ds, basedir) + os.remove(temp_path) + + # make sure the variable was loaded + assert 'foo' in play.vars, "vars_file was not loaded into play.vars" + assert play.vars['foo'] == 'bar', "foo was not set to bar in play.vars" + + def test_vars_files_multiple_found(self): + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # make a second vars file + fd, temp_path2 = mkstemp() + f = open(temp_path2, "wb") + f.write("baz: bang\n") + f.close() + + # create a play + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": [[temp_path, temp_path2]]} + basedir = "." + play = Play(playbook, ds, basedir) + os.remove(temp_path) + os.remove(temp_path2) + + # make sure the variables were loaded + assert 'foo' in play.vars, "vars_file was not loaded into play.vars" + assert play.vars['foo'] == 'bar', "foo was not set to bar in play.vars" + assert 'baz' not in play.vars, "vars_file2 was loaded after vars_file1 was loaded" + + def test_vars_files_assert_all_found(self): + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # make a second vars file + fd, temp_path2 = mkstemp() + # make sure it doesn't exist + os.remove(temp_path2) + + # create a play + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": [temp_path, temp_path2]} + basedir = "." + + error_hit = False + error_msg = None + + try: + play = Play(playbook, ds, basedir) + except ansible.errors.AnsibleError, e: + error_hit = True + error_msg = e + + os.remove(temp_path) + assert error_hit == True, "no error was thrown for missing vars_file" + + + ######################################## + # VARIABLE PRECEDENCE TESTS + ######################################## + + # On the first run vars_files are loaded into play.vars by host == None + # * only files with vars from host==None will work here + # On the secondary run(s), a host is given and the vars_files are loaded into VARS_CACHE + # * this only occurs if host is not None, filename2 has vars in the name, and filename3 does not + + # filename -- the original string + # filename2 -- filename templated with play vars + # filename3 -- filename2 template with inject (hostvars + setup_cache + vars_cache) + # filename4 -- path_dwim(filename3) + + def test_vars_files_for_host(self): + + # host != None + # vars in filename2 + # no vars in filename3 + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # build play attributes + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": ["{{ temp_path }}"]} + basedir = "." + playbook.VARS_CACHE['localhost']['temp_path'] = temp_path + + # create play and do first run + play = Play(playbook, ds, basedir) + + # the second run is started by calling update_vars_files + play.update_vars_files(['localhost']) + os.remove(temp_path) + + assert 'foo' in play.playbook.VARS_CACHE['localhost'], "vars_file vars were not loaded into vars_cache" + assert play.playbook.VARS_CACHE['localhost']['foo'] == 'bar', "foo does not equal bar" + + def test_vars_files_for_host_with_extra_vars(self): + + # host != None + # vars in filename2 + # no vars in filename3 + + # make a vars file + fd, temp_path = mkstemp() + f = open(temp_path, "wb") + f.write("foo: bar\n") + f.close() + + # build play attributes + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars_files": ["{{ temp_path }}"]} + basedir = "." 
+ playbook.VARS_CACHE['localhost']['temp_path'] = temp_path + playbook.extra_vars = {"foo": "extra"} + + # create play and do first run + play = Play(playbook, ds, basedir) + + # the second run is started by calling update_vars_files + play.update_vars_files(['localhost']) + os.remove(temp_path) + + assert 'foo' in play.vars, "extra vars were not set in play.vars" + assert 'foo' in play.playbook.VARS_CACHE['localhost'], "vars_file vars were not loaded into vars_cache" + assert play.playbook.VARS_CACHE['localhost']['foo'] == 'extra', "extra vars did not overwrite vars_files vars" + + + ######################################## + # COMPLEX FILENAME TEMPLATING TESTS + ######################################## + + def test_vars_files_two_vars_in_name(self): + + # self.vars = ds['vars'] + # self.vars += _get_vars() ... aka extra_vars + + # make a temp dir + temp_dir = mkdtemp() + + # make a temp file + fd, temp_file = mkstemp(dir=temp_dir) + f = open(temp_file, "wb") + f.write("foo: bar\n") + f.close() + + # build play attributes + playbook = FakePlayBook() + ds = { "hosts": "localhost", + "vars": { "temp_dir": os.path.dirname(temp_file), + "temp_file": os.path.basename(temp_file) }, + "vars_files": ["{{ temp_dir + '/' + temp_file }}"]} + basedir = "." + + # create play and do first run + play = Play(playbook, ds, basedir) + + # cleanup + shutil.rmtree(temp_dir) + + assert 'foo' in play.vars, "double var templated vars_files filename not loaded" + + def test_vars_files_two_vars_different_scope(self): + + # + # Use a play var and an inventory var to create the filename + # + + # self.playbook.inventory.get_variables(host) + # {'group_names': ['ungrouped'], 'inventory_hostname': 'localhost', + # 'ansible_ssh_user': 'root', 'inventory_hostname_short': 'localhost'} + + # make a temp dir + temp_dir = mkdtemp() + + # make a temp file + fd, temp_file = mkstemp(dir=temp_dir) + f = open(temp_file, "wb") + f.write("foo: bar\n") + f.close() + + # build play attributes + playbook = FakePlayBook() + playbook.inventory.hosts['localhost'] = {'inventory_hostname': os.path.basename(temp_file)} + ds = { "hosts": "localhost", + "vars": { "temp_dir": os.path.dirname(temp_file)}, + "vars_files": ["{{ temp_dir + '/' + inventory_hostname }}"]} + basedir = "." + + # create play and do first run + play = Play(playbook, ds, basedir) + + # do the host run + play.update_vars_files(['localhost']) + + # cleanup + shutil.rmtree(temp_dir) + + assert 'foo' not in play.vars, \ + "mixed scope vars_file loaded into play vars" + assert 'foo' in play.playbook.VARS_CACHE['localhost'], \ + "differently scoped templated vars_files filename not loaded" + assert play.playbook.VARS_CACHE['localhost']['foo'] == 'bar', \ + "foo is not bar" + + def test_vars_files_two_vars_different_scope_first_found(self): + + # + # Use a play var and an inventory var to create the filename + # + + # make a temp dir + temp_dir = mkdtemp() + + # make a temp file + fd, temp_file = mkstemp(dir=temp_dir) + f = open(temp_file, "wb") + f.write("foo: bar\n") + f.close() + + # build play attributes + playbook = FakePlayBook() + playbook.inventory.hosts['localhost'] = {'inventory_hostname': os.path.basename(temp_file)} + ds = { "hosts": "localhost", + "vars": { "temp_dir": os.path.dirname(temp_file)}, + "vars_files": [["{{ temp_dir + '/' + inventory_hostname }}"]]} + basedir = "." 
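+        # the filename template mixes a play var (temp_dir) with an inventory
+        # var (inventory_hostname), so it can only resolve on the per-host run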
+ + # create play and do first run + play = Play(playbook, ds, basedir) + + # do the host run + play.update_vars_files(['localhost']) + + # cleanup + shutil.rmtree(temp_dir) + + assert 'foo' not in play.vars, \ + "mixed scope vars_file loaded into play vars" + assert 'foo' in play.playbook.VARS_CACHE['localhost'], \ + "differently scoped templated vars_files filename not loaded" + assert play.playbook.VARS_CACHE['localhost']['foo'] == 'bar', \ + "foo is not bar" + + diff --git a/test/units/TestSynchronize.py b/test/units/TestSynchronize.py index 7965f2295e7..c6fa31bf9c6 100644 --- a/test/units/TestSynchronize.py +++ b/test/units/TestSynchronize.py @@ -19,12 +19,20 @@ class FakeRunner(object): self.private_key_file = None self.check = False - def _execute_module(self, conn, tmp, module_name, args, inject=None): + def _execute_module(self, conn, tmp, module_name, args, + async_jid=None, async_module=None, async_limit=None, inject=None, + persist_files=False, complex_args=None, delete_remote_tmp=True): self.executed_conn = conn self.executed_tmp = tmp self.executed_module_name = module_name self.executed_args = args + self.executed_async_jid = async_jid + self.executed_async_module = async_module + self.executed_async_limit = async_limit self.executed_inject = inject + self.executed_persist_files = persist_files + self.executed_complex_args = complex_args + self.executed_delete_remote_tmp = delete_remote_tmp def noop_on_check(self, inject): return self.check @@ -60,8 +68,37 @@ class TestSynchronize(unittest.TestCase): x.run(conn, "/tmp", "synchronize", "src=/tmp/foo dest=/tmp/bar", inject) assert runner.executed_inject['delegate_to'] == "127.0.0.1", "was not delegated to 127.0.0.1" - assert runner.executed_args == "dest=root@el6.lab.net:/tmp/bar src=/tmp/foo", "wrong args used" - assert runner.sudo == False, "sudo not set to false" + assert runner.executed_complex_args == {"dest":"root@el6.lab.net:/tmp/bar", "src":"/tmp/foo"}, "wrong args used" + assert runner.sudo == None, "sudo was not reset to None" + + def test_synchronize_action_sudo(self): + + """ verify the synchronize action plugin unsets and then sets sudo """ + + runner = FakeRunner() + runner.sudo = True + runner.remote_user = "root" + runner.transport = "ssh" + conn = FakeConn() + inject = { + 'inventory_hostname': "el6.lab.net", + 'inventory_hostname_short': "el6", + 'ansible_connection': None, + 'ansible_ssh_user': 'root', + 'delegate_to': None, + 'playbook_dir': '.', + } + + x = Synchronize(runner) + x.setup("synchronize", inject) + x.run(conn, "/tmp", "synchronize", "src=/tmp/foo dest=/tmp/bar", inject) + + assert runner.executed_inject['delegate_to'] == "127.0.0.1", "was not delegated to 127.0.0.1" + assert runner.executed_complex_args == {'dest':'root@el6.lab.net:/tmp/bar', + 'src':'/tmp/foo', + 'rsync_path':'"sudo rsync"'}, "wrong args used" + assert runner.sudo == True, "sudo was not reset to True" + def test_synchronize_action_local(self): @@ -89,9 +126,9 @@ class TestSynchronize(unittest.TestCase): assert runner.transport == "paramiko", "runner transport was changed" assert runner.remote_user == "jtanner", "runner remote_user was changed" assert runner.executed_inject['delegate_to'] == "127.0.0.1", "was not delegated to 127.0.0.1" - assert "dest_port" not in runner.executed_args, "dest_port should not have been set" - assert "src=/tmp/foo" in runner.executed_args, "source was set incorrectly" - assert "dest=/tmp/bar" in runner.executed_args, "dest was set incorrectly" + assert "dest_port" not in 
runner.executed_complex_args, "dest_port should not have been set" + assert runner.executed_complex_args.get("src") == "/tmp/foo", "source was set incorrectly" + assert runner.executed_complex_args.get("dest") == "/tmp/bar", "dest was set incorrectly" def test_synchronize_action_vagrant(self): @@ -130,7 +167,7 @@ class TestSynchronize(unittest.TestCase): assert runner.remote_user == "jtanner", "runner remote_user was changed" assert runner.executed_inject['delegate_to'] == "127.0.0.1", "was not delegated to 127.0.0.1" assert runner.executed_inject['ansible_ssh_user'] == "vagrant", "runner user was changed" - assert "dest_port=2222" in runner.executed_args, "remote port was not set to 2222" - assert "src=/tmp/foo" in runner.executed_args, "source was set incorrectly" - assert "dest=vagrant@127.0.0.1:/tmp/bar" in runner.executed_args, "dest was set incorrectly" + assert runner.executed_complex_args.get("dest_port") == "2222", "remote port was not set to 2222" + assert runner.executed_complex_args.get("src") == "/tmp/foo", "source was set incorrectly" + assert runner.executed_complex_args.get("dest") == "vagrant@127.0.0.1:/tmp/bar", "dest was set incorrectly" diff --git a/test/units/TestUtils.py b/test/units/TestUtils.py index 4bddb4748ba..c60a0d82910 100644 --- a/test/units/TestUtils.py +++ b/test/units/TestUtils.py @@ -4,115 +4,676 @@ import unittest import os import os.path import tempfile +import yaml +import passlib.hash +import string +import StringIO +import copy from nose.plugins.skip import SkipTest import ansible.utils +import ansible.errors +import ansible.constants as C import ansible.utils.template as template2 +from ansible import __version__ + import sys reload(sys) sys.setdefaultencoding("utf8") class TestUtils(unittest.TestCase): + def test_before_comment(self): + ''' see if we can detect the part of a string before a comment. 
Used by INI parser in inventory ''' + + input = "before # comment" + expected = "before " + actual = ansible.utils.before_comment(input) + self.assertEqual(expected, actual) + + input = "before \# not a comment" + expected = "before # not a comment" + actual = ansible.utils.before_comment(input) + self.assertEqual(expected, actual) + + input = "" + expected = "" + actual = ansible.utils.before_comment(input) + self.assertEqual(expected, actual) + + input = "#" + expected = "" + actual = ansible.utils.before_comment(input) + self.assertEqual(expected, actual) + ##################################### ### check_conditional tests def test_check_conditional_jinja2_literals(self): # see http://jinja.pocoo.org/docs/templates/#literals + # none + self.assertEqual(ansible.utils.check_conditional( + None, '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + '', '/', {}), True) + + # list + self.assertEqual(ansible.utils.check_conditional( + ['true'], '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + ['false'], '/', {}), False) + + # non basestring or list + self.assertEqual(ansible.utils.check_conditional( + {}, '/', {}), {}) + # boolean - assert(ansible.utils.check_conditional( - 'true', '/', {}) == True) - assert(ansible.utils.check_conditional( - 'false', '/', {}) == False) - assert(ansible.utils.check_conditional( - 'True', '/', {}) == True) - assert(ansible.utils.check_conditional( - 'False', '/', {}) == False) + self.assertEqual(ansible.utils.check_conditional( + 'true', '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + 'false', '/', {}), False) + self.assertEqual(ansible.utils.check_conditional( + 'True', '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + 'False', '/', {}), False) # integer - assert(ansible.utils.check_conditional( - '1', '/', {}) == True) - assert(ansible.utils.check_conditional( - '0', '/', {}) == False) + self.assertEqual(ansible.utils.check_conditional( + '1', '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + '0', '/', {}), False) # string, beware, a string is truthy unless empty - assert(ansible.utils.check_conditional( - '"yes"', '/', {}) == True) - assert(ansible.utils.check_conditional( - '"no"', '/', {}) == True) - assert(ansible.utils.check_conditional( - '""', '/', {}) == False) + self.assertEqual(ansible.utils.check_conditional( + '"yes"', '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + '"no"', '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + '""', '/', {}), False) def test_check_conditional_jinja2_variable_literals(self): # see http://jinja.pocoo.org/docs/templates/#literals # boolean - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 'True'}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 'true'}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 'False'}) == False) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 'false'}) == False) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 'True'}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 'true'}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 'False'}), False) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 'false'}), False) # integer - assert(ansible.utils.check_conditional( - 'var', '/', {'var': '1'}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 1}) == 
True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': '0'}) == False) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 0}) == False) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': '1'}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 1}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': '0'}), False) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 0}), False) # string, beware, a string is truthy unless empty - assert(ansible.utils.check_conditional( - 'var', '/', {'var': '"yes"'}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': '"no"'}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': '""'}) == False) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': '"yes"'}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': '"no"'}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': '""'}), False) # Python boolean in Jinja2 expression - assert(ansible.utils.check_conditional( - 'var', '/', {'var': True}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': False}) == False) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': True}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': False}), False) def test_check_conditional_jinja2_expression(self): - assert(ansible.utils.check_conditional( - '1 == 1', '/', {}) == True) - assert(ansible.utils.check_conditional( - 'bar == 42', '/', {'bar': 42}) == True) - assert(ansible.utils.check_conditional( - 'bar != 42', '/', {'bar': 42}) == False) + self.assertEqual(ansible.utils.check_conditional( + '1 == 1', '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + 'bar == 42', '/', {'bar': 42}), True) + self.assertEqual(ansible.utils.check_conditional( + 'bar != 42', '/', {'bar': 42}), False) def test_check_conditional_jinja2_expression_in_variable(self): - assert(ansible.utils.check_conditional( - 'var', '/', {'var': '1 == 1'}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 'bar == 42', 'bar': 42}) == True) - assert(ansible.utils.check_conditional( - 'var', '/', {'var': 'bar != 42', 'bar': 42}) == False) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': '1 == 1'}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 'bar == 42', 'bar': 42}), True) + self.assertEqual(ansible.utils.check_conditional( + 'var', '/', {'var': 'bar != 42', 'bar': 42}), False) def test_check_conditional_jinja2_unicode(self): - assert(ansible.utils.check_conditional( - u'"\u00df"', '/', {}) == True) - assert(ansible.utils.check_conditional( - u'var == "\u00df"', '/', {'var': u'\u00df'}) == True) + self.assertEqual(ansible.utils.check_conditional( + u'"\u00df"', '/', {}), True) + self.assertEqual(ansible.utils.check_conditional( + u'var == "\u00df"', '/', {'var': u'\u00df'}), True) ##################################### ### key-value parsing def test_parse_kv_basic(self): - assert (ansible.utils.parse_kv('a=simple b="with space" c="this=that"') == + self.assertEqual(ansible.utils.parse_kv('a=simple b="with space" c="this=that"'), {'a': 'simple', 'b': 'with space', 'c': 'this=that'}) + + def test_jsonify(self): + self.assertEqual(ansible.utils.jsonify(None), '{}') + self.assertEqual(ansible.utils.jsonify(dict(foo='bar', baz=['qux'])), + '{"baz": ["qux"], 
"foo": "bar"}') + expected = '''{ + "baz": [ + "qux" + ], + "foo": "bar" +}''' + self.assertEqual(ansible.utils.jsonify(dict(foo='bar', baz=['qux']), format=True), expected) + + def test_is_failed(self): + self.assertEqual(ansible.utils.is_failed(dict(rc=0)), False) + self.assertEqual(ansible.utils.is_failed(dict(rc=1)), True) + self.assertEqual(ansible.utils.is_failed(dict()), False) + self.assertEqual(ansible.utils.is_failed(dict(failed=False)), False) + self.assertEqual(ansible.utils.is_failed(dict(failed=True)), True) + self.assertEqual(ansible.utils.is_failed(dict(failed='True')), True) + self.assertEqual(ansible.utils.is_failed(dict(failed='true')), True) + + def test_is_changed(self): + self.assertEqual(ansible.utils.is_changed(dict()), False) + self.assertEqual(ansible.utils.is_changed(dict(changed=False)), False) + self.assertEqual(ansible.utils.is_changed(dict(changed=True)), True) + self.assertEqual(ansible.utils.is_changed(dict(changed='True')), True) + self.assertEqual(ansible.utils.is_changed(dict(changed='true')), True) + + def test_path_dwim(self): + self.assertEqual(ansible.utils.path_dwim(None, __file__), + __file__) + self.assertEqual(ansible.utils.path_dwim(None, '~'), + os.path.expanduser('~')) + self.assertEqual(ansible.utils.path_dwim(None, 'TestUtils.py'), + __file__.rstrip('c')) + + def test_path_dwim_relative(self): + self.assertEqual(ansible.utils.path_dwim_relative(__file__, 'units', 'TestUtils.py', + os.path.dirname(os.path.dirname(__file__))), + __file__.rstrip('c')) + + def test_json_loads(self): + self.assertEqual(ansible.utils.json_loads('{"foo": "bar"}'), dict(foo='bar')) + + def test_parse_json(self): + # leading junk + self.assertEqual(ansible.utils.parse_json('ansible\n{"foo": "bar"}'), dict(foo="bar")) + + # "baby" json + self.assertEqual(ansible.utils.parse_json('foo=bar baz=qux'), dict(foo='bar', baz='qux')) + + # No closing quotation + try: + ansible.utils.parse_json('foo=bar "') + except ValueError: + pass + else: + raise AssertionError('Incorrect exception, expected ValueError') + + # Failed to parse + try: + ansible.utils.parse_json('{') + except ansible.errors.AnsibleError: + pass + else: + raise AssertionError('Incorrect exception, expected ansible.errors.AnsibleError') + + # boolean changed/failed + self.assertEqual(ansible.utils.parse_json('changed=true'), dict(changed=True)) + self.assertEqual(ansible.utils.parse_json('changed=false'), dict(changed=False)) + self.assertEqual(ansible.utils.parse_json('failed=true'), dict(failed=True)) + self.assertEqual(ansible.utils.parse_json('failed=false'), dict(failed=False)) + + # rc + self.assertEqual(ansible.utils.parse_json('rc=0'), dict(rc=0)) + + # Just a string + self.assertEqual(ansible.utils.parse_json('foo'), dict(failed=True, parsed=False, msg='foo')) + + def test_smush_braces(self): + self.assertEqual(ansible.utils.smush_braces('{{ foo}}'), '{{foo}}') + self.assertEqual(ansible.utils.smush_braces('{{foo }}'), '{{foo}}') + self.assertEqual(ansible.utils.smush_braces('{{ foo }}'), '{{foo}}') + + def test_smush_ds(self): + # list + self.assertEqual(ansible.utils.smush_ds(['foo={{ foo }}']), ['foo={{foo}}']) + + # dict + self.assertEqual(ansible.utils.smush_ds(dict(foo='{{ foo }}')), dict(foo='{{foo}}')) + + # string + self.assertEqual(ansible.utils.smush_ds('foo={{ foo }}'), 'foo={{foo}}') + + # int + self.assertEqual(ansible.utils.smush_ds(0), 0) + + def test_parse_yaml(self): + #json + self.assertEqual(ansible.utils.parse_yaml('{"foo": "bar"}'), dict(foo='bar')) + + # broken json + try: + 
ansible.utils.parse_yaml('{') + except ansible.errors.AnsibleError: + pass + else: + raise AssertionError + + # broken json with path_hint + try: + ansible.utils.parse_yaml('{', path_hint='foo') + except ansible.errors.AnsibleError: + pass + else: + raise AssertionError + + # yaml with front-matter + self.assertEqual(ansible.utils.parse_yaml("---\nfoo: bar"), dict(foo='bar')) + # yaml no front-matter + self.assertEqual(ansible.utils.parse_yaml('foo: bar'), dict(foo='bar')) + # yaml indented first line (See #6348) + self.assertEqual(ansible.utils.parse_yaml(' - foo: bar\n baz: qux'), [dict(foo='bar', baz='qux')]) + + def test_process_common_errors(self): + # no quote + self.assertTrue('YAML thought it' in ansible.utils.process_common_errors('', 'foo: {{bar}}', 6)) + + # extra colon + self.assertTrue('an extra unquoted colon' in ansible.utils.process_common_errors('', 'foo: bar:', 8)) + + # match + self.assertTrue('same kind of quote' in ansible.utils.process_common_errors('', 'foo: "{{bar}}"baz', 6)) + self.assertTrue('same kind of quote' in ansible.utils.process_common_errors('', "foo: '{{bar}}'baz", 6)) + + # unbalanced + self.assertTrue('We could be wrong' in ansible.utils.process_common_errors('', 'foo: "bad" "wolf"', 6)) + self.assertTrue('We could be wrong' in ansible.utils.process_common_errors('', "foo: 'bad' 'wolf'", 6)) + + + def test_process_yaml_error(self): + data = 'foo: bar\n baz: qux' + try: + ansible.utils.parse_yaml(data) + except yaml.YAMLError, exc: + try: + ansible.utils.process_yaml_error(exc, data, __file__) + except ansible.errors.AnsibleYAMLValidationFailed, e: + self.assertTrue('Syntax Error while loading' in e.msg) + else: + raise AssertionError('Incorrect exception, expected AnsibleYAMLValidationFailed') + + data = 'foo: bar\n baz: {{qux}}' + try: + ansible.utils.parse_yaml(data) + except yaml.YAMLError, exc: + try: + ansible.utils.process_yaml_error(exc, data, __file__) + except ansible.errors.AnsibleYAMLValidationFailed, e: + self.assertTrue('Syntax Error while loading' in e.msg) + else: + raise AssertionError('Incorrect exception, expected AnsibleYAMLValidationFailed') + + data = '\xFF' + try: + ansible.utils.parse_yaml(data) + except yaml.YAMLError, exc: + try: + ansible.utils.process_yaml_error(exc, data, __file__) + except ansible.errors.AnsibleYAMLValidationFailed, e: + self.assertTrue('Check over' in e.msg) + else: + raise AssertionError('Incorrect exception, expected AnsibleYAMLValidationFailed') + + data = '\xFF' + try: + ansible.utils.parse_yaml(data) + except yaml.YAMLError, exc: + try: + ansible.utils.process_yaml_error(exc, data, None) + except ansible.errors.AnsibleYAMLValidationFailed, e: + self.assertTrue('Could not parse YAML.' 
in e.msg) + else: + raise AssertionError('Incorrect exception, expected AnsibleYAMLValidationFailed') + + def test_parse_yaml_from_file(self): + test = os.path.join(os.path.dirname(__file__), 'inventory_test_data', + 'common_vars.yml') + encrypted = os.path.join(os.path.dirname(__file__), 'inventory_test_data', + 'encrypted.yml') + broken = os.path.join(os.path.dirname(__file__), 'inventory_test_data', + 'broken.yml') + + try: + ansible.utils.parse_yaml_from_file(os.path.dirname(__file__)) + except ansible.errors.AnsibleError: + pass + else: + raise AssertionError('Incorrect exception, expected AnsibleError') + + self.assertEqual(ansible.utils.parse_yaml_from_file(test), yaml.safe_load(open(test))) + + self.assertEqual(ansible.utils.parse_yaml_from_file(encrypted, 'ansible'), dict(foo='bar')) + + try: + ansible.utils.parse_yaml_from_file(broken) + except ansible.errors.AnsibleYAMLValidationFailed, e: + self.assertTrue('Syntax Error while loading' in e.msg) + else: + raise AssertionError('Incorrect exception, expected AnsibleYAMLValidationFailed') + + def test_merge_hash(self): + self.assertEqual(ansible.utils.merge_hash(dict(foo='bar', baz='qux'), dict(foo='baz')), + dict(foo='baz', baz='qux')) + self.assertEqual(ansible.utils.merge_hash(dict(foo=dict(bar='baz')), dict(foo=dict(bar='qux'))), + dict(foo=dict(bar='qux'))) + + def test_md5s(self): + self.assertEqual(ansible.utils.md5s('ansible'), '640c8a5376aa12fa15cf02130ce239a6') + # Need a test that causes UnicodeEncodeError See 4221 + + def test_md5(self): + self.assertEqual(ansible.utils.md5(os.path.join(os.path.dirname(__file__), 'ansible.cfg')), + 'fb7b5b90ea63f04bde33e804b6fad42c') + self.assertEqual(ansible.utils.md5(os.path.join(os.path.dirname(__file__), 'ansible.cf')), + None) + + def test_default(self): + self.assertEqual(ansible.utils.default(None, lambda: {}), {}) + self.assertEqual(ansible.utils.default(dict(foo='bar'), lambda: {}), dict(foo='bar')) + + def test__gitinfo(self): + # this fails if not run from git clone + # self.assertEqual('last updated' in ansible.utils._gitinfo()) + # missing test for git submodule + # missing test outside of git clone + pass + + def test_version(self): + version = ansible.utils.version('ansible') + self.assertTrue(version.startswith('ansible %s' % __version__)) + # this fails if not run from git clone + # self.assertEqual('last updated' in version) + + def test_getch(self): + # figure out how to test this + pass + + def test_sanitize_output(self): + self.assertEqual(ansible.utils.sanitize_output('password=foo'), 'password=VALUE_HIDDEN') + self.assertEqual(ansible.utils.sanitize_output('foo=user:pass@foo/whatever'), + 'foo=user:********@foo/whatever') + self.assertEqual(ansible.utils.sanitize_output('foo=http://username:pass@wherever/foo'), + 'foo=http://username:********@wherever/foo') + self.assertEqual(ansible.utils.sanitize_output('foo=http://wherever/foo'), + 'foo=http://wherever/foo') + + def test_increment_debug(self): + ansible.utils.VERBOSITY = 0 + ansible.utils.increment_debug(None, None, None, None) + self.assertEqual(ansible.utils.VERBOSITY, 1) + + def test_base_parser(self): + output = ansible.utils.base_parser(output_opts=True) + self.assertTrue(output.has_option('--one-line') and output.has_option('--tree')) + + runas = ansible.utils.base_parser(runas_opts=True) + for opt in ['--sudo', '--sudo-user', '--user', '--su', '--su-user']: + self.assertTrue(runas.has_option(opt)) + + async = ansible.utils.base_parser(async_opts=True) + self.assertTrue(async.has_option('--poll') and 
async.has_option('--background')) + + connect = ansible.utils.base_parser(connect_opts=True) + self.assertTrue(connect.has_option('--connection')) + + subset = ansible.utils.base_parser(subset_opts=True) + self.assertTrue(subset.has_option('--limit')) + + check = ansible.utils.base_parser(check_opts=True) + self.assertTrue(check.has_option('--check')) + + diff = ansible.utils.base_parser(diff_opts=True) + self.assertTrue(diff.has_option('--diff')) + + def test_do_encrypt(self): + salt_chars = string.ascii_letters + string.digits + './' + salt = ansible.utils.random_password(length=8, chars=salt_chars) + hash = ansible.utils.do_encrypt('ansible', 'sha256_crypt', salt=salt) + self.assertTrue(passlib.hash.sha256_crypt.verify('ansible', hash)) + + hash = ansible.utils.do_encrypt('ansible', 'sha256_crypt') + self.assertTrue(passlib.hash.sha256_crypt.verify('ansible', hash)) + + hash = ansible.utils.do_encrypt('ansible', 'md5_crypt', salt_size=4) + self.assertTrue(passlib.hash.md5_crypt.verify('ansible', hash)) + + + try: + ansible.utils.do_encrypt('ansible', 'ansible') + except ansible.errors.AnsibleError: + pass + else: + raise AssertionError('Incorrect exception, expected AnsibleError') + + def test_last_non_blank_line(self): + self.assertEqual(ansible.utils.last_non_blank_line('a\n\nb\n\nc'), 'c') + self.assertEqual(ansible.utils.last_non_blank_line(''), '') + + def test_filter_leading_non_json_lines(self): + self.assertEqual(ansible.utils.filter_leading_non_json_lines('a\nb\nansible!\n{"foo": "bar"}'), + '{"foo": "bar"}\n') + self.assertEqual(ansible.utils.filter_leading_non_json_lines('a\nb\nansible!\n["foo", "bar"]'), + '["foo", "bar"]\n') + self.assertEqual(ansible.utils.filter_leading_non_json_lines('a\nb\nansible!\nfoo=bar'), + 'foo=bar\n') + + def test_boolean(self): + self.assertEqual(ansible.utils.boolean("true"), True) + self.assertEqual(ansible.utils.boolean("True"), True) + self.assertEqual(ansible.utils.boolean("TRUE"), True) + self.assertEqual(ansible.utils.boolean("t"), True) + self.assertEqual(ansible.utils.boolean("T"), True) + self.assertEqual(ansible.utils.boolean("Y"), True) + self.assertEqual(ansible.utils.boolean("y"), True) + self.assertEqual(ansible.utils.boolean("1"), True) + self.assertEqual(ansible.utils.boolean(1), True) + self.assertEqual(ansible.utils.boolean("false"), False) + self.assertEqual(ansible.utils.boolean("False"), False) + self.assertEqual(ansible.utils.boolean("0"), False) + self.assertEqual(ansible.utils.boolean(0), False) + self.assertEqual(ansible.utils.boolean("foo"), False) + + #def test_make_sudo_cmd(self): + # cmd = ansible.utils.make_sudo_cmd('root', '/bin/sh', '/bin/ls') + # self.assertTrue(isinstance(cmd, tuple)) + # self.assertEqual(len(cmd), 3) + # self.assertTrue('-u root' in cmd[0]) + # self.assertTrue('-p "[sudo via ansible, key=' in cmd[0] and cmd[1].startswith('[sudo via ansible, key')) + # self.assertTrue('echo SUDO-SUCCESS-' in cmd[0] and cmd[2].startswith('SUDO-SUCCESS-')) + # self.assertTrue('sudo -k' in cmd[0]) + + def test_make_su_cmd(self): + cmd = ansible.utils.make_su_cmd('root', '/bin/sh', '/bin/ls') + self.assertTrue(isinstance(cmd, tuple)) + self.assertEqual(len(cmd), 3) + self.assertTrue(' root /bin/sh' in cmd[0]) + self.assertTrue(cmd[1] == 'assword: ') + self.assertTrue('echo SUDO-SUCCESS-' in cmd[0] and cmd[2].startswith('SUDO-SUCCESS-')) + + def test_to_unicode(self): + uni = ansible.utils.to_unicode(u'ansible') + self.assertTrue(isinstance(uni, unicode)) + self.assertEqual(uni, u'ansible') + + none = 
ansible.utils.to_unicode(None) + self.assertTrue(isinstance(none, type(None))) + self.assertTrue(none is None) + + utf8 = ansible.utils.to_unicode('ansible') + self.assertTrue(isinstance(utf8, unicode)) + self.assertEqual(utf8, u'ansible') + + def test_is_list_of_strings(self): + self.assertEqual(ansible.utils.is_list_of_strings(['foo', 'bar', u'baz']), True) + self.assertEqual(ansible.utils.is_list_of_strings(['foo', 'bar', True]), False) + self.assertEqual(ansible.utils.is_list_of_strings(['one', 2, 'three']), False) + + def test_safe_eval(self): + # Not basestring + self.assertEqual(ansible.utils.safe_eval(len), len) + self.assertEqual(ansible.utils.safe_eval(1), 1) + self.assertEqual(ansible.utils.safe_eval(len, include_exceptions=True), (len, None)) + self.assertEqual(ansible.utils.safe_eval(1, include_exceptions=True), (1, None)) + + # module + self.assertEqual(ansible.utils.safe_eval('foo.bar('), 'foo.bar(') + self.assertEqual(ansible.utils.safe_eval('foo.bar(', include_exceptions=True), ('foo.bar(', None)) + + # import + self.assertEqual(ansible.utils.safe_eval('import foo'), 'import foo') + self.assertEqual(ansible.utils.safe_eval('import foo', include_exceptions=True), ('import foo', None)) + + # valid simple eval + self.assertEqual(ansible.utils.safe_eval('True'), True) + self.assertEqual(ansible.utils.safe_eval('True', include_exceptions=True), (True, None)) + + # valid eval with lookup + self.assertEqual(ansible.utils.safe_eval('foo + bar', dict(foo=1, bar=2)), 3) + self.assertEqual(ansible.utils.safe_eval('foo + bar', dict(foo=1, bar=2), include_exceptions=True), (3, None)) + + # invalid eval + self.assertEqual(ansible.utils.safe_eval('foo'), 'foo') + nameerror = ansible.utils.safe_eval('foo', include_exceptions=True) + self.assertTrue(isinstance(nameerror, tuple)) + self.assertEqual(nameerror[0], 'foo') + self.assertTrue(isinstance(nameerror[1], NameError)) + + def test_listify_lookup_plugin_terms(self): + basedir = os.path.dirname(__file__) + self.assertEqual(ansible.utils.listify_lookup_plugin_terms('things', basedir, dict()), + ['things']) + self.assertEqual(ansible.utils.listify_lookup_plugin_terms('things', basedir, dict(things=['one', 'two'])), + ['one', 'two']) + + def test_deprecated(self): + sys_stderr = sys.stderr + sys.stderr = StringIO.StringIO() + ansible.utils.deprecated('Ack!', '0.0') + out = sys.stderr.getvalue() + self.assertTrue('0.0' in out) + self.assertTrue('[DEPRECATION WARNING]' in out) + + sys.stderr = StringIO.StringIO() + ansible.utils.deprecated('Ack!', None) + out = sys.stderr.getvalue() + self.assertTrue('0.0' not in out) + self.assertTrue('[DEPRECATION WARNING]' in out) + + sys.stderr = StringIO.StringIO() + warnings = C.DEPRECATION_WARNINGS + C.DEPRECATION_WARNINGS = False + ansible.utils.deprecated('Ack!', None) + out = sys.stderr.getvalue() + self.assertTrue(not out) + C.DEPRECATION_WARNINGS = warnings + + sys.stderr = sys_stderr + + try: + ansible.utils.deprecated('Ack!', '0.0', True) + except ansible.errors.AnsibleError, e: + self.assertTrue('0.0' not in e.msg) + self.assertTrue('[DEPRECATED]' in e.msg) + else: + raise AssertionError("Incorrect exception, expected AnsibleError") + + def test_warning(self): + sys_stderr = sys.stderr + sys.stderr = StringIO.StringIO() + ansible.utils.warning('ANSIBLE') + out = sys.stderr.getvalue() + sys.stderr = sys_stderr + self.assertTrue('[WARNING]: ANSIBLE' in out) + + def test_combine_vars(self): + one = {'foo': {'bar': True}, 'baz': {'one': 'qux'}} + two = {'baz': {'two': 'qux'}} + replace = {'baz': 
{'two': 'qux'}, 'foo': {'bar': True}} + merge = {'baz': {'two': 'qux', 'one': 'qux'}, 'foo': {'bar': True}} + + C.DEFAULT_HASH_BEHAVIOUR = 'replace' + self.assertEqual(ansible.utils.combine_vars(one, two), replace) + + C.DEFAULT_HASH_BEHAVIOUR = 'merge' + self.assertEqual(ansible.utils.combine_vars(one, two), merge) + + def test_err(self): + sys_stderr = sys.stderr + sys.stderr = StringIO.StringIO() + ansible.utils.err('ANSIBLE') + out = sys.stderr.getvalue() + sys.stderr = sys_stderr + self.assertEqual(out, 'ANSIBLE\n') + + def test_exit(self): + sys_stderr = sys.stderr + sys.stderr = StringIO.StringIO() + try: + ansible.utils.exit('ansible') + except SystemExit, e: + self.assertEqual(e.code, 1) + self.assertEqual(sys.stderr.getvalue(), 'ansible\n') + else: + raise AssertionError('Incorrect exception, expected SystemExit') + finally: + sys.stderr = sys_stderr + + def test_unfrackpath(self): + os.environ['TEST_ROOT'] = os.path.dirname(os.path.dirname(__file__)) + self.assertEqual(ansible.utils.unfrackpath('$TEST_ROOT/units/../units/TestUtils.py'), __file__.rstrip('c')) + + def test_is_executable(self): + self.assertEqual(ansible.utils.is_executable(__file__), 0) + + bin_ansible = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), + 'bin', 'ansible') + self.assertNotEqual(ansible.utils.is_executable(bin_ansible), 0) + + def test_get_diff(self): + standard = dict( + before_header='foo', + after_header='bar', + before='fooo', + after='foo' + ) + + standard_expected = """--- before: foo ++++ after: bar +@@ -1 +1 @@ +-fooo+foo""" + + # workaround py26 and py27 difflib differences + standard_expected = """-fooo+foo""" + diff = ansible.utils.get_diff(standard) + diff = diff.split('\n') + del diff[0] + del diff[0] + del diff[0] + diff = '\n'.join(diff) + self.assertEqual(diff, unicode(standard_expected)) + diff --git a/test/units/TestUtilsStringFunctions.py b/test/units/TestUtilsStringFunctions.py new file mode 100644 index 00000000000..cccedf280d3 --- /dev/null +++ b/test/units/TestUtilsStringFunctions.py @@ -0,0 +1,33 @@ +# -*- coding: utf-8 -*- + +import unittest +import os +import os.path +import tempfile +import yaml +import passlib.hash +import string +import StringIO +import copy + +from nose.plugins.skip import SkipTest + +from ansible.utils import string_functions +import ansible.errors +import ansible.constants as C +import ansible.utils.template as template2 + +from ansible import __version__ + +import sys +reload(sys) +sys.setdefaultencoding("utf8") + +class TestUtilsStringFunctions(unittest.TestCase): + def test_isprintable(self): + self.assertFalse(string_functions.isprintable(chr(7))) + self.assertTrue(string_functions.isprintable('hello')) + + def test_count_newlines_from_end(self): + self.assertEqual(string_functions.count_newlines_from_end('foo\n\n\n\n'), 4) + self.assertEqual(string_functions.count_newlines_from_end('\nfoo'), 0) diff --git a/test/units/TestVault.py b/test/units/TestVault.py index f42188057f8..415d5c14aa8 100644 --- a/test/units/TestVault.py +++ b/test/units/TestVault.py @@ -12,6 +12,21 @@ from nose.plugins.skip import SkipTest from ansible import errors from ansible.utils.vault import VaultLib + +# Counter import fails for 2.0.1, requires >= 2.6.1 from pip +try: + from Crypto.Util import Counter + HAS_COUNTER = True +except ImportError: + HAS_COUNTER = False + +# KDF import fails for 2.0.1, requires >= 2.6.1 from pip +try: + from Crypto.Protocol.KDF import PBKDF2 + HAS_PBKDF2 = True +except ImportError: + HAS_PBKDF2 = False + # AES IMPORTS 
try: from Crypto.Cipher import AES as AES @@ -26,8 +41,8 @@ class TestVaultLib(TestCase): slots = ['is_encrypted', 'encrypt', 'decrypt', - '_add_headers_and_hexify_encrypted_data', - '_split_headers_and_get_unhexified_data',] + '_add_header', + '_split_header',] for slot in slots: assert hasattr(v, slot), "VaultLib is missing the %s method" % slot @@ -41,9 +56,7 @@ class TestVaultLib(TestCase): v = VaultLib('ansible') v.cipher_name = "TEST" sensitive_data = "ansible" - sensitive_hex = hexlify(sensitive_data) - data = v._add_headers_and_hexify_encrypted_data(sensitive_data) - open("/tmp/awx.log", "a").write("data: %s\n" % data) + data = v._add_header(sensitive_data) lines = data.split('\n') assert len(lines) > 1, "failed to properly add header" header = lines[0] @@ -53,19 +66,18 @@ class TestVaultLib(TestCase): assert header_parts[0] == '$ANSIBLE_VAULT', "header does not start with $ANSIBLE_VAULT" assert header_parts[1] == v.version, "header version is incorrect" assert header_parts[2] == 'TEST', "header does end with cipher name" - assert lines[1] == sensitive_hex - def test_remove_header(self): + def test_split_header(self): v = VaultLib('ansible') - data = "$ANSIBLE_VAULT;9.9;TEST\n%s" % hexlify("ansible") - rdata = v._split_headers_and_get_unhexified_data(data) + data = "$ANSIBLE_VAULT;9.9;TEST\nansible" + rdata = v._split_header(data) lines = rdata.split('\n') assert lines[0] == "ansible" assert v.cipher_name == 'TEST', "cipher name was not set" assert v.version == "9.9" - def test_encyrpt_decrypt(self): - if not HAS_AES: + def test_encrypt_decrypt_aes(self): + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: raise SkipTest v = VaultLib('ansible') v.cipher_name = 'AES' @@ -74,8 +86,18 @@ class TestVaultLib(TestCase): assert enc_data != "foobar", "encryption failed" assert dec_data == "foobar", "decryption failed" + def test_encrypt_decrypt_aes256(self): + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: + raise SkipTest + v = VaultLib('ansible') + v.cipher_name = 'AES256' + enc_data = v.encrypt("foobar") + dec_data = v.decrypt(enc_data) + assert enc_data != "foobar", "encryption failed" + assert dec_data == "foobar", "decryption failed" + def test_encrypt_encrypted(self): - if not HAS_AES: + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: raise SkipTest v = VaultLib('ansible') v.cipher_name = 'AES' @@ -88,7 +110,7 @@ class TestVaultLib(TestCase): assert error_hit, "No error was thrown when trying to encrypt data with a header" def test_decrypt_decrypted(self): - if not HAS_AES: + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: raise SkipTest v = VaultLib('ansible') data = "ansible" @@ -100,7 +122,8 @@ class TestVaultLib(TestCase): assert error_hit, "No error was thrown when trying to decrypt data without a header" def test_cipher_not_set(self): - if not HAS_AES: + # not setting the cipher should default to AES256 + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: raise SkipTest v = VaultLib('ansible') data = "ansible" @@ -109,6 +132,5 @@ class TestVaultLib(TestCase): enc_data = v.encrypt(data) except errors.AnsibleError, e: error_hit = True - assert error_hit, "No error was thrown when trying to encrypt data without the cipher set" - - + assert not error_hit, "An error was thrown when trying to encrypt data without the cipher set" + assert v.cipher_name == "AES256", "cipher name is not set to AES256: %s" % v.cipher_name diff --git a/test/units/TestVaultEditor.py b/test/units/TestVaultEditor.py new file mode 100644 index 00000000000..cf7515370ab --- /dev/null +++ 
b/test/units/TestVaultEditor.py @@ -0,0 +1,167 @@ +#!/usr/bin/env python + +from unittest import TestCase +import getpass +import os +import shutil +import time +import tempfile +from binascii import unhexlify +from binascii import hexlify +from nose.plugins.skip import SkipTest + +from ansible import errors +from ansible.utils.vault import VaultLib +from ansible.utils.vault import VaultEditor + +# Counter import fails for 2.0.1, requires >= 2.6.1 from pip +try: + from Crypto.Util import Counter + HAS_COUNTER = True +except ImportError: + HAS_COUNTER = False + +# KDF import fails for 2.0.1, requires >= 2.6.1 from pip +try: + from Crypto.Protocol.KDF import PBKDF2 + HAS_PBKDF2 = True +except ImportError: + HAS_PBKDF2 = False + +# AES IMPORTS +try: + from Crypto.Cipher import AES as AES + HAS_AES = True +except ImportError: + HAS_AES = False + +class TestVaultEditor(TestCase): + + def test_methods_exist(self): + v = VaultEditor(None, None, None) + slots = ['create_file', + 'decrypt_file', + 'edit_file', + 'encrypt_file', + 'rekey_file', + 'read_data', + 'write_data', + 'shuffle_files'] + for slot in slots: + assert hasattr(v, slot), "VaultLib is missing the %s method" % slot + + def test_decrypt_1_0(self): + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: + raise SkipTest + dirpath = tempfile.mkdtemp() + filename = os.path.join(dirpath, "foo-ansible-1.0.yml") + shutil.rmtree(dirpath) + shutil.copytree("vault_test_data", dirpath) + ve = VaultEditor(None, "ansible", filename) + + # make sure the password functions for the cipher + error_hit = False + try: + ve.decrypt_file() + except errors.AnsibleError, e: + error_hit = True + + # verify decrypted content + f = open(filename, "rb") + fdata = f.read() + f.close() + + shutil.rmtree(dirpath) + assert error_hit == False, "error decrypting 1.0 file" + assert fdata.strip() == "foo", "incorrect decryption of 1.0 file: %s" % fdata.strip() + + def test_decrypt_1_0_newline(self): + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: + raise SkipTest + dirpath = tempfile.mkdtemp() + filename = os.path.join(dirpath, "foo-ansible-1.0-ansible-newline-ansible.yml") + shutil.rmtree(dirpath) + shutil.copytree("vault_test_data", dirpath) + ve = VaultEditor(None, "ansible\nansible\n", filename) + + # make sure the password functions for the cipher + error_hit = False + try: + ve.decrypt_file() + except errors.AnsibleError, e: + error_hit = True + + # verify decrypted content + f = open(filename, "rb") + fdata = f.read() + f.close() + + shutil.rmtree(dirpath) + assert error_hit == False, "error decrypting 1.0 file with newline in password" + #assert fdata.strip() == "foo", "incorrect decryption of 1.0 file: %s" % fdata.strip() + + + def test_decrypt_1_1(self): + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: + raise SkipTest + dirpath = tempfile.mkdtemp() + filename = os.path.join(dirpath, "foo-ansible-1.1.yml") + shutil.rmtree(dirpath) + shutil.copytree("vault_test_data", dirpath) + ve = VaultEditor(None, "ansible", filename) + + # make sure the password functions for the cipher + error_hit = False + try: + ve.decrypt_file() + except errors.AnsibleError, e: + error_hit = True + + # verify decrypted content + f = open(filename, "rb") + fdata = f.read() + f.close() + + shutil.rmtree(dirpath) + assert error_hit == False, "error decrypting 1.0 file" + assert fdata.strip() == "foo", "incorrect decryption of 1.0 file: %s" % fdata.strip() + + + def test_rekey_migration(self): + if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2: + raise SkipTest + dirpath 
= tempfile.mkdtemp() + filename = os.path.join(dirpath, "foo-ansible-1.0.yml") + shutil.rmtree(dirpath) + shutil.copytree("vault_test_data", dirpath) + ve = VaultEditor(None, "ansible", filename) + + # make sure the password functions for the cipher + error_hit = False + try: + ve.rekey_file('ansible2') + except errors.AnsibleError, e: + error_hit = True + + # verify decrypted content + f = open(filename, "rb") + fdata = f.read() + f.close() + + shutil.rmtree(dirpath) + assert error_hit == False, "error rekeying 1.0 file to 1.1" + + # ensure filedata can be decrypted, is 1.1 and is AES256 + vl = VaultLib("ansible2") + dec_data = None + error_hit = False + try: + dec_data = vl.decrypt(fdata) + except errors.AnsibleError, e: + error_hit = True + + assert vl.cipher_name == "AES256", "wrong cipher name set after rekey: %s" % vl.cipher_name + assert error_hit == False, "error decrypting migrated 1.0 file" + assert dec_data.strip() == "foo", "incorrect decryption of rekeyed/migrated file: %s" % dec_data + + diff --git a/test/units/inventory_test_data/broken.yml b/test/units/inventory_test_data/broken.yml new file mode 100644 index 00000000000..0eccc1ba78c --- /dev/null +++ b/test/units/inventory_test_data/broken.yml @@ -0,0 +1,2 @@ +foo: bar + baz: qux diff --git a/test/units/inventory_test_data/complex_hosts b/test/units/inventory_test_data/complex_hosts index d7f172f203a..0217d03f993 100644 --- a/test/units/inventory_test_data/complex_hosts +++ b/test/units/inventory_test_data/complex_hosts @@ -40,6 +40,7 @@ e = 10003 h = ' h ' i = ' i " j = " j + k = ['k1', 'k2'] [rtp] rtp_a diff --git a/test/units/inventory_test_data/encrypted.yml b/test/units/inventory_test_data/encrypted.yml new file mode 100644 index 00000000000..ca33ab25cbb --- /dev/null +++ b/test/units/inventory_test_data/encrypted.yml @@ -0,0 +1,6 @@ +$ANSIBLE_VAULT;1.1;AES256 +33343734386261666161626433386662623039356366656637303939306563376130623138626165 +6436333766346533353463636566313332623130383662340a393835656134633665333861393331 +37666233346464636263636530626332623035633135363732623332313534306438393366323966 +3135306561356164310a343937653834643433343734653137383339323330626437313562306630 +3035 diff --git a/test/units/inventory_test_data/inventory_dir/0hosts b/test/units/inventory_test_data/inventory_dir/0hosts index 27fc46e8530..6f78a33a228 100644 --- a/test/units/inventory_test_data/inventory_dir/0hosts +++ b/test/units/inventory_test_data/inventory_dir/0hosts @@ -1,3 +1,3 @@ -zeus var_a=2 +zeus var_a=0 morpheus thor diff --git a/test/units/inventory_test_data/inventory_dir/2levels b/test/units/inventory_test_data/inventory_dir/2levels index 22f06bcd436..363294923ef 100644 --- a/test/units/inventory_test_data/inventory_dir/2levels +++ b/test/units/inventory_test_data/inventory_dir/2levels @@ -1,5 +1,5 @@ [major-god] -zeus var_a=1 +zeus var_a=2 thor [minor-god] diff --git a/test/units/inventory_test_data/inventory_dir/3comments b/test/units/inventory_test_data/inventory_dir/3comments index 74642f13cc7..e11b5e416bd 100644 --- a/test/units/inventory_test_data/inventory_dir/3comments +++ b/test/units/inventory_test_data/inventory_dir/3comments @@ -1,5 +1,5 @@ [major-god] # group with inline comments -zeus var_a="1#2" # host with inline comments and "#" in the var string +zeus var_a="3\#4" # host with inline comments and "#" in the var string # A comment thor diff --git a/test/units/vault_test_data/foo-ansible-1.0-ansible-newline-ansible.yml b/test/units/vault_test_data/foo-ansible-1.0-ansible-newline-ansible.yml new file mode 
100644 index 00000000000..dd4e6e746b0 --- /dev/null +++ b/test/units/vault_test_data/foo-ansible-1.0-ansible-newline-ansible.yml @@ -0,0 +1,4 @@ +$ANSIBLE_VAULT;1.0;AES +53616c7465645f5ff0442ae8b08e2ff316d0d6512013185df7aded44f3c0eeef1b7544d078be1fe7 +ed88d0fedcb11928df45558f4b7f80fce627fbb08c5288885ab053f4129175779a8f24f5c1113731 +7d22cee14284670953c140612edf62f92485123fc4f15099ffe776e906e08145 diff --git a/test/units/vault_test_data/foo-ansible-1.0.yml b/test/units/vault_test_data/foo-ansible-1.0.yml new file mode 100644 index 00000000000..f71ddf10cee --- /dev/null +++ b/test/units/vault_test_data/foo-ansible-1.0.yml @@ -0,0 +1,4 @@ +$ANSIBLE_VAULT;1.0;AES +53616c7465645f5fd0026926a2d415a28a2622116273fbc90e377225c12a347e1daf4456d36a77f9 +9ad98d59f61d06a4b66718d855f16fb7bdfe54d1ec8aeaa4d06c2dc1fa630ae1846a029877f0eeb1 +83c62ffb04c2512995e815de4b4d29ed diff --git a/test/units/vault_test_data/foo-ansible-1.1.yml b/test/units/vault_test_data/foo-ansible-1.1.yml new file mode 100644 index 00000000000..d9a4a448a66 --- /dev/null +++ b/test/units/vault_test_data/foo-ansible-1.1.yml @@ -0,0 +1,6 @@ +$ANSIBLE_VAULT;1.1;AES256 +62303130653266653331306264616235333735323636616539316433666463323964623162386137 +3961616263373033353631316333623566303532663065310a393036623466376263393961326530 +64336561613965383835646464623865663966323464653236343638373165343863623638316664 +3631633031323837340a396530313963373030343933616133393566366137363761373930663833 +3739
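
For anyone exercising the new vault tests by hand, here is a minimal round-trip
sketch (not part of the patch; it assumes PyCrypto >= 2.6.1 is installed so the
Counter and PBKDF2 imports above succeed):

    from ansible.utils.vault import VaultLib

    v = VaultLib('ansible')                   # vault password, as in the tests
    enc = v.encrypt("foobar")                 # cipher_name now defaults to AES256
    assert v.cipher_name == "AES256"
    assert v.is_encrypted(enc) and enc != "foobar"
    assert v.decrypt(enc) == "foobar"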