WIP: Docs: User guide overhaul, part 4 (#69266)

Co-authored-by: Alicia Cozine <acozine@users.noreply.github.com>
pull/69553/head
Alicia Cozine 4 years ago committed by GitHub
parent b8469d5c7a
commit 6fffb0607b

@ -7,20 +7,12 @@ Lookup Plugins
   :local:
   :depth: 2

Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all :ref:`templating <playbooks_templating>`, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources.
.. note::
   - Lookups are executed with a working directory relative to the role or play,
     as opposed to local tasks, which are executed relative to the executed script.
   - Pass ``wantlist=True`` to lookups to use in Jinja2 template "for" loops.
   - Lookup plugins are an advanced feature; to best leverage them you should have a good working knowledge of how to use Ansible plays.

.. warning::
   - Some lookups pass arguments to a shell. When using variables from a remote/untrusted source, use the ``|quote`` filter to ensure safe usage.
@ -31,7 +23,7 @@ Lookups are an Ansible-specific extension to the Jinja2 templating language.
Enabling lookup plugins
-----------------------

Ansible enables all lookup plugins it can find. You can activate a custom lookup by either dropping it into a ``lookup_plugins`` directory adjacent to your play, inside the ``plugins/lookup/`` directory of a collection you have installed, inside a standalone role, or in one of the lookup directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
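For instance, a project-local custom lookup might sit in any of these spots (a sketch; ``my_custom_lookup.py`` is a hypothetical plugin file name)::

    ansible.cfg
    playbook.yml
    lookup_plugins/
        my_custom_lookup.py
    roles/
        myrole/
            lookup_plugins/
                my_custom_lookup.py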
.. _using_lookup:
@ -39,22 +31,21 @@ You can activate a custom lookup by either dropping it into a ``lookup_plugins``
Using lookup plugins
--------------------

You can use lookup plugins anywhere you can use templating in Ansible: in a play, in a variables file, or in a Jinja2 template for the :ref:`template <template_module>` module.

.. code-block:: YAML+Jinja

  vars:
    file_contents: "{{lookup('file', 'path/to/file.txt')}}"
Lookups are an integral part of loops. Wherever you see ``with_``, the part after the underscore is the name of a lookup. For this reason, most lookups output lists and take lists as input; for example, ``with_items`` uses the :ref:`items <items_lookup>` lookup::

    tasks:
      - name: count to 3
        debug: msg={{item}}
        with_items: [1, 2, 3]

You can combine lookups with :ref:`filters <playbooks_filters>`, :ref:`tests <playbooks_tests>` and even each other to do some complex data generation and manipulation. For example::

    tasks:
      - name: valid but useless and over complicated chained lookups and filters
@ -66,16 +57,16 @@ You can combine lookups with :ref:`playbooks_filters`, :ref:`playbooks_tests` an
.. versionadded:: 2.6

You can control how errors behave in all lookup plugins by setting ``errors`` to ``ignore``, ``warn``, or ``strict``. The default setting is ``strict``, which causes the task to fail if the lookup returns an error. For example:

To ignore lookup errors::

    - name: if this file does not exist, I do not care .. file plugin itself warns anyway ...
      debug: msg="{{ lookup('file', '/nosuchfile', errors='ignore') }}"

.. code-block:: ansible-output

    [WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)

    ok: [localhost] => {
        "msg": ""
@ -84,43 +75,43 @@ To ignore errors::
To get a warning instead of a failure::

    - name: if this file does not exist, let me know, but continue
      debug: msg="{{ lookup('file', '/nosuchfile', errors='warn') }}"

.. code-block:: ansible-output

    [WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)

    [WARNING]: An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile

    ok: [localhost] => {
        "msg": ""
    }

To get a fatal error (the default)::

    - name: if this file does not exist, FAIL (this is the default)
      debug: msg="{{ lookup('file', '/nosuchfile', errors='strict') }}"

.. code-block:: ansible-output

    [WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)

    fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile"}
.. _query:

Forcing lookups to return lists: ``query`` and ``wantlist=True``
----------------------------------------------------------------

.. versionadded:: 2.5

In Ansible 2.5, a new Jinja2 function called ``query`` was added for invoking lookup plugins. The difference between ``lookup`` and ``query`` is largely that ``query`` will always return a list.
The default behavior of ``lookup`` is to return a string of comma separated values. ``lookup`` can be explicitly configured to return a list using ``wantlist=True``.

This feature provides an easier and more consistent interface for interacting with the new ``loop`` keyword, while maintaining backwards compatibility with other uses of ``lookup``.

The following examples are equivalent:
@ -130,7 +121,7 @@ The following examples are equivalent:
    query('dict', dict_variable)

As demonstrated above, the behavior of ``wantlist=True`` is implicit when using ``query``.

Additionally, ``q`` was introduced as a shortform of ``query``:
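As a sketch of how the shortform behaves in a loop (``my_list_var`` is a hypothetical list variable), all three of these ``loop`` expressions produce the same list:

.. code-block:: YAML+Jinja

    loop: "{{ lookup('items', my_list_var, wantlist=True) }}"
    loop: "{{ query('items', my_list_var) }}"
    loop: "{{ q('items', my_list_var) }}"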

@ -3,8 +3,8 @@
Special Variables
=================

Magic variables
---------------

These variables cannot be set directly by the user; Ansible will always override them to reflect internal state.

ansible_check_mode
@ -132,7 +132,7 @@ role_path
Facts
-----

These are variables that contain information pertinent to the current host (`inventory_hostname`). They are only available if gathered first. See :ref:`vars_and_facts` for more information.

ansible_facts
    Contains any facts gathered or cached for the `inventory_hostname`
@ -141,7 +141,7 @@ ansible_facts
ansible_local
    Contains any 'local facts' gathered or cached for the `inventory_hostname`.
    The keys available depend on the custom facts created.
    See the :ref:`setup <setup_module>` module and :ref:`local_facts` for more details.
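As a quick sketch of how local facts surface here (assuming a hypothetical ``/etc/ansible/facts.d/preferences.fact`` INI file on the managed host), the values become available under ``ansible_local`` after fact gathering::

    # /etc/ansible/facts.d/preferences.fact on the managed host:
    [general]
    asdf=1

    # in a play, after fact gathering:
    - debug: msg="{{ ansible_local['preferences']['general']['asdf'] }}"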
.. _connection_variables:

@ -177,6 +177,8 @@ For numeric patterns, leading zeros can be included or removed, as desired. Rang
    [databases]
    db-[a:f].example.com
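The numeric form with leading zeros mentioned above might look like this (the hostnames are hypothetical)::

    [webservers]
    www[01:50].example.com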
.. _variables_in_inventory:
Adding variables to inventory
=============================

@ -24,6 +24,7 @@ You should look at `Example Playbooks <https://github.com/ansible/ansible-exampl
   playbooks_reuse
   playbooks_reuse_roles
   playbooks_variables
   playbooks_vars_facts
   playbooks_templating
   playbooks_conditionals
   playbooks_loops

@ -1,74 +1,58 @@
.. _check_mode_dry:

******************************************
Validating tasks: check mode and diff mode
******************************************

Ansible provides two modes of execution that validate tasks: check mode and diff mode. These modes can be used separately or together. They are useful when you are creating or editing a playbook or role and you want to know what it will do. In check mode, Ansible runs without making any changes on remote systems. Modules that support check mode report the changes they would have made. Modules that do not support check mode report nothing and do nothing. In diff mode, Ansible provides before-and-after comparisons. Modules that support diff mode display detailed information. You can combine check mode and diff mode for detailed validation of your playbook or role.

.. contents::
   :local:
Using check mode
================

Check mode is just a simulation. It will not generate output for tasks that use :ref:`conditionals based on registered variables <conditionals_registered_vars>` (results of prior tasks). However, it is great for validating configuration management playbooks that run on one node at a time. To run a playbook in check mode::

    ansible-playbook foo.yml --check
.. _forcing_to_run_in_check_mode:

Enforcing or preventing check mode on tasks
-------------------------------------------

.. versionadded:: 2.2

If you want certain tasks to run in check mode always, or never, regardless of whether you run the playbook with or without ``--check``, you can add the ``check_mode`` option to those tasks:

- To force a task to run in check mode, even when the playbook is called without ``--check``, set ``check_mode: yes``.
- To force a task to run in normal mode and make changes to the system, even when the playbook is called with ``--check``, set ``check_mode: no``.

For example::
    tasks:
      - name: this task will always make changes to the system
        command: /something/to/run --even-in-check-mode
        check_mode: no

      - name: this task will never make changes to the system
        lineinfile:
          line: "important config"
          dest: /path/to/myconfig.conf
          state: present
        check_mode: yes
        register: changes_to_important_config
Running single tasks with ``check_mode: yes`` can be useful for testing Ansible modules, either to test the module itself or to test the conditions under which a module would make changes. You can register variables (see :ref:`playbooks_conditionals`) on these tasks for even more detail on the potential changes.

.. note:: Prior to version 2.2 only the equivalent of ``check_mode: no`` existed. The notation for that was ``always_run: yes``.
Skipping tasks or ignoring errors in check mode
-----------------------------------------------

.. versionadded:: 2.1

If you want to skip a task or ignore errors on a task when you run Ansible in check mode, you can use a boolean magic variable ``ansible_check_mode``, which is set to ``True`` when Ansible runs in check mode. For example::

    tasks:
@ -86,23 +70,21 @@ Example::
.. _diff_mode:

Using diff mode
===============

The ``--diff`` option for ansible-playbook can be used alone or with ``--check``. When you run in diff mode, any module that supports diff mode reports the changes made or, if used with ``--check``, the changes that would have been made. Diff mode is most common in modules that manipulate files (for example, the template module) but other modules might also show 'before and after' information (for example, the user module).

Diff mode produces a large amount of output, so it is best used when checking a single host at a time. For example::

    ansible-playbook foo.yml --check --diff --limit foo.example.com
.. versionadded:: 2.4

Enforcing or preventing diff mode on tasks
------------------------------------------

Because the ``--diff`` option can reveal sensitive information, you can disable it for a task by specifying ``diff: no``. For example::

    tasks:
      - name: this task will not report a diff when the file changes

@ -1,60 +1,144 @@
.. _playbooks_conditionals:

************
Conditionals
************

In a playbook, you may want to execute different tasks, or have different goals, depending on the value of a fact (data about the remote system), a variable, or the result of a previous task. You may want the value of some variables to depend on the value of other variables. Or you may want to create additional groups of hosts based on whether the hosts match other criteria. You can do all of these things with conditionals.

Ansible uses Jinja2 :ref:`tests <playbooks_tests>` and :ref:`filters <playbooks_filters>` in conditionals. Ansible supports all the standard tests and filters, and adds some unique ones as well.

.. note::

  There are many options to control execution flow in Ansible. You can find more examples of supported conditionals at `<https://jinja.palletsprojects.com/en/master/templates/#comparisons>`_.

.. contents::
   :local:
.. _the_when_statement:

Basic conditionals with ``when``
================================

The simplest conditional statement applies to a single task. Create the task, then add a ``when`` statement that applies a test. The ``when`` clause is a raw Jinja2 expression without double curly braces (see :ref:`group_by_module`). When you run the task or playbook, Ansible evaluates the test for all hosts. On any host where the test passes (returns a value of True), Ansible runs that task. For example, if you are installing mysql on multiple machines, some of which have SELinux enabled, you might have a task to configure SELinux to allow mysql to run. You would only want that task to run on machines that have SELinux enabled:

.. code-block:: yaml

    tasks:
      - name: Configure SELinux to start mysql on any port
        seboolean: name=mysql_connect_any state=true persistent=yes
        when: ansible_selinux.status == "enabled"
        # all variables can be used directly in conditionals without double curly braces
Conditionals based on ansible_facts
-----------------------------------

Often you want to execute or skip a task based on facts. Facts are attributes of individual hosts, including IP address, operating system, the status of a filesystem, and many more. With conditionals based on facts:

- You can install a certain package only when the operating system is a particular version.
- You can skip configuring a firewall on hosts with internal IP addresses.
- You can perform cleanup tasks only when a filesystem is getting full.
See :ref:`commonly_used_facts` for a list of facts that frequently appear in conditional statements. Not all facts exist for all hosts. For example, the 'lsb_major_release' fact used in an example below only exists when the lsb_release package is installed on the target host. To see what facts are available on your systems, add a debug task to your playbook::

    - debug: var=ansible_facts

Here is a sample conditional based on a fact:

.. code-block:: yaml

    tasks:
      - name: shut down Debian flavored systems
        command: /sbin/shutdown -t now
        when: ansible_facts['os_family'] == "Debian"
If you have multiple conditions, you can group them with parentheses:

.. code-block:: yaml

    tasks:
      - name: shut down CentOS 6 and Debian 7 systems
        command: /sbin/shutdown -t now
        when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or
              (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7")
You can use `logical operators <https://jinja.palletsprojects.com/en/master/templates/#logic>`_ to combine conditions. When you have multiple conditions that all need to be true (that is, a logical ``and``), you can specify them as a list::

    tasks:
      - name: shut down CentOS 6 systems
        command: /sbin/shutdown -t now
        when:
          - ansible_facts['distribution'] == "CentOS"
          - ansible_facts['distribution_major_version'] == "6"
If a fact or variable is a string, and you need to run a mathematical comparison on it, use a filter to ensure that Ansible reads the value as an integer::

    tasks:
      - shell: echo "only on Red Hat 6, derivatives, and later"
        when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. _conditionals_registered_vars:

Conditions based on registered variables
----------------------------------------

Often in a playbook you want to execute or skip a task based on the outcome of an earlier task. For example, you might want to configure a service after it is upgraded by an earlier task. To create a conditional based on a registered variable:

#. Register the outcome of the earlier task as a variable.
#. Create a conditional test based on the registered variable.

You create the name of the registered variable using the ``register`` keyword. A registered variable always contains the status of the task that created it as well as any output that task generated. You can use registered variables in templates and action lines as well as in conditional ``when`` statements. You can access the string contents of the registered variable using ``variable.stdout``. For example::
    - name: test play
      hosts: all

      tasks:

        - shell: cat /etc/motd
          register: motd_contents

        - shell: echo "motd contains the word hi"
          when: motd_contents.stdout.find('hi') != -1
You can use registered results in the loop of a task if the variable is a list. If the variable is not a list, you can convert it into a list, with either ``stdout_lines`` or with ``variable.stdout.split()``. You can also split the lines by other fields::

    - name: registered variable usage as a loop list
      hosts: all

      tasks:

        - name: retrieve the list of home directories
          command: ls /home
          register: home_dirs

        - name: add home dirs to the backup spooler
          file:
            path: /mnt/bkspool/{{ item }}
            src: /home/{{ item }}
            state: link
          loop: "{{ home_dirs.stdout_lines }}"
          # same as loop: "{{ home_dirs.stdout.split() }}"
The string content of a registered variable can be empty. If you want to run another task only on hosts where the stdout of your registered variable is empty, check the registered variable's string contents for emptiness:

.. code-block:: yaml

    - name: check registered variable for emptiness
      hosts: all

      tasks:

        - name: list contents of directory
          command: ls mydir
          register: contents

        - name: check contents for emptiness
          debug:
            msg: "Directory is empty"
          when: contents.stdout == ""
Ansible always registers something in a registered variable for every host, even on hosts where a task fails or Ansible skips a task because a condition is not met. To run a follow-up task on these hosts, query the registered variable for ``is skipped`` (not for "undefined" or "default"). See :ref:`registered_variables` for more information.

Here are sample conditionals based on the success or failure of a task. Remember to ignore errors if you want Ansible to continue executing on a host when a failure occurs:

.. code-block:: yaml

    tasks:

      - command: /bin/false
@ -64,53 +148,40 @@ Suppose we want to ignore the error of one statement and then decide to do somet
      - command: /bin/something
        when: result is failed

      - command: /bin/something_else
        when: result is succeeded

      - command: /bin/still/something_else
        when: result is skipped
.. note:: Older versions of Ansible used ``success`` and ``fail``, but ``succeeded`` and ``failed`` use the correct tense. All of these options are now valid.

.. warning:: You might expect a variable of a skipped task to be undefined and use `defined` or `default` to check that. **This is incorrect**! Even when a task fails or is skipped, the variable is still registered with a failed or skipped status. See :ref:`registered_variables`.
Conditionals based on variables
-------------------------------

You can also create conditionals based on variables defined in the playbooks or inventory. Because conditionals require boolean input (a test must evaluate as True to trigger the condition), you must apply the ``| bool`` filter to non-boolean variables, such as string variables with content like 'yes', 'on', '1', or 'true'. You can define variables like this:

.. code-block:: yaml
vars: vars:
epic: true epic: true
monumental: "yes" monumental: "yes"
Then a conditional execution might look like:: With the variables above, Ansible would run one of these tasks and skip the other:
.. code-block:: yaml
tasks: tasks:
- shell: echo "This certainly is epic!" - shell: echo "This certainly is epic!"
when: epic or monumental|bool when: epic or monumental | bool
or::
tasks:
- shell: echo "This certainly isn't epic!" - shell: echo "This certainly isn't epic!"
when: not epic when: not epic
If a required variable has not been set, you can skip or fail using Jinja2's ``defined`` test. For example:

.. code-block:: yaml

    tasks:
        - shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
          when: foo is defined

        - fail: msg="Bailing out. this play requires 'bar'"
          when: bar is undefined

This is especially useful in combination with the conditional import of vars files (see below).
As the examples show, you do not need to use ``{{ }}`` to use variables inside conditionals, as these are already implied.
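When a fallback value is acceptable, you can avoid skipping or failing entirely by supplying a ``default``; a minimal sketch (the variable name and fallback value are illustrative)::

    tasks:
        - shell: echo "I've got '{{ bar | default('a fallback value') }}'"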
.. _loops_and_conditionals:

Using conditionals in loops
---------------------------

If you combine a ``when`` statement with a :ref:`loop <playbooks_loops>`, Ansible processes the condition separately for each item. This is by design, so you can execute the task on some items in the loop and skip it on other items. For example:

.. code-block:: yaml

    tasks:
        - command: echo {{ item }}
          loop: [ 0, 2, 4, 6, 8, 10 ]
          when: item > 5

If you need to skip the whole task when the loop variable is undefined, use the ``|default`` filter to provide an empty iterator. For example, when looping over a list:

.. code-block:: yaml

    - command: echo {{ item }}
      loop: "{{ mylist|default([]) }}"
      when: item > 5

You can do the same thing when looping over a dict:

.. code-block:: yaml

    - command: echo {{ item.key }}
      loop: "{{ query('dict', mydict|default({})) }}"
      when: item.value > 5
.. _loading_in_custom_facts:

Loading custom facts
--------------------

You can provide your own facts, as described in :ref:`developing_modules`. To run them, just make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks:

.. code-block:: yaml

    tasks:
        - name: gather site specific fact data
          action: site_facts
        - command: /usr/bin/thingy
          when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
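Custom facts can also come from static files or executable scripts placed in ``/etc/ansible/facts.d`` on the managed host; Ansible returns these under ``ansible_local``. A minimal sketch (the file name, section, and key are illustrative)::

    # /etc/ansible/facts.d/preferences.fact on the managed host contains:
    #   [general]
    #   asdf=1

    tasks:
        - command: /usr/bin/thingy
          when: ansible_local['preferences']['general']['asdf'] == '1'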
.. _when_with_reuse:

Conditionals with re-use
------------------------

You can use conditionals with re-usable tasks files, playbooks, or roles. Ansible executes these conditional statements differently for dynamic re-use (includes) and for static re-use (imports). See :ref:`playbooks_reuse` for more information on re-use in Ansible.

.. _conditional_imports:

Conditionals with imports
^^^^^^^^^^^^^^^^^^^^^^^^^

When you add a conditional to an import statement, Ansible applies the condition to all tasks within the imported file. This behavior is the equivalent of :ref:`tag_inheritance`. Ansible applies the condition to every task, and evaluates each task separately. For example, you might have a playbook called ``main.yml`` and a tasks file called ``other_tasks.yml``::

    # all tasks within an imported file inherit the condition from the import statement

    # main.yml
    - import_tasks: other_tasks.yml # note "import"
      when: x is not defined

    # other_tasks.yml
    - set_fact:
        x: foo
    - debug:
        var: x

Ansible expands this at execution time to the equivalent of::

    - set_fact:
        x: foo
      when: x is not defined
      # this task sets a value for x

    - debug:
        var: x
      when: x is not defined
      # Ansible skips this task, because x is now defined

Thus if ``x`` is initially undefined, the ``debug`` task will be skipped. If this is not the behavior you want, use an ``include_*`` statement to apply a condition only to that statement itself.

You can apply conditions to ``import_playbook`` as well as to the other ``import_*`` statements. When you use this approach, Ansible returns a 'skipped' message for every task on every host that does not match the criteria, creating repetitive output. In many cases the :ref:`group_by module <group_by_module>` can be a more streamlined way to accomplish the same objective; see :ref:`os_variance`.
.. _conditional_includes:

Conditionals with includes
^^^^^^^^^^^^^^^^^^^^^^^^^^

When you use a conditional on an ``include_*`` statement, the condition is applied only to the include task itself and not to any other tasks within the included file(s). To contrast with the example used for conditionals on imports above, look at the same playbook and tasks file, but using an include instead of an import::

    # Includes let you re-use a file to define a variable when it is not already defined

    # main.yml
    - include_tasks: other_tasks.yml
      when: x is not defined

    # other_tasks.yml
    - set_fact:
        x: foo
    - debug:
        var: x

Ansible expands this at execution time to the equivalent of::

    - include_tasks: other_tasks.yml
      when: x is not defined
      # if condition is met, Ansible includes other_tasks.yml

    - set_fact:
        x: foo
      # no condition applied to this task, Ansible sets the value of x to foo

    - debug:
        var: x
      # no condition applied to this task, Ansible prints the debug statement

By using ``include_tasks`` instead of ``import_tasks``, both tasks from ``other_tasks.yml`` will be executed as expected. For more information on the differences between ``include`` v ``import`` see :ref:`playbooks_reuse`.
Conditionals with roles
^^^^^^^^^^^^^^^^^^^^^^^

There are three ways to apply conditions to roles:

- Add the same condition or conditions to all tasks in the role by placing your ``when`` statement under the ``roles`` keyword. See the example in this section.
- Add the same condition or conditions to all tasks in the role by placing your ``when`` statement on a static ``import_role`` in your playbook.
- Add a condition or conditions to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role based on your ``when`` statement. To select or skip tasks within the role, you must have conditions set on individual tasks or blocks, use the dynamic ``include_role`` in your playbook, and add the condition or conditions to the include. When you use this approach, Ansible applies the condition to the include itself plus any tasks in the role that also have that ``when`` statement.

When you incorporate a role in your playbook statically with the ``roles`` keyword, Ansible adds the conditions you define to all the tasks in the role. For example:

.. code-block:: yaml

    - hosts: webservers
      roles:
         - role: debian_stock_config
           when: ansible_facts['os_family'] == 'Debian'
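The third approach uses the dynamic ``include_role`` instead; a minimal sketch (the role name is illustrative)::

    - hosts: webservers
      tasks:
        - include_role:
            name: some_debian_role
          when: ansible_facts['os_family'] == 'Debian'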
.. _conditional_variable_and_files:

Selecting variables, files, or templates based on facts
-------------------------------------------------------

Sometimes the facts about a host determine the values you want to use for certain variables or even the file or template you want to select for that host. For example, the names of packages are different on CentOS and on Debian. The configuration files for common services are also different on different OS flavors and versions. To load different variables files, templates, or other files based on a fact about the hosts you are managing:

#. Name your vars files, templates, or files to match the Ansible fact that differentiates them

#. Select the correct vars file, template, or file for each host with a variable based on that Ansible fact

Ansible separates variables from tasks, keeping your playbooks from turning into arbitrary code with nested conditionals. This approach results in more streamlined and auditable configuration rules because there are fewer decision points to track.

Selecting variables files based on facts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can create a playbook that works on multiple platforms and OS versions with a minimum of syntax by placing your variable values in vars files and conditionally importing them. If you want to install Apache on some CentOS and some Debian servers, create variables files with YAML keys and values. For example::

    ---
    # for vars/RedHat.yml
    apache: httpd
    somethingelse: 42

Then import those variables files based on the facts you gather on the hosts in your playbook::

    ---
    - hosts: webservers
      remote_user: root
      vars_files:
        - "vars/common.yml"
        - [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ]
      tasks:
      - name: make sure apache is started
        service: name={{ apache }} state=started

Ansible gathers facts on the hosts in the webservers group, then interpolates the variable "ansible_facts['os_family']" into a list of filenames. If you have hosts with Red Hat operating systems ('CentOS', for example), Ansible looks for 'vars/RedHat.yml'. If that file does not exist, Ansible attempts to load 'vars/os_defaults.yml'. For Debian hosts, Ansible first looks for 'vars/Debian.yml', before falling back on 'vars/os_defaults.yml'. If no files in the list are found, Ansible raises an error.

Selecting files and templates based on facts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can use the same approach when different OS flavors or versions require different configuration files or templates. Select the appropriate file or template based on the variables assigned to each host. This approach is often much cleaner than putting a lot of conditionals into a single template to cover multiple OS or package versions.

For example, you can template out a configuration file that is very different between, say, CentOS and Debian::

    - name: template a file
      template:
        src: "{{ item }}"
        dest: /etc/myapp/foo.conf
      loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}"
      vars:
        myfiles:
          - "{{ ansible_facts['distribution'] }}.conf"
          - default.conf
        mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/']

.. _commonly_used_facts:
Commonly-used facts
===================

The following Ansible facts are frequently used in conditionals.
.. _ansible_distribution:
ansible_facts['distribution_major_version']
-------------------------------------------

The major version of the operating system. For example, the value is ``16`` for Ubuntu 16.04.
.. _ansible_os_family:
.. _playbook_debugger:
***************
Debugging tasks
***************

Ansible offers a task debugger so you can try to fix errors during execution instead of fixing them in the playbook and then running it again. You have access to all of the features of the debugger in the context of the task. You can check or set the value of variables, update module arguments, and re-run the task with the new variables and arguments. The debugger lets you resolve the cause of the failure and continue with playbook execution.

.. contents::
   :local:
Invoking the debugger
=====================

There are multiple ways to invoke the debugger.
Using the debugger keyword
--------------------------
.. versionadded:: 2.5
The ``debugger`` keyword can be used on any block where you provide a ``name`` attribute, such as a play, role, block or task. The ``debugger`` keyword accepts five values:

.. table::
   :class: documentation-table

   ========================= ======================================================
   Value                     Result
   ========================= ======================================================
   always                    Always invoke the debugger, regardless of the outcome

   never                     Never invoke the debugger, regardless of the outcome

   on_failed                 Only invoke the debugger if a task fails

   on_unreachable            Only invoke the debugger if a host was unreachable

   on_skipped                Only invoke the debugger if the task is skipped

   ========================= ======================================================
When you use the ``debugger`` keyword, the setting you use overrides any global configuration to enable or disable the debugger. If you define ``debugger`` at two different levels, for example in a role and in a task, the more specific definition wins: the definition on a task overrides the definition on a block, which overrides the definition on a role or play.

Here are examples of invoking the debugger with the ``debugger`` keyword::

    # on a task
    - name: Execute a command
      command: "false"
      debugger: on_failed

    # on a play
    - name: My play
      hosts: all
      debugger: on_skipped
      tasks:
        - name: Execute a command
          command: "true"
          when: False
In the example below, the task will open the debugger when it fails, because the task-level definition overrides the play-level definition::

    - name: Play
      hosts: all
      debugger: never
      tasks:
        - name: Execute a command
          command: "false"
          debugger: on_failed
In configuration or an environment variable
-------------------------------------------

.. versionadded:: 2.5
You can turn the task debugger on or off globally with a setting in ansible.cfg or with an environment variable. The only options are ``True`` or ``False``. If you set the configuration option or environment variable to ``True``, Ansible runs the debugger on failed tasks by default.

To invoke the task debugger from ansible.cfg::

    [defaults]
    enable_task_debugger = True

To use an environment variable to invoke the task debugger::

    ANSIBLE_ENABLE_TASK_DEBUGGER=True ansible-playbook -i hosts site.yml

When you invoke the debugger using this method, any failed task will invoke the debugger, unless it is explicitly disabled for that role, play, block, or task. If you need more granular control over what conditions trigger the debugger, use the ``debugger`` keyword.
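For example, with ``enable_task_debugger = True`` set globally, you can still exempt a single task by setting the ``debugger`` keyword on it (a sketch; the task itself is illustrative)::

    - name: fail without opening the debugger
      command: "false"
      debugger: never
      ignore_errors: yes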
As a strategy
-------------

.. note::

   This backwards-compatible method, which matches Ansible versions before 2.5, may be removed in a future release.

To use the ``debug`` strategy, change the ``strategy`` attribute like this::
    - hosts: test
      strategy: debug
      tasks:
        ...

You can also set the strategy to ``debug`` with the environment variable ``ANSIBLE_STRATEGY=debug``, or by modifying ``ansible.cfg``:

.. code-block:: ini

    [defaults]
    strategy = debug
Using the debugger
==================

Once you invoke the debugger, you can use the seven :ref:`debugger commands <available_commands>` to work through the error Ansible encountered. For example, the playbook below defines the ``var1`` variable but uses the ``wrong_var`` variable, which is undefined, by mistake.

.. code-block:: yaml

    - hosts: test
      debugger: on_failed
      gather_facts: no
      vars:
        var1: value1
      tasks:
        - name: wrong variable
          ping: data={{ wrong_var }}

If you run this playbook, Ansible invokes the debugger when the task fails. From the debug prompt, you can change the module arguments or the variables and run the task again.
.. code-block:: none

    PLAY RECAP *********************************************************************
    192.0.2.10               : ok=1    changed=0    unreachable=0    failed=0

As the example above shows, once the task arguments use ``var1`` instead of ``wrong_var``, the task runs successfully.
.. _available_commands:
Available debug commands
========================

You can use these seven commands at the debug prompt:

.. table::
   :class: documentation-table

   ========================== ============ =========================================================
   Command                    Shortcut     Action
   ========================== ============ =========================================================
   print                      p            Print information about the task

   task.args[*key*] = *value* no shortcut  Update module arguments

   task_vars[*key*] = *value* no shortcut  Update task variables (you must ``update_task`` next)

   update_task                u            Recreate a task with updated task variables

   redo                       r            Run the task again

   continue                   c            Continue executing, starting with the next task

   quit                       q            Quit the debugger

   ========================== ============ =========================================================

For more details, see the individual descriptions and examples below.
.. _pprint_command:
Print command
-------------

``print *task/task.args/task_vars/host/result*`` prints information about the task::

    [192.0.2.10] TASK: install package (debug)> p task
    TASK: install package
.. _update_args_command:
Update args command
-------------------

``task.args[*key*] = *value*`` updates a module argument. This sample playbook has an invalid package name::

    - hosts: test
      strategy: debug
      gather_facts: yes
      vars:
        pkg_name: not_exist
      tasks:
        - name: install package
          apt: name={{ pkg_name }}
When you run the playbook, the invalid package name triggers an error, and Ansible invokes the debugger. You can fix the package name by viewing, then updating the module argument::

    [192.0.2.10] TASK: install package (debug)> p task.args
    {u'name': u'{{ pkg_name }}'}
    [192.0.2.10] TASK: install package (debug)> task.args['name'] = 'bash'
    [192.0.2.10] TASK: install package (debug)> p task.args
    {u'name': 'bash'}
    [192.0.2.10] TASK: install package (debug)> redo

After you update the module argument, use ``redo`` to run the task again with the new args.
.. _update_vars_command:
Update vars command
-------------------

``task_vars[*key*] = *value*`` updates the ``task_vars``. You could fix the playbook above by viewing, then updating the task variables instead of the module args::

    [192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
    u'not_exist'
    [192.0.2.10] TASK: install package (debug)> task_vars['pkg_name'] = 'bash'
    [192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
    'bash'
    [192.0.2.10] TASK: install package (debug)> update_task
    [192.0.2.10] TASK: install package (debug)> redo

After you update the task variables, you must use ``update_task`` to load the new variables before using ``redo`` to run the task again.
.. note::
   In 2.5 this was updated from ``vars`` to ``task_vars`` to avoid conflicts with the ``vars()`` python function.
.. _update_task_command:
Update task command
-------------------

.. versionadded:: 2.8

``u`` or ``update_task`` recreates the task from the original task data structure and templates with updated task variables. See the entry :ref:`update_vars_command` for an example of use.
.. _redo_command:
Redo command
------------

``r`` or ``redo`` runs the task again.
.. _continue_command:
Continue command
----------------

``c`` or ``continue`` continues executing, starting with the next task.
.. _quit_command:
Quit command
------------

``q`` or ``quit`` quits the debugger. The playbook execution is aborted.
Use with the free strategy Debugging and the free strategy
++++++++++++++++++++++++++ ===============================
If you use the debugger with the ``free`` strategy, Ansible does not queue or execute any further tasks while the debugger is active. However, previously queued tasks remain in the queue and run as soon as you exit the debugger. If you use ``redo`` to reschedule a task from the debugger, other queued tasks may execute before your rescheduled task.
.. seealso::

   :ref:`playbooks_start_and_step`
       Running playbooks while debugging or testing
   :ref:`playbooks_intro`
       An introduction to playbooks
   `User Mailing List <https://groups.google.com/group/ansible-devel>`_
.. _playbooks_delegation:
Delegation and local actions
============================
By default Ansible executes all tasks on the machines that match the ``hosts`` line of your playbook. If you want to run some tasks on a different machine, you can use delegation. For example, when updating webservers, you might want to retrieve information from your database servers. In this scenario, your play would target the webservers group and you would delegate the database tasks to your dbservers group. With delegation, you can perform a task on one host on behalf of another, or execute tasks locally on behalf of remote hosts.

.. contents::
   :local:

Tasks that cannot be delegated
------------------------------

Some tasks always execute on the controller. These tasks, including ``include``, ``add_host``, and ``debug``, cannot be delegated.
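For example, in this hypothetical sketch the ``debug`` task still runs on (and prints from) the controller, even though it is delegated::

    - name: this task runs on the controller regardless of delegate_to
      debug:
        msg: "Delegation has no effect on this action"
      delegate_to: web01.example.org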
.. _delegation:
Delegating tasks
----------------
If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task. This is ideal for managing nodes in a load balanced pool or for controlling outage windows. You can use delegation with the :ref:`serial <rolling_update_batch_size>` keyword to control the number of hosts executing at one time::
    ---
    - hosts: webservers
      serial: 5

      tasks:
        - name: take out of load balancer pool
          command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

        - name: actual steps would go here
          yum:
            name: acme-web-stack
            state: latest

        - name: add back to load balancer pool
          command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1
The first and third tasks in this play run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1::
    ---
    # ...

      tasks:
        - name: take out of load balancer pool
          local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}

    # ...

        - name: add back to load balancer pool
          local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}
You can use a local action to call 'rsync' to recursively copy files to the managed servers::
    ---
    # ...

      tasks:
        - name: recursively copy files from management server to target
          local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync will need to ask for a passphrase.
To specify more arguments, use the following syntax::
    ---
    # ...

      tasks:
        - name: Send summary mail
          local_action:
            module: mail
            subject: "Summary Mail"
            to: "{{ mail_recipient }}"
            body: "{{ mail_body }}"
          run_once: True
The `ansible_host` variable reflects the host a task is delegated to.
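For example, you can print the delegated host from within a delegated task (the host name here is hypothetical)::

    - name: show the host this task is delegated to
      debug:
        msg: "This task is delegated to {{ ansible_host }}"
      delegate_to: dbserver.example.com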
.. _delegate_facts:
Delegating facts
----------------
Delegating Ansible tasks is like delegating tasks in the real world - your groceries belong to you, even if someone else delivers them to your home. Similarly, any facts gathered by a delegated task are assigned by default to the `inventory_hostname` (the current host), not to the host which produced the facts (the delegated to host). To assign gathered facts to the delegated host instead of the current host, set `delegate_facts` to `True`::
    ---
    - hosts: app_servers

      tasks:
        - name: gather facts from db servers
          setup:
          delegate_to: "{{ item }}"
          delegate_facts: True
          loop: "{{ groups['dbservers'] }}"
This task gathers facts for the machines in the dbservers group and assigns the facts to those machines, even though the play targets the app_servers group. This way you can lookup `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or left out by using `--limit`.
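For example, a later task in a play that targets only the app_servers group can still read the delegated facts through ``hostvars`` (the host name here is hypothetical)::

    - hosts: app_servers
      tasks:
        - name: use a fact gathered from a dbserver
          debug:
            msg: "{{ hostvars['dbhost1']['ansible_default_ipv4']['address'] }}"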
.. _run_once:
Run once
--------
If you want a task to run only on the first host in your batch of hosts, set ``run_once`` to true on that task::
    ---
    # ...

      tasks:

        # ...

        - command: /opt/application/upgrade_db.py
          run_once: true

        # ...
Ansible executes this task on the first host in the current batch and applies all results and facts to all the hosts in the same batch. This approach is similar to applying a conditional to a task such as::
    - command: /opt/application/upgrade_db.py
      when: inventory_hostname == webservers[0]
However, with ``run_once``, the results are applied to all the hosts. To specify an individual host to execute on, delegate the task::
    - command: /opt/application/upgrade_db.py
      run_once: true
      delegate_to: web01.example.org
As always with delegation, the action will be executed on the delegated host, but the information is still that of the original host in the task.
.. note::

   When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch. If the task must run only once regardless of "serial" mode, use
   :code:`when: inventory_hostname == ansible_play_hosts_all[0]` construct.
.. note::

   Any conditional (i.e `when:`) will use the variables of the 'first host' to decide if the task runs or not, no other hosts will be tested.
.. note::

   If you want to avoid the default behavior of setting the fact for all hosts, set `delegate_facts: True` for the specific task or block.
.. _local_playbooks:
Local playbooks
---------------
It may be useful to use a playbook locally on a remote host, rather than by connecting over SSH. This can be useful for assuring the configuration of a system by putting a playbook in a crontab. This may also be used to run a playbook inside an OS installer, such as an Anaconda kickstart.

To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so::
    ansible-playbook playbook.yml --connection=local

Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook use the default remote connection type::

    - hosts: 127.0.0.1
      connection: local

.. note:: If you set the connection to local and there is no ansible_python_interpreter set, modules will run under /usr/bin/python and not under {{ ansible_playbook_python }}. Be sure to set ansible_python_interpreter: "{{ ansible_playbook_python }}" in host_vars/localhost.yml, for example. You can avoid this issue by using ``local_action`` or ``delegate_to: localhost`` instead.
.. seealso::

   :ref:`playbooks_intro`
       An introduction to playbooks
.. _playbooks_environment:
Setting the remote environment
==============================
.. versionadded:: 1.1
You can use the ``environment`` keyword at the play, block, or task level to set an environment variable for an action on a remote host. With this keyword, you can enable using a proxy for a task that does http requests, set the required environment variables for language-specific version managers, and more.

When you set a value with ``environment:`` at the play or block level, it is available only to tasks within the play or block that are executed by the same user. The ``environment:`` keyword does not affect Ansible itself, Ansible configuration settings, the environment for other users, or the execution of other plugins like lookups and filters. Variables set with ``environment:`` do not automatically become Ansible facts, even when you set them at the play level. You must include an explicit ``gather_facts`` task in your playbook and set the ``environment`` keyword on that task to turn these values into Ansible facts.
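That last point can be sketched like this (the variable name and hosts here are hypothetical): an explicit ``gather_facts`` task with the ``environment`` keyword set, so the value becomes visible in the ``ansible_env`` facts::

    - hosts: test
      gather_facts: false
      tasks:
        - name: gather facts with a custom remote environment
          gather_facts:
          environment:
            MY_CUSTOM_VAR: '42'

        - name: the value is now available as an Ansible fact
          debug:
            var: ansible_env.MY_CUSTOM_VAR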
.. contents::
   :local:
Setting the remote environment in a task
----------------------------------------
You can set the environment directly at the task level::
    - hosts: all
      remote_user: root

      tasks:
        - name: Install cobbler
          package:
            name: cobbler
            state: present
          environment:
            http_proxy: http://proxy.example.com:8080
You can re-use environment settings by defining them as variables in your play and accessing them in a task as you would access any stored Ansible variable::
    - hosts: all
      remote_user: root

      # create a variable named "proxy_env" that is a dictionary
      vars:
        proxy_env:
          http_proxy: http://proxy.example.com:8080

      tasks:
        - name: Install cobbler
          package:
            name: cobbler
            state: present
          environment: "{{ proxy_env }}"
You can store environment settings for re-use in multiple playbooks by defining them in a group_vars file::
    ---
    # file: group_vars/boston
    ntp_server: ntp.bos.example.com
    backup: bak.bos.example.com
    proxy_env:
      http_proxy: http://proxy.bos.example.com:8080
      https_proxy: http://proxy.bos.example.com:8080
You can set the remote environment at the play level::

    - hosts: testing

      roles:
        - php
        - nginx

      environment:
        http_proxy: http://proxy.example.com:8080
These examples show proxy settings, but you can provide any number of settings this way.
Working with language-specific version managers
===============================================
Some language-specific version managers (such as rbenv and nvm) require you to set environment variables while these tools are in use. When using these tools manually, you usually source some environment variables from a script or from lines added to your shell configuration file. In Ansible, you can do this with the environment keyword at the play level::
    ---
    ### A playbook demonstrating a common npm workflow:
    # - Check for package.json in the application directory
    # - If package.json exists:
    #   * Run npm prune
    #   * Run npm install

    - hosts: application
      become: false

      vars:
        node_app_dir: /var/local/my_node_app

      environment:
        NODE_ENV: production
        PATH: "/var/local/nvm/versions/node/v4.2.1/bin:{{ ansible_env.PATH }}"

      tasks:
        - name: check for package.json
          stat:
            path: '{{ node_app_dir }}/package.json'
          register: packagejson

        - name: run npm prune
          command: npm prune
          args:
            chdir: '{{ node_app_dir }}'
          when: packagejson.stat.exists

        - name: run npm install
          npm:
            path: '{{ node_app_dir }}'
          when: packagejson.stat.exists
.. note::

   The example above uses ``ansible_env`` as part of the PATH. Basing variables on ``ansible_env`` is risky. Ansible populates ``ansible_env`` values by gathering facts, so the value of the variables depends on the remote_user or become_user Ansible used when gathering those facts. If you change remote_user/become_user the values in ``ansible_env`` may not be the ones you expect.
You can also specify the environment at the task level::
    ---
    - name: install ruby 2.3.1
      command: rbenv install {{ rbenv_ruby_version }}
      args:
        creates: '{{ rbenv_root }}/versions/{{ rbenv_ruby_version }}/bin/ruby'
      vars:
        rbenv_root: /usr/local/rbenv
        rbenv_ruby_version: 2.3.1
      environment:
        CONFIGURE_OPTS: '--disable-install-doc'
        RBENV_ROOT: '{{ rbenv_root }}'
        PATH: '{{ rbenv_root }}/bin:{{ rbenv_root }}/shims:{{ ansible_env.PATH }}'

Ignoring unreachable host errors
================================
.. versionadded:: 2.7
You may ignore task failure due to the host instance being 'UNREACHABLE' with the ``ignore_unreachable`` keyword. Ansible ignores the task errors, but continues to execute future tasks against the unreachable host. For example, at the task level::
    - name: this executes, fails, and the failure is ignored
      command: /bin/true
      ignore_unreachable: true
Aborting a play on all hosts
============================
Sometimes you want a failure on a single host, or failures on a certain percentage of hosts, to abort the entire play on all hosts. You can stop play execution after the first failure happens with ``any_errors_fatal``. For finer-grained control, you can use ``max_fail_percentage`` to abort the run after a given percentage of hosts has failed.
Aborting on the first error: any_errors_fatal
---------------------------------------------
If you set ``any_errors_fatal`` and a task returns an error, Ansible finishes the fatal task on all hosts in the current batch, then stops executing the play on all hosts. Subsequent tasks and plays are not executed. You can recover from fatal errors by adding a :ref:`rescue section <block_error_handling>` to the block. You can set ``any_errors_fatal`` at the play or block level::
    - hosts: somehosts
      any_errors_fatal: true
      roles:
        - myrole

    - hosts: somehosts
      tasks:
        - block:
            - include_tasks: mytasks.yml
          any_errors_fatal: true
For finer-grained control, you can use ``max_fail_percentage`` to abort the run after a given percentage of hosts has failed. You can use this feature when all tasks must be 100% successful to continue playbook execution. For example, if you run a service on machines in multiple data centers with load balancers to pass traffic from users to the service, you want all load balancers to be disabled before you stop the service for maintenance. To ensure that any failure in the task that disables the load balancers will stop all other tasks::
    ---
    - hosts: load_balancers_dc_a
      any_errors_fatal: True

      tasks:
        - name: 'shutting down datacenter [ A ]'
          command: /usr/bin/disable-dc

    - hosts: frontends_dc_a

      tasks:
        - name: 'stopping service'
          command: /usr/bin/stop-software

        - name: 'updating software'
          command: /usr/bin/upgrade-software

    - hosts: load_balancers_dc_a

      tasks:
        - name: 'Starting datacenter [ A ]'
          command: /usr/bin/enable-dc
In this example Ansible starts the software upgrade on the front ends only if all of the load balancers are successfully disabled.
.. _maximum_failure_percentage:
Setting a maximum failure percentage
------------------------------------
By default, Ansible continues to execute tasks as long as there are hosts that have not yet failed. In some situations, such as when executing a rolling update, you may want to abort the play when a certain threshold of failures has been reached. To achieve this, you can set a maximum failure percentage on a play::
    ---
    - hosts: webservers
      max_fail_percentage: 30
      serial: 10
The ``max_fail_percentage`` setting applies to each batch when you use it with :ref:`serial <rolling_update_batch_size>`. In the example above, if more than 3 of the 10 servers in the first (or any) batch of servers failed, the rest of the play would be aborted.
.. note::

   The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort the play when 2 of the systems failed, set the max_fail_percentage at 49 rather than 50.
Controlling errors in blocks
============================
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string::

    - debug:
        msg: test
      when: some_string_value | bool
If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string::

    - shell: echo "only on Red Hat 6, derivatives, and later"
      when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. versionadded:: 1.6
.. _filters_for_formatting_data:
.. _playbooks_lookups:
*******
Lookups
*******
Lookup plugins retrieve data from outside sources such as files, databases, key/value stores, APIs, and other services. Like all templating, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. Before Ansible 2.5, lookups were mostly used indirectly in ``with_<lookup>`` constructs for looping. Starting with Ansible 2.5, lookups are used more explicitly as part of Jinja2 expressions fed into the ``loop`` keyword.
.. _lookups_and_variables:
Using lookups in variables
==========================
You can populate variables using lookups. Ansible evaluates the value each time it is executed in a task (or template)::
    vars:
      motd_value: "{{ lookup('file', '/etc/motd') }}"

    tasks:

      - debug:
          msg: "motd value is {{ motd_value }}"
For more details and a list of lookup plugins in ansible-base, see :ref:`plugins_lookup`. You may also find lookup plugins in collections. You can review a list of lookup plugins installed on your control machine with the command ``ansible-doc -l -t lookup``.
.. seealso::
Module defaults
===============
If you frequently call the same module with the same arguments, it can be useful to define default arguments for that particular module using the ``module_defaults`` attribute.
Here is a basic example:: Here is a basic example::
The ``module_defaults`` attribute can be used at the play, block, and task level. For example::

    debug:
        msg: "a default message"
You can remove any previously established defaults for a module by specifying an empty dict::

    - file:
        state: touch
Module defaults groups
----------------------
.. versionadded:: 2.7
Ansible 2.7 adds a preview-status feature to group together modules that share common sets of parameters. This makes it easier to author playbooks making heavy use of API-based modules such as cloud modules.
+---------+---------------------------+-----------------+
| Group   | Purpose                   | Ansible Version |
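For example, a sketch of setting a default region for all modules in the ``aws`` group (the region and bucket name are assumed for illustration)::

    - hosts: localhost
      module_defaults:
        group/aws:
          region: us-west-2
      tasks:
        - aws_s3:
            bucket: my_bucket
            mode: list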

When you add a tag to an ``import_role`` statement, Ansible applies the tag to `all` tasks within the role. See :ref:`tag_inheritance` for details.
.. _run_role_twice:
Running a role multiple times in one playbook
=============================================

Advanced Playbooks Features
===========================

As you write more playbooks and roles, you might have some special use cases. For example, you may want to execute "dry runs" of your playbooks (:ref:`check_mode_dry`), ask playbook users to supply information (:ref:`playbooks_prompts`), retrieve information from an external datastore or API (:ref:`lookup_plugins`), or change the way Ansible handles failures (:ref:`playbooks_error_handling`). The topics listed on this page cover these use cases and many more. If you cannot achieve your goals with basic Ansible concepts and actions, browse through these topics for help with your use case.

.. toctree::
   :maxdepth: 1

.. _playbooks_start_and_step:

***************************************
Executing playbooks for troubleshooting
***************************************

When you are testing new plays or debugging playbooks, you may need to run the same play multiple times. To make this more efficient, Ansible offers two alternative ways to execute a playbook: start-at-task and step mode.
.. _start_at_task:

start-at-task
-------------

To start executing your playbook at a particular task (usually the task that failed on the previous run), use the ``--start-at-task`` option::

    ansible-playbook playbook.yml --start-at-task="install packages"

In this example, Ansible starts executing your playbook at a task named "install packages". This feature does not work with tasks inside dynamically re-used roles or tasks (``include_*``), see :ref:`dynamic_vs_static`.
.. _step:

Step mode
---------

To execute a playbook interactively, use ``--step``::

    ansible-playbook playbook.yml --step

With this option, Ansible stops on each task, and asks if it should execute that task. For example, if you have a task called "configure ssh", the playbook run will stop and ask::

    Perform task: configure ssh (y/n/c):

Answer "y" to execute the task, answer "n" to skip the task, and answer "c" to exit step mode, executing all remaining tasks without asking.
.. seealso::

   :ref:`playbooks_intro`
       An introduction to playbooks
   :ref:`playbook_debugger`
       Using the Ansible debugger

Controlling playbook execution: strategies and more
===================================================

By default, Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks. If you want to change this default behavior, you can use a different strategy plugin, change the number of forks, or apply one of several keywords like ``serial``.

.. contents::
   :local:

Selecting a strategy
--------------------

The default behavior described above is the :ref:`linear strategy<linear_strategy>`. Ansible offers other strategies, including the :ref:`debug strategy<debug_strategy>` (see also :ref:`playbook_debugger`) and the :ref:`free strategy<free_strategy>`, which allows each host to run until the end of the play as fast as it can::

    - hosts: all
      strategy: free
Using keywords to control execution
-----------------------------------

In addition to strategies, several :ref:`keywords<playbook_keywords>` also affect play execution. You can set a number, a percentage, or a list of numbers of hosts you want to manage at a time with ``serial``. Ansible completes the play on the specified number or percentage of hosts before starting the next batch of hosts. You can restrict the number of workers allotted to a block or task with ``throttle``. You can control how Ansible selects the next host in a group to execute against with ``order``. These keywords are not strategies. They are directives or options applied to a play, block, or task.
.. _rolling_update_batch_size:
Setting the batch size with ``serial``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, Ansible runs in parallel against all the hosts in the :ref:`pattern <intro_patterns>` you set in the ``hosts:`` field of each play. If you want to manage only a few machines at a time, for example during a rolling update, you can define how many hosts Ansible should manage at a single time using the ``serial`` keyword::
    ---
    - name: test play
      hosts: webservers
      serial: 2
      gather_facts: False

      tasks:
        - name: first task
          command: hostname
        - name: second task
          command: hostname
In the above example, if we had 4 hosts in the group 'webservers', Ansible would execute the play completely (both tasks) on 2 of the hosts before moving on to the next 2 hosts::
    PLAY [webservers] ****************************************

    TASK [first task] ****************************************
    changed: [web2]
    changed: [web1]

    TASK [second task] ***************************************
    changed: [web1]
    changed: [web2]

    PLAY [webservers] ****************************************

    TASK [first task] ****************************************
    changed: [web3]
    changed: [web4]

    TASK [second task] ***************************************
    changed: [web3]
    changed: [web4]

    PLAY RECAP ***********************************************
    web1      : ok=2    changed=2    unreachable=0    failed=0
    web2      : ok=2    changed=2    unreachable=0    failed=0
    web3      : ok=2    changed=2    unreachable=0    failed=0
    web4      : ok=2    changed=2    unreachable=0    failed=0
You can also specify a percentage with the ``serial`` keyword. Ansible applies the percentage to the total number of hosts in a play to determine the number of hosts per pass::
    ---
    - name: test play
      hosts: webservers
      serial: "30%"
If the number of hosts does not divide equally into the number of passes, the final pass contains the remainder. In this example, if you had 20 hosts in the webservers group, the first batch would contain 6 hosts, the second batch would contain 6 hosts, the third batch would contain 6 hosts, and the last batch would contain 2 hosts.
You can also specify batch sizes as a list. For example::
    ---
    - name: test play
      hosts: webservers
      serial:
        - 1
        - 5
        - 10
In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left), every following batch would contain either 10 hosts or all the remaining hosts, if fewer than 10 hosts remained.
You can list multiple batch sizes as percentages::
    ---
    - name: test play
      hosts: webservers
      serial:
        - "10%"
        - "20%"
        - "100%"
You can also mix and match the values::
    ---
    - name: test play
      hosts: webservers
      serial:
        - 1
        - 5
        - "20%"
.. note::
    No matter how small the percentage, the number of hosts per pass will always be 1 or greater.
Restricting execution with ``throttle``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``throttle`` keyword limits the number of workers for a particular task. It can be set at the block and task level. Use ``throttle`` to restrict tasks that may be CPU-intensive or interact with a rate-limiting API::
    tasks:
    - command: /path/to/cpu_intensive_command
      throttle: 1
If you have already restricted the number of forks or the number of machines to execute against in parallel, you can reduce the number of workers with ``throttle``, but you cannot increase it. In other words, to have an effect, your ``throttle`` setting must be lower than your ``forks`` or ``serial`` setting if you are using them together.
Ordering execution based on inventory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``order`` keyword controls the order in which hosts are run. Possible values for order are:

inventory:
    The order provided by the inventory
reverse_sorted:
    Hosts sorted in reverse alphabetical order
shuffle:
    Randomly ordered on each run
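For example, a sketch that runs a play against its hosts in reverse alphabetical order (the group name is assumed for illustration)::

    - hosts: webservers
      order: reverse_sorted
      gather_facts: no
      tasks:
        - name: show each host as it runs
          command: hostname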
Other keywords that affect play execution include ``ignore_errors``, ``ignore_unreachable``, and ``any_errors_fatal``. These options are documented in :ref:`playbooks_error_handling`.
.. seealso::


.. _vars_and_facts:
************************************************
Discovering variables: facts and magic variables
************************************************
With Ansible you can retrieve or discover certain variables containing information about your remote systems or about Ansible itself. Variables related to remote systems are called facts. With facts, you can use the behavior or state of one system as configuration on other systems. For example, you can use the IP address of one system as a configuration value on another system. Variables related to Ansible are called magic variables.
.. contents::
:local:
Ansible facts
=============
Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more. You can access this data in the ``ansible_facts`` variable. By default, you can also access some Ansible facts as top-level variables with the ``ansible_`` prefix. You can disable this behavior using the :ref:`INJECT_FACTS_AS_VARS` setting. To see all available facts, add this task to a play::
- debug: var=ansible_facts
To see the 'raw' information as gathered, run this command at the command line::
ansible <hostname> -m setup
Facts include a large amount of variable data, which may look like this on Ansible 2.7:
.. code-block:: json
{
"ansible_all_ipv4_addresses": [
"REDACTED IP ADDRESS"
],
"ansible_all_ipv6_addresses": [
"REDACTED IPV6 ADDRESS"
],
"ansible_apparmor": {
"status": "disabled"
},
"ansible_architecture": "x86_64",
"ansible_bios_date": "11/28/2013",
"ansible_bios_version": "4.1.5",
"ansible_cmdline": {
"BOOT_IMAGE": "/boot/vmlinuz-3.10.0-862.14.4.el7.x86_64",
"console": "ttyS0,115200",
"no_timer_check": true,
"nofb": true,
"nomodeset": true,
"ro": true,
"root": "LABEL=cloudimg-rootfs",
"vga": "normal"
},
"ansible_date_time": {
"date": "2018-10-25",
"day": "25",
"epoch": "1540469324",
"hour": "12",
"iso8601": "2018-10-25T12:08:44Z",
"iso8601_basic": "20181025T120844109754",
"iso8601_basic_short": "20181025T120844",
"iso8601_micro": "2018-10-25T12:08:44.109968Z",
"minute": "08",
"month": "10",
"second": "44",
"time": "12:08:44",
"tz": "UTC",
"tz_offset": "+0000",
"weekday": "Thursday",
"weekday_number": "4",
"weeknumber": "43",
"year": "2018"
},
"ansible_default_ipv4": {
"address": "REDACTED",
"alias": "eth0",
"broadcast": "REDACTED",
"gateway": "REDACTED",
"interface": "eth0",
"macaddress": "REDACTED",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "REDACTED",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_device_links": {
"ids": {},
"labels": {
"xvda1": [
"cloudimg-rootfs"
],
"xvdd": [
"config-2"
]
},
"masters": {},
"uuids": {
"xvda1": [
"cac81d61-d0f8-4b47-84aa-b48798239164"
],
"xvdd": [
"2018-10-25-12-05-57-00"
]
}
},
"ansible_devices": {
"xvda": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": []
},
"model": null,
"partitions": {
"xvda1": {
"holders": [],
"links": {
"ids": [],
"labels": [
"cloudimg-rootfs"
],
"masters": [],
"uuids": [
"cac81d61-d0f8-4b47-84aa-b48798239164"
]
},
"sectors": "83883999",
"sectorsize": 512,
"size": "40.00 GB",
"start": "2048",
"uuid": "cac81d61-d0f8-4b47-84aa-b48798239164"
}
},
"removable": "0",
"rotational": "0",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "deadline",
"sectors": "83886080",
"sectorsize": "512",
"size": "40.00 GB",
"support_discard": "0",
"vendor": null,
"virtual": 1
},
"xvdd": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [
"config-2"
],
"masters": [],
"uuids": [
"2018-10-25-12-05-57-00"
]
},
"model": null,
"partitions": {},
"removable": "0",
"rotational": "0",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "deadline",
"sectors": "131072",
"sectorsize": "512",
"size": "64.00 MB",
"support_discard": "0",
"vendor": null,
"virtual": 1
},
"xvde": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": []
},
"model": null,
"partitions": {
"xvde1": {
"holders": [],
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": []
},
"sectors": "167770112",
"sectorsize": 512,
"size": "80.00 GB",
"start": "2048",
"uuid": null
}
},
"removable": "0",
"rotational": "0",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "deadline",
"sectors": "167772160",
"sectorsize": "512",
"size": "80.00 GB",
"support_discard": "0",
"vendor": null,
"virtual": 1
}
},
"ansible_distribution": "CentOS",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/redhat-release",
"ansible_distribution_file_variety": "RedHat",
"ansible_distribution_major_version": "7",
"ansible_distribution_release": "Core",
"ansible_distribution_version": "7.5.1804",
"ansible_dns": {
"nameservers": [
"127.0.0.1"
]
},
"ansible_domain": "",
"ansible_effective_group_id": 1000,
"ansible_effective_user_id": 1000,
"ansible_env": {
"HOME": "/home/zuul",
"LANG": "en_US.UTF-8",
"LESSOPEN": "||/usr/bin/lesspipe.sh %s",
"LOGNAME": "zuul",
"MAIL": "/var/mail/zuul",
"PATH": "/usr/local/bin:/usr/bin",
"PWD": "/home/zuul",
"SELINUX_LEVEL_REQUESTED": "",
"SELINUX_ROLE_REQUESTED": "",
"SELINUX_USE_CURRENT_RANGE": "",
"SHELL": "/bin/bash",
"SHLVL": "2",
"SSH_CLIENT": "REDACTED 55672 22",
"SSH_CONNECTION": "REDACTED 55672 REDACTED 22",
"USER": "zuul",
"XDG_RUNTIME_DIR": "/run/user/1000",
"XDG_SESSION_ID": "1",
"_": "/usr/bin/python2"
},
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "REDACTED",
"broadcast": "REDACTED",
"netmask": "255.255.255.0",
"network": "REDACTED"
},
"ipv6": [
{
"address": "REDACTED",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "REDACTED",
"module": "xen_netfront",
"mtu": 1500,
"pciid": "vif-0",
"promisc": false,
"type": "ether"
},
"ansible_eth1": {
"active": true,
"device": "eth1",
"ipv4": {
"address": "REDACTED",
"broadcast": "REDACTED",
"netmask": "255.255.224.0",
"network": "REDACTED"
},
"ipv6": [
{
"address": "REDACTED",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "REDACTED",
"module": "xen_netfront",
"mtu": 1500,
"pciid": "vif-1",
"promisc": false,
"type": "ether"
},
"ansible_fips": false,
"ansible_form_factor": "Other",
"ansible_fqdn": "centos-7-rax-dfw-0003427354",
"ansible_hostname": "centos-7-rax-dfw-0003427354",
"ansible_interfaces": [
"lo",
"eth1",
"eth0"
],
"ansible_is_chroot": false,
"ansible_kernel": "3.10.0-862.14.4.el7.x86_64",
"ansible_lo": {
"active": true,
"device": "lo",
"ipv4": {
"address": "127.0.0.1",
"broadcast": "host",
"netmask": "255.0.0.0",
"network": "127.0.0.0"
},
"ipv6": [
{
"address": "::1",
"prefix": "128",
"scope": "host"
}
],
"mtu": 65536,
"promisc": false,
"type": "loopback"
},
"ansible_local": {},
"ansible_lsb": {
"codename": "Core",
"description": "CentOS Linux release 7.5.1804 (Core)",
"id": "CentOS",
"major_release": "7",
"release": "7.5.1804"
},
"ansible_machine": "x86_64",
"ansible_machine_id": "2db133253c984c82aef2fafcce6f2bed",
"ansible_memfree_mb": 7709,
"ansible_memory_mb": {
"nocache": {
"free": 7804,
"used": 173
},
"real": {
"free": 7709,
"total": 7977,
"used": 268
},
"swap": {
"cached": 0,
"free": 0,
"total": 0,
"used": 0
}
},
"ansible_memtotal_mb": 7977,
"ansible_mounts": [
{
"block_available": 7220998,
"block_size": 4096,
"block_total": 9817227,
"block_used": 2596229,
"device": "/dev/xvda1",
"fstype": "ext4",
"inode_available": 10052341,
"inode_total": 10419200,
"inode_used": 366859,
"mount": "/",
"options": "rw,seclabel,relatime,data=ordered",
"size_available": 29577207808,
"size_total": 40211361792,
"uuid": "cac81d61-d0f8-4b47-84aa-b48798239164"
},
{
"block_available": 0,
"block_size": 2048,
"block_total": 252,
"block_used": 252,
"device": "/dev/xvdd",
"fstype": "iso9660",
"inode_available": 0,
"inode_total": 0,
"inode_used": 0,
"mount": "/mnt/config",
"options": "ro,relatime,mode=0700",
"size_available": 0,
"size_total": 516096,
"uuid": "2018-10-25-12-05-57-00"
}
],
"ansible_nodename": "centos-7-rax-dfw-0003427354",
"ansible_os_family": "RedHat",
"ansible_pkg_mgr": "yum",
"ansible_processor": [
"0",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"1",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"2",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"3",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"4",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"5",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"6",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"7",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz"
],
"ansible_processor_cores": 8,
"ansible_processor_count": 8,
"ansible_processor_nproc": 8,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 8,
"ansible_product_name": "HVM domU",
"ansible_product_serial": "REDACTED",
"ansible_product_uuid": "REDACTED",
"ansible_product_version": "4.1.5",
"ansible_python": {
"executable": "/usr/bin/python2",
"has_sslcontext": true,
"type": "CPython",
"version": {
"major": 2,
"micro": 5,
"minor": 7,
"releaselevel": "final",
"serial": 0
},
"version_info": [
2,
7,
5,
"final",
0
]
},
"ansible_python_version": "2.7.5",
"ansible_real_group_id": 1000,
"ansible_real_user_id": 1000,
"ansible_selinux": {
"config_mode": "enforcing",
"mode": "enforcing",
"policyvers": 31,
"status": "enabled",
"type": "targeted"
},
"ansible_selinux_python_present": true,
"ansible_service_mgr": "systemd",
"ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE",
"ansible_ssh_host_key_ed25519_public": "REDACTED KEY VALUE",
"ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE",
"ansible_swapfree_mb": 0,
"ansible_swaptotal_mb": 0,
"ansible_system": "Linux",
"ansible_system_capabilities": [
""
],
"ansible_system_capabilities_enforced": "True",
"ansible_system_vendor": "Xen",
"ansible_uptime_seconds": 151,
"ansible_user_dir": "/home/zuul",
"ansible_user_gecos": "",
"ansible_user_gid": 1000,
"ansible_user_id": "zuul",
"ansible_user_shell": "/bin/bash",
"ansible_user_uid": 1000,
"ansible_userspace_architecture": "x86_64",
"ansible_userspace_bits": "64",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "xen",
"gather_subset": [
"all"
],
"module_setup": true
}
You can reference the model of the first disk in the facts shown above in a template or playbook as::
{{ ansible_facts['devices']['xvda']['model'] }}
To reference the system hostname::
{{ ansible_facts['nodename'] }}
You can use facts in conditionals (see :ref:`playbooks_conditionals`) and also in templates. You can also use facts to create dynamic groups of hosts that match particular criteria, see the :ref:`group_by module <group_by_module>` documentation for details.
.. _fact_caching:
Caching facts
-------------
Like registered variables, facts are stored in memory by default. However, unlike registered variables, facts can be gathered independently and cached for repeated use. With cached facts, you can refer to facts from one system when configuring a second system, even if Ansible executes the current play on the second system first. For example::
{{ hostvars['asdf.example.com']['ansible_facts']['os_family'] }}
Caching is controlled by the cache plugins. By default, Ansible uses the memory cache plugin, which stores facts in memory for the duration of the current playbook run. To retain Ansible facts for repeated use, select a different cache plugin. See :ref:`cache_plugins` for details.
Fact caching can improve performance. If you manage thousands of hosts, you can configure fact caching to run nightly, then manage configuration on a smaller set of servers periodically throughout the day. With cached facts, you have access to variables and information about all hosts even when you are only managing a small number of servers.
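For example, a minimal ``ansible.cfg`` sketch enabling the ``jsonfile`` cache plugin (the cache path and timeout values are assumed for illustration)::

    [defaults]
    gathering = smart
    fact_caching = jsonfile
    fact_caching_connection = /tmp/ansible_fact_cache
    fact_caching_timeout = 86400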
.. _disabling_facts:
Disabling facts
---------------
By default, Ansible gathers facts at the beginning of each play. If you do not need to gather facts (for example, if you already know everything about your systems centrally), you can turn off fact gathering at the play level to improve scalability. Disabling facts may particularly improve performance in push mode with very large numbers of systems, or if you are using Ansible on experimental platforms. To disable fact gathering::
    - hosts: whatever
      gather_facts: no
Adding custom facts
-------------------
The setup module in Ansible automatically discovers a standard set of facts about each host. If you want to add custom values to your facts, you can write a custom facts module, set temporary facts with a ``set_fact`` task, or provide permanent custom facts using the facts.d directory.
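For example, a minimal sketch of a temporary fact set with ``set_fact`` (the variable name and value are assumed for illustration)::

    - name: set a temporary custom fact
      set_fact:
        my_app_port: 8080

    - name: use the fact later in the same play
      debug:
        msg: "App listens on {{ my_app_port }}"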
.. _local_facts:
facts.d or local facts
^^^^^^^^^^^^^^^^^^^^^^
.. versionadded:: 1.3
You can add static custom facts by adding static files to facts.d, or add dynamic facts by adding executable scripts to facts.d. For example, you can add a list of all users on a host to your facts by creating and running a script in facts.d.
To use facts.d, create an ``/etc/ansible/facts.d`` directory on the remote host or hosts. If you prefer a different directory, create it and specify it using the ``fact_path`` play keyword. Add files to the directory to supply your custom facts. All file names must end with ``.fact``. The files can be JSON, INI, or executable files returning JSON.
To add static facts, add a file with the ``.fact`` extension. For example, create ``/etc/ansible/facts.d/preferences.fact`` with this content::
[general]
asdf=1
bar=2
The next time fact gathering runs, your facts will include a hash variable fact named ``general`` with ``asdf`` and ``bar`` as members. To validate this, run the following::
ansible <hostname> -m setup -a "filter=ansible_local"
And you will see your custom fact added::
"ansible_local": {
"preferences": {
"general": {
"asdf" : "1",
"bar" : "2"
}
}
}
The ansible_local namespace separates custom facts created by facts.d from system facts or variables defined elsewhere in the playbook, so variables will not override each other. You can access this custom fact in a template or playbook as::
{{ ansible_local['preferences']['general']['asdf'] }}
.. note:: The key part in the key=value pairs will be converted into lowercase inside the ansible_local variable. Using the example above, if the ini file contained ``XYZ=3`` in the ``[general]`` section, then you should expect to access it as: ``{{ ansible_local['preferences']['general']['xyz'] }}`` and not ``{{ ansible_local['preferences']['general']['XYZ'] }}``. This is because Ansible uses Python's `ConfigParser`_ which passes all option names through the `optionxform`_ method and this method's default implementation converts option names to lower case.
.. _ConfigParser: https://docs.python.org/2/library/configparser.html
.. _optionxform: https://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser.optionxform
You can also use facts.d to execute a script on the remote host, generating dynamic custom facts to the ansible_local namespace. For example, you can generate a list of all users that exist on a remote host as a fact about that host. To generate dynamic custom facts using facts.d:
#. Write and test a script to generate the JSON data you want.
#. Save the script in your facts.d directory.
#. Make sure your script has the ``.fact`` file extension.
#. Make sure your script is executable by the Ansible connection user.
#. Gather facts to execute the script and add the JSON output to ansible_local.
By default, fact gathering runs once at the beginning of each play. If you create a custom fact using facts.d in a playbook, it will be available in the next play that gathers facts. If you want to use it in the same play where you created it, you must explicitly re-run the setup module. For example::
- hosts: webservers
tasks:
- name: create directory for ansible custom facts
file: state=directory recurse=yes path=/etc/ansible/facts.d
- name: install custom ipmi fact
copy: src=ipmi.fact dest=/etc/ansible/facts.d
- name: re-read facts after adding custom fact
setup: filter=ansible_local
If you use this pattern frequently, a custom facts module would be more efficient than facts.d.
.. _magic_variables_and_hostvars:
Information about Ansible: magic variables
==========================================
You can access information about Ansible operations, including the python version being used, the hosts and groups in inventory, and the directories for playbooks and roles, using "magic" variables. Like connection variables, magic variables are :ref:`special_variables`. Magic variable names are reserved - do not set variables with these names. The variable ``environment`` is also reserved.
The most commonly used magic variables are ``hostvars``, ``groups``, ``group_names``, and ``inventory_hostname``. With ``hostvars``, you can access variables defined for any host in the play, at any point in a playbook. You can access Ansible facts using the ``hostvars`` variable too, but only after you have gathered (or cached) facts.
If you want to configure your database server using the value of a 'fact' from another node, or the value of an inventory variable assigned to another node, you can use ``hostvars`` in a template or on an action line::
{{ hostvars['test.example.com']['ansible_facts']['distribution'] }}
With ``groups``, a list of all the groups (and hosts) in the inventory, you can enumerate all hosts within a group. For example:
.. code-block:: jinja
{% for host in groups['app_servers'] %}
# something that applies to all app servers.
{% endfor %}
You can use ``groups`` and ``hostvars`` together to find all the IP addresses in a group.
.. code-block:: jinja
{% for host in groups['app_servers'] %}
{{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}
{% endfor %}
You can use this approach to point a frontend proxy server to all the hosts in your app servers group, to set up the correct firewall rules between servers, and so on. You must either cache facts or gather facts for those hosts before the task that fills out the template.
With ``group_names``, a list (array) of all the groups the current host is in, you can create templated files that vary based on the group membership (or role) of the host:
.. code-block:: jinja
{% if 'webserver' in group_names %}
# some part of a configuration file that only applies to webservers
{% endif %}
You can use the magic variable ``inventory_hostname``, the name of the host as configured in your inventory, as an alternative to ``ansible_hostname`` when fact-gathering is disabled. If you have a long FQDN, you can use ``inventory_hostname_short``, which contains the part up to the first period, without the rest of the domain.
Other useful magic variables refer to the current play or playbook. These variables may be useful for filling out templates with multiple hostnames or for injecting the list into the rules for a load balancer.
``ansible_play_hosts`` is the list of all hosts still active in the current play.
``ansible_play_batch`` is a list of hostnames that are in scope for the current 'batch' of the play.
The batch size is defined by ``serial``, when not set it is equivalent to the whole play (making it the same as ``ansible_play_hosts``).
``ansible_playbook_python`` is the path to the python executable used to invoke the Ansible command line tool.
``inventory_dir`` is the pathname of the directory holding Ansible's inventory host file.
``inventory_file`` is the pathname and the filename pointing to the Ansible's inventory host file.
``playbook_dir`` contains the playbook base directory.
``role_path`` contains the current role's pathname and only works inside a role.
``ansible_check_mode`` is a boolean, set to ``True`` if you run Ansible with ``--check``.
.. _ansible_version:
Ansible version
---------------
.. versionadded:: 1.8
To adapt playbook behavior to different versions of Ansible, you can use the variable ``ansible_version``, which has the following structure::
"ansible_version": {
"full": "2.0.0.2",
"major": 2,
"minor": 0,
"revision": 0,
"string": "2.0.0.2"
}
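For example, a sketch that adapts behavior to the running Ansible version (the version threshold is assumed for illustration)::

    - name: run only on Ansible 2.5 or later
      debug:
        msg: "Running Ansible {{ ansible_version.full }}"
      when: ansible_version.full is version('2.5', '>=')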