diff --git a/docs/docsite/rst/plugins/lookup.rst b/docs/docsite/rst/plugins/lookup.rst index 2eb0c94901d..cb76fa26dab 100644 --- a/docs/docsite/rst/plugins/lookup.rst +++ b/docs/docsite/rst/plugins/lookup.rst @@ -7,20 +7,12 @@ Lookup Plugins :local: :depth: 2 -Lookup plugins allow Ansible to access data from outside sources. -This can include reading the filesystem in addition to contacting external datastores and services. -Like all templating, these plugins are evaluated on the Ansible control machine, not on the target/remote. - -The data returned by a lookup plugin is made available using the standard templating system in Ansible, -and are typically used to load variables or templates with information from those systems. - -Lookups are an Ansible-specific extension to the Jinja2 templating language. +Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all :ref:`templating `, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources. .. note:: - Lookups are executed with a working directory relative to the role or play, as opposed to local tasks, which are executed relative the executed script. - - Since Ansible version 1.9, you can pass wantlist=True to lookups to use in Jinja2 template "for" loops. - - Lookup plugins are an advanced feature; to best leverage them you should have a good working knowledge of how to use Ansible plays. + - Pass ``wantlist=True`` to lookups to use in Jinja2 template "for" loops. .. warning:: - Some lookups pass arguments to a shell. When using variables from a remote/untrusted source, use the `|quote` filter to ensure safe usage. @@ -31,7 +23,7 @@ Lookups are an Ansible-specific extension to the Jinja2 templating language. Enabling lookup plugins ----------------------- -You can activate a custom lookup by either dropping it into a ``lookup_plugins`` directory adjacent to your play, inside a role, or by putting it in one of the lookup directory sources configured in :ref:`ansible.cfg `. +Ansible enables all lookup plugins it can find. You can activate a custom lookup by either dropping it into a ``lookup_plugins`` directory adjacent to your play, inside the ``plugins/lookup/`` directory of a collection you have installed, inside a standalone role, or in one of the lookup directory sources configured in :ref:`ansible.cfg `. .. _using_lookup: @@ -39,22 +31,21 @@ You can activate a custom lookup by either dropping it into a ``lookup_plugins`` Using lookup plugins -------------------- -Lookup plugins can be used anywhere you can use templating in Ansible: in a play, in variables file, or in a Jinja2 template for the :ref:`template ` module. +You can use lookup plugins anywhere you can use templating in Ansible: in a play, in variables file, or in a Jinja2 template for the :ref:`template ` module. .. code-block:: YAML+Jinja vars: file_contents: "{{lookup('file', 'path/to/file.txt')}}" -Lookups are an integral part of loops. Wherever you see ``with_``, the part after the underscore is the name of a lookup. -This is also the reason most lookups output lists and take lists as input; for example, ``with_items`` uses the :ref:`items ` lookup:: +Lookups are an integral part of loops. 
Wherever you see ``with_``, the part after the underscore is the name of a lookup. For this reason, most lookups output lists and take lists as input; for example, ``with_items`` uses the :ref:`items ` lookup:: tasks: - name: count to 3 debug: msg={{item}} with_items: [1, 2, 3] -You can combine lookups with :ref:`playbooks_filters`, :ref:`playbooks_tests` and even each other to do some complex data generation and manipulation. For example:: +You can combine lookups with :ref:`filters `, :ref:`tests ` and even each other to do some complex data generation and manipulation. For example:: tasks: - name: valid but useless and over complicated chained lookups and filters @@ -66,16 +57,16 @@ You can combine lookups with :ref:`playbooks_filters`, :ref:`playbooks_tests` an .. versionadded:: 2.6 -You can now control how errors behave in all lookup plugins by setting ``errors`` to ``ignore``, ``warn``, or ``strict``. The default setting is ``strict``, which causes the task to fail. For example: +You can control how errors behave in all lookup plugins by setting ``errors`` to ``ignore``, ``warn``, or ``strict``. The default setting is ``strict``, which causes the task to fail if the lookup returns an error. For example: -To ignore errors:: +To ignore lookup errors:: - - name: file doesnt exist, but i dont care .. file plugin itself warns anyways ... - debug: msg="{{ lookup('file', '/idontexist', errors='ignore') }}" + - name: if this file does not exist, I do not care .. file plugin itself warns anyway ... + debug: msg="{{ lookup('file', '/nosuchfile', errors='ignore') }}" .. code-block:: ansible-output - [WARNING]: Unable to find '/idontexist' in expected paths (use -vvvvv to see paths) + [WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths) ok: [localhost] => { "msg": "" @@ -84,43 +75,43 @@ To ignore errors:: To get a warning instead of a failure:: - - name: file doesnt exist, let me know, but continue - debug: msg="{{ lookup('file', '/idontexist', errors='warn') }}" + - name: if this file does not exist, let me know, but continue + debug: msg="{{ lookup('file', '/nosuchfile', errors='warn') }}" .. code-block:: ansible-output - [WARNING]: Unable to find '/idontexist' in expected paths (use -vvvvv to see paths) + [WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths) - [WARNING]: An unhandled exception occurred while running the lookup plugin 'file'. Error was a , original message: could not locate file in lookup: /idontexist + [WARNING]: An unhandled exception occurred while running the lookup plugin 'file'. Error was a , original message: could not locate file in lookup: /nosuchfile ok: [localhost] => { "msg": "" } -Fatal error (the default):: +To get a fatal error (the default):: - - name: file doesnt exist, FAIL (this is the default) - debug: msg="{{ lookup('file', '/idontexist', errors='strict') }}" + - name: if this file does not exist, FAIL (this is the default) + debug: msg="{{ lookup('file', '/nosuchfile', errors='strict') }}" .. code-block:: ansible-output - [WARNING]: Unable to find '/idontexist' in expected paths (use -vvvvv to see paths) + [WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths) - fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a , original message: could not locate file in lookup: /idontexist"} + fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. 
Error was a , original message: could not locate file in lookup: /nosuchfile"} .. _query: -Invoking lookup plugins with ``query`` --------------------------------------- +Forcing lookups to return lists: ``query`` and ``wantlist=True`` +---------------------------------------------------------------- .. versionadded:: 2.5 -In Ansible 2.5, a new jinja2 function called ``query`` was added for invoking lookup plugins. The difference between ``lookup`` and ``query`` is largely that ``query`` will always return a list. +In Ansible 2.5, a new Jinja2 function called ``query`` was added for invoking lookup plugins. The difference between ``lookup`` and ``query`` is largely that ``query`` will always return a list. The default behavior of ``lookup`` is to return a string of comma separated values. ``lookup`` can be explicitly configured to return a list using ``wantlist=True``. -This was done primarily to provide an easier and more consistent interface for interacting with the new ``loop`` keyword, while maintaining backwards compatibility with other uses of ``lookup``. +This feature provides an easier and more consistent interface for interacting with the new ``loop`` keyword, while maintaining backwards compatibility with other uses of ``lookup``. The following examples are equivalent: @@ -130,7 +121,7 @@ The following examples are equivalent: query('dict', dict_variable) -As demonstrated above the behavior of ``wantlist=True`` is implicit when using ``query``. +As demonstrated above, the behavior of ``wantlist=True`` is implicit when using ``query``. Additionally, ``q`` was introduced as a shortform of ``query``: diff --git a/docs/docsite/rst/reference_appendices/special_variables.rst b/docs/docsite/rst/reference_appendices/special_variables.rst index fed8e9f8da1..53ad4b9f5d6 100644 --- a/docs/docsite/rst/reference_appendices/special_variables.rst +++ b/docs/docsite/rst/reference_appendices/special_variables.rst @@ -3,8 +3,8 @@ Special Variables ================= -Magic ------ +Magic variables +--------------- These variables cannot be set directly by the user; Ansible will always override them to reflect internal state. ansible_check_mode @@ -132,7 +132,7 @@ role_path Facts ----- -These are variables that contain information pertinent to the current host (`inventory_hostname`). They are only available if gathered first. +These are variables that contain information pertinent to the current host (`inventory_hostname`). They are only available if gathered first. See :ref:`vars_and_facts` for more information. ansible_facts Contains any facts gathered or cached for the `inventory_hostname` @@ -141,7 +141,7 @@ ansible_facts ansible_local Contains any 'local facts' gathered or cached for the `inventory_hostname`. The keys available depend on the custom facts created. - See the :ref:`setup ` module for more details. + See the :ref:`setup ` module and :ref:`local_facts` for more details. .. _connection_variables: diff --git a/docs/docsite/rst/user_guide/intro_inventory.rst b/docs/docsite/rst/user_guide/intro_inventory.rst index f0bdf1291ef..d0d35f8f9e9 100644 --- a/docs/docsite/rst/user_guide/intro_inventory.rst +++ b/docs/docsite/rst/user_guide/intro_inventory.rst @@ -177,6 +177,8 @@ For numeric patterns, leading zeros can be included or removed, as desired. Rang [databases] db-[a:f].example.com +.. 
_variables_in_inventory: + Adding variables to inventory ============================= diff --git a/docs/docsite/rst/user_guide/playbooks.rst b/docs/docsite/rst/user_guide/playbooks.rst index 5d34b4d3fca..e8fde483c27 100644 --- a/docs/docsite/rst/user_guide/playbooks.rst +++ b/docs/docsite/rst/user_guide/playbooks.rst @@ -24,6 +24,7 @@ You should look at `Example Playbooks ` (results of prior tasks). However, it is great for validating configuration management playbooks that run on one node at a time. To run a playbook in check mode:: ansible-playbook foo.yml --check .. _forcing_to_run_in_check_mode: -Enabling or disabling check mode for tasks -`````````````````````````````````````````` +Enforcing or preventing check mode on tasks +------------------------------------------- .. versionadded:: 2.2 -Sometimes you may want to modify the check mode behavior of individual tasks. This is done via the ``check_mode`` option, which can -be added to tasks. - -There are two options: - -1. Force a task to **run in check mode**, even when the playbook is called **without** ``--check``. This is called ``check_mode: yes``. -2. Force a task to **run in normal mode** and make changes to the system, even when the playbook is called **with** ``--check``. This is called ``check_mode: no``. - -.. note:: Prior to version 2.2 only the equivalent of ``check_mode: no`` existed. The notation for that was ``always_run: yes``. +If you want certain tasks to run in check mode always, or never, regardless of whether you run the playbook with or without ``--check``, you can add the ``check_mode`` option to those tasks: -Instead of ``yes``/``no`` you can use a Jinja2 expression, just like the ``when`` clause. + - To force a task to run in check mode, even when the playbook is called without ``--check``, set ``check_mode: yes``. + - To force a task to run in normal mode and make changes to the system, even when the playbook is called with ``--check``, set ``check_mode: no``. -Example:: +For example:: tasks: - - name: this task will make changes to the system even in check mode + - name: this task will always make changes to the system command: /something/to/run --even-in-check-mode check_mode: no - - name: this task will always run under checkmode and not change the system + - name: this task will never make changes to the system lineinfile: line: "important config" dest: /path/to/myconfig.conf state: present check_mode: yes + register: changes_to_important_config +Running single tasks with ``check_mode: yes`` can be useful for testing Ansible modules, either to test the module itself or to test the conditions under which a module would make changes. You can register variables (see :ref:`playbooks_conditionals`) on these tasks for even more detail on the potential changes. -Running single tasks with ``check_mode: yes`` can be useful to write tests for -ansible modules, either to test the module itself or to the conditions under -which a module would make changes. -With ``register`` (see :ref:`playbooks_conditionals`) you can check the -potential changes. +.. note:: Prior to version 2.2 only the equivalent of ``check_mode: no`` existed. The notation for that was ``always_run: yes``. -Information about check mode in variables -````````````````````````````````````````` +Skipping tasks or ignoring errors in check mode +----------------------------------------------- .. 
versionadded:: 2.1 -If you want to skip, or ignore errors on some tasks in check mode -you can use a boolean magic variable ``ansible_check_mode`` -which will be set to ``True`` during check mode. - -Example:: - +If you want to skip a task or ignore errors on a task when you run Ansible in check mode, you can use a boolean magic variable ``ansible_check_mode``, which is set to ``True`` when Ansible runs in check mode. For example:: tasks: @@ -86,23 +70,21 @@ Example:: .. _diff_mode: -Showing Differences with ``--diff`` -``````````````````````````````````` +Using diff mode +=============== -.. versionadded:: 1.1 +The ``--diff`` option for ansible-playbook can be used alone or with ``--check``. When you run in diff mode, any module that supports diff mode reports the changes made or, if used with ``--check``, the changes that would have been made. Diff mode is most common in modules that manipulate files (for example, the template module) but other modules might also show 'before and after' information (for example, the user module). -The ``--diff`` option to ansible-playbook works great with ``--check`` (detailed above) but can also be used by itself. -When this flag is supplied and the module supports this, Ansible will report back the changes made or, if used with ``--check``, the changes that would have been made. -This is mostly used in modules that manipulate files (i.e. template) but other modules might also show 'before and after' information (i.e. user). -Since the diff feature produces a large amount of output, it is best used when checking a single host at a time. For example:: +Diff mode produces a large amount of output, so it is best used when checking a single host at a time. For example:: ansible-playbook foo.yml --check --diff --limit foo.example.com .. versionadded:: 2.4 -The ``--diff`` option can reveal sensitive information. This option can be disabled for tasks by specifying ``diff: no``. +Enforcing or preventing diff mode on tasks +------------------------------------------ -Example:: +Because the ``--diff`` option can reveal sensitive information, you can disable it for a task by specifying ``diff: no``. For example:: tasks: - name: this task will not report a diff when the file changes diff --git a/docs/docsite/rst/user_guide/playbooks_conditionals.rst b/docs/docsite/rst/user_guide/playbooks_conditionals.rst index cf7263bae53..ac57cc83ad2 100644 --- a/docs/docsite/rst/user_guide/playbooks_conditionals.rst +++ b/docs/docsite/rst/user_guide/playbooks_conditionals.rst @@ -1,60 +1,144 @@ .. _playbooks_conditionals: +************ Conditionals -============ +************ -.. contents:: Topics +In a playbook, you may want to execute different tasks, or have different goals, depending on the value of a fact (data about the remote system), a variable, or the result of a previous task. You may want the value of some variables to depend on the value of other variables. Or you may want to create additional groups of hosts based on whether the hosts match other criteria. You can do all of these things with conditionals. - -Often the result of a play may depend on the value of a variable, fact (something learned about the remote system), or previous task result. -In some cases, the values of variables may depend on other variables. -Additional groups can be created to manage hosts based on whether the hosts match other criteria. This topic covers how conditionals are used in playbooks. +Ansible uses Jinja2 :ref:`tests ` and :ref:`filters ` in conditionals. 
Ansible supports all the standard tests and filters, and adds some unique ones as well. .. note:: - There are many options to control execution flow in Ansible. More examples of supported conditionals can be located here: https://jinja.palletsprojects.com/en/master/templates/#comparisons. + There are many options to control execution flow in Ansible. You can find more examples of supported conditionals at ``_. +.. contents:: + :local: .. _the_when_statement: -The When Statement -`````````````````` +Basic conditionals with ``when`` +================================ + +The simplest conditional statement applies to a single task. Create the task, then add a ``when`` statement that applies a test. The ``when`` clause is a raw Jinja2 expression without double curly braces (see :ref:`group_by_module`). When you run the task or playbook, Ansible evaluates the test for all hosts. On any host where the test passes (returns a value of True), Ansible runs that task. For example, if you are installing mysql on multiple machines, some of which have SELinux enabled, you might have a task to configure SELinux to allow mysql to run. You would only want that task to run on machines that have SELinux enabled: + +.. code-block:: yaml + + tasks: + - name: Configure SELinux to start mysql on any port + seboolean: name=mysql_connect_any state=true persistent=yes + when: ansible_selinux.status == "enabled" + # all variables can be used directly in conditionals without double curly braces + +Conditionals based on ansible_facts +----------------------------------- -Sometimes you will want to skip a particular step on a particular host. -This could be something as simple as not installing a certain package if the operating system is a particular version, -or it could be something like performing some cleanup steps if a filesystem is getting full. +Often you want to execute or skip a task based on facts. Facts are attributes of individual hosts, including IP address, operating system, the status of a filesystem, and many more. With conditionals based on facts: -This is easy to do in Ansible with the ``when`` clause, which contains a raw `Jinja2 expression `_ without double curly braces (see :ref:`group_by_module`). + - You can install a certain package only when the operating system is a particular version. + - You can skip configuring a firewall on hosts with internal IP addresses. + - You can perform cleanup tasks only when a filesystem is getting full. -.. note:: Jinja2 expressions are built up from comparisons, filters, tests, and logical combinations thereof. The below examples will give you an impression how to use them. However, for a more complete overview over all operators to use, please refer to the official `Jinja2 documentation `_ . +See :ref:`commonly_used_facts` for a list of facts that frequently appear in conditional statements. Not all facts exist for all hosts. For example, the 'lsb_major_release' fact used in an example below only exists when the lsb_release package is installed on the target host. To see what facts are available on your systems, add a debug task to your playbook:: -It's actually pretty simple:: + - debug: var=ansible_facts + +Here is a sample conditional based on a fact: + +.. 
code-block:: yaml tasks: - - name: "shut down Debian flavored systems" + - name: shut down Debian flavored systems command: /sbin/shutdown -t now when: ansible_facts['os_family'] == "Debian" - # note that all variables can be used directly in conditionals without double curly braces -You can also use `parentheses to group and logical operators `_ to combine conditions:: +If you have multiple conditions, you can group them with parentheses: + +.. code-block:: yaml tasks: - - name: "shut down CentOS 6 and Debian 7 systems" + - name: shut down CentOS 6 and Debian 7 systems command: /sbin/shutdown -t now when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7") -Multiple conditions that all need to be true (that is, a logical ``and``) can also be specified as a list:: +You can use `logical operators `_ to combine conditions. When you have multiple conditions that all need to be true (that is, a logical ``and``), you can specify them as a list:: tasks: - - name: "shut down CentOS 6 systems" + - name: shut down CentOS 6 systems command: /sbin/shutdown -t now when: - ansible_facts['distribution'] == "CentOS" - ansible_facts['distribution_major_version'] == "6" -A number of Jinja2 `"tests" and "filters" `_ can also be used in when statements, some of which are unique and provided by Ansible. -Suppose we want to ignore the error of one statement and then decide to do something conditionally based on success or failure:: +If a fact or variable is a string, and you need to run a mathematical comparison on it, use a filter to ensure that Ansible reads the value as an integer:: + + tasks: + - shell: echo "only on Red Hat 6, derivatives, and later" + when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6 + +.. _conditionals_registered_vars: + +Conditions based on registered variables +---------------------------------------- + +Often in a playbook you want to execute or skip a task based on the outcome of an earlier task. For example, you might want to configure a service after it is upgraded by an earlier task. To create a conditional based on a registered variable: + + #. Register the outcome of the earlier task as a variable. + #. Create a conditional test based on the registered variable. + +You create the name of the registered variable using the ``register`` keyword. A registered variable always contains the status of the task that created it as well as any output that task generated. You can use registered variables in templates and action lines as well as in conditional ``when`` statements. You can access the string contents of the registered variable using ``variable.stdout``. For example:: + + - name: test play + hosts: all + + tasks: + + - shell: cat /etc/motd + register: motd_contents + + - shell: echo "motd contains the word hi" + when: motd_contents.stdout.find('hi') != -1 + +You can use registered results in the loop of a task if the variable is a list. If the variable is not a list, you can convert it into a list, with either ``stdout_lines`` or with ``variable.stdout.split()``. 
You can also split the lines by other fields:: + + - name: registered variable usage as a loop list + hosts: all + tasks: + + - name: retrieve the list of home directories + command: ls /home + register: home_dirs + + - name: add home dirs to the backup spooler + file: + path: /mnt/bkspool/{{ item }} + src: /home/{{ item }} + state: link + loop: "{{ home_dirs.stdout_lines }}" + # same as loop: "{{ home_dirs.stdout.split() }}" + +The string content of a registered variable can be empty. If you want to run another task only on hosts where the stdout of your registered variable is empty, check the registered variable's string contents for emptiness: + +.. code-block:: yaml + + - name: check registered variable for emptiness + hosts: all + + tasks: + + - name: list contents of directory + command: ls mydir + register: contents + + - name: check contents for emptiness + debug: + msg: "Directory is empty" + when: contents.stdout == "" + +Ansible always registers something in a registered variable for every host, even on hosts where a task fails or Ansible skips a task because a condition is not met. To run a follow-up task on these hosts, query the registered variable for ``is skipped`` (not for "undefined" or "default"). See :ref:`registered_variables` for more information. Here are sample conditionals based on the success or failure of a task. Remember to ignore errors if you want Ansible to continue executing on a host when a failure occurs: + +.. code-block:: yaml tasks: - command: /bin/false @@ -64,53 +148,40 @@ Suppose we want to ignore the error of one statement and then decide to do somet - command: /bin/something when: result is failed - # Both `succeeded` and `success` both work. The former, however, is newer and uses the correct tense, while the latter is mainly used in older versions of Ansible. - command: /bin/something_else when: result is succeeded - command: /bin/still/something_else when: result is skipped +.. note:: Older versions of Ansible used ``success`` and ``fail``, but ``succeeded`` and ``failed`` use the correct tense. All of these options are now valid. -.. note:: both `success` and `succeeded` work (`similarly for fail`/`failed`, etc). -.. warning:: You might expect a variable of a skipped task to be undefined and use `defined` or `default` to check that. **This is incorrect**! Even when a task is failed or skipped the variable is still registered with a failed or skipped status. See :ref:`registered_variables`. - - -To see what facts are available on a particular system, you can do the following in a playbook:: - - - debug: var=ansible_facts - - -Tip: Sometimes you'll get back a variable that's a string and you'll want to do a math operation comparison on it. -You can do this like so:: - tasks: - - shell: echo "only on Red Hat 6, derivatives, and later" - when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release']|int >= 6 +Conditionals based on variables +------------------------------- -.. note:: the above example requires the lsb_release package on the target host in order to return the `lsb major_release` fact. +You can also create conditionals based on variables defined in the playbooks or inventory. Because conditionals require boolean input (a test must evaluate as True to trigger the condition), you must apply the ``| bool`` filter to non boolean variables, such as string variables with content like 'yes', 'on', '1', or 'true'. 
You can define variables like this: -Variables defined in the playbooks or inventory can also be used, just make sure to apply the ``|bool`` filter to non-boolean variables (e.g., `string` variables with content like ``yes``, ``on``, ``1``, ``true``). -An example may be the execution of a task based on a variable's boolean value:: +.. code-block:: yaml vars: epic: true monumental: "yes" -Then a conditional execution might look like:: +With the variables above, Ansible would run one of these tasks and skip the other: + +.. code-block:: yaml tasks: - shell: echo "This certainly is epic!" - when: epic or monumental|bool + when: epic or monumental | bool -or:: - - tasks: - shell: echo "This certainly isn't epic!" when: not epic -If a required variable has not been set, you can skip or fail using Jinja2's ``defined`` test. -For example:: +If a required variable has not been set, you can skip or fail using Jinja2's `defined` test. For example: + +.. code-block:: yaml tasks: - shell: echo "I've got '{{ foo }}' and am not afraid to use it!" @@ -120,27 +191,33 @@ For example:: when: bar is undefined This is especially useful in combination with the conditional import of vars files (see below). -As the examples show, you don't need to use ``{{ }}`` to use variables inside conditionals, as these are already implied. +As the examples show, you do not need to use `{{ }}` to use variables inside conditionals, as these are already implied. .. _loops_and_conditionals: -Loops and Conditionals -`````````````````````` -Combining ``when`` with loops (see :ref:`playbooks_loops`), be aware that the ``when`` statement is processed separately for each item. This is by design:: +Using conditionals in loops +--------------------------- + +If you combine a ``when`` statement with a :ref:`loop `, Ansible processes the condition separately for each item. This is by design, so you can execute the task on some items in the loop and skip it on other items. For example: + +.. code-block:: yaml tasks: - command: echo {{ item }} loop: [ 0, 2, 4, 6, 8, 10 ] when: item > 5 -If you need to skip the whole task depending on the loop variable being defined, used the ``|default`` filter to provide an empty iterator:: +If you need to skip the whole task when the loop variable is undefined, use the `|default` filter to provide an empty iterator. For example, when looping over a list: + +.. code-block:: yaml - command: echo {{ item }} loop: "{{ mylist|default([]) }}" when: item > 5 +You can do the same thing when looping over a dict: -If using a dict in a loop:: +.. code-block:: yaml - command: echo {{ item.key }} loop: "{{ query('dict', mydict|default({})) }}" @@ -148,12 +225,12 @@ If using a dict in a loop:: .. _loading_in_custom_facts: -Loading in Custom Facts -``````````````````````` +Loading custom facts +-------------------- + +You can provide your own facts, as described in :ref:`developing_modules`. To run them, just make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks: -It's also easy to provide your own facts if you want, which is covered in :ref:`developing_modules`. To run them, just -make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned -there will be accessible to future tasks:: +.. 
code-block:: yaml tasks: - name: gather site specific fact data @@ -161,36 +238,21 @@ there will be accessible to future tasks:: - command: /usr/bin/thingy when: my_custom_fact_just_retrieved_from_the_remote_system == '1234' -.. _when_roles_and_includes: - -Applying 'when' to roles, imports, and includes -``````````````````````````````````````````````` +.. _when_with_reuse: -Note that if you have several tasks that all share the same conditional statement, you can affix the conditional -to a task include statement as below. All the tasks get evaluated, but the conditional is applied to each and every task:: +Conditionals with re-use +------------------------ - - import_tasks: tasks/sometasks.yml - when: "'reticulating splines' in output" +You can use conditionals with re-usable tasks files, playbooks, or roles. Ansible executes these conditional statements differently for dynamic re-use (includes) and for static re-use (imports). See :ref:`playbooks_reuse` for more information on re-use in Ansible. -.. note:: In versions prior to 2.0 this worked with task includes but not playbook includes. 2.0 allows it to work with both. - -Or with a role:: - - - hosts: webservers - roles: - - role: debian_stock_config - when: ansible_facts['os_family'] == 'Debian' - -You will note a lot of ``skipped`` output by default in Ansible when using this approach on systems that don't match the criteria. -In many cases the :ref:`group_by module ` can be a more streamlined way to accomplish the same thing; see -:ref:`os_variance`. +.. _conditional_imports: -When a conditional is used with ``include_*`` tasks instead of imports, it is applied `only` to the include task itself and not -to any other tasks within the included file(s). A common situation where this distinction is important is as follows:: +Conditionals with imports +^^^^^^^^^^^^^^^^^^^^^^^^^ - # We wish to include a file to define a variable when it is not - # already defined +When you add a conditional to an import statement, Ansible applies the condition to all tasks within the imported file. This behavior is the equivalent of :ref:`tag_inheritance`. Ansible applies the condition to every task, and evaluates each task separately. For example, you might have a playbook called ``main.yml`` and a tasks file called ``other_tasks.yml``:: + # all tasks within an imported file inherit the condition from the import statement # main.yml - import_tasks: other_tasks.yml # note "import" when: x is not defined @@ -201,150 +263,135 @@ to any other tasks within the included file(s). A common situation where this di - debug: var: x -This expands at include time to the equivalent of:: +Ansible expands this at execution time to the equivalent of:: - set_fact: x: foo when: x is not defined + # this task sets a value for x + - debug: var: x when: x is not defined + # Ansible skips this task, because x is now defined -Thus if ``x`` is initially undefined, the ``debug`` task will be skipped. By using ``include_tasks`` instead of ``import_tasks``, -both tasks from ``other_tasks.yml`` will be executed as expected. +Thus if ``x`` is initially undefined, the ``debug`` task will be skipped. If this is not the behavior you want, use an ``include_*`` statement to apply a condition only to that statement itself. -For more information on the differences between ``include`` v ``import`` see :ref:`playbooks_reuse`. - -.. _conditional_imports: +You can apply conditions to ``import_playbook`` as well as to the other ``import_*`` statements. 
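For instance, a minimal sketch of a conditional ``import_playbook`` might look like this (the playbook name and variable are placeholders):

.. code-block:: yaml

    # site.yml
    - import_playbook: webservers.yml
      when: deploy_webservers | default(false) | bool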
When you use this approach, Ansible returns a 'skipped' message for every task on every host that does not match the criteria, creating repetitive output. In many cases the :ref:`group_by module ` can be a more streamlined way to accomplish the same objective; see :ref:`os_variance`. -Conditional Imports -``````````````````` +.. _conditional_includes: -.. note:: This is an advanced topic that is infrequently used. +Conditionals with includes +^^^^^^^^^^^^^^^^^^^^^^^^^^ -Sometimes you will want to do certain things differently in a playbook based on certain criteria. -Having one playbook that works on multiple platforms and OS versions is a good example. +When you use a conditional on an ``include_*`` statement, the condition is applied only to the include task itself and not to any other tasks within the included file(s). To contrast with the example used for conditionals on imports above, look at the same playbook and tasks file, but using an include instead of an import:: -As an example, the name of the Apache package may be different between CentOS and Debian, -but it is easily handled with a minimum of syntax in an Ansible Playbook:: + # Includes let you re-use a file to define a variable when it is not already defined - --- - - hosts: all - remote_user: root - vars_files: - - "vars/common.yml" - - [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ] - tasks: - - name: make sure apache is started - service: name={{ apache }} state=started + # main.yml + - include_tasks: other_tasks.yml + when: x is not defined -.. note:: - The variable "ansible_facts['os_family']" is being interpolated into - the list of filenames being defined for vars_files. + # other_tasks.yml + - set_fact: + x: foo + - debug: + var: x -As a reminder, the various YAML files contain just keys and values:: +Ansible expands this at execution time to the equivalent of:: - --- - # for vars/RedHat.yml - apache: httpd - somethingelse: 42 + - include_tasks: other_tasks.yml + when: x is not defined + # if condition is met, Ansible includes other_tasks.yml -How does this work? For Red Hat operating systems ('CentOS', for example), the first file Ansible tries to import -is 'vars/RedHat.yml'. If that file does not exist, Ansible attempts to load 'vars/os_defaults.yml'. If no files in -the list were found, an error is raised. + - set_fact: + x: foo + # no condition applied to this task, Ansible sets the value of x to foo -On Debian, Ansible first looks for 'vars/Debian.yml' instead of 'vars/RedHat.yml', before -falling back on 'vars/os_defaults.yml'. + - debug: + var: x + # no condition applied to this task, Ansible prints the debug statement -Ansible's approach to configuration -- separating variables from tasks, keeping your playbooks -from turning into arbitrary code with nested conditionals - results in more streamlined and auditable configuration rules because there are fewer decision points to track. +By using ``include_tasks`` instead of ``import_tasks``, both tasks from ``other_tasks.yml`` will be executed as expected. For more information on the differences between ``include`` v ``import`` see :ref:`playbooks_reuse`. -Selecting Files And Templates Based On Variables -```````````````````````````````````````````````` +Conditionals with roles +^^^^^^^^^^^^^^^^^^^^^^^ -.. note:: This is an advanced topic that is infrequently used. You can probably skip this section. 
+There are three ways to apply conditions to roles: -Sometimes a configuration file you want to copy, or a template you will use may depend on a variable. -The following construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than putting a lot of if conditionals in a template. + - Add the same condition or conditions to all tasks in the role by placing your ``when`` statement under the ``roles`` keyword. See the example in this section. + - Add the same condition or conditions to all tasks in the role by placing your ``when`` statement on a static ``import_role`` in your playbook. + - Add a condition or conditions to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role based on your ``when`` statement. To select or skip tasks within the role, you must have conditions set on individual tasks or blocks, use the dynamic ``include_role`` in your playbook, and add the condition or conditions to the include. When you use this approach, Ansible applies the condition to the include itself plus any tasks in the role that also have that ``when`` statement. -The following example shows how to template out a configuration file that was very different between, say, CentOS and Debian:: +When you incorporate a role in your playbook statically with the ``roles`` keyword, Ansible adds the conditions you define to all the tasks in the role. For example: - - name: template a file - template: - src: "{{ item }}" - dest: /etc/myapp/foo.conf - loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}" - vars: - myfiles: - - "{{ansible_facts['distribution']}}.conf" - - default.conf - mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/'] +.. code-block:: yaml -Register Variables -`````````````````` + - hosts: webservers + roles: + - role: debian_stock_config + when: ansible_facts['os_family'] == 'Debian' -Often in a playbook it may be useful to store the result of a given command in a variable and access -it later. Use of the command module in this way can in many ways eliminate the need to write site specific facts, for -instance, you could test for the existence of a particular program. +.. _conditional_variable_and_files: -.. note:: Registration happens even when a task is skipped due to the conditional. This way you can query the variable for `` is skipped`` to know if task was attempted or not. +Selecting variables, files, or templates based on facts +------------------------------------------------------- -The ``register`` keyword decides what variable to save a result in. The resulting variables can be used in templates, action lines, or *when* statements. It looks like this (in an obviously trivial example):: +Sometimes the facts about a host determine the values you want to use for certain variables or even the file or template you want to select for that host. For example, the names of packages are different on CentOS and on Debian. The configuration files for common services are also different on different OS flavors and versions. 
To load different variables file, templates, or other files based on a fact about the hosts you are managing: - - name: test play - hosts: all + # Name your vars files, templates, or files to match the Ansible fact that differentiates them + # Select the correct vars file, template, or file for each host with a variable based on that Ansible fact - tasks: - - shell: cat /etc/motd - register: motd_contents +Ansible separates variables from tasks, keeping your playbooks from turning into arbitrary code with nested conditionals. This approach results in more streamlined and auditable configuration rules because there are fewer decision points to track. - - shell: echo "motd contains the word hi" - when: motd_contents.stdout.find('hi') != -1 +Selecting variables files based on facts +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -As shown previously, the registered variable's string contents are accessible with the ``stdout`` value. -The registered result can be used in the loop of a task if it is converted into -a list (or already is a list) as shown below. ``stdout_lines`` is already available on the object as -well though you could also call ``home_dirs.stdout.split()`` if you wanted, and could split by other -fields:: +You can create a playbook that works on multiple platforms and OS versions with a minimum of syntax by placing your variable values in vars files and conditionally importing them. If you want to install Apache on some CentOS and some Debian servers, create variables files with YAML keys and values. For example:: - - name: registered variable usage as a loop list - hosts: all - tasks: + --- + # for vars/RedHat.yml + apache: httpd + somethingelse: 42 - - name: retrieve the list of home directories - command: ls /home - register: home_dirs +Then import those variables files based on the facts you gather on the hosts in your playbook:: - - name: add home dirs to the backup spooler - file: - path: /mnt/bkspool/{{ item }} - src: /home/{{ item }} - state: link - loop: "{{ home_dirs.stdout_lines }}" - # same as loop: "{{ home_dirs.stdout.split() }}" + --- + - hosts: webservers + remote_user: root + vars_files: + - "vars/common.yml" + - [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ] + tasks: + - name: make sure apache is started + service: name={{ apache }} state=started +Ansible gathers facts on the hosts in the webservers group, then interpolates the variable "ansible_facts['os_family']" into a list of filenames. If you have hosts with Red Hat operating systems ('CentOS', for example), Ansible looks for 'vars/RedHat.yml'. If that file does not exist, Ansible attempts to load 'vars/os_defaults.yml'. For Debian hosts, Ansible first looks for 'vars/Debian.yml', before falling back on 'vars/os_defaults.yml'. If no files in the list are found, Ansible raises an error. -As shown previously, the registered variable's string contents are accessible with the ``stdout`` value. -You may check the registered variable's string contents for emptiness:: +Selecting files and templates based on facts +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - name: check registered variable for emptiness - hosts: all +You can use the same approach when different OS flavors or versions require different configuration files or templates. Select the appropriate file or template based on the variables assigned to each host. This approach is often much cleaner than putting a lot of conditionals into a single template to cover multiple OS or package versions. 
- tasks: +For example, you can template out a configuration file that is very different between, say, CentOS and Debian:: - - name: list contents of directory - command: ls mydir - register: contents + - name: template a file + template: + src: "{{ item }}" + dest: /etc/myapp/foo.conf + loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}" + vars: + myfiles: + - "{{ansible_facts['distribution']}}.conf" + - default.conf + mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/'] - - name: check contents for emptiness - debug: - msg: "Directory is empty" - when: contents.stdout == "" +.. _commonly_used_facts: -Commonly Used Facts -``````````````````` +Commonly-used facts +=================== -The following Facts are frequently used in Conditionals - see above for examples. +The following Ansible facts are frequently used in conditionals. .. _ansible_distribution: @@ -381,7 +428,7 @@ Possible values (sample, not complete list):: ansible_facts['distribution_major_version'] ------------------------------------------- -This will be the major version of the operating system. For example, the value will be `16` for Ubuntu 16.04. +The major version of the operating system. For example, the value is `16` for Ubuntu 16.04. .. _ansible_os_family: diff --git a/docs/docsite/rst/user_guide/playbooks_debugger.rst b/docs/docsite/rst/user_guide/playbooks_debugger.rst index 4360a85b629..eefee2c5b86 100644 --- a/docs/docsite/rst/user_guide/playbooks_debugger.rst +++ b/docs/docsite/rst/user_guide/playbooks_debugger.rst @@ -1,56 +1,55 @@ .. _playbook_debugger: -Playbook Debugger -================= +*************** +Debugging tasks +*************** -.. contents:: Topics +Ansible offers a task debugger so you can try to fix errors during execution instead of fixing them in the playbook and then running it again. You have access to all of the features of the debugger in the context of the task. You can check or set the value of variables, update module arguments, and re-run the task with the new variables and arguments. The debugger lets you resolve the cause of the failure and continue with playbook execution. -Ansible includes a debugger as part of the strategy plugins. This debugger enables you to debug a task. -You have access to all of the features of the debugger in the context of the task. You can then, for example, check or set the value of variables, update module arguments, and re-run the task with the new variables and arguments to help resolve the cause of the failure. +.. contents:: + :local: + +Invoking the debugger +===================== There are multiple ways to invoke the debugger. Using the debugger keyword -++++++++++++++++++++++++++ +-------------------------- .. versionadded:: 2.5 -The ``debugger`` keyword can be used on any block where you provide a ``name`` attribute, such as a play, role, block or task. +The ``debugger`` keyword can be used on any block where you provide a ``name`` attribute, such as a play, role, block or task. The ``debugger`` keyword accepts five values: -The ``debugger`` keyword accepts several values: +.. 
table:: + :class: documentation-table -always - Always invoke the debugger, regardless of the outcome + ========================= ====================================================== + Value Result + ========================= ====================================================== + always Always invoke the debugger, regardless of the outcome -never - Never invoke the debugger, regardless of the outcome + never Never invoke the debugger, regardless of the outcome -on_failed - Only invoke the debugger if a task fails + on_failed Only invoke the debugger if a task fails -on_unreachable - Only invoke the debugger if the a host was unreachable + on_unreachable Only invoke the debugger if a host was unreachable -on_skipped - Only invoke the debugger if the task is skipped + on_skipped Only invoke the debugger if the task is skipped -These options override any global configuration to enable or disable the debugger. + ========================= ====================================================== -On a task -````````` +When you use the ``debugger`` keyword, the setting you use overrides any global configuration to enable or disable the debugger. If you define ``debugger`` at two different levels, for example in a role and in a task, the more specific definition wins: the definition on a task overrides the definition on a block, which overrides the definition on a role or play. -:: +Here are examples of invoking the debugger with the ``debugger`` keyword:: + # on a task - name: Execute a command command: "false" debugger: on_failed -On a play -````````` - -:: - - - name: Play + # on a play + - name: My play hosts: all debugger: on_skipped tasks: @@ -58,7 +57,7 @@ On a play command: "true" when: False -When provided at a generic level and a more specific level, the more specific wins:: +In the example below, the task will open the debugger when it fails, because the task-level definition overrides the play-level definition:: - name: Play hosts: all @@ -68,30 +67,30 @@ When provided at a generic level and a more specific level, the more specific wi command: "false" debugger: on_failed - -Configuration or environment variable -+++++++++++++++++++++++++++++++++++++ +In configuration or an environment variable +------------------------------------------- .. versionadded:: 2.5 -In ansible.cfg:: +You can turn the task debugger on or off globally with a setting in ansible.cfg or with an environment variable. The only options are ``True`` or ``False``. If you set the configuration option or environment variable to ``True``, Ansible runs the debugger on failed tasks by default. + +To invoke the task debugger from ansible.cfg:: [defaults] enable_task_debugger = True -As an environment variable:: +To use an an environment variable to invoke the task debugger:: ANSIBLE_ENABLE_TASK_DEBUGGER=True ansible-playbook -i hosts site.yml -When using this method, any failed or unreachable task will invoke the debugger, -unless otherwise explicitly disabled. +When you invoke the debugger using this method, any failed task will invoke the debugger, unless it is explicitly disabled for that role, play, block, or task. If you need more granular control what conditions trigger the debugger, use the ``debugger`` keyword. -As a Strategy -+++++++++++++ +As a strategy +------------- .. note:: - This is a backwards compatible method, to match Ansible versions before 2.5, - and may be removed in a future release + + This backwards-compatible method, which matches Ansible versions before 2.5, may be removed in a future release. 
To use the ``debug`` strategy, change the ``strategy`` attribute like this:: @@ -100,17 +99,20 @@ To use the ``debug`` strategy, change the ``strategy`` attribute like this:: tasks: ... -If you don't want change the code, you can define ``ANSIBLE_STRATEGY=debug`` -environment variable in order to enable the debugger, or modify ``ansible.cfg`` such as:: +You can also set the strategy to ``debug`` with the environment variable ``ANSIBLE_STRATEGY=debug``, or by modifying ``ansible.cfg``: + +.. code-block:: yaml [defaults] strategy = debug -Examples -++++++++ +Using the debugger +================== + +Once you invoke the debugger, you can use the seven :ref:`debugger commands ` to work through the error Ansible encountered. For example, the playbook below defines the ``var1`` variable but uses the ``wrong_var`` variable, which is undefined, by mistake. -For example, run the playbook below:: +.. code-block:: yaml - hosts: test debugger: on_failed @@ -121,9 +123,7 @@ For example, run the playbook below:: - name: wrong variable ping: data={{ wrong_var }} -The debugger is invoked since the *wrong_var* variable is undefined. - -Let's change the module's arguments and run the task again +If you run this playbook, Ansible invokes the debugger when the task fails. From the debug prompt, you can change the module arguments or the variables and run the task again. .. code-block:: none @@ -158,19 +158,45 @@ Let's change the module's arguments and run the task again PLAY RECAP ********************************************************************* 192.0.2.10 : ok=1 changed=0 unreachable=0 failed=0 -This time, the task runs successfully! +As the example above shows, once the task arguments use ``var1`` instead of ``wrong_var``, the task runs successfully. .. _available_commands: -Available Commands -++++++++++++++++++ +Available debug commands +======================== + +You can use these seven commands at the debug prompt: + +.. table:: + :class: documentation-table + + ========================== ============ ========================================================= + Command Shortcut Action + ========================== ============ ========================================================= + print p Print information about the task + + task.args[*key*] = *value* no shortcut Update module arguments + + task_vars[*key*] = *value* no shortcut Update task variables (you must ``update_task`` next) + + update_task u Recreate a task with updated task variables + + redo r Run the task again + + continue c Continue executing, starting with the next task + + quit q Quit the debugger + + ========================== ============ ========================================================= + +For more details, see the individual descriptions and examples below. .. _pprint_command: -p(print) *task/task_vars/host/result* -````````````````````````````````````` +Print command +------------- -Print values used to execute a module:: +``print *task/task.args/task_vars/host/result*`` prints information about the task:: [192.0.2.10] TASK: install package (debug)> p task TASK: install package @@ -194,12 +220,10 @@ Print values used to execute a module:: .. _update_args_command: -task.args[*key*] = *value* -`````````````````````````` +Update args command +------------------- -Update module's argument. - -If you run a playbook like this:: +``task.args[*key*] = *value*`` updates a module argument. 
This sample playbook has an invalid package name:: - hosts: test strategy: debug @@ -210,7 +234,7 @@ If you run a playbook like this:: - name: install package apt: name={{ pkg_name }} -Debugger is invoked due to wrong package name, so let's fix the module's args:: +When you run the playbook, the invalid package name triggers an error, and Ansible invokes the debugger. You can fix the package name by viewing, then updating the module argument:: [192.0.2.10] TASK: install package (debug)> p task.args {u'name': u'{{ pkg_name }}'} @@ -219,16 +243,14 @@ Debugger is invoked due to wrong package name, so let's fix the module's args:: {u'name': 'bash'} [192.0.2.10] TASK: install package (debug)> redo -Then the task runs again with new args. +After you update the module argument, use ``redo`` to run the task again with the new args. .. _update_vars_command: -task_vars[*key*] = *value* -`````````````````````````` - -Update ``task_vars``. +Update vars command +------------------- -Let's use the same playbook above, but fix ``task_vars`` instead of args:: +``task_vars[*key*] = *value*`` updates the ``task_vars``. You could fix the playbook above by viewing, then updating the task variables instead of the module args:: [192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name'] u'not_exist' @@ -238,53 +260,51 @@ Let's use the same playbook above, but fix ``task_vars`` instead of args:: [192.0.2.10] TASK: install package (debug)> update_task [192.0.2.10] TASK: install package (debug)> redo -Then the task runs again with new ``task_vars``. +After you update the task variables, you must use ``update_task`` to load the new variables before using ``redo`` to run the task again. .. note:: - In 2.5 this was updated from ``vars`` to ``task_vars`` to not conflict with the ``vars()`` python function. + In 2.5 this was updated from ``vars`` to ``task_vars`` to avoid conflicts with the ``vars()`` python function. .. _update_task_command: -u(pdate_task) -````````````` +Update task command +------------------- .. versionadded:: 2.8 -This command re-creates the task from the original task data structure, and templates with updated ``task_vars`` - -See the above documentation for :ref:`update_vars_command` for an example of use. +``u`` or ``update_task`` recreates the task from the original task data structure and templates with updated task variables. See the entry :ref:`update_vars_command` for an example of use. .. _redo_command: -r(edo) -`````` +Redo command +------------ -Run the task again. +``r`` or ``redo`` runs the task again. .. _continue_command: -c(ontinue) -`````````` +Continue command +---------------- -Just continue. +``c`` or ``continue`` continues executing, starting with the next task. .. _quit_command: -q(uit) -`````` +Quit command +------------ -Quit from the debugger. The playbook execution is aborted. +``q`` or ``quit`` quits the debugger. The playbook execution is aborted. -Use with the free strategy -++++++++++++++++++++++++++ +Debugging and the free strategy +=============================== -Using the debugger on the ``free`` strategy will cause no further tasks to be queued or executed -while the debugger is active. Additionally, using ``redo`` on a task to schedule it for re-execution -may cause the rescheduled task to execute after subsequent tasks listed in your playbook. +If you use the debugger with the ``free`` strategy, Ansible does not queue or execute any further tasks while the debugger is active. 
However, previously queued tasks remain in the queue and run as soon as you exit the debugger. If you use ``redo`` to reschedule a task from the debugger, other queued tasks may execute before your rescheduled task.

 .. seealso::

+   :ref:`playbooks_start_and_step`
+       Running playbooks while debugging or testing
    :ref:`playbooks_intro`
        An introduction to playbooks
    `User Mailing List `_
diff --git a/docs/docsite/rst/user_guide/playbooks_delegation.rst b/docs/docsite/rst/user_guide/playbooks_delegation.rst
index b71604a5478..58e01794103 100644
--- a/docs/docsite/rst/user_guide/playbooks_delegation.rst
+++ b/docs/docsite/rst/user_guide/playbooks_delegation.rst
@@ -1,156 +1,24 @@
 .. _playbooks_delegation:

-Delegation, Rolling Updates, and Local Actions
-==============================================
+Delegation and local actions
+============================

-.. contents:: Topics
+By default, Ansible executes all tasks on the machines that match the ``hosts`` line of your playbook. If you want to run some tasks on a different machine, you can use delegation. For example, when updating webservers, you might want to retrieve information from your database servers. In this scenario, your play would target the webservers group and you would delegate the database tasks to your dbservers group. With delegation, you can perform a task on one host on behalf of another, or execute tasks locally on behalf of remote hosts.

-Being designed for multi-tier deployments since the beginning, Ansible is great at doing things on one host on behalf of another, or doing local steps with reference to some remote hosts.
+.. contents::
+   :local:

-This in particular is very applicable when setting up continuous deployment infrastructure or zero downtime rolling updates, where you might be talking with load balancers or monitoring systems.
+Tasks that cannot be delegated
+------------------------------

-Additional features allow for tuning the orders in which things complete, and assigning a batch window size for how many machines to process at once during a rolling update.
-
-This section covers all of these features. For examples of these items in use, `please see the ansible-examples repository `_. There are quite a few examples of zero-downtime update procedures for different kinds of applications.
-
-You should also consult the :ref:`module documentation` section. Modules like :ref:`ec2_elb`, :ref:`nagios`, :ref:`bigip_pool`, and other :ref:`network_modules` dovetail neatly with the concepts mentioned here.
-
-You'll also want to read up on :ref:`playbooks_reuse_roles`, as the 'pre_task' and 'post_task' concepts are the places where you would typically call these modules.
-
-Be aware that certain tasks are impossible to delegate, i.e. `include`, `add_host`, `debug`, etc as they always execute on the controller.
-
-
-.. _rolling_update_batch_size:
-
-Rolling Update Batch Size
-`````````````````````````
-
-By default, Ansible will try to manage all of the machines referenced in a play in parallel.
For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the ``serial`` keyword:: - - --- - - name: test play - hosts: webservers - serial: 2 - gather_facts: False - - tasks: - - name: task one - command: hostname - - name: task two - command: hostname - -In the above example, if we had 4 hosts in the group 'webservers', 2 -would complete the play completely before moving on to the next 2 hosts:: - - - PLAY [webservers] **************************************** - - TASK [task one] ****************************************** - changed: [web2] - changed: [web1] - - TASK [task two] ****************************************** - changed: [web1] - changed: [web2] - - PLAY [webservers] **************************************** - - TASK [task one] ****************************************** - changed: [web3] - changed: [web4] - - TASK [task two] ****************************************** - changed: [web3] - changed: [web4] - - PLAY RECAP *********************************************** - web1 : ok=2 changed=2 unreachable=0 failed=0 - web2 : ok=2 changed=2 unreachable=0 failed=0 - web3 : ok=2 changed=2 unreachable=0 failed=0 - web4 : ok=2 changed=2 unreachable=0 failed=0 - - -The ``serial`` keyword can also be specified as a percentage, which will be applied to the total number of hosts in a -play, in order to determine the number of hosts per pass:: - - --- - - name: test play - hosts: webservers - serial: "30%" - -If the number of hosts does not divide equally into the number of passes, the final pass will contain the remainder. - -As of Ansible 2.2, the batch sizes can be specified as a list, as follows:: - - --- - - name: test play - hosts: webservers - serial: - - 1 - - 5 - - 10 - -In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left), -every following batch would contain 10 hosts until all available hosts are used. - -It is also possible to list multiple batch sizes as percentages:: - - --- - - name: test play - hosts: webservers - serial: - - "10%" - - "20%" - - "100%" - -You can also mix and match the values:: - - --- - - name: test play - hosts: webservers - serial: - - 1 - - 5 - - "20%" - -.. note:: - No matter how small the percentage, the number of hosts per pass will always be 1 or greater. - - -.. _maximum_failure_percentage: - -Maximum Failure Percentage -`````````````````````````` - -By default, Ansible will continue executing actions as long as there are hosts in the batch that have not yet failed. The batch size for a play is determined by the ``serial`` parameter. If ``serial`` is not set, then batch size is all the hosts specified in the ``hosts:`` field. -In some situations, such as with the rolling updates described above, it may be desirable to abort the play when a -certain threshold of failures have been reached. To achieve this, you can set a maximum failure -percentage on a play as follows:: - - --- - - hosts: webservers - max_fail_percentage: 30 - serial: 10 - -In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted. - -.. note:: - - The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort - when 2 of the systems failed, the percentage should be set at 49 rather than 50. +Some tasks always execute on the controller. These tasks, including ``include``, ``add_host``, and ``debug``, cannot be delegated. .. 
_delegation: -Delegation -`````````` +Delegating tasks +---------------- - -This isn't actually rolling update specific but comes up frequently in those cases. - -If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task. -This is ideal for placing nodes in a load balanced pool, or removing them. It is also very useful for controlling outage windows. -Be aware that it does not make sense to delegate all tasks, debug, add_host, include, etc always get executed on the controller. -Using this with the 'serial' keyword to control the number of hosts executing at one time is also a good idea:: +If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task. This is ideal for managing nodes in a load balanced pool or for controlling outage windows. You can use delegation with the :ref:`serial ` keyword to control the number of hosts executing at one time:: --- - hosts: webservers @@ -170,8 +38,7 @@ Using this with the 'serial' keyword to control the number of hosts executing at command: /usr/bin/add_back_to_pool {{ inventory_hostname }} delegate_to: 127.0.0.1 - -These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1:: +The first and third tasks in this play run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1:: --- # ... @@ -185,8 +52,7 @@ These commands will run on 127.0.0.1, which is the machine running Ansible. Ther - name: add back to load balancer pool local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }} -A common pattern is to use a local action to call 'rsync' to recursively copy files to the managed servers. -Here is an example:: +You can use a local action to call 'rsync' to recursively copy files to the managed servers:: --- # ... @@ -198,7 +64,7 @@ Here is an example:: Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync will need to ask for a passphrase. -In case you have to specify more arguments you can use the following syntax:: +To specify more arguments, use the following syntax:: --- # ... @@ -212,15 +78,14 @@ In case you have to specify more arguments you can use the following syntax:: body: "{{ mail_body }}" run_once: True -The `ansible_host` variable (`ansible_ssh_host` in 1.x or specific to ssh/paramiko plugins) reflects the host a task is delegated to. +The `ansible_host` variable reflects the host a task is delegated to. .. _delegate_facts: -Delegated facts -``````````````` +Delegating facts +---------------- -By default, any fact gathered by a delegated task are assigned to the `inventory_hostname` (the current host) instead of the host which actually produced the facts (the delegated to host). -The directive `delegate_facts` may be set to `True` to assign the task's gathered facts to the delegated host instead of the current one.:: +Delegating Ansible tasks is like delegating tasks in the real world - your groceries belong to you, even if someone else delivers them to your home. 
Similarly, any facts gathered by a delegated task are assigned by default to the `inventory_hostname` (the current host), not to the host which produced the facts (the delegated to host). To assign gathered facts to the delegated host instead of the current host, set `delegate_facts` to `True`:: --- - hosts: app_servers @@ -232,17 +97,14 @@ The directive `delegate_facts` may be set to `True` to assign the task's gathere delegate_facts: True loop: "{{groups['dbservers']}}" -The above will gather facts for the machines in the dbservers group and assign the facts to those machines and not to app_servers. -This way you can lookup `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or left out by using `--limit`. - +This task gathers facts for the machines in the dbservers group and assigns the facts to those machines, even though the play targets the app_servers group. This way you can lookup `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or left out by using `--limit`. .. _run_once: -Run Once -```````` +Run once +-------- -In some cases there may be a need to only run a task one time for a batch of hosts. -This can be achieved by configuring "run_once" on a task:: +If you want a task to run only on the first host in your batch of hosts, set ``run_once`` to true on that task:: --- # ... @@ -256,16 +118,12 @@ This can be achieved by configuring "run_once" on a task:: # ... -This directive forces the task to attempt execution on the first host in the current batch and then applies all results and facts to all the hosts in the same batch. - -This approach is similar to applying a conditional to a task such as:: +Ansible executes this task on the first host in the current batch and applies all results and facts to all the hosts in the same batch. This approach is similar to applying a conditional to a task such as:: - command: /opt/application/upgrade_db.py when: inventory_hostname == webservers[0] -But the results are applied to all the hosts. - -Like most tasks, this can be optionally paired with "delegate_to" to specify an individual host to execute on:: +However, with ``run_once``, the results are applied to all the hosts. To specify an individual host to execute on, delegate the task:: - command: /opt/application/upgrade_db.py run_once: true @@ -274,23 +132,21 @@ Like most tasks, this can be optionally paired with "delegate_to" to specify an As always with delegation, the action will be executed on the delegated host, but the information is still that of the original host in the task. .. note:: - When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch. - If it's crucial that the task is run only once regardless of "serial" mode, use + When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch. If the task must run only once regardless of "serial" mode, use :code:`when: inventory_hostname == ansible_play_hosts_all[0]` construct. .. note:: Any conditional (i.e `when:`) will use the variables of the 'first host' to decide if the task runs or not, no other hosts will be tested. .. note:: - If you want to avoid the default behaviour of setting the fact for all hosts, set `delegate_facts: True` for the specific task or block. + If you want to avoid the default behavior of setting the fact for all hosts, set `delegate_facts: True` for the specific task or block. .. 
_local_playbooks: -Local Playbooks +Local playbooks ``````````````` -It may be useful to use a playbook locally, rather than by connecting over SSH. This can be useful -for assuring the configuration of a system by putting a playbook in a crontab. This may also be used +It may be useful to use a playbook locally on a remote host, rather than by connecting over SSH. This can be useful for assuring the configuration of a system by putting a playbook in a crontab. This may also be used to run a playbook inside an OS installer, such as an Anaconda kickstart. To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so:: @@ -310,52 +166,6 @@ use the default remote connection type:: host_vars/localhost.yml, for example. You can avoid this issue by using ``local_action`` or ``delegate_to: localhost`` instead. - -.. _interrupt_execution_on_any_error: - -Interrupt execution on any error -```````````````````````````````` - -With the ''any_errors_fatal'' option, any failure on any host in a multi-host play will be treated as fatal and Ansible will exit as soon as all hosts in the current batch have finished the fatal task. Subsequent tasks and plays will not be executed. You can recover from what would be a fatal error by adding a rescue section to the block. - -Sometimes ''serial'' execution is unsuitable; the number of hosts is unpredictable (because of dynamic inventory) and speed is crucial (simultaneous execution is required), but all tasks must be 100% successful to continue playbook execution. - -For example, consider a service located in many datacenters with some load balancers to pass traffic from users to the service. There is a deploy playbook to upgrade service deb-packages. The playbook has the stages: - -- disable traffic on load balancers (must be turned off simultaneously) -- gracefully stop the service -- upgrade software (this step includes tests and starting the service) -- enable traffic on the load balancers (which should be turned on simultaneously) - -The service can't be stopped with "alive" load balancers; they must be disabled first. Because of this, the second stage can't be played if any server failed in the first stage. - -For datacenter "A", the playbook can be written this way:: - - --- - - hosts: load_balancers_dc_a - any_errors_fatal: True - - tasks: - - name: 'shutting down datacenter [ A ]' - command: /usr/bin/disable-dc - - - hosts: frontends_dc_a - - tasks: - - name: 'stopping service' - command: /usr/bin/stop-software - - name: 'updating software' - command: /usr/bin/upgrade-software - - - hosts: load_balancers_dc_a - - tasks: - - name: 'Starting datacenter [ A ]' - command: /usr/bin/enable-dc - - -In this example Ansible will start the software upgrade on the front ends only if all of the load balancers are successfully disabled. - .. seealso:: :ref:`playbooks_intro` diff --git a/docs/docsite/rst/user_guide/playbooks_environment.rst b/docs/docsite/rst/user_guide/playbooks_environment.rst index db49604945c..7d93ac1dcb2 100644 --- a/docs/docsite/rst/user_guide/playbooks_environment.rst +++ b/docs/docsite/rst/user_guide/playbooks_environment.rst @@ -1,15 +1,21 @@ .. _playbooks_environment: -Setting the Environment (and Working With Proxies) -================================================== +Setting the remote environment +============================== .. versionadded:: 1.1 -The ``environment`` keyword allows you to set an environment variable for the action to be taken on the remote target. 
-For example, it is quite possible that you may need to set a proxy for a task that does http requests. -Or maybe a utility or script that are called may also need certain environment variables set to run properly. +You can use the ``environment`` keyword at the play, block, or task level to set an environment variable for an action on a remote host. With this keyword, you can enable using a proxy for a task that does http requests, set the required environment variables for language-specific version managers, and more. -Here is an example:: +When you set a value with ``environment:`` at the play or block level, it is available only to tasks within the play or block that are executed by the same user. The ``environment:`` keyword does not affect Ansible itself, Ansible configuration settings, the environment for other users, or the execution of other plugins like lookups and filters. Variables set with ``environment:`` do not automatically become Ansible facts, even when you set them at the play level. You must include an explicit ``gather_facts`` task in your playbook and set the ``environment`` keyword on that task to turn these values into Ansible facts. + +.. contents:: + :local: + +Setting the remote environment in a task +---------------------------------------- + +You can set the environment directly at the task level:: - hosts: all remote_user: root @@ -23,16 +29,12 @@ Here is an example:: environment: http_proxy: http://proxy.example.com:8080 -.. note:: - ``environment:`` does not affect Ansible itself, ONLY the context of the specific task action and this does not include - Ansible's own configuration settings nor the execution of any other plugins, including lookups, filters, and so on. - -The environment can also be stored in a variable, and accessed like so:: +You can re-use environment settings by defining them as variables in your play and accessing them in a task as you would access any stored Ansible variable:: - hosts: all remote_user: root - # here we make a variable named "proxy_env" that is a dictionary + # create a variable named "proxy_env" that is a dictionary vars: proxy_env: http_proxy: http://proxy.example.com:8080 @@ -45,19 +47,7 @@ The environment can also be stored in a variable, and accessed like so:: state: present environment: "{{ proxy_env }}" -You can also use it at a play level:: - - - hosts: testhost - - roles: - - php - - nginx - - environment: - http_proxy: http://proxy.example.com:8080 - -While just proxy settings were shown above, any number of settings can be supplied. The most logical place -to define an environment hash might be a group_vars file, like so:: +You can store environment settings for re-use in multiple playbooks by defining them in a group_vars file:: --- # file: group_vars/boston @@ -68,11 +58,23 @@ to define an environment hash might be a group_vars file, like so:: http_proxy: http://proxy.bos.example.com:8080 https_proxy: http://proxy.bos.example.com:8080 +You can set the remote environment at the play level:: + + - hosts: testing + + roles: + - php + - nginx + + environment: + http_proxy: http://proxy.example.com:8080 + +These examples show proxy settings, but you can provide any number of settings this way. -Working With Language-Specific Version Managers +Working with language-specific version managers =============================================== -Some language-specific version managers (such as rbenv and nvm) require environment variables be set while these tools are in use. 
When using these tools manually, they usually require sourcing some environment variables via a script or lines added to your shell configuration file. In Ansible, you can instead use the environment directive::
+Some language-specific version managers (such as rbenv and nvm) require you to set environment variables while these tools are in use. When using these tools manually, you usually source some environment variables from a script or from lines added to your shell configuration file. In Ansible, you can do this with the environment keyword at the play level::

    ---
    ### A playbook demonstrating a common npm workflow:
@@ -109,10 +111,9 @@ Some language-specific version managers (such as rbenv and nvm) require environm
        when: packagejson.stat.exists

 .. note::
-   ``ansible_env:`` is normally populated by fact gathering (M(gather_facts)) and the value of the variables depends on the user
-   that did the gathering action. If you change remote_user/become_user you might end up using the wrong values for those variables.
+   The example above uses ``ansible_env`` as part of the PATH. Basing variables on ``ansible_env`` is risky. Ansible populates ``ansible_env`` values by gathering facts, so the value of the variables depends on the remote_user or become_user Ansible used when gathering those facts. If you change remote_user/become_user, the values in ``ansible_env`` may not be the ones you expect.

-You might also want to simply specify the environment for a single task::
+You can also specify the environment at the task level::

    ---
    - name: install ruby 2.3.1
diff --git a/docs/docsite/rst/user_guide/playbooks_error_handling.rst b/docs/docsite/rst/user_guide/playbooks_error_handling.rst
index 5c24a216112..c771ba7024e 100644
--- a/docs/docsite/rst/user_guide/playbooks_error_handling.rst
+++ b/docs/docsite/rst/user_guide/playbooks_error_handling.rst
@@ -27,10 +27,7 @@ Ignoring unreachable host errors

 .. versionadded:: 2.7

-You may ignore task failure due to the host instance being 'UNREACHABLE' with the ``ignore_unreachable`` keyword.
-Note that task errors are what's being ignored, not the unreachable host.
-
-Here's an example explaining the behavior for an unreachable host at the task level::
+You may ignore task failure due to the host instance being 'UNREACHABLE' with the ``ignore_unreachable`` keyword. Ansible ignores the task errors, but continues to execute future tasks against the unreachable host. For example, at the task level::

    - name: this executes, fails, and the failure is ignored
      command: /bin/true
@@ -162,7 +159,12 @@ The :ref:`command ` and :ref:`shell ` modules care
 Aborting a play on all hosts
 ============================

-Sometimes you want a failure on a single host to abort the entire play on all hosts. If you set ``any_errors_fatal`` and a task returns an error, Ansible lets all hosts in the current batch finish the fatal task and then stops executing the play on all hosts. You can set ``any_errors_fatal`` at the play or block level::
+Sometimes you want a failure on a single host, or failures on a certain percentage of hosts, to abort the entire play on all hosts. You can stop play execution after the first failure happens with ``any_errors_fatal``. For finer-grained control, you can use ``max_fail_percentage`` to abort the run after a given percentage of hosts has failed.
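+
+As a quick sketch before the detailed sections below (the group name, task names, and commands are placeholders), the two options look like this at the play level::
+
+    ---
+    # abort the whole play as soon as any host in a batch fails
+    - hosts: webservers
+      any_errors_fatal: true
+      tasks:
+        - name: deploy the new release
+          command: /usr/bin/deploy_release
+
+    # abort only if more than 30% of the hosts in a 10-host batch fail
+    - hosts: webservers
+      serial: 10
+      max_fail_percentage: 30
+      tasks:
+        - name: deploy the new release
+          command: /usr/bin/deploy_release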
+ +Aborting on the first error: any_errors_fatal +--------------------------------------------- + +If you set ``any_errors_fatal`` and a task returns an error, Ansible finishes the fatal task on all hosts in the current batch, then stops executing the play on all hosts. Subsequent tasks and plays are not executed. You can recover from fatal errors by adding a :ref:`rescue section ` to the block. You can set ``any_errors_fatal`` at the play or block level:: - hosts: somehosts any_errors_fatal: true @@ -175,7 +177,49 @@ Sometimes you want a failure on a single host to abort the entire play on all ho - include_tasks: mytasks.yml any_errors_fatal: true -For finer-grained control, you can use ``max_fail_percentage`` to abort the run after a given percentage of hosts has failed. +You can use this feature when all tasks must be 100% successful to continue playbook execution. For example, if you run a service on machines in multiple data centers with load balancers to pass traffic from users to the service, you want all load balancers to be disabled before you stop the service for maintenance. To ensure that any failure in the task that disables the load balancers will stop all other tasks:: + + --- + - hosts: load_balancers_dc_a + any_errors_fatal: True + + tasks: + - name: 'shutting down datacenter [ A ]' + command: /usr/bin/disable-dc + + - hosts: frontends_dc_a + + tasks: + - name: 'stopping service' + command: /usr/bin/stop-software + - name: 'updating software' + command: /usr/bin/upgrade-software + + - hosts: load_balancers_dc_a + + tasks: + - name: 'Starting datacenter [ A ]' + command: /usr/bin/enable-dc + +In this example Ansible starts the software upgrade on the front ends only if all of the load balancers are successfully disabled. + +.. _maximum_failure_percentage: + +Setting a maximum failure percentage +------------------------------------ + +By default, Ansible continues to execute tasks as long as there are hosts that have not yet failed. In some situations, such as when executing a rolling update, you may want to abort the play when a certain threshold of failures has been reached. To achieve this, you can set a maximum failure percentage on a play:: + + --- + - hosts: webservers + max_fail_percentage: 30 + serial: 10 + +The ``max_fail_percentage`` setting applies to each batch when you use it with :ref:`serial `. In the example above, if more than 3 of the 10 servers in the first (or any) batch of servers failed, the rest of the play would be aborted. + +.. note:: + + The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort the play when 2 of the systems failed, set the max_fail_percentage at 49 rather than 50. Controlling errors in blocks ============================ diff --git a/docs/docsite/rst/user_guide/playbooks_filters.rst b/docs/docsite/rst/user_guide/playbooks_filters.rst index b4ff576b09e..90b010b026e 100644 --- a/docs/docsite/rst/user_guide/playbooks_filters.rst +++ b/docs/docsite/rst/user_guide/playbooks_filters.rst @@ -178,6 +178,12 @@ You can cast values as certain types. For example, if you expect the input "True msg: test when: some_string_value | bool +If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:: + + - shell: echo "only on Red Hat 6, derivatives, and later" + when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6 + + .. versionadded:: 1.6 .. 
_filters_for_formatting_data: diff --git a/docs/docsite/rst/user_guide/playbooks_lookups.rst b/docs/docsite/rst/user_guide/playbooks_lookups.rst index 99eb3483a78..004db70836c 100644 --- a/docs/docsite/rst/user_guide/playbooks_lookups.rst +++ b/docs/docsite/rst/user_guide/playbooks_lookups.rst @@ -1,36 +1,17 @@ .. _playbooks_lookups: +******* Lookups -------- - -Lookup plugins allow access to outside data sources. Like all templating, these plugins are evaluated on the Ansible control machine, and can include reading the filesystem as well as contacting external datastores and services. This data is then made available using the standard templating system in Ansible. - -.. note:: - - Lookups occur on the local computer, not on the remote computer. - - They are executed within the directory containing the role or play, as opposed to local tasks which are executed with the directory of the executed script. - - You can pass wantlist=True to lookups to use in jinja2 template "for" loops. - - Lookups are an advanced feature. You should have a good working knowledge of Ansible plays before incorporating them. - -.. warning:: Some lookups pass arguments to a shell. When using variables from a remote/untrusted source, use the `|quote` filter to ensure safe usage. - -.. contents:: Topics - -.. _lookups_and_loops: - -Lookups and loops -````````````````` - -*lookup plugins* are a way to query external data sources, such as shell commands or even key value stores. - -Before Ansible 2.5, lookups were mostly used indirectly in ``with_`` constructs for looping. Starting with Ansible version 2.5, lookups are used more explicitly as part of Jinja2 expressions fed into the ``loop`` keyword. +******* +Lookup plugins retrieve data from outside sources such as files, databases, key/value stores, APIs, and other services. Like all templating, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. Before Ansible 2.5, lookups were mostly used indirectly in ``with_`` constructs for looping. Starting with Ansible 2.5, lookups are used more explicitly as part of Jinja2 expressions fed into the ``loop`` keyword. .. _lookups_and_variables: -Lookups and variables -````````````````````` +Using lookups in variables +========================== -One way of using lookups is to populate variables. These macros are evaluated each time they are used in a task (or template):: +You can populate variables using lookups. Ansible evaluates the value each time it is executed in a task (or template):: vars: motd_value: "{{ lookup('file', '/etc/motd') }}" @@ -38,7 +19,7 @@ One way of using lookups is to populate variables. These macros are evaluated ea - debug: msg: "motd value is {{ motd_value }}" -For more details and a complete list of lookup plugins available, please see :ref:`plugins_lookup`. +For more details and a list of lookup plugins in ansible-base, see :ref:`plugins_lookup`. You may also find lookup plugins in collections. You can review a list of lookup plugins installed on your control machine with the command ``ansible-doc -l -t lookup``. .. 
seealso:: diff --git a/docs/docsite/rst/user_guide/playbooks_module_defaults.rst b/docs/docsite/rst/user_guide/playbooks_module_defaults.rst index b7c08cc8202..4e43542c9a2 100644 --- a/docs/docsite/rst/user_guide/playbooks_module_defaults.rst +++ b/docs/docsite/rst/user_guide/playbooks_module_defaults.rst @@ -3,7 +3,7 @@ Module defaults =============== -If you find yourself calling the same module repeatedly with the same arguments, it can be useful to define default arguments for that particular module using the ``module_defaults`` attribute. +If you frequently call the same module with the same arguments, it can be useful to define default arguments for that particular module using the ``module_defaults`` attribute. Here is a basic example:: @@ -33,7 +33,7 @@ The ``module_defaults`` attribute can be used at the play, block, and task level debug: msg: "a default message" -It's also possible to remove any previously established defaults for a module by specifying an empty dict:: +You can remove any previously established defaults for a module by specifying an empty dict:: - file: state: touch @@ -82,8 +82,7 @@ Module defaults groups .. versionadded:: 2.7 -Ansible 2.7 adds a preview-status feature to group together modules that share common sets of parameters. This makes -it easier to author playbooks making heavy use of API-based modules such as cloud modules. +Ansible 2.7 adds a preview-status feature to group together modules that share common sets of parameters. This makes it easier to author playbooks making heavy use of API-based modules such as cloud modules. +---------+---------------------------+-----------------+ | Group | Purpose | Ansible Version | diff --git a/docs/docsite/rst/user_guide/playbooks_reuse_roles.rst b/docs/docsite/rst/user_guide/playbooks_reuse_roles.rst index 57efe7bce03..f2e874c5880 100644 --- a/docs/docsite/rst/user_guide/playbooks_reuse_roles.rst +++ b/docs/docsite/rst/user_guide/playbooks_reuse_roles.rst @@ -242,6 +242,8 @@ You can pass other keywords, including variables and tags, when importing roles: When you add a tag to an ``import_role`` statement, Ansible applies the tag to `all` tasks within the role. See :ref:`tag_inheritance` for details. +.. _run_role_twice: + Running a role multiple times in one playbook ============================================= diff --git a/docs/docsite/rst/user_guide/playbooks_special_topics.rst b/docs/docsite/rst/user_guide/playbooks_special_topics.rst index 95761f51a59..7abcb3077ab 100644 --- a/docs/docsite/rst/user_guide/playbooks_special_topics.rst +++ b/docs/docsite/rst/user_guide/playbooks_special_topics.rst @@ -3,9 +3,7 @@ Advanced Playbooks Features =========================== -Here are some playbook features that not everyone may need to learn, but can be quite useful for particular applications. -Browsing these topics is recommended as you may find some useful tips here, but feel free to learn the basics of Ansible first -and adopt these only if they seem relevant or useful to your environment. +As you write more playbooks and roles, you might have some special use cases. For example, you may want to execute "dry runs" of your playbooks (:ref:`check_mode_dry`), ask playbook users to supply information (:ref:`playbooks_prompts`), retrieve information from an external datastore or API (:ref:`lookup_plugins`), or change the way Ansible handles failures (:ref:`playbooks_error_handling`). The topics listed on this page cover these use cases and many more. 
If you cannot achieve your goals with basic Ansible concepts and actions, browse through these topics for help with your use case. .. toctree:: :maxdepth: 1 diff --git a/docs/docsite/rst/user_guide/playbooks_startnstep.rst b/docs/docsite/rst/user_guide/playbooks_startnstep.rst index 106fd2d5de4..e3b629619c1 100644 --- a/docs/docsite/rst/user_guide/playbooks_startnstep.rst +++ b/docs/docsite/rst/user_guide/playbooks_startnstep.rst @@ -1,34 +1,40 @@ -Start and Step -====================== +.. _playbooks_start_and_step: -This shows a few alternative ways to run playbooks. These modes are very useful for testing new plays or debugging. +*************************************** +Executing playbooks for troubleshooting +*************************************** +When you are testing new plays or debugging playbooks, you may need to run the same play multiple times. To make this more efficient, Ansible offers two alternative ways to execute a playbook: start-at-task and step mode. .. _start_at_task: -Start-at-task -````````````` -If you want to start executing your playbook at a particular task, you can do so with the ``--start-at-task`` option:: +start-at-task +------------- - ansible-playbook playbook.yml --start-at-task="install packages" +To start executing your playbook at a particular task (usually the task that failed on the previous run), use the ``--start-at-task`` option:: -The above will start executing your playbook at a task named "install packages". + ansible-playbook playbook.yml --start-at-task="install packages" +In this example, Ansible starts executing your playbook at a task named "install packages". This feature does not work with tasks inside dynamically re-used roles or tasks (``include_*``), see :ref:`dynamic_vs_static`. .. _step: -Step -```` +Step mode +--------- -Playbooks can also be executed interactively with ``--step``:: +To execute a playbook interactively, use ``--step``:: ansible-playbook playbook.yml --step -This will cause ansible to stop on each task, and ask if it should execute that task. -Say you had a task called "configure ssh", the playbook run will stop and ask:: +With this option, Ansible stops on each task, and asks if it should execute that task. For example, if you have a task called "configure ssh", the playbook run will stop and ask:: Perform task: configure ssh (y/n/c): -Answering "y" will execute the task, answering "n" will skip the task, and answering "c" -will continue executing all the remaining tasks without asking. +Answer "y" to execute the task, answer "n" to skip the task, and answer "c" to exit step mode, executing all remaining tasks without asking. + +.. seealso:: + :ref:`playbooks_intro` + An introduction to playbooks + :ref:`playbook_debugger` + Using the Ansible debugger diff --git a/docs/docsite/rst/user_guide/playbooks_strategies.rst b/docs/docsite/rst/user_guide/playbooks_strategies.rst index 787dd22ce53..7bd5dfe92d1 100644 --- a/docs/docsite/rst/user_guide/playbooks_strategies.rst +++ b/docs/docsite/rst/user_guide/playbooks_strategies.rst @@ -3,15 +3,14 @@ Controlling playbook execution: strategies and more =================================================== -By default, Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks. If you want to change this default behavior, you can use a different strategy plugin, change the number of forks, or apply one of several play-level keywords like ``serial``. 
+By default, Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks. If you want to change this default behavior, you can use a different strategy plugin, change the number of forks, or apply one of several keywords like ``serial``. .. contents:: :local: Selecting a strategy -------------------- -The default behavior described above is the :ref:`linear strategy`. Ansible offers other strategies, including the :ref:`debug strategy` (see also :ref:`playbook_debugger`) and the :ref:`free strategy`, which allows -each host to run until the end of the play as fast as it can:: +The default behavior described above is the :ref:`linear strategy`. Ansible offers other strategies, including the :ref:`debug strategy` (see also :ref:`playbook_debugger`) and the :ref:`free strategy`, which allows each host to run until the end of the play as fast as it can:: - hosts: all strategy: free @@ -36,14 +35,116 @@ or pass it on the command line: `ansible-playbook -f 30 my_playbook.yml`. Using keywords to control execution ----------------------------------- -Several play-level :ref:`keyword` also affect play execution. The most common one is ``serial``, which sets a number, a percentage, or a list of numbers of hosts you want to manage at a time. Setting ``serial`` with any strategy directs Ansible to 'batch' the hosts, completing the play on the specified number or percentage of hosts before starting the next 'batch'. This is especially useful for :ref:`rolling updates`. -The ``throttle`` keyword also affects execution and can be set at the block and task level. This keyword limits the number of workers up to the maximum set with the forks setting or ``serial``. Use ``throttle`` to restrict tasks that may be CPU-intensive or interact with a rate-limiting API:: +In addition to strategies, several :ref:`keywords` also affect play execution. You can set a number, a percentage, or a list of numbers of hosts you want to manage at a time with ``serial``. Ansible completes the play on the specified number or percentage of hosts before starting the next batch of hosts. You can restrict the number of workers allotted to a block or task with ``throttle``. You can control how Ansible selects the next host in a group to execute against with ``order``. These keywords are not strategies. They are directives or options applied to a play, block, or task. + +.. _rolling_update_batch_size: + +Setting the batch size with ``serial`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +By default, Ansible runs in parallel against all the hosts in the :ref:`pattern ` you set in the ``hosts:`` field of each play. 
If you want to manage only a few machines at a time, for example during a rolling update, you can define how many hosts Ansible should manage at a single time using the ``serial`` keyword:: + + --- + - name: test play + hosts: webservers + serial: 2 + gather_facts: False + + tasks: + - name: first task + command: hostname + - name: second task + command: hostname + +In the above example, if we had 4 hosts in the group 'webservers', Ansible would execute the play completely (both tasks) on 2 of the hosts before moving on to the next 2 hosts:: + + + PLAY [webservers] **************************************** + + TASK [first task] **************************************** + changed: [web2] + changed: [web1] + + TASK [second task] *************************************** + changed: [web1] + changed: [web2] + + PLAY [webservers] **************************************** + + TASK [first task] **************************************** + changed: [web3] + changed: [web4] + + TASK [second task] *************************************** + changed: [web3] + changed: [web4] + + PLAY RECAP *********************************************** + web1 : ok=2 changed=2 unreachable=0 failed=0 + web2 : ok=2 changed=2 unreachable=0 failed=0 + web3 : ok=2 changed=2 unreachable=0 failed=0 + web4 : ok=2 changed=2 unreachable=0 failed=0 + + +You can also specify a percentage with the ``serial`` keyword. Ansible applies the percentage to the total number of hosts in a play to determine the number of hosts per pass:: + + --- + - name: test play + hosts: webservers + serial: "30%" + +If the number of hosts does not divide equally into the number of passes, the final pass contains the remainder. In this example, if you had 20 hosts in the webservers group, the first batch would contain 6 hosts, the second batch would contain 6 hosts, the third batch would contain 6 hosts, and the last batch would contain 2 hosts. + +You can also specify batch sizes as a list. For example:: + + --- + - name: test play + hosts: webservers + serial: + - 1 + - 5 + - 10 + +In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left), every following batch would contain either 10 hosts or all the remaining hosts, if fewer than 10 hosts remained. + +You can list multiple batch sizes as percentages:: + + --- + - name: test play + hosts: webservers + serial: + - "10%" + - "20%" + - "100%" + +You can also mix and match the values:: + + --- + - name: test play + hosts: webservers + serial: + - 1 + - 5 + - "20%" + +.. note:: + No matter how small the percentage, the number of hosts per pass will always be 1 or greater. + +Restricting execution with ``throttle`` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The ``throttle`` keyword limits the number of workers for a particular task. It can be set at the block and task level. Use ``throttle`` to restrict tasks that may be CPU-intensive or interact with a rate-limiting API:: tasks: - command: /path/to/cpu_intensive_command throttle: 1 +If you have already restricted the number of forks or the number of machines to execute against in parallel, you can reduce the number of workers with ``throttle``, but you cannot increase it. In other words, to have an effect, your ``throttle`` setting must be lower than your ``forks`` or ``serial`` setting if you are using them together. + +Ordering execution based on inventory +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + The ``order`` keyword controls the order in which hosts are run. 
Possible values for order are: inventory: @@ -57,7 +158,7 @@ reverse_sorted: shuffle: Randomly ordered on each run -Other keywords that affect play execution include ``ignore_errors``, ``ignore_unreachable``, and ``any_errors_fatal``. Please note that these keywords are not strategies. They are play-level directives or options. +Other keywords that affect play execution include ``ignore_errors``, ``ignore_unreachable``, and ``any_errors_fatal``. These options are documented in :ref:`playbooks_error_handling`. .. seealso:: diff --git a/docs/docsite/rst/user_guide/playbooks_variables.rst b/docs/docsite/rst/user_guide/playbooks_variables.rst index de853e5abb7..ac22e84d6b2 100644 --- a/docs/docsite/rst/user_guide/playbooks_variables.rst +++ b/docs/docsite/rst/user_guide/playbooks_variables.rst @@ -4,799 +4,130 @@ Using Variables *************** -.. contents:: - :local: - -While automation exists to make it easier to make things repeatable, all systems are not exactly alike; some may require configuration that is slightly different from others. In some instances, the observed behavior or state of one system might influence how you configure other systems. For example, you might need to find out the IP address of a system and use it as a configuration value on another system. +Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. To represent the variations among those different systems, you can create variables with standard YAML syntax, including lists and dictionaries. You can set these variables in your playbooks, in your :ref:`inventory `, in re-usable :ref:`files ` or :ref:`roles `, or at the command line. You can also create variables during a playbook run by registering the return value or values of a task as a new variable. -Ansible uses *variables* to help deal with differences between systems. +You can use the variables you created in module arguments, in :ref:`conditional "when" statements `, in :ref:`templates `, and in :ref:`loops `. The `ansible-examples github repository `_ contains many examples of using variables in Ansible. -To understand variables you'll also want to read :ref:`playbooks_conditionals` and :ref:`playbooks_loops`. -Useful things like the **group_by** module -and the ``when`` conditional can also be used with variables, and to help manage differences between systems. +Once you understand the concepts and examples on this page, read about :ref:`Ansible facts `, which are variables you retrieve from remote systems. -The `ansible-examples github repository `_ contains many examples of how variables are used in Ansible. +.. contents:: + :local: .. _valid_variable_names: Creating valid variable names ============================= -Before you start using variables, it's important to know what are valid variable names. - -Variable names should be letters, numbers, and underscores. Variables should always start with a letter. - -``foo_port`` is a great variable. ``foo5`` is fine too. - -``foo-port``, ``foo port``, ``foo.port`` and ``12`` are not valid variable names. - -`Python keywords `_ such as ``async`` and ``lambda`` are not valid variable names and thus must be avoided. - -YAML also supports dictionaries which map keys to values. 
For instance:: - - foo: - field1: one - field2: two - -You can then reference a specific field in the dictionary using either bracket -notation or dot notation:: - - foo['field1'] - foo.field1 - -These will both reference the same value ("one"). However, if you choose to -use dot notation be aware that some keys can cause problems because they -collide with attributes and methods of python dictionaries. You should use -bracket notation instead of dot notation if you use keys which start and end -with two underscores (Those are reserved for special meanings in python) or -are any of the known public attributes: +Not all strings are valid Ansible variable names. A variable name must start with a letter, and can only include letters, numbers, and underscores. `Python keywords`_ are not valid variable names. -``add``, ``append``, ``as_integer_ratio``, ``bit_length``, ``capitalize``, ``center``, ``clear``, ``conjugate``, ``copy``, ``count``, ``decode``, ``denominator``, ``difference``, ``difference_update``, ``discard``, ``encode``, ``endswith``, ``expandtabs``, ``extend``, ``find``, ``format``, ``fromhex``, ``fromkeys``, ``get``, ``has_key``, ``hex``, ``imag``, ``index``, ``insert``, ``intersection``, ``intersection_update``, ``isalnum``, ``isalpha``, ``isdecimal``, ``isdigit``, ``isdisjoint``, ``is_integer``, ``islower``, ``isnumeric``, ``isspace``, ``issubset``, ``issuperset``, ``istitle``, ``isupper``, ``items``, ``iteritems``, ``iterkeys``, ``itervalues``, ``join``, ``keys``, ``ljust``, ``lower``, ``lstrip``, ``numerator``, ``partition``, ``pop``, ``popitem``, ``real``, ``remove``, ``replace``, ``reverse``, ``rfind``, ``rindex``, ``rjust``, ``rpartition``, ``rsplit``, ``rstrip``, ``setdefault``, ``sort``, ``split``, ``splitlines``, ``startswith``, ``strip``, ``swapcase``, ``symmetric_difference``, ``symmetric_difference_update``, ``title``, ``translate``, ``union``, ``update``, ``upper``, ``values``, ``viewitems``, ``viewkeys``, ``viewvalues``, ``zfill``. +.. table:: + :class: documentation-table -.. _variables_in_inventory: + ====================== ==================================================================== + Valid variable names Not valid + ====================== ==================================================================== + ``foo`` ``*foo``, `Python keywords`_ such as ``async`` and ``lambda`` -Defining variables in inventory -=============================== + ``foo_port`` ``foo-port``, ``foo port``, ``foo.port`` -Often you'll want to set variables for an individual host, or for a group of hosts in your inventory. For instance, machines in Boston -may all use 'boston.ntp.example.com' as an NTP server. The :ref:`intro_inventory` page has details on setting :ref:`host_variables` and :ref:`group_variables` in inventory. + ``foo5`` ``5foo``, ``12`` + ====================== ==================================================================== -.. _playbook_variables: +.. _Python keywords: https://docs.python.org/3/reference/lexical_analysis.html#keywords -Defining variables in a playbook -================================ - -You can define variables directly in a playbook:: +Simple variables +================ - - hosts: webservers - vars: - http_port: 80 +Simple variables combine a variable name with a single value. You can use this syntax (and the syntax for lists and dictionaries shown below) in a variety of places. See :ref:`setting_variables` for information on where to set variables. -This can be nice as it's right there when you are reading the playbook. 
+Defining simple variables +------------------------- -.. _included_variables: +You can define a simple variable using standard YAML syntax. For example:: -Defining variables in included files and roles -============================================== + remote_install_path: /opt/my_app_config -As described in :ref:`playbooks_reuse_roles`, variables can also be included in the playbook via include files, which may or may -not be part of an Ansible Role. Usage of roles is preferred as it provides a nice organizational system. +Referencing simple variables +---------------------------- -.. _about_jinja2: - -Using variables with Jinja2 -=========================== - -Once you've defined variables, you can use them in your playbooks using the Jinja2 templating system. Here's a simple Jinja2 template:: - - My amp goes to {{ max_amp_value }} - -This expression provides the most basic form of variable substitution. - -You can use the same syntax in playbooks. For example:: +Once you have defined a variable, use Jinja2 syntax to reference it. Jinja2 variables use double curly braces. For example, the expression ``My amp goes to {{ max_amp_value }}`` demonstrates the most basic form of variable substitution. You can use Jinja2 syntax in playbooks. For example:: template: src=foo.cfg.j2 dest={{ remote_install_path }}/foo.cfg -Here the variable defines the location of a file, which can vary from one system to another. - -Inside a template you automatically have access to all variables that are in scope for a host. Actually -it's more than that -- you can also read variables about other hosts. We'll show how to do that in a bit. - -.. note:: ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible - playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate - pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can unlock - possibilities. - -.. seealso:: +In this example, the variable defines the location of a file, which can vary from one system to another. - :ref:`playbooks_templating` - More information about Jinja2 templating - -.. _jinja2_filters: - -Transforming variables with Jinja2 filters -========================================== +.. note:: -Jinja2 filters let you transform the value of a variable within a template expression. For example, the ``capitalize`` filter capitalizes any value passed to it; the ``to_yaml`` and ``to_json`` filters change the format of your variable values. Jinja2 includes many `built-in filters `_ and Ansible supplies :ref:`many more filters `. + Ansible allows Jinja2 loops and conditionals in :ref:`templates ` but not in playbooks. You cannot create a loop of tasks. Ansible playbooks are pure machine-parseable YAML. .. _yaml_gotchas: -Hey wait, a YAML gotcha -======================= +When to quote variables (a YAML gotcha) +======================================= -YAML syntax requires that if you start a value with ``{{ foo }}`` you quote the whole line, since it wants to be -sure you aren't trying to start a YAML dictionary. This is covered on the :ref:`yaml_syntax` documentation. +If you start a value with ``{{ foo }}``, you must quote the whole expression to create valid YAML syntax. If you do not quote the whole expression, the YAML parser cannot interpret the syntax - it might be a variable or it might be the start of a YAML dictionary. 
See the :ref:`yaml_syntax` documentation for more guidance on writing YAML. -This won't work:: +If you use a variable without quotes like this:: - hosts: app_servers vars: app_path: {{ base_path }}/22 -Do it like this and you'll be fine:: +You will see: ``ERROR! Syntax Error while loading YAML.`` If you add quotes, Ansible works correctly:: - hosts: app_servers vars: app_path: "{{ base_path }}/22" -.. _vars_and_facts: - -Variables discovered from systems: Facts -======================================== - -There are other places where variables can come from, but these are a type of variable that are discovered, not set by the user. - -Facts are information derived from speaking with your remote systems. You can find a complete set under the ``ansible_facts`` variable, -most facts are also 'injected' as top level variables preserving the ``ansible_`` prefix, but some are dropped due to conflicts. -This can be disabled via the :ref:`INJECT_FACTS_AS_VARS` setting. - -An example of this might be the IP address of the remote host, or what the operating system is. - -To see what information is available, try the following in a play:: - - - debug: var=ansible_facts - -To see the 'raw' information as gathered:: - - ansible hostname -m setup - -This will return a large amount of variable data, which may look like this on Ansible 2.7: - -.. code-block:: json - - { - "ansible_all_ipv4_addresses": [ - "REDACTED IP ADDRESS" - ], - "ansible_all_ipv6_addresses": [ - "REDACTED IPV6 ADDRESS" - ], - "ansible_apparmor": { - "status": "disabled" - }, - "ansible_architecture": "x86_64", - "ansible_bios_date": "11/28/2013", - "ansible_bios_version": "4.1.5", - "ansible_cmdline": { - "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-862.14.4.el7.x86_64", - "console": "ttyS0,115200", - "no_timer_check": true, - "nofb": true, - "nomodeset": true, - "ro": true, - "root": "LABEL=cloudimg-rootfs", - "vga": "normal" - }, - "ansible_date_time": { - "date": "2018-10-25", - "day": "25", - "epoch": "1540469324", - "hour": "12", - "iso8601": "2018-10-25T12:08:44Z", - "iso8601_basic": "20181025T120844109754", - "iso8601_basic_short": "20181025T120844", - "iso8601_micro": "2018-10-25T12:08:44.109968Z", - "minute": "08", - "month": "10", - "second": "44", - "time": "12:08:44", - "tz": "UTC", - "tz_offset": "+0000", - "weekday": "Thursday", - "weekday_number": "4", - "weeknumber": "43", - "year": "2018" - }, - "ansible_default_ipv4": { - "address": "REDACTED", - "alias": "eth0", - "broadcast": "REDACTED", - "gateway": "REDACTED", - "interface": "eth0", - "macaddress": "REDACTED", - "mtu": 1500, - "netmask": "255.255.255.0", - "network": "REDACTED", - "type": "ether" - }, - "ansible_default_ipv6": {}, - "ansible_device_links": { - "ids": {}, - "labels": { - "xvda1": [ - "cloudimg-rootfs" - ], - "xvdd": [ - "config-2" - ] - }, - "masters": {}, - "uuids": { - "xvda1": [ - "cac81d61-d0f8-4b47-84aa-b48798239164" - ], - "xvdd": [ - "2018-10-25-12-05-57-00" - ] - } - }, - "ansible_devices": { - "xvda": { - "holders": [], - "host": "", - "links": { - "ids": [], - "labels": [], - "masters": [], - "uuids": [] - }, - "model": null, - "partitions": { - "xvda1": { - "holders": [], - "links": { - "ids": [], - "labels": [ - "cloudimg-rootfs" - ], - "masters": [], - "uuids": [ - "cac81d61-d0f8-4b47-84aa-b48798239164" - ] - }, - "sectors": "83883999", - "sectorsize": 512, - "size": "40.00 GB", - "start": "2048", - "uuid": "cac81d61-d0f8-4b47-84aa-b48798239164" - } - }, - "removable": "0", - "rotational": "0", - "sas_address": null, - "sas_device_handle": 
null, - "scheduler_mode": "deadline", - "sectors": "83886080", - "sectorsize": "512", - "size": "40.00 GB", - "support_discard": "0", - "vendor": null, - "virtual": 1 - }, - "xvdd": { - "holders": [], - "host": "", - "links": { - "ids": [], - "labels": [ - "config-2" - ], - "masters": [], - "uuids": [ - "2018-10-25-12-05-57-00" - ] - }, - "model": null, - "partitions": {}, - "removable": "0", - "rotational": "0", - "sas_address": null, - "sas_device_handle": null, - "scheduler_mode": "deadline", - "sectors": "131072", - "sectorsize": "512", - "size": "64.00 MB", - "support_discard": "0", - "vendor": null, - "virtual": 1 - }, - "xvde": { - "holders": [], - "host": "", - "links": { - "ids": [], - "labels": [], - "masters": [], - "uuids": [] - }, - "model": null, - "partitions": { - "xvde1": { - "holders": [], - "links": { - "ids": [], - "labels": [], - "masters": [], - "uuids": [] - }, - "sectors": "167770112", - "sectorsize": 512, - "size": "80.00 GB", - "start": "2048", - "uuid": null - } - }, - "removable": "0", - "rotational": "0", - "sas_address": null, - "sas_device_handle": null, - "scheduler_mode": "deadline", - "sectors": "167772160", - "sectorsize": "512", - "size": "80.00 GB", - "support_discard": "0", - "vendor": null, - "virtual": 1 - } - }, - "ansible_distribution": "CentOS", - "ansible_distribution_file_parsed": true, - "ansible_distribution_file_path": "/etc/redhat-release", - "ansible_distribution_file_variety": "RedHat", - "ansible_distribution_major_version": "7", - "ansible_distribution_release": "Core", - "ansible_distribution_version": "7.5.1804", - "ansible_dns": { - "nameservers": [ - "127.0.0.1" - ] - }, - "ansible_domain": "", - "ansible_effective_group_id": 1000, - "ansible_effective_user_id": 1000, - "ansible_env": { - "HOME": "/home/zuul", - "LANG": "en_US.UTF-8", - "LESSOPEN": "||/usr/bin/lesspipe.sh %s", - "LOGNAME": "zuul", - "MAIL": "/var/mail/zuul", - "PATH": "/usr/local/bin:/usr/bin", - "PWD": "/home/zuul", - "SELINUX_LEVEL_REQUESTED": "", - "SELINUX_ROLE_REQUESTED": "", - "SELINUX_USE_CURRENT_RANGE": "", - "SHELL": "/bin/bash", - "SHLVL": "2", - "SSH_CLIENT": "REDACTED 55672 22", - "SSH_CONNECTION": "REDACTED 55672 REDACTED 22", - "USER": "zuul", - "XDG_RUNTIME_DIR": "/run/user/1000", - "XDG_SESSION_ID": "1", - "_": "/usr/bin/python2" - }, - "ansible_eth0": { - "active": true, - "device": "eth0", - "ipv4": { - "address": "REDACTED", - "broadcast": "REDACTED", - "netmask": "255.255.255.0", - "network": "REDACTED" - }, - "ipv6": [ - { - "address": "REDACTED", - "prefix": "64", - "scope": "link" - } - ], - "macaddress": "REDACTED", - "module": "xen_netfront", - "mtu": 1500, - "pciid": "vif-0", - "promisc": false, - "type": "ether" - }, - "ansible_eth1": { - "active": true, - "device": "eth1", - "ipv4": { - "address": "REDACTED", - "broadcast": "REDACTED", - "netmask": "255.255.224.0", - "network": "REDACTED" - }, - "ipv6": [ - { - "address": "REDACTED", - "prefix": "64", - "scope": "link" - } - ], - "macaddress": "REDACTED", - "module": "xen_netfront", - "mtu": 1500, - "pciid": "vif-1", - "promisc": false, - "type": "ether" - }, - "ansible_fips": false, - "ansible_form_factor": "Other", - "ansible_fqdn": "centos-7-rax-dfw-0003427354", - "ansible_hostname": "centos-7-rax-dfw-0003427354", - "ansible_interfaces": [ - "lo", - "eth1", - "eth0" - ], - "ansible_is_chroot": false, - "ansible_kernel": "3.10.0-862.14.4.el7.x86_64", - "ansible_lo": { - "active": true, - "device": "lo", - "ipv4": { - "address": "127.0.0.1", - "broadcast": "host", - "netmask": 
"255.0.0.0", - "network": "127.0.0.0" - }, - "ipv6": [ - { - "address": "::1", - "prefix": "128", - "scope": "host" - } - ], - "mtu": 65536, - "promisc": false, - "type": "loopback" - }, - "ansible_local": {}, - "ansible_lsb": { - "codename": "Core", - "description": "CentOS Linux release 7.5.1804 (Core)", - "id": "CentOS", - "major_release": "7", - "release": "7.5.1804" - }, - "ansible_machine": "x86_64", - "ansible_machine_id": "2db133253c984c82aef2fafcce6f2bed", - "ansible_memfree_mb": 7709, - "ansible_memory_mb": { - "nocache": { - "free": 7804, - "used": 173 - }, - "real": { - "free": 7709, - "total": 7977, - "used": 268 - }, - "swap": { - "cached": 0, - "free": 0, - "total": 0, - "used": 0 - } - }, - "ansible_memtotal_mb": 7977, - "ansible_mounts": [ - { - "block_available": 7220998, - "block_size": 4096, - "block_total": 9817227, - "block_used": 2596229, - "device": "/dev/xvda1", - "fstype": "ext4", - "inode_available": 10052341, - "inode_total": 10419200, - "inode_used": 366859, - "mount": "/", - "options": "rw,seclabel,relatime,data=ordered", - "size_available": 29577207808, - "size_total": 40211361792, - "uuid": "cac81d61-d0f8-4b47-84aa-b48798239164" - }, - { - "block_available": 0, - "block_size": 2048, - "block_total": 252, - "block_used": 252, - "device": "/dev/xvdd", - "fstype": "iso9660", - "inode_available": 0, - "inode_total": 0, - "inode_used": 0, - "mount": "/mnt/config", - "options": "ro,relatime,mode=0700", - "size_available": 0, - "size_total": 516096, - "uuid": "2018-10-25-12-05-57-00" - } - ], - "ansible_nodename": "centos-7-rax-dfw-0003427354", - "ansible_os_family": "RedHat", - "ansible_pkg_mgr": "yum", - "ansible_processor": [ - "0", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", - "1", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", - "2", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", - "3", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", - "4", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", - "5", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", - "6", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", - "7", - "GenuineIntel", - "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz" - ], - "ansible_processor_cores": 8, - "ansible_processor_count": 8, - "ansible_processor_nproc": 8, - "ansible_processor_threads_per_core": 1, - "ansible_processor_vcpus": 8, - "ansible_product_name": "HVM domU", - "ansible_product_serial": "REDACTED", - "ansible_product_uuid": "REDACTED", - "ansible_product_version": "4.1.5", - "ansible_python": { - "executable": "/usr/bin/python2", - "has_sslcontext": true, - "type": "CPython", - "version": { - "major": 2, - "micro": 5, - "minor": 7, - "releaselevel": "final", - "serial": 0 - }, - "version_info": [ - 2, - 7, - 5, - "final", - 0 - ] - }, - "ansible_python_version": "2.7.5", - "ansible_real_group_id": 1000, - "ansible_real_user_id": 1000, - "ansible_selinux": { - "config_mode": "enforcing", - "mode": "enforcing", - "policyvers": 31, - "status": "enabled", - "type": "targeted" - }, - "ansible_selinux_python_present": true, - "ansible_service_mgr": "systemd", - "ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE", - "ansible_ssh_host_key_ed25519_public": "REDACTED KEY VALUE", - "ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE", - "ansible_swapfree_mb": 0, - "ansible_swaptotal_mb": 0, - "ansible_system": "Linux", - "ansible_system_capabilities": [ - "" - ], - "ansible_system_capabilities_enforced": "True", - 
"ansible_system_vendor": "Xen", - "ansible_uptime_seconds": 151, - "ansible_user_dir": "/home/zuul", - "ansible_user_gecos": "", - "ansible_user_gid": 1000, - "ansible_user_id": "zuul", - "ansible_user_shell": "/bin/bash", - "ansible_user_uid": 1000, - "ansible_userspace_architecture": "x86_64", - "ansible_userspace_bits": "64", - "ansible_virtualization_role": "guest", - "ansible_virtualization_type": "xen", - "gather_subset": [ - "all" - ], - "module_setup": true - } - -In the above the model of the first disk may be referenced in a template or playbook as:: - - {{ ansible_facts['devices']['xvda']['model'] }} - -Similarly, the hostname as the system reports it is:: - - {{ ansible_facts['nodename'] }} - -Facts are frequently used in conditionals (see :ref:`playbooks_conditionals`) and also in templates. - -Facts can be also used to create dynamic groups of hosts that match particular criteria, see the :ref:`modules` documentation on **group_by** for details, as well as in generalized conditional statements as discussed in the :ref:`playbooks_conditionals` chapter. - -.. _disabling_facts: - -Disabling facts ---------------- - -If you know you don't need any fact data about your hosts, and know everything about your systems centrally, you -can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of -systems, mainly, or if you are using Ansible on experimental platforms. In any play, just do this:: - - - hosts: whatever - gather_facts: no - -.. _local_facts: - -Local facts (facts.d) ---------------------- - -.. versionadded:: 1.3 - -As discussed in the playbooks chapter, Ansible facts are a way of getting data about remote systems for use in playbook variables. - -Usually these are discovered automatically by the ``setup`` module in Ansible. Users can also write custom facts modules, as described in the API guide. However, what if you want to have a simple way to provide system or user provided data for use in Ansible variables, without writing a fact module? - -"Facts.d" is one mechanism for users to control some aspect of how their systems are managed. - -.. note:: Perhaps "local facts" is a bit of a misnomer, it means "locally supplied user values" as opposed to "centrally supplied user values", or what facts are -- "locally dynamically determined values". - -If a remotely managed system has an ``/etc/ansible/facts.d`` directory, any files in this directory -ending in ``.fact``, can be JSON, INI, or executable files returning JSON, and these can supply local facts in Ansible. -An alternate directory can be specified using the ``fact_path`` play keyword. - -For example, assume ``/etc/ansible/facts.d/preferences.fact`` contains:: - - [general] - asdf=1 - bar=2 - -This will produce a hash variable fact named ``general`` with ``asdf`` and ``bar`` as members. -To validate this, run the following:: - - ansible -m setup -a "filter=ansible_local" - -And you will see the following fact added:: - - "ansible_local": { - "preferences": { - "general": { - "asdf" : "1", - "bar" : "2" - } - } - } - -And this data can be accessed in a ``template/playbook`` as:: - - {{ ansible_local['preferences']['general']['asdf'] }} - -The local namespace prevents any user supplied fact from overriding system facts or variables defined elsewhere in the playbook. - -.. note:: The key part in the key=value pairs will be converted into lowercase inside the ansible_local variable. 
Using the example above, if the ini file contained ``XYZ=3`` in the ``[general]`` section, then you should expect to access it as: ``{{ ansible_local['preferences']['general']['xyz'] }}`` and not ``{{ ansible_local['preferences']['general']['XYZ'] }}``. This is because Ansible uses Python's `ConfigParser`_ which passes all option names through the `optionxform`_ method and this method's default implementation converts option names to lower case. - -.. _ConfigParser: https://docs.python.org/2/library/configparser.html -.. _optionxform: https://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser.optionxform +Defining variables as lists +--------------------------- -If you have a playbook that is copying over a custom fact and then running it, making an explicit call to re-run the setup module -can allow that fact to be used during that particular play. Otherwise, it will be available in the next play that gathers fact information. -Here is an example of what that might look like:: +You can define variables with multiple values using YAML lists. For example:: - - hosts: webservers - tasks: - - name: create directory for ansible custom facts - file: state=directory recurse=yes path=/etc/ansible/facts.d - - name: install custom ipmi fact - copy: src=ipmi.fact dest=/etc/ansible/facts.d - - name: re-read facts after adding custom fact - setup: filter=ansible_local + region: + - northeast + - southeast + - midwest -In this pattern however, you could also write a fact module as well, and may wish to consider this as an option. +Referencing list variables +-------------------------- -.. _ansible_version: +When you use variables defined as a list (also called an array), you can use individual, specific fields from that list. The first item in a list is item 0, the second item is item 1. For example:: -Ansible version ---------------- + region: "{{ region[0] }}" -.. versionadded:: 1.8 +The value of this expression would be "northeast". -To adapt playbook behavior to specific version of ansible, a variable ansible_version is available, with the following -structure:: +Dictionary variables +==================== - "ansible_version": { - "full": "2.0.0.2", - "major": 2, - "minor": 0, - "revision": 0, - "string": "2.0.0.2" - } +Defining variables as key:value dictionaries +-------------------------------------------- -.. _fact_caching: +You can define more complex variables using YAML dictionaries. A YAML dictionary maps keys to values. For example:: -Caching Facts -------------- - -.. versionadded:: 1.8 - -As shown elsewhere in the docs, it is possible for one server to reference variables about another, like so:: - - {{ hostvars['asdf.example.com']['ansible_facts']['os_family'] }} - -With "Fact Caching" disabled, in order to do this, Ansible must have already talked to 'asdf.example.com' in the -current play, or another play up higher in the playbook. This is the default configuration of ansible. - -To avoid this, Ansible 1.8 allows the ability to save facts between playbook runs, but this feature must be manually -enabled. Why might this be useful? - -With a very large infrastructure with thousands of hosts, fact caching could be configured to run nightly. Configuration of a small set of servers could run ad-hoc or periodically throughout the day. With fact caching enabled, it would -not be necessary to "hit" all servers to reference variables and information about them. 
- -With fact caching enabled, it is possible for machine in one group to reference variables about machines in the other group, despite the fact that they have not been communicated with in the current execution of /usr/bin/ansible-playbook. - -To benefit from cached facts, you will want to change the ``gathering`` setting to ``smart`` or ``explicit`` or set ``gather_facts`` to ``False`` in most plays. - -Currently, Ansible ships with two persistent cache plugins: redis and jsonfile. - -To configure fact caching using redis, enable it in ``ansible.cfg`` as follows:: - - [defaults] - gathering = smart - fact_caching = redis - fact_caching_timeout = 86400 - # seconds - -To get redis up and running, perform the equivalent OS commands:: - - yum install redis - service redis start - pip install redis - -Note that the Python redis library should be installed from pip, the version packaged in EPEL is too old for use by Ansible. + foo: + field1: one + field2: two -In current embodiments, this feature is in beta-level state and the Redis plugin does not support port or password configuration, this is expected to change in the near future. +Referencing key:value dictionary variables +------------------------------------------ -To configure fact caching using jsonfile, enable it in ``ansible.cfg`` as follows:: +When you use variables defined as a key:value dictionary (also called a hash), you can use individual, specific fields from that dictionary using either bracket notation or dot notation:: - [defaults] - gathering = smart - fact_caching = jsonfile - fact_caching_connection = /path/to/cachedir - fact_caching_timeout = 86400 - # seconds + foo['field1'] + foo.field1 -``fact_caching_connection`` is a local filesystem path to a writeable -directory (ansible will attempt to create the directory if one does not exist). +Both of these examples reference the same value ("one"). Bracket notation always works. Dot notation can cause problems because some keys collide with attributes and methods of python dictionaries. Use bracket notation if you use keys which start and end with two underscores (which are reserved for special meanings in python) or are any of the known public attributes: -``fact_caching_timeout`` is the number of seconds to cache the recorded facts. +``add``, ``append``, ``as_integer_ratio``, ``bit_length``, ``capitalize``, ``center``, ``clear``, ``conjugate``, ``copy``, ``count``, ``decode``, ``denominator``, ``difference``, ``difference_update``, ``discard``, ``encode``, ``endswith``, ``expandtabs``, ``extend``, ``find``, ``format``, ``fromhex``, ``fromkeys``, ``get``, ``has_key``, ``hex``, ``imag``, ``index``, ``insert``, ``intersection``, ``intersection_update``, ``isalnum``, ``isalpha``, ``isdecimal``, ``isdigit``, ``isdisjoint``, ``is_integer``, ``islower``, ``isnumeric``, ``isspace``, ``issubset``, ``issuperset``, ``istitle``, ``isupper``, ``items``, ``iteritems``, ``iterkeys``, ``itervalues``, ``join``, ``keys``, ``ljust``, ``lower``, ``lstrip``, ``numerator``, ``partition``, ``pop``, ``popitem``, ``real``, ``remove``, ``replace``, ``reverse``, ``rfind``, ``rindex``, ``rjust``, ``rpartition``, ``rsplit``, ``rstrip``, ``setdefault``, ``sort``, ``split``, ``splitlines``, ``startswith``, ``strip``, ``swapcase``, ``symmetric_difference``, ``symmetric_difference_update``, ``title``, ``translate``, ``union``, ``update``, ``upper``, ``values``, ``viewitems``, ``viewkeys``, ``viewvalues``, ``zfill``. .. 
_registered_variables: Registering variables ===================== -Another major use of variables is running a command and registering the result of that command as a variable. When you execute a task and save the return value in a variable for use in later tasks, you create a registered variable. There are more examples of this in the -:ref:`playbooks_conditionals` chapter. - -For example:: +You can create variables from the output of an Ansible task with the task keyword ``register``. You can use registered variables in any later tasks in your play. For example:: - hosts: web_servers @@ -809,117 +140,71 @@ For example:: - shell: /usr/bin/bar when: foo_result.rc == 5 -Results will vary from module to module. Each module's documentation includes a ``RETURN`` section describing that module's return values. To see the values for a particular task, run your playbook with ``-v``. +See :ref:`playbooks_conditionals` for more examples. Registered variables may be simple variables, list variables, dictionary variables, or complex nested data structures. The documentation for each module includes a ``RETURN`` section describing the return values for that module. To see the values for a particular task, run your playbook with ``-v``. -Registered variables are similar to facts, with a few key differences. Like facts, registered variables are host-level variables. However, registered variables are only stored in memory. (Ansible facts are backed by whatever cache plugin you have configured.) Registered variables are only valid on the host for the rest of the current playbook run. Finally, registered variables and facts have different :ref:`precedence levels `. +Registered variables are stored in memory. You cannot cache registered variables for use in future plays. Registered variables are only valid on the host for the rest of the current playbook run. -When you register a variable in a task with a loop, the registered variable contains a value for each item in the loop. The data structure placed in the variable during the loop will contain a ``results`` attribute, that is a list of all responses from the module. For a more in-depth example of how this works, see the :ref:`playbooks_loops` section on using register with a loop. +Registered variables are host-level variables. When you register a variable in a task with a loop, the registered variable contains a value for each item in the loop. The data structure placed in the variable during the loop will contain a ``results`` attribute, that is a list of all responses from the module. For a more in-depth example of how this works, see the :ref:`playbooks_loops` section on using register with a loop. -.. note:: If a task fails or is skipped, the variable still is registered with a failure or skipped status, the only way to avoid registering a variable is using tags. +.. note:: If a task fails or is skipped, Ansible still registers a variable with a failure or skipped status, unless the task is skipped based on tags. See :ref:`tags` for information on adding and using tags. .. _accessing_complex_variable_data: -Accessing complex variable data -=============================== +Referencing nested variables +============================ -We already described facts a little higher up in the documentation. - -Some provided facts, like networking information, are made available as nested data structures. To access -them a simple ``{{ foo }}`` is not sufficient, but it is still easy to do. 
Here's how we get an IP address:: +Many registered variables (and :ref:`facts `) are nested YAML or JSON data structures. You cannot access values from these nested data structures with the simple ``{{ foo }}`` syntax. You must use either bracket notation or dot notation. For example, to reference an IP address from your facts using the bracket notation:: {{ ansible_facts["eth0"]["ipv4"]["address"] }} -OR alternatively:: +Using the dot notation:: {{ ansible_facts.eth0.ipv4.address }} -Similarly, this is how we access the first element of an array:: - - {{ foo[0] }} - -.. _magic_variables_and_hostvars: - -Accessing information about other hosts with magic variables -============================================================ - -Whether or not you define any variables, you can access information about your hosts with the :ref:`special_variables` Ansible provides, including "magic" variables, facts, and connection variables. Magic variable names are reserved - do not set variables with these names. The variable ``environment`` is also reserved. - -The most commonly used magic variables are ``hostvars``, ``groups``, ``group_names``, and ``inventory_hostname``. - -``hostvars`` lets you access variables for another host, including facts that have been gathered about that host. You can access host variables at any point in a playbook. Even if you haven't connected to that host yet in any play in the playbook or set of playbooks, you can still get the variables, but you will not be able to see the facts. - -If your database server wants to use the value of a 'fact' from another node, or an inventory variable -assigned to another node, it's easy to do so within a template or even an action line:: - - {{ hostvars['test.example.com']['ansible_facts']['distribution'] }} - -``groups`` is a list of all the groups (and hosts) in the inventory. This can be used to enumerate all hosts within a group. For example: - -.. code-block:: jinja - - {% for host in groups['app_servers'] %} - # something that applies to all app servers. - {% endfor %} - -A frequently used idiom is walking a group to find all IP addresses in that group. - -.. code-block:: jinja - - {% for host in groups['app_servers'] %} - {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }} - {% endfor %} - -You can use this idiom to point a frontend proxy server to all of the app servers, to set up the correct firewall rules between servers, etc. -You need to make sure that the facts of those hosts have been populated before though, for example by running a play against them if the facts have not been cached recently (fact caching was added in Ansible 1.8). - -``group_names`` is a list (array) of all the groups the current host is in. This can be used in templates using Jinja2 syntax to make template source files that vary based on the group membership (or role) of the host: - -.. code-block:: jinja - - {% if 'webserver' in group_names %} - # some part of a configuration file that only applies to webservers - {% endif %} +.. _about_jinja2: +.. _jinja2_filters: -``inventory_hostname`` is the name of the hostname as configured in Ansible's inventory host file. This can -be useful when you've disabled fact-gathering, or you don't want to rely on the discovered hostname ``ansible_hostname``. If you have a long FQDN, you can use ``inventory_hostname_short``, which contains the part up to the first -period, without the rest of the domain. 
+Transforming variables with Jinja2 filters +========================================== -Other useful magic variables refer to the current play or playbook, including: +Jinja2 filters let you transform the value of a variable within a template expression. For example, the ``capitalize`` filter capitalizes any value passed to it; the ``to_yaml`` and ``to_json`` filters change the format of your variable values. Jinja2 includes many `built-in filters `_ and Ansible supplies many more filters. See :ref:`playbooks_filters` for examples. -.. versionadded:: 2.2 +.. _setting_variables: -``ansible_play_hosts`` is the full list of all hosts still active in the current play. +Where to set variables +====================== -.. versionadded:: 2.2 +You can set variables in a variety of places, including in inventory, in playbooks, in re-usable files, in roles, and at the command line. Ansible loads every possible variable it finds, then chooses the variable to apply based on :ref:`variable precedence rules `. -``ansible_play_batch`` is available as a list of hostnames that are in scope for the current 'batch' of the play. The batch size is defined by ``serial``, when not set it is equivalent to the whole play (making it the same as ``ansible_play_hosts``). +.. _variables_in_inventory: -.. versionadded:: 2.3 +Setting variables in inventory +------------------------------ -``ansible_playbook_python`` is the path to the python executable used to invoke the Ansible command line tool. +You can set different variables for each individual host, or set shared variables for a group of hosts in your inventory. For example, if all machines in the ``[Boston]`` group use 'boston.ntp.example.com' as an NTP server, you can set a group variable. The :ref:`intro_inventory` page has details on setting :ref:`host variables ` and :ref:`group variables ` in inventory. -These vars may be useful for filling out templates with multiple hostnames or for injecting the list into the rules for a load balancer. +.. _playbook_variables: -Also available, ``inventory_dir`` is the pathname of the directory holding Ansible's inventory host file, ``inventory_file`` is the pathname and the filename pointing to the Ansible's inventory host file. +Setting variables in a playbook +------------------------------- -``playbook_dir`` contains the playbook base directory. +You can define variables directly in a playbook:: -We then have ``role_path`` which will return the current role's pathname (since 1.8). This will only work inside a role. + - hosts: webservers + vars: + http_port: 80 -And finally, ``ansible_check_mode`` (added in version 2.1), a boolean magic variable which will be set to ``True`` if you run Ansible with ``--check``. +When you set variables in a playbook, they are visible to anyone who runs that playbook. This is especially useful if you share playbooks widely. +.. _included_variables: .. _variable_file_separation_details: -Defining variables in files -=========================== +Setting variables in included files and roles +--------------------------------------------- -It's a great idea to keep your playbooks under source control, but -you may wish to make the playbook source public while keeping certain -important variables private. Similarly, sometimes you may just -want to keep certain information in different files, away from -the main playbook. +You can set variables in re-usable variables files and/or in re-usable roles. See :ref:`playbooks_reuse` for more details. 
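+For example, here is a minimal sketch of loading a variables file in the middle of a play with the ``include_vars`` module (the file name ``vars/app_settings.yml`` and the variable ``app_port`` are only illustrative)::
+
+    - hosts: webservers
+      tasks:
+        - name: load variables from an external file
+          include_vars: vars/app_settings.yml
+
+        - name: use a variable defined in that file
+          debug:
+            var: app_port
+
+Because ``include_vars`` runs as a task, you can combine it with conditionals or loops to load different variables files in different situations. The ``vars_files`` play keyword, by comparison, loads variables files for the whole play when the play starts.
+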
-You can do this by using an external variables file, or files, just like this:: +Setting variables in included variables files lets you separate sensitive variables from playbooks, so you can keep your playbooks under source control and even share them without exposing passwords or other private information. You can do this by using an external variables file, or files, just like this:: --- @@ -935,9 +220,6 @@ You can do this by using an external variables file, or files, just like this:: - name: this is just a placeholder command: /bin/echo foo -This removes the risk of sharing sensitive data with others when -sharing your playbook source with them. - The contents of each variables file is a simple YAML dictionary, like this:: --- @@ -946,70 +228,62 @@ The contents of each variables file is a simple YAML dictionary, like this:: password: magic .. note:: - It's also possible to keep per-host and per-group variables in very - similar files, this is covered in :ref:`splitting_out_vars`. + You can keep per-host and per-group variables in similar files, see :ref:`splitting_out_vars`. .. _passing_variables_on_the_command_line: -Passing variables on the command line -===================================== +Setting variables at runtime +---------------------------- + +You can set variables when you run your playbook by passing variables at the command line using the ``--extra-vars`` (or ``-e``) argument. You can also request user input with a ``vars_prompt`` (see :ref:`playbooks_prompts`). When you pass variables at the command line, use a single quoted string (containing one or more variables) in one of the formats below. -In addition to ``vars_prompt`` and ``vars_files``, it is possible to set variables at the -command line using the ``--extra-vars`` (or ``-e``) argument. Variables can be defined using -a single quoted string (containing one or more variables) using one of the formats below +key=value format +^^^^^^^^^^^^^^^^ -key=value format:: +Values passed in using the ``key=value`` syntax are interpreted as strings. Use the JSON format if you need to pass non-string values (Booleans, integers, floats, lists, and so on). + +.. code-block:: text ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo" -.. note:: Values passed in using the ``key=value`` syntax are interpreted as strings. - Use the JSON format if you need to pass in anything that shouldn't be a string (Booleans, integers, floats, lists etc). +JSON string format +^^^^^^^^^^^^^^^^^^ -JSON string format:: +.. code-block:: text ansible-playbook release.yml --extra-vars '{"version":"1.23.45","other_variable":"foo"}' ansible-playbook arcade.yml --extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}' -vars from a JSON or YAML file:: +When passing variables with ``--extra-vars``, you must escape quotes and other special characters appropriately for both your markup (e.g. JSON), and for your shell:: - ansible-playbook release.yml --extra-vars "@some_file.json" + ansible-playbook arcade.yml --extra-vars "{\"name\":\"Conan O\'Brien\"}" + ansible-playbook arcade.yml --extra-vars '{"name":"Conan O'\\\''Brien"}' + ansible-playbook script.yml --extra-vars "{\"dialog\":\"He said \\\"I just can\'t get enough of those single and double-quotes"\!"\\\"\"}" -This is useful for, among other things, setting the hosts group or the user for the playbook. +If you have a lot of special characters, use a JSON or YAML file containing the variable definitions. 
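+For instance, a small variables file (the file name and these values are only illustrative; they echo the key=value example above) might contain::
+
+    version: "1.23.45"
+    other_variable: foo
+
+You can then pass the whole file with the ``@`` syntax shown in the next section.
+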
-Escaping quotes and other special characters: +vars from a JSON or YAML file +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Ensure you're escaping quotes appropriately for both your markup (e.g. JSON), and for -the shell you're operating in.:: +.. code-block:: text - ansible-playbook arcade.yml --extra-vars "{\"name\":\"Conan O\'Brien\"}" - ansible-playbook arcade.yml --extra-vars '{"name":"Conan O'\\\''Brien"}' - ansible-playbook script.yml --extra-vars "{\"dialog\":\"He said \\\"I just can\'t get enough of those single and double-quotes"\!"\\\"\"}" + ansible-playbook release.yml --extra-vars "@some_file.json" -In these cases, it's probably best to use a JSON or YAML file containing the variable -definitions. .. _ansible_variable_precedence: Variable precedence: Where should I put a variable? =================================================== -A lot of folks may ask about how variables override another. Ultimately it's Ansible's philosophy that it's better -you know where to put a variable, and then you have to think about it a lot less. - -Avoid defining the variable "x" in 47 places and then ask the question "which x gets used". -Why? Because that's not Ansible's Zen philosophy of doing things. +You can set multiple variables with the same name in many different places. When you do this, Ansible loads every possible variable it finds, then chooses the variable to apply based on variable precedence. In other words, the different variables will override each other in a certain order. -There is only one Empire State Building. One Mona Lisa, etc. Figure out where to define a variable, and don't make -it complicated. +Ansible configuration, command-line options, and playbook keywords can also affect Ansible behavior. In general, variables take precedence, so that host-specific settings can override more general settings. For examples and more details on the precedence of these various settings, see :ref:`general_precedence_rules`. -However, let's go ahead and get precedence out of the way! It exists. It's a real thing, and you might have -a use for it. +Teams and projects that agree on guidelines for defining variables (where to define certain types of variables) usually avoid variable precedence concerns. We suggest you define each variable in one place: figure out where to define a variable, and keep it simple. However, this is not always possible. -If multiple variables of the same name are defined in different places, they get overridden in a certain order. +Ansible does apply variable precedence, and you might have a use for it. Here is the order of precedence from least to greatest (the last listed variables winning prioritization): -Here is the order of precedence from least to greatest (the last listed variables winning prioritization): - - #. command line values (eg "-u user") + #. command line values (for example, ``-u my_user``, these are not variables) #. role defaults (defined in role/defaults/main.yml) [1]_ #. inventory file or script group vars [2]_ #. inventory group_vars/all [3]_ @@ -1030,9 +304,11 @@ Here is the order of precedence from least to greatest (the last listed variable #. set_facts / registered vars #. role (and include_role) params #. include params - #. extra vars (always win precedence) + #. extra vars (for example, ``-e "user=my_user"``)(always win precedence) + +In general, Ansible gives higher precedence to variables that were defined more recently, more actively, and with more explicit scope. Variables in the the defaults folder inside a role are easily overridden. 
Anything in the vars directory of the role overrides previous versions of that variable in the namespace. Host and/or inventory variables override role defaults, but do not override explicit includes like the vars directory or an ``include_vars`` task. -Basically, anything that goes into "role defaults" (the defaults folder inside the role) is the most malleable and easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in namespace. The idea here to follow is that the more explicit you get in scope, the more precedence it takes, with command line ``-e`` extra vars always winning. Host and/or inventory variables can win over role defaults, but not explicit includes like the vars directory or an ``include_vars`` task. +Ansible merges different variables set in inventory so that more specific settings override more generic settings. For example, ``ansible_ssh_user`` specified as a group_var has a higher precedence than ``ansible_user`` specified as a host_var. See :ref:`how_we_merge` for more details on the precedence of variables set in inventory. .. rubric:: Footnotes @@ -1045,46 +321,7 @@ Basically, anything that goes into "role defaults" (the defaults folder inside t .. note:: Within any section, redefining a var will override the previous instance. If multiple groups have the same variable, the last one loaded wins. If you define a variable twice in a play's ``vars:`` section, the second one wins. -.. note:: The previous describes the default config ``hash_behaviour=replace``, switch to ``merge`` to only partially override. -.. note:: Group loading follows parent/child relationships. Groups of the same 'parent/child' level are then merged following alphabetical order. - This last one can be superseded by the user via ``ansible_group_priority``, which defaults to ``1`` for all groups. - This variable, ``ansible_group_priority``, can only be set in the inventory source and not in group_vars/ as the variable is used in the loading of group_vars/. - -Another important thing to consider (for all versions) is that connection variables override config, command line and play/role/task specific options and keywords. See :ref:`general_precedence_rules` for more details. For example, if your inventory specifies ``ansible_user: ramon`` and you run:: - - ansible -u lola myhost - -This will still connect as ``ramon`` because the value from the variable takes priority (in this case, the variable came from the inventory, but the same would be true no matter where the variable was defined). - -For plays/tasks this is also true for ``remote_user``. Assuming the same inventory config, the following play:: - - - hosts: myhost - tasks: - - command: I'll connect as ramon still - remote_user: lola - -will have the value of ``remote_user`` overridden by ``ansible_user`` in the inventory. - -This is done so host-specific settings can override the general settings. These variables are normally defined per host or group in inventory, -but they behave like other variables. - -If you want to override the remote user globally (even over inventory) you can use extra vars. For instance, if you run:: - - ansible... -e "ansible_user=maria" -u lola - -the ``lola`` value is still ignored, but ``ansible_user=maria`` takes precedence over all other places where ``ansible_user`` (or ``remote_user``) might be set. - -A connection-specific version of a variable takes precedence over more generic -versions. 
For example, ``ansible_ssh_user`` specified as a group_var would have -a higher precedence than ``ansible_user`` specified as a host_var. - -You can also override as a normal variable in a play:: - - - hosts: all - vars: - ansible_user: lola - tasks: - - command: I'll connect as lola! +.. note:: The previous describes the default config ``hash_behaviour=replace``, switch to ``merge`` to only partially overwrite. .. _variable_scopes: @@ -1097,80 +334,58 @@ You can decide where to set a variable based on the scope you want that value to * Play: each play and contained structures, vars entries (vars; vars_files; vars_prompt), role defaults and vars. * Host: variables directly associated to a host, like inventory, include_vars, facts or registered task outputs +Inside a template you automatically have access to all variables that are in scope for a host, plus any registered variables, facts, and magic variables. + .. _variable_examples: -Examples of where to set a variable ------------------------------------ +Tips on where to set variables +------------------------------ - Let's show some examples and where you would choose to put what based on the kind of control you might want over values. +You should choose where to define a variable based on the kind of control you might want over values. -First off, group variables are powerful. +Set variables in inventory that deal with geography or behavior. Since groups are frequently the entity that maps roles onto hosts, you can often set variables on the group instead of defining them on a role. Remember: Child groups override parent groups, and host variables override group variables. See :ref:`variables_in_inventory` for details on setting host and group variables. -Site-wide defaults should be defined as a ``group_vars/all`` setting. Group variables are generally placed alongside -your inventory file. They can also be returned by a dynamic inventory script (see :ref:`intro_dynamic_inventory`) or defined -in things like :ref:`ansible_tower` from the UI or API:: +Set common defaults in a ``group_vars/all`` file. See :ref:`splitting_out_vars` for details on how to organize host and group variables in your inventory. Group variables are generally placed alongside your inventory file, but they can also be returned by dynamic inventory (see :ref:`intro_dynamic_inventory`) or defined in :ref:`ansible_tower` from the UI or API:: --- # file: /etc/ansible/group_vars/all # this is the site wide default ntp_server: default-time.example.com -Regional information might be defined in a ``group_vars/region`` variable. If this group is a child of the ``all`` group (which it is, because all groups are), it will override the group that is higher up and more general:: +Set location-specific variables in ``group_vars/my_location`` files. All groups are children of the ``all`` group, so variables set here override those set in ``group_vars/all``:: --- # file: /etc/ansible/group_vars/boston ntp_server: boston-time.example.com -If for some crazy reason we wanted to tell just a specific host to use a specific NTP server, it would then override the group variable!:: +If one host used a different NTP server, you could set that in a host_vars file, which would override the group variable:: --- # file: /etc/ansible/host_vars/xyz.boston.example.com ntp_server: override.example.com -So that covers inventory and what you would normally set there. It's a great place for things that deal with geography or behavior. 
Since groups are frequently the entity that maps roles onto hosts, it is sometimes a shortcut to set variables on the group instead of defining them on a role. You could go either way. - -Remember: Child groups override parent groups, and hosts always override their groups. - -Next up: learning about role variable precedence. - -We'll pretty much assume you are using roles at this point. You should be using roles for sure. Roles are great. You are using -roles aren't you? Hint hint. - -If you are writing a redistributable role with reasonable defaults, put those in the ``roles/x/defaults/main.yml`` file. This means -the role will bring along a default value but ANYTHING in Ansible will override it. -See :ref:`playbooks_reuse_roles` for more info about this:: +Set defaults in roles to avoid undefined-variable errors. If you share your roles, other users can rely on the reasonable defaults you added in the ``roles/x/defaults/main.yml`` file, or they can easily override those values in inventory or at the command line. See :ref:`playbooks_reuse_roles` for more info. For example:: --- # file: roles/x/defaults/main.yml - # if not overridden in inventory or as a parameter, this is the value that will be used + # if no other value is supplied in inventory or as a parameter, this value will be used http_port: 80 -If you are writing a role and want to ensure the value in the role is absolutely used in that role, and is not going to be overridden -by inventory, you should put it in ``roles/x/vars/main.yml`` like so, and inventory values cannot override it. ``-e`` however, still will:: +Set variables in roles to ensure a value is used in that role, and is not overridden by inventory variables. If you are not sharing your role with others, you can define app-specific behaviors like ports this way, in ``roles/x/vars/main.yml``. If you are sharing roles with others, putting variables here makes them harder to override, although they still can by passing a parameter to the role or setting a variable with ``-e``:: --- # file: roles/x/vars/main.yml # this will absolutely be used in this role http_port: 80 -This is one way to plug in constants about the role that are always true. If you are not sharing your role with others, -app specific behaviors like ports is fine to put in here. But if you are sharing roles with others, putting variables in here might -be bad. Nobody will be able to override them with inventory, but they still can by passing a parameter to the role. - -Parameterized roles are useful. - -If you are using a role and want to override a default, pass it as a parameter to the role like so:: +Pass variables as parameters when you call roles for maximum clarity, flexibility, and visibility. This approach overrides any defaults that exist for a role. For example:: roles: - role: apache vars: http_port: 8080 -This makes it clear to the playbook reader that you've made a conscious choice to override some default in the role, or pass in some -configuration that the role can't assume by itself. It also allows you to pass something site-specific that isn't really part of the -role you are sharing with others. - -This can often be used for things that might apply to some hosts multiple times. For example:: +When you read this playbook it is clear that you have chosen to set a variable or override a default. You can also pass multiple values, which allows you to run the same role multiple times. See :ref:`run_role_twice` for more details. 
For example:: roles: - role: app_user @@ -1186,13 +401,7 @@ This can often be used for things that might apply to some hosts multiple times. vars: myname: John -In this example, the same role was invoked multiple times. It's quite likely there was -no default for ``myname`` supplied at all. Ansible can warn you when variables aren't defined -- it's the default behavior in fact. - -There are a few other things that go on with roles. - -Generally speaking, variables set in one role are available to others. This means if you have a ``roles/common/vars/main.yml`` you -can set variables in there and make use of them in other roles and elsewhere in your playbook:: +Variables set in one role are available to later roles. You can set variables in a ``roles/common_settings/vars/main.yml`` file and use them in other roles and elsewhere in your playbook:: roles: - role: common_settings @@ -1202,14 +411,9 @@ can set variables in there and make use of them in other roles and elsewhere in - role: something_else .. note:: There are some protections in place to avoid the need to namespace variables. - In the above, variables defined in common_settings are most definitely available to 'something' and 'something_else' tasks, but if - "something's" guaranteed to have foo set at 12, even if somewhere deep in common settings it set foo to 20. - -So, that's precedence, explained in a more direct way. Don't worry about precedence, just think about if your role is defining a -variable that is a default, or a "live" variable you definitely want to use. Inventory lies in precedence right in the middle, and -if you want to forcibly override something, use ``-e``. + In this example, variables defined in 'common_settings' are available to 'something' and 'something_else' tasks, but tasks in 'something' have foo set at 12, even if 'common_settings' sets foo to 20. -If you found that a little hard to understand, take a look at the `ansible-examples `_ repo on GitHub for a bit more about how all of these things can work together. +Instead of worrying about variable precedence, we encourage you to think about how easily or how often you want to override a variable when deciding where to set it. If you are not sure what other variables are defined, and you need a particular value, use ``--extra-vars`` (``-e``) to override all other variables. Using advanced variable syntax ============================== diff --git a/docs/docsite/rst/user_guide/playbooks_vars_facts.rst b/docs/docsite/rst/user_guide/playbooks_vars_facts.rst new file mode 100644 index 00000000000..08a8d902915 --- /dev/null +++ b/docs/docsite/rst/user_guide/playbooks_vars_facts.rst @@ -0,0 +1,663 @@ +.. _vars_and_facts: + +************************************************ +Discovering variables: facts and magic variables +************************************************ + +With Ansible you can retrieve or discover certain variables containing information about your remote systems or about Ansible itself. Variables related to remote systems are called facts. With facts, you can use the behavior or state of one system as configuration on other systems. For example, you can use the IP address of one system as a configuration value on another system. Variables related to Ansible are called magic variables. + +.. contents:: + :local: + +Ansible facts +============= + +Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more. You can access this data in the ``ansible_facts`` variable. 
By default, you can also access some Ansible facts as top-level variables with the ``ansible_`` prefix. You can disable this behavior using the :ref:`INJECT_FACTS_AS_VARS` setting. To see all available facts, add this task to a play:: + + - debug: var=ansible_facts + +To see the 'raw' information as gathered, run this command at the command line:: + + ansible -m setup + +Facts include a large amount of variable data, which may look like this on Ansible 2.7: + +.. code-block:: json + + { + "ansible_all_ipv4_addresses": [ + "REDACTED IP ADDRESS" + ], + "ansible_all_ipv6_addresses": [ + "REDACTED IPV6 ADDRESS" + ], + "ansible_apparmor": { + "status": "disabled" + }, + "ansible_architecture": "x86_64", + "ansible_bios_date": "11/28/2013", + "ansible_bios_version": "4.1.5", + "ansible_cmdline": { + "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-862.14.4.el7.x86_64", + "console": "ttyS0,115200", + "no_timer_check": true, + "nofb": true, + "nomodeset": true, + "ro": true, + "root": "LABEL=cloudimg-rootfs", + "vga": "normal" + }, + "ansible_date_time": { + "date": "2018-10-25", + "day": "25", + "epoch": "1540469324", + "hour": "12", + "iso8601": "2018-10-25T12:08:44Z", + "iso8601_basic": "20181025T120844109754", + "iso8601_basic_short": "20181025T120844", + "iso8601_micro": "2018-10-25T12:08:44.109968Z", + "minute": "08", + "month": "10", + "second": "44", + "time": "12:08:44", + "tz": "UTC", + "tz_offset": "+0000", + "weekday": "Thursday", + "weekday_number": "4", + "weeknumber": "43", + "year": "2018" + }, + "ansible_default_ipv4": { + "address": "REDACTED", + "alias": "eth0", + "broadcast": "REDACTED", + "gateway": "REDACTED", + "interface": "eth0", + "macaddress": "REDACTED", + "mtu": 1500, + "netmask": "255.255.255.0", + "network": "REDACTED", + "type": "ether" + }, + "ansible_default_ipv6": {}, + "ansible_device_links": { + "ids": {}, + "labels": { + "xvda1": [ + "cloudimg-rootfs" + ], + "xvdd": [ + "config-2" + ] + }, + "masters": {}, + "uuids": { + "xvda1": [ + "cac81d61-d0f8-4b47-84aa-b48798239164" + ], + "xvdd": [ + "2018-10-25-12-05-57-00" + ] + } + }, + "ansible_devices": { + "xvda": { + "holders": [], + "host": "", + "links": { + "ids": [], + "labels": [], + "masters": [], + "uuids": [] + }, + "model": null, + "partitions": { + "xvda1": { + "holders": [], + "links": { + "ids": [], + "labels": [ + "cloudimg-rootfs" + ], + "masters": [], + "uuids": [ + "cac81d61-d0f8-4b47-84aa-b48798239164" + ] + }, + "sectors": "83883999", + "sectorsize": 512, + "size": "40.00 GB", + "start": "2048", + "uuid": "cac81d61-d0f8-4b47-84aa-b48798239164" + } + }, + "removable": "0", + "rotational": "0", + "sas_address": null, + "sas_device_handle": null, + "scheduler_mode": "deadline", + "sectors": "83886080", + "sectorsize": "512", + "size": "40.00 GB", + "support_discard": "0", + "vendor": null, + "virtual": 1 + }, + "xvdd": { + "holders": [], + "host": "", + "links": { + "ids": [], + "labels": [ + "config-2" + ], + "masters": [], + "uuids": [ + "2018-10-25-12-05-57-00" + ] + }, + "model": null, + "partitions": {}, + "removable": "0", + "rotational": "0", + "sas_address": null, + "sas_device_handle": null, + "scheduler_mode": "deadline", + "sectors": "131072", + "sectorsize": "512", + "size": "64.00 MB", + "support_discard": "0", + "vendor": null, + "virtual": 1 + }, + "xvde": { + "holders": [], + "host": "", + "links": { + "ids": [], + "labels": [], + "masters": [], + "uuids": [] + }, + "model": null, + "partitions": { + "xvde1": { + "holders": [], + "links": { + "ids": [], + "labels": [], + "masters": [], + 
"uuids": [] + }, + "sectors": "167770112", + "sectorsize": 512, + "size": "80.00 GB", + "start": "2048", + "uuid": null + } + }, + "removable": "0", + "rotational": "0", + "sas_address": null, + "sas_device_handle": null, + "scheduler_mode": "deadline", + "sectors": "167772160", + "sectorsize": "512", + "size": "80.00 GB", + "support_discard": "0", + "vendor": null, + "virtual": 1 + } + }, + "ansible_distribution": "CentOS", + "ansible_distribution_file_parsed": true, + "ansible_distribution_file_path": "/etc/redhat-release", + "ansible_distribution_file_variety": "RedHat", + "ansible_distribution_major_version": "7", + "ansible_distribution_release": "Core", + "ansible_distribution_version": "7.5.1804", + "ansible_dns": { + "nameservers": [ + "127.0.0.1" + ] + }, + "ansible_domain": "", + "ansible_effective_group_id": 1000, + "ansible_effective_user_id": 1000, + "ansible_env": { + "HOME": "/home/zuul", + "LANG": "en_US.UTF-8", + "LESSOPEN": "||/usr/bin/lesspipe.sh %s", + "LOGNAME": "zuul", + "MAIL": "/var/mail/zuul", + "PATH": "/usr/local/bin:/usr/bin", + "PWD": "/home/zuul", + "SELINUX_LEVEL_REQUESTED": "", + "SELINUX_ROLE_REQUESTED": "", + "SELINUX_USE_CURRENT_RANGE": "", + "SHELL": "/bin/bash", + "SHLVL": "2", + "SSH_CLIENT": "REDACTED 55672 22", + "SSH_CONNECTION": "REDACTED 55672 REDACTED 22", + "USER": "zuul", + "XDG_RUNTIME_DIR": "/run/user/1000", + "XDG_SESSION_ID": "1", + "_": "/usr/bin/python2" + }, + "ansible_eth0": { + "active": true, + "device": "eth0", + "ipv4": { + "address": "REDACTED", + "broadcast": "REDACTED", + "netmask": "255.255.255.0", + "network": "REDACTED" + }, + "ipv6": [ + { + "address": "REDACTED", + "prefix": "64", + "scope": "link" + } + ], + "macaddress": "REDACTED", + "module": "xen_netfront", + "mtu": 1500, + "pciid": "vif-0", + "promisc": false, + "type": "ether" + }, + "ansible_eth1": { + "active": true, + "device": "eth1", + "ipv4": { + "address": "REDACTED", + "broadcast": "REDACTED", + "netmask": "255.255.224.0", + "network": "REDACTED" + }, + "ipv6": [ + { + "address": "REDACTED", + "prefix": "64", + "scope": "link" + } + ], + "macaddress": "REDACTED", + "module": "xen_netfront", + "mtu": 1500, + "pciid": "vif-1", + "promisc": false, + "type": "ether" + }, + "ansible_fips": false, + "ansible_form_factor": "Other", + "ansible_fqdn": "centos-7-rax-dfw-0003427354", + "ansible_hostname": "centos-7-rax-dfw-0003427354", + "ansible_interfaces": [ + "lo", + "eth1", + "eth0" + ], + "ansible_is_chroot": false, + "ansible_kernel": "3.10.0-862.14.4.el7.x86_64", + "ansible_lo": { + "active": true, + "device": "lo", + "ipv4": { + "address": "127.0.0.1", + "broadcast": "host", + "netmask": "255.0.0.0", + "network": "127.0.0.0" + }, + "ipv6": [ + { + "address": "::1", + "prefix": "128", + "scope": "host" + } + ], + "mtu": 65536, + "promisc": false, + "type": "loopback" + }, + "ansible_local": {}, + "ansible_lsb": { + "codename": "Core", + "description": "CentOS Linux release 7.5.1804 (Core)", + "id": "CentOS", + "major_release": "7", + "release": "7.5.1804" + }, + "ansible_machine": "x86_64", + "ansible_machine_id": "2db133253c984c82aef2fafcce6f2bed", + "ansible_memfree_mb": 7709, + "ansible_memory_mb": { + "nocache": { + "free": 7804, + "used": 173 + }, + "real": { + "free": 7709, + "total": 7977, + "used": 268 + }, + "swap": { + "cached": 0, + "free": 0, + "total": 0, + "used": 0 + } + }, + "ansible_memtotal_mb": 7977, + "ansible_mounts": [ + { + "block_available": 7220998, + "block_size": 4096, + "block_total": 9817227, + "block_used": 2596229, + "device": 
"/dev/xvda1", + "fstype": "ext4", + "inode_available": 10052341, + "inode_total": 10419200, + "inode_used": 366859, + "mount": "/", + "options": "rw,seclabel,relatime,data=ordered", + "size_available": 29577207808, + "size_total": 40211361792, + "uuid": "cac81d61-d0f8-4b47-84aa-b48798239164" + }, + { + "block_available": 0, + "block_size": 2048, + "block_total": 252, + "block_used": 252, + "device": "/dev/xvdd", + "fstype": "iso9660", + "inode_available": 0, + "inode_total": 0, + "inode_used": 0, + "mount": "/mnt/config", + "options": "ro,relatime,mode=0700", + "size_available": 0, + "size_total": 516096, + "uuid": "2018-10-25-12-05-57-00" + } + ], + "ansible_nodename": "centos-7-rax-dfw-0003427354", + "ansible_os_family": "RedHat", + "ansible_pkg_mgr": "yum", + "ansible_processor": [ + "0", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", + "1", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", + "2", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", + "3", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", + "4", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", + "5", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", + "6", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz", + "7", + "GenuineIntel", + "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz" + ], + "ansible_processor_cores": 8, + "ansible_processor_count": 8, + "ansible_processor_nproc": 8, + "ansible_processor_threads_per_core": 1, + "ansible_processor_vcpus": 8, + "ansible_product_name": "HVM domU", + "ansible_product_serial": "REDACTED", + "ansible_product_uuid": "REDACTED", + "ansible_product_version": "4.1.5", + "ansible_python": { + "executable": "/usr/bin/python2", + "has_sslcontext": true, + "type": "CPython", + "version": { + "major": 2, + "micro": 5, + "minor": 7, + "releaselevel": "final", + "serial": 0 + }, + "version_info": [ + 2, + 7, + 5, + "final", + 0 + ] + }, + "ansible_python_version": "2.7.5", + "ansible_real_group_id": 1000, + "ansible_real_user_id": 1000, + "ansible_selinux": { + "config_mode": "enforcing", + "mode": "enforcing", + "policyvers": 31, + "status": "enabled", + "type": "targeted" + }, + "ansible_selinux_python_present": true, + "ansible_service_mgr": "systemd", + "ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE", + "ansible_ssh_host_key_ed25519_public": "REDACTED KEY VALUE", + "ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE", + "ansible_swapfree_mb": 0, + "ansible_swaptotal_mb": 0, + "ansible_system": "Linux", + "ansible_system_capabilities": [ + "" + ], + "ansible_system_capabilities_enforced": "True", + "ansible_system_vendor": "Xen", + "ansible_uptime_seconds": 151, + "ansible_user_dir": "/home/zuul", + "ansible_user_gecos": "", + "ansible_user_gid": 1000, + "ansible_user_id": "zuul", + "ansible_user_shell": "/bin/bash", + "ansible_user_uid": 1000, + "ansible_userspace_architecture": "x86_64", + "ansible_userspace_bits": "64", + "ansible_virtualization_role": "guest", + "ansible_virtualization_type": "xen", + "gather_subset": [ + "all" + ], + "module_setup": true + } + +You can reference the model of the first disk in the facts shown above in a template or playbook as:: + + {{ ansible_facts['devices']['xvda']['model'] }} + +To reference the system hostname:: + + {{ ansible_facts['nodename'] }} + +You can use facts in conditionals (see :ref:`playbooks_conditionals`) and also in templates. 
+
+.. _fact_caching:
+
+Caching facts
+-------------
+
+Like registered variables, facts are stored in memory by default. However, unlike registered variables, facts can be gathered independently and cached for repeated use. With cached facts, you can refer to facts from one system when configuring a second system, even if Ansible executes the current play on the second system first. For example::
+
+    {{ hostvars['asdf.example.com']['ansible_facts']['os_family'] }}
+
+Caching is controlled by the cache plugins. By default, Ansible uses the memory cache plugin, which stores facts in memory for the duration of the current playbook run. To retain Ansible facts for repeated use, select a different cache plugin. See :ref:`cache_plugins` for details.
+
+Fact caching can improve performance. If you manage thousands of hosts, you can configure fact caching to run nightly, then manage configuration on a smaller set of servers periodically throughout the day. With cached facts, you have access to variables and information about all hosts even when you are only managing a small number of servers.
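+
+For example, to keep gathered facts on disk between runs with the ``jsonfile`` cache plugin, an ``ansible.cfg`` sketch might look like this (the cache path and timeout are illustrative)::
+
+    [defaults]
+    gathering = smart
+    fact_caching = jsonfile
+    fact_caching_connection = /tmp/ansible_fact_cache
+    fact_caching_timeout = 86400
+
+With ``gathering = smart``, Ansible skips fact gathering for hosts that already have cached facts that have not expired.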
+
+.. _disabling_facts:
+
+Disabling facts
+---------------
+
+By default, Ansible gathers facts at the beginning of each play. If you do not need to gather facts (for example, if you already know everything about your systems centrally), you can turn off fact gathering at the play level to improve scalability. Disabling facts may particularly improve performance in push mode with very large numbers of systems, or if you are using Ansible on experimental platforms. To disable fact gathering::
+
+    - hosts: whatever
+      gather_facts: no
+
+Adding custom facts
+-------------------
+
+The setup module in Ansible automatically discovers a standard set of facts about each host. If you want to add custom values to your facts, you can write a custom facts module, set temporary facts with a ``set_fact`` task, or provide permanent custom facts using the facts.d directory.
+
+.. _local_facts:
+
+facts.d or local facts
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. versionadded:: 1.3
+
+You can add static custom facts by adding static files to facts.d, or add dynamic facts by adding executable scripts to facts.d. For example, you can add a list of all users on a host to your facts by creating and running a script in facts.d.
+
+To use facts.d, create an ``/etc/ansible/facts.d`` directory on the remote host or hosts. If you prefer a different directory, create it and specify it using the ``fact_path`` play keyword. Add files to the directory to supply your custom facts. All file names must end with ``.fact``. The files can be JSON, INI, or executable files returning JSON.
+
+To add static facts, simply add a file with the ``.fact`` extension. For example, create ``/etc/ansible/facts.d/preferences.fact`` with this content::
+
+    [general]
+    asdf=1
+    bar=2
+
+The next time fact gathering runs, your facts will include a hash variable fact named ``general`` with ``asdf`` and ``bar`` as members. To validate this, run the following::
+
+    ansible <hostname> -m setup -a "filter=ansible_local"
+
+And you will see your custom fact added::
+
+    "ansible_local": {
+        "preferences": {
+            "general": {
+                "asdf" : "1",
+                "bar" : "2"
+            }
+        }
+    }
+
+The ansible_local namespace separates custom facts created by facts.d from system facts or variables defined elsewhere in the playbook, so variables will not override each other. You can access this custom fact in a template or playbook as::
+
+    {{ ansible_local['preferences']['general']['asdf'] }}
+
+.. note:: The key part in the key=value pairs will be converted into lowercase inside the ansible_local variable. Using the example above, if the ini file contained ``XYZ=3`` in the ``[general]`` section, then you should expect to access it as: ``{{ ansible_local['preferences']['general']['xyz'] }}`` and not ``{{ ansible_local['preferences']['general']['XYZ'] }}``. This is because Ansible uses Python's `ConfigParser`_, which passes all option names through the `optionxform`_ method, and this method's default implementation converts option names to lower case.
+
+.. _ConfigParser: https://docs.python.org/2/library/configparser.html
+.. _optionxform: https://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser.optionxform
+
+You can also use facts.d to execute a script on the remote host, adding dynamic custom facts to the ansible_local namespace. For example, you can generate a list of all users that exist on a remote host as a fact about that host. To generate dynamic custom facts using facts.d (a minimal script sketch follows this list):
+
+#. Write and test a script to generate the JSON data you want.
+#. Save the script in your facts.d directory.
+#. Make sure your script has the ``.fact`` file extension.
+#. Make sure your script is executable by the Ansible connection user.
+#. Gather facts to execute the script and add the JSON output to ansible_local.
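+
+As an illustration only, a minimal executable fact file (hypothetical name ``/etc/ansible/facts.d/users.fact``, written here in Python, though any executable that prints JSON works) could look like this::
+
+    #!/usr/bin/env python
+    # Hypothetical facts.d script: must be executable by the connection user.
+    # Whatever JSON it prints becomes ansible_local['users'] on this host.
+    import json
+    import pwd
+
+    print(json.dumps({"login_names": sorted(p.pw_name for p in pwd.getpwall())}))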
+
+By default, fact gathering runs once at the beginning of each play. If you create a custom fact using facts.d in a playbook, it will be available in the next play that gathers facts. If you want to use it in the same play where you created it, you must explicitly re-run the setup module. For example::
+
+    - hosts: webservers
+      tasks:
+
+        - name: create directory for ansible custom facts
+          file: state=directory recurse=yes path=/etc/ansible/facts.d
+
+        - name: install custom ipmi fact
+          copy: src=ipmi.fact dest=/etc/ansible/facts.d
+
+        - name: re-read facts after adding custom fact
+          setup: filter=ansible_local
+
+If you use this pattern frequently, a custom facts module would be more efficient than facts.d.
+
+.. _magic_variables_and_hostvars:
+
+Information about Ansible: magic variables
+==========================================
+
+You can access information about Ansible operations, including the python version being used, the hosts and groups in inventory, and the directories for playbooks and roles, using "magic" variables. Like connection variables, magic variables are :ref:`special_variables`. Magic variable names are reserved - do not set variables with these names. The variable ``environment`` is also reserved.
+
+The most commonly used magic variables are ``hostvars``, ``groups``, ``group_names``, and ``inventory_hostname``. With ``hostvars``, you can access variables defined for any host in the play, at any point in a playbook. You can access Ansible facts using the ``hostvars`` variable too, but only after you have gathered (or cached) facts.
+
+If you want to configure your database server using the value of a 'fact' from another node, or the value of an inventory variable assigned to another node, you can use ``hostvars`` in a template or on an action line::
+
+    {{ hostvars['test.example.com']['ansible_facts']['distribution'] }}
+
+With ``groups``, a list of all the groups (and hosts) in the inventory, you can enumerate all hosts within a group. For example:
+
+.. code-block:: jinja
+
+    {% for host in groups['app_servers'] %}
+        # something that applies to all app servers.
+    {% endfor %}
+
+You can use ``groups`` and ``hostvars`` together to find all the IP addresses in a group.
+
+.. code-block:: jinja
+
+    {% for host in groups['app_servers'] %}
+        {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}
+    {% endfor %}
+
+You can use this approach to point a frontend proxy server to all the hosts in your app servers group, to set up the correct firewall rules between servers, and so on. You must either cache facts or gather facts for those hosts before the task that fills out the template.
+
+With ``group_names``, a list (array) of all the groups the current host is in, you can create templated files that vary based on the group membership (or role) of the host:
+
+.. code-block:: jinja
+
+    {% if 'webserver' in group_names %}
+        # some part of a configuration file that only applies to webservers
+    {% endif %}
+
+You can use the magic variable ``inventory_hostname``, the name of the host as configured in your inventory, as an alternative to ``ansible_hostname`` when fact-gathering is disabled. If you have a long FQDN, you can use ``inventory_hostname_short``, which contains the part up to the first period, without the rest of the domain.
+
+Other useful magic variables refer to the current play or playbook. These variables may be useful for filling out templates with multiple hostnames or for injecting the list into the rules for a load balancer.
+
+``ansible_play_hosts`` is the list of all hosts still active in the current play.
+
+``ansible_play_batch`` is a list of hostnames that are in scope for the current 'batch' of the play.
+
+The batch size is defined by ``serial``; when ``serial`` is not set, the batch is equivalent to the whole play (making ``ansible_play_batch`` the same as ``ansible_play_hosts``).
+
+``ansible_playbook_python`` is the path to the python executable used to invoke the Ansible command line tool.
+
+``inventory_dir`` is the pathname of the directory holding Ansible's inventory host file.
+
+``inventory_file`` is the pathname and the filename pointing to Ansible's inventory host file.
+
+``playbook_dir`` contains the playbook base directory.
+
+``role_path`` contains the current role's pathname and only works inside a role.
+
+``ansible_check_mode`` is a boolean, set to ``True`` if you run Ansible with ``--check``.
+
+.. _ansible_version:
+
+Ansible version
+---------------
+
+.. versionadded:: 1.8
+
+To adapt playbook behavior to different versions of Ansible, you can use the variable ``ansible_version``, which has the following structure::
+
+    "ansible_version": {
+        "full": "2.0.0.2",
+        "major": 2,
+        "minor": 0,
+        "revision": 0,
+        "string": "2.0.0.2"
+    }
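+
+For example, a task sketch that is skipped on older control nodes (the version number below is illustrative)::
+
+    - name: use a feature that needs a newer Ansible
+      debug:
+        msg: "Running with Ansible {{ ansible_version.full }}"
+      when: ansible_version.full is version('2.8.0', '>=')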