Let's say we want to test the ``postgresql_user`` module invoked with the ``name`` parameter:

.. code-block:: yaml

    - name: Create PostgreSQL user and store module's output to the result variable
      community.postgresql.postgresql_user:
        name: test_user
      register: result

    - name: Check the module returns what we expect
      assert:
        that:
          - result is changed

    - name: Check actual system state with another module, in other words, that the user exists
      community.postgresql.postgresql_query:
        query: SELECT * FROM pg_authid WHERE rolname = 'test_user'
      register: query_result
To check a task:

2. If the module changes the system state, check the actual system state using at least one other module. For example, if the module changes a file, we can check that the file has been changed by checking its checksum with the :ref:`stat <ansible_collections.ansible.builtin.stat_module>` module before and after the test tasks.
3. Run the same task with ``check_mode: true`` if check mode is supported by the module. Check with other modules that the actual system state has not been changed.
4. Cover cases when the module must fail. Use the ``ignore_errors: true`` option and check the returned message with the ``assert`` module.

Example:
.. code-block:: yaml

    - name: Task to fail
      abstract_module:
        ...
      register: result
      ignore_errors: true

    - name: Check the task fails and its error message
      assert:
        that:
          - result is failed
          - result.msg == 'Message we expect'

We will add the following code to the file.

.. code-block:: yaml

    # We should also run the same tasks with check_mode: true. We omit it here for simplicity.
    - name: Test for new_option, create new user WITHOUT the attribute
      community.postgresql.postgresql_user:
        name: test_user
        add_attribute: false
      register: result

    - name: Check the module returns what we expect
      assert:
        that:
          - result is changed

    - name: Query the database if the user exists but does not have the attribute (it is NULL)
      community.postgresql.postgresql_query:
        query: SELECT * FROM pg_authid WHERE rolname = 'test_user' AND attribute IS NULL
      register: result
    - name: We expect one row in the query result
      assert:
        that:
          - result.rowcount == 1

    - name: Test for new_option, create new user WITH the attribute
      community.postgresql.postgresql_user:
        name: test_user
        add_attribute: true
      register: result

    - name: Check the module returns what we expect
      assert:
        that:
          - result is changed

    - name: Query the database if the user has the attribute (it is TRUE)
      community.postgresql.postgresql_query:
        query: SELECT * FROM pg_authid WHERE rolname = 'test_user' AND attribute = 't'
      register: result
In reality, we would alternate the tasks above with the same tasks run with the ``check_mode: true`` option to be sure our option works as expected in check mode as well. See :ref:`Recommendations on coverage<collection_integration_recommendations>` for details.
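For instance, a check-mode counterpart of one of the tasks above might look like the following sketch; ``add_attribute`` is the same hypothetical option used throughout this example:

.. code-block:: yaml

    - name: Test for new_option in check mode
      community.postgresql.postgresql_user:
        name: test_user
        add_attribute: true
      register: result
      check_mode: true

    - name: Check the module reports a change without applying it
      assert:
        that:
          - result is changed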
If we expect a task to fail, we use the ``ignore_errors: true`` option and check that the task actually failed and returned the message we expect:

.. code-block:: yaml

    - name: Test for fail_when_true option
      community.postgresql.postgresql_user:
        name: test_user
        fail_when_true: true
      register: result
      ignore_errors: true

    - name: Check the module fails and returns the message we expect
Several commonly-used utilities migrated to collections in Ansible 2.10, including:

- ``ismount.py`` migrated to ``ansible.posix.plugins.module_utils.mount.py`` - single helper function that fixes ``os.path.ismount``
- ``known_hosts.py`` migrated to ``community.general.plugins.module_utils.known_hosts.py`` - utilities for working with the ``known_hosts`` file

For a list of migrated content with destination collections, see the `runtime.yml file <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_.
invalid-removal-version Documentation Error The version at which a feature is supposed to be removed cannot be parsed (for collections, it must be a `semantic version <https://semver.org/>`_)
invalid-requires-extension Naming Error Module ``#AnsibleRequires -CSharpUtil`` should not end in .cs, Module ``#Requires`` should not end in .psm1
missing-doc-fragment Documentation Error ``DOCUMENTATION`` fragment missing
missing-existing-doc-fragment Documentation Warning Pre-existing ``DOCUMENTATION`` fragment missing
If the log reports the port as ``None``, this means that the default port is being used.
A future Ansible release will improve this message so that the port is always logged.

Because the log files are verbose, you can use grep to look for specific information. For example, once you have identified the ``pid`` from the ``creating new control socket for host`` line, you can search for other connection log entries:

.. code:: shell

    grep "p=28990" $ANSIBLE_LOG_PATH
For Ansible, this can be done by ensuring you are only running against one remote device:

* Using ``ansible-playbook --limit switch1.example.net...``
* Using an ad hoc ``ansible`` command

`ad hoc` refers to running Ansible to perform some quick command using ``/usr/bin/ansible``, rather than the orchestration language, which is ``/usr/bin/ansible-playbook``. In this case, we can ensure connectivity by attempting to execute a single command on the remote device.
.. note:: For a full list of format codes for working with Python date format strings, see the `Python datetime documentation <https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior>`_.

Lookups are an integral part of loops. Wherever you see ``with_``, the part after the underscore is the name of a lookup. For this reason, lookups are expected to output lists; for example, ``with_items`` uses the :ref:`items <items_lookup>` lookup:

.. code-block:: YAML+Jinja

    tasks:
      - name: count to 3
        debug: msg={{ item }}
        with_items: [1, 2, 3]

You can combine lookups with :ref:`filters <playbooks_filters>`, :ref:`tests <playbooks_tests>` and even each other to do some complex data generation and manipulation. For example:

.. code-block:: YAML+Jinja

    tasks:
      - name: valid but useless and over complicated chained lookups and filters
        # any chain of lookups and filters works; this pair is deliberately pointless
        debug: msg="{{ lookup('env', 'HOME') | upper }} and {{ lookup('pipe', 'date') | trim }}"
You can control how errors behave in all lookup plugins by setting ``errors`` to ``ignore``, ``warn``, or ``strict``. The default setting is ``strict``, which causes the task to fail if the lookup returns an error.

To ignore lookup errors:

.. code-block:: YAML+Jinja

    - name: if this file does not exist, I do not care .. file plugin itself warns anyway ...
      debug: msg="{{ lookup('file', '/nosuchfile', errors='ignore') }}"
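To get a warning instead of a failure, the same pattern applies; a sketch reusing the ``file`` lookup from above:

.. code-block:: YAML+Jinja

    - name: warn about the missing file instead of failing the task
      debug: msg="{{ lookup('file', '/nosuchfile', errors='warn') }}"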
There's another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) can optionally begin with ``---`` and end with ``...``. This is part of the YAML format and indicates the start and end of a document.

All members of a list are lines beginning at the same indentation level starting with a ``"- "`` (a dash and a space):

.. code:: yaml

    ---
    # A list of tasty fruits
    - Apple
    - Orange
    - Strawberry
    - Mango
    ...
A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space):

.. code:: yaml

    # An employee record
    martin:
      name: Martin D'vloper
      job: Developer
      skill: Elite
More complicated data structures are possible, such as lists of dictionaries, dictionaries whose values are lists, or a mix of both:

.. code:: yaml

    # Employee records
    - martin:
        name: Martin D'vloper
        job: Developer
        skills:
          - python
          - perl
          - pascal
    - tabitha:
        name: Tabitha Bitumen
        job: Developer
        skills:
          - lisp
          - fortran
          - erlang
Dictionaries and lists can also be represented in an abbreviated form if you really want to:

.. code:: yaml

    ---
    martin: {name: Martin D'vloper, job: Developer, skill: Elite}
    fruits: ['Apple', 'Orange', 'Strawberry', 'Mango']

These are called "Flow collections".
.. _truthiness:

Ansible doesn't really use these too much, but you can also specify a :ref:`boolean value <playbooks_variables>` (true/false) in several forms:

.. code:: yaml

    create_key: true
    needs_agent: false
    knows_oop: True
    likes_emacs: TRUE
    uses_cvs: false

Use lowercase 'true' or 'false' for boolean values in dictionaries if you want to be compatible with default yamllint options.
Values can span multiple lines using ``|`` or ``>``. Spanning multiple lines using a "Literal Block Scalar" ``|`` will include the newlines and any trailing spaces.
Using a "Folded Block Scalar" ``>`` will fold newlines to spaces; it's used to make what would otherwise be a very long line easier to read and edit.
In either case the indentation will be ignored.
Examples are:

.. code:: yaml

    include_newlines: |
                exactly as you see
                will appear these three
                lines of poetry

    fold_newlines: >
                this is really a
                single line of text
                despite appearances
While in the above ``>`` example all newlines are folded into spaces, there are two ways to enforce a newline to be kept:

.. code:: yaml

    fold_some_newlines: >
        a
        b

        c
        d
         e
        f

Alternatively, it can be enforced by including newline ``\n`` characters:

.. code:: yaml

    fold_same_newlines: "a b\nc d\n e\nf\n"
Let's combine what we learned so far in an arbitrary YAML example. This really has nothing to do with Ansible, but will give you a feel for the format:

.. code:: yaml

    ---
    # An employee record
    name: Martin D'vloper
    job: Developer
    skill: Elite
    employed: true
    foods:
      - Apple
      - Orange
      - Strawberry
      - Mango
    languages:
      perl: Elite
      python: Elite
      pascal: Lame
    education: |
      4 GCSEs
      3 A-Levels
      BSc in the Internet of Things
While you can put just about anything into an unquoted scalar, there are some exceptions.
A colon followed by a space (or newline) ``": "`` is an indicator for a mapping.
A space followed by the pound sign ``" #"`` starts a comment.

Because of this, the following is going to result in a YAML syntax error:

.. code:: text

    foo: somebody said I should put a colon here: so I did
    windows_drive: c:

...but this will work:

.. code:: yaml

    windows_path: c:\windows

You will want to quote hash values using colons followed by a space or the end of the line:

.. code:: yaml

    foo: 'somebody said I should put a colon here: so I did'
    windows_drive: 'c:'
...and then the colon will be preserved.

Alternatively, you can use double quotes:

.. code:: yaml

    foo: "somebody said I should put a colon here: so I did"
    windows_drive: "c:"

The difference between single quotes and double quotes is that in double quotes you can use escapes:

.. code:: yaml

    foo: "a \t TAB and a \n NEWLINE"
Further, Ansible uses "{{ var }}" for variables. If a value after a colon starts with a "{", YAML will think it is a dictionary, so you must quote it, like so:

.. code:: yaml

    foo: "{{ variable }}"

If your value starts with a quote the entire value must be quoted, not just part of it. Here are some additional examples of how to properly quote things:
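A sketch of values that need full quoting (the variable name is illustrative):

.. code:: yaml

    foo: "{{ variable }}/additional/string/literal"
    foo2: "{{ variable }}\\backslashes\\are\\also\\special\\characters"
    foo3: "even if it's just a string literal it must all be quoted"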
There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target.

To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.

* When ``pipelining = False`` in `/etc/ansible/ansible.cfg`, Ansible modules are transferred in binary mode through sftp, and execution of Python fails with:

  .. error::

      SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details

  To fix this, set the path to the python installation in your inventory like so:
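  A minimal sketch in YAML inventory format, assuming a hypothetical install path - point it at your actual Python build:

  .. code-block:: yaml

      zos:
        hosts:
          zos1:
            ansible_python_interpreter: /usr/lpp/python/bin/python  # hypothetical path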
* Start of python fails with ``The module libpython2.7.so was not found.``
  .. error::

      EE3501S The module libpython2.7.so was not found.

  On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``:
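  A sketch, assuming bash provides its binary at the path below:

  .. code-block:: yaml

      zos:
        hosts:
          zos1:
            ansible_shell_executable: /usr/lpp/bash/bin/bash  # hypothetical binary location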
It is known that it will not correctly expand the default tmp directory Ansible uses (``~/.ansible/tmp``).
If you see module failures, this is likely the problem.
The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see the documentation of the shell plugin you are using for specifics).

For example, in the ansible config file (or through an environment variable) you can set:

.. code-block:: ini

    remote_tmp=$HOME/.ansible/tmp
To do this, you can just access the "$groups" dictionary in your template, like this:

.. code-block:: yaml+jinja

    {% for host in groups['db_servers'] %}
       {{ host }}
    {% endfor %}

If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:

.. code-block:: yaml

    - hosts: db_servers
      tasks:
        - debug:
            msg: doesn't matter what you do, just that they were talked to previously.
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++

An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied through a role parameter or other input. Variable names can be built by adding strings together using "~", like so:
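A sketch, assuming a ``which_interface`` variable holds the interface name:

.. code-block:: yaml+jinja

    {{ hostvars[inventory_hostname]['ansible_' ~ which_interface]['ipv4']['address'] }}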
How do I access shell environment variables?
++++++++++++++++++++++++++++++++++++++++++++

**On controller machine:** Access existing variables from the controller using the ``env`` lookup plugin.
For example, to access the value of the HOME environment variable on the management machine:

.. code-block:: yaml+jinja

    ---
    # ...
    vars:
      local_home: "{{ lookup('env','HOME') }}"

**On target machines:** Environment variables are available through facts in the ``ansible_env`` variable:

.. code-block:: jinja

    {{ ansible_env.SOME_VARIABLE }}
When is it unsafe to bulk-set task arguments from a variable?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

You can set all of a task's arguments from a dictionary-typed variable. This technique can be useful in some dynamic execution scenarios. However, it introduces a security risk. We do not recommend it, so Ansible issues a warning when you do something like this:

.. code-block:: yaml+jinja

    #...
    vars:
      usermod_args:
        name: testuser
        state: present
        update_password: always
    tasks:
      - user: '{{ usermod_args }}'
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++

If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.

If you have a task that you don't want to show the results or command given to it when using ``-v`` (verbose) mode, the following task or playbook attribute can be useful:
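A sketch using the ``no_log`` attribute; the command and variable are illustrative:

.. code-block:: yaml+jinja

    - name: secret task
      shell: /usr/bin/do_something --value={{ secret_value }}
      no_log: true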
You can override all other settings from all other sources in all other precedence categories at the command line by :ref:`general_precedence_extra_vars`, but that is not a command-line option; it is a way of passing a :ref:`variable<general_precedence_variables>`.

At the command line, if you pass multiple values for a parameter that accepts only a single value, the last defined value wins. For example, this :ref:`ad hoc task<intro_adhoc>` will connect as ``carol``, not as ``mike``:

.. code:: shell

    ansible -u mike -m ping myhost -u carol

Some parameters allow multiple values. In this case, Ansible will append all values from the hosts listed in inventory files inventory1 and inventory2:

.. code:: shell

    ansible -i /path/inventory1 -i /path/inventory2 -m ping all
Within playbook keywords, precedence flows with the playbook itself; the more specific wins over the more general:

- play (most general)
- blocks/includes/imports/roles (optional and can contain tasks and each other)
- tasks (most specific)

A simple example:

.. code:: yaml

    - hosts: all
      connection: ssh
      tasks:
        # sketch: a task-level keyword overrides the play-level one
        - name: This task uses the play-level ssh connection
          ping:

        - name: This task overrides the connection at the task level
          connection: paramiko
          ping:
Connection variables, like all variables, can be set in multiple ways and places. You can define variables for hosts and groups in :ref:`inventory<intro_inventory>`. You can define variables for tasks and plays in ``vars:`` blocks in :ref:`playbooks<about_playbooks>`. However, they are still variables - they are data, not keywords or configuration settings. Variables that override playbook keywords, command-line options, and configuration settings follow the same rules of :ref:`variable precedence <ansible_variable_precedence>` as any other variables.

When set in a playbook, variables follow the same inheritance rules as playbook keywords. You can set a value for the play, then override it in a task, block, or role:

.. code:: yaml

    - hosts: cloud
      gather_facts: false
      vars:
        ansible_user: admin   # play-level value (sketch)
      tasks:
        - name: This task connects as the play-level user
          ping:

        - name: This task overrides ansible_user at the task level
          vars:
            ansible_user: maintenance
          ping:
Variable scope: how long is a value available?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Variable values set in a playbook exist only within the playbook object that defines them. These 'playbook object scope' variables are not available to subsequent objects, including other plays.

Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like :ref:`set_fact<set_fact_module>` and :ref:`include_vars<include_vars_module>`, are available to all plays. These 'host scope' variables are also available through the ``hostvars[]`` dictionary.
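For instance, a task in a later play can read a host-scope value set earlier in the run; a sketch with an illustrative host name and fact:

.. code:: yaml

    - name: read a fact that was set for another host earlier in the run
      debug:
        msg: "{{ hostvars['db1.example.net']['db_port'] | default('unset') }}"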
.. _general_precedence_extra_vars:

Using ``-e`` extra variables at the command line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To override all other settings in all other categories, you can use extra variables: ``--extra-vars`` or ``-e`` at the command line. Values passed with ``-e`` are variables, not command-line options, and they will override configuration settings, command-line options, and playbook keywords as well as variables set elsewhere. For example, this task will connect as ``brian``, not as ``carol``:

.. code:: shell

    ansible -u carol -e 'ansible_user=brian' -a whoami all
If running a deployment playbook against an existing system, using the ``--check`` flag to the ``ansible`` command will report if Ansible thinks it would have had to have made any changes to bring the system into a desired state.

This can let you know up front if there is any need to deploy onto the given system. Ordinarily, scripts and commands don't run in check mode, so if you want certain steps to execute in normal mode even when the ``--check`` flag is used, such as calls to the script module, disable check mode for those tasks:

.. code:: yaml

    roles:
      - webserver

    tasks:
      # hypothetical script name
      - name: this script runs even in check mode
        script: verify.sh
        check_mode: false
Modules That Are Useful for Testing
```````````````````````````````````

Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open:

.. code:: yaml

    tasks:
      - wait_for:
          host: "{{ inventory_hostname }}"
          port: 22
        delegate_to: localhost
Here's an example of using the URI module to make sure a web service returns:

.. code:: yaml

    tasks:
      - uri:
          url: https://www.example.com
          return_content: true
        register: webpage

      - fail:
          msg: 'service is not happy'
        when: "'AWESOME' not in webpage.content"
It's easy to push an arbitrary script (in any language) to a remote host and the script will automatically fail if it has a non-zero return code:

.. code:: yaml

    tasks:
      # hypothetical script name; see the note about roles below
      - script: test_script.sh

If using roles (you should be, roles are great!), scripts pushed by the script module can live in the 'files/' directory of a role.

And the assert module makes it very easy to validate various kinds of truth:

.. code:: yaml

    tasks:
      - shell: /usr/bin/some-command --parameter value
        register: cmd_result

      - assert:
          that:
            - "'not ready' not in cmd_result.stderr"
            - "'gizmo enabled' in cmd_result.stdout"
Should you feel the need to test for the existence of files that are not declaratively set by your Ansible configuration, the 'stat' module is a great choice:

.. code:: yaml

    tasks:
      - stat:
          path: /path/to/something
        register: f

      - assert:
          that:
            - f.stat.exists and f.stat.is_dir
If writing some degree of basic validation of your application into your playbooks, they will run every time you deploy.

As such, deploying into a local development VM and a staging environment will both validate that things are according to plan ahead of your production deploy.

Your workflow may be something like this:

.. code:: text

    - Use the same playbook all the time with embedded tests in development
    - Use the playbook to deploy to a staging environment (with the same playbooks) that simulates production
If you have read into :ref:`playbooks_delegation` it may quickly become apparent that the rolling update pattern can be extended, and you can use the success or failure of the playbook run to decide whether to add a machine into a load balancer or not.

This is the great culmination of embedded tests:

.. code:: yaml

    ---
Read the delegation chapter about "max_fail_percentage" and you can also control how many failing tests will stop a rolling update from proceeding.

The above approach can also be modified to run a step from a testing machine remotely against a machine:
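A sketch of that pattern, with a hypothetical test host and validation script:

.. code:: yaml

    tasks:
      - name: run a validation step from a dedicated test machine
        command: /usr/local/bin/smoke_test.sh {{ inventory_hostname }}
        delegate_to: testhost.example.com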
All Alicloud modules require ``footmark`` - install it on your control machine with ``pip install footmark``.

Cloud modules, including Alicloud modules, execute on your local machine (the control machine) with ``connection: local``, rather than on remote machines defined in your hosts.

Normally, you'll use the following pattern for plays that provision Alicloud resources:

.. code-block:: yaml

    - hosts: localhost
      connection: local
Authentication
==============

You can specify your Alicloud authentication credentials (access key and secret key) by passing them as environment variables or by storing them in a vars file.

To pass authentication credentials as environment variables:

.. code-block:: shell

    export ALICLOUD_ACCESS_KEY='Alicloud123'
    export ALICLOUD_SECRET_KEY='AlicloudSecret123'
If there are 8 instances with that tag, the task terminates 3 of them.

If you do not specify a ``count_tag``, the task creates the number of instances you specify in ``count`` with the ``instance_name`` you provide.
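A sketch of that pattern; treat the parameter shapes as illustrative and check the ``ali_instance`` module documentation for exact spellings:

.. code-block:: yaml

    - name: ensure exactly 5 tagged instances exist (illustrative parameters)
      ali_instance:
        instance_name: my-app-node
        count: 5
        count_tag:
          created_by: ansible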
`Packet.net <https://packet.net>`_ is a bare metal infrastructure host that's supported by Ansible (>=2.3) through a dynamic inventory script and two cloud modules. The two modules are:

- packet_sshkey: adds a public SSH key from file or value to the Packet infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
- packet_device: manages servers on Packet. You can use this module to create, restart and delete devices.

The Packet modules and inventory script connect to the Packet API using the ``packet-python`` package. Install it with pip:

.. code-block:: bash

    $ pip install packet-python

In order to check the state of devices created by Ansible on Packet, it's a good idea to install one of the `Packet CLI clients <https://www.packet.net/developers/integrations/>`_. Otherwise you can check them through the `Packet portal <https://app.packet.net/portal>`_.

To use the modules and inventory script you'll need a Packet API token. You can generate an API token through the Packet portal `here <https://app.packet.net/portal#/api-keys>`__. The simplest way to authenticate yourself is to set the Packet API token in an environment variable:

.. code-block:: bash

    $ export PACKET_API_TOKEN=<your token here>

If you're not comfortable exporting your API token, you can pass it as a parameter to the modules.
On Packet, devices and reserved IP addresses belong to `projects <https://www.packet.com/developers/api/#projects>`_. In order to use the packet_device module, you need to specify the UUID of the project in which you want to create or manage devices. You can find a project's UUID in the Packet portal `here <https://app.packet.net/portal#/projects/list/table/>`_ (it's just under the project table) or through one of the available `CLIs <https://www.packet.net/developers/integrations/>`_.

If you want to use a new SSH key pair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` with ``ssh-keygen``. If you want to use an existing key pair, just copy the private and public key over to the playbook directory.

Device Creation
===============

The following code block is a simple playbook that creates one `Type 0 <https://www.packet.com/cloud/servers/t1-small/>`_ server (the 'plan' parameter). You have to supply 'plan' and 'operating_system'. 'location' defaults to 'ewr1' (Parsippany, NJ). You can find all the possible values for the parameters through a `CLI client <https://www.packet.net/developers/integrations/>`_.

.. code-block:: yaml

    # ...
    plan: baremetal_0
    facility: sjc1
After running ``ansible-playbook playbook_create.yml``, you should have a server provisioned on Packet. You can verify through a CLI or in the `Packet portal <https://app.packet.net/portal#/projects/list/table>`__.

If you get an error with the message "failed to set machine state present, error: Error 404: Not Found", please verify your project UUID.

As with most Ansible modules, the default states of the Packet modules are idempotent, meaning the resources in your project will remain the same after re-runs of a playbook. Thus, we can keep the ``packet_sshkey`` module call in our playbook. If the public key is already in your Packet account, the call will have no effect.

The second module call provisions 3 Packet Type 0 (specified using the 'plan' parameter) servers in the project identified by the 'project_id' parameter. The servers are all provisioned with CoreOS beta (the 'operating_system' parameter) and are customized with cloud-config user data passed to the 'user_data' parameter.

The ``packet_device`` module has a ``wait_for_public_IPv`` parameter that is used to specify the version of the IP address to wait for (valid values are ``4`` or ``6`` for IPv4 or IPv6). If specified, Ansible will wait until the GET API call for a device contains an Internet-routable IP address of the specified version. When referring to an IP address of a created device in subsequent module calls, it's wise to use the ``wait_for_public_IPv`` parameter, or ``state: active`` in the packet_device module call.
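For illustration, the device-creation task in such a playbook might use the parameter like this; a sketch assembled from the parameters described above:

.. code-block:: yaml

    - packet_device:
        project_id: "{{ project_id }}"
        hostnames: coreos-one
        operating_system: coreos_beta
        plan: baremetal_0
        user_data: "{{ lookup('file', 'cloud-config.yaml') }}"
        wait_for_public_IPv: 4
      register: newhosts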
Run the playbook:

.. code-block:: bash

    $ ansible-playbook playbook_coreos.yml

Once the playbook quits, your new devices should be reachable through SSH. Try to connect to one and check if etcd has started properly.
All of the modules require and are tested against pyrax 1.5 or higher.
You'll need this Python module installed on the execution host.
``pyrax`` is not currently available in many operating system package repositories, so you will likely need to install it with pip:

.. code-block:: bash

    $ pip install pyrax

Running from a Python Virtual Environment (Optional)
````````````````````````````````````````````````````

Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.

There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done through the interpreter line in modules. However, when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on 'localhost', or perhaps running through 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:

.. code-block:: ini

    [localhost]
    localhost ansible_connection=local ansible_python_interpreter=/path/to/your_venv/bin/python
Host Inventory
``````````````

Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances through other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.

In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
`Scaleway <https://scaleway.com>`_ is a cloud provider supported by Ansible, version 2.6 or higher, through a dynamic inventory plugin and modules.
Those modules are:

- :ref:`scaleway_sshkey_module`: adds a public SSH key from a file or value to the Scaleway infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.

Requirements
============

The Scaleway modules and inventory script connect to the Scaleway API using the `Scaleway REST API <https://developer.scaleway.com>`_.
To use the modules and inventory script you'll need a Scaleway API token.
You can generate an API token through the Scaleway console `here <https://cloud.scaleway.com/#/credentials>`__.
The simplest way to authenticate yourself is to set the Scaleway API token in an environment variable: