[backport][docs][2.10]Docsbackportapalooza 8 (#71379)

* Move 2.10.0rc1 release date a few days forward. (#71270)

At yesterday's meeting it was decided to have ansible-2.10.0 depend on
ansible-base-2.10.1 so that we can get several fixes for ansible-base's
routing (including adding the gluster.gluster collection).
ansible-base-2.10.1 will release on September 8th.  So we will plan on
releasing ansible-2.10.0rc1 on the 10th.

https://meetbot.fedoraproject.org/ansible-community/2020-08-12/ansible_community_meeting.2020-08-12-18.00.html
(cherry picked from commit e507c127e5)

* a few writing style updates (#71212)

(cherry picked from commit 4f0bd5de38)

* Fix code markups and add link to CVE (#71082)

(cherry picked from commit 92d59a58c0)

* Fix 404 links (#71256)

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit ecea018506)

* Writing style updates to Developing dynamic inventory topic (#71245)

* modified the writing style

* incorporated peer feedback

(cherry picked from commit ecd3b52ad7)

* Fix roadmap formatting. (#71275)

(cherry picked from commit ee48e0b0ad)

* Update password.py (#71295)

List md5_crypt, bcrypt, sha256_crypt, sha512_crypt as hash schemes in the password plugin.

(cherry picked from commit 1d1de2c6fd)

* Update ansible european IRC channel (#71326)

Signed-off-by: Rémi VERCHERE <remi@verchere.fr>
(cherry picked from commit 824cd4cbeb)

* Add warning about copyright year change (#71251)

To simplify project administration and avoid any legal issues,
add a warning in the docs. This reflects - https://github.com/ansible/ansible/issues/45989#issuecomment-423635622 and fixes: #45989

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 606604bb97)

* subelements: Clarify parameter docs (#71177)

skip_missing parameter in subelements lookup plugin is accepted from
inside the dictionary.

Fixes: #38182

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 6d17736ef4)

* Writing style updates to Using Variables topic (#71194)

* updated topic title, underline length for headings, and incorporated peer feedback

(cherry picked from commit 4d68efbe24)

* cron module defaults to current user, not root (#71337)

(cherry picked from commit 4792d83e13)

* Update Network Getting Started for FQCN/collection world (#71188)

* pull out network roles, cleanup, update first playbook examples, update gather facts section, some inventory conversion to .yml, update inventory and roles, simplify the navigation titles, fix tocs, feedback comments

(cherry picked from commit f79a7c5585)

* Add documentation about info/facts module development (#71250)

Fixes: #40151

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit 4f993922c8)

* network: Correct documentation (#71246)

ini-style inventory does not support Ansible Vault password.
This fixes network_best_practices_2.5 doc.
Fixes: #69039

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
(cherry picked from commit a1257d75aa)

* tidies up vars page (#71339)

(cherry picked from commit 02ea80f6d7)

* base.yml: Fix typos (#71346)

(cherry picked from commit 41d7d53573)

* quick fix to change main back to devel (#71342)

* quick fix to change main back to devel
* Update docs/docsite/rst/dev_guide/developing_collections.rst

Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 74f88c56a5)

* Add note about integration tests for new modules to the dev guide (#71345)

(cherry picked from commit b82889eef5)

* update fest link (#71376)

(cherry picked from commit 80b8fde946)

* incorporate minimalism feedback on debugging page (#71272)

Co-authored-by: bobjohnsrh <50667510+bobjohnsrh@users.noreply.github.com>

(cherry picked from commit 5073cfc8bc)

* fix header problem

Co-authored-by: Toshio Kuratomi <a.badger@gmail.com>
Co-authored-by: Sayee <57951841+sayee-jadhav@users.noreply.github.com>
Co-authored-by: Baptiste Mille-Mathias <baptiste.millemathias@gmail.com>
Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: rovshango <rovshan.go@gmail.com>
Co-authored-by: Remi Verchere <rverchere@users.noreply.github.com>
Co-authored-by: Jake Howard <RealOrangeOne@users.noreply.github.com>
Co-authored-by: Alicia Cozine <879121+acozine@users.noreply.github.com>
Co-authored-by: Per Lundberg <perlun@gmail.com>
Co-authored-by: Andrew Klychkov <aaklychkov@mail.ru>

@ -0,0 +1,2 @@
minor_changes:
- subelements - clarify the lookup plugin documentation for parameter handling (https://github.com/ansible/ansible/issues/38182).

@ -16,7 +16,7 @@
// no banner for latest release
// temp banner to advertise AnsibleFest
document.write('<div id="banner_id" class="admonition important">');
document.write('<p><a href="https://www.ansible.com/ansiblefest">AnsibleFest</a> is going virtual with two days of expert speakers, live demos and hands-on labs Oct 13-14!</p>');
document.write('<p><a href="https://www.ansible.com/ansiblefest?sc_cid=7013a000002gyPxAAI">AnsibleFest</a> is going virtual with two days of expert speakers, live demos and hands-on labs Oct 13-14!</p>');
document.write('</div>');
} else if (startsWith(current_url_path, "/ansible/devel/")) {

@ -84,7 +84,7 @@ Regional and Language-specific channels
---------------------------------------
- ``#ansible-es`` - Channel for Spanish speaking Ansible community.
- ``#ansibleu`` - Channel for the European Ansible Community.
- ``#ansible-eu`` - Channel for the European Ansible Community.
- ``#ansible-fr`` - Channel for French speaking Ansible community.
- ``#ansiblezh`` - Channel for Zurich/Swiss Ansible community.

@ -716,7 +716,7 @@ If you clone a fork, add the original repository as a remote ``upstream``::
cd ~/dev/ansible/collections/ansible_collections/community/general
git remote add upstream git@github.com:ansible-collections/community.general.git
Now you can use this checkout of ``community.general`` in playbooks and roles with whichever version of Ansible you have installed locally, including a local checkout of the ``main`` branch.
Now you can use this checkout of ``community.general`` in playbooks and roles with whichever version of Ansible you have installed locally, including a local checkout of ``ansible/ansible``'s ``devel`` branch.
For collections hosted in the ``ansible_collections`` GitHub org, create a branch and commit your changes on the branch. When you are done (remember to add tests, see :ref:`testing_collections`), push your changes to your fork of the collection and create a Pull Request. For other collections, especially for collections not hosted on GitHub, check the ``README.md`` of the collection for information on contributing to it.

@ -4,19 +4,16 @@
Developing dynamic inventory
****************************
.. contents:: Topics
:local:
As described in :ref:`dynamic_inventory`, Ansible can pull inventory information from dynamic sources,
including cloud sources, using the supplied :ref:`inventory plugins <inventory_plugins>`.
If the source you want is not currently covered by existing plugins, you can create your own as with any other plugin type.
Ansible can pull inventory information from dynamic sources, including cloud sources, by using the supplied :ref:`inventory plugins <inventory_plugins>`. For details about how to pull inventory information, see :ref:`dynamic_inventory`. If the source you want is not currently covered by existing plugins, you can create your own inventory plugin as with any other plugin type.
In previous versions you had to create a script or program that can output JSON in the correct format when invoked with the proper arguments.
In previous versions, you had to create a script or program that could output JSON in the correct format when invoked with the proper arguments.
You can still use and write inventory scripts, as we ensured backwards compatibility via the :ref:`script inventory plugin <script_inventory>`
and there is no restriction on the programming language used.
If you choose to write a script, however, you will need to implement some features yourself
such as caching, configuration management, dynamic variable and group composition, and other features.
If you use :ref:`inventory plugins <inventory_plugins>` instead, you can leverage the Ansible codebase to add these common features.
If you choose to write a script, however, you will need to implement some features yourself such as caching, configuration management, dynamic variable and group composition, and so on.
If you use :ref:`inventory plugins <inventory_plugins>` instead, you can leverage the Ansible codebase and add these common features automatically.
.. contents:: Topics
:local:
.. _inventory_sources:
@ -27,7 +24,7 @@ Inventory sources
Inventory sources are the input strings that inventory plugins work with.
An inventory source can be a path to a file or to a script, or it can be raw data that the plugin can interpret.
The table below shows some examples of inventory plugins and the kinds of source you can pass to them with ``-i`` on the command line.
The table below shows some examples of inventory plugins and the source types that you can pass to them with ``-i`` on the command line.
+--------------------------------------------+-----------------------------------------+
| Plugin | Source |
@ -51,14 +48,14 @@ The table below shows some examples of inventory plugins and the kinds of source
Inventory plugins
=================
Like most plugin types (except modules), inventory plugins must be developed in Python. They execute on the controller and should therefore match the :ref:`control_node_requirements`.
Like most plugin types (except modules), inventory plugins must be developed in Python. They execute on the controller and should therefore adhere to the :ref:`control_node_requirements`.
Most of the documentation in :ref:`developing_plugins` also applies here. You should read that document first for a general understanding and then come back to this document for specifics on inventory plugins.
Inventory plugins normally only execute at the start of a run, before playbooks, plays, and roles are loaded.
However, you can use the ``meta: refresh_inventory`` task to clear the current inventory and to execute the inventory plugins again, which will generate a new inventory.
Normally, inventory plugins are executed at the start of a run, and before the playbooks, plays, or roles are loaded.
However, you can use the ``meta: refresh_inventory`` task to clear the current inventory and execute the inventory plugins again, and this task will generate a new inventory.
If you use the persistent cache, inventory plugins can also use the configured cache plugin to store and retrieve data. This avoids repeating costly external calls.
If you use the persistent cache, inventory plugins can also use the configured cache plugin to store and retrieve data. Caching inventory avoids making repeated and costly external calls.
.. _developing_an_inventory_plugin:
@ -75,11 +72,9 @@ The first thing you want to do is use the base class:
NAME = 'myplugin' # used internally by Ansible, it should match the file name but not required
If the inventory plugin is in a collection the NAME should be in the format of 'namespace.collection_name.myplugin'.
This class has a couple of methods each plugin should implement and a few helpers for parsing the inventory source and updating the inventory.
If the inventory plugin is in a collection, the NAME should be in the 'namespace.collection_name.myplugin' format. The base class has a couple of methods that each plugin should implement and a few helpers for parsing the inventory source and updating the inventory.
After you have the basic plugin working you might want to to incorporate other features by adding more base classes:
After you have the basic plugin working, you can incorporate other features by adding more base classes:
.. code-block:: python
@ -89,14 +84,14 @@ After you have the basic plugin working you might want to to incorporate other f
NAME = 'myplugin'
For the bulk of the work in the plugin, We mostly want to deal with 2 methods ``verify_file`` and ``parse``.
For the bulk of the work in a plugin, we mostly want to deal with 2 methods ``verify_file`` and ``parse``.
.. _inventory_plugin_verify_file:
verify_file
^^^^^^^^^^^
verify_file method
^^^^^^^^^^^^^^^^^^
This method is used by Ansible to make a quick determination if the inventory source is usable by the plugin. It does not need to be 100% accurate as there might be overlap in what plugins can handle and Ansible will try the enabled plugins (in order) by default.
Ansible uses this method to quickly determine if the inventory source is usable by the plugin. The determination does not need to be 100% accurate, as there might be an overlap in what plugins can handle and by default Ansible will try the enabled plugins as per their sequence.
.. code-block:: python
@ -109,9 +104,9 @@ This method is used by Ansible to make a quick determination if the inventory so
valid = True
return valid
In this case, from the :ref:`virtualbox inventory plugin <virtualbox_inventory>`, we screen for specific file name patterns to avoid attempting to consume any valid yaml file. You can add any type of condition here, but the most common one is 'extension matching'. If you implement extension matching for YAML configuration files the path suffix <plugin_name>.<yml|yaml> should be accepted. All valid extensions should be documented in the plugin description.
In the above example, from the :ref:`virtualbox inventory plugin <virtualbox_inventory>`, we screen for specific file name patterns to avoid attempting to consume any valid YAML file. You can add any type of condition here, but the most common one is 'extension matching'. If you implement extension matching for YAML configuration files, the path suffix <plugin_name>.<yml|yaml> should be accepted. All valid extensions should be documented in the plugin description.
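
As a rough sketch only (assuming a hypothetical plugin named ``myplugin`` that consumes ``myplugin.yml``/``myplugin.yaml`` source files), extension matching in ``verify_file`` could look like this:

.. code-block:: python

    from ansible.plugins.inventory import BaseInventoryPlugin


    class InventoryModule(BaseInventoryPlugin):

        NAME = 'myplugin'

        def verify_file(self, path):
            ''' return true/false if this is possibly a valid file for this plugin to consume '''
            valid = False
            if super(InventoryModule, self).verify_file(path):
                # the base class only checks that the file exists and is readable by the current user
                if path.endswith(('myplugin.yaml', 'myplugin.yml')):
                    valid = True
            return valid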
Another example that actually does not use a 'file' but the inventory source string itself,
The following is another example that does not use a 'file' but the inventory source string itself,
from the :ref:`host list <host_list_inventory>` plugin:
.. code-block:: python
@ -130,11 +125,10 @@ This method is just to expedite the inventory process and avoid unnecessary pars
.. _inventory_plugin_parse:
parse
^^^^^
parse method
^^^^^^^^^^^^
This method does the bulk of the work in the plugin.
It takes the following parameters:
* inventory: inventory object with existing data and the methods to add hosts/groups/variables to inventory
@ -153,7 +147,7 @@ The base class does some minimal assignment for reuse in other methods.
self.inventory = inventory
self.templar = Templar(loader=loader)
It is up to the plugin now to deal with the inventory source provided and translate that into the Ansible inventory.
It is up to the plugin now to parse the provided inventory source and translate it into Ansible inventory.
To facilitate this, the example below uses a few helper functions:
.. code-block:: python
@ -190,7 +184,7 @@ To facilitate this, the example below uses a few helper functions:
self.inventory.add_host(server['name'])
self.inventory.set_variable(server['name'], 'ansible_host', server['external_ip'])
The specifics will vary depending on API and structure returned. But one thing to keep in mind, if the inventory source or any other issue crops up you should ``raise AnsibleParserError`` to let Ansible know that the source was invalid or the process failed.
The specifics will vary depending on API and structure returned. Remember that if you get an inventory source error or any other issue, you should ``raise AnsibleParserError`` to let Ansible know that the source was invalid or the process failed.
For examples on how to implement an inventory plugin, see the source code here:
`lib/ansible/plugins/inventory <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/inventory>`_.
@ -200,7 +194,7 @@ For examples on how to implement an inventory plugin, see the source code here:
inventory cache
^^^^^^^^^^^^^^^
Extend the inventory plugin documentation with the inventory_cache documentation fragment and use the Cacheable base class to have the caching system at your disposal.
To cache the inventory, extend the inventory plugin documentation with the inventory_cache documentation fragment and use the Cacheable base class.
.. code-block:: yaml
@ -213,7 +207,7 @@ Extend the inventory plugin documentation with the inventory_cache documentation
NAME = 'myplugin'
Next, load the cache plugin specified by the user to read from and update the cache. If your inventory plugin uses YAML based configuration files and the ``_read_config_data`` method, the cache plugin is loaded within that method. If your inventory plugin does not use ``_read_config_data``, you must load the cache explicitly with ``load_cache_plugin``.
Next, load the cache plugin specified by the user to read from and update the cache. If your inventory plugin uses YAML-based configuration files and the ``_read_config_data`` method, the cache plugin is loaded within that method. If your inventory plugin does not use ``_read_config_data``, you must load the cache explicitly with ``load_cache_plugin``.
.. code-block:: python
@ -224,7 +218,7 @@ Next, load the cache plugin specified by the user to read from and update the ca
self.load_cache_plugin()
Before using the cache, retrieve a unique cache key using the ``get_cache_key`` method. This needs to be done by all inventory modules using the cache, so you don't use/overwrite other parts of the cache.
Before using the cache plugin, you must retrieve a unique cache key by using the ``get_cache_key`` method. This task needs to be done by all inventory modules using the cache, so that you don't use/overwrite other parts of the cache.
.. code-block:: python
@ -272,25 +266,25 @@ Now that you've enabled caching, loaded the correct plugin, and retrieved a uniq
After the ``parse`` method is complete, the contents of ``self._cache`` is used to set the cache plugin if the contents of the cache have changed.
You have three other cache methods available:
- ``set_cache_plugin`` forces the cache plugin to be set with the contents of ``self._cache`` before the ``parse`` method completes
- ``update_cache_if_changed`` sets the cache plugin only if ``self._cache`` has been modified before the ``parse`` method completes
- ``clear_cache`` deletes the keys in ``self._cache`` from your cache plugin
- ``set_cache_plugin`` forces the cache plugin to be set with the contents of ``self._cache``, before the ``parse`` method completes
- ``update_cache_if_changed`` sets the cache plugin only if ``self._cache`` has been modified, before the ``parse`` method completes
- ``clear_cache`` flushes the cache, ultimately by calling the cache plugin's ``flush()`` method, whose implementation is dependent upon the particular cache plugin in use. Note that if the user is using the same cache backend for facts and inventory, both will get flushed. To avoid this, the user can specify a distinct cache backend in their inventory plugin configuration.
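
As an illustration only, a ``parse`` method that ties these caching pieces together might look roughly like the sketch below; ``_get_raw_data`` and ``_populate`` are hypothetical helpers standing in for the expensive external call and for the ``add_host``/``set_variable`` logic shown earlier:

.. code-block:: python

    from ansible.errors import AnsibleParserError
    from ansible.module_utils._text import to_native
    from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable


    class InventoryModule(BaseInventoryPlugin, Cacheable):

        NAME = 'myplugin'

        def parse(self, inventory, loader, path, cache=True):
            super(InventoryModule, self).parse(inventory, loader, path)
            self._read_config_data(path)  # also loads the cache plugin for YAML-based config sources

            cache_key = self.get_cache_key(path)
            use_cache = self.get_option('cache') and cache
            update_cache = False
            results = None

            if use_cache:
                try:
                    results = self._cache[cache_key]
                except KeyError:
                    update_cache = True  # cache miss, so fetch fresh data below

            if results is None:
                try:
                    results = self._get_raw_data()  # hypothetical expensive call to the external source
                except Exception as e:
                    raise AnsibleParserError('Unable to fetch hosts from source: %s' % to_native(e))

            if update_cache:
                self._cache[cache_key] = results  # written back to the cache plugin after parse completes

            self._populate(results)  # hypothetical helper that calls self.inventory.add_host() and so on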
.. _inventory_source_common_format:
Inventory source common format
------------------------------
Common format for inventory sources
-----------------------------------
To simplify development, most plugins use a mostly standard configuration file as the inventory source, YAML based and with just one required field ``plugin`` which should contain the name of the plugin that is expected to consume the file.
Depending on other common features used, other fields might be needed, but each plugin can also add its own custom options as needed.
For example, if you use the integrated caching, ``cache_plugin``, ``cache_timeout`` and other cache related fields could be present.
To simplify development, most plugins use a standard YAML-based configuration file as the inventory source. The file has only one required field ``plugin``, which should contain the name of the plugin that is expected to consume the file.
Depending on other common features used, you might need other fields, and you can add custom options in each plugin as required.
For example, if you use the integrated caching, ``cache_plugin``, ``cache_timeout`` and other cache-related fields could be present.
.. _inventory_development_auto:
The 'auto' plugin
-----------------
Since Ansible 2.5, we include the :ref:`auto inventory plugin <auto_inventory>` enabled by default, which itself just loads other plugins if they use the common YAML configuration format that specifies a ``plugin`` field that matches an inventory plugin name, this makes it easier to use your plugin w/o having to update configurations.
From Ansible 2.5 onwards, we include the :ref:`auto inventory plugin <auto_inventory>` and enable it by default. If the ``plugin`` field in your standard configuration file matches the name of your inventory plugin, the ``auto`` inventory plugin will load your plugin. The 'auto' plugin makes it easier to use your plugin without having to update configurations.
.. _inventory_scripts:
@ -307,13 +301,11 @@ Even though we now have inventory plugins, we still support inventory scripts, n
Inventory script conventions
----------------------------
Inventory scripts must accept the ``--list`` and ``--host <hostname>`` arguments, other arguments are allowed but Ansible will not use them.
They might still be useful for when executing the scripts directly.
Inventory scripts must accept the ``--list`` and ``--host <hostname>`` arguments. Although other arguments are allowed, Ansible will not use them.
Such arguments might still be useful for executing the scripts directly.
When the script is called with the single argument ``--list``, the script must output to stdout a JSON-encoded hash or
dictionary containing all of the groups to be managed.
Each group's value should be either a hash or dictionary containing a list of each host, any child groups,
and potential group variables, or simply a list of hosts::
dictionary that contains all the groups to be managed. Each group's value should be either a hash or dictionary containing a list of each host, any child groups, and potential group variables, or simply a list of hosts::
{
@ -334,9 +326,9 @@ and potential group variables, or simply a list of hosts::
}
If any of the elements of a group are empty they may be omitted from the output.
If any of the elements of a group are empty, they may be omitted from the output.
When called with the argument ``--host <hostname>`` (where <hostname> is a host from above), the script must print either an empty JSON hash/dictionary, or a hash/dictionary of variables to make available to templates and playbooks. For example::
When called with the argument ``--host <hostname>`` (where <hostname> is a host from above), the script must print either an empty JSON hash/dictionary, or a hash/dictionary of variables to make them available to templates and playbooks. For example::
{
@ -344,7 +336,7 @@ When called with the argument ``--host <hostname>`` (where <hostname> is a host
"VAR002": "VALUE",
}
Printing variables is optional. If the script does not do this, it should print an empty hash or dictionary.
Printing variables is optional. If the script does not print variables, it should print an empty hash or dictionary.
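
To tie these conventions together, here is a minimal sketch of an inventory script; the group names, hosts, and variables are invented purely for illustration:

.. code-block:: python

    #!/usr/bin/env python
    import argparse
    import json


    def get_inventory():
        # --list output: every group, with hosts, children, and group variables as needed
        return {
            'webservers': {
                'hosts': ['web01.example.com', 'web02.example.com'],
                'vars': {'http_port': 80},
            },
            'ungrouped': {'hosts': []},
        }


    def get_host_vars(hostname):
        # --host output: per-host variables, or an empty dict if there are none
        return {'ansible_host': '10.0.0.10'} if hostname == 'web01.example.com' else {}


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('--list', action='store_true')
        parser.add_argument('--host')
        args = parser.parse_args()

        if args.list:
            print(json.dumps(get_inventory()))
        elif args.host:
            print(json.dumps(get_host_vars(args.host)))
        else:
            print(json.dumps({}))


    if __name__ == '__main__':
        main()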
.. _inventory_script_tuning:
@ -353,17 +345,11 @@ Tuning the external inventory script
.. versionadded:: 1.3
The stock inventory script system detailed above works for all versions of Ansible,
but calling ``--host`` for every host can be rather inefficient,
especially if it involves API calls to a remote subsystem.
The stock inventory script system mentioned above works for all versions of Ansible, but calling ``--host`` for every host can be rather inefficient, especially if it involves API calls to a remote subsystem.
To avoid this inefficiency, if the inventory script returns a top level element called "_meta",
it is possible to return all of the host variables in one script execution.
When this meta element contains a value for "hostvars",
the inventory script will not be invoked with ``--host`` for each host.
This results in a significant performance increase for large numbers of hosts.
To avoid this inefficiency, if the inventory script returns a top-level element called "_meta", it is possible to return all the host variables in a single script execution. When this meta element contains a value for "hostvars", the inventory script will not be invoked with ``--host`` for each host. This behavior results in a significant performance increase for large numbers of hosts.
The data to be added to the top level JSON dictionary looks like this::
The data to be added to the top-level JSON dictionary looks like this::
{
@ -398,10 +384,7 @@ For example::
.. _replacing_inventory_ini_with_dynamic_provider:
If you intend to replace an existing static inventory file with an inventory script,
it must return a JSON object which contains an 'all' group that includes every
host in the inventory as a member and every group in the inventory as a child.
It should also include an 'ungrouped' group which contains all hosts which are not members of any other group.
If you intend to replace an existing static inventory file with an inventory script, it must return a JSON object which contains an 'all' group that includes every host in the inventory as a member and every group in the inventory as a child. It should also include an 'ungrouped' group which contains all hosts which are not members of any other group.
A skeleton example of this JSON object is:
.. code-block:: json

@ -44,7 +44,9 @@ After the shebang and UTF-8 coding, there should be a `copyright line <https://w
# Copyright: (c) 2018, Terry Jones <terry.jones@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
Major additions to the module (for instance, rewrites) may add additional copyright lines. Any legal review will include the source control history, so an exhaustive copyright header is not necessary. When adding a second copyright line for a significant feature or rewrite, add the newer line above the older one:
Major additions to the module (for instance, rewrites) may add additional copyright lines. Any legal review will include the source control history, so an exhaustive copyright header is not necessary.
Please do not edit the existing copyright year. This simplifies project administration and is unlikely to cause any interesting legal issues.
When adding a second copyright line for a significant feature or rewrite, add the newer line above the older one:
.. code-block:: python

@ -5,7 +5,9 @@
Ansible module development: getting started
*******************************************
A module is a reusable, standalone script that Ansible runs on your behalf, either locally or remotely. Modules interact with your local machine, an API, or a remote system to perform specific tasks like changing a database password or spinning up a cloud instance. Each module can be used by the Ansible API, or by the :command:`ansible` or :command:`ansible-playbook` programs. A module provides a defined interface, accepting arguments and returning information to Ansible by printing a JSON string to stdout before exiting. Ansible ships with thousands of modules, and you can easily write your own. If you're writing a module for local use, you can choose any programming language and follow your own rules. This tutorial illustrates how to get started developing an Ansible module in Python.
A module is a reusable, standalone script that Ansible runs on your behalf, either locally or remotely. Modules interact with your local machine, an API, or a remote system to perform specific tasks like changing a database password or spinning up a cloud instance. Each module can be used by the Ansible API, or by the :command:`ansible` or :command:`ansible-playbook` programs. A module provides a defined interface, accepts arguments, and returns information to Ansible by printing a JSON string to stdout before exiting.
If you need functionality that is not available in any of the thousands of Ansible modules found in collections, you can easily write your own custom module. When you write a module for local use, you can choose any programming language and follow your own rules. Use this topic to learn how to create an Ansible module in Python. After you create a module, you must add it locally to the appropriate directory so that Ansible can find and execute it. For details about adding a module locally, see :ref:`developing_locally`.
.. contents:: Topics
:local:
@ -46,145 +48,56 @@ Common environment setup
``$ . venv/bin/activate && . hacking/env-setup``
Starting a new module
=====================
Creating an info or a facts module
==================================
Ansible gathers information about the target machines using facts modules, and gathers information on other objects or files using info modules.
If you find yourself trying to add ``state: info`` or ``state: list`` to an existing module, that is often a sign that a new dedicated ``_facts`` or ``_info`` module is needed.
In Ansible 2.8 and onwards, we have two types of information modules: ``*_info`` and ``*_facts``.
If a module is named ``<something>_facts``, it should be because its main purpose is returning ``ansible_facts``. Do not use the ``_facts`` suffix for modules that do not return ``ansible_facts``.
Only use ``ansible_facts`` for information that is specific to the host machine, for example network interfaces and their configuration, which operating system and which programs are installed.
Modules that query/return general information (and not ``ansible_facts``) should be named ``_info``.
General information is non-host specific information, for example information on online/cloud services (you can access different accounts for the same online service from the same host), or information on VMs and containers accessible from the machine, or information on individual files or programs.
Info and facts modules are just like any other Ansible module, with a few minor requirements:
1. They MUST be named ``<something>_info`` or ``<something>_facts``, where <something> is singular.
2. Info ``*_info`` modules MUST return in the form of the :ref:`result dictionary<common_return_values>` so other modules can access them.
3. Fact ``*_facts`` modules MUST return in the ``ansible_facts`` field of the :ref:`result dictionary<common_return_values>` so other modules can access them.
4. They MUST support :ref:`check_mode <check_mode_dry>`.
5. They MUST NOT make any changes to the system.
6. They MUST document the :ref:`return fields<return_block>` and :ref:`examples<examples_block>`.
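
As a minimal sketch of how requirements 2 and 3 differ (the module name and the returned data are hypothetical, and the documentation blocks that every real module requires are omitted):

.. code-block:: python

    #!/usr/bin/python
    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type

    from ansible.module_utils.basic import AnsibleModule


    def main():
        # info and facts modules must support check mode and must not change the system
        module = AnsibleModule(argument_spec=dict(), supports_check_mode=True)

        # a *_facts module returns host-specific data under the ansible_facts key ...
        module.exit_json(changed=False, ansible_facts=dict(my_test_interfaces=['eth0', 'eth1']))

        # ... while a *_info module would instead return general, non-host-specific data
        # at the top level of the result dictionary, for example:
        # module.exit_json(changed=False, volumes=[...])


    if __name__ == '__main__':
        main()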
To create an info module:
1. Navigate to the correct directory for your new module: ``$ cd lib/ansible/modules/``. If you are developing the module in a collection, use ``$ cd plugins/modules/`` inside your collection development tree instead.
2. Create your new module file: ``$ touch my_test_info.py``.
3. Paste the content below into your new info module file. It includes the :ref:`required Ansible format and documentation <developing_modules_documenting>` and some example code.
4. Modify and extend the code to do what you want your new info module to do. See the :ref:`programming tips <developing_modules_best_practices>` and :ref:`Python 3 compatibility <developing_python_3>` pages for pointers on writing clean and concise module code.
.. literalinclude:: ../../../../examples/scripts/my_test_info.py
:language: python
Use the same process to create a facts module.
.. literalinclude:: ../../../../examples/scripts/my_test_facts.py
:language: python
Creating a module
=================
To create a new module:
1. Navigate to the correct directory for your new module: ``$ cd lib/ansible/modules/``
2. Create your new module file: ``$ touch my_test.py``
1. Navigate to the correct directory for your new module: ``$ cd lib/ansible/modules/``. If you are developing the module in a collection, use ``$ cd plugins/modules/`` inside your collection development tree instead.
2. Create your new module file: ``$ touch my_test.py``.
3. Paste the content below into your new module file. It includes the :ref:`required Ansible format and documentation <developing_modules_documenting>` and some example code.
4. Modify and extend the code to do what you want your new module to do. See the :ref:`programming tips <developing_modules_best_practices>` and :ref:`Python 3 compatibility <developing_python_3>` pages for pointers on writing clean, concise module code.
.. code-block:: python
#!/usr/bin/python

# Copyright: (c) 2018, Terry Jones <terry.jones@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = r'''
---
module: my_test
short_description: This is my test module
version_added: "2.4"
description:
    - "This is my longer description explaining my test module."
options:
    name:
        description:
            - This is the message to send to the test module.
        required: true
        type: str
    new:
        description:
            - Control to demo if the result of this module is changed or not.
        required: false
        type: bool
extends_documentation_fragment:
    - azure
author:
    - Your Name (@yourhandle)
'''

EXAMPLES = r'''
# Pass in a message
- name: Test with a message
  my_test:
    name: hello world

# pass in a message and have changed true
- name: Test with a message and changed output
  my_test:
    name: hello world
    new: true

# fail the module
- name: Test failure of the module
  my_test:
    name: fail me
'''

RETURN = r'''
original_message:
    description: The original name param that was passed in
    type: str
    returned: always
message:
    description: The output message that the test module generates
    type: str
    returned: always
'''

from ansible.module_utils.basic import AnsibleModule


def run_module():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True),
        new=dict(type='bool', required=False, default=False)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is if this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_message='',
        message=''
    )

    # the AnsibleModule object will be our abstraction working with Ansible
    # this includes instantiation, a couple of common attr would be the
    # args/params passed to the execution, as well as if the module
    # supports check mode
    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    # if the user is working with this module in only check mode we do not
    # want to make any changes to the environment, just return the current
    # state with no modifications
    if module.check_mode:
        module.exit_json(**result)

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_message'] = module.params['name']
    result['message'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    if module.params['new']:
        result['changed'] = True

    # during the execution of the module, if there is an exception or a
    # conditional state that effectively causes a failure, run
    # AnsibleModule.fail_json() to pass in the message and the result
    if module.params['name'] == 'fail me':
        module.fail_json(msg='You requested this to fail', **result)

    # in the event of a successful module execution, you will want to
    # simple AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)


def main():
    run_module()


if __name__ == '__main__':
    main()
.. literalinclude:: ../../../../examples/scripts/my_test.py
:language: python
Exercising your module code
===========================
@ -249,15 +162,19 @@ Testing basics
These two examples will get you started with testing your module code. Please review our :ref:`testing <developing_testing>` section for more detailed
information, including instructions for :ref:`testing module documentation <testing_module_documentation>`, adding :ref:`integration tests <testing_integration>`, and more.
Sanity tests
------------
.. note::
Every new module and plugin should have integration tests, even if the tests cannot be run on Ansible CI infrastructure.
In this case, the tests should be marked with the ``unsupported`` alias in `aliases file <https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/integration-aliases.html>`_.
Performing sanity tests
-----------------------
You can run through Ansible's sanity checks in a container:
``$ ansible-test sanity -v --docker --python 2.7 MODULE_NAME``
Note that this example requires Docker to be installed and running. If you'd rather not use a
container for this, you can choose to use ``--venv`` instead of ``--docker``.
.. note::
Note that this example requires Docker to be installed and running. If you'd rather not use a container for this, you can choose to use ``--venv`` instead of ``--docker``.
Unit tests
----------
@ -265,7 +182,8 @@ Unit tests
You can add unit tests for your module in ``./test/units/modules``. You must first set up your testing environment. In this example, we're using Python 3.5.
- Install the requirements (outside of your virtual environment): ``$ pip3 install -r ./test/lib/ansible_test/_data/requirements/units.txt``
- To run all tests do the following: ``$ ansible-test units --python 3.5`` (you must run ``. hacking/env-setup`` prior to this)
- Run ``. hacking/env-setup``
- To run all tests do the following: ``$ ansible-test units --python 3.5``. If you are using a CI environment, these tests will run automatically.
.. note:: Ansible uses pytest for unit testing.

@ -16,6 +16,10 @@ Some tests may require credentials. Credentials may be specified with `credenti
Some tests may require root.
.. note::
Every new module and plugin should have integration tests, even if the tests cannot be run on Ansible CI infrastructure.
In this case, the tests should be marked with the ``unsupported`` alias in `aliases file <https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/integration-aliases.html>`_.
Quick Start
===========

@ -153,10 +153,8 @@ directory, which is then included directly.
Module test case common code
````````````````````````````
Keep common code as specific as possible within the `test/units/` directory structure. For
example, if it's specific to testing Amazon modules, it should be in
`test/units/modules/cloud/amazon/`. Don't import common unit test code from directories
outside the current or parent directories.
Keep common code as specific as possible within the `test/units/` directory structure.
Don't import common unit test code from directories outside the current or parent directories.
Don't import other unit tests from a unit test. Any common code should be in dedicated
files that aren't themselves tests.
@ -168,15 +166,10 @@ Fixtures files
To mock out fetching results from devices, or provide other complex data structures that
come from external libraries, you can use ``fixtures`` to read in pre-generated data.
Text files live in ``test/units/modules/network/PLATFORM/fixtures/``
You can check how `fixtures <https://github.com/ansible/ansible/tree/devel/test/units/module_utils/facts/fixtures/cpuinfo>`_
are used in `cpuinfo fact tests <https://github.com/ansible/ansible/blob/9f72ff80e3fe173baac83d74748ad87cb6e20e64/test/units/module_utils/facts/hardware/linux_data.py#L384>`_
Data is loaded using the ``load_fixture`` method
See `eos_banner test
<https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py>`_
for a practical example.
If you are simulating APIs you may find that python placebo is useful. See
If you are simulating APIs you may find that Python placebo is useful. See
:ref:`testing_units_modules` for more information.

@ -1,7 +1,7 @@
.. _network_developer_guide:
**********************************
Network Automation Developer Guide
Network Developer Guide
**********************************
Welcome to the Developer Guide for Ansible Network Automation!

@ -4,9 +4,10 @@ Build Your Inventory
Running a playbook without an inventory requires several command-line flags. Also, running a playbook against a single device is not a huge efficiency gain over making the same change manually. The next step to harnessing the full power of Ansible is to use an inventory file to organize your managed nodes into groups with information like the ``ansible_network_os`` and the SSH user. A fully-featured inventory file can serve as the source of truth for your network. Using an inventory file, a single playbook can maintain hundreds of network devices with a single command. This page shows you how to build an inventory file, step by step.
.. contents:: Topics
.. contents::
:local:
Basic Inventory
Basic inventory
==================================================
First, group your inventory logically. Best practice is to group servers and network devices by their What (application, stack or microservice), Where (datacenter or region), and When (development stage):
@ -19,6 +20,45 @@ Avoid spaces, hyphens, and preceding numbers (use ``floor_19``, not ``19th_floor
This tiny example data center illustrates a basic group structure. You can group groups using the syntax ``[metagroupname:children]`` and listing groups as members of the metagroup. Here, the group ``network`` includes all leafs and all spines; the group ``datacenter`` includes all network devices plus all webservers.
.. code-block:: yaml
---
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
    leaf02:
      ansible_host: 10.16.10.12
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
    spine02:
      ansible_host: 10.16.10.14
network:
  children:
    leafs:
    spines:
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
    webserver02:
      ansible_host: 10.16.10.16
datacenter:
  children:
    network:
    webservers:
You can also create this same inventory in INI format.
.. code-block:: ini
[leafs]
@ -42,140 +82,270 @@ This tiny example data center illustrates a basic group structure. You can group
webservers
Add Variables to Inventory
Add variables to the inventory
================================================================================
Next, you can set values for many of the variables you needed in your first Ansible command in the inventory, so you can skip them in the ansible-playbook command. In this example, the inventory includes each network device's IP, OS, and SSH user. If your network devices are only accessible by IP, you must add the IP to the inventory file. If you access your network devices using hostnames, the IP is not necessary.
.. code-block:: ini
[leafs]
leaf01 ansible_host=10.16.10.11 ansible_network_os=vyos ansible_user=my_vyos_user
leaf02 ansible_host=10.16.10.12 ansible_network_os=vyos ansible_user=my_vyos_user
[spines]
spine01 ansible_host=10.16.10.13 ansible_network_os=vyos ansible_user=my_vyos_user
spine02 ansible_host=10.16.10.14 ansible_network_os=vyos ansible_user=my_vyos_user
[network:children]
leafs
spines
Next, you can set values for many of the variables you needed in your first Ansible command in the inventory, so you can skip them in the ``ansible-playbook`` command. In this example, the inventory includes each network device's IP, OS, and SSH user. If your network devices are only accessible by IP, you must add the IP to the inventory file. If you access your network devices using hostnames, the IP is not necessary.
[servers]
server01 ansible_host=10.16.10.15 ansible_user=my_server_user
server02 ansible_host=10.16.10.16 ansible_user=my_server_user
[datacenter:children]
leafs
spines
servers
.. code-block:: yaml
Group Variables within Inventory
---
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
    leaf02:
      ansible_host: 10.16.10.12
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
    spine02:
      ansible_host: 10.16.10.14
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
network:
  children:
    leafs:
    spines:
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
      ansible_user: my_server_user
    webserver02:
      ansible_host: 10.16.10.16
      ansible_user: my_server_user
datacenter:
  children:
    network:
    webservers:
Group variables within inventory
================================================================================
When devices in a group share the same variable values, such as OS or SSH user, you can reduce duplication and simplify maintenance by consolidating these into group variables:
.. code-block:: ini
[leafs]
leaf01 ansible_host=10.16.10.11
leaf02 ansible_host=10.16.10.12
.. code-block:: yaml
[leafs:vars]
ansible_network_os=vyos
ansible_user=my_vyos_user
---
[spines]
spine01 ansible_host=10.16.10.13
spine02 ansible_host=10.16.10.14
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
    leaf02:
      ansible_host: 10.16.10.12
  vars:
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
[spines:vars]
ansible_network_os=vyos
ansible_user=my_vyos_user
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
    spine02:
      ansible_host: 10.16.10.14
  vars:
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
[network:children]
leafs
spines
network:
  children:
    leafs:
    spines:
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
    webserver02:
      ansible_host: 10.16.10.16
  vars:
    ansible_user: my_server_user
[servers]
server01 ansible_host=10.16.10.15
server02 ansible_host=10.16.10.16
datacenter:
  children:
    network:
    webservers:
[datacenter:children]
leafs
spines
servers
Variable Syntax
Variable syntax
================================================================================
The syntax for variable values is different in inventory, in playbooks and in ``group_vars`` files, which are covered below. Even though playbook and ``group_vars`` files are both written in YAML, you use variables differently in each.
The syntax for variable values is different in inventory, in playbooks, and in the ``group_vars`` files, which are covered below. Even though playbook and ``group_vars`` files are both written in YAML, you use variables differently in each.
- In an ini-style inventory file you **must** use the syntax ``key=value`` for variable values: ``ansible_network_os=vyos``.
- In any file with the ``.yml`` or ``.yaml`` extension, including playbooks and ``group_vars`` files, you **must** use YAML syntax: ``key: value``
- In an ini-style inventory file you **must** use the syntax ``key=value`` for variable values: ``ansible_network_os=vyos.vyos.vyos``.
- In any file with the ``.yml`` or ``.yaml`` extension, including playbooks and ``group_vars`` files, you **must** use YAML syntax: ``key: value``.
- In ``group_vars`` files, use the full ``key`` name: ``ansible_network_os: vyos``.
- In playbooks, use the short-form ``key`` name, which drops the ``ansible`` prefix: ``network_os: vyos``
- In ``group_vars`` files, use the full ``key`` name: ``ansible_network_os: vyos.vyos.vyos``.
- In playbooks, use the short-form ``key`` name, which drops the ``ansible`` prefix: ``network_os: vyos.vyos.vyos``.
Group Inventory by Platform
Group inventory by platform
================================================================================
As your inventory grows, you may want to group devices by platform. This allows you to specify platform-specific variables easily for all devices on that platform:
.. code-block:: ini
[vyos_leafs]
leaf01 ansible_host=10.16.10.11
leaf02 ansible_host=10.16.10.12
[vyos_spines]
spine01 ansible_host=10.16.10.13
spine02 ansible_host=10.16.10.14
.. code-block:: yaml
[vyos:children]
vyos_leafs
vyos_spines
---
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
    leaf02:
      ansible_host: 10.16.10.12
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
    spine02:
      ansible_host: 10.16.10.14
network:
  children:
    leafs:
    spines:
  vars:
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
[vyos:vars]
ansible_connection=network_cli
ansible_network_os=vyos
ansible_user=my_vyos_user
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
    webserver02:
      ansible_host: 10.16.10.16
  vars:
    ansible_user: my_server_user
[network:children]
vyos
datacenter:
  children:
    network:
    webservers:
[servers]
server01 ansible_host=10.16.10.15
server02 ansible_host=10.16.10.16
With this setup, you can run ``first_playbook.yml`` with only two flags:
[datacenter:children]
vyos
servers
.. code-block:: console
With this setup, you can run first_playbook.yml with only two flags:
ansible-playbook -i inventory.yml -k first_playbook.yml
.. code-block:: console
With the ``-k`` flag, you provide the SSH password(s) at the prompt. Alternatively, you can store SSH and other secrets and passwords securely in your group_vars files with ``ansible-vault``. See :ref:`network_vault` for details.
ansible-playbook -i inventory -k first_playbook.yml
Verifying the inventory
=========================
With the ``-k`` flag, you provide the SSH password(s) at the prompt. Alternatively, you can store SSH and other secrets and passwords securely in your group_vars files with ``ansible-vault``.
You can use the :ref:`ansible-inventory` CLI command to display the inventory as Ansible sees it.
.. code-block:: console
Protecting Sensitive Variables with ``ansible-vault``
$ ansible-inventory -i test.yml --list
{
    "_meta": {
        "hostvars": {
            "leaf01": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.11",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "leaf02": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.12",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "spine01": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.13",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "spine02": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.14",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "webserver01": {
                "ansible_host": "10.16.10.15",
                "ansible_user": "my_server_user"
            },
            "webserver02": {
                "ansible_host": "10.16.10.16",
                "ansible_user": "my_server_user"
            }
        }
    },
    "all": {
        "children": [
            "datacenter",
            "ungrouped"
        ]
    },
    "datacenter": {
        "children": [
            "network",
            "webservers"
        ]
    },
    "leafs": {
        "hosts": [
            "leaf01",
            "leaf02"
        ]
    },
    "network": {
        "children": [
            "leafs",
            "spines"
        ]
    },
    "spines": {
        "hosts": [
            "spine01",
            "spine02"
        ]
    },
    "webservers": {
        "hosts": [
            "webserver01",
            "webserver02"
        ]
    }
}
.. _network_vault:
Protecting sensitive variables with ``ansible-vault``
================================================================================
The ``ansible-vault`` command provides encryption for files and/or individual variables like passwords. This tutorial will show you how to encrypt a single SSH password. You can use the commands below to encrypt other sensitive information, such as database passwords, privilege-escalation passwords and more.
First you must create a password for ansible-vault itself. It is used as the encryption key, and with this you can encrypt dozens of different passwords across your Ansible project. You can access all those secrets (encrypted values) with a single password (the ansible-vault password) when you run your playbooks. Here's a simple example.
Create a file and write your password for ansible-vault to it:
1. Create a file and write your password for ansible-vault to it:
.. code-block:: console
echo "my-ansible-vault-pw" > ~/my-ansible-vault-pw-file
Create the encrypted ssh password for your VyOS network devices, pulling your ansible-vault password from the file you just created:
2. Create the encrypted ssh password for your VyOS network devices, pulling your ansible-vault password from the file you just created:
.. code-block:: console
@ -210,8 +380,8 @@ This is an example using an extract from a YAML inventory, as the INI format do
vyos: # this is a group in yaml inventory, but you can also do under a host
  vars:
    ansible_connection: network_cli
    ansible_network_os: vyos
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
    ansible_password: !vault |
      $ANSIBLE_VAULT;1.2;AES256;my_user

@ -7,14 +7,15 @@ Run Your First Command and Playbook
Put the concepts you learned to work with this quick tutorial. Install Ansible, execute a network configuration command manually, execute the same command with Ansible, then create a playbook so you can execute the command any time on multiple network devices.
.. contents:: Topics
.. contents::
:local:
Prerequisites
==================================================
Before you work through this tutorial you need:
- Ansible 2.5 (or higher) installed
- Ansible 2.10 (or higher) installed
- One or more network devices that are compatible with Ansible
- Basic Linux command line knowledge
- Basic knowledge of network switch & router configuration
@ -24,14 +25,14 @@ Install Ansible
Install Ansible using your preferred method. See :ref:`installation_guide`. Then return to this tutorial.
Confirm the version of Ansible (must be >= 2.5):
Confirm the version of Ansible (must be >= 2.10):
.. code-block:: bash
ansible --version
Establish a Manual Connection to a Managed Node
Establish a manual connection to a managed node
==================================================
To confirm your credentials, connect to a network device manually and retrieve its configuration. Replace the sample user and device name with your real credentials. For example, for a VyOS router:
@ -45,14 +46,14 @@ To confirm your credentials, connect to a network device manually and retrieve i
This manual connection also establishes the authenticity of the network device, adding its RSA key fingerprint to your list of known hosts. (If you have connected to the device before, you have already established its authenticity.)
Run Your First Network Ansible Command
Run your first network Ansible command
==================================================
Instead of manually connecting and running a command on the network device, you can retrieve its configuration with a single, stripped-down Ansible command:
.. code-block:: bash
ansible all -i vyos.example.net, -c network_cli -u my_vyos_user -k -m vyos_facts -e ansible_network_os=vyos
ansible all -i vyos.example.net, -c ansible.netcommon.network_cli -u my_vyos_user -k -m vyos.vyos.vyos_facts -e ansible_network_os=vyos.vyos.vyos
The flags in this command set seven values:
- the host group(s) to which the command should apply (in this case, all)
@ -60,7 +61,7 @@ The flags in this command set seven values:
- the connection method (-c, the method for connecting and executing ansible)
- the user (-u, the username for the SSH connection)
- the SSH connection method (-k, please prompt for the password)
- the module (-m, the ansible module to run)
- the module (-m, the Ansible module to run, using the fully qualified collection name (FQCN))
- an extra variable ( -e, in this case, setting the network OS value)
NOTE: If you use ``ssh-agent`` with ssh keys, Ansible loads them automatically. You can omit the ``-k`` flag.
@ -70,29 +71,29 @@ NOTE: If you use ``ssh-agent`` with ssh keys, Ansible loads them automatically.
If you are running Ansible in a virtual environment, you will also need to add the variable ``ansible_python_interpreter=/path/to/venv/bin/python``
Create and Run Your First Network Ansible Playbook
Create and run your first network Ansible Playbook
==================================================
If you want to run this command every day, you can save it in a playbook and run it with ansible-playbook instead of ansible. The playbook can store a lot of the parameters you provided with flags at the command line, leaving less to type at the command line. You need two files for this - a playbook and an inventory file.
If you want to run this command every day, you can save it in a playbook and run it with ``ansible-playbook`` instead of ``ansible``. The playbook can store a lot of the parameters you provided with flags at the command line, leaving less to type at the command line. You need two files for this - a playbook and an inventory file.
1. Download :download:`first_playbook.yml <sample_files/first_playbook.yml>`, which looks like this:
.. literalinclude:: sample_files/first_playbook.yml
:language: YAML
The playbook sets three of the seven values from the command line above: the group (``hosts: all``), the connection method (``connection: network_cli``) and the module (in each task). With those values set in the playbook, you can omit them on the command line. The playbook also adds a second task to show the config output. When a module runs in a playbook, the output is held in memory for use by future tasks instead of written to the console. The debug task here lets you see the results in your shell.
The playbook sets three of the seven values from the command line above: the group (``hosts: all``), the connection method (``connection: ansible.netcommon.network_cli``) and the module (in each task). With those values set in the playbook, you can omit them on the command line. The playbook also adds a second task to show the config output. When a module runs in a playbook, the output is held in memory for use by future tasks instead of written to the console. The debug task here lets you see the results in your shell.
2. Run the playbook with the command:
.. code-block:: bash
ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos first_playbook.yml
ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml
The playbook contains one play with two tasks, and should generate output like this:
.. code-block:: bash
$ ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos first_playbook.yml
$ ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml
PLAY [First Playbook]
***************************************************************************************************************************
@ -104,7 +105,7 @@ The playbook contains one play with two tasks, and should generate output like t
TASK [Display the config]
***************************************************************************************************************************
ok: [vyos.example.net] => {
"msg": "The hostname is vyos and the OS is VyOS"
"msg": "The hostname is vyos and the OS is VyOS 1.1.8"
}
3. Now that you can retrieve the device config, try updating it with Ansible. Download :download:`first_playbook_ext.yml <sample_files/first_playbook_ext.yml>`, which is an extended version of the first playbook:
@ -116,7 +117,7 @@ The extended first playbook has four tasks in a single play. Run it with the sam
.. code-block:: bash
$ ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos first_playbook_ext.yml
$ ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook_ext.yml
PLAY [First Playbook]
************************************************************************************************************************************
@ -128,7 +129,7 @@ The extended first playbook has four tasks in a single play. Run it with the sam
TASK [Display the config]
*************************************************************************************************************************************
ok: [vyos.example.net] => {
"msg": "The hostname is vyos and the OS is VyOS"
"msg": "The hostname is vyos and the OS is VyOS 1.1.8"
}
TASK [Update the hostname]
@ -142,12 +143,12 @@ The extended first playbook has four tasks in a single play. Run it with the sam
TASK [Display the changed config]
*************************************************************************************************************************************
ok: [vyos.example.net] => {
"msg": "The hostname is vyos-changed and the OS is VyOS"
"msg": "The new hostname is vyos-changed and the OS is VyOS 1.1.8"
}
PLAY RECAP
************************************************************************************************************************************
vyos.example.net : ok=6 changed=1 unreachable=0 failed=0
vyos.example.net : ok=5 changed=1 unreachable=0 failed=0
@ -158,42 +159,54 @@ Gathering facts from network devices
The ``gather_facts`` keyword now supports gathering network device facts in standardized key/value pairs. You can feed these network facts into further tasks to manage the network device.
You can also use the new ``gather_network_resources`` parameter with the network ``*_facts`` modules (such as :ref:`eos_facts <eos_facts_module>`) to return just a subset of the device configuration, as shown below.
You can also use the new ``gather_network_resources`` parameter with the network ``*_facts`` modules (such as :ref:`arista.eos.eos_facts <ansible_collections.arista.eos.eos_facts_module>`) to return just a subset of the device configuration, as shown below.
.. code-block:: yaml
- hosts: arista
gather_facts: True
gather_subset: min
gather_subset: interfaces
module_defaults:
eos_facts:
arista.eos.eos_facts:
gather_network_resources: interfaces
The playbook returns the following interface facts:
.. code-block:: yaml
ansible_facts:
ansible_network_resources:
interfaces:
- enabled: true
name: Ethernet1
mtu: '1476'
- enabled: true
name: Loopback0
- enabled: true
name: Loopback1
- enabled: true
mtu: '1476'
name: Tunnel0
- enabled: true
name: Ethernet1
- enabled: true
name: Tunnel1
- enabled: true
name: Ethernet1
"network_resources": {
"interfaces": [
{
"description": "test-interface",
"enabled": true,
"mtu": "512",
"name": "Ethernet1"
},
{
"enabled": true,
"mtu": "3000",
"name": "Ethernet2"
},
{
"enabled": true,
"name": "Ethernet3"
},
{
"enabled": true,
"name": "Ethernet4"
},
{
"enabled": true,
"name": "Ethernet5"
},
{
"enabled": true,
"name": "Ethernet6"
}
]
}
Note that this returns a subset of what is returned by just setting ``gather_subset: interfaces``.
You can store these facts and use them directly in another task, such as with the :ref:`eos_interfaces <eos_interfaces_module>` resource module.
You can store these facts and use them directly in another task, such as with the :ref:`eos_interfaces <ansible_collections.arista.eos.eos_interfaces_module>` resource module.
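For example, a minimal sketch (host group and values assumed) that gathers the interface facts and feeds them straight back into the resource module:

.. code-block:: yaml

    - hosts: arista
      gather_facts: false
      tasks:
        - name: Gather the current interface configuration
          arista.eos.eos_facts:
            gather_network_resources: interfaces

        - name: Reapply the gathered interface configuration
          arista.eos.eos_interfaces:
            config: "{{ ansible_network_resources['interfaces'] }}"
            state: merged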

@ -1,10 +1,10 @@
.. _network_getting_started:
**********************************
Network Automation Getting Started
Network Getting Started
**********************************
Ansible modules support a wide range of vendors, device types, and actions, so you can manage your entire network with a single automation tool. With Ansible, you can:
Ansible collections support a wide range of vendors, device types, and actions, so you can manage your entire network with a single automation tool. With Ansible, you can:
- Automate repetitive tasks to speed routine network changes and free up your time for more strategic work
- Leverage the same simple, powerful, and agentless automation tool for network tasks that operations and development use

@ -4,7 +4,7 @@
Working with network connection options
***************************************
Network modules can support multiple connection protocols, such as ``network_cli``, ``netconf``, and ``httpapi``. These connections include some common options you can set to control how the connection to your network device behaves.
Network modules can support multiple connection protocols, such as ``ansible.netcommon.network_cli``, ``ansible.netcommon.netconf``, and ``ansible.netcommon.httpapi``. These connections include some common options you can set to control how the connection to your network device behaves.
Common options are:
@ -27,7 +27,7 @@ Using vars (per task):
.. code-block:: yaml
- name: save running-config
ios_command:
cisco.ios.ios_command:
commands: copy running-config startup-config
vars:
ansible_command_timeout: 30

@ -4,16 +4,17 @@ How Network Automation is Different
Network automation leverages the basic Ansible concepts, but there are important differences in how the network modules work. This introduction prepares you to understand the exercises in this guide.
.. contents:: Topics
.. contents::
:local:
Execution on the Control Node
Execution on the control node
================================================================================
Unlike most Ansible modules, network modules do not run on the managed nodes. From a user's point of view, network modules work like any other modules. They work with ad-hoc commands, playbooks, and roles. Behind the scenes, however, network modules use a different methodology than the other (Linux/Unix and Windows) modules use. Ansible is written and executed in Python. Because the majority of network devices cannot run Python, the Ansible network modules are executed on the Ansible control node, where ``ansible`` or ``ansible-playbook`` runs.
Network modules also use the control node as a destination for backup files, for those modules that offer a ``backup`` option. With Linux/Unix modules, where a configuration file already exists on the managed node(s), the backup file gets written by default in the same directory as the new, changed file. Network modules do not update configuration files on the managed nodes, because network configuration is not written in files. Network modules write backup files on the control node, usually in the ``backup`` directory under the playbook root directory.
Multiple Communication Protocols
Multiple communication protocols
================================================================================
Because network modules execute on the control node instead of on the managed nodes, they can support multiple communication protocols. The communication protocol (XML over SSH, CLI over SSH, API over HTTPS) selected for each network module depends on the platform and the purpose of the module. Some network modules support only one protocol; some offer a choice. The most common protocol is CLI over SSH. You set the communication protocol with the ``ansible_connection`` variable:
@ -22,26 +23,26 @@ Because network modules execute on the control node instead of on the managed no
:header: "Value of ansible_connection", "Protocol", "Requires", "Persistent?"
:widths: 30, 10, 10, 10
"network_cli", "CLI over SSH", "network_os setting", "yes"
"netconf", "XML over SSH", "network_os setting", "yes"
"httpapi", "API over HTTP/HTTPS", "network_os setting", "yes"
"ansible.netcommon.network_cli", "CLI over SSH", "network_os setting", "yes"
"ansible.netcommon.netconf", "XML over SSH", "network_os setting", "yes"
"ansible.netcommon.httpapi", "API over HTTP/HTTPS", "network_os setting", "yes"
"local", "depends on provider", "provider setting", "no"
.. note::
``httpapi`` deprecates ``eos_eapi`` and ``nxos_nxapi``. See :ref:`httpapi_plugins` for details and an example.
``ansible.netcommon.httpapi`` deprecates ``eos_eapi`` and ``nxos_nxapi``. See :ref:`httpapi_plugins` for details and an example.
Beginning with Ansible 2.6, we recommend using one of the persistent connection types listed above instead of ``local``. With persistent connections, you can define the hosts and credentials only once, rather than in every task. You also need to set the ``network_os`` variable for the specific network platform you are communicating with. For more details on using each connection type on various platforms, see the :ref:`platform-specific <platform_options>` pages.
The ``ansible_connection: local`` has been deprecated. Please use one of the persistent connection types listed above instead. With persistent connections, you can define the hosts and credentials only once, rather than in every task. You also need to set the ``network_os`` variable for the specific network platform you are communicating with. For more details on using each connection type on various platforms, see the :ref:`platform-specific <platform_options>` pages.
Modules Organized by Network Platform
Collections organized by network platform
================================================================================
A network platform is a set of network devices with a common operating system that can be managed by a collection of modules. The modules for each network platform share a prefix, for example:
A network platform is a set of network devices with a common operating system that can be managed by an Ansible collection, for example:
- Arista: ``eos_``
- Cisco: ``ios_``, ``iosxr_``, ``nxos_``
- Juniper: ``junos_``
- VyOS ``vyos_``
- Arista: `arista.eos <https://galaxy.ansible.com/arista/eos>`_
- Cisco: `cisco.ios <https://galaxy.ansible.com/cisco/ios>`_, `cisco.iosxr <https://galaxy.ansible.com/cisco/iosxr>`_, `cisco.nxos <https://galaxy.ansible.com/cisco/nxos>`_
- Juniper: `junipernetworks.junos <https://galaxy.ansible.com/junipernetworks/junos>`_
- VyOS `vyos.vyos <https://galaxy.ansible.com/vyos/vyos>`_
All modules within a network platform share certain requirements. Some network platforms have specific differences - see the :ref:`platform-specific <platform_options>` documentation for details.
@ -50,52 +51,18 @@ All modules within a network platform share certain requirements. Some network p
Privilege Escalation: ``enable`` mode, ``become``, and ``authorize``
================================================================================
Several network platforms support privilege escalation, where certain tasks must be done by a privileged user. On network devices this is called ``enable`` mode (the equivalent of ``sudo`` in \*nix administration). Ansible network modules offer privilege escalation for those network devices that support it. For details of which platforms support ``enable`` mode, with examples of how to use it, see the :ref:`platform-specific <platform_options>` documentation.
Several network platforms support privilege escalation, where certain tasks must be done by a privileged user. On network devices this is called the ``enable`` mode (the equivalent of ``sudo`` in \*nix administration). Ansible network modules offer privilege escalation for those network devices that support it. For details of which platforms support ``enable`` mode, with examples of how to use it, see the :ref:`platform-specific <platform_options>` documentation.
Using ``become`` for privilege escalation
-----------------------------------------
As of Ansible 2.6, you can use the top-level Ansible parameter ``become: yes`` with ``become_method: enable`` to run a task, play, or playbook with escalated privileges on any network platform that supports privilege escalation. You must use either ``connection: network_cli`` or ``connection: httpapi`` with ``become: yes`` with ``become_method: enable``. If you are using ``network_cli`` to connect Ansible to your network devices, a ``group_vars`` file would look like:
Use the top-level Ansible parameter ``become: yes`` with ``become_method: enable`` to run a task, play, or playbook with escalated privileges on any network platform that supports privilege escalation. You must use either ``connection: network_cli`` or ``connection: httpapi`` with ``become: yes`` with ``become_method: enable``. If you are using ``network_cli`` to connect Ansible to your network devices, a ``group_vars`` file would look like:
.. code-block:: yaml
ansible_connection: network_cli
ansible_network_os: ios
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_become: yes
ansible_become_method: enable
Legacy playbooks: ``authorize`` for privilege escalation
-----------------------------------------------------------------
If you are running Ansible 2.5 or older, some network platforms support privilege escalation but not ``network_cli`` or ``httpapi`` connections. This includes all platforms in versions 2.4 and older, and HTTPS connections using ``eapi`` in version 2.5. With a ``local`` connection, you must use a ``provider`` dictionary and include ``authorize: yes`` and ``auth_pass: my_enable_password``. For that use case, a ``group_vars`` file looks like:
.. code-block:: yaml
ansible_connection: local
ansible_network_os: eos
# provider settings
eapi:
authorize: yes
auth_pass: " {{ secret_auth_pass }}"
port: 80
transport: eapi
use_ssl: no
And you use the ``eapi`` variable in your task(s):
.. code-block:: yaml
tasks:
- name: provider demo with eos
eos_banner:
banner: motd
text: |
this is test
of multiline
string
state: present
provider: "{{ eapi }}"
Note that while Ansible 2.6 supports the use of ``connection: local`` with ``provider`` dictionaries, this usage will be deprecated in the future and eventually removed.
For more information, see :ref:`Become and Networks<become_network>`

@ -32,9 +32,8 @@ Ansible hosts module code, examples, demonstrations, and other content on GitHub
- `Network-Automation <https://github.com/network-automation>`_ is an open community for all things network automation. Have an idea, some playbooks, or roles to share? Email ansible-network@redhat.com and we will add you as a contributor to the repository.
- `Ansible <https://github.com/ansible/ansible>`_ is the main codebase, including code for network modules
- `Ansible collections <https://github.com/ansible-collections>`_ is the main repository for Ansible-maintained and community collections, including collections for network devices.
- `ansible-network <https://github.com/ansible-network>`_ is the main codebase for the Ansible network team roles
IRC and Slack

@ -27,18 +27,18 @@ To demonstrate the concept of what a role is, the example ``playbook.yml`` below
---
- name: configure cisco routers
hosts: routers
connection: network_cli
connection: ansible.netcommon.network_cli
gather_facts: no
vars:
dns: "8.8.8.8 8.8.4.4"
tasks:
- name: configure hostname
ios_config:
cisco.ios.ios_config:
lines: hostname {{ inventory_hostname }}
- name: configure DNS
ios_config:
cisco.ios.ios_config:
lines: ip name-server {{dns}}
If you run this playbook using the ``ansible-playbook`` command, you'll see the output below. This example used the ``-l`` option to limit the playbook to run only on the **rtr1** node.
@ -113,11 +113,11 @@ Next, move the content of the ``vars`` and ``tasks`` sections from the original
[user@ansible system-demo]$ cat tasks/main.yml
---
- name: configure hostname
ios_config:
cisco.ios.ios_config:
lines: hostname {{ inventory_hostname }}
- name: configure DNS
ios_config:
cisco.ios.ios_config:
lines: ip name-server {{dns}}
Next, move the variables into the ``vars/main.yml`` file:
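.. code-block:: yaml

    # vars/main.yml - a sketch based on the vars used earlier in this example
    ---
    dns: "8.8.8.8 8.8.4.4"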
@ -135,7 +135,7 @@ Finally, modify the original Ansible Playbook to remove the ``tasks`` and ``vars
---
- name: configure cisco routers
hosts: routers
connection: network_cli
connection: ansible.netcommon.network_cli
gather_facts: no
roles:
@ -211,7 +211,7 @@ Add a new ``vars`` section to the playbook to override the default behavior (whe
---
- name: configure cisco routers
hosts: routers
connection: network_cli
connection: ansible.netcommon.network_cli
gather_facts: no
vars:
dns: 1.1.1.1
@ -252,39 +252,6 @@ The result on the Cisco IOS XE router will only contain the highest precedence s
How is this useful? Why should you care? Extra vars are commonly used by network operators to override defaults. A powerful example of this is with Red Hat Ansible Tower and the Survey feature. Through the web UI, you can prompt a network operator to fill out parameters in a web form, which makes it simple for non-technical playbook writers to execute a playbook from their web browser. See `Ansible Tower Job Template Surveys <https://docs.ansible.com/ansible-tower/latest/html/userguide/workflow_templates.html#surveys>`_ for more details.
Ansible supported network roles
===============================
The Ansible Network team develops and supports a set of `network-related roles <https://galaxy.ansible.com/ansible-network>`_ on Ansible Galaxy. You can use these roles to jump start your network automation efforts. These roles are updated approximately every two weeks to give you access to the latest Ansible networking content.
These roles come in the following categories:
* **User roles** - User roles focus on tasks, such as managing your configuration. Use these roles, such as `config_manager <https://galaxy.ansible.com/ansible-network/config_manager>`_ and `cloud_vpn <https://galaxy.ansible.com/ansible-network/cloud_vpn>`_, directly in your playbooks. These roles are platform/provider agnostic, allowing you to use the same roles and playbooks across different network platforms or cloud providers.
* **Platform provider roles** - Provider roles translate between the user roles and the various network OSs, each of which has a different API. Each provider role accepts input from a supported user role and translates it for a specific network OS. Network user roles depend on these provider roles to implement their functions. For example, the `config_manager <https://galaxy.ansible.com/ansible-network/config_manager>`_ user role uses the `cisco_ios <https://galaxy.ansible.com/ansible-network/cisco_ios>`_ provider role to implement tasks on Cisco IOS network devices.
* **Cloud provider and provisioner roles** - Similarly, cloud user roles depend on cloud provider and provisioner roles to implement cloud functions for specific cloud providers. For example, the `cloud_vpn <https://galaxy.ansible.com/ansible-network/cloud_vpn>`_ role depends on the `aws <https://galaxy.ansible.com/ansible-network/aws>`_ provider role to communicate with AWS.
You need to install at least one platform provider role for your network user roles, and set ``ansible_network_provider`` to that provider (for example, ``ansible_network_provider: ansible-network.cisco_ios``). Ansible Galaxy automatically installs any other dependencies listed in the role details on Ansible Galaxy.
For example, to use the ``config_manager`` role with Cisco IOS devices, you would use the following commands:
.. code-block:: bash
[user@ansible]$ ansible-galaxy install ansible-network.cisco_ios
[user@ansible]$ ansible-galaxy install ansible-network.config_manager
Roles are fully documented with examples in Ansible Galaxy on the **Read Me** tab for each role.
Network roles release cycle
===========================
The Ansible network team releases updates and new roles every two weeks. The role details on Ansible Galaxy list the role versions available, and you can look in the GitHub repository to find the changelog file (for example, the ``cisco_ios`` `CHANGELOG.rst <https://github.com/ansible-network/cisco_ios/blob/devel/CHANGELOG.rst>`_) that lists what has changed in each version of the role.
The Ansible Galaxy role version has two components:
* Major release number (for example, 2.6), which shows the Ansible engine version this role supports.
* Minor release number (for example, .1), which denotes the role release cycle and does not reflect the Ansible engine minor release version.
Update an installed role
------------------------
@ -292,12 +259,9 @@ The Ansible Galaxy page for a role lists all available versions. To update a loc
.. code-block:: bash
[user@ansible]$ ansible-galaxy install ansible-network.network_engine,v2.7.0 --force
[user@ansible]$ ansible-galaxy install ansible-network.cisco_nxos,v2.7.1 --force
[user@ansible]$ ansible-galaxy install mynamespace.my_role,v2.7.1 --force
.. seealso::
`Ansible Galaxy documentation <https://galaxy.ansible.com/docs/>`_
Ansible Galaxy user guide
`Ansible supported network roles <https://galaxy.ansible.com/ansible-network>`_
List of Ansible-supported network and cloud roles on Ansible Galaxy

@ -1,13 +1,13 @@
---
- name: Network Getting Started First Playbook
connection: network_cli
connection: ansible.netcommon.network_cli
gather_facts: false
hosts: all
tasks:
- name: Get config for VyOS devices
vyos_facts:
vyos.vyos.vyos_facts:
gather_subset: all
- name: Display the config

@ -1,13 +1,13 @@
---
- name: Network Getting Started First Playbook Extended
connection: network_cli
connection: ansible.netcommon.network_cli
gather_facts: false
hosts: all
tasks:
- name: Get config for VyOS devices
vyos_facts:
vyos.vyos.vyos_facts:
gather_subset: all
- name: Display the config
@ -15,15 +15,15 @@
msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}"
- name: Update the hostname
vyos_config:
vyos.vyos.vyos_config:
backup: yes
lines:
- set system host-name vyos-changed
- name: Get changed config for VyOS devices
vyos_facts:
vyos.vyos.vyos_facts:
gather_subset: all
- name: Display the changed config
debug:
msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}"
msg: "The new hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}"

@ -1,7 +1,7 @@
.. _network_advanced:
**********************************
Network Automation Advanced Topics
Network Advanced Topics
**********************************
Once you have mastered the basics of network automation with Ansible, as presented in :ref:`network_getting_started`, use this guide to understand platform-specific details, optimization, and troubleshooting tips for Ansible for network automation.

@ -29,7 +29,7 @@ An ``inventory`` file is a YAML or INI-like configuration file that defines the
In our example, the inventory file defines the groups ``eos``, ``ios``, ``vyos`` and a "group of groups" called ``switches``. Further details about subgroups and inventory files can be found in the :ref:`Ansible inventory Group documentation <subgroups>`.
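For comparison, a sketch of a YAML-format inventory that defines the same groups (using the example hostnames from this section) could look like this:

.. code-block:: yaml

    all:
      children:
        switches:
          children:
            eos:
              hosts:
                eos01:
                  ansible_host: eos-01.example.net
            ios:
              hosts:
                ios01:
                  ansible_host: ios-01.example.net
            vyos:
              hosts:
                vyos01:
                  ansible_host: vyos-01.example.net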
Because Ansible is a flexible tool, there are a number of ways to specify connection information and credentials. We recommend using the ``[my_group:vars]`` capability in your inventory file. Here's what it would look like if you specified your SSH passwords (encrypted with Ansible Vault) among your variables:
Because Ansible is a flexible tool, there are a number of ways to specify connection information and credentials. We recommend using the ``[my_group:vars]`` capability in your inventory file.
.. code-block:: ini
@ -54,13 +54,7 @@ Because Ansible is a flexible tool, there are a number of ways to specify connec
ansible_become_method=enable
ansible_network_os=eos
ansible_user=my_eos_user
ansible_password= !vault |
$ANSIBLE_VAULT;1.1;AES256
37373735393636643261383066383235363664386633386432343236663533343730353361653735
6131363539383931353931653533356337353539373165320a316465383138636532343463633236
37623064393838353962386262643230303438323065356133373930646331623731656163623333
3431353332343530650a373038366364316135383063356531633066343434623631303166626532
9562
ansible_password=my_eos_password
[ios]
ios01 ansible_host=ios-01.example.net
@ -72,13 +66,7 @@ Because Ansible is a flexible tool, there are a number of ways to specify connec
ansible_become_method=enable
ansible_network_os=ios
ansible_user=my_ios_user
ansible_password= !vault |
$ANSIBLE_VAULT;1.1;AES256
34623431313336343132373235313066376238386138316466636437653938623965383732373130
3466363834613161386538393463663861636437653866620a373136356366623765373530633735
34323262363835346637346261653137626539343534643962376139366330626135393365353739
3431373064656165320a333834613461613338626161633733343566666630366133623265303563
8472
ansible_password=my_ios_password
[vyos]
vyos01 ansible_host=vyos-01.example.net
@ -88,13 +76,7 @@ Because Ansible is a flexible tool, there are a number of ways to specify connec
[vyos:vars]
ansible_network_os=vyos
ansible_user=my_vyos_user
ansible_password= !vault |
$ANSIBLE_VAULT;1.1;AES256
39336231636137663964343966653162353431333566633762393034646462353062633264303765
6331643066663534383564343537343334633031656538370a333737656236393835383863306466
62633364653238323333633337313163616566383836643030336631333431623631396364663533
3665626431626532630a353564323566316162613432373738333064366130303637616239396438
9853
ansible_password=my_vyos_password
If you use ssh-agent, you do not need the ``ansible_password`` lines. If you use ssh keys, but not ssh-agent, and you have multiple keys, specify the key to use for each connection in the ``[group:vars]`` section with ``ansible_ssh_private_key_file=/path/to/correct/key``. For more information on ``ansible_ssh_`` options see :ref:`behavioral_parameters`.
@ -107,6 +89,21 @@ Ansible vault for password encryption
The "Vault" feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See :ref:`playbooks_vault` for more information.
Here's what it would look like if you specified your SSH passwords (encrypted with Ansible Vault) among your variables:
.. code-block:: yaml
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
ansible_ssh_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
39336231636137663964343966653162353431333566633762393034646462353062633264303765
6331643066663534383564343537343334633031656538370a333737656236393835383863306466
62633364653238323333633337313163616566383836643030336631333431623631396364663533
3665626431626532630a353564323566316162613432373738333064366130303637616239396438
9853
Common inventory variables
--------------------------
@ -134,7 +131,7 @@ Certain network platforms, such as Arista EOS and Cisco IOS, have the concept of
.. code-block:: ini
[eos:vars]
ansible_connection=network_cli
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=eos
ansible_become=yes
ansible_become_method=enable
@ -198,15 +195,15 @@ Next, create a playbook file called ``facts-demo.yml`` containing the following:
# Collect data
#
- name: Gather facts (eos)
eos_facts:
arista.eos.eos_facts:
when: ansible_network_os == 'eos'
- name: Gather facts (ios)
ios_facts:
cisco.ios.ios_facts:
when: ansible_network_os == 'ios'
- name: Gather facts (vyos)
vyos_facts:
vyos.vyos.vyos_facts:
when: ansible_network_os == 'vyos'
###
@ -255,13 +252,13 @@ Next, create a playbook file called ``facts-demo.yml`` containing the following:
#
- name: Backup switch (eos)
eos_config:
arista.eos.eos_config:
backup: yes
register: backup_eos_location
when: ansible_network_os == 'eos'
- name: backup switch (vyos)
vyos_config:
vyos.vyos.vyos_config:
backup: yes
register: backup_vyos_location
when: ansible_network_os == 'vyos'
@ -343,17 +340,17 @@ This example assumes three platforms, Arista EOS, Cisco NXOS, and Juniper JunOS.
---
- name: Run Arista command
eos_command:
arista.eos.eos_command:
commands: show ip int br
when: ansible_network_os == 'eos'
- name: Run Cisco NXOS command
nxos_command:
cisco.nxos.nxos_command:
commands: show ip int br
when: ansible_network_os == 'nxos'
- name: Run Vyos command
vyos_command:
vyos.vyos.vyos_command:
commands: show interface
when: ansible_network_os == 'vyos'
@ -373,7 +370,7 @@ You can replace these platform-specific modules with the network agnostic ``cli_
- name: Run cli_command on Arista and display results
block:
- name: Run cli_command on Arista
cli_command:
ansible.netcommon.cli_command:
command: show ip int br
register: result
@ -385,7 +382,7 @@ You can replace these platform-specific modules with the network agnostic ``cli_
- name: Run cli_command on Cisco IOS and display results
block:
- name: Run cli_command on Cisco IOS
cli_command:
ansible.netcommon.cli_command:
command: show ip int br
register: result
@ -397,7 +394,7 @@ You can replace these platform-specific modules with the network agnostic ``cli_
- name: Run cli_command on Vyos and display results
block:
- name: Run cli_command on Vyos
cli_command:
ansible.netcommon.cli_command:
command: show interfaces
register: result
@ -418,7 +415,7 @@ If you use groups and group_vars by platform type, this playbook can be further
tasks:
- name: Run show command
cli_command:
ansible.netcommon.cli_command:
command: "{{show_interfaces}}"
register: command_output
@ -434,7 +431,7 @@ The ``cli_command`` also supports multiple prompts.
---
- name: Change password to default
cli_command:
ansible.netcommon.cli_command:
command: "{{ item }}"
prompt:
- "New password"
@ -449,7 +446,7 @@ The ``cli_command`` also supports multiple prompts.
- "set system root-authentication plain-text-password"
- "commit"
See the :ref:`cli_command <cli_command_module>` for full documentation on this command.
See the :ref:`ansible.netcommon.cli_command <cli_command_module>` for full documentation on this command.
Implementation Notes
@ -468,7 +465,7 @@ For more information, see :ref:`magic_variables_and_hostvars`.
Get running configuration
-------------------------
The :ref:`eos_config <eos_config_module>` and :ref:`vyos_config <vyos_config_module>` modules have a ``backup:`` option that when set will cause the module to create a full backup of the current ``running-config`` from the remote device before any changes are made. The backup file is written to the ``backup`` folder in the playbook root directory. If the directory does not exist, it is created.
The :ref:`arista.eos.eos_config <eos_config_module>` and :ref:`vyos.vyos.vyos_config <vyos_config_module>` modules have a ``backup:`` option that when set will cause the module to create a full backup of the current ``running-config`` from the remote device before any changes are made. The backup file is written to the ``backup`` folder in the playbook root directory. If the directory does not exist, it is created.
To demonstrate how we can move the backup file to a different location, we register the result and move the file to the path stored in ``backup_path``.
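A minimal sketch of that pattern (destination path assumed), using the ``backup_vyos_location`` variable registered above:

.. code-block:: yaml

    - name: Create a directory for the backups
      ansible.builtin.file:
        path: "/tmp/backups/{{ inventory_hostname }}"
        state: directory
      when: ansible_network_os == 'vyos'

    - name: Move the backup file to the new location (vyos)
      ansible.builtin.copy:
        src: "{{ backup_vyos_location.backup_path }}"
        dest: "/tmp/backups/{{ inventory_hostname }}/{{ inventory_hostname }}.bck"
      when: ansible_network_os == 'vyos'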

@ -26,7 +26,7 @@ Playbook
========
* Fixed a bug on boolean keywords that made random strings return 'False'; now they should return an error if they are not a proper boolean
Example: `diff: yes-` was returning `False`.
Example: ``diff: yes-`` was returning ``False``.
* A new fact, ``ansible_processor_nproc``, reflects the number of vcpus
available to processes (falls back to the number of vcpus available to
the scheduler).
@ -50,7 +50,7 @@ Modules
Change to Default File Permissions
----------------------------------
To address CVE-2020-1736, the default permissions for certain files created by Ansible using ``atomic_move()`` were changed from ``0o666`` to ``0o600``. The default permissions value was only used for the temporary file before it was moved into its place or newly created files. If the file existed when the new temporary file was moved into place, Ansible would use the permissions of the existing file. If there was no existing file, Ansible would retain the default file permissions, combined with the system ``umask``, of the temporary file.
To address `CVE-2020-1736 <https://nvd.nist.gov/vuln/detail/CVE-2020-1736>`_, the default permissions for certain files created by Ansible using ``atomic_move()`` were changed from ``0o666`` to ``0o600``. The default permissions value was only used for the temporary file before it was moved into its place or newly created files. If the file existed when the new temporary file was moved into place, Ansible would use the permissions of the existing file. If there was no existing file, Ansible would retain the default file permissions, combined with the system ``umask``, of the temporary file.
Most modules that call ``atomic_move()`` also call ``set_fs_attributes_if_different()`` or ``set_mode_if_different()``, which will set the permissions of the file to what is specified in the task.
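For example, a task can set the intended permissions explicitly instead of relying on the default (a minimal sketch; path and content are placeholders):

.. code-block:: yaml

    - name: Write a file with explicit permissions
      ansible.builtin.copy:
        content: "setting=value\n"
        dest: /tmp/example.conf
        mode: '0644'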

@ -286,14 +286,14 @@ Dynamic Inventory Script
If you are not familiar with Ansible's dynamic inventory scripts, check out :ref:`Intro to Dynamic Inventory <intro_dynamic_inventory>`.
The Azure Resource Manager inventory script is called `azure_rm.py <https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py>`_. It authenticates with the Azure API exactly the same as the
The Azure Resource Manager inventory script is called `azure_rm.py <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.py>`_. It authenticates with the Azure API exactly the same as the
Azure modules, which means you will either define the same environment variables described above in `Using Environment Variables`_,
create a ``$HOME/.azure/credentials`` file (also described above in `Storing in a File`_), or pass command line parameters. To see available command
line options, execute the following:
.. code-block:: bash
$ wget https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py
$ wget https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.py
$ ./azure_rm.py --help
As with all dynamic inventory scripts, the script can be executed directly, passed as a parameter to the ansible command,
@ -397,7 +397,7 @@ If you don't need the powerstate, you can improve performance by turning off pow
* AZURE_INCLUDE_POWERSTATE=no
A sample azure_rm.ini file is included along with the inventory script
`here <https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.ini>`_.
`here <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.ini>`_.
An .ini file will contain the following:
.. code-block:: ini
@ -432,7 +432,7 @@ Here are some examples using the inventory script:
.. code-block:: bash
# Download inventory script
$ wget https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py
$ wget https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/azure_rm.py
# Execute /bin/uname on all instances in the Testing resource group
$ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"

@ -204,7 +204,7 @@ examples to get you started:
Configuration
.............
You can control the behavior of the inventory script by defining environment variables, or
creating a docker.yml file (sample provided in https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/docker.py). The order of precedence is the docker.yml
creating a docker.yml file (sample provided in https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/docker.py). The order of precedence is the docker.yml
file and then environment variables.

@ -248,9 +248,9 @@ Dynamic inventory script
You can use the Infoblox dynamic inventory script to import your network node inventory with Infoblox NIOS. To gather the inventory from Infoblox, you need two files:
- `infoblox.yaml <https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.yaml>`_ - A file that specifies the NIOS provider arguments and optional filters.
- `infoblox.yaml <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/infoblox.yaml>`_ - A file that specifies the NIOS provider arguments and optional filters.
- `infoblox.py <https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.py>`_ - The python script that retrieves the NIOS inventory.
- `infoblox.py <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/infoblox.py>`_ - The python script that retrieves the NIOS inventory.
To use the Infoblox dynamic inventory script:

@ -4,7 +4,6 @@
Other useful VMware resources
*****************************
* `PyVmomi Documentation <https://github.com/vmware/pyvmomi/tree/master/docs>`_
* `VMware API and SDK Documentation <https://www.vmware.com/support/pubs/sdk_pubs.html>`_
* `VCSIM test container image <https://quay.io/repository/ansible/vcenter-test-container>`_
* `Ansible VMware community wiki page <https://github.com/ansible/community/wiki/VMware>`_

@ -35,7 +35,7 @@ Installing vCenter SSL certificates for Ansible
Installing ESXi SSL certificates for Ansible
--------------------------------------------
* Enable SSH Service on ESXi either by using Ansible VMware module `vmware_host_service_manager <https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_host_config_manager.py>`_ or manually using vSphere Web interface.
* Enable SSH Service on ESXi either by using Ansible VMware module `vmware_host_service_manager <https://github.com/ansible-collections/vmware/blob/main/plugins/modules/vmware_host_config_manager.py>`_ or manually using vSphere Web interface.
* SSH to ESXi server using administrative credentials, and navigate to directory ``/etc/vmware/ssl``

@ -13,10 +13,15 @@ Inventory
A list of managed nodes. An inventory file is also sometimes called a "hostfile". Your inventory can specify information like IP address for each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. To learn more about inventory, see :ref:`the Working with Inventory<intro_inventory>` section.
Collections
===========
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. You can install and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_. To learn more about collections, see :ref:`collections`.
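For example, a minimal sketch of a ``requirements.yml`` file (collection names are examples) that you could pass to ``ansible-galaxy collection install -r requirements.yml``:

.. code-block:: yaml

    ---
    collections:
      - name: ansible.netcommon
      - name: vyos.vyos
        version: ">=1.0.0"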
Modules
=======
The units of code Ansible executes. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. For an idea of how many modules Ansible includes, take a look at the :ref:`list of all modules <modules_by_category>`.
The units of code Ansible executes. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. Starting in Ansible 2.10, modules are grouped in collections. For an idea of how many collections Ansible includes, take a look at the :ref:`list_of_collections`.
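For example, a task that calls a module by its FQCN looks like this:

.. code-block:: yaml

    - name: Test connectivity with the ping module from the ansible.builtin collection
      ansible.builtin.ping: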
Tasks
=====

@ -28,7 +28,7 @@ Ansible integrates seamlessly with `Cobbler <https://cobbler.github.io>`_, a Lin
While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic
layer that can represent data for multiple configuration management systems (even at the same time) and serve as a 'lightweight CMDB'.
To tie your Ansible inventory to Cobbler, copy `this script <https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/cobbler.py>`_ to ``/etc/ansible`` and ``chmod +x`` the file. Run ``cobblerd`` any time you use Ansible and use the ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``) to communicate with Cobbler using Cobbler's XMLRPC API.
To tie your Ansible inventory to Cobbler, copy `this script <https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/cobbler.py>`_ to ``/etc/ansible`` and ``chmod +x`` the file. Run ``cobblerd`` any time you use Ansible and use the ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``) to communicate with Cobbler using Cobbler's XMLRPC API.
Add a ``cobbler.ini`` file in ``/etc/ansible`` so Ansible knows where the Cobbler server is and can use some cache improvements. For example:
@ -111,7 +111,7 @@ So in other words, you can use those variables in arguments/actions as well.
Inventory script example: AWS EC2
=================================
If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the `EC2 external inventory <https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py>`_ script.
If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the `EC2 external inventory <https://raw.githubusercontent.com/ansible-collections/community.aws/main/scripts/inventory/ec2.py>`_ script.
You can use this script in one of two ways. The easiest is to use Ansible's ``-i`` command line option and specify the path to the script after marking it executable:
@ -119,7 +119,7 @@ You can use this script in one of two ways. The easiest is to use Ansible's ``-i
ansible -i ec2.py -u ubuntu us-east-1d -m ping
The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You must also copy the `ec2.ini <https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.ini>`_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally.
The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You must also copy the `ec2.ini <https://raw.githubusercontent.com/ansible-collections/community.aws/main/scripts/inventory/ec2.ini>`_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally.
To make a successful API call to AWS, you must configure Boto (the Python interface to AWS). You can do this in `several ways <http://docs.pythonboto.org/en/latest/boto_config_tut.html>`_, but the simplest is to export two environment variables:
@ -133,7 +133,7 @@ You can test the script by itself to make sure your config is correct:
.. code-block:: bash
cd /etc/ansible/
wget https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible-collections/community.aws/main/scripts/inventory/ec2.py
./ec2.py --list
After a few moments, you should see your entire EC2 inventory across all regions in JSON.
@ -254,7 +254,7 @@ To see the complete list of variables available for an instance, run the script
.. code-block:: bash
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible-collections/community.aws/main/scripts/inventory/ec2.py
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com
Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable in ec2.ini. To

@ -4,22 +4,28 @@
Debugging tasks
***************
Ansible offers a task debugger so you can try to fix errors during execution instead of fixing them in the playbook and then running it again. You have access to all of the features of the debugger in the context of the task. You can check or set the value of variables, update module arguments, and re-run the task with the new variables and arguments. The debugger lets you resolve the cause of the failure and continue with playbook execution.
Ansible offers a task debugger so you can fix errors during execution instead of editing your playbook and running it again to see if your change worked. You have access to all of the features of the debugger in the context of the task. You can check or set the value of variables, update module arguments, and re-run the task with the new variables and arguments. The debugger lets you resolve the cause of the failure and continue with playbook execution.
.. contents::
:local:
Invoking the debugger
Enabling the debugger
=====================
There are multiple ways to invoke the debugger.
The debugger is not enabled by default. If you want to invoke the debugger during playbook execution, you must enable it first.
Using the debugger keyword
--------------------------
Use one of these three methods to enable the debugger:
* with the debugger keyword
* in configuration or an environment variable, or
* as a strategy
Enabling the debugger with the ``debugger`` keyword
---------------------------------------------------
.. versionadded:: 2.5
The ``debugger`` keyword can be used on any block where you provide a ``name`` attribute, such as a play, role, block or task. The ``debugger`` keyword accepts five values:
You can use the ``debugger`` keyword to enable (or disable) the debugger for a specific play, role, block, or task. This option is especially useful when developing or extending playbooks, plays, and roles. You can enable the debugger on new or updated tasks. If they fail, you can fix the errors efficiently. The ``debugger`` keyword accepts five values:
.. table::
:class: documentation-table
@ -33,22 +39,29 @@ The ``debugger`` keyword can be used on any block where you provide a ``name`` a
on_failed Only invoke the debugger if a task fails
on_unreachable Only invoke the debugger if a host was unreachable
on_unreachable Only invoke the debugger if a host is unreachable
on_skipped Only invoke the debugger if the task is skipped
========================= ======================================================
When you use the ``debugger`` keyword, the setting you use overrides any global configuration to enable or disable the debugger. If you define ``debugger`` at two different levels, for example in a role and in a task, the more specific definition wins: the definition on a task overrides the definition on a block, which overrides the definition on a role or play.
When you use the ``debugger`` keyword, the value you specify overrides any global configuration to enable or disable the debugger. If you define ``debugger`` at multiple levels, such as in a role and in a task, Ansible honors the most granular definition. The definition at the play or role level applies to all blocks and tasks within that play or role, unless they specify a different value. The definition at the block level overrides the definition at the play or role level, and applies to all tasks within that block, unless they specify a different value. The definition at the task level always applies to the task; it overrides the definitions at the block, play, or role level.
Here are examples of invoking the debugger with the ``debugger`` keyword::
Examples of using the ``debugger`` keyword
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Example of setting the ``debugger`` keyword on a task:
.. code-block:: yaml
# on a task
- name: Execute a command
command: "false"
debugger: on_failed
# on a play
Example of setting the ``debugger`` keyword on a play:
.. code-block:: yaml
- name: My play
hosts: all
debugger: on_skipped
@ -57,7 +70,10 @@ Here are examples of invoking the debugger with the ``debugger`` keyword::
command: "true"
when: False
In the example below, the task will open the debugger when it fails, because the task-level definition overrides the play-level definition::
Example of setting the ``debugger`` keyword at multiple levels:
.. code-block:: yaml
- name: Play
hosts: all
@ -67,50 +83,51 @@ In the example below, the task will open the debugger when it fails, because the
command: "false"
debugger: on_failed
In configuration or an environment variable
-------------------------------------------
In this example, the debugger is set to ``never`` at the play level and to ``on_failed`` at the task level. If the task fails, Ansible invokes the debugger, because the definition on the task overrides the definition on its parent play.
Enabling the debugger in configuration or an environment variable
-----------------------------------------------------------------
.. versionadded:: 2.5
You can turn the task debugger on or off globally with a setting in ansible.cfg or with an environment variable. The only options are ``True`` or ``False``. If you set the configuration option or environment variable to ``True``, Ansible runs the debugger on failed tasks by default.
You can enable the task debugger globally with a setting in ansible.cfg or with an environment variable. The only options are ``True`` or ``False``. If you set the configuration option or environment variable to ``True``, Ansible runs the debugger on failed tasks by default.
To invoke the task debugger from ansible.cfg::
To enable the task debugger from ansible.cfg, add this setting to the defaults section::
[defaults]
enable_task_debugger = True
To use an environment variable to invoke the task debugger::
To enable the task debugger with an environment variable, pass the variable when you run your playbook::
ANSIBLE_ENABLE_TASK_DEBUGGER=True ansible-playbook -i hosts site.yml
When you invoke the debugger using this method, any failed task will invoke the debugger, unless it is explicitly disabled for that role, play, block, or task. If you need more granular control over what conditions trigger the debugger, use the ``debugger`` keyword.
When you enable the debugger globally, every failed task invokes the debugger, unless the role, play, block, or task explicitly disables the debugger. If you need more granular control over what conditions trigger the debugger, use the ``debugger`` keyword.
As a strategy
-------------
Enabling the debugger as a strategy
-----------------------------------
.. note::
If you are running legacy playbooks or roles, you may see the debugger enabled as a :ref:`strategy <strategy_plugins>`. You can do this at the play level, in ansible.cfg, or with the environment variable ``ANSIBLE_STRATEGY=debug``. For example:
This backwards-compatible method, which matches Ansible versions before 2.5, may be removed in a future release.
To use the ``debug`` strategy, change the ``strategy`` attribute like this::
- hosts: test
strategy: debug
tasks:
...
.. code-block:: yaml
You can also set the strategy to ``debug`` with the environment variable ``ANSIBLE_STRATEGY=debug``, or by modifying ``ansible.cfg``:
- hosts: test
strategy: debug
tasks:
...
.. code-block:: yaml
Or in ansible.cfg::
[defaults]
strategy = debug
.. note::
This backwards-compatible method, which matches Ansible versions before 2.5, may be removed in a future release.
Using the debugger
==================
Resolving errors in the debugger
================================
Once you invoke the debugger, you can use the seven :ref:`debugger commands <available_commands>` to work through the error Ansible encountered. For example, the playbook below defines the ``var1`` variable but uses the ``wrong_var`` variable, which is undefined, by mistake.
After Ansible invokes the debugger, you can use the seven :ref:`debugger commands <available_commands>` to resolve the error that Ansible encountered. Consider this example playbook, which defines the ``var1`` variable but uses the undefined ``wrong_var`` variable in a task by mistake.
.. code-block:: yaml
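    # A sketch of such a playbook (host group and values assumed for illustration)
    - hosts: test
      debugger: on_failed
      gather_facts: false
      vars:
        var1: value1
      tasks:
        - name: Use an undefined variable by mistake
          ansible.builtin.ping:
            data: "{{ wrong_var }}"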
@ -158,7 +175,7 @@ If you run this playbook, Ansible invokes the debugger when the task fails. From
PLAY RECAP *********************************************************************
192.0.2.10 : ok=1 changed=0 unreachable=0 failed=0
As the example above shows, once the task arguments use ``var1`` instead of ``wrong_var``, the task runs successfully.
Changing the task arguments in the debugger to use ``var1`` instead of ``wrong_var`` makes the task run successfully.
.. _available_commands:
@ -295,11 +312,10 @@ Quit command
``q`` or ``quit`` quits the debugger. The playbook execution is aborted.
Debugging and the free strategy
===============================
If you use the debugger with the ``free`` strategy, Ansible does not queue or execute any further tasks while the debugger is active. However, previously queued tasks remain in the queue and run as soon as you exit the debugger. If you use ``redo`` to reschedule a task from the debugger, other queued tasks may execute before your rescheduled task.
How the debugger interacts with the free strategy
=================================================
With the default ``linear`` strategy enabled, Ansible halts execution while the debugger is active, and runs the debugged task immediately after you enter the ``redo`` command. With the ``free`` strategy enabled, however, Ansible does not wait for all hosts, and may queue later tasks on one host before a task fails on another host. With the ``free`` strategy, Ansible does not queue or execute any tasks while the debugger is active. However, all queued tasks remain in the queue and run as soon as you exit the debugger. If you use ``redo`` to reschedule a task from the debugger, other queued tasks may execute before your rescheduled task. For more information about strategies, see :ref:`playbooks_strategies`.
.. seealso::

@ -4,7 +4,7 @@
Roles
*****
Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily re-use them and share them with other users.
Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily reuse them and share them with other users.
.. contents::
:local:
@ -98,9 +98,9 @@ Using roles
You can use roles in three ways:
- at the play level with the ``roles`` option,
- at the tasks level with ``include_role``, or
- at the tasks level with ``import_role``
- at the play level with the ``roles`` option: This is the classic way of using roles in a play.
- at the tasks level with ``include_role``: You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``.
- at the tasks level with ``import_role``: You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``.
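For example, a minimal sketch (role names are assumed) that shows all three approaches in one play:

.. code-block:: yaml

   - hosts: webservers
     roles:
       - common                  # classic play-level reuse
     tasks:
       - name: reuse a role dynamically
         include_role:
           name: example_dynamic
       - name: reuse a role statically
         import_role:
           name: example_static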
.. _roles_keyword:
@ -162,10 +162,10 @@ When you add a tag to the ``role`` option, Ansible applies the tag to ALL tasks
When using ``vars:`` within the ``roles:`` section of a playbook, the variables are added to the play variables, making them available to all tasks within the play before and after the role. This behavior can be changed by :ref:`DEFAULT_PRIVATE_ROLE_VARS`.
Including roles: dynamic re-use
-------------------------------
Including roles: dynamic reuse
------------------------------
You can re-use roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``. While roles added in a ``roles`` section run before any other tasks in a playbook, included roles run in the order they are defined. If there are other tasks before an ``include_role`` task, the other tasks will run first.
You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``. While roles added in a ``roles`` section run before any other tasks in a playbook, included roles run in the order they are defined. If there are other tasks before an ``include_role`` task, the other tasks will run first.
To include a role:
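A minimal sketch (role name and message text are assumed):

.. code-block:: yaml

   ---
   - hosts: webservers
     tasks:
       - name: Print a message
         debug:
           msg: "this task runs before the example role"
       - name: Include the example role
         include_role:
           name: example
       - name: Print another message
         debug:
           msg: "this task runs after the example role"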
@ -209,10 +209,10 @@ You can conditionally include a role:
name: some_role
when: "ansible_facts['os_family'] == 'RedHat'"
Importing roles: static re-use
------------------------------
Importing roles: static reuse
-----------------------------
You can re-use roles statically anywhere in the ``tasks`` section of a play using ``import_role``. The behavior is the same as using the ``roles`` keyword. For example:
You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``. The behavior is the same as using the ``roles`` keyword. For example:
.. code-block:: yaml
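   # a minimal sketch; role and task names are assumed
   - hosts: webservers
     tasks:
       - name: Print a message
         debug:
           msg: "before we run our role"
       - name: Import the example role statically
         import_role:
           name: example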
@ -321,8 +321,8 @@ Role dependencies are stored in the ``meta/main.yml`` file within the role direc
Ansible always executes role dependencies before the role that includes them. Ansible executes recursive role dependencies as well. If one role depends on a second role, and the second role depends on a third role, Ansible executes the third role, then the second role, then the first role.
Running role dependencies multiple times
----------------------------------------
Running role dependencies multiple times in one playbook
--------------------------------------------------------
Ansible treats duplicate role dependencies like duplicate roles listed under ``roles:``: Ansible only executes role dependencies once, even if defined multiple times, unless the parameters defined on the role are different for each definition. If two roles in a playbook both list a third role as a dependency, Ansible only runs that role dependency once, unless you pass different parameters or use ``allow_duplicates: true`` in the dependent (third) role. See :ref:`Galaxy role dependencies <galaxy_dependencies>` for more details.
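For example, a minimal ``meta/main.yml`` sketch (role names and parameters are assumed) for a role that depends on two other roles:

.. code-block:: yaml

   ---
   dependencies:
     - role: common
       vars:
         some_parameter: 3
     - role: apache
       vars:
         apache_port: 80

To allow a role to run more than once with the same parameters, set ``allow_duplicates: true`` in that role's own ``meta/main.yml``.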
@ -386,7 +386,8 @@ Embedding modules and plugins in roles
If you write a custom module (see :ref:`developing_modules`) or a plugin (see :ref:`developing_plugins`), you might wish to distribute it as part of a role. For example, if you write a module that helps configure your company's internal software, and you want other people in your organization to use this module, but you do not want to tell everyone how to configure their Ansible library path, you can include the module in your internal_config role.
Alongside the 'tasks' and 'handlers' structure of a role, add a directory named 'library'. In this 'library' directory, then include the module directly inside of it.
To add a module or a plugin to a role:
Alongside the 'tasks' and 'handlers' structure of a role, add a directory named 'library' and then include the module directly inside the 'library' directory.
Assuming you had this:
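A hypothetical layout (the module file name is assumed; ``internal_config`` matches the example role named above):

.. code-block:: text

   roles/
      internal_config/
         tasks/
            main.yml
         handlers/
            main.yml
         library/
            my_config_module.py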

@ -4,9 +4,9 @@
Using Variables
***************
Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. To represent the variations among those different systems, you can create variables with standard YAML syntax, including lists and dictionaries. You can set these variables in your playbooks, in your :ref:`inventory <intro_inventory>`, in re-usable :ref:`files <playbooks_reuse>` or :ref:`roles <playbooks_reuse_roles>`, or at the command line. You can also create variables during a playbook run by registering the return value or values of a task as a new variable.
Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. To represent the variations among those different systems, you can create variables with standard YAML syntax, including lists and dictionaries. You can define these variables in your playbooks, in your :ref:`inventory <intro_inventory>`, in re-usable :ref:`files <playbooks_reuse>` or :ref:`roles <playbooks_reuse_roles>`, or at the command line. You can also create variables during a playbook run by registering the return value or values of a task as a new variable.
You can use the variables you created in module arguments, in :ref:`conditional "when" statements <playbooks_conditionals>`, in :ref:`templates <playbooks_templating>`, and in :ref:`loops <playbooks_loops>`. The `ansible-examples github repository <https://github.com/ansible/ansible-examples>`_ contains many examples of using variables in Ansible.
After you create variables, either by defining them in a file, passing them at the command line, or registering the return value or values of a task as a new variable, you can use those variables in module arguments, in :ref:`conditional "when" statements <playbooks_conditionals>`, in :ref:`templates <playbooks_templating>`, and in :ref:`loops <playbooks_loops>`. The `ansible-examples github repository <https://github.com/ansible/ansible-examples>`_ contains many examples of using variables in Ansible.
Once you understand the concepts and examples on this page, read about :ref:`Ansible facts <vars_and_facts>`, which are variables you retrieve from remote systems.
@ -18,10 +18,12 @@ Once you understand the concepts and examples on this page, read about :ref:`Ans
Creating valid variable names
=============================
Not all strings are valid Ansible variable names. A variable name can only include letters, numbers, and underscores. `Python keywords`_ and :ref:`playbook keywords<playbook_keywords>` are not valid variable names. A variable name cannot begin with a number.
Not all strings are valid Ansible variable names. A variable name can only include letters, numbers, and underscores. `Python keywords`_ or :ref:`playbook keywords<playbook_keywords>` are not valid variable names. A variable name cannot begin with a number.
Variable names can begin with an underscore. In many programming languages, variables that begin with an underscore are private. This is not true in Ansible. Variables that begin with an underscore are treated exactly the same as any other variable. Do not rely on this convention for privacy or security.
This table gives examples of valid and invalid variable names:
.. table::
:class: documentation-table
@ -42,7 +44,7 @@ Variable names can begin with an underscore. In many programming languages, vari
Simple variables
================
Simple variables combine a variable name with a single value. You can use this syntax (and the syntax for lists and dictionaries shown below) in a variety of places. See :ref:`setting_variables` for information on where to set variables.
Simple variables combine a variable name with a single value. You can use this syntax (and the syntax for lists and dictionaries shown below) in a variety of places. For details about setting variables in inventory, in playbooks, in reusable files, in roles, or at the command line, see :ref:`setting_variables`.
Defining simple variables
-------------------------
@ -54,7 +56,7 @@ You can define a simple variable using standard YAML syntax. For example::
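   # a simple variable; this path is the one referenced in the template example below
   remote_install_path: /opt/my_app_config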
Referencing simple variables
----------------------------
Once you have defined a variable, use Jinja2 syntax to reference it. Jinja2 variables use double curly braces. For example, the expression ``My amp goes to {{ max_amp_value }}`` demonstrates the most basic form of variable substitution. You can use Jinja2 syntax in playbooks. For example::
After you define a variable, use Jinja2 syntax to reference it. Jinja2 variables use double curly braces. For example, the expression ``My amp goes to {{ max_amp_value }}`` demonstrates the most basic form of variable substitution. You can use Jinja2 syntax in playbooks. For example::
template: src=foo.cfg.j2 dest={{ remote_install_path }}/foo.cfg
@ -69,7 +71,7 @@ In this example, the variable defines the location of a file, which can vary fro
When to quote variables (a YAML gotcha)
=======================================
If you start a value with ``{{ foo }}``, you must quote the whole expression to create valid YAML syntax. If you do not quote the whole expression, the YAML parser cannot interpret the syntax - it might be a variable or it might be the start of a YAML dictionary. See the :ref:`yaml_syntax` documentation for more guidance on writing YAML.
If you start a value with ``{{ foo }}``, you must quote the whole expression to create valid YAML syntax. If you do not quote the whole expression, the YAML parser cannot interpret the syntax - it might be a variable or it might be the start of a YAML dictionary. For guidance on writing YAML, see the :ref:`yaml_syntax` documentation.
If you use a variable without quotes like this::
@ -83,6 +85,13 @@ You will see: ``ERROR! Syntax Error while loading YAML.`` If you add quotes, Ans
vars:
app_path: "{{ base_path }}/22"
.. _list_variables:
List variables
==============
A list variable combines a variable name with multiple values. The multiple values can be stored as an itemized list or in square brackets ``[]``, separated with commas.
Defining variables as lists
---------------------------
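A minimal sketch (the ``region`` list matches the indexed reference described below)::

   region:
     - northeast
     - southeast
     - midwest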
@ -102,9 +111,13 @@ When you use variables defined as a list (also called an array), you can use ind
The value of this expression would be "northeast".
.. _dictionary_variables:
Dictionary variables
====================
A dictionary stores the data in key-value pairs. Usually, dictionaries are used to store related data, such as the information contained in an ID or a user profile.
Defining variables as key:value dictionaries
--------------------------------------------
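A minimal sketch (field names and values are assumed)::

   foo:
     field1: one
     field2: two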
@ -144,7 +157,7 @@ You can create variables from the output of an Ansible task with the task keywor
- shell: /usr/bin/bar
when: foo_result.rc == 5
See :ref:`playbooks_conditionals` for more examples. Registered variables may be simple variables, list variables, dictionary variables, or complex nested data structures. The documentation for each module includes a ``RETURN`` section describing the return values for that module. To see the values for a particular task, run your playbook with ``-v``.
For more examples of using registered variables in conditions on later tasks, see :ref:`playbooks_conditionals`. Registered variables may be simple variables, list variables, dictionary variables, or complex nested data structures. The documentation for each module includes a ``RETURN`` section describing the return values for that module. To see the values for a particular task, run your playbook with ``-v``.
Registered variables are stored in memory. You cannot cache registered variables for use in future plays. Registered variables are only valid on the host for the rest of the current playbook run.
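Expanding the fragment above into a complete sketch (command paths are assumed)::

   - hosts: web_servers
     tasks:
       - name: run a command and register the result
         shell: /usr/bin/foo
         register: foo_result
         ignore_errors: true
       - name: run another command only when the first one returned 5
         shell: /usr/bin/bar
         when: foo_result.rc == 5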
@ -161,7 +174,7 @@ Many registered variables (and :ref:`facts <vars_and_facts>`) are nested YAML or
{{ ansible_facts["eth0"]["ipv4"]["address"] }}
Using the dot notation::
To reference an IP address from your facts using the dot notation::
{{ ansible_facts.eth0.ipv4.address }}
@ -171,26 +184,26 @@ Using the dot notation::
Transforming variables with Jinja2 filters
==========================================
Jinja2 filters let you transform the value of a variable within a template expression. For example, the ``capitalize`` filter capitalizes any value passed to it; the ``to_yaml`` and ``to_json`` filters change the format of your variable values. Jinja2 includes many `built-in filters <http://jinja.pocoo.org/docs/templates/#builtin-filters>`_ and Ansible supplies many more filters. See :ref:`playbooks_filters` for examples.
Jinja2 filters let you transform the value of a variable within a template expression. For example, the ``capitalize`` filter capitalizes any value passed to it; the ``to_yaml`` and ``to_json`` filters change the format of your variable values. Jinja2 includes many `built-in filters <http://jinja.pocoo.org/docs/templates/#builtin-filters>`_ and Ansible supplies many more filters. To find more examples of filters, see :ref:`playbooks_filters`.
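For example, a minimal sketch (variable names are assumed) that applies two filters in a task::

   - name: show transformed values
     debug:
       msg: "{{ my_text | capitalize }} / {{ my_data | to_json }}"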
.. _setting_variables:
Where to set variables
======================
You can set variables in a variety of places, including in inventory, in playbooks, in re-usable files, in roles, and at the command line. Ansible loads every possible variable it finds, then chooses the variable to apply based on :ref:`variable precedence rules <ansible_variable_precedence>`.
You can define variables in a variety of places, such as in inventory, in playbooks, in reusable files, in roles, and at the command line. Ansible loads every possible variable it finds, then chooses the variable to apply based on :ref:`variable precedence rules <ansible_variable_precedence>`.
.. _variables_in_inventory:
Setting variables in inventory
------------------------------
Defining variables in inventory
-------------------------------
You can set different variables for each individual host, or set shared variables for a group of hosts in your inventory. For example, if all machines in the ``[Boston]`` group use 'boston.ntp.example.com' as an NTP server, you can set a group variable. The :ref:`intro_inventory` page has details on setting :ref:`host variables <host_variables>` and :ref:`group variables <group_variables>` in inventory.
You can define different variables for each individual host, or set shared variables for a group of hosts in your inventory. For example, if all machines in the ``[Boston]`` group use 'boston.ntp.example.com' as an NTP server, you can set a group variable. The :ref:`intro_inventory` page has details on setting :ref:`host variables <host_variables>` and :ref:`group variables <group_variables>` in inventory.
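For example, a minimal INI inventory sketch (host names are assumed) that sets the NTP server for every host in the ``[Boston]`` group::

   [Boston]
   host1.example.com
   host2.example.com

   [Boston:vars]
   ntp_server=boston.ntp.example.com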
.. _playbook_variables:
Setting variables in a playbook
-------------------------------
Defining variables in a playbook
--------------------------------
You can define variables directly in a playbook::
@ -198,17 +211,17 @@ You can define variables directly in a playbook::
vars:
http_port: 80
When you set variables in a playbook, they are visible to anyone who runs that playbook. This is especially useful if you share playbooks widely.
When you define variables in a playbook, they are visible to anyone who runs that playbook. This is especially useful if you share playbooks widely.
.. _included_variables:
.. _variable_file_separation_details:
Setting variables in included files and roles
---------------------------------------------
Defining variables in included files and roles
----------------------------------------------
You can set variables in re-usable variables files and/or in re-usable roles. See :ref:`playbooks_reuse` for more details.
You can define variables in reusable variables files, in reusable roles, or both. When you define variables in reusable variables files, the sensitive variables are separated from playbooks. This separation enables you to store your playbooks in source control software and even share the playbooks without the risk of exposing passwords or other sensitive and personal data. For information about creating reusable files and roles, see :ref:`playbooks_reuse`.
Setting variables in included variables files lets you separate sensitive variables from playbooks, so you can keep your playbooks under source control and even share them without exposing passwords or other private information. You can do this by using an external variables file, or files, just like this::
This example shows how you can include variables defined in an external file::
---
@ -224,7 +237,7 @@ Setting variables in included variables files lets you separate sensitive variab
- name: this is just a placeholder
command: /bin/echo foo
The contents of each variables file is a simple YAML dictionary, like this::
The contents of each variables file is a simple YAML dictionary. For example::
---
# in the above example, this would be vars/external_vars.yml
@ -232,19 +245,19 @@ The contents of each variables file is a simple YAML dictionary, like this::
password: magic
.. note::
You can keep per-host and per-group variables in similar files, see :ref:`splitting_out_vars`.
You can keep per-host and per-group variables in similar files. To learn about organizing your variables, see :ref:`splitting_out_vars`.
.. _passing_variables_on_the_command_line:
Setting variables at runtime
----------------------------
Defining variables at runtime
-----------------------------
You can set variables when you run your playbook by passing variables at the command line using the ``--extra-vars`` (or ``-e``) argument. You can also request user input with a ``vars_prompt`` (see :ref:`playbooks_prompts`). When you pass variables at the command line, use a single quoted string (containing one or more variables) in one of the formats below.
You can define variables when you run your playbook by passing variables at the command line using the ``--extra-vars`` (or ``-e``) argument. You can also request user input with a ``vars_prompt`` (see :ref:`playbooks_prompts`). When you pass variables at the command line, use a single quoted string that contains one or more variables, in one of the formats below.
key=value format
^^^^^^^^^^^^^^^^
Values passed in using the ``key=value`` syntax are interpreted as strings. Use the JSON format if you need to pass non-string values (Booleans, integers, floats, lists, and so on).
Values passed in using the ``key=value`` syntax are interpreted as strings. Use the JSON format if you need to pass non-string values such as Booleans, integers, floats, lists, and so on.
.. code-block:: text
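   # an illustrative command; the playbook name and variables are placeholders
   ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"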
@ -281,11 +294,14 @@ Variable precedence: Where should I put a variable?
You can set multiple variables with the same name in many different places. When you do this, Ansible loads every possible variable it finds, then chooses the variable to apply based on variable precedence. In other words, the different variables will override each other in a certain order.
Ansible configuration, command-line options, and playbook keywords can also affect Ansible behavior. In general, variables take precedence, so that host-specific settings can override more general settings. For examples and more details on the precedence of these various settings, see :ref:`general_precedence_rules`.
Teams and projects that agree on guidelines for defining variables (where to define certain types of variables) usually avoid variable precedence concerns. We suggest that you define each variable in one place: figure out where to define a variable, and keep it simple. For examples, see :ref:`variable_examples`.
Some behavioral parameters that you can set in variables you can also set in Ansible configuration, as command-line options, and using playbook keywords. For example, you can define the user Ansible uses to connect to remote devices as a variable with ``ansible_user``, in a configuration file with ``DEFAULT_REMOTE_USER``, as a command-line option with ``-u``, and with the playbook keyword ``remote_user``. If you define the same parameter in a variable and by another method, the variable overrides the other setting. This approach allows host-specific settings to override more general settings. For examples and more details on the precedence of these various settings, see :ref:`general_precedence_rules`.
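For example, a minimal sketch (user and host names are assumed) where an inventory variable overrides the playbook keyword for one host::

   # inventory
   web1 ansible_user=deploy_user

   # playbook
   - hosts: web1
     remote_user: generic_user   # overridden by ansible_user for web1
     tasks:
       - ping: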
Teams and projects that agree on guidelines for defining variables (where to define certain types of variables) usually avoid variable precedence concerns. We suggest you define each variable in one place: figure out where to define a variable, and keep it simple. However, this is not always possible.
Understanding variable precedence
---------------------------------
Ansible does apply variable precedence, and you might have a use for it. Here is the order of precedence from least to greatest (the last listed variables winning prioritization):
Ansible does apply variable precedence, and you might have a use for it. Here is the order of precedence from least to greatest (the last listed variables override all other variables):
#. command line values (for example, ``-u my_user``, these are not variables)
#. role defaults (defined in role/defaults/main.yml) [1]_
@ -310,9 +326,9 @@ Ansible does apply variable precedence, and you might have a use for it. Here is
#. include params
#. extra vars (for example, ``-e "user=my_user"``) (always win precedence)
In general, Ansible gives higher precedence to variables that were defined more recently, more actively, and with more explicit scope. Variables in the the defaults folder inside a role are easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in the namespace. Host and/or inventory variables override role defaults, but do not override explicit includes like the vars directory or an ``include_vars`` task.
In general, Ansible gives precedence to variables that were defined more recently, more actively, and with more explicit scope. Variables in the defaults folder inside a role are easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in the namespace. Host and/or inventory variables override role defaults, but explicit includes such as the vars directory or an ``include_vars`` task override inventory variables.
Ansible merges different variables set in inventory so that more specific settings override more generic settings. For example, ``ansible_ssh_user`` specified as a group_var has a higher precedence than ``ansible_user`` specified as a host_var. See :ref:`how_we_merge` for more details on the precedence of variables set in inventory.
Ansible merges different variables set in inventory so that more specific settings override more generic settings. For example, ``ansible_ssh_user`` specified as a group_var is overridden by ``ansible_user`` specified as a host_var. For details about the precedence of variables set in inventory, see :ref:`how_we_merge`.
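A minimal sketch of that inventory layering (file paths follow the standard ``group_vars``/``host_vars`` layout, user names are assumed)::

   # group_vars/webservers.yml
   ansible_ssh_user: group_level_user

   # host_vars/web1.yml
   ansible_user: host_level_user   # more specific, so this setting wins for web1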
.. rubric:: Footnotes
@ -338,7 +354,7 @@ You can decide where to set a variable based on the scope you want that value to
* Play: each play and contained structures, vars entries (vars; vars_files; vars_prompt), role defaults and vars.
* Host: variables directly associated to a host, like inventory, include_vars, facts or registered task outputs
Inside a template you automatically have access to all variables that are in scope for a host, plus any registered variables, facts, and magic variables.
Inside a template, you automatically have access to all variables that are in scope for a host, plus any registered variables, facts, and magic variables.
.. _variable_examples:

@ -0,0 +1,134 @@
#!/usr/bin/env python
# Copyright: (c) 2018, Terry Jones <terry.jones@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: my_test
short_description: This is my test module
# If this is part of a collection, you need to use semantic versioning,
# i.e. the version is of the form "2.5.0" and not "2.4".
version_added: "1.0.0"
description: This is my longer description explaining my test module.
options:
name:
description: This is the message to send to the test module.
required: true
type: str
new:
description:
- Control to demo if the result of this module is changed or not.
- Parameter description can be a list as well.
required: false
type: bool
# Specify this value according to your collection
# in format of namespace.collection.doc_fragment_name
extends_documentation_fragment:
- my_namespace.my_collection.my_doc_fragment_name
author:
- Your Name (@yourGitHubHandle)
'''
EXAMPLES = r'''
# Pass in a message
- name: Test with a message
my_namespace.my_collection.my_test:
name: hello world
# pass in a message and have changed true
- name: Test with a message and changed output
my_namespace.my_collection.my_test:
name: hello world
new: true
# fail the module
- name: Test failure of the module
my_namespace.my_collection.my_test:
name: fail me
'''
RETURN = r'''
# These are examples of possible return values, and in general should use other names for return values.
original_message:
description: The original name param that was passed in.
type: str
returned: always
sample: 'hello world'
message:
description: The output message that the test module generates.
type: str
returned: always
sample: 'goodbye'
'''
from ansible.module_utils.basic import AnsibleModule
def run_module():
# define available arguments/parameters a user can pass to the module
module_args = dict(
name=dict(type='str', required=True),
new=dict(type='bool', required=False, default=False)
)
# seed the result dict in the object
# we primarily care about changed and state
# changed is if this module effectively modified the target
# state will include any data that you want your module to pass back
# for consumption, for example, in a subsequent task
result = dict(
changed=False,
original_message='',
message=''
)
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
# if the user is working with this module in only check mode we do not
# want to make any changes to the environment, just return the current
# state with no modifications
if module.check_mode:
module.exit_json(**result)
# manipulate or modify the state as needed (this is going to be the
# part where your module will do what it needs to do)
result['original_message'] = module.params['name']
result['message'] = 'goodbye'
# use whatever logic you need to determine whether or not this module
# made any modifications to your target
if module.params['new']:
result['changed'] = True
# during the execution of the module, if there is an exception or a
# conditional state that effectively causes a failure, run
# AnsibleModule.fail_json() to pass in the message and the result
if module.params['name'] == 'fail me':
module.fail_json(msg='You requested this to fail', **result)
# in the event of a successful module execution, you will want to
# simply call AnsibleModule.exit_json(), passing the key/value results
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()

@ -0,0 +1,96 @@
#!/usr/bin/env python
# Copyright: (c) 2020, Your Name <YourName@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: my_test_facts
short_description: This is my test facts module
version_added: "1.0.0"
description: This is my longer description explaining my test facts module.
author:
- Your Name (@yourGitHubHandle)
'''
EXAMPLES = r'''
- name: Return ansible_facts
my_namespace.my_collection.my_test_facts:
'''
RETURN = r'''
# These are examples of possible return values, and in general should use other names for return values.
ansible_facts:
description: Facts to add to ansible_facts.
returned: always
type: dict
contains:
foo:
description: Foo facts about operating system.
type: str
returned: when operating system foo fact is present
sample: 'bar'
answer:
description:
- Answer facts about operating system.
- This description can be a list as well.
type: str
returned: when operating system answer fact is present
sample: '42'
'''
from ansible.module_utils.basic import AnsibleModule
def run_module():
# define available arguments/parameters a user can pass to the module
module_args = dict()
# seed the result dict in the object
# we primarily care about changed and state
# changed is if this module effectively modified the target
# state will include any data that you want your module to pass back
# for consumption, for example, in a subsequent task
result = dict(
changed=False,
ansible_facts=dict(),
)
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
# if the user is working with this module in only check mode we do not
# want to make any changes to the environment, just return the current
# state with no modifications
if module.check_mode:
module.exit_json(**result)
# manipulate or modify the state as needed (this is going to be the
# part where your module will do what it needs to do)
result['ansible_facts'] = {
'foo': 'bar',
'answer': '42',
}
# in the event of a successful module execution, you will want to
# simply call AnsibleModule.exit_json(), passing the key/value results
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()

@ -0,0 +1,111 @@
#!/usr/bin/env python
# Copyright: (c) 2020, Your Name <YourName@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: my_test_info
short_description: This is my test info module
version_added: "1.0.0"
description: This is my longer description explaining my test info module.
options:
name:
description: This is the message to send to the test module.
required: true
type: str
author:
- Your Name (@yourGitHubHandle)
'''
EXAMPLES = r'''
# Pass in a message
- name: Test with a message
my_namespace.my_collection.my_test_info:
name: hello world
'''
RETURN = r'''
# These are examples of possible return values, and in general should use other names for return values.
original_message:
description: The original name param that was passed in.
type: str
returned: always
sample: 'hello world'
message:
description: The output message that the test module generates.
type: str
returned: always
sample: 'goodbye'
my_useful_info:
description: The dictionary containing information about your system.
type: dict
returned: always
sample: {
'foo': 'bar',
'answer': 42,
}
'''
from ansible.module_utils.basic import AnsibleModule
def run_module():
# define available arguments/parameters a user can pass to the module
module_args = dict(
name=dict(type='str', required=True),
)
# seed the result dict in the object
# we primarily care about changed and state
# changed is if this module effectively modified the target
# state will include any data that you want your module to pass back
# for consumption, for example, in a subsequent task
result = dict(
changed=False,
original_message='',
message='',
my_useful_info={},
)
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
# if the user is working with this module in only check mode we do not
# want to make any changes to the environment, just return the current
# state with no modifications
if module.check_mode:
module.exit_json(**result)
# manipulate or modify the state as needed (this is going to be the
# part where your module will do what it needs to do)
result['original_message'] = module.params['name']
result['message'] = 'goodbye'
result['my_useful_info'] = {
'foo': 'bar',
'answer': 42,
}
# in the event of a successful module execution, you will want to
# simply call AnsibleModule.exit_json(), passing the key/value results
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()

@ -48,7 +48,7 @@ ANSIBLE_COW_WHITELIST:
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This options forces color mode even when running without a TTY or the "nocolor" setting is True.
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
@ -90,7 +90,7 @@ ANSIBLE_PIPELINING:
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This options is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING

@ -41,7 +41,7 @@ options:
user:
description:
- The specific user whose crontab should be modified.
- When unset, this parameter defaults to using C(root).
- When unset, this parameter defaults to the current user.
type: str
job:
description:
@ -222,7 +222,7 @@ class CronTab(object):
"""
CronTab object to write time based crontab file
user - the user of the crontab (defaults to root)
user - the user of the crontab (defaults to current user)
cron_file - a cron file under /etc/cron.d, or an absolute path
"""

@ -28,7 +28,7 @@ DOCUMENTATION = """
required: True
encrypt:
description:
- Which hash scheme to encrypt the returning password, should be one hash scheme from C(passlib.hash).
- Which hash scheme to encrypt the returning password, should be one hash scheme from C(passlib.hash): md5_crypt, bcrypt, sha256_crypt, or sha512_crypt.
- If not provided, the password will be returned in plain text.
- Note that the password is always stored as plain text, only the returning password is encrypted.
- Encrypt also forces saving the salt value for idempotence.

@ -18,8 +18,9 @@ DOCUMENTATION = """
skip_missing:
default: False
description:
- If set to True, the lookup plugin will skip the lists items that do not contain the given subkey.
If False, the plugin will yield an error and complain about the missing subkey.
- The lookup accepts this flag as an optional key inside the dictionary of terms. See the Examples section for more information.
- If set to C(True), the lookup plugin will skip the list items that do not contain the given subkey.
- If set to C(False), the plugin will yield an error and complain about the missing subkey.
"""
EXAMPLES = """
@ -74,7 +75,7 @@ EXAMPLES = """
- name: list groups for users that have them, don't error if groups key is missing
debug: var=item
loop: "{{lookup('subelements', users, 'groups', {'skip_missing': True})}}"
loop: "{{ q('subelements', users, 'groups', {'skip_missing': True}) }}"
"""
RETURN = """

@ -54,6 +54,9 @@ def assemble_files_to_ship(complete_file_list):
'.mailmap',
# Possibly should be included
'examples/scripts/uptime.py',
'examples/scripts/my_test.py',
'examples/scripts/my_test_info.py',
'examples/scripts/my_test_facts.py',
'examples/DOCUMENTATION.yml',
'examples/play.yml',
'examples/hosts.yaml',
