Backport hacktoberfest 4 (#79070)

* docs: replace latin terms with english in the reference_appendices (#79010)

(cherry picked from commit 4d3c12ae9e)

* docs: replace latin terms with english in the scenario_guides (#79008)

(cherry picked from commit 367cdae3b2)

* Docs: true/false with boolean values (#78980)

* Adding FQCN community.postgresql. to postgresql modules

(cherry picked from commit 56285b1d2b)

* Docs: Add code-block wrappers to code examples (#79037)

* Docs: Add code-block wrappers to code examples

(cherry picked from commit 63b5fc4b8d)

* Docs: Provide descriptive labels for http references (#78959)

(cherry picked from commit f7c01bc866)

* Docs: Adding code blocks wrapper (#79042)

* Adding code blocks wrapper

(cherry picked from commit 3a788314a2)

* Cleaned up test_strategies doc (#79045)

Authored-by: Shade Alabsa <shadealabsa@microsoft.com>
(cherry picked from commit 3fc3371463)

* Docs: fixed configs docs to properly display code blocks (#79040)

* fixed some docs to properly display code blocks
Co-authored-by: Shade Alabsa <shadealabsa@microsoft.com>

(cherry picked from commit 25a770de37)

* Docs: Add code-block wrappers in faq.rst (#79047)

(cherry picked from commit 35700f57cc)

* Docs: Add code-block wrappers in lookup.rst & strategy.rst (#79032, #79033) (#79048)

(cherry picked from commit 680bf029b1)

* Add code-block wrappers in network_debug_troubleshooting.rst

(cherry picked from commit 57f22529cb)

Co-authored-by: Samuel Gaist <samuel.gaist@idiap.ch>
Co-authored-by: prayatharth <73949575+prayatharth@users.noreply.github.com>
Co-authored-by: Mudit Choudhary <74391865+muditchoudhary@users.noreply.github.com>
Co-authored-by: Blaster4385 <53873108+Blaster4385@users.noreply.github.com>
Co-authored-by: Deepshri M <92997066+Deepshaded@users.noreply.github.com>
Co-authored-by: shade34321 <shade34321@users.noreply.github.com>
Co-authored-by: Shellylo <54233305+Shellylo@users.noreply.github.com>
Co-authored-by: Shubhadeep Das <dshubhadeep@gmail.com>
Sandra McCann 2 years ago committed by GitHub
parent 9d7989fbe8
commit b5d8c954b8

@@ -27,7 +27,7 @@ Let's say we want to test the ``postgresql_user`` module invoked with the ``name
 .. code-block:: yaml
 - name: Create PostgreSQL user and store module's output to the result variable
-postgresql_user:
+community.postgresql.postgresql_user:
 name: test_user
 register: result
@@ -37,7 +37,7 @@ Let's say we want to test the ``postgresql_user`` module invoked with the ``name
 - result is changed
 - name: Check actual system state with another module, in other words, that the user exists
-postgresql_query:
+community.postgresql.postgresql_query:
 query: SELECT * FROM pg_authid WHERE rolename = 'test_user'
 register: query_result
@@ -129,7 +129,7 @@ To check a task:
 2. If the module changes the system state, check the actual system state using at least one other module. For example, if the module changes a file, we can check that the file has been changed by checking its checksum with the :ref:`stat <ansible_collections.ansible.builtin.stat_module>` module before and after the test tasks.
 3. Run the same task with ``check_mode: yes`` if check-mode is supported by the module. Check with other modules that the actual system state has not been changed.
-4. Cover cases when the module must fail. Use the ``ignore_errors: yes`` option and check the returned message with the ``assert`` module.
+4. Cover cases when the module must fail. Use the ``ignore_errors: true`` option and check the returned message with the ``assert`` module.
 Example:
@@ -139,7 +139,6 @@
 abstract_module:
 ...
 register: result
 ignore_errors: yes
 - name: Check the task fails and its error message
 assert:

@@ -189,7 +189,7 @@ With all of the targets now removed, the current state is as if we do not have a
 creates: /etc/postgresql/12/
 - name: Start PostgreSQL service
-service:
+ansible.builtin.service:
 name: postgresql
 state: started
@@ -213,9 +213,9 @@ That is enough for our very basic example.
 .. code-block:: yaml
 - name: Test postgresql_info module
-become: yes
+become: true
 become_user: postgres
-postgresql_info:
+community.postgresql.postgresql_info:
 login_user: postgres
 login_db: postgres
 register: result
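Taken together, the FQCN and boolean fixes in the hunks above produce test tasks shaped like the following. This is an illustrative sketch, not part of the commit; the user name and query mirror the surrounding examples:

```yaml
# Run the module under test, then verify both its report and the real system state.
- name: Create PostgreSQL user and store module's output to the result variable
  become: true
  become_user: postgres
  community.postgresql.postgresql_user:
    name: test_user
  register: result

- name: Check the module reports a change
  ansible.builtin.assert:
    that:
      - result is changed

- name: Check actual system state with another module
  become: true
  become_user: postgres
  community.postgresql.postgresql_query:
    query: SELECT * FROM pg_authid WHERE rolename = 'test_user'
  register: query_result
```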

@@ -44,7 +44,7 @@ We will add the following code to the file.
 # https://github.com/ansible-collections/community.postgresql/issues/NUM
 - name: Test user name containing underscore
-postgresql_user:
+community.postgresql.postgresql_user:
 name: underscored_user
 register: result
@@ -54,7 +54,7 @@ We will add the following code to the file.
 - result is changed
 - name: Query the database if the user exists
-postgresql_query:
+community.postgresql.postgresql_query:
 query: SELECT * FROM pg_authid WHERE rolename = 'underscored_user'
 register: result
@@ -108,9 +108,8 @@ We will add the following code to the file.
 # https://github.com/ansible-collections/community.postgresql/issues/NUM
 # We should also run the same tasks with check_mode: yes. We omit it here for simplicity.
 - name: Test for new_option, create new user WITHOUT the attribute
-postgresql_user:
+community.postgresql.postgresql_user:
 name: test_user
 add_attribute: no
 register: result
 - name: Check the module returns what we expect
@@ -119,7 +118,7 @@ We will add the following code to the file.
 - result is changed
 - name: Query the database if the user exists but does not have the attribute (it is NULL)
-postgresql_query:
+community.postgresql.postgresql_query:
 query: SELECT * FROM pg_authid WHERE rolename = 'test_user' AND attribute = NULL
 register: result
@@ -129,9 +128,8 @@ We will add the following code to the file.
 - result.query_result.rowcount == 1
 - name: Test for new_option, create new user WITH the attribute
-postgresql_user:
+community.postgresql.postgresql_user:
 name: test_user
 add_attribute: yes
 register: result
 - name: Check the module returns what we expect
@@ -140,7 +138,7 @@ We will add the following code to the file.
 - result is changed
 - name: Query the database if the user has the attribute (it is TRUE)
-postgresql_query:
+community.postgresql.postgresql_query:
 query: SELECT * FROM pg_authid WHERE rolename = 'test_user' AND attribute = 't'
 register: result
@@ -153,16 +151,16 @@ Then we :ref:`run the tests<collection_run_integration_tests>` with ``postgresql
 In reality, we would alternate the tasks above with the same tasks run with the ``check_mode: yes`` option to be sure our option works as expected in check-mode as well. See :ref:`Recommendations on coverage<collection_integration_recommendations>` for details.
-If we expect a task to fail, we use the ``ignore_errors: yes`` option and check that the task actually failed and returned the message we expect:
+If we expect a task to fail, we use the ``ignore_errors: true`` option and check that the task actually failed and returned the message we expect:
 .. code-block:: yaml
 - name: Test for fail_when_true option
-postgresql_user:
+community.postgresql.postgresql_user:
 name: test_user
-fail_when_true: yes
+fail_when_true: true
 register: result
-ignore_errors: yes
+ignore_errors: true
 - name: Check the module fails and returns message we expect
 assert:

@@ -70,4 +70,4 @@ Several commonly-used utilities migrated to collections in Ansible 2.10, includi
 - ``ismount.py`` migrated to ``ansible.posix.plugins.module_utils.mount.py`` - Single helper function that fixes os.path.ismount
 - ``known_hosts.py`` migrated to ``community.general.plugins.module_utils.known_hosts.py`` - utilities for working with known_hosts file
-For a list of migrated content with destination collections, see https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml.
+For a list of migrated content with destination collections, see the `runtime.yml file <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_.

@@ -695,7 +695,7 @@ idempotent and does not report changes. For example:
 path: C:\temp
 state: absent
 register: remove_file_check
-check_mode: yes
+check_mode: true
 - name: get result of remove a file (check mode)
 win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
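The check-mode pattern in this hunk, run a task with ``check_mode: true`` and then verify the system was not actually changed, generalizes beyond Windows. A minimal sketch with builtin modules (the path is illustrative, not from the commit):

```yaml
# Run a destructive task in check mode, then prove nothing really changed.
- name: Remove a file (check mode only, no change is made)
  ansible.builtin.file:
    path: /tmp/example_file   # illustrative path
    state: absent
  register: remove_file_check
  check_mode: true

- name: Get the actual state of the file after the check-mode run
  ansible.builtin.stat:
    path: /tmp/example_file
  register: file_stat

- name: Assert check mode reported a change without performing it
  ansible.builtin.assert:
    that:
      - remove_file_check is changed
      - file_stat.stat.exists   # assumes the file existed before the test
```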

@@ -11,4 +11,4 @@ Additionally, the following code is checked against Python versions supported on
 * ``lib/ansible/modules/``
 * ``lib/ansible/module_utils/``
-See https://mypy.readthedocs.io/en/stable/ for additional details.
+See `the mypy documentation <https://mypy.readthedocs.io/en/stable/>`_

@@ -95,7 +95,7 @@ Codes
 invalid-examples Documentation Error ``EXAMPLES`` is not valid YAML
 invalid-extension Naming Error Official Ansible modules must have a ``.py`` extension for python modules or a ``.ps1`` for powershell modules
 invalid-module-schema Documentation Error ``AnsibleModule`` schema validation error
-invalid-removal-version Documentation Error The version at which a feature is supposed to be removed cannot be parsed (for collections, it must be a semantic version, see https://semver.org/)
+invalid-removal-version Documentation Error The version at which a feature is supposed to be removed cannot be parsed (for collections, it must be a `semantic version <https://semver.org/>`_)
 invalid-requires-extension Naming Error Module ``#AnsibleRequires -CSharpUtil`` should not end in .cs, Module ``#Requires`` should not end in .psm1
 missing-doc-fragment Documentation Error ``DOCUMENTATION`` fragment missing
 missing-existing-doc-fragment Documentation Warning Pre-existing ``DOCUMENTATION`` fragment missing

@@ -87,7 +87,9 @@ From the log notice:
 If the log reports the port as ``None`` this means that the default port is being used.
 A future Ansible release will improve this message so that the port is always logged.
-Because the log files are verbose, you can use grep to look for specific information. For example, once you have identified the ``pid`` from the ``creating new control socket for host`` line you can search for other connection log entries::
+Because the log files are verbose, you can use grep to look for specific information. For example, once you have identified the ``pid`` from the ``creating new control socket for host`` line you can search for other connection log entries:
+
+.. code:: shell
 grep "p=28990" $ANSIBLE_LOG_PATH
@@ -164,7 +166,9 @@ For Ansible this can be done by ensuring you are only running against one remote
 * Using ``ansible-playbook --limit switch1.example.net...``
 * Using an ad hoc ``ansible`` command
-`ad hoc` refers to running Ansible to perform some quick command using ``/usr/bin/ansible``, rather than the orchestration language, which is ``/usr/bin/ansible-playbook``. In this case we can ensure connectivity by attempting to execute a single command on the remote device::
+`ad hoc` refers to running Ansible to perform some quick command using ``/usr/bin/ansible``, rather than the orchestration language, which is ``/usr/bin/ansible-playbook``. In this case we can ensure connectivity by attempting to execute a single command on the remote device:
+
+.. code-block:: text
 ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k

@@ -2106,7 +2106,7 @@ To get a date object from a string use the `to_datetime` filter:
 # get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
 {{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
-.. note:: For a full list of format codes for working with python date format strings, see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.
+.. note:: For a full list of format codes for working with python date format strings, see the `python datetime documentation <https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior>`_.
 .. versionadded:: 2.4
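The filter expression shown in this hunk can be exercised directly in a task. A minimal sketch, not part of the commit (the timestamps are the same illustrative values used above):

```yaml
# Print the number of whole days between two timestamps; subtracting two
# to_datetime results yields a Python timedelta, whose .days attribute is used.
- name: Show the number of whole days between two dates
  ansible.builtin.debug:
    msg: "{{ (('2016-08-14 20:00:12' | to_datetime) - ('2015-12-25' | to_datetime('%Y-%m-%d'))).days }}"
```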

@@ -134,7 +134,7 @@ Inventory plugins that support caching can use the general settings for the fact
 # demo.aws_ec2.yml
 plugin: amazon.aws.aws_ec2
-cache: yes
+cache: true
 cache_plugin: ansible.builtin.jsonfile
 cache_timeout: 7200
 cache_connection: /tmp/aws_inventory

@@ -39,14 +39,18 @@ You can use lookup plugins anywhere you can use templating in Ansible: in a play
 vars:
 file_contents: "{{ lookup('file', 'path/to/file.txt') }}"
-Lookups are an integral part of loops. Wherever you see ``with_``, the part after the underscore is the name of a lookup. For this reason, lookups are expected to output lists; for example, ``with_items`` uses the :ref:`items <items_lookup>` lookup::
+Lookups are an integral part of loops. Wherever you see ``with_``, the part after the underscore is the name of a lookup. For this reason, lookups are expected to output lists; for example, ``with_items`` uses the :ref:`items <items_lookup>` lookup:
+
+.. code-block:: YAML+Jinja
 tasks:
 - name: count to 3
 debug: msg={{ item }}
 with_items: [1, 2, 3]
-You can combine lookups with :ref:`filters <playbooks_filters>`, :ref:`tests <playbooks_tests>` and even each other to do some complex data generation and manipulation. For example::
+You can combine lookups with :ref:`filters <playbooks_filters>`, :ref:`tests <playbooks_tests>` and even each other to do some complex data generation and manipulation. For example:
+
+.. code-block:: YAML+Jinja
 tasks:
 - name: valid but useless and over complicated chained lookups and filters
@@ -60,7 +64,9 @@ You can combine lookups with :ref:`filters <playbooks_filters>`, :ref:`tests <pl
 You can control how errors behave in all lookup plugins by setting ``errors`` to ``ignore``, ``warn``, or ``strict``. The default setting is ``strict``, which causes the task to fail if the lookup returns an error. For example:
-To ignore lookup errors::
+To ignore lookup errors:
+
+.. code-block:: YAML+Jinja
 - name: if this file does not exist, I do not care .. file plugin itself warns anyway ...
 debug: msg="{{ lookup('file', '/nosuchfile', errors='ignore') }}"
@@ -74,7 +80,9 @@ To ignore lookup errors::
 }
-To get a warning instead of a failure::
+To get a warning instead of a failure:
+
+.. code-block:: YAML+Jinja
 - name: if this file does not exist, let me know, but continue
 debug: msg="{{ lookup('file', '/nosuchfile', errors='warn') }}"
@@ -90,7 +98,9 @@ To get a warning instead of a failure::
 }
-To get a fatal error (the default)::
+To get a fatal error (the default):
+
+.. code-block:: YAML+Jinja
 - name: if this file does not exist, FAIL (this is the default)
 debug: msg="{{ lookup('file', '/nosuchfile', errors='strict') }}"

@@ -35,7 +35,9 @@ or in the `ansible.cfg` file:
 [defaults]
 strategy=linear
-You can also specify the strategy plugin in the play via the :ref:`strategy keyword <playbook_keywords>` in a play::
+You can also specify the strategy plugin in the play via the :ref:`strategy keyword <playbook_keywords>` in a play:
+
+.. code-block:: yaml
 - hosts: all
 strategy: debug

@@ -26,7 +26,9 @@ to write lists and dictionaries in YAML.
 There's another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) can optionally
 begin with ``---`` and end with ``...``. This is part of the YAML format and indicates the start and end of a document.
-All members of a list are lines beginning at the same indentation level starting with a ``"- "`` (a dash and a space)::
+All members of a list are lines beginning at the same indentation level starting with a ``"- "`` (a dash and a space):
+
+.. code:: yaml
 ---
 # A list of tasty fruits
@@ -36,7 +38,9 @@ All members of a list are lines beginning at the same indentation level starting
 - Mango
 ...
-A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space)::
+A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space):
+
+.. code:: yaml
 # An employee record
 martin:
@@ -44,7 +48,9 @@ A dictionary is represented in a simple ``key: value`` form (the colon must be f
 job: Developer
 skill: Elite
-More complicated data structures are possible, such as lists of dictionaries, dictionaries whose values are lists or a mix of both::
+More complicated data structures are possible, such as lists of dictionaries, dictionaries whose values are lists or a mix of both:
+
+.. code:: yaml
 # Employee records
 - martin:
@@ -62,7 +68,9 @@ More complicated data structures are possible, such as lists of dictionaries, di
 - fortran
 - erlang
-Dictionaries and lists can also be represented in an abbreviated form if you really want to::
+Dictionaries and lists can also be represented in an abbreviated form if you really want to:
+
+.. code:: yaml
 ---
 martin: {name: Martin D'vloper, job: Developer, skill: Elite}
@@ -72,7 +80,9 @@ These are called "Flow collections".
 .. _truthiness:
-Ansible doesn't really use these too much, but you can also specify a :ref:`boolean value <playbooks_variables>` (true/false) in several forms::
+Ansible doesn't really use these too much, but you can also specify a :ref:`boolean value <playbooks_variables>` (true/false) in several forms:
+
+.. code:: yaml
 create_key: true
 needs_agent: false
@@ -85,7 +95,9 @@ Use lowercase 'true' or 'false' for boolean values in dictionaries if you want t
 Values can span multiple lines using ``|`` or ``>``. Spanning multiple lines using a "Literal Block Scalar" ``|`` will include the newlines and any trailing spaces.
 Using a "Folded Block Scalar" ``>`` will fold newlines to spaces; it's used to make what would otherwise be a very long line easier to read and edit.
 In either case the indentation will be ignored.
-Examples are::
+Examples are:
+
+.. code:: yaml
 include_newlines: |
 exactly as you see
@@ -97,7 +109,9 @@ Examples are::
 single line of text
 despite appearances
-While in the above ``>`` example all newlines are folded into spaces, there are two ways to enforce a newline to be kept::
+While in the above ``>`` example all newlines are folded into spaces, there are two ways to enforce a newline to be kept:
+
+.. code:: yaml
 fold_some_newlines: >
 a
@@ -108,12 +122,16 @@ While in the above ``>`` example all newlines are folded into spaces, there are
 e
 f
-Alternatively, it can be enforced by including newline ``\n`` characters::
+Alternatively, it can be enforced by including newline ``\n`` characters:
+
+.. code:: yaml
 fold_same_newlines: "a b\nc d\n e\nf\n"
 Let's combine what we learned so far in an arbitrary YAML example.
-This really has nothing to do with Ansible, but will give you a feel for the format::
+This really has nothing to do with Ansible, but will give you a feel for the format:
+
+.. code:: yaml
 ---
 # An employee record
@@ -144,17 +162,23 @@ While you can put just about anything into an unquoted scalar, there are some ex
 A colon followed by a space (or newline) ``": "`` is an indicator for a mapping.
 A space followed by the pound sign ``" #"`` starts a comment.
-Because of this, the following is going to result in a YAML syntax error::
+Because of this, the following is going to result in a YAML syntax error:
+
+.. code:: text
 foo: somebody said I should put a colon here: so I did
 windows_drive: c:
-...but this will work::
+...but this will work:
+
+.. code:: yaml
 windows_path: c:\windows
-You will want to quote hash values using colons followed by a space or the end of the line::
+You will want to quote hash values using colons followed by a space or the end of the line:
+
+.. code:: yaml
 foo: 'somebody said I should put a colon here: so I did'
@@ -162,14 +186,18 @@ You will want to quote hash values using colons followed by a space or the end o
 ...and then the colon will be preserved.
-Alternatively, you can use double quotes::
+Alternatively, you can use double quotes:
+
+.. code:: yaml
 foo: "somebody said I should put a colon here: so I did"
 windows_drive: "c:"
 The difference between single quotes and double quotes is that in double quotes
-you can use escapes::
+you can use escapes:
+
+.. code:: yaml
 foo: "a \t TAB and a \n NEWLINE"
@@ -183,17 +211,23 @@ The following is invalid YAML:
 Further, Ansible uses "{{ var }}" for variables. If a value after a colon starts
-with a "{", YAML will think it is a dictionary, so you must quote it, like so::
+with a "{", YAML will think it is a dictionary, so you must quote it, like so:
+
+.. code:: yaml
 foo: "{{ variable }}"
-If your value starts with a quote the entire value must be quoted, not just part of it. Here are some additional examples of how to properly quote things::
+If your value starts with a quote the entire value must be quoted, not just part of it. Here are some additional examples of how to properly quote things:
+
+.. code:: yaml
 foo: "{{ variable }}/additional/string/literal"
 foo2: "{{ variable }}\\backslashes\\are\\also\\special\\characters"
 foo3: "even if it's just a string literal it must all be quoted"
-Not valid::
+Not valid:
+
+.. code:: text
 foo: "E:\\path\\"rest\\of\\path
@@ -203,14 +237,18 @@ as the first character of an unquoted scalar: ``[] {} > | * & ! % # ` @ ,``.
 You should also be aware of ``? : -``. In YAML, they are allowed at the beginning of a string if a non-space
 character follows, but YAML processor implementations differ, so it's better to use quotes.
-In Flow Collections, the rules are a bit more strict::
+In Flow Collections, the rules are a bit more strict:
+
+.. code:: text
 a scalar in block mapping: this } is [ all , valid
 flow mapping: { key: "you { should [ use , quotes here" }
 Boolean conversion is helpful, but this can be a problem when you want a literal `yes` or other boolean values as a string.
-In these cases just use quotes::
+In these cases just use quotes:
+
+.. code:: yaml
 non_boolean: "yes"
 other_string: "False"
@@ -219,7 +257,9 @@ In these cases just use quotes::
 YAML converts certain strings into floating-point values, such as the string
 `1.0`. If you need to specify a version number (in a requirements.yml file, for
 example), you will need to quote the value if it looks like a floating-point
-value::
+value:
+
+.. code:: yaml
 version: "1.0"

@@ -114,7 +114,9 @@ to the relevant host(s). Consider the following inventory group:
 foo ansible_host=192.0.2.1
 bar ansible_host=192.0.2.2
-You can create `group_vars/gatewayed.yml` with the following contents::
+You can create `group_vars/gatewayed.yml` with the following contents:
+
+.. code-block:: yaml
 ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'
@@ -166,10 +168,10 @@ want on the system if :command:`/usr/bin/python` on your system does not point t
 Python interpreter.
 Some platforms may only have Python 3 installed by default. If it is not installed as
-:command:`/usr/bin/python`, you will need to configure the path to the interpreter via
+:command:`/usr/bin/python`, you will need to configure the path to the interpreter through
 ``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some
 special purpose ones which do not or you may encounter a bug in an edge case. As a temporary
-workaround you can install Python 2 on the managed host and configure Ansible to use that Python via
+workaround you can install Python 2 on the managed host and configure Ansible to use that Python through
 ``ansible_python_interpreter``. If there's no mention in the module's documentation that the module
 requires Python 2, you can also report a bug on our `bug tracker
 <https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release.
@@ -223,7 +225,7 @@ If you want to run under Python 3 instead of Python 2 you may want to change tha
 $ source ./ansible/bin/activate
 $ pip install ansible
-If you need to use any libraries which are not available via pip (for instance, SELinux Python
+If you need to use any libraries which are not available through pip (for instance, SELinux Python
 bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you
 need to install them into the virtualenv. There are two methods:
@@ -274,17 +276,23 @@ is likely the problem. There are several workarounds:
 * You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using
 (see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`,
-and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set::
+and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set:
+
+.. code-block:: ini
 remote_tmp=$HOME/.ansible/tmp
-In Ansible 2.5 and later, you can also set it per-host in inventory like this::
+In Ansible 2.5 and later, you can also set it per-host in inventory like this:
+
+.. code-block:: ini
 solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
 * You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For
 instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set
-this in inventory like so::
+this in inventory like so:
+
+.. code-block:: ini
 solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
@@ -299,7 +307,7 @@ There are a few common errors that one might run into when trying to execute Ans
 To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.
-* When ``pipelining = False`` in `/etc/ansible/ansible.cfg` then Ansible modules are transferred in binary mode via sftp however execution of python fails with
+* When ``pipelining = False`` in `/etc/ansible/ansible.cfg` then Ansible modules are transferred in binary mode through sftp however execution of python fails with
 .. error::
 SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
@@ -313,6 +321,8 @@ There are a few common errors that one might run into when trying to execute Ans
 To fix this set the path to the python installation in your inventory like so::
+
+.. code-block:: ini
 zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python
* Start of python fails with ``The module libpython2.7.so was not found.``
@@ -320,7 +330,9 @@ There are a few common errors that one might run into when trying to execute Ans
 .. error::
 EE3501S The module libpython2.7.so was not found.
-On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``::
+On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``:
+
+.. code-block:: ini
 zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash
@@ -333,7 +345,9 @@ It is known that it will not correctly expand the default tmp directory Ansible
 If you see module failures, this is likely the problem.
 The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see documentation of the shell plugin you are using for specifics).
-For example, in the ansible config file (or via environment variable) you can set::
+For example, in the ansible config file (or through environment variable) you can set:
+
+.. code-block:: ini
 remote_tmp=$HOME/.ansible/tmp
@@ -429,7 +443,9 @@ file with a list of servers. To do this, you can just access the "$groups" dicti
 {% endfor %}
 If you need to access facts about these hosts, for instance, the IP address of each hostname,
-you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers::
+you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:
+
+.. code-block:: yaml
 - hosts: db_servers
 tasks:
@@ -449,7 +465,7 @@ How do I access a variable name programmatically?
 +++++++++++++++++++++++++++++++++++++++++++++++++
 An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
-via a role parameter or other input. Variable names can be built by adding strings together using "~", like so:
+through a role parameter or other input. Variable names can be built by adding strings together using "~", like so:
 .. code-block:: jinja
@@ -495,7 +511,9 @@ Anyway, here's the trick:
 {{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
 Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you
-could use the Jinja2 '#set' directive to simplify this, or in a playbook, you could also use set_fact::
+could use the Jinja2 '#set' directive to simplify this, or in a playbook, you could also use set_fact:
+
+.. code-block:: yaml+jinja
 - set_fact: headnode={{ groups['webservers'][0] }}
@@ -518,7 +536,9 @@ How do I access shell environment variables?
 **On controller machine :** Access existing variables from controller use the ``env`` lookup plugin.
-For example, to access the value of the HOME environment variable on the management machine::
+For example, to access the value of the HOME environment variable on the management machine:
+
+.. code-block:: yaml+jinja
 ---
 # ...
@@ -526,7 +546,7 @@ For example, to access the value of the HOME environment variable on the managem
 local_home: "{{ lookup('env','HOME') }}"
-**On target machines :** Environment variables are available via facts in the ``ansible_env`` variable:
+**On target machines :** Environment variables are available through facts in the ``ansible_env`` variable:
 .. code-block:: jinja
@@ -613,7 +633,9 @@ When is it unsafe to bulk-set task arguments from a variable?
 You can set all of a task's arguments from a dictionary-typed variable. This
 technique can be useful in some dynamic execution scenarios. However, it
 introduces a security risk. We do not recommend it, so Ansible issues a
-warning when you do something like this::
+warning when you do something like this:
+
+.. code-block:: yaml+jinja
 #...
 vars:
@@ -663,7 +685,9 @@ How do I keep secret data in my playbook?
 If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.
-If you have a task that you don't want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful::
+If you have a task that you don't want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:
+
+.. code-block:: yaml+jinja
 - name: secret task
 shell: /usr/bin/do_something --value={{ secret_value }}
@@ -671,14 +695,16 @@ If you have a task that you don't want to show the results or command given to i
 This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
-The ``no_log`` attribute can also apply to an entire play::
+The ``no_log`` attribute can also apply to an entire play:
+
+.. code-block:: yaml
 - hosts: all
 no_log: True
 Though this will make the play somewhat difficult to debug. It's recommended that this
 be applied to single tasks only, once a playbook is completed. Note that the use of the
-``no_log`` attribute does not prevent data from being shown when debugging Ansible itself via
+``no_log`` attribute does not prevent data from being shown when debugging Ansible itself through
 the :envvar:`ANSIBLE_DEBUG` environment variable.
@@ -724,7 +750,9 @@ How do I get the original ansible_host when I delegate a task?
 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 As the documentation states, connection variables are taken from the ``delegate_to`` host so ``ansible_host`` is overwritten,
-but you can still access the original via ``hostvars``::
+but you can still access the original through ``hostvars``:
+
+.. code-block:: yaml+jinja
 original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
@@ -737,7 +765,9 @@ How do I fix 'protocol error: filename does not match request' when fetching a f
 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 Since release ``7.9p1`` of OpenSSH there is a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_
-in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism::
+in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism:
+
+.. error::
 failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request

@ -47,11 +47,15 @@ When you type something directly at the command line, you may feel that your han
You can override all other settings from all other sources in all other precedence categories at the command line by :ref:`general_precedence_extra_vars`, but that is not a command-line option, it is a way of passing a :ref:`variable<general_precedence_variables>`.
At the command line, if you pass multiple values for a parameter that accepts only a single value, the last defined value wins. For example, this :ref:`ad hoc task<intro_adhoc>` will connect as ``carol``, not as ``mike``::
At the command line, if you pass multiple values for a parameter that accepts only a single value, the last defined value wins. For example, this :ref:`ad hoc task<intro_adhoc>` will connect as ``carol``, not as ``mike``:
.. code:: shell
ansible -u mike -m ping myhost -u carol
Some parameters allow multiple values. In this case, Ansible will append all values from the hosts listed in inventory files inventory1 and inventory2::
Some parameters allow multiple values. In this case, Ansible will append all values from the hosts listed in inventory files inventory1 and inventory2:
.. code:: shell
ansible -i /path/inventory1 -i /path/inventory2 -m ping all
@ -68,7 +72,9 @@ Within playbook keywords, precedence flows with the playbook itself; the more sp
- blocks/includes/imports/roles (optional and can contain tasks and each other)
- tasks (most specific)
A simple example::
A simple example:
.. code:: yaml
- hosts: all
connection: ssh
@ -97,7 +103,9 @@ Variables that have equivalent playbook keywords, command-line options, and conf
Connection variables, like all variables, can be set in multiple ways and places. You can define variables for hosts and groups in :ref:`inventory<intro_inventory>`. You can define variables for tasks and plays in ``vars:`` blocks in :ref:`playbooks<about_playbooks>`. However, they are still variables - they are data, not keywords or configuration settings. Variables that override playbook keywords, command-line options, and configuration settings follow the same rules of :ref:`variable precedence <ansible_variable_precedence>` as any other variables.
When set in a playbook, variables follow the same inheritance rules as playbook keywords. You can set a value for the play, then override it in a task, block, or role::
When set in a playbook, variables follow the same inheritance rules as playbook keywords. You can set a value for the play, then override it in a task, block, or role:
.. code:: yaml
- hosts: cloud
gather_facts: false
@ -126,14 +134,16 @@ Variable scope: how long is a value available?
Variable values set in a playbook exist only within the playbook object that defines them. These 'playbook object scope' variables are not available to subsequent objects, including other plays.
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like :ref:`set_fact<set_fact_module>` and :ref:`include_vars<include_vars_module>`, are available to all plays. These 'host scope' variables are also available via the ``hostvars[]`` dictionary.
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like :ref:`set_fact<set_fact_module>` and :ref:`include_vars<include_vars_module>`, are available to all plays. These 'host scope' variables are also available through the ``hostvars[]`` dictionary.
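As a sketch (host, group, and variable names hypothetical), a fact set in one play remains available to later plays through ``hostvars``:

.. code-block:: yaml

   - hosts: webservers
     tasks:
       - name: set a host-scope variable
         ansible.builtin.set_fact:
           app_port: 8080

   - hosts: loadbalancers
     tasks:
       - name: read it from a different play
         ansible.builtin.debug:
           msg: "web1 serves on {{ hostvars['web1']['app_port'] }}"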
.. _general_precedence_extra_vars:
Using ``-e`` extra variables at the command line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To override all other settings in all other categories, you can use extra variables: ``--extra-vars`` or ``-e`` at the command line. Values passed with ``-e`` are variables, not command-line options, and they will override configuration settings, command-line options, and playbook keywords as well as variables set elsewhere. For example, this task will connect as ``brian`` not as ``carol``::
To override all other settings in all other categories, you can use extra variables: ``--extra-vars`` or ``-e`` at the command line. Values passed with ``-e`` are variables, not command-line options, and they will override configuration settings, command-line options, and playbook keywords as well as variables set elsewhere. For example, this task will connect as ``brian`` not as ``carol``:
.. code:: shell
ansible -u carol -e 'ansible_user=brian' -a whoami all

@ -126,12 +126,12 @@ when a term comes up on the mailing list.
executing the internal :ref:`setup module <setup_module>` on the remote nodes. You
never have to call the setup module explicitly; it just runs, but it
can be disabled to save time if it is not needed, or you can tell
ansible to collect only a subset of the full facts via the
ansible to collect only a subset of the full facts through the
``gather_subset:`` option. For the convenience of users who are
switching from other configuration management systems, the fact module
will also pull in facts from the :program:`ohai` and :program:`facter`
tools if they are installed. These are fact libraries from Chef and
Puppet, respectively. (These may also be disabled via
Puppet, respectively. (These may also be disabled through
``gather_subset:``)
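As a sketch, a play that limits fact collection to a subset:

.. code-block:: yaml

   - hosts: all
     gather_facts: true
     gather_subset:
       - "!all"
       - "!min"
       - network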
Filter Plugin
@ -233,7 +233,7 @@ when a term comes up on the mailing list.
Inventory
A file (by default, Ansible uses a simple INI format) that describes
:term:`Hosts <Host>` and :term:`Groups <Group>` in Ansible. Inventory
can also be provided via an :term:`Inventory Script` (sometimes called
can also be provided through an :term:`Inventory Script` (sometimes called
an "External Inventory Script").
Inventory Script
@ -460,7 +460,7 @@ when a term comes up on the mailing list.
SSH (Native)
Native OpenSSH as an Ansible transport is specified with ``-c ssh``
(or a config file, or a keyword in the :term:`playbook <playbooks>`)
and can be useful if wanting to login via Kerberized SSH or using SSH
and can be useful if you want to log in through Kerberized SSH or use SSH
jump hosts, and so on. In 1.2.1, ``ssh`` will be used by default if the
OpenSSH binary on the control machine is sufficiently new.
Previously, Ansible selected ``paramiko`` as a default. Using

@ -21,7 +21,7 @@ version of pip. This will make the default :command:`/usr/bin/ansible` run with
python version = 3.6.2 (default, Sep 22 2017, 08:28:09) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)]
If you are running Ansible :ref:`from_source` and want to use Python 3 with your source checkout, run your
command via ``python3``. For example:
command through ``python3``. For example:
.. code-block:: shell
@ -32,7 +32,7 @@ command via ``python3``. For example:
.. note:: Individual Linux distribution packages may be packaged for Python2 or Python3. When running from
distro packages you'll only be able to use Ansible with the Python version for which it was
installed. Sometimes distros will provide a means of installing for several Python versions
(via a separate package or via some commands that are run after install). You'll need to check
(through a separate package or through some commands that are run after install). You'll need to check
with your distro to see if that applies in your case.

@ -29,7 +29,7 @@ ansible_limit
Contents of the ``--limit`` CLI option for the current execution of Ansible
ansible_loop
A dictionary/map containing extended loop information when enabled via ``loop_control.extended``
A dictionary/map containing extended loop information when enabled through ``loop_control.extended``
ansible_loop_var
The name of the value provided to ``loop_control.loop_var``. Added in ``2.8``
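As a sketch, ``ansible_loop`` becomes available inside a loop once ``loop_control.extended`` is enabled:

.. code-block:: yaml

   - name: show extended loop information
     ansible.builtin.debug:
       msg: "item {{ ansible_loop.index }} of {{ ansible_loop.length }}"
     loop: [a, b, c]
     loop_control:
       extended: true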
@ -58,7 +58,7 @@ ansible_play_hosts_all
ansible_play_role_names
The names of the roles currently imported into the current play. This list does **not** contain the role names that are
implicitly included via dependencies.
implicitly included through dependencies.
ansible_playbook_python
The path to the python interpreter being used by Ansible on the controller

@ -47,7 +47,9 @@ existing system, using the ``--check`` flag to the `ansible` command will report
bring the system into a desired state.
This can let you know up front if there is any need to deploy onto the given system. Ordinarily, scripts and commands don't run in check mode, so if you
want certain steps to execute in normal mode even when the ``--check`` flag is used, such as calls to the script module, disable check mode for those tasks::
want certain steps to execute in normal mode even when the ``--check`` flag is used, such as calls to the script module, disable check mode for those tasks:
.. code:: yaml
roles:
@ -60,7 +62,9 @@ want certain steps to execute in normal mode even when the ``--check`` flag is u
Modules That Are Useful for Testing
```````````````````````````````````
Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open::
Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open:
.. code:: yaml
tasks:
@ -69,7 +73,9 @@ Certain playbook modules are particularly good for testing. Below is an example
port: 22
delegate_to: localhost
Here's an example of using the URI module to make sure a web service returns::
Here's an example of using the URI module to make sure a web service returns:
.. code:: yaml
tasks:
@ -80,7 +86,9 @@ Here's an example of using the URI module to make sure a web service returns::
msg: 'service is not happy'
when: "'AWESOME' not in webpage.content"
It's easy to push an arbitrary script (in any language) to a remote host, and the script will automatically fail if it has a non-zero return code::
It's easy to push an arbitrary script (in any language) to a remote host, and the script will automatically fail if it has a non-zero return code:
.. code:: yaml
tasks:
@ -89,7 +97,9 @@ It's easy to push an arbitrary script (in any language) on a remote host and the
If using roles (you should be, roles are great!), scripts pushed by the script module can live in the 'files/' directory of a role.
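For example, a role tree (names hypothetical) that keeps a test script alongside the tasks that call it:

.. code-block:: text

   roles/
     app/
       files/
         smoke_test.sh
       tasks/
         main.yml   # calls the script module with smoke_test.sh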
And the assert module makes it very easy to validate various kinds of truth::
And the assert module makes it very easy to validate various kinds of truth:
.. code:: yaml
tasks:
@ -101,7 +111,9 @@ And the assert module makes it very easy to validate various kinds of truth::
- "'not ready' not in cmd_result.stderr"
- "'gizmo enabled' in cmd_result.stdout"
Should you feel the need to test for the existence of files that are not declaratively set by your Ansible configuration, the 'stat' module is a great choice::
Should you feel the need to test for the existence of files that are not declaratively set by your Ansible configuration, the 'stat' module is a great choice:
.. code:: yaml
tasks:
@ -128,7 +140,9 @@ If writing some degree of basic validation of your application into your playboo
As such, deploying into a local development VM and a staging environment will both validate that things are according to plan
ahead of your production deploy.
Your workflow may be something like this::
Your workflow may be something like this:
.. code:: text
- Use the same playbook all the time with embedded tests in development
- Use the playbook to deploy to a staging environment (with the same playbooks) that simulates production
@ -147,7 +161,9 @@ Integrating Testing With Rolling Updates
If you have read into :ref:`playbooks_delegation` it may quickly become apparent that the rolling update pattern can be extended, and you
can use the success or failure of the playbook run to decide whether to add a machine into a load balancer or not.
This is the great culmination of embedded tests::
This is the great culmination of embedded tests:
.. code:: yaml
---
@ -182,7 +198,9 @@ the machine will not go back into the pool.
Read the delegation chapter about "max_fail_percentage" and you can also control how many failing tests will stop a rolling update
from proceeding.
The approach above can also be modified to run a step from a testing machine remotely against a machine::
The approach above can also be modified to run a step from a testing machine remotely against a machine:
.. code:: yaml
---
@ -221,7 +239,9 @@ Achieving Continuous Deployment
If desired, the above techniques may be extended to enable continuous deployment practices.
The workflow may look like this::
The workflow may look like this:
.. code:: text
- Write and use automation to deploy local development VMs
- Have a CI system like Jenkins deploy to a staging environment on every code change

@ -13,7 +13,9 @@ All Alicloud modules require ``footmark`` - install it on your control machine w
Cloud modules, including Alicloud modules, execute on your local machine (the control machine) with ``connection: local``, rather than on remote machines defined in your hosts.
Normally, you'll use the following pattern for plays that provision Alicloud resources::
Normally, you'll use the following pattern for plays that provision Alicloud resources:
.. code-block:: yaml
- hosts: localhost
connection: local
@ -30,7 +32,9 @@ Authentication
You can specify your Alicloud authentication credentials (access key and secret key) by passing them as
environment variables or by storing them in a vars file.
To pass authentication credentials as environment variables::
To pass authentication credentials as environment variables:
.. code-block:: shell
export ALICLOUD_ACCESS_KEY='Alicloud123'
export ALICLOUD_SECRET_KEY='AlicloudSecret123'
@ -62,7 +66,7 @@ creates 3 more. If there are 8 instances with that tag, the task terminates 3 of
If you do not specify a ``count_tag``, the task creates the number of instances you specify in ``count`` with the ``instance_name`` you provide.
::
.. code-block:: yaml
# alicloud_setup.yml

@ -77,20 +77,20 @@ order of precedence is parameters, then environment variables, and finally a fil
Using Environment Variables
```````````````````````````
To pass service principal credentials via the environment, define the following variables:
To pass service principal credentials through the environment, define the following variables:
* AZURE_CLIENT_ID
* AZURE_SECRET
* AZURE_SUBSCRIPTION_ID
* AZURE_TENANT
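For example (placeholder values, not real credentials):

.. code-block:: shell

   export AZURE_CLIENT_ID='00000000-0000-0000-0000-000000000000'
   export AZURE_SECRET='example-client-secret'
   export AZURE_SUBSCRIPTION_ID='00000000-0000-0000-0000-000000000000'
   export AZURE_TENANT='00000000-0000-0000-0000-000000000000'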
To pass Active Directory username/password via the environment, define the following variables:
To pass Active Directory username/password through the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
* AZURE_SUBSCRIPTION_ID
To pass Active Directory username/password in ADFS via the environment, define the following variables:
To pass Active Directory username/password in ADFS through the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
@ -478,7 +478,7 @@ Disabling certificate validation on Azure endpoints
When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable certificate validation for
Azure endpoints in the Azure modules. This is not a recommended security practice, but may be necessary when the system
CA store cannot be altered to include the necessary CA certificate. Certificate validation can be controlled by setting
the "cert_validation_mode" value in a credential profile, via the "AZURE_CERT_VALIDATION_MODE" environment variable, or
the "cert_validation_mode" value in a credential profile, through the "AZURE_CERT_VALIDATION_MODE" environment variable, or
by passing the "cert_validation_mode" argument to any Azure module. The default value is "validate"; setting the value
to "ignore" will prevent all certificate validation. The module argument takes precedence over a credential profile value,
which takes precedence over the environment value.
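As a sketch, the same setting can be supplied through the environment variable mentioned above:

.. code-block:: shell

   # Not recommended outside of HTTPS-proxy or Azure Stack scenarios
   export AZURE_CERT_VALIDATION_MODE=ignore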

@ -5,7 +5,7 @@ Packet.net Guide
Introduction
============
`Packet.net <https://packet.net>`_ is a bare metal infrastructure host that's supported by Ansible (>=2.3) via a dynamic inventory script and two cloud modules. The two modules are:
`Packet.net <https://packet.net>`_ is a bare metal infrastructure host that's supported by Ansible (>=2.3) through a dynamic inventory script and two cloud modules. The two modules are:
- packet_sshkey: adds a public SSH key from file or value to the Packet infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
- packet_device: manages servers on Packet. You can use this module to create, restart and delete devices.
@ -21,9 +21,9 @@ The Packet modules and inventory script connect to the Packet API using the pack
$ pip install packet-python
In order to check the state of devices created by Ansible on Packet, it's a good idea to install one of the `Packet CLI clients <https://www.packet.net/developers/integrations/>`_. Otherwise you can check them via the `Packet portal <https://app.packet.net/portal>`_.
In order to check the state of devices created by Ansible on Packet, it's a good idea to install one of the `Packet CLI clients <https://www.packet.net/developers/integrations/>`_. Otherwise you can check them through the `Packet portal <https://app.packet.net/portal>`_.
To use the modules and inventory script you'll need a Packet API token. You can generate an API token via the Packet portal `here <https://app.packet.net/portal#/api-keys>`__. The simplest way to authenticate yourself is to set the Packet API token in an environment variable:
To use the modules and inventory script you'll need a Packet API token. You can generate an API token through the Packet portal `here <https://app.packet.net/portal#/api-keys>`__. The simplest way to authenticate yourself is to set the Packet API token in an environment variable:
.. code-block:: bash
@ -31,7 +31,7 @@ To use the modules and inventory script you'll need a Packet API token. You can
If you're not comfortable exporting your API token, you can pass it as a parameter to the modules.
On Packet, devices and reserved IP addresses belong to `projects <https://www.packet.com/developers/api/#projects>`_. In order to use the packet_device module, you need to specify the UUID of the project in which you want to create or manage devices. You can find a project's UUID in the Packet portal `here <https://app.packet.net/portal#/projects/list/table/>`_ (it's just under the project table) or via one of the available `CLIs <https://www.packet.net/developers/integrations/>`_.
On Packet, devices and reserved IP addresses belong to `projects <https://www.packet.com/developers/api/#projects>`_. In order to use the packet_device module, you need to specify the UUID of the project in which you want to create or manage devices. You can find a project's UUID in the Packet portal `here <https://app.packet.net/portal#/projects/list/table/>`_ (it's just under the project table) or through one of the available `CLIs <https://www.packet.net/developers/integrations/>`_.
If you want to use a new SSH key pair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
@ -46,7 +46,7 @@ If you want to use an existing key pair, just copy the private and public key ov
Device Creation
===============
The following code block is a simple playbook that creates one `Type 0 <https://www.packet.com/cloud/servers/t1-small/>`_ server (the 'plan' parameter). You have to supply 'plan' and 'operating_system'. 'location' defaults to 'ewr1' (Parsippany, NJ). You can find all the possible values for the parameters via a `CLI client <https://www.packet.net/developers/integrations/>`_.
The following code block is a simple playbook that creates one `Type 0 <https://www.packet.com/cloud/servers/t1-small/>`_ server (the 'plan' parameter). You have to supply 'plan' and 'operating_system'. 'location' defaults to 'ewr1' (Parsippany, NJ). You can find all the possible values for the parameters through a `CLI client <https://www.packet.net/developers/integrations/>`_.
.. code-block:: yaml
@ -67,7 +67,7 @@ The following code block is a simple playbook that creates one `Type 0 <https://
plan: baremetal_0
facility: sjc1
After running ``ansible-playbook playbook_create.yml``, you should have a server provisioned on Packet. You can verify via a CLI or in the `Packet portal <https://app.packet.net/portal#/projects/list/table>`__.
After running ``ansible-playbook playbook_create.yml``, you should have a server provisioned on Packet. You can verify through a CLI or in the `Packet portal <https://app.packet.net/portal#/projects/list/table>`__.
If you get an error with the message "failed to set machine state present, error: Error 404: Not Found", please verify your project UUID.
@ -183,7 +183,7 @@ The following playbook will create an SSH key, 3 Packet servers, and then wait u
As with most Ansible modules, the default states of the Packet modules are idempotent, meaning the resources in your project will remain the same after re-runs of a playbook. Thus, we can keep the ``packet_sshkey`` module call in our playbook. If the public key is already in your Packet account, the call will have no effect.
The second module call provisions 3 Packet Type 0 (specified using the 'plan' parameter) servers in the project identified via the 'project_id' parameter. The servers are all provisioned with CoreOS beta (the 'operating_system' parameter) and are customized with cloud-config user data passed to the 'user_data' parameter.
The second module call provisions 3 Packet Type 0 (specified using the 'plan' parameter) servers in the project identified by the 'project_id' parameter. The servers are all provisioned with CoreOS beta (the 'operating_system' parameter) and are customized with cloud-config user data passed to the 'user_data' parameter.
The ``packet_device`` module has a ``wait_for_public_IPv`` parameter that is used to specify the version of the IP address to wait for (valid values are ``4`` or ``6`` for IPv4 or IPv6). If specified, Ansible will wait until the GET API call for a device contains an Internet-routable IP address of the specified version. When referring to an IP address of a created device in subsequent module calls, it's wise to use the ``wait_for_public_IPv`` parameter, or ``state: active`` in the packet_device module call.
@ -193,7 +193,7 @@ Run the playbook:
$ ansible-playbook playbook_coreos.yml
Once the playbook quits, your new devices should be reachable via SSH. Try to connect to one and check if etcd has started properly:
Once the playbook quits, your new devices should be reachable through SSH. Try to connect to one and check if etcd has started properly:
.. code-block:: bash

@ -18,7 +18,7 @@ all of the modules require and are tested against pyrax 1.5 or higher.
You'll need this Python module installed on the execution host.
``pyrax`` is not currently available in many operating system
package repositories, so you will likely need to install it via pip:
package repositories, so you will likely need to install it through pip:
.. code-block:: bash
@ -69,7 +69,7 @@ Running from a Python Virtual Environment (Optional)
Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.
There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules, however when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion as one may assume that modules running on 'localhost', or perhaps running via 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done through the interpreter line in modules; however, when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on 'localhost', or perhaps running through 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
.. code-block:: ini
@ -154,7 +154,7 @@ to the next section.
Host Inventory
``````````````
Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances through other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.

@ -9,7 +9,7 @@ Scaleway Guide
Introduction
============
`Scaleway <https://scaleway.com>`_ is a cloud provider supported by Ansible, version 2.6 or higher via a dynamic inventory plugin and modules.
`Scaleway <https://scaleway.com>`_ is a cloud provider supported by Ansible, version 2.6 or higher through a dynamic inventory plugin and modules.
Those modules are:
- :ref:`scaleway_sshkey_module`: adds a public SSH key from a file or value to the Scaleway infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
@ -27,7 +27,7 @@ Requirements
The Scaleway modules and inventory script connect to the Scaleway API using `Scaleway REST API <https://developer.scaleway.com>`_.
To use the modules and inventory script you'll need a Scaleway API token.
You can generate an API token via the Scaleway console `here <https://cloud.scaleway.com/#/credentials>`__.
You can generate an API token through the Scaleway console `here <https://cloud.scaleway.com/#/credentials>`__.
The simplest way to authenticate yourself is to set the Scaleway API token in an environment variable:
.. code-block:: bash

@ -188,10 +188,10 @@ Ansible loads any file called ``main.yml`` in a role sub-directory. This sample
tags: ntp
- name: be sure ntpd is running and enabled
service:
ansible.builtin.service:
name: ntpd
state: started
enabled: yes
enabled: true
tags: ntp
Here is an example handlers file. Handlers are only triggered when certain tasks report changes. Handlers run at the end of each play:
@ -201,7 +201,7 @@ Here is an example handlers file. Handlers are only triggered when certain tasks
---
# file: roles/common/handlers/main.yml
- name: restart ntpd
service:
ansible.builtin.service:
name: ntpd
state: restarted
