Rewrite Docker scenario guide (#73069)

Docker Guide
============

The `community.docker collection <https://galaxy.ansible.com/community/docker>`_ offers several modules and plugins for orchestrating Docker containers and Docker Swarm.

.. contents::
   :local:
   :depth: 1

Requirements
------------

Most of the modules and plugins in community.docker require the `Docker SDK for Python <https://docker-py.readthedocs.io/en/stable/>`_. The SDK needs to be installed on the machines where the modules and plugins are executed, and for the Python version(s) with which the modules and plugins are executed. You can use the :ref:`community.general.python_requirements_info module <ansible_collections.community.general.python_requirements_info_module>` to make sure that the Docker SDK for Python is installed on the correct machine and for the Python version used by Ansible.

Note that plugins (inventory plugins and connection plugins) are always executed in the context of Ansible itself. If you use a plugin that requires the Docker SDK for Python, you need to install it on the machine running ``ansible`` or ``ansible-playbook`` and for the same Python interpreter used by Ansible. To see which Python is used, run ``ansible --version``.

You can install the Docker SDK for Python for Python 2.7 or Python 3 as follows:

.. code-block:: bash

    $ pip install docker

For Python 2.6, you need a version before 2.0. For these versions, the SDK was called ``docker-py``, so you need to install it as follows:

.. code-block:: bash

    $ pip install 'docker-py>=1.10.0'

Please install only one of ``docker`` or ``docker-py``. Installing both will result in a broken installation. If this happens, Ansible will detect it and inform you about it; in that case you must uninstall both packages and reinstall the correct one.

If in doubt, always install ``docker`` and never ``docker-py``.
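
For example, a minimal sketch of such a check (the registered variable name is arbitrary); it reports whether the ``docker`` Python package is available to the interpreter used on the target host:

.. code-block:: yaml

    - name: Check whether the Docker SDK for Python is available on the target host
      community.general.python_requirements_info:
        dependencies:
          - docker   # the SDK package; only Python 2.6 still needs docker-py
      register: sdk_check

    - name: Show the result of the check
      ansible.builtin.debug:
        var: sdk_check
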
Connecting to the Docker API
----------------------------

You can connect to a local or remote API using parameters passed to each task or by setting environment variables. The order of precedence is command line parameters and then environment variables. If neither a command line option nor an environment variable is found, Ansible uses the default value provided under `Parameters`_.

Parameters
..........

Most plugins and modules can be configured by the following parameters:

docker_host
    The URL or Unix socket path used to connect to the Docker API. Defaults to ``unix://var/run/docker.sock``. To connect to a remote host, provide the TCP connection string (for example: ``tcp://192.0.2.23:2376``). If TLS is used to encrypt the connection to the API, then the module will automatically replace 'tcp' in the connection URL with 'https'.

api_version
    The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the installed Docker SDK for Python.

timeout
    The maximum amount of time in seconds to wait on a response from the API. Defaults to 60 seconds.

tls
    Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server. Defaults to ``false``.

validate_certs
    Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server. Default is ``false``.

cacert_path
    Use a CA certificate when performing server verification by providing the path to a CA certificate file.
cert_path
    Path to the client's TLS certificate file.

key_path
    Path to the client's TLS key file.

tls_hostname
    When verifying the authenticity of the Docker Host server, provide the expected name of the server. Defaults to ``localhost``.

ssl_version
    Provide a valid SSL version number. The default value is determined by the Docker SDK for Python.
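
For example, a sketch of a task that passes these parameters directly to a module in order to talk to a remote, TLS-protected daemon; the address and certificate paths are placeholders:

.. code-block:: yaml

    - name: Retrieve information from a remote, TLS-protected Docker daemon
      community.docker.docker_host_info:
        docker_host: tcp://192.0.2.23:2376        # placeholder daemon address
        validate_certs: true
        cacert_path: /etc/docker/certs/ca.pem     # placeholder CA certificate
        cert_path: /etc/docker/certs/cert.pem     # placeholder client certificate
        key_path: /etc/docker/certs/key.pem       # placeholder client key
        timeout: 90
      register: host_info
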
Environment variables
.....................

You can also control how the plugins and modules connect to the Docker API by setting the following environment variables.

For plugins, they have to be set for the environment Ansible itself runs in. For modules, they have to be set for the environment the modules are executed in. For modules running on remote machines, the environment variables have to be set on that machine for the user that executes the modules.

DOCKER_HOST
    The URL or Unix socket path used to connect to the Docker API.
DOCKER_API_VERSION
    The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the Docker SDK for Python.

DOCKER_TIMEOUT
    The maximum amount of time in seconds to wait on a response from the API. Defaults to 60 seconds.

DOCKER_CERT_PATH
    Path to the directory containing the client certificate, client key and CA certificate.

DOCKER_SSL_VERSION
    Provide a valid SSL version number.

DOCKER_TLS
    Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server.

DOCKER_TLS_VERIFY
    Secure the connection to the API by using TLS and verify the authenticity of the Docker Host.
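
For modules, one way to supply these variables is the ``environment`` keyword on a play or task; for plugins, export them in the shell that runs ``ansible`` or ``ansible-playbook``. A sketch with a placeholder daemon address:

.. code-block:: yaml

    - name: Retrieve daemon information using environment variables
      community.docker.docker_host_info:
      environment:
        DOCKER_HOST: tcp://192.0.2.23:2376        # placeholder daemon address
        DOCKER_TLS_VERIFY: "1"
        DOCKER_CERT_PATH: /etc/docker/certs       # placeholder certificate directory
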
Plain Docker daemon: images, networks, volumes, and containers
---------------------------------------------------------------

For working with a plain Docker daemon, that is without Swarm, there are connection plugins, an inventory plugin, and several modules available (a short usage sketch follows the list):

docker connection plugin
    The :ref:`community.docker.docker connection plugin <ansible_collections.community.docker.docker_connection>` uses the Docker CLI utility to connect to Docker containers and execute modules in them. It essentially wraps ``docker exec`` and ``docker cp``. This connection plugin is supported by the :ref:`ansible.posix.synchronize module <ansible_collections.ansible.posix.synchronize_module>`.

docker_api connection plugin
    The :ref:`community.docker.docker_api connection plugin <ansible_collections.community.docker.docker_api_connection>` talks directly to the Docker daemon to connect to Docker containers and execute modules in them.

docker_containers inventory plugin
    The :ref:`community.docker.docker_containers inventory plugin <ansible_collections.community.docker.docker_containers_inventory>` allows you to dynamically add Docker containers from a Docker daemon to your Ansible inventory. See :ref:`dynamic_inventory` for details on dynamic inventories.

    The `docker inventory script <https://github.com/ansible-collections/community.general/blob/main/scripts/inventory/docker.py>`_ is deprecated. Please use the inventory plugin instead. The inventory plugin has several compatibility options. If you need to collect Docker containers from multiple Docker daemons, you need to add every Docker daemon as an individual inventory source.

docker_host_info module
    The :ref:`community.docker.docker_host_info module <ansible_collections.community.docker.docker_host_info_module>` allows you to retrieve information on a Docker daemon, such as all containers, images, volumes, networks and so on.
docker_login module
    The :ref:`community.docker.docker_login module <ansible_collections.community.docker.docker_login_module>` allows you to log in and out of a remote registry, such as Docker Hub or a private registry. It provides similar functionality to the ``docker login`` and ``docker logout`` CLI commands.

docker_prune module
    The :ref:`community.docker.docker_prune module <ansible_collections.community.docker.docker_prune_module>` allows you to prune containers, images, volumes and so on that are no longer needed. It provides similar functionality to the ``docker prune`` CLI command.

docker_image module
    The :ref:`community.docker.docker_image module <ansible_collections.community.docker.docker_image_module>` provides full control over images, including: build, pull, push, tag and remove.

docker_image_info module
    The :ref:`community.docker.docker_image_info module <ansible_collections.community.docker.docker_image_info_module>` allows you to list and inspect images.

docker_network module
    The :ref:`community.docker.docker_network module <ansible_collections.community.docker.docker_network_module>` provides full control over Docker networks.

docker_network_info module
    The :ref:`community.docker.docker_network_info module <ansible_collections.community.docker.docker_network_info_module>` allows you to inspect Docker networks.

docker_volume module
    The :ref:`community.docker.docker_volume module <ansible_collections.community.docker.docker_volume_module>` provides full control over Docker volumes.

docker_volume_info module
    The :ref:`community.docker.docker_volume_info module <ansible_collections.community.docker.docker_volume_info_module>` allows you to inspect Docker volumes.

docker_container module
    The :ref:`community.docker.docker_container module <ansible_collections.community.docker.docker_container_module>` manages the container lifecycle by providing the ability to create, update, stop, start and destroy a Docker container.

docker_container_info module
    The :ref:`community.docker.docker_container_info module <ansible_collections.community.docker.docker_container_info_module>` allows you to inspect a Docker container.
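
As a quick illustration of the modules above, the following sketch pulls an image, starts a container from it, and inspects the result; the image name, container name, and port mapping are placeholders:

.. code-block:: yaml

    - name: Pull an image from Docker Hub
      community.docker.docker_image:
        name: nginx:latest          # placeholder image
        source: pull

    - name: Start a container from the image
      community.docker.docker_container:
        name: web                   # placeholder container name
        image: nginx:latest
        state: started
        ports:
          - "8080:80"               # placeholder port mapping

    - name: Inspect the running container
      community.docker.docker_container_info:
        name: web
      register: web_info
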
Docker Compose
--------------

The :ref:`community.docker.docker_compose module <ansible_collections.community.docker.docker_compose_module>` allows you to use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. It supports compose versions 1 and 2.

In addition to the Docker SDK for Python, you need to install `docker-compose <https://github.com/docker/compose>`_ on the remote machines to use the module.
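
For example, a minimal sketch (the project path is a placeholder) that brings up the services defined in an existing ``docker-compose.yml``:

.. code-block:: yaml

    - name: Bring up the services defined in a compose file
      community.docker.docker_compose:
        project_src: /opt/myapp     # placeholder: directory containing docker-compose.yml
        state: present
      register: compose_result
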
Docker Machine
--------------

The :ref:`community.docker.docker_machine inventory plugin <ansible_collections.community.docker.docker_machine_inventory>` allows you to dynamically add Docker Machine hosts to your Ansible inventory.
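
A minimal inventory source for the plugin could look like the following sketch. The filename is an assumption: like most inventory plugins, the file typically has to end in ``docker_machine.yml`` or ``docker_machine.yaml`` to be picked up, and the ``docker-machine`` command line tool must be available on the machine running Ansible.

.. code-block:: yaml

    # docker_machine.yml
    plugin: community.docker.docker_machine

You can then inspect the discovered hosts with ``ansible-inventory -i docker_machine.yml --list``.
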
Docker stack
------------

The :ref:`community.docker.docker_stack module <ansible_collections.community.docker.docker_stack_module>` allows you to control Docker stacks. Information on stacks can be retrieved by the :ref:`community.docker.docker_stack_info module <ansible_collections.community.docker.docker_stack_info_module>`, and information on stack tasks can be retrieved by the :ref:`community.docker.docker_stack_task_info module <ansible_collections.community.docker.docker_stack_task_info_module>`.
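
A sketch of deploying a stack from a compose file on a Swarm manager; the stack name and file path are placeholders:

.. code-block:: yaml

    - name: Deploy a stack from a compose file
      community.docker.docker_stack:
        name: myapp                           # placeholder stack name
        compose:
          - /opt/myapp/docker-compose.yml     # placeholder compose file path
        state: present
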
Docker Swarm
------------

The community.docker collection provides multiple plugins and modules for managing Docker Swarms.

Swarm management
................

One inventory plugin and several modules are provided to manage Docker Swarms:

docker_swarm inventory plugin
    The :ref:`community.docker.docker_swarm inventory plugin <ansible_collections.community.docker.docker_swarm_inventory>` allows you to dynamically add all Docker Swarm nodes to your Ansible inventory.

docker_swarm module
    The :ref:`community.docker.docker_swarm module <ansible_collections.community.docker.docker_swarm_module>` allows you to globally configure Docker Swarm manager nodes to join and leave swarms, and to change the Docker Swarm configuration.

docker_swarm_info module
    The :ref:`community.docker.docker_swarm_info module <ansible_collections.community.docker.docker_swarm_info_module>` allows you to retrieve information on Docker Swarm.

docker_node module
    The :ref:`community.docker.docker_node module <ansible_collections.community.docker.docker_node_module>` allows you to manage Docker Swarm nodes.

docker_node_info module
    The :ref:`community.docker.docker_node_info module <ansible_collections.community.docker.docker_node_info_module>` allows you to retrieve information on Docker Swarm nodes.
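
For example, a sketch that initializes a Swarm on a manager node and joins a worker to it. The addresses are placeholders, the worker join token is taken from the return value of the first task, and in practice the two tasks run on different hosts (for example in separate plays or with ``delegate_to``):

.. code-block:: yaml

    - name: Initialize a new Swarm on the manager node
      community.docker.docker_swarm:
        state: present
        advertise_addr: 192.0.2.10            # placeholder manager address
      register: swarm_init

    - name: Join a worker node to the Swarm
      community.docker.docker_swarm:
        state: join
        advertise_addr: 192.0.2.11            # placeholder worker address
        join_token: "{{ swarm_init.swarm_facts.JoinTokens.Worker }}"
        remote_addrs:
          - 192.0.2.10:2377                   # placeholder manager address
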
Configuration management
........................

The community.docker collection offers modules to manage Docker Swarm configurations and secrets:

docker_config module
    The :ref:`community.docker.docker_config module <ansible_collections.community.docker.docker_config_module>` allows you to create and modify Docker Swarm configs.

docker_secret module
    The :ref:`community.docker.docker_secret module <ansible_collections.community.docker.docker_secret_module>` allows you to create and modify Docker Swarm secrets.
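
A sketch of creating a config and a secret on a Swarm manager; the names and contents are placeholders:

.. code-block:: yaml

    - name: Create a Swarm config
      community.docker.docker_config:
        name: app_config                      # placeholder config name
        data: |
          key=value
        state: present

    - name: Create a Swarm secret
      community.docker.docker_secret:
        name: app_password                    # placeholder secret name
        data: "{{ 'supersecret' | b64encode }}"
        data_is_b64: true
        state: present
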
Swarm services
..............

Docker Swarm services can be created and updated with the :ref:`community.docker.docker_swarm_service module <ansible_collections.community.docker.docker_swarm_service_module>`, and information on them can be queried by the :ref:`community.docker.docker_swarm_service_info module <ansible_collections.community.docker.docker_swarm_service_info_module>`.
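
A minimal sketch of a replicated Swarm service; the service name, image, and port mapping are placeholders:

.. code-block:: yaml

    - name: Ensure a Swarm service is running
      community.docker.docker_swarm_service:
        name: web                             # placeholder service name
        image: nginx:latest                   # placeholder image
        replicas: 2
        publish:
          - published_port: 8080              # placeholder published port
            target_port: 80
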
Helpful links
-------------

Still using Dockerfile to build images? Check out `ansible-bender <https://github.com/ansible-community/ansible-bender>`_, and start building images from your Ansible playbooks.

Use `Ansible Operator <https://learn.openshift.com/ansibleop/ansible-operator-overview/>`_ to launch your docker-compose file on `OpenShift <https://www.okd.io/>`_. Go from an app on your laptop to a fully scalable app in the cloud with Kubernetes in just a few moments.
