mirror of https://github.com/ansible/ansible.git
forwarded docker_extra_args to latest upstream/origin/devel
commit cd2c140f69
@ -0,0 +1,50 @@
|
||||
<!--- Verify first that your issue/request is not already reported in GitHub -->
|
||||
|
||||
##### ISSUE TYPE
|
||||
<!--- Pick one below and delete the rest: -->
|
||||
- Bug Report
|
||||
- Feature Idea
|
||||
- Documentation Report
|
||||
|
||||
|
||||
##### ANSIBLE VERSION
|
||||
```
|
||||
<!--- Paste verbatim output from “ansible --version” between quotes -->
|
||||
```
|
||||
|
||||
##### CONFIGURATION
|
||||
<!---
|
||||
Mention any settings you have changed/added/removed in ansible.cfg
|
||||
(or using the ANSIBLE_* environment variables).
|
||||
-->
|
||||
|
||||
##### OS / ENVIRONMENT
|
||||
<!---
|
||||
Mention the OS you are running Ansible from, and the OS you are
|
||||
managing, or say “N/A” for anything that is not platform-specific.
|
||||
-->
|
||||
|
||||
##### SUMMARY
|
||||
<!--- Explain the problem briefly -->
|
||||
|
||||
##### STEPS TO REPRODUCE
|
||||
<!---
|
||||
For bugs, show exactly how to reproduce the problem.
|
||||
For new features, show how the feature would be used.
|
||||
-->
|
||||
|
||||
```
|
||||
<!--- Paste example playbooks or commands between quotes -->
|
||||
```
|
||||
|
||||
<!--- You can also paste gist.github.com links for larger files -->
|
||||
|
||||
##### EXPECTED RESULTS
|
||||
<!--- What did you expect to happen when running the steps above? -->
|
||||
|
||||
##### ACTUAL RESULTS
|
||||
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
|
||||
|
||||
```
|
||||
<!--- Paste verbatim command output between quotes -->
|
||||
```
|
@ -0,0 +1,24 @@
|
||||
##### ISSUE TYPE
|
||||
<!--- Pick one below and delete the rest: -->
|
||||
- Feature Pull Request
|
||||
- New Module Pull Request
|
||||
- Bugfix Pull Request
|
||||
- Docs Pull Request
|
||||
|
||||
##### ANSIBLE VERSION
|
||||
```
|
||||
<!--- Paste verbatim output from “ansible --version” between quotes -->
|
||||
```
|
||||
|
||||
##### SUMMARY
|
||||
<!--- Describe the change, including rationale and design decisions -->
|
||||
|
||||
<!---
|
||||
If you are fixing an existing issue, please include "Fixes #nnn" in your
|
||||
commit message and your description; but you should still explain what
|
||||
the change does.
|
||||
-->
|
||||
|
||||
```
|
||||
<!-- Paste verbatim command output here, e.g. before and after your change -->
|
||||
```
|
File diff suppressed because it is too large
@ -1,27 +1,29 @@
|
||||
Welcome To Ansible GitHub
|
||||
=========================
|
||||
# WELCOME TO ANSIBLE GITHUB
|
||||
|
||||
Hi! Nice to see you here!
|
||||
|
||||
If you'd like to ask a question
|
||||
===============================
|
||||
|
||||
Please see [this web page ](http://docs.ansible.com/community.html) for community information, which includes pointers on how to ask questions on the [mailing lists](http://docs.ansible.com/community.html#mailing-list-information) and IRC.
|
||||
## QUESTIONS ?
|
||||
|
||||
The github issue tracker is not the best place for questions for various reasons, but both IRC and the mailing list are very helpful places for those things, and that page has the pointers to those.
|
||||
Please see the [community page](http://docs.ansible.com/community.html) for information on how to ask questions on the [mailing lists](http://docs.ansible.com/community.html#mailing-list-information) and IRC.
|
||||
|
||||
If you'd like to contribute code
|
||||
================================
|
||||
The GitHub issue tracker is not the best place for questions for various reasons, but both IRC and the mailing list are very helpful places for those things, as the community page explains best.
|
||||
|
||||
Please see [this web page](http://docs.ansible.com/community.html) for information about the contribution process. Important license agreement information is also included on that page.
|
||||
|
||||
If you'd like to file a bug
|
||||
===========================
|
||||
## CONTRIBUTING ?
|
||||
|
||||
I'd also read the community page above, but in particular, make sure you copy [this issue template](https://github.com/ansible/ansible/blob/devel/ISSUE_TEMPLATE.md) into your ticket description. We have a friendly neighborhood bot that will remind you if you forget :) This template helps us organize tickets faster and prevents asking some repeated questions, so it's very helpful to us and we appreciate your help with it.
|
||||
Please see the [community page](http://docs.ansible.com/community.html) for information regarding the contribution process. Important license agreement information is also included on that page.
|
||||
|
||||
Also please make sure you are testing on the latest released version of Ansible or the development branch.
|
||||
|
||||
Thanks!
|
||||
## BUG TO REPORT ?
|
||||
|
||||
First and foremost, also check the [community page](http://docs.ansible.com/community.html).
|
||||
|
||||
You can report bugs or make enhancement requests at the [Ansible GitHub issue page](http://github.com/ansible/ansible/issues/new) by filling out the issue template that will be presented.
|
||||
|
||||
Also please make sure you are testing on the latest released version of Ansible or the development branch. You can find the latest releases and development branch at:
|
||||
|
||||
- https://github.com/ansible/ansible/releases
|
||||
- https://github.com/ansible/ansible/archive/devel.tar.gz
|
||||
|
||||
Thanks!
|
||||
|
File diff suppressed because it is too large
@ -0,0 +1 @@
|
||||
ansible
|
@ -0,0 +1,80 @@
|
||||
#!/usr/bin/python

import json
import os
import argparse

import requests

RACKHD_URL = 'http://localhost:8080'


class RackhdInventory(object):
    def __init__(self, nodeids):
        self._inventory = {}
        for nodeid in nodeids:
            self._load_inventory_data(nodeid)
        inventory = {}
        for nodeid, info in self._inventory.items():
            inventory[nodeid] = self._format_output(nodeid, info)
        print(json.dumps(inventory))

    def _load_inventory_data(self, nodeid):
        # Fetch the ohai catalog and the lookup record for a single node.
        info = {}
        info['ohai'] = RACKHD_URL + '/api/common/nodes/{0}/catalogs/ohai'.format(nodeid)
        info['lookup'] = RACKHD_URL + '/api/common/lookups/?q={0}'.format(nodeid)

        results = {}
        for key, url in info.items():
            r = requests.get(url, verify=False)
            results[key] = r.text
        self._inventory[nodeid] = results

    def _format_output(self, nodeid, info):
        # Build an inventory group for the node: its IP address as the only
        # host, plus the raw catalog/lookup data exposed as group vars.
        output = {}
        try:
            node_info = json.loads(info['lookup'])
            ipaddress = ''
            if len(node_info) > 0:
                ipaddress = node_info[0]['ipAddress']
            output = {'hosts': [ipaddress], 'vars': {}}
            for key, result in info.items():
                output['vars'][key] = json.loads(result)
            output['vars']['ansible_ssh_user'] = 'monorail'
        except KeyError:
            pass
        return output


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--host')
    parser.add_argument('--list', action='store_true')
    return parser.parse_args()


try:
    # Check if a RackHD url (e.g. 10.1.1.45:8080) is specified in the environment
    RACKHD_URL = 'http://' + str(os.environ['RACKHD_URL'])
except KeyError:
    # Use the default value defined above
    pass

# Use the node id(s) specified with --host to limit the data returned,
# or return data for all available compute nodes with --list.
nodeids = []

args = parse_args()

if args.host:
    try:
        nodeids += args.host.split(',')
        RackhdInventory(nodeids)
    except Exception:
        pass

if args.list:
    try:
        url = RACKHD_URL + '/api/common/nodes'
        r = requests.get(url, verify=False)
        data = json.loads(r.text)
        for entry in data:
            if entry['type'] == 'compute':
                nodeids.append(entry['id'])
        RackhdInventory(nodeids)
    except Exception:
        pass
|
@ -0,0 +1,77 @@
|
||||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
# (c) 2014, Matt Martz <matt@sivel.net>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
#
|
||||
# Script to be used with vault_password_file or --vault-password-file
|
||||
# to retrieve the vault password via your OSes native keyring application
|
||||
#
|
||||
# This script requires the ``keyring`` python module
|
||||
#
|
||||
# Add a [vault] section to your ansible.cfg file,
|
||||
# the only option is 'username'. Example:
|
||||
#
|
||||
# [vault]
|
||||
# username = 'ansible_vault'
|
||||
#
|
||||
# Additionally, it would be a good idea to configure vault_password_file in
|
||||
# ansible.cfg
|
||||
#
|
||||
# [defaults]
|
||||
# ...
|
||||
# vault_password_file = /path/to/vault-keyring.py
|
||||
# ...
|
||||
#
|
||||
# To set your password: python /path/to/vault-keyring.py set
|
||||
#
|
||||
# If you choose to not configure the path to vault_password_file in ansible.cfg
|
||||
# your ansible-playbook command may look like:
|
||||
#
|
||||
# ansible-playbook --vault-password-file=/path/to/vault-keyring.py site.yml
|
||||
|
||||
import sys
import getpass
import keyring

import ansible.constants as C


def main():
    parser = C.load_config_file()
    try:
        username = parser.get('vault', 'username')
    except:
        sys.stderr.write('No [vault] section configured\n')
        sys.exit(1)

    if len(sys.argv) == 2 and sys.argv[1] == 'set':
        password = getpass.getpass()
        confirm = getpass.getpass('Confirm password: ')
        if password == confirm:
            keyring.set_password('ansible', username, password)
        else:
            sys.stderr.write('Passwords do not match\n')
            sys.exit(1)
    else:
        sys.stdout.write('%s\n' % keyring.get_password('ansible', username))

    sys.exit(0)


if __name__ == '__main__':
    main()
|
@ -0,0 +1,487 @@
|
||||
# Docker_Container Module Proposal
|
||||
|
||||
## Purpose and Scope:
|
||||
|
||||
The purpose of docker_container is to manage the lifecycle of a container. The module will provide a mechanism for
|
||||
moving the container between absent, present, stopped and started states. It will focus purely on managing container
|
||||
state. The intention of the narrow focus is to make understanding and using the module clear and keep maintenance
|
||||
and testing as easy as possible.
|
||||
|
||||
Docker_container will manage a container using docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
|
||||
how other cloud modules operate.
|
||||
|
||||
The container world is moving rapidly, so the goal is to create a suite of docker modules that keep pace, with docker_container
|
||||
leading the way. If this project is successful, it will naturally deprecate the existing docker module.
|
||||
|
||||
## Parameters:
|
||||
|
||||
Docker_container will accept the parameters listed below. An attempt has been made to represent all the options available to
|
||||
docker's create, kill, pause, run, rm, start, stop and update commands.
|
||||
|
||||
Parameters for connecting to the API are not listed here. They are included in the common utility module mentioned above.
|
||||
|
||||
```
|
||||
blkio_weight:
|
||||
description:
|
||||
- Block IO (relative weight), between 10 and 1000.
|
||||
default: null
|
||||
|
||||
capabilities:
|
||||
description:
|
||||
- List of capabilities to add to the container.
|
||||
default: null
|
||||
|
||||
command:
|
||||
description:
|
||||
- Command or list of commands to execute in the container when it starts.
|
||||
default: null
|
||||
|
||||
cpu_period:
|
||||
description:
|
||||
- Limit CPU CFS (Completely Fair Scheduler) period
|
||||
default: 0
|
||||
|
||||
cpu_quota:
|
||||
description:
|
||||
- Limit CPU CFS (Completely Fair Scheduler) quota
|
||||
default: 0
|
||||
|
||||
cpuset_cpus:
|
||||
description:
|
||||
- CPUs in which to allow execution C(1,3) or C(1-3).
|
||||
default: null
|
||||
|
||||
cpuset_mems:
|
||||
description:
|
||||
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1)
|
||||
default: null
|
||||
|
||||
cpu_shares:
|
||||
description:
|
||||
- CPU shares (relative weight).
|
||||
default: null
|
||||
|
||||
detach:
|
||||
description:
|
||||
- Enable detached mode to leave the container running in background.
|
||||
If disabled, fail unless the process exits cleanly.
|
||||
default: true
|
||||
|
||||
devices:
|
||||
description:
|
||||
- List of host device bindings to add to the container. Each binding is a mapping expressed
|
||||
in the format: <path_on_host>:<path_in_container>:<cgroup_permissions>
|
||||
default: null
|
||||
|
||||
dns_servers:
|
||||
description:
|
||||
- List of custom DNS servers.
|
||||
default: null
|
||||
|
||||
dns_search_domains:
|
||||
description:
|
||||
- List of custom DNS search domains.
|
||||
default: null
|
||||
|
||||
env:
|
||||
description:
|
||||
- Dictionary of key,value pairs.
|
||||
default: null
|
||||
|
||||
entrypoint:
|
||||
description:
|
||||
- String or list of commands that overwrite the default ENTRYPOINT of the image.
|
||||
default: null
|
||||
|
||||
etc_hosts:
|
||||
description:
|
||||
- Dict of host-to-IP mappings, where each hostname is a key in the dictionary. Each hostname will be added to the
|
||||
container's /etc/hosts file.
|
||||
default: null
|
||||
|
||||
exposed_ports:
|
||||
description:
|
||||
- List of additional container ports to expose for port mappings or links.
|
||||
If the port is already exposed using EXPOSE in a Dockerfile, it does not
|
||||
need to be exposed again.
|
||||
default: null
|
||||
aliases:
|
||||
- exposed
|
||||
|
||||
force_kill:
|
||||
description:
|
||||
- Use with absent, present, started and stopped states to use the kill command rather
|
||||
than the stop command.
|
||||
default: false
|
||||
|
||||
groups:
|
||||
description:
|
||||
- List of additional group names and/or IDs that the container process will run as.
|
||||
default: null
|
||||
|
||||
hostname:
|
||||
description:
|
||||
- Container hostname.
|
||||
default: null
|
||||
|
||||
image:
|
||||
description:
|
||||
- Container image used to create and match containers.
|
||||
required: true
|
||||
|
||||
interactive:
|
||||
description:
|
||||
- Keep stdin open after a container is launched, even if not attached.
|
||||
default: false
|
||||
|
||||
ipc_mode:
|
||||
description:
|
||||
- Set the IPC mode for the container. Can be one of
|
||||
'container:<name|id>' to reuse another container's IPC namespace
|
||||
or 'host' to use the host's IPC namespace within the container.
|
||||
default: null
|
||||
|
||||
keep_volumes:
|
||||
description:
|
||||
- Retain volumes associated with a removed container.
|
||||
default: false
|
||||
|
||||
kill_signal:
|
||||
description:
|
||||
- Override default signal used to kill a running container.
|
||||
default: null
|
||||
|
||||
kernel_memory:
|
||||
description:
|
||||
- Kernel memory limit (format: <number>[<unit>]). Number is a positive integer.
|
||||
Unit can be one of b, k, m, or g. Minimum is 4M.
|
||||
default: 0
|
||||
|
||||
labels:
|
||||
description:
|
||||
- Dictionary of key value pairs.
|
||||
default: null
|
||||
|
||||
links:
|
||||
description:
|
||||
- List of name aliases for linked containers in the format C(container_name:alias)
|
||||
default: null
|
||||
|
||||
log_driver:
|
||||
description:
|
||||
- Specify the logging driver.
|
||||
choices:
|
||||
- json-file
|
||||
- syslog
|
||||
- journald
|
||||
- gelf
|
||||
- fluentd
|
||||
- awslogs
|
||||
- splunk
|
||||
default: json-file
|
||||
|
||||
log_options:
|
||||
description:
|
||||
- Dictionary of options specific to the chosen log_driver. See https://docs.docker.com/engine/admin/logging/overview/
|
||||
for details.
|
||||
required: false
|
||||
default: null
|
||||
|
||||
mac_address:
|
||||
description:
|
||||
- Container MAC address (e.g. 92:d0:c6:0a:29:33)
|
||||
default: null
|
||||
|
||||
memory:
|
||||
description:
|
||||
- Memory limit (format: <number>[<unit>]). Number is a positive integer.
|
||||
Unit can be one of b, k, m, or g
|
||||
default: 0
|
||||
|
||||
memory_reservation:
|
||||
description:
|
||||
- Memory soft limit (format: <number>[<unit>]). Number is a positive integer.
|
||||
Unit can be one of b, k, m, or g
|
||||
default: 0
|
||||
|
||||
memory_swap:
|
||||
description:
|
||||
- Total memory limit (memory + swap, format:<number>[<unit>]).
|
||||
Number is a positive integer. Unit can be one of b, k, m, or g.
|
||||
default: 0
|
||||
|
||||
memory_swappiness:
|
||||
description:
|
||||
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
|
||||
default: 0
|
||||
|
||||
name:
|
||||
description:
|
||||
- Assign a name to a new container or match an existing container.
|
||||
- When identifying an existing container, name may be a name or a long or short container ID.
|
||||
required: true
|
||||
|
||||
network_mode:
|
||||
description:
|
||||
- Connect the container to a network.
|
||||
choices:
|
||||
- bridge
|
||||
- container:<name|id>
|
||||
- host
|
||||
- none
|
||||
default: null
|
||||
|
||||
networks:
|
||||
description:
|
||||
- Dictionary of networks to which the container will be connected. The dictionary must have a name key (the name of the network).
|
||||
Optional keys include: aliases (a list of container aliases), and links (a list of links in the format C(container_name:alias)).
|
||||
default: null
|
||||
|
||||
oom_killer:
|
||||
description:
|
||||
- Whether or not to disable OOM Killer for the container.
|
||||
default: false
|
||||
|
||||
paused:
|
||||
description:
|
||||
- Use with the started state to pause running processes inside the container.
|
||||
default: false
|
||||
|
||||
pid_mode:
|
||||
description:
|
||||
- Set the PID namespace mode for the container. Currently only supports 'host'.
|
||||
default: null
|
||||
|
||||
privileged:
|
||||
description:
|
||||
- Give extended privileges to the container.
|
||||
default: false
|
||||
|
||||
published_ports:
|
||||
description:
|
||||
- List of ports to publish from the container to the host.
|
||||
- Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
|
||||
container port, 9000 is a host port, and 0.0.0.0 is a host interface.
|
||||
- Container ports must be exposed either in the Dockerfile or via the C(expose) option.
|
||||
- A value of ALL will publish all exposed container ports to random host ports, ignoring
|
||||
any other mappings.
|
||||
aliases:
|
||||
- ports
|
||||
|
||||
read_only:
|
||||
description:
|
||||
- Mount the container's root file system as read-only.
|
||||
default: false
|
||||
|
||||
recreate:
|
||||
description:
|
||||
- Use with present and started states to force the re-creation of an existing container.
|
||||
default: false
|
||||
|
||||
restart:
|
||||
description:
|
||||
- Use with started state to force a matching container to be stopped and restarted.
|
||||
default: false
|
||||
|
||||
restart_policy:
|
||||
description:
|
||||
- Container restart policy.
|
||||
choices:
|
||||
- on-failure
|
||||
- always
|
||||
default: on-failure
|
||||
|
||||
restart_retries:
|
||||
description:
|
||||
- Use with restart policy to control maximum number of restart attempts.
|
||||
default: 0
|
||||
|
||||
shm_size:
|
||||
description:
|
||||
- Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
|
||||
Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes).
|
||||
- Omitting the unit defaults to bytes. If you omit the size entirely, the system uses `64m`.
|
||||
default: null
|
||||
|
||||
security_opts:
|
||||
description:
|
||||
- List of security options in the form of C("label:user:User")
|
||||
default: null
|
||||
|
||||
state:
|
||||
description:
|
||||
- "absent" - A container matching the specified name will be stopped and removed. Use force_kill to kill the container
|
||||
rather than stopping it. Use keep_volumes to retain volumes associated with the removed container.
|
||||
|
||||
- "present" - Asserts the existence of a container matching the name and any provided configuration parameters. If no
|
||||
container matches the name, a container will be created. If a container matches the name but the provided configuration
|
||||
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
|
||||
with the requested config. Use recreate to force the re-creation of the matching container. Use force_kill to kill the
|
||||
container rather than stopping it. Use keep_volumes to retain volumes associated with a removed container.
|
||||
|
||||
- "started" - Asserts there is a running container matching the name and any provided configuration. If no container
|
||||
matches the name, a container will be created and started. If a container matching the name is found but the
|
||||
configuration does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed
|
||||
and a new container will be created with the requested configuration and started. Use recreate to always re-create a
|
||||
matching container, even if it is running. Use restart to force a matching container to be stopped and restarted. Use
|
||||
force_kill to kill a container rather than stopping it. Use keep_volumes to retain volumes associated with a removed
|
||||
container.
|
||||
|
||||
- "stopped" - a container matching the specified name will be stopped. Use force_kill to kill a container rather than
|
||||
stopping it.
|
||||
|
||||
required: false
|
||||
default: started
|
||||
choices:
|
||||
- absent
|
||||
- present
|
||||
- stopped
|
||||
- started
|
||||
|
||||
stop_signal:
|
||||
description:
|
||||
- Override default signal used to stop the container.
|
||||
default: null
|
||||
|
||||
stop_timeout:
|
||||
description:
|
||||
- Number of seconds to wait for the container to stop before sending SIGKILL.
|
||||
required: false
|
||||
|
||||
trust_image_content:
|
||||
description:
|
||||
- If true, skip image verification.
|
||||
default: false
|
||||
|
||||
tty:
|
||||
description:
|
||||
- Allocate a pseudo-TTY.
|
||||
default: false
|
||||
|
||||
ulimits:
|
||||
description:
|
||||
- List of ulimit options. A ulimit is specified as C(nofile:262144:262144)
|
||||
default: null
|
||||
|
||||
user:
|
||||
description:
|
||||
- Sets the username or UID used and optionally the groupname or GID for the specified command.
|
||||
- Can be [ user | user:group | uid | uid:gid | user:gid | uid:group ]
|
||||
default: null
|
||||
|
||||
uts:
|
||||
description:
|
||||
- Set the UTS namespace mode for the container.
|
||||
default: null
|
||||
|
||||
volumes:
|
||||
description:
|
||||
- List of volumes to mount within the container.
|
||||
- 'Use docker CLI-style syntax: C(/host:/container[:mode])'
|
||||
- You can specify a read mode for the mount with either C(ro) or C(rw).
|
||||
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or
|
||||
private label for the volume.
|
||||
default: null
|
||||
|
||||
volume_driver:
|
||||
description:
|
||||
- The container's volume driver.
|
||||
default: none
|
||||
|
||||
volumes_from:
|
||||
description:
|
||||
- List of container names or IDs to get volumes from.
|
||||
default: null
|
||||
```
|
||||
|
||||
|
||||
## Examples:
|
||||
|
||||
```
|
||||
- name: Create a data container
|
||||
docker_container:
|
||||
name: mydata
|
||||
image: busybox
|
||||
volumes:
|
||||
- /data
|
||||
|
||||
- name: Re-create a redis container
|
||||
docker_container:
|
||||
name: myredis
|
||||
image: redis
|
||||
command: redis-server --appendonly yes
|
||||
state: present
|
||||
recreate: yes
|
||||
expose:
|
||||
- 6379
|
||||
volumes_from:
|
||||
- mydata
|
||||
|
||||
- name: Restart a container
|
||||
docker_container:
|
||||
name: myapplication
|
||||
image: someuser/appimage
|
||||
state: started
|
||||
restart: yes
|
||||
links:
|
||||
- "myredis:aliasedredis"
|
||||
devices:
|
||||
- "/dev/sda:/dev/xvda:rwm"
|
||||
ports:
|
||||
- "8080:9000"
|
||||
- "127.0.0.1:8081:9001/udp"
|
||||
env:
|
||||
SECRET_KEY: ssssh
|
||||
|
||||
|
||||
- name: Container present
|
||||
docker_container:
|
||||
name: mycontainer
|
||||
state: present
|
||||
recreate: yes
|
||||
force_kill: yes
|
||||
image: someplace/image
|
||||
command: echo "I'm here!"
|
||||
|
||||
|
||||
- name: Start 4 load-balanced containers
|
||||
docker_container:
|
||||
name: "container{{ item }}"
|
||||
state: started
|
||||
recreate: yes
|
||||
image: someuser/anotherappimage
|
||||
command: sleep 1d
|
||||
with_sequence: count=4
|
||||
|
||||
- name: Remove a container
|
||||
docker_container:
|
||||
name: ohno
|
||||
state: absent
|
||||
|
||||
- name: Syslogging output
|
||||
docker_container:
|
||||
name: myservice
|
||||
state: started
|
||||
log_driver: syslog
|
||||
log_options:
|
||||
syslog-address: tcp://my-syslog-server:514
|
||||
syslog-facility: daemon
|
||||
syslog-tag: myservice
|
||||
|
||||
```
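
The examples above do not exercise the networks or etc_hosts parameters. The sketch below is an illustration based on their descriptions only; the network, alias and host names are made up, and the proposal does not pin down whether networks takes a single mapping or a list of mappings (a list is assumed here):

```
- name: Start a container attached to a user-defined network
  docker_container:
    name: webapp
    image: someuser/webapp
    state: started
    networks:
      - name: app_network
        aliases:
          - web
        links:
          - "myredis:aliasedredis"
    etc_hosts:
      internal-api.example.com: 10.0.0.5
```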
|
||||
|
||||
## Returns:
|
||||
|
||||
The JSON object returned by the module will include a *results* object providing `docker inspect` output for the affected container.
|
||||
|
||||
```
|
||||
{
|
||||
changed: True,
|
||||
failed: False,
|
||||
rc: 0
|
||||
results: {
|
||||
< the results of `docker inspect` >
|
||||
}
|
||||
}
|
||||
```
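
As a usage sketch (not part of the proposal itself), the returned results could be registered and consumed in a follow-up task. Which `docker inspect` keys end up under results is an assumption here, based on the structure shown above:

```
- name: Start a container and capture its inspect data
  docker_container:
    name: myservice
    image: someuser/appimage
    state: started
  register: myservice_output

- name: Show the IP address reported by docker inspect (key names assumed)
  debug:
    msg: "{{ myservice_output.results.NetworkSettings.IPAddress }}"
```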
|
@ -0,0 +1,159 @@
|
||||
# Docker_Files Modules Proposal
|
||||
|
||||
## Purpose and Scope
|
||||
|
||||
The purpose of docker_files is to provide for retrieving a file or folder from a container's file system,
|
||||
inserting a file or folder into a container, exporting a container's entire filesystem as a tar archive, or
|
||||
retrieving a list of changed files from a container's file system.
|
||||
|
||||
Docker_files will manage a container using docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
|
||||
how other cloud modules operate.
|
||||
|
||||
## Parameters
|
||||
|
||||
Docker_files accepts the parameters listed below. API connection parameters will be part of a shared utility module
|
||||
as mentioned above.
|
||||
|
||||
```
|
||||
diff:
|
||||
description:
|
||||
- Provide a list of container names or IDs. For each container a list of changed files and directories found on the
|
||||
container's file system will be returned. Diff is mutually exclusive with all other options except event_type.
|
||||
Use event_type to choose which events to include in the output.
|
||||
default: null
|
||||
|
||||
export:
|
||||
description:
|
||||
- Provide a container name or ID. The container's file system will be exported to a tar archive. Use dest
|
||||
to provide a path for the archive on the local file system. If the output file already exists, it will not be
|
||||
overwritten. Use the force option to overwrite an existing archive.
|
||||
default: null
|
||||
|
||||
dest:
|
||||
description:
|
||||
- Destination path of copied files. If the destination is a container file system, precede the path with a
|
||||
container name or ID + ':'. For example, C(mycontainer:/path/to/file.txt). If the destination path does not
|
||||
exist, it will be created. If the destination path exists on the local filesystem, it will not be overwritten.
|
||||
Use the force option to overwrite existing files on the local filesystem.
|
||||
default: null
|
||||
|
||||
force:
|
||||
description:
|
||||
- Overwrite existing files on the local filesystem.
|
||||
default: false
|
||||
|
||||
follow_link:
|
||||
description:
|
||||
- Follow symbolic links in the src path. If src is local and the file is a symbolic link, the symbolic link, not the
|
||||
target, is copied by default. To copy the link target and not the link, set follow_link to true.
|
||||
default: false
|
||||
|
||||
event_type:
|
||||
description:
|
||||
- Select the specific event type to list in the diff output.
|
||||
choices:
|
||||
- all
|
||||
- add
|
||||
- delete
|
||||
- change
|
||||
default: all
|
||||
|
||||
src:
|
||||
description:
|
||||
- The source path of file(s) to be copied. If source files are found on the container's file system, precede the
|
||||
path with the container name or ID + ':'. For example, C(mycontainer:/path/to/files).
|
||||
default: null
|
||||
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
```
|
||||
- name: Copy files from the local file system to a container's file system
|
||||
docker_files:
|
||||
src: /tmp/rpm
|
||||
dest: mycontainer:/tmp
|
||||
follow_link: yes
|
||||
|
||||
- name: Copy files from the container to the local filesystem and overwrite existing files
|
||||
docker_files:
|
||||
src: container1:/var/lib/data
|
||||
dest: /tmp/container1/data
|
||||
force: yes
|
||||
|
||||
- name: Export container filesystem
|
||||
docker_files:
|
||||
export: container1
|
||||
dest: /tmp/container1.tar
|
||||
force: yes
|
||||
|
||||
- name: List all differences for multiple containers.
|
||||
docker_files:
|
||||
diff:
|
||||
- mycontainer1
|
||||
- mycontainer2
|
||||
|
||||
- name: Include only changed files in the diff output
|
||||
docker_files:
|
||||
diff:
|
||||
- mycontainer1
|
||||
event_type: change
|
||||
```
|
||||
|
||||
## Returns
|
||||
|
||||
Returned from diff:
|
||||
|
||||
```
|
||||
{
|
||||
changed: false,
|
||||
failed: false,
|
||||
rc: 0,
|
||||
results: {
|
||||
mycontainer1: [
|
||||
{ state: 'C', path: '/dev' },
|
||||
{ state: 'A', path: '/dev/kmsg' },
|
||||
{ state: 'C', path: '/etc' },
|
||||
{ state: 'A', path: '/etc/mtab' }
|
||||
],
|
||||
mycontainer2: [
|
||||
{ state: 'C', path: '/foo' },
|
||||
{ state: 'A', path: '/foo/bar.txt' }
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Returned when copying files:
|
||||
|
||||
```
|
||||
{
|
||||
changed: true,
|
||||
failed: false,
|
||||
rc: 0,
|
||||
results: {
|
||||
src: /tmp/rpms,
|
||||
dest: mycontainer:/tmp
|
||||
files_copied: [
|
||||
'file1.txt',
|
||||
'file2.jpg'
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Returned when exporting a container filesystem:
|
||||
|
||||
```
|
||||
{
|
||||
changed: true,
|
||||
failed: false,
|
||||
rc: 0,
|
||||
results: {
|
||||
src: container_name,
|
||||
dest: local/path/archive_name.tar
|
||||
}
|
||||
}
|
||||
|
||||
```
|
@ -0,0 +1,47 @@
|
||||
|
||||
# Docker_Image_Facts Module Proposal
|
||||
|
||||
## Purpose and Scope
|
||||
|
||||
The purpose of docker_image_facts is to inspect docker images.
|
||||
|
||||
Docker_image_facts will use docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
|
||||
to how other cloud modules operate.
|
||||
|
||||
## Parameters
|
||||
|
||||
Docker_image_facts will support the parameters listed below. API connection parameters will be part of a shared
|
||||
utility module as mentioned above.
|
||||
|
||||
```
|
||||
name:
|
||||
description:
|
||||
- An image name or list of image names. The image name can include a tag using the format C(name:tag).
|
||||
default: null
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
```
|
||||
- name: Inspect all images
|
||||
docker_image_facts:
|
||||
register: image_facts
|
||||
|
||||
- name: Inspect a single image
|
||||
docker_image_facts:
|
||||
name: myimage:v1
|
||||
register: myimage_v1_facts
|
||||
```
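
Since name also accepts a list of image names, a sketch for inspecting several images in one task (the image names are illustrative) could be:

```
- name: Inspect several images at once
  docker_image_facts:
    name:
      - myimage:v1
      - busybox:latest
  register: multi_image_facts
```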
|
||||
|
||||
## Returns
|
||||
|
||||
```
|
||||
{
|
||||
changed: False
|
||||
failed: False
|
||||
rc: 0
|
||||
result: [ < inspection output > ]
|
||||
}
|
||||
```
|
||||
|
@ -0,0 +1,207 @@
|
||||
|
||||
# Docker_Image Module Proposal
|
||||
|
||||
## Purpose and Scope
|
||||
|
||||
The purpose is to update the existing docker_image module. The updates include expanding the module's capabilities to
|
||||
match the build, load, pull, push, rmi, and save docker commands and adding support for remote registries.
|
||||
|
||||
Docker_image will manage images using docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
|
||||
to how other cloud modules operate.
|
||||
|
||||
## Parameters
|
||||
|
||||
Docker_image will support the parameters listed below. API connection parameters will be part of a shared utility
|
||||
module as mentioned above.
|
||||
|
||||
```
|
||||
archive_path:
|
||||
description:
|
||||
- Save image to the provided path. Use with state present to always save the image to a tar archive. If
|
||||
intermediate directories in the path do not exist, they will be created. If a matching
|
||||
archive already exists, it will be overwritten.
|
||||
default: null
|
||||
|
||||
config_path:
|
||||
description:
|
||||
- Path to a custom docker config file. Docker-py defaults to using ~/.docker/config.json.
|
||||
|
||||
cgroup_parent:
|
||||
description:
|
||||
- Optional parent cgroup for build containers.
|
||||
default: null
|
||||
|
||||
cpu_shares:
|
||||
description:
|
||||
- CPU shares for build containers. Integer value.
|
||||
default: 0
|
||||
|
||||
cpuset_cpus:
|
||||
description:
|
||||
- CPUs in which to allow build container execution C(1,3) or C(1-3).
|
||||
default: null
|
||||
|
||||
dockerfile:
|
||||
description:
|
||||
- Name of dockerfile to use when building an image.
|
||||
default: Dockerfile
|
||||
|
||||
email:
|
||||
description:
|
||||
- The email for the registry account. Provide with username and password when credentials are not encoded
|
||||
in docker configuration file or when encoded credentials should be updated.
|
||||
default: null
|
||||
nolog: true
|
||||
|
||||
force:
|
||||
description:
|
||||
- Use with absent state to un-tag and remove all images matching the specified name. Use with present state to
|
||||
force a pull or rebuild of the image.
|
||||
default: false
|
||||
|
||||
load_path:
|
||||
description:
|
||||
- Use with state present to load a previously saved image. Provide the full path to the image archive file.
|
||||
default: null
|
||||
|
||||
memory:
|
||||
description:
|
||||
- Build container limit. Memory limit specified as a positive integer for number of bytes.
|
||||
|
||||
memswap:
|
||||
description:
|
||||
- Build container limit. Total memory (memory + swap). Specify as a positive integer for number of bytes or
|
||||
-1 to disable swap.
|
||||
default: null
|
||||
|
||||
name:
|
||||
description:
|
||||
- Image name or ID.
|
||||
required: true
|
||||
|
||||
nocache:
|
||||
description:
|
||||
- Do not use cache when building an image.
|
||||
default: false
|
||||
|
||||
password:
|
||||
description:
|
||||
- Password used when connecting to the registry. Provide with username and email when credentials are not encoded
|
||||
in docker configuration file or when encoded credentials should be updated.
|
||||
default: null
|
||||
nolog: true
|
||||
|
||||
path:
|
||||
description:
|
||||
- Path to Dockerfile and context from which to build an image.
|
||||
default: null
|
||||
|
||||
push:
|
||||
description:
|
||||
- Use with state present to always push an image to the registry.
|
||||
default: false
|
||||
|
||||
registry:
|
||||
description:
|
||||
- URL of the registry. If not provided, defaults to Docker Hub.
|
||||
default: null
|
||||
|
||||
rm:
|
||||
description:
|
||||
- Remove intermediate containers after build.
|
||||
default: true
|
||||
|
||||
tag:
|
||||
description:
|
||||
- Image tags. When pulling or pushing, set to 'all' to include all tags.
|
||||
default: latest
|
||||
|
||||
url:
|
||||
description:
|
||||
- The location of a Git repository. The repository acts as the context when building an image.
|
||||
- Mutually exclusive with path.
|
||||
|
||||
username:
|
||||
description:
|
||||
- Username used when connecting to the registry. Provide with password and email when credentials are not encoded
|
||||
in docker configuration file or when encoded credentials should be updated.
|
||||
default: null
|
||||
nolog: true
|
||||
|
||||
state:
|
||||
description:
|
||||
- "absent" - if image exists, unconditionally remove it. Use the force option to un-tag and remove all images
|
||||
matching the provided name.
|
||||
- "present" - check if image is present with the provided tag. If the image is not present or the force option
|
||||
is used, the image will either be pulled from the registry, built or loaded from an archive. To build the image,
|
||||
provide a path or url to the context and Dockerfile. To load an image, use load_path to provide a path to
|
||||
an archive file. If no path, url or load_path is provided, the image will be pulled. Use the registry
|
||||
parameters to control the registry from which the image is pulled.
|
||||
|
||||
required: false
|
||||
default: present
|
||||
choices:
|
||||
- absent
|
||||
- present
|
||||
|
||||
http_timeout:
|
||||
description:
|
||||
- Timeout for HTTP requests during the image build operation. Provide a positive integer value for the number of
|
||||
seconds.
|
||||
default: null
|
||||
|
||||
```
|
||||
|
||||
|
||||
## Examples
|
||||
|
||||
```
|
||||
- name: build image
|
||||
docker_image:
|
||||
path: "/path/to/build/dir"
|
||||
name: "my_app"
|
||||
tags:
|
||||
- v1.0
|
||||
- mybuild
|
||||
|
||||
- name: force pull an image and all tags
|
||||
docker_image:
|
||||
name: "my/app"
|
||||
force: yes
|
||||
tags: all
|
||||
|
||||
- name: untag and remove image
|
||||
docker_image:
|
||||
name: "my/app"
|
||||
state: absent
|
||||
force: yes
|
||||
|
||||
- name: push an image to Docker Hub with all tags
|
||||
docker_image:
|
||||
name: my_image
|
||||
push: yes
|
||||
tags: all
|
||||
|
||||
- name: pull image from a private registry
|
||||
docker_image:
|
||||
name: centos
|
||||
registry: https://private_registry:8080
|
||||
|
||||
```
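
The examples above do not cover the archive parameters. A sketch based on the archive_path and load_path descriptions (the paths are illustrative) might look like:

```
- name: Pull an image and save it to a tar archive
  docker_image:
    name: centos
    archive_path: /tmp/images/centos.tar

- name: Load the previously saved image from the archive
  docker_image:
    name: centos
    load_path: /tmp/images/centos.tar
    state: present
```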
|
||||
|
||||
|
||||
## Returns
|
||||
|
||||
```
|
||||
{
|
||||
changed: True
|
||||
failed: False
|
||||
rc: 0
|
||||
action: built | pulled | loaded | removed | none
|
||||
msg: < text confirming the action that was taken >
|
||||
results: {
|
||||
< output from docker inspect for the affected image >
|
||||
}
|
||||
}
|
||||
```
|
@ -0,0 +1,48 @@
|
||||
|
||||
# Docker_Network_Facts Module Proposal
|
||||
|
||||
## Purpose and Scope
|
||||
|
||||
Docker_network_facts will inspect networks.
|
||||
|
||||
Docker_network_facts will use docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
|
||||
to how other cloud modules operate.
|
||||
|
||||
## Parameters
|
||||
|
||||
Docker_network_facts will accept the parameters listed below. API connection parameters will be part of a shared
|
||||
utility module as mentioned above.
|
||||
|
||||
```
|
||||
name:
|
||||
description:
|
||||
- Network name or list of network names.
|
||||
default: null
|
||||
|
||||
```
|
||||
|
||||
|
||||
## Examples
|
||||
|
||||
```
|
||||
- name: Inspect all networks
|
||||
docker_network_facts:
|
||||
register: network_facts
|
||||
|
||||
- name: Inspect a specific network
|
||||
docker_network_facts:
|
||||
name: web_app
|
||||
register: web_app_facts
|
||||
```
|
||||
|
||||
## Returns
|
||||
|
||||
```
|
||||
{
|
||||
changed: False
|
||||
failed: False
|
||||
rc: 0
|
||||
results: [ < inspection output > ]
|
||||
}
|
||||
```
|
@ -0,0 +1,130 @@
|
||||
# Docker_Network Module Proposal
|
||||
|
||||
## Purpose and Scope:
|
||||
|
||||
The purpose of Docker_network is to create networks, connect containers to networks, disconnect containers from
|
||||
networks, and delete networks.
|
||||
|
||||
Docker network will manage networks using docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
|
||||
how other cloud modules operate.
|
||||
|
||||
## Parameters:
|
||||
|
||||
Docker_network will accept the parameters listed below. Parameters related to connecting to the API will be handled in
|
||||
a shared utility module, as mentioned above.
|
||||
|
||||
```
|
||||
connected:
|
||||
description:
|
||||
- List of container names or container IDs to connect to a network.
|
||||
default: null
|
||||
|
||||
driver:
|
||||
description:
|
||||
- Specify the type of network. Docker provides bridge and overlay drivers, but 3rd party drivers can also be used.
|
||||
default: bridge
|
||||
|
||||
driver_options:
|
||||
description:
|
||||
- Dictionary of network settings. Consult docker docs for valid options and values.
|
||||
default: null
|
||||
|
||||
force:
|
||||
description:
|
||||
- With state 'absent', forces disconnection of all containers from the network prior to deleting the network. With
|
||||
state 'present', disconnects all containers, deletes the network and re-creates the network.
|
||||
default: false
|
||||
|
||||
incremental:
|
||||
description:
|
||||
- By default the connected list is canonical, meaning containers not on the list are removed from the network.
|
||||
Use incremental to leave existing containers connected.
|
||||
default: false
|
||||
|
||||
ipam_driver:
|
||||
description:
|
||||
- Specify an IPAM driver.
|
||||
default: null
|
||||
|
||||
ipam_options:
|
||||
description:
|
||||
- Dictionary of IPAM options.
|
||||
default: null
|
||||
|
||||
network_name:
|
||||
description:
|
||||
- Name of the network to operate on.
|
||||
default: null
|
||||
required: true
|
||||
|
||||
state:
|
||||
description:
|
||||
- "absent" deletes the network. If a network has connected containers, it cannot be deleted. Use the force option
|
||||
to disconnect all containers and delete the network.
|
||||
- "present" creates the network, if it does not already exist with the specified parameters, and connects the list
|
||||
of containers provided via the connected parameter. Containers not on the list will be disconnected. An empty
|
||||
list will leave no containers connected to the network. Use the incremental option to leave existing containers
|
||||
connected. Use the force options to force re-creation of the network.
|
||||
default: present
|
||||
choices:
|
||||
- absent
|
||||
- present
|
||||
```
|
||||
|
||||
|
||||
## Examples:
|
||||
|
||||
```
|
||||
- name: Create a network
|
||||
docker_network:
|
||||
name: network_one
|
||||
|
||||
- name: Remove all but a selected list of containers
|
||||
docker_network:
|
||||
name: network_one
|
||||
connected:
|
||||
- containera
|
||||
- containerb
|
||||
- containerc
|
||||
|
||||
- name: Remove a single container
|
||||
docker_network:
|
||||
name: network_one
|
||||
connected: "{{ fulllist|difference(['containera']) }}"
|
||||
|
||||
- name: Add a container to a network, leaving existing containers connected
|
||||
docker_network:
|
||||
name: network_one
|
||||
connected:
|
||||
- containerc
|
||||
incremental: yes
|
||||
|
||||
- name: Create a network with options (Not sure if 'ip_range' is correct key name)
|
||||
docker_network:
|
||||
name: network_two
|
||||
options:
|
||||
subnet: '172.3.26.0/16'
|
||||
gateway: 172.3.26.1
|
||||
ip_range: '192.168.1.0/24'
|
||||
|
||||
- name: Delete a network, disconnecting all containers
|
||||
docker_network:
|
||||
name: network_one
|
||||
state: absent
|
||||
force: yes
|
||||
```
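
A sketch using the documented driver_options, ipam_driver and ipam_options parameters; the specific option keys are assumptions that mirror docker network create flags, not values defined by this proposal:

```
- name: Create an overlay network with driver and IPAM options
  docker_network:
    name: network_three
    driver: overlay
    driver_options:
      com.docker.network.driver.mtu: 1400
    ipam_driver: default
    ipam_options:
      subnet: '172.3.27.0/24'
      gateway: 172.3.27.1
```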
|
||||
|
||||
## Returns:
|
||||
|
||||
```
|
||||
{
|
||||
changed: True,
|
||||
failed: false
|
||||
rc: 0
|
||||
action: created | removed | none
|
||||
results: {
|
||||
< results from docker inspect for the affected network >
|
||||
}
|
||||
}
|
||||
```
|
@ -0,0 +1,48 @@
|
||||
|
||||
# Docker_Volume_Facts Module Proposal
|
||||
|
||||
## Purpose and Scope
|
||||
|
||||
Docker_volume_facts will inspect volumes.
|
||||
|
||||
Docker_volume_facts will use docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
|
||||
to how other cloud modules operate.
|
||||
|
||||
## Parameters
|
||||
|
||||
Docker_volume_facts will accept the parameters listed below. API connection parameters will be part of a shared
|
||||
utility module as mentioned above.
|
||||
|
||||
|
||||
```
|
||||
name:
|
||||
description:
|
||||
- Volume name or list of volume names.
|
||||
default: null
|
||||
```
|
||||
|
||||
|
||||
## Examples
|
||||
|
||||
```
|
||||
- name: Inspect all volumes
|
||||
docker_volume_facts:
|
||||
register: volume_facts
|
||||
|
||||
- name: Inspect a specific volume
|
||||
docker_volume_facts:
|
||||
name: data
|
||||
register: data_vol_facts
|
||||
```
|
||||
|
||||
## Returns
|
||||
|
||||
```
|
||||
{
|
||||
changed: False
|
||||
failed: False
|
||||
rc: 0
|
||||
results: [ < output from volume inspection > ]
|
||||
}
|
||||
```
|
@ -0,0 +1,82 @@
|
||||
# Docker_Volume Modules Proposal
|
||||
|
||||
## Purpose and Scope
|
||||
|
||||
The purpose of docker_volume is to manage volumes.
|
||||
|
||||
Docker_volume will manage volumes using docker-py to communicate with either a local or remote API. It will
|
||||
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
|
||||
to how other cloud modules operate.
|
||||
|
||||
## Parameters
|
||||
|
||||
Docker_volume accepts the parameters listed below. Parameters for connecting to the API are not listed here, as they
|
||||
will be part of the shared module mentioned above.
|
||||
|
||||
```
|
||||
driver:
|
||||
description:
|
||||
- Volume driver.
|
||||
default: local
|
||||
|
||||
force:
|
||||
description:
|
||||
- Use with state 'present' to force removal and re-creation of an existing volume. This will not remove and
|
||||
re-create the volume if it is already in use.
|
||||
|
||||
name:
|
||||
description:
|
||||
- Name of the volume.
|
||||
required: true
|
||||
default: null
|
||||
|
||||
options:
|
||||
description:
|
||||
- Dictionary of driver specific options. The local driver does not currently support
|
||||
any options.
|
||||
default: null
|
||||
|
||||
state:
|
||||
description:
|
||||
- "absent" removes a volume. A volume cannot be removed if it is in use.
|
||||
- "present" create a volume with the specified name, if the volume does not already exist. Use the force
|
||||
option to remove and re-create a volume. Even with the force option a volume cannot be removed and re-created if
|
||||
it is in use.
|
||||
default: present
|
||||
choices:
|
||||
- absent
|
||||
- present
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
```
|
||||
- name: Create a volume
|
||||
docker_volume:
|
||||
name: data
|
||||
|
||||
- name: Remove a volume
|
||||
docker_volume:
|
||||
name: data
|
||||
state: absent
|
||||
|
||||
- name: Re-create an existing volume
|
||||
docker_volume:
|
||||
name: data
|
||||
state: present
|
||||
force: yes
|
||||
```
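
A sketch showing the driver and options parameters together; the driver name and option key below are illustrative only, since the local driver accepts no options:

```
- name: Create a volume with a third-party driver and driver options
  docker_volume:
    name: shared_data
    driver: flocker
    options:
      size: 10G
```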
|
||||
|
||||
## Returns
|
||||
|
||||
```
|
||||
{
|
||||
changed: true,
|
||||
failed: false,
|
||||
rc: 0,
|
||||
action: removed | created | none
|
||||
results: {
|
||||
< show the result of docker inspect of an affected volume >
|
||||
}
|
||||
}
|
||||
```
|
@ -0,0 +1,205 @@
|
||||
# Publish / Subscribe for Handlers
|
||||
|
||||
*Author*: René Moser <@resmo>
|
||||
|
||||
*Date*: 07/03/2016
|
||||
|
||||
## Motivation
|
||||
|
||||
In some use cases a publish/subscribe style of notifying handlers is more convenient, e.g. restarting services after replacing SSL certs.
|
||||
|
||||
However, Ansible does not yet provide a built-in way to handle this.
|
||||
|
||||
|
||||
### Problem
|
||||
|
||||
If your SSL cert changes, you usually have to reload/restart services to use the new certificate.
|
||||
|
||||
However, if you have an SSL role or a generic SSL play, you usually don't want to add service-specific handlers to it.
|
||||
Instead it would be much more convenient to use a publish/subscribe kind of paradigm in the roles where the services are configured in.
|
||||
|
||||
The way we currently implement it:
|
||||
|
||||
We use notify to set a fact, and later (in different plays) we act on that fact using notify again.
|
||||
|
||||
~~~yaml
|
||||
---
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: copy an ssl cert
|
||||
shell: echo cert has been changed
|
||||
notify: publish ssl cert change
|
||||
handlers:
|
||||
- name: publish ssl cert change
|
||||
set_fact:
|
||||
ssl_cert_changed: true
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: subscribe for ssl cert change
|
||||
shell: echo cert changed
|
||||
notify: service restart one
|
||||
when: ssl_cert_changed is defined and ssl_cert_changed
|
||||
handlers:
|
||||
- name: service restart one
|
||||
shell: echo service one restarted
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: subscribe for ssl cert change
|
||||
shell: echo cert changed
|
||||
when: ssl_cert_changed is defined and ssl_cert_changed
|
||||
notify: service restart two
|
||||
handlers:
|
||||
- name: service restart two
|
||||
shell: echo service two restarted
|
||||
~~~
|
||||
|
||||
However, this looks like a workaround for a feature that Ansible should provide in a much cleaner way.
|
||||
|
||||
## Approaches
|
||||
|
||||
### Approach 1:
|
||||
|
||||
Provide new `subscribe` keyword on handlers:
|
||||
|
||||
~~~yaml
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: copy an ssl cert
|
||||
shell: echo cert has been changed
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
handlers:
|
||||
- name: service restart one
|
||||
shell: echo service one restarted
|
||||
subscribe: copy an ssl cert
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
handlers:
|
||||
- name: service restart two
|
||||
shell: echo service two restarted
|
||||
subscribe: copy an ssl cert
|
||||
~~~
|
||||
|
||||
### Approach 2:
|
||||
|
||||
Provide new `subscribe` on handlers and `publish` keywords in tasks:
|
||||
|
||||
~~~yaml
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: copy an ssl cert
|
||||
shell: echo cert has been changed
|
||||
publish: yes
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
handlers:
|
||||
- name: service restart one
|
||||
shell: echo service one restarted
|
||||
subscribe: copy an ssl cert
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
handlers:
|
||||
- name: service restart two
|
||||
shell: echo service two restarted
|
||||
subscribe: copy an ssl cert
|
||||
~~~
|
||||
|
||||
### Approach 3:
|
||||
|
||||
Provide new `subscribe` module:
|
||||
|
||||
A subscribe module could consume the results of a task by name; optionally, the value to react on could be specified (default: `changed`).
|
||||
|
||||
~~~yaml
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: copy an ssl cert
|
||||
shell: echo cert has been changed
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- subscribe:
|
||||
name: copy an ssl cert
|
||||
notify: service restart one
|
||||
handlers:
|
||||
- name: service restart one
|
||||
shell: echo service one restarted
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- subscribe:
|
||||
name: copy an ssl cert
|
||||
react_on: changed
|
||||
notify: service restart two
|
||||
handlers:
|
||||
- name: service restart two
|
||||
shell: echo service two restarted
|
||||
~~~
|
||||
|
||||
|
||||
### Approach 4:
|
||||
|
||||
Provide new `subscribe` module (same as Approach 3) and `publish` keyword:
|
||||
|
||||
~~~yaml
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: copy an ssl cert
|
||||
shell: echo cert has been changed
|
||||
publish: yes
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- subscribe:
|
||||
name: copy an ssl cert
|
||||
notify: service restart one
|
||||
handlers:
|
||||
- name: service restart one
|
||||
shell: echo service one restarted
|
||||
|
||||
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- subscribe:
|
||||
name: copy an ssl cert
|
||||
notify: service restart two
|
||||
handlers:
|
||||
- name: service restart two
|
||||
shell: echo service two restarted
|
||||
~~~
|
||||
|
||||
### Clarifications about role dependencies and publish
|
||||
|
||||
When using service roles that hold the subscription handlers, and the publish task (e.g. the cert change) is defined in a dependency role (the SSL role), only the first service role that runs the "cert change" task as a dependency will trigger the publish.
|
||||
|
||||
For any other service role in the playbook that has the SSL role as a dependency, the task will no longer report `changed`.
|
||||
|
||||
Therefore, a message that has been published once should not be overwritten or "unpublished" by running the same task again in a later role in the playbook.
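
As an illustration of this (role names are made up, and the publish/subscribe keywords are the ones sketched in Approach 2), two service roles depending on the same SSL role would both need to see the published event, even though only the first dependency run reports the cert task as changed:

~~~yaml
# roles/ssl/tasks/main.yml
- name: copy an ssl cert
  shell: echo cert has been changed
  publish: yes

# roles/service_one/meta/main.yml
dependencies:
  - role: ssl

# roles/service_one/handlers/main.yml
- name: restart service one
  shell: echo service one restarted
  subscribe: copy an ssl cert

# roles/service_two/meta/main.yml (also depends on ssl; its cert task will
# not report changed again, but its handler must still see the publish)
dependencies:
  - role: ssl

# roles/service_two/handlers/main.yml
- name: restart service two
  shell: echo service two restarted
  subscribe: copy an ssl cert
~~~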
|
||||
|
||||
## Conclusion
|
||||
|
||||
Feedback is requested to improve any of the above approaches or to suggest further approaches to solving this problem.
|
@ -0,0 +1,77 @@
|
||||
# Proposal: Re-run handlers cli option
|
||||
|
||||
*Author*: René Moser <@resmo>
|
||||
|
||||
*Date*: 07/03/2016
|
||||
|
||||
- Status: New
|
||||
|
||||
## Motivation
|
||||
|
||||
One of the most annoying things users face when running Ansible in production is having to run handlers manually because a task failed after a handler had been notified.
|
||||
|
||||
### Problems
|
||||
|
||||
Handler notifications get lost when a task fails, and Ansible provides no help for catching up on the notified handlers in the next ansible-playbook run.
|
||||
|
||||
~~~yaml
|
||||
- hosts: localhost
|
||||
gather_facts: no
|
||||
tasks:
|
||||
- name: simple task
|
||||
shell: echo foo
|
||||
notify: get msg out
|
||||
|
||||
- name: this tasks fails
|
||||
fail: msg="something went wrong"
|
||||
|
||||
handlers:
|
||||
- name: get msg out
|
||||
shell: echo handler run
|
||||
~~~
|
||||
|
||||
Result:
|
||||
|
||||
~~~
|
||||
$ ansible-playbook test.yml
|
||||
|
||||
PLAY ***************************************************************************
|
||||
|
||||
TASK [simple task] *************************************************************
|
||||
changed: [localhost]
|
||||
|
||||
TASK [this tasks fails] ********************************************************
|
||||
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "something went wrong"}
|
||||
|
||||
NO MORE HOSTS LEFT *************************************************************
|
||||
|
||||
RUNNING HANDLER [get msg out] **************************************************
|
||||
to retry, use: --limit @test.retry
|
||||
|
||||
PLAY RECAP *********************************************************************
|
||||
localhost : ok=1 changed=1 unreachable=0 failed=1
|
||||
~~~
|
||||
|
||||
## Solution proposal
|
||||
|
||||
Similar to the retry file, Ansible should provide a way to manually invoke a list of handlers in addition to the handlers notified in the plays:
|
||||
|
||||
~~~
|
||||
$ ansible-playbook test.yml --notify-handlers <handler>,<handler>,<handler>
|
||||
$ ansible-playbook test.yml --notify-handlers @test.handlers
|
||||
~~~
|
||||
|
||||
Example:
|
||||
|
||||
~~~
|
||||
$ ansible-playbook test.yml --notify-handlers "get msg out"
|
||||
~~~
|
||||
|
||||
The stdout of a failed play should provide an example of how to run the notified handlers in the next run:
|
||||
|
||||
~~~
|
||||
...
|
||||
RUNNING HANDLER [get msg out] **************************************************
|
||||
to retry, use: --limit @test.retry --notify-handlers @test.handlers
|
||||
~~~
|
||||
|
@ -0,0 +1,34 @@
# Rename always_run to ignore_checkmode

*Author*: René Moser <@resmo>

*Date*: 02/03/2016

## Motivation

The task argument `always_run` is misleading.

Ansible is known for being readable by users without deep knowledge of writing playbooks, yet such users do not understand what `always_run` does at first glance.

### Problems

The following looks scary if you have no idea what `always_run` does:

```
- shell: dangerous_cleanup.sh
  when: cleanup == "yes"
  always_run: yes
```

You have a conditional, but also a word that says `always`. The two conflict in terms of understanding.

## Solution Proposal

Deprecate `always_run` by renaming it to `ignore_checkmode`:

```
- shell: dangerous_cleanup.sh
  when: cleanup == "yes"
  ignore_checkmode: yes
```
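
For context, the behaviour being renamed concerns check mode: a task flagged this way still executes when the playbook runs with `--check`. A minimal way to observe it (a sketch, assuming the snippet above is wrapped in a play and saved as `cleanup.yml`):

```
$ ansible-playbook cleanup.yml --check -e cleanup=yes
```
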
@ -1,205 +0,0 @@
{#
    basic/layout.html
    ~~~~~~~~~~~~~~~~~

    Master layout template for Sphinx themes.

    :copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
#}
{%- block doctype -%}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
{%- endblock %}
{%- set reldelim1 = reldelim1 is not defined and ' »' or reldelim1 %}
{%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %}
{%- set render_sidebar = (not embedded) and (not theme_nosidebar|tobool) and
                         (sidebars != []) %}
{%- set url_root = pathto('', 1) %}
{# XXX necessary? #}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
  {%- set titlesuffix = " — "|safe + docstitle|e %}
{%- else %}
  {%- set titlesuffix = "" %}
{%- endif %}

{%- macro relbar() %}
    <div class="related">
      <h3>{{ _('Navigation') }}</h3>
      <ul>
        {%- for rellink in rellinks %}
        <li class="right" {% if loop.first %}style="margin-right: 10px"{% endif %}>
          <a href="{{ pathto(rellink[0]) }}" title="{{ rellink[1]|striptags|e }}"
             {{ accesskey(rellink[2]) }}>{{ rellink[3] }}</a>
          {%- if not loop.first %}{{ reldelim2 }}{% endif %}</li>
        {%- endfor %}
        {%- block rootrellink %}
        <li><a href="{{ pathto(master_doc) }}">{{ shorttitle|e }}</a>{{ reldelim1 }}</li>
        {%- endblock %}
        {%- for parent in parents %}
          <li><a href="{{ parent.link|e }}" {% if loop.last %}{{ accesskey("U") }}{% endif %}>{{ parent.title }}</a>{{ reldelim1 }}</li>
        {%- endfor %}
        {%- block relbaritems %} {% endblock %}
      </ul>
    </div>
{%- endmacro %}

{%- macro sidebar() %}
  {%- if render_sidebar %}
  <div class="sphinxsidebar">
    <div class="sphinxsidebarwrapper">
      {%- block sidebarlogo %}
      {%- if logo %}
        <p class="logo"><a href="{{ pathto(master_doc) }}">
          <img class="logo" src="{{ pathto('_static/' + logo, 1) }}" alt="Logo"/>
        </a></p>
      {%- endif %}
      {%- endblock %}
      {%- if sidebars != None %}
        {#- new style sidebar: explicitly include/exclude templates #}
        {%- for sidebartemplate in sidebars %}
        {%- include sidebartemplate %}
        {%- endfor %}
      {%- else %}
        {#- old style sidebars: using blocks -- should be deprecated #}
        {%- block sidebartoc %}
        {%- include "localtoc.html" %}
        {%- endblock %}
        {%- block sidebarrel %}
        {%- include "relations.html" %}
        {%- endblock %}
        {%- block sidebarsourcelink %}
        {%- include "sourcelink.html" %}
        {%- endblock %}
        {%- if customsidebar %}
        {%- include customsidebar %}
        {%- endif %}
        {%- block sidebarsearch %}
        {%- include "searchbox.html" %}
        {%- endblock %}
      {%- endif %}
    </div>
  </div>
  {%- endif %}
{%- endmacro %}

{%- macro script() %}
    <script type="text/javascript">
      var DOCUMENTATION_OPTIONS = {
        URL_ROOT: '{{ url_root }}',
        VERSION: '{{ release|e }}',
        COLLAPSE_INDEX: false,
        FILE_SUFFIX: '{{ '' if no_search_suffix else file_suffix }}',
        HAS_SOURCE: {{ has_source|lower }}
      };
    </script>
    {%- for scriptfile in script_files %}
    <script type="text/javascript" src="{{ pathto(scriptfile, 1) }}"></script>
    {%- endfor %}
{%- endmacro %}

{%- macro css() %}
    <link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
    <link rel="stylesheet" href="{{ pathto('_static/pygments.css', 1) }}" type="text/css" />
    {%- for cssfile in css_files %}
    <link rel="stylesheet" href="{{ pathto(cssfile, 1) }}" type="text/css" />
    {%- endfor %}
{%- endmacro %}

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset={{ encoding }}" />
    {{ metatags }}
    {%- block htmltitle %}
    <title>{{ title|striptags|e }}{{ titlesuffix }}</title>
    {%- endblock %}
    {{ css() }}
    {%- if not embedded %}
    {{ script() }}
    {%- if use_opensearch %}
    <link rel="search" type="application/opensearchdescription+xml"
          title="{% trans docstitle=docstitle|e %}Search within {{ docstitle }}{% endtrans %}"
          href="{{ pathto('_static/opensearch.xml', 1) }}"/>
    {%- endif %}
    {%- if favicon %}
    <link rel="shortcut icon" href="{{ pathto('_static/' + favicon, 1) }}"/>
    {%- endif %}
    {%- endif %}
    {%- block linktags %}
    {%- if hasdoc('about') %}
    <link rel="author" title="{{ _('About these documents') }}" href="{{ pathto('about') }}" />
    {%- endif %}
    {%- if hasdoc('genindex') %}
    <link rel="index" title="{{ _('Index') }}" href="{{ pathto('genindex') }}" />
    {%- endif %}
    {%- if hasdoc('search') %}
    <link rel="search" title="{{ _('Search') }}" href="{{ pathto('search') }}" />
    {%- endif %}
    {%- if hasdoc('copyright') %}
    <link rel="copyright" title="{{ _('Copyright') }}" href="{{ pathto('copyright') }}" />
    {%- endif %}
    <link rel="top" title="{{ docstitle|e }}" href="{{ pathto('index') }}" />
    {%- if parents %}
    <link rel="up" title="{{ parents[-1].title|striptags|e }}" href="{{ parents[-1].link|e }}" />
    {%- endif %}
    {%- if next %}
    <link rel="next" title="{{ next.title|striptags|e }}" href="{{ next.link|e }}" />
    {%- endif %}
    {%- if prev %}
    <link rel="prev" title="{{ prev.title|striptags|e }}" href="{{ prev.link|e }}" />
    {%- endif %}
    {%- endblock %}
    {%- block extrahead %} {% endblock %}
  </head>
  <body>
    {%- block header %}{% endblock %}

    {%- block relbar1 %}{{ relbar() }}{% endblock %}

    {%- block content %}
    {%- block sidebar1 %} {# possible location for sidebar #} {% endblock %}

    <div class="document">
      {%- block document %}
      <div class="documentwrapper">
        {%- if render_sidebar %}
        <div class="bodywrapper">
        {%- endif %}
          <div class="body">
            {% block body %} {% endblock %}
          </div>
        {%- if render_sidebar %}
        </div>
        {%- endif %}
      </div>
      {%- endblock %}

      {%- block sidebar2 %}{{ sidebar() }}{% endblock %}
      <div class="clearer"></div>
    </div>
    {%- endblock %}

    {%- block relbar2 %}{{ relbar() }}{% endblock %}

    {%- block footer %}
    <div class="footer">
      {%- if show_copyright %}
        {%- if hasdoc('copyright') %}
          {% trans path=pathto('copyright'), copyright=copyright|e %}© <a href="{{ path }}">Copyright</a> {{ copyright }}.{% endtrans %}
        {%- else %}
          {% trans copyright=copyright|e %}© Copyright {{ copyright }}.{% endtrans %}
        {%- endif %}
      {%- endif %}
      {%- if last_updated %}
        {% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
      {%- endif %}
      {%- if show_sphinx %}
        {% trans sphinx_version=sphinx_version|e %}Created using <a href="http://sphinx-doc.org/">Sphinx</a> {{ sphinx_version }}.{% endtrans %}
      {%- endif %}
    </div>
    <p>asdf asdf asdf asdf 22</p>
    {%- endblock %}
  </body>
</html>

@ -1,61 +0,0 @@
<!-- <form class="wy-form" action="{{ pathto('search') }}" method="get">
  <input type="text" name="q" placeholder="Search docs" />
  <input type="hidden" name="check_keywords" value="yes" />
  <input type="hidden" name="area" value="default" />
</form> -->

<script>
  (function() {
    var cx = '006019874985968165468:eu5pbnxp4po';
    var gcse = document.createElement('script');
    gcse.type = 'text/javascript';
    gcse.async = true;
    gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
        '//www.google.com/cse/cse.js?cx=' + cx;
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(gcse, s);
  })();
</script>

<form id="search-form-id" action="">
  <input type="text" name="query" id="search-box-id" />
  <a class="search-reset-start" id="search-reset"><i class="fa fa-times"></i></a>
  <a class="search-reset-start" id="search-start"><i class="fa fa-search"></i></a>
</form>

<script type="text/javascript" src="http://www.google.com/cse/brand?form=search-form-id&inputbox=search-box-id"></script>

<script>
  function executeQuery() {
    var input = document.getElementById('search-box-id');
    var element = google.search.cse.element.getElement('searchresults-only0');
    element.resultsUrl = '/htmlout/search.html'
    if (input.value == '') {
      element.clearAllResults();
      $('#page-content, .rst-footer-buttons, #search-start').show();
      $('#search-results, #search-reset').hide();
    } else {
      $('#page-content, .rst-footer-buttons, #search-start').hide();
      $('#search-results, #search-reset').show();
      element.execute(input.value);
    }
    return false;
  }

  $('#search-reset').hide();

  $('#search-box-id').css('background-position', '1em center');

  $('#search-box-id').on('blur', function() {
    $('#search-box-id').css('background-position', '1em center');
  });

  $('#search-start').click(function(e) { executeQuery(); });
  $('#search-reset').click(function(e) { $('#search-box-id').val(''); executeQuery(); });

  $('#search-form-id').submit(function(e) {
    console.log('submitting!');
    executeQuery();
    e.preventDefault();
  });
</script>
@ -0,0 +1,48 @@
Releases
========

.. contents:: Topics
   :local:

.. _schedule:

Release Schedule
````````````````
Ansible is on a 'flexible' 4-month release schedule; sometimes this can be extended if there is a major change that requires a longer cycle (e.g. the 2.0 core rewrite).
Currently, modules get released at the same time as the main Ansible repo, even though they are separated into ansible-modules-core and ansible-modules-extras.

The major features and bug fixes in a release should be reflected in CHANGELOG.md; minor ones will be in the commit history (FIXME: add git example to list).
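
For instance, the commits that went into a point release can be listed between two release tags with ``git log`` (the tag names here are only illustrative)::

    git log --oneline v2.0.1.0-1..v2.0.2.0-1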

When a fix/feature gets added to the ``devel`` branch, it will be part of the next release. Some bugfixes can be backported to previous releases and might be part of a minor point release if this is deemed necessary.

Sometimes an RC can be extended by a few days if a bugfix makes a change that can have far-reaching consequences, so users have enough time to find any new issues that may stem from it.

.. _methods:

Release methods
````````````````

Ansible normally goes through a 'release candidate' process: an RC1 is issued for a release, and if no major bugs are discovered in it after 5 business days, it becomes the final release.
Otherwise, fixes will be applied and an RC2 will be provided for testing; if no major bugs appear after 2 days, the final release will be made. This last step is iterated, incrementing the candidate number, for as long as major bugs are found.

.. _freezing:

Release feature freeze
``````````````````````

During the release candidate process, the focus will be on bugfixes that affect the RC; new features will be delayed while we try to produce a final version. Some bugfixes that are minor or do not affect the RC will also be postponed until after the release is finalized.

.. seealso::

   :doc:`developing_api`
       Python API to Playbooks and Ad Hoc Task Execution
   :doc:`developing_modules`
       How to develop modules
   :doc:`developing_plugins`
       How to develop plugins
   `Ansible Tower <http://ansible.com/ansible-tower>`_
       REST API endpoint and GUI for Ansible, syncs with dynamic inventory
   `Development Mailing List <http://groups.google.com/group/ansible-devel>`_
       Mailing list for development topics
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel
@ -0,0 +1,59 @@
Advanced Syntax
===============

.. contents:: Topics

This page describes advanced YAML syntax that enables you to have more control over the data placed in YAML files used by Ansible.

.. _yaml_tags_and_python_types:

YAML tags and Python types
``````````````````````````

The documentation covered here is an extension of the documentation that can be found in the `PyYAML Documentation <http://pyyaml.org/wiki/PyYAMLDocumentation#YAMLtagsandPythontypes>`_.

.. _unsafe_strings:

Unsafe or Raw Strings
~~~~~~~~~~~~~~~~~~~~~

As of Ansible 2.0, there is an internal data type for declaring variable values as "unsafe". This means that the data held within the variable's value is treated as unsafe, preventing unsafe character substitution and information disclosure.

Jinja2 provides functionality for escaping, i.e. telling Jinja2 not to template data, via constructs such as ``{% raw %} ... {% endraw %}``; the ``!unsafe`` mechanism, however, uses a more comprehensive implementation to ensure that the value is never templated.

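For comparison, the Jinja2 escaping mentioned above might be used like this in a variable definition (an illustrative sketch; the variable names are made up)::

    not_templated: "{% raw %}do not evaluate {{ this_expression }}{% endraw %}"
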
Using YAML tags, you can also mark a value as "unsafe" by using the ``!unsafe`` tag such as::

    ---
    my_unsafe_variable: !unsafe 'this variable has {{ characters that should not be treated as a jinja2 template'

In a playbook, this may look like::

    ---
    - hosts: all
      vars:
          my_unsafe_variable: !unsafe 'unsafe value'
      tasks:
          ...

For complex variables such as hashes or arrays, ``!unsafe`` should be used on the individual elements such as::

    ---
    my_unsafe_array:
        - !unsafe 'unsafe element'
        - 'safe element'

    my_unsafe_hash:
        unsafe_key: !unsafe 'unsafe value'


.. seealso::

   :doc:`playbooks_variables`
       All about variables
   `User Mailing List <http://groups.google.com/group/ansible-project>`_
       Have a question? Stop by the google group!
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel