updated guides to avoid connection: local (#44227)

- what they really need is `delegate_to: localhost`
 - also reduced 'local_action' usage in favor of same
Brian Coca 6 years ago committed by GitHub
parent ed22efb2a6
commit 893d59fabe
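In short, the change these guides adopt keeps each play targeting the real hosts while pinning only the API tasks to the controller. A minimal before/after sketch, borrowing the ``cs_instance`` task from the CloudStack examples below (values illustrative, not part of the diff itself):

.. code-block:: yaml

    # before: connection: local forces every task in the play onto the controller
    - hosts: cloud-vm
      connection: local
      tasks:
        - name: call the cloud API
          cs_instance:
            name: "{{ inventory_hostname_short }}"

    # after: only the API call is delegated; other tasks still reach the real host
    - hosts: cloud-vm
      tasks:
        - name: call the cloud API
          cs_instance:
            name: "{{ inventory_hostname_short }}"
          delegate_to: localhost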

@@ -234,7 +234,6 @@ Ansible playbook.
- Add the following to the new playbook file::
- name: test my new module
connection: local
hosts: localhost
tasks:
- name: run the new module

@@ -74,9 +74,9 @@ and fulfill the missing data by either setting ENV variables or tasks params:
---
- name: provision our VMs
hosts: cloud-vm
connection: local
tasks:
- name: ensure VMs are created and running
delegate_to: localhost
cs_instance:
api_key: your api key
api_secret: your api secret
@@ -111,10 +111,11 @@ By passing the argument ``api_region`` with the CloudStack modules, the region w
.. code-block:: yaml
- name: ensure my ssh public key exists on Exoscale
local_action: cs_sshkeypair
cs_sshkeypair:
name: my-ssh-key
public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
api_region: exoscale
delegate_to: localhost
Or you can loop over a list of regions if you want to run the task in every region:
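The looped variant itself falls outside this diff hunk; a hedged sketch of what it might look like (the region names are illustrative):

.. code-block:: yaml

    - name: ensure my ssh public key exists in every region
      cs_sshkeypair:
        name: my-ssh-key
        public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        api_region: "{{ item }}"
      loop:
        - exoscale
        - otherregion
      delegate_to: localhost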
@@ -144,20 +145,19 @@ Below you see an example how it can be used in combination with Ansible's block
tasks:
- block:
- name: ensure my ssh public key
local_action:
module: cs_sshkeypair
cs_sshkeypair:
name: my-ssh-key
public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
- name: ensure my ssh public key
local_action:
module: cs_instance:
cs_instance:
display_name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
ssh_key: my-ssh-key
state: running
delegate_to: localhost
environment:
CLOUDSTACK_DOMAIN: root/customers
CLOUDSTACK_PROJECT: web-app
@@ -241,28 +241,30 @@ Now to the fun part. We create a playbook to create our infrastructure we call i
---
- name: provision our VMs
hosts: cloud-vm
connection: local
tasks:
- name: ensure VMs are created and running
cs_instance:
name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
state: running
- name: ensure firewall ports opened
cs_firewall:
ip_address: "{{ public_ip }}"
port: "{{ item.port }}"
cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
loop: "{{ cs_firewall }}"
when: public_ip is defined
- name: ensure static NATs
cs_staticnat: vm="{{ inventory_hostname_short }}" ip_address="{{ public_ip }}"
when: public_ip is defined
In the above play we define three tasks and use the group ``cloud-vm`` as the target to handle all VMs in the cloud. Instead of SSHing to these VMs, we use ``connection=local`` to execute the API calls locally from our workstation.
- name: run all enclosed tasks from localhost
delegate_to: localhost
block:
- name: ensure VMs are created and running
cs_instance:
name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
state: running
- name: ensure firewall ports opened
cs_firewall:
ip_address: "{{ public_ip }}"
port: "{{ item.port }}"
cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
loop: "{{ cs_firewall }}"
when: public_ip is defined
- name: ensure static NATs
cs_staticnat: vm="{{ inventory_hostname_short }}" ip_address="{{ public_ip }}"
when: public_ip is defined
In the above play we define three tasks and use the group ``cloud-vm`` as the target to handle all VMs in the cloud. Instead of SSHing to these VMs, we use ``delegate_to: localhost`` to execute the API calls locally from our workstation.
In the first task, we ensure we have a running VM created with the Debian template. If the VM is already created but stopped, it would just be started. If you would like to change the offering on an existing VM, you must add ``force: yes`` to the task, which would stop the VM, change the offering and start the VM again, as sketched below.
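A minimal sketch of such a forced offering change, reusing the ``cs_offering`` variable from the example above (a sketch, not part of the original guide):

.. code-block:: yaml

    - name: change the offering, stopping and restarting the VM
      cs_instance:
        name: "{{ inventory_hostname_short }}"
        service_offering: "{{ cs_offering }}"
        force: yes
        state: running
      delegate_to: localhost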
@@ -316,7 +318,6 @@ The playbook looks like the following:
---
- name: cloud base setup
hosts: localhost
connection: local
tasks:
- name: upload ssh public key
cs_sshkeypair:
@@ -349,26 +350,27 @@ The playbook looks like the following:
- name: install VMs in the cloud
hosts: cloud-vm
connection: local
tasks:
- name: create and run VMs on CloudStack
cs_instance:
name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
security_groups: "{{ cs_securitygroups }}"
ssh_key: defaultkey
state: Running
register: vm
- name: show VM IP
debug: msg="VM {{ inventory_hostname }} {{ vm.default_ip }}"
- name: assign IP to the inventory
set_fact: ansible_ssh_host={{ vm.default_ip }}
- name: waiting for SSH to come up
wait_for: port=22 host={{ vm.default_ip }} delay=5
- delegate_to: localhost
block:
- name: create and run VMs on CloudStack
cs_instance:
name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
security_groups: "{{ cs_securitygroups }}"
ssh_key: defaultkey
state: Running
register: vm
- name: show VM IP
debug: msg="VM {{ inventory_hostname }} {{ vm.default_ip }}"
- name: assign IP to the inventory
set_fact: ansible_ssh_host={{ vm.default_ip }}
- name: waiting for SSH to come up
wait_for: port=22 host={{ vm.default_ip }} delay=5
In the first play we set up the security groups; in the second play the VMs are created and assigned to these groups. Further, you can see that we assign the public IP returned from the modules to the host inventory. This is needed because we do not know in advance which IPs we will get. As a next step you would configure your DNS servers with these IPs, so the VMs can be reached by their DNS names; one possible approach is sketched below.
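That DNS step is outside the scope of the CloudStack modules; as one hedged sketch, assuming a DNS server that accepts dynamic updates (the server and zone are hypothetical), the ``nsupdate`` module could publish the registered address:

.. code-block:: yaml

    - name: publish the VM's public IP in DNS
      nsupdate:
        server: 10.0.0.53           # hypothetical DNS server
        zone: example.com           # hypothetical zone
        record: "{{ inventory_hostname_short }}"
        type: A
        value: "{{ vm.default_ip }}"
      delegate_to: localhost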

@@ -177,9 +177,8 @@ examples to get you started:
# Simple playbook to invoke with the above example:
- name: Test docker_inventory
- name: Test docker_inventory, this will not connect to any hosts
hosts: all
connection: local
gather_facts: no
tasks:
- debug: msg="Container - {{ inventory_hostname }}"

@@ -24,20 +24,21 @@ package repositories, so you will likely need to install it via pip:
$ pip install pyrax
The following steps will often execute from the control machine against the Rackspace Cloud API, so it makes sense
to add localhost to the inventory file. (Ansible may not require this manual step in the future):
Ansible creates an implicit localhost that executes in the same context as the ``ansible-playbook`` command and the other CLI tools.
If, for any reason, you need or want to have it in your inventory, you should do something like the following:
.. code-block:: ini
[localhost]
localhost ansible_connection=local
localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2
For more information see :ref:`Implicit Localhost <implicit_localhost>`.
In playbook steps, we'll typically be using the following pattern:
.. code-block:: yaml
- hosts: localhost
connection: local
gather_facts: False
tasks:
@@ -103,7 +104,7 @@ Here is a basic example of provisioning an instance in ad-hoc mode:
.. code-block:: bash
$ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes" -c local
$ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes"
Here's what it would look like in a playbook, assuming the parameters were defined in variables:
@@ -111,8 +112,7 @@ Here's what it would look like in a playbook, assuming the parameters were defin
tasks:
- name: Provision a set of instances
local_action:
module: rax
rax:
name: "{{ rax_name }}"
flavor: "{{ rax_flavor }}"
image: "{{ rax_image }}"
@@ -120,14 +120,14 @@ Here's what it would look like in a playbook, assuming the parameters were defin
group: "{{ group }}"
wait: yes
register: rax
delegate_to: localhost
The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created by the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory.
.. code-block:: yaml
- name: Add the instances we created (by public IP) to the group 'raxhosts'
local_action:
module: add_host
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_ssh_pass: "{{ item.rax_adminpass }}"
@@ -303,11 +303,11 @@ This can be achieved with the ``rax_facts`` module and an inventory file similar
gather_facts: False
tasks:
- name: Get facts about servers
local_action:
module: rax_facts
rax_facts:
credentials: ~/.raxpub
name: "{{ inventory_hostname }}"
region: "{{ rax_region }}"
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
@@ -415,24 +415,22 @@ Network and Server
Create an isolated cloud network and build a server
.. code-block:: yaml
- name: Build Servers on an Isolated Network
hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Network create request
local_action:
module: rax_network
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
region: IAD
state: present
delegate_to: localhost
- name: Server create request
local_action:
module: rax
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: 2
@@ -449,6 +447,7 @@ Create an isolated cloud network and build a server
wait: yes
wait_timeout: 360
register: rax
delegate_to: localhost
.. _complete_environment:
@@ -458,16 +457,14 @@ Complete Environment
Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html
.. code-block:: yaml
---
- name: Build environment
hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Load Balancer create request
local_action:
module: rax_clb
rax_clb:
credentials: ~/.raxpub
name: my-lb
port: 80
@@ -481,20 +478,18 @@ Build a complete webserver environment with servers, custom networks and load ba
meta:
app: my-cool-app
register: clb
- name: Network create request
local_action:
module: rax_network
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
region: IAD
register: network
- name: Server create request
local_action:
module: rax
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: performance1-1
@@ -511,10 +506,9 @@ Build a complete webserver environment with servers, custom networks and load ba
group: web
wait: yes
register: rax
- name: Add servers to web host group
local_action:
module: add_host
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_ssh_pass: "{{ item.rax_adminpass }}"
@@ -522,10 +516,9 @@ Build a complete webserver environment with servers, custom networks and load ba
groups: web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Add servers to Load balancer
local_action:
module: rax_clb_nodes
rax_clb_nodes:
credentials: ~/.raxpub
load_balancer_id: "{{ clb.balancer.id }}"
address: "{{ item.rax_networks.private|first }}"
@@ -536,22 +529,22 @@ Build a complete webserver environment with servers, custom networks and load ba
region: IAD
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Configure servers
hosts: web
handlers:
- name: restart nginx
service: name=nginx state=restarted
tasks:
- name: Install nginx
apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
notify:
- restart nginx
- name: Ensure nginx starts on boot
service: name=nginx state=started enabled=yes
- name: Create custom index.html
copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
owner=root group=root mode=0644
@@ -578,12 +571,10 @@ Using a Control Machine
- name: Create an exact count of servers
hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Server build requests
local_action:
module: rax
rax:
credentials: ~/.raxpub
name: web%03d.example.org
flavor: performance1-1
@@ -596,10 +587,9 @@ Using a Control Machine
group: web
wait: yes
register: rax
- name: Add servers to in memory groups
local_action:
module: add_host
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_ssh_pass: "{{ item.rax_adminpass }}"
@@ -608,54 +598,55 @@ Using a Control Machine
groups: web,new_web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Wait for rackconnect and managed cloud automation to complete
hosts: new_web
gather_facts: false
tasks:
- name: Wait for rackconnect automation to complete
local_action:
module: rax_facts
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
local_action:
module: rax_facts
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
retries: 30
delay: 10
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
- name: Wait for rackconnect automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
retries: 30
delay: 10
- name: Update new_web hosts with IP that RackConnect assigns
hosts: new_web
gather_facts: false
tasks:
- name: Get facts about servers
local_action:
module: rax_facts
rax_facts:
name: "{{ inventory_hostname }}"
region: DFW
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
- name: Base Configure Servers
hosts: web
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _using_ansible_pull:
@@ -668,51 +659,52 @@ Using Ansible Pull
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
connection: local
tasks:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Get region
command: xenstore-read vm-data/provider_data/region
register: rax_region
when: bootstrap.stat.exists != True
- name: Wait for rackconnect automation to complete
uri:
url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
return_content: yes
register: automation_status
when: bootstrap.stat.exists != True
until: automation_status['automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
wait_for:
path: /tmp/rs_managed_cloud_automation_complete
delay: 10
when: bootstrap.stat.exists != True
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Get region
command: xenstore-read vm-data/provider_data/region
register: rax_region
when: bootstrap.stat.exists != True
- name: Wait for rackconnect automation to complete
uri:
url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
return_content: yes
register: automation_status
when: bootstrap.stat.exists != True
until: automation_status['automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
wait_for:
path: /tmp/rs_managed_cloud_automation_complete
delay: 10
when: bootstrap.stat.exists != True
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
connection: local
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _using_ansible_pull_with_xenstore:
@@ -725,7 +717,6 @@ Using Ansible Pull with XenStore
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
connection: local
tasks:
- name: Check for completed bootstrap
stat:
@@ -773,16 +764,15 @@ Using Ansible Pull with XenStore
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
connection: local
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _advanced_usage:

@@ -70,7 +70,6 @@ In this use case / example, we will be selecting a virtual machine template and
---
- name: Create a VM from a template
hosts: localhost
connection: local
gather_facts: no
tasks:
- name: Clone the template
