Add support to existing rax module to honor the wait (and wait_timeout)
parameters on delete operations. This patch removes existing logic in favor of
the built-in pyrax.utils.wait_until method.
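A minimal sketch of what the delete path looks like with wait_until, assuming a pyrax server object; the status attribute, terminal states, and the interval/attempts arithmetic here are illustrative assumptions, not the module's exact values:

    import pyrax

    def delete_server(server, wait=True, wait_timeout=300):
        """Delete a Cloud Server and optionally block until it is gone."""
        server.delete()
        if wait:
            # wait_until re-fetches the object and polls the named attribute
            # until it reaches one of the desired values or attempts run out.
            pyrax.utils.wait_until(server, "status", ["DELETED", "ERROR"],
                                   interval=5,
                                   attempts=wait_timeout // 5)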
When the service module is used on an unsupported Linux system where the init
script is called directly, LinuxService.svc_cmd is None, so .endswith() fails.
This extends the fix from e2f20db534 to state=restarted as well.
Fixes issue #3533
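A minimal sketch of the needed guard; the rc-service suffix is just an illustrative stand-in for whatever .endswith() check the module performs:

    def uses_tool(svc_cmd, tool='rc-service'):
        """Return True only when a service command was discovered and it ends
        with the given tool name. svc_cmd can be None on unsupported systems
        where the init script is invoked directly."""
        return svc_cmd is not None and svc_cmd.endswith(tool)

    print(uses_tool(None))                # False instead of an AttributeError
    print(uses_tool('/sbin/rc-service'))  # True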
This patch also checks specifically for a return code of 255, which
indicates an unknown SSH error of some kind. When that happens, Ansible
will now recommend running with -vvvv (if it is not enabled) or show the
output from 'ssh -vvv' (when it is enabled).
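A minimal sketch of that check; the function, message text, and verbosity threshold are illustrative assumptions, not the actual connection-plugin code:

    def explain_ssh_failure(returncode, verbosity, ssh_debug_output):
        """Turn an SSH exit status of 255 into a more helpful hint."""
        if returncode != 255:
            return None
        if verbosity < 4:
            # -vvvv was not used, so suggest re-running with it enabled.
            return ("SSH encountered an unknown error during the connection. "
                    "Re-run with -vvvv to see the 'ssh -vvv' output.")
        # -vvvv was used, so the captured 'ssh -vvv' output is available.
        return "SSH encountered an unknown error. The output was:\n%s" % ssh_debug_output

    print(explain_ssh_failure(255, 1, ''))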
As it stands now, it is difficult to write idempotent tasks for DigitalOcean
droplets. DigitalOcean assigns new nodes a random id when they are
provisioned, and that id is the only key that can be used to identify the node
in subsequent runs of that play.
The workflow previously involved manual intervention:
- write a play defining a new node with no specified id
- run it, collect the randomly assigned id by hand
- modify the play to add the id by hand so future runs don't create
duplicate nodes
- perform future re-runs that check if the node exists (by its id)
- if it does exist then do nothing.
- if it does not exist, then create it and return a *new random id*
- collect the new random id by hand, modify the playbook file, and
start all over.
It's a huge pain.
The modifications in this commit allow you to use the 'hostname' as a
primary key for idempotence with DigitalOcean. By default, DigitalOcean
will let you create as many hosts with the same hostname as you like. Here,
we provide an option that constrains the user to unique hostnames.
The workflow will now look like this (a sketch of the duplicate check follows
the list):
- write a play defining a new node with a specified hostname and
"unique_name: true"
- run it, create the new node and move on.
- re-run it, notice that a node with that hostname is already created
and move on.
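A rough sketch of the duplicate check this enables; list_droplets and create_droplet are hypothetical stand-ins for the real DigitalOcean API client, not its actual interface:

    def ensure_droplet(name, unique_name, list_droplets, create_droplet):
        """Create a droplet only when no droplet with this hostname exists
        (if unique_name is enabled). Returns (droplet, changed)."""
        if unique_name:
            for droplet in list_droplets():
                if droplet['name'] == name:
                    # An existing node already has this hostname: no-op.
                    return droplet, False
        # unique_name is off or nothing matched: provision a new node.
        return create_droplet(name), True

    droplet, changed = ensure_droplet(
        'web-1.example.com', True,
        list_droplets=lambda: [{'id': 12345, 'name': 'web-1.example.com'}],
        create_droplet=lambda name: {'id': 99999, 'name': name})
    print(changed)  # False: the existing droplet is reused on re-runs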
This shouldn't generally be needed unless you're working in an environment
that uses ridiculously long FQDNs; if the name is too long, you wind up
hitting the Unix domain socket path length limits enforced by ssh.
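Purely as an illustration of the underlying limit (not Ansible code): binding a Unix domain socket to an overlong path, such as an ssh ControlPath built from a very long FQDN, fails outright:

    import os
    import socket
    import tempfile

    # sun_path is limited to roughly 104-108 bytes depending on the platform,
    # so a control socket path containing a very long hostname can exceed it.
    path = os.path.join(tempfile.gettempdir(), 'a' * 120 + '.sock')
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.bind(path)
    except OSError as exc:
        print('bind failed:', exc)  # typically "AF_UNIX path too long"
    finally:
        sock.close()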
The default behavior is to update the cache if the repository changed.
If you add more than one repo, you may not want to update the cache for every
repo separately, so you can now disable it with the new update_cache option,
e.g. update_cache=no. Updating the cache can then also be handled using the
apt module.
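A small sketch of the intended behavior; the helper and its arguments are illustrative, not the module's actual code:

    def maybe_update_cache(repo_changed, update_cache, run_command):
        """Run 'apt-get update' only when the repo list actually changed and
        the new update_cache option (default: yes) was not disabled."""
        if repo_changed and update_cache:
            run_command(['apt-get', 'update'])
            return True
        return False

    # With several apt_repository tasks, pass update_cache=no on each and
    # refresh the cache once afterwards (for example via the apt module).
    print(maybe_update_cache(True, False, print))  # False: no update here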
The EXAMPLES block here has two copies of the same docs,
one nicely formatted, the other less so.
It looks like a pass was made to clean up the docs but the old
cruftier ones were never removed.
Two fixes:
* The parameter name is selevel, not serange.
* Limit the split on the SELinux context to a maximum of 4 fields, since the
  selevel may contain ':' characters. This was fixed in
  selinux_default_context() and selinux_context().
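The split limit matters because an MLS/MCS level can itself contain ':' characters; limiting the split keeps the whole level in one field:

    # An SELinux context is user:role:type:level, and the level (selevel)
    # may itself contain ':', e.g. 's0:c0,c1'.
    context = 'system_u:object_r:httpd_sys_content_t:s0:c0,c1'

    # An unbounded split breaks the level into separate fields.
    print(context.split(':'))
    # ['system_u', 'object_r', 'httpd_sys_content_t', 's0', 'c0,c1']

    # Splitting at most 3 times yields 4 fields and keeps the level intact.
    print(context.split(':', 3))
    # ['system_u', 'object_r', 'httpd_sys_content_t', 's0:c0,c1']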