- Do not silently ignore malformed pip requirements files.
- Properly reports changed when removing packages.
- "latest" i.e. --upgrade is *not* incompatible with requirements files.
- Less branchy, simpler logic.
- Removed pointless variable "initializations", Python doesn't need that.
- Other code simplifications.
- Fun fact: pip install is (kind of) case-insensitive, pip freeze is not.
So 'sqlalchemy' will be reported as installed by install, but missing
by freeze.
The perhaps controversial change, and the one that led to finding and
fixing the above issues:
Instead of adding command parameters like 'index', 'find', 'mirrors',
and so on, added 'extra_args', which is passed on to pip.
The use case for --index-url is having a private PyPI repo, like
http://pypi.python.org/pypi/localshop, to which you publish private
packages. I'm sure almost every pip option has a use case for someone;
extra_args handles all of those, and Ansible command parameters can be
reserved for the most common ones (see the example below).
Tested with pip 1.1.
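For illustration, a hedged sketch of how extra_args might be used; the package name and index URL below are placeholders, not taken from the original change:
```yaml
- name: install a private package from an internal index
  action: pip name=mypackage extra_args="--index-url=http://pypi.example.com/simple/"
```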
Add HTML-escaping to code examples in rST template of module-formatter
Add support for specifying port, addresses with phrases and attaching files
Add support for custom headers and document version_added for new options
X-Mailer header added :)
Protect empty address lists and attachment list, and add bcc
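A hedged sketch of how these mail options might be combined; the parameter names (port, attach, headers, bcc) are assumptions based on the descriptions above, and the addresses and paths are placeholders:
```yaml
- local_action: mail host=localhost port=2525 to="Jane Doe <jane@example.com>" bcc=ops@example.com subject="Provisioning finished" attach=/var/log/provision.log headers=X-Mailer=Ansible
```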
- Added username, password arguments.
- Documented existing revision argument.
- Corrected documentation/docstrings; removed git references, use svn
nomenclature, etc.
- Refactored duplicate code, redundant shell calls, filter abuse,
inconsistent formatting, etc.
- Shell quoting so it doesn't break for one guy who has spaces in
pathnames.
- svn called with '--non-interactive' and '--no-auth-cache'.
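A hedged usage sketch; the module name 'subversion' and the repo/dest parameters are assumptions here, while username, password and revision follow the list above (URL, path and values are placeholders):
```yaml
- action: subversion repo=https://svn.example.com/trunk dest=/srv/checkout revision=1234 username=deploy password=secret
```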
Move operations that are dependent on a remote branch under an
is_remote_branch() conditional. While at it, remove assignment to cmd
string in same block that wasn't used when calling _run().
The git module would not pull in updates to a branch when
version=<branch>. This updates that block to check out the branch
and then do a git reset --hard <remote>/<branch>. This
should now track updates to a branch.
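For example, a task of this shape (repository URL, path and branch name are placeholders) should now keep following the remote branch on repeated runs:
```yaml
- action: git repo=git://example.com/project.git dest=/srv/project version=devel
```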
In a virtualenv, pip is called just pip. This fixes the pip module to
search for the virtualenv pip first before trying the pip-python and
python-pip variants. Without this, the pip module would not install to the
virtualenv when that parameter is provided.
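A minimal sketch, assuming the parameter is named virtualenv as referenced above (package name and path are placeholders):
```yaml
- action: pip name=flask virtualenv=/srv/webapp/venv
```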
This updates _is_package_installed() to accept a requirements file
as an argument. This is used later in main() to check if python libs
specified in a requirements file are already installed. I updated
main() to consolidate the handling of install/uninstall in a single
block. This should help if someone wants to remove packages specified
by a requirements file.
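A hedged sketch of removing the packages listed in a requirements file, assuming the parameters are named requirements and state (the path is a placeholder):
```yaml
- action: pip requirements=/srv/webapp/requirements.txt state=absent
```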
This makes the line parsing a lot more robust (and easier to read).
Code supplied by @dhozac, thanks!
Remove re import because this is not used anywhere.
When trying to perform enabled=yes followed by enabled=no
against FreeBSD the module would die with the following error:
TypeError: sub() takes at most 4 arguments (5 given)
The target FreeBSD client (8.2) is running python 2.6.6. It seems the
extra 'flags' argument was added to re.sub() in 2.7.
In fixing this issue I have attempted to create a general atomic method
for modifying a rc.conf file. Hopefully this will make it easier to add
other rc-based platforms. The strip/split magic was inspired by the user
module.
* Basically the moving parts from the original service module arranged in
subclasses.
* General structure and helper methods come from the user module.
* Less forgiving to unsupported platforms: it requires a subclass per platform.
  (This makes it easier to work on one platform without having to think about
  what other platforms might be affected in unexpected ways.)
* Now has basic OpenBSD support.
* Solaris support needs to be added.
Thanks to @dhozac for general advice and Linux testing.
Thanks to @bcoca for clearing up some FreeBSD questions.
I added all known virtualization types from the virt-what project. However, the few virt types that rely on cpuid information (e.g. hyperv) have not been implemented, as there is no native Python cpuid access.
Without this fix, generating documentation results in:
```
Traceback (most recent call last):
File "hacking/module_formatter.py", line 376, in <module>
main()
File "hacking/module_formatter.py", line 365, in main
text = template.render(doc)
File "/usr/lib64/python2.6/site-packages/jinja2/environment.py", line 669, in render
return self.environment.handle_exception(exc_info, True)
File "hacking/templates/man.j2", line 20, in top-level template code
{% for desc in v.description %}@{ desc | jpfunc }@{% endfor %}
File "hacking/module_formatter.py", line 94, in man_ify
t = _ITALIC.sub(r'\\fI' + r"\1" + r"\\fR", text)
TypeError: expected string or buffer
```
- Make sure exit_json() always returns a changed= value
- Modify the yum module to not return failed=False
- Modify install() and latest() similar to remove() in yum module
- Changed exit_json(failed=True, **res) into a fail_json(**res)
- Make sure yum rc= value reflects loop (similar to how we fixed remove())
Rewrote switch_version() to read .git/HEAD to find branch associated
with HEAD. If in a detached HEAD state, will read
.git/refs/remotes/<remote>/HEAD.
Rename pull() to fetch(). It does a git fetch and then a
git fetch --tags.
Add _run() method to handle all subprocess.Popen calls. Change
all previous calls to subprocess.Popen to use _run().
There is no need to require thirsty mode when the destination is a directory. We add the basename of the URL to the destination directory and proceed with that. If that file exists in non-thirsty mode, continue as expected.
I also cleaned up some of the logic that is no longer necessary if we simply rewrite the destination from the very start the way it is expected.
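For illustration, with this change a task like the following (URL and directory are placeholders) should save the file as /tmp/downloads/release.tar.gz without needing thirsty=yes:
```yaml
- action: get_url url=http://example.com/release.tar.gz dest=/tmp/downloads
```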
I had made and pushed this change after you already pulled the request.
@dhozac indicated that it would probably be better to use return codes > 255 for anything related to Ansible itself. Which makes sense :)
We use the lineinfile module to modify configuration files of a proprietary application. This application reads configuration options from files, but does not require those files to exist (if the default options are fine). However this application may modify the configuration file at will, so we cannot copy or template those files. And after a silent install the configuration may not exist (depending on the response file).
Whatever the case, during deployment we need to make sure some configuration options are set after the installation.
So the cleanest way to handle this situation is to allow the lineinfile module to create the file if it is missing (and this is the expected behavior). When I proposed this behavior, @sergevanginderachter needed the same functionality and was working around it as well.
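A hedged sketch of the intended use; the create parameter name is an assumption here, and the file path, regexp and line are placeholders:
```yaml
- action: lineinfile dest=/opt/app/conf/app.cfg regexp=^max_connections= line=max_connections=100 create=yes
```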
I had made and pushed this change after you already pulled the request.
@dhozac indicated that it would probably be better to use return codes > 255 for anything related to Ansible itself. Which makes sense :)
Split module into a main calling function, and a generic
(Linux useradd/usermod/userdel) User class.
Added a __new__ function that selects the most appropriate superclass
Added a FreeBSD User class
Tested against FreeBSD 9.0
If this is not a certainty, playbooks will fail without an 'rc', and checking both whether there is an rc at all and whether the rc is (not) 0 is very complicated (especially because ${something.rc} will not be substituted, and all that).
Detect when on a 'no branch' branch. If so, checkout the HEAD branch
as reported by 'git remote show <remote>'. That should put the repo
back on a branch such that git can then merge changes as necessary.
In addition, removed hard-coded references to origin and replaced
with remote var.
This allows one to create a SSH key for user. You may define:
ssh_key_type, ssh_key_bits, ssh_key_file, ssh_key_comment,
and ssh_key_passphrase. If no passphrase is provided, the
key will be passphrase-less. This will not overwrite an existing key.
In the JSON returned, it will provide the ssh_fingerprint and
ssh_key_file.
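A hedged example; the generate_ssh_key switch used here to trigger key creation is an assumption on my part, while the ssh_key_* parameters are the ones listed above (user name and comment are placeholders):
```yaml
- action: user name=deploy generate_ssh_key=yes ssh_key_bits=2048 ssh_key_type=rsa ssh_key_comment="deploy key"
```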
- fixed template (it was the template), adding indentation with Jinja2
- added description of code examples to man-page template (was missing)
- fixed fireball, cron, and debug module examples to conform
On Red Hat, CentOS and Fedora systems, the pip binary will be called python-pip
instead of pip. This commit makes the pip module also check for python-pip.
The reason we check for python-pip *first* is to have Ansible fail on not
finding 'pip' and report *that*. This is consistent with current behaviour
and will not confuse users of Debian et al., where the 'python-pip' binary
never exists.
Tested on Fedora 18 and Ubuntu 12.04.
This commit improves the following items:
- Remove the 'match' functionality; this can now be achieved by using the `fail` module together with `only_if` after running the `hpilo_facts` module. Since this gives more functionality, e.g. comparing server names, but also serial numbers or uuids with other inventory information, this is preferred. An example is added to show how this is achieved.
- Clean up all C() calls in documentation
- Added state=poweroff in order to power off a server. The use-case here is that in general we do not want to provision systems that are already running (this enforcement can be disabled using force=yes), but for test systems we should be able to power them off so we can start the normal provisioning process. (We could also force boot them, but that's less elegant.)
- The module now correctly indicates when something has changed. So if a server is powered off that was not off already, this is indicated, or when media boot-settings have been changed, this is also correctly indicated. Previously every call to hpilo_boot was (incorrectly) considered a change.
This commit improves the following items:
- Remove the 'match' functionality; this can now be achieved by using the `fail` module together with `only_if` after running the `hpilo_facts` module. Since this gives more functionality, e.g. comparing server names, but also serial numbers or uuids with other inventory information **and** a proper message, this is preferred. An example is added to show how this is achieved.
- Clean up all C() calls in documentation
- Remove trailing spaces in HP iLO's Serial Number output so that they can be compared to CMDB or other inventory information
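A hedged sketch of the fail + only_if pattern described in the items above; the hpilo_facts parameters and the hw_system_serial fact name are assumptions for illustration, as is the cmdb_serial inventory variable:
```yaml
- local_action: hpilo_facts host=${ilo_address} login=Administrator password=${ilo_password}
- local_action: fail msg="Serial number does not match CMDB"
  only_if: "'${hw_system_serial}' != '${cmdb_serial}'"
```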
Sending mails could be part of a workflow to have teams or responsible persons perform a specific task. Or simply to notify that a process has finished successfully (e.g. provisioning).
This workaround is recommended from HP iLO's documentation, but may not be sufficient in all cases. Time will tell.
I also made a few cosmetic changes with no impact.
Much like we currently have *setup* register the variable `module_setup`, we would like other facts modules to register their own namespace. This means that:
- *network_facts* registers `module_network`
- *hpilo_facts* registers `module_hw`
- *vsphere_facts* registers `module_hw`
In retrospect, it would have made more sense to have `setup` register `module_ansible` instead, as the setup module uses the `ansible_` namespace.
Having the `module_` namespace allows us to check whether a certain namespace has already been loaded so we can avoid running the facts module a second time using only_if.
```yaml
- action: network_facts host=${ansible_hostname_short}
  only_if: is_unset('$module_network')
```
This module gathers facts from a VMWare vSphere guest by querying vSphere. The facts include OS, network (vlan, macaddress) and system (cpu, memory, uuid) information. Useful information for provisioning and management.
This module gathers facts from the hardware interface by querying HP iLO. The facts include network (vlan, macaddress) and system (cpu, memory, uuid) information. Useful information for provisioning and management.
This module was previously named ilo_facts and mentioned in #1080, #1085, #1125 and #1217.
After helping someone on IRC he was interested in having this debug module upstream. This module simply 'prints' a message, and can be ordered to fail if needed. It helps to troubleshoot or understand inventory/facts issues and/or experiment with statements and conditions using only_if.
Here is a small example playbook:
```yaml
- hosts: all
  tasks:
  - local_action: debug msg="System $inventory_hostname has uuid ${ansible_product_uuid}"
  - local_action: debug msg="System $inventory_hostname lacks a gateway" fail=yes
    only_if: "is_unset('$ansible_default_ipv4.gateway')"
  - local_action: debug msg="System $inventory_hostname has gateway ${ansible_default_ipv4.gateway}"
    only_if: "is_set('$ansible_default_ipv4.gateway')"
```
outputting:
```
[root@moria ansible]# ansible-playbook -v -l localhost:x220 test6.yml
PLAY [all] *********************
GATHERING FACTS *********************
ok: [localhost]
ok: [x220]
TASK: [debug msg="System $inventory_hostname has uuid $ansible_product_uuid"] *********************
ok: [localhost] => {"msg": "System localhost has uuid d125a48c-364f-4e65-b225-fed42ed61fac"}
ok: [x220] => {"msg": "System x220 has uuid d125a48c-364f-4e65-b225-fed42ed61fac"}
TASK: [debug msg="System $inventory_hostname lacks a gateway" fail=yes] *********************
failed: [localhost] => {"failed": true, "msg": "System localhost lacks a gateway", "rc": 1}
ok: [x220] => {"msg": "System x220 has gateway 192.168.1.1"}
PLAY RECAP *********************
localhost : ok=2 changed=0 unreachable=0 failed=1
x220 : ok=3 changed=0 unreachable=0 failed=0
```
I had some other plans for the module, like displaying host inventory and complete inventory to help understand inventory and facts modules, but that would require an action-plugin for transferring inventory information etc... And I am not sure this is wanted/best done in a module.
In some cases you may want to deliberately fail the execution of a playbook. In our provisioning workflow we want to have safeguards in place to avoid provisioning systems that are already in production. Since we reboot physical and virtual systems, it is mandatory we take all the precautions to prevent accidental provisioning.
So in our use-case we have the following at the very start of the provisioning playbook:
### Safeguard to protect production systems
- local_action: fail msg="System is not ready to be staged according to CMDB"
  only_if: "'$cmdb_status' != 'to-be-staged'"
and we repeat the same task in the (separately included) play that takes care of (re)booting the system using our own boot-media, so that it cannot accidentally be run separately by someone.
pipes.quote is a bit overzealous for what we want to do, quoting ;
and other characters that you most likely want to use in your shell
invocations. The regexp is the best I could come up with to be able
to only replace the parts of the arguments that shouldn't be
executed.
- .rst now suppresses the default if none is set (looks better in HTML)
- .rst now handles empty options list
- Fixed postgresql_user and mysql_user because YAML contained colons
- docs for facter
This gathers LSB facts via lsb_release. This complements the
platform facts collected via the platform module. This reports
release, id, description, and codename. It also adds
'major_release', which is the major version number of a distribution.
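As an illustration, assuming these facts end up under an ansible_lsb prefix (an assumption here), a task could branch on the major release; the command is a placeholder:
```yaml
- action: command /usr/local/sbin/el6-only-tweak
  only_if: "'${ansible_lsb.major_release}' == '6'"
```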
In some cases (see issue #1067) with state=restarted, a failure to stop
the service (which wasn't running) would appear to the module to be a
failure to restart the service even though it successfully started the
service. This changes the behavior of the service module to focus
on the return code of the start command. If the rc of stop is not
0 and the rc of start does equal 0, it considers the service
successfully restarted. It then ignores the rc, stdout, and stderr
from the unsuccessful stop command.
This change includes:
- (possibly only on older Python versions?) a string variable test using the 'is' operator fails (so it always returns ok immediately after the initial delay)
- add a missing socket.settimeout() for the state=started case (if the machine does not exist, timeout defaults to 60 seconds)
- add a connect_timeout option to customize the default connection timeout
- use socket.shutdown(2) to close immediately
- return the elapsed time
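A hedged usage sketch combining these options; the host and port values are placeholders and the parameter names follow the list above:
```yaml
- local_action: wait_for host=${ansible_default_ipv4.address} port=22 state=started connect_timeout=5 timeout=300
```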
The check for the destination being a directory is now done before
checking if the file exists; that way the user is informed that the
thirsty argument is required.
If I create a database from scratch and assign permissions by doing:
- name: ensure database is created
  action: postgresql_db db=$dbname
- name: ensure django user has access
  action: postgresql_user db=$dbname user=$dbuser priv=ALL password=$dbpassword
Then it fails with the error:
File "/tmp/ansible-1347048449.32-29998829936529/postgresql_user", line 565, in <module>
main()
File "/tmp/ansible-1347048449.32-29998829936529/postgresql_user", line 273, in main
changed = grant_privileges(cursor, user, privs) or changed
File "/tmp/ansible-1347048449.32-29998829936529/postgresql_user", line 174, in grant_privileges
changed = grant_func(cursor, user, name, privilege)\
File "/tmp/ansible-1347048449.32-29998829936529/postgresql_user", line 132, in grant_database_privilege
prev_priv = get_database_privileges(cursor, user, db)
File "/tmp/ansible-1347048449.32-29998829936529/postgresql_user", line 118, in get_database_privileges
r = re.search('%s=(C?T?c?)/[a-z]+\,?' % user, datacl)
File "/usr/lib/python2.7/re.py", line 142, in search
return _compile(pattern, flags).search(string)
TypeError: expected string or buffer
This fixes the problem by not executing the regex if the
db query on pg_database returns None.
The use-case here is that based on information in the /proc/cmdline certain actions can be taken.
A practical example in our case is that we have a play at the end of the provisioning phase that reboots the system. Since we don't want to accidentally reboot a system (or restart the network) on a production machine, having a way to separate an Anaconda post-install (sshd in chroot) with a normal system is a good way to make that distinction.
---
- name: reboot
  hosts: all
  tasks:
  - action: command init 6
    only_if: "not '${ansible_cmdline.BOOT_IMAGE}'.startswith('$')"
A practical problem here is the fact that we cannot simply check whether it is set or empty:
---
- name: reboot
  hosts: all
  tasks:
  - action: command init 6
    only_if: "'${ansible_cmdline.BOOT_IMAGE}'"
If ansible_cmdline were a string, a simple only_if: "'${ansible_cmdline}'.find(' BOOT_IMAGE=')" would have been an option, but still not very "beautiful" :-/
This implementation uses shlex.split() and split(sep, maxsplit=1).
This allows the use of ~ in the chdir argument of the command module
I know the latter change is absolutely necessary as the first change
was not sufficient. It may be that the first change fixes shell and
the second fixes command.
Added 'required' as an optional argument to get_bin_path(). It defaults to
False. Updated the following modules to use required=True when calling
get_bin_path(): apt_repository, easy_install, group, pip,
supervisorctl, and user.
Also removed _find_supervisorctl() from supervisorctl module and updated
_is_running() to not need it.
Will manage values of seboolean on a host. Options are name (name of
boolean), state (on or off), and persistent (on or off). Persistent
defaults to no.
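A minimal example following the option names described above (the boolean name is a placeholder):
```yaml
- action: seboolean name=httpd_can_network_connect state=on persistent=on
```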
* Migrated easy_install, pip, service, setup, and user.
* Updated fail_json message in apt_repository
* Fixed easy_install to not hardcode location of virtualenv in
/usr/local/bin/.
* Made handling of virtualenv more consistent between easy_install and
pip.
Most of it worked already, except for the enable parameter, because it
tried to use chkconfig, which only sees SysV services. First look for
systemctl and use that if it exists.
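For illustration, a task of this shape (service name is a placeholder) should now go through systemctl for the enable step on a systemd host:
```yaml
- action: service name=httpd enabled=yes state=started
```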
This takes started, stopped and restarted.
Started returns when connecting is possible.
Stopped when connecting is not possible.
Restarted first waits for connecting to be impossible and returns when it is
possible again.
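A hedged sketch of using state=restarted around a service or host restart; host, port and timeout values are placeholders:
```yaml
- local_action: wait_for host=${inventory_hostname} port=8080 state=restarted timeout=300
```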