Make sure all hosts and groups are unique objects, and that those same
objects are referenced everywhere.
Also fixes the test_dir_inventory unit tests, which were broken by previous
patches.
modified: lib/ansible/inventory/dir.py
Changes include:
* Update Makefile to use credentials.yml when it exists
* Add details on the use of the credentials.yml file to README.md.
* Update credentials.template comments
The `apt` and `yum` modules will automatically install Python dependencies.
This change updates the existing integration tests to test whether auto-install
of dependencies is functioning properly.
- unified the set-attribute functions ... it is not clear why 2 identical
functions existed with different names; now there are 3 while all modules
are repointed to 1
- fixed an issue with symlinks being created without an existing src when
force=no
- refactored conditionals, simplified where possible
- added tests for a symlink to a nonexistent source, with both force options
- made symlinking over an existing destination atomic (force); see the
sketch below
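A minimal sketch of the atomicity idea (hypothetical helper, not the module's
actual code): build the new link under a temporary name, then rename it over
the destination, so the link never disappears mid-operation.
```
import os

def atomic_symlink(src, dest):
    # Create the link under a temporary name in the same directory,
    # then rename it over the destination. rename() is atomic on
    # POSIX, so dest is never missing or half-written.
    tmp = "%s.tmp.%d" % (dest, os.getpid())
    os.symlink(src, tmp)
    try:
        os.rename(tmp, dest)
    except OSError:
        os.unlink(tmp)
        raise
```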
Tests several ways to specify the repository. For every repo added, the test
asserts that:
* the apt-cache was updated as expected (depends on `update_cache` parameter)
* the PPA key was installed (depends on `repo` format)
To support parallel cloud test execution, create a random string and provide
it to the cloud integration tests. The variable 'resource_prefix' can be used
in cloud roles and during resource cleanup to safely create/destroy
cloud-based resources (a sketch of prefix generation follows the list below).
Additional changes include:
* The roles test_ec2_key and test_ec2_group were updated to use
{{resource_prefix}}.
* Additionally, the Makefile was updated to set resource_prefix to a random
string. The Makefile will also use 'resource_prefix' during cloud_cleanup.
* All test_ec2* roles were updated to add 'setup_ec2' as a role dependency.
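A sketch of how such a prefix might be generated (function and base name are
illustrative; the Makefile's actual recipe may differ):
```
import random
import string

def make_resource_prefix(base="ansible-testing"):
    # A short random suffix keeps parallel test runs from colliding
    # on cloud resource names; cleanup can match on the shared base.
    suffix = "".join(random.choice(string.ascii_lowercase + string.digits)
                     for _ in range(8))
    return "%s-%s" % (base, suffix)

print(make_resource_prefix())  # e.g. ansible-testing-k2x9q4hz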
The 'service' utility was unable to find the 'ansible_test' service due to an
unexpected filename. This patch corrects the filename and adjusts the
permissions to match other service scripts within /etc/init/.
Tests issue #5749: the same host defined in different groups, which in turn
are defined in different ini files in an inventory directory.
Conflicts:
test/units/TestInventory.py
0. Uncomment the test.
1. Test fails.
2. Make vars unique per file in test inventory files.
3. Modify token addition to not ast.literal_eval(v) a variable containing a hash ('#'); see the sketch below.
4. Modify vars to have an escape in test inventory file.
5. Catch exceptions explicitly. Any unknown exceptions should be a bug.
6. Test passes.
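To illustrate steps 3 and 5 (a sketch, not the inventory parser itself): a
quoted value containing '#' is valid, but naively evaluating every token can
blow up, so evaluation failures fall back to the raw string and only known
exceptions are caught.
```
import ast

def parse_value(v):
    # Try to interpret the token as a Python literal (int, list, ...)
    # and fall back to the raw string otherwise, catching only the
    # usual parse failures explicitly; any other exception escaping
    # here would indicate a real bug.
    try:
        return ast.literal_eval(v)
    except (ValueError, SyntaxError):
        return v

print(parse_value("42"))          # -> 42
print(parse_value("'a # b'"))     # -> 'a # b'
print(parse_value("plain#text"))  # -> 'plain#text'
```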
It came up that fixing this unit test may relate to another open ticket. This work allows us to uncomment the unit test by fixing how we parse variables, allowing a quoted variable to contain a '#'.
Work also went into cleaning up some of the test data to clarify what was working.
Lastly, work went into cleaning up formatting so that the code is easily read.
The unit test infrastructure will remain for things that are mocked out and testable without filesystem
side effects, plus a few cases (like inventory) that are not quite so isolated but can still
benefit from heavy access to the API.
See the 'tests_new/integration' directory; this will soon fold into tests_new.
We run into some problems because tar --diff takes file ownership into account and fails if it doesn't match.
The real-world implication is that we could be doing more unarchives than we need to.
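A sketch of the check in question (assuming GNU tar): --diff exits non-zero
on any mismatch, ownership included, so treating a non-zero exit as "contents
changed" triggers unarchives that aren't strictly needed.
```
import subprocess

def archive_matches(archive, dest):
    # GNU tar's --diff compares archive entries against the files on
    # disk, but an ownership mismatch alone makes it exit non-zero,
    # so "changed" can be reported even when contents are identical.
    rc = subprocess.call(["tar", "--diff", "-f", archive, "-C", dest])
    return rc == 0
```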
There is a fair amount going on in these changes. Most of them are cleanup so that the files line up with the standard files.
PR #5136 was merged into the current devel and brought up to working order. A few bug fixes had to be done to get the code to test correctly. Thanks go out to @pib!
Issue #5431 could not be confirmed, as it behaved as expected with a sudo user.
Tests were added via a playbook with archive files to verify functionality.
All tests pass cleanly, including custom playbooks across multiple Linux and Solaris systems.
Using
```
assert 'changed' in result
```
doesn't actually check if something is changed, which is presumably
the reason for the assertion. What is actually needed is
```
assert result.get('changed')
```
which checks that changed is set and not False. Tests still pass after
this change.
This adds two parameters to the git module:
bare (boolean)
    Indicates this is to be a bare repository.
reference (string)
    Indicates the path or URL to the reference repo.
Check out the "--reference" option in the "git clone"
man page
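For reference, the underlying invocations look roughly like this (a sketch of
the plumbing, not the module's code):
```
import subprocess

def clone(repo, dest, bare=False, reference=None):
    # --bare clones without a working tree; --reference borrows
    # objects from a local repository to speed up the clone (see
    # the "git clone" man page).
    cmd = ["git", "clone"]
    if bare:
        cmd.append("--bare")
    if reference:
        cmd.extend(["--reference", reference])
    cmd.extend([repo, dest])
    subprocess.check_call(cmd)
```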
Added appropriate tests.
The validate option is constructed similarly to the template command's
validate option. TestRunner.py has been updated to include two new
tests, one for passing and one for failing validation.
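Conceptually, validation runs a user-supplied command with %s replaced by the
path of the file being checked, and a failing command aborts the operation (a
hedged sketch; "visudo -cf %s" is just a familiar example):
```
import shlex
import subprocess

def validate_file(validate_cmd, path):
    # e.g. validate_cmd = "visudo -cf %s": the %s placeholder is
    # replaced with the file's path, and a non-zero exit means the
    # file is rejected before being moved into place.
    cmd = shlex.split(validate_cmd % path)
    return subprocess.call(cmd) == 0
```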
Still compatible with user:, but deprecating it so we can have a matching
remote_user: in tasks; it cannot be user: because of the module of the same
name. #3932
Signed-off-by: Brian Coca <briancoca+dev@gmail.com>
The 'always_run' task clause allows one to execute a task even in
check mode.
While here, implement Runner.noop_on_check() to check whether a runner
really should execute its task, with respect to the check mode option
and the 'always_run' clause.
Also add the optional 'jinja2' argument to check_conditional(): it allows
giving this function a jinja2 expression without exposing the
'jinja2_compare' implementation mechanism.
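The intended check-mode logic reduces to something like this (a sketch with
illustrative parameter names):
```
def noop_on_check(check_mode, always_run):
    # Under --check a task is normally a no-op, unless it sets
    # 'always_run', which forces execution even in check mode.
    return check_mode and not always_run
```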
Tests `test_playbook_undefined_varsX_fail` check whether Ansible detects
undefined variables when `error_on_undefined_vars` is enabled. These
tests fail without the "Improve behavior with error_on_undefined_vars
enabled" patch.
Tests `test_playbook_undefined_varsX_ignore` check whether Ansible ignores
undefined variables when `error_on_undefined_vars` is disabled.
Also modify PlayBook._run_task_internal() so error_on_undefined_vars is
testable.
ansible.constants was calling expanduser (by way of shell_expand_path)
on the entire configured value for the library and *_plugins
configuration values, but these values have always been interpreted as
multiple directories separated by os.pathsep. Thus, if you supplied
multiple directories for one of these values, typically only the first
(at least on *nix) would have e.g. "~" expanded to HOME.
Now PluginLoader does expansion on each individual path in each of
these variables.
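The fix amounts to expanding each element of the os.pathsep-separated value
rather than the value as a whole, e.g.:
```
import os

configured = os.pathsep.join(["~/plugins/a", "~/plugins/b"])

# Broken: expanduser() only rewrites a leading "~", so every path
# after the first os.pathsep keeps its literal "~".
broken = os.path.expanduser(configured)

# Fixed: expand each directory individually, then rejoin.
fixed = os.pathsep.join(
    os.path.expanduser(p) for p in configured.split(os.pathsep))

print(broken)  # "~" expanded only in the first entry
print(fixed)   # "~" expanded in both entries
```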
If someone has a " #" in a quoted var string, the parser will interpret
it as a comment and refuse to load the inventory file due to an
unbalanced quote. Noisy failure > unexpected behavior.
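A sketch of the failure mode (illustrative parsing, not the actual ini code):
stripping a '#' comment without tracking quotes leaves an unbalanced quote
behind, and the tokenizer then fails loudly instead of loading a mangled
value.
```
import shlex

line = 'host var="value # not a comment"'

# Splitting on '#' without tracking quotes truncates the value:
truncated = line.split("#", 1)[0]   # -> 'host var="value '

try:
    shlex.split(truncated)
except ValueError as e:
    # shlex raises "No closing quotation" -- a noisy failure,
    # preferred here over silently loading a mangled value.
    print("refused to load: %s" % e)
```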
PluginLoader._get_paths, as of 391fb98e, was only finding plug-ins that
were in a subdirectory of one of the basedirs (i.e. in a category
directory). For example, action_plugins/foo.py would never be loaded,
but action_plugins/bar/foo.py would work.
This makes it so that "uncategorized" plug-ins in the top level of a
directory such as action_plugins will be loaded, though plug-ins in a
"category" subdirectory will still be preferred. For example,
action_plugins/bar/foo.py would be preferred over action_plugins/foo.py.
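A simplified sketch of the corrected search order (directory layout and
function name are illustrative):
```
import os

def get_plugin_paths(basedir):
    # Category subdirectories come first, so a categorized plug-in
    # (action_plugins/bar/foo.py) is preferred over an uncategorized
    # one at the top level (action_plugins/foo.py).
    paths = [os.path.join(basedir, e)
             for e in sorted(os.listdir(basedir))
             if os.path.isdir(os.path.join(basedir, e))]
    paths.append(basedir)  # the fix: also search the basedir itself
    return paths
```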
The copy action accepts force=no, which tells it not to replace an
existing file even if it differs from the source. The copy action
plug-in wasn't respecting this option when operated in check mode, so it
would report that changes are necessary in check mode even though copy
would make no changes when run normally.
Runner._remote_md5 was changed to make the logic for setting rc perhaps
a little more clear, and to make sure that rc=0 when the file does not
exist.
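In pseudocode, the check-mode decision should mirror a normal run (names are
illustrative):
```
def copy_would_change(dest_exists, md5s_differ, force):
    # force=no means never replace an existing destination, so an
    # existing file can never produce a change -- in check mode or
    # in a normal run.
    if dest_exists and not force:
        return False
    return (not dest_exists) or md5s_differ
```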
As documented in #2623, early variable substitution causes when_
tests to fail and possibly other side effects.
I can see the reason for this early substitution, likely introduced
in 1dfe60a6, to allow many playbook parameters to be templated.
This is a valid goal, but the recursive nature of the utils.template
function means that it goes too far.
At this point removing tasks from the list of parameters to be
substituted seems sufficient to make my tests pass. It may be the
case that other parameters should be excluded, but I suspect not.
Adding a test case. I would prefer to analyse not just the aggregate
statistics but also whether the results are as expected; I can't see
an easy way to do that with the available callbacks at present.