Per #3877, the code to wait for spot instance requests to finish would
hang for the full wait time if any spot request failed for any reason.
This commit introduces status checks for spot requests, so if the
request fails, finishes, or is cancelled, the task will fail or succeed
accordingly.
One edge case introduced here is that if a user manually terminates the
instance associated with the request, it won't fail the play, on the
presumption that the user *wants* the instance terminated.
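A minimal sketch of the kind of status check this enables, assuming boto's `get_all_spot_instance_requests()`; the function, variable names, and error handling below are illustrative rather than the module's own (and per the edge case above, a 'closed' request whose instance the user terminated would additionally be exempted):

```python
import time

def wait_for_spot_requests(ec2, request_ids, wait_timeout=600):
    """Poll spot requests and fail fast on a terminal state instead of
    sleeping out the whole timeout."""
    pending = list(request_ids)
    end_time = time.time() + wait_timeout
    while pending and time.time() < end_time:
        pending = []
        for req in ec2.get_all_spot_instance_requests(request_ids=request_ids):
            if req.state == 'active':
                continue                    # fulfilled; an instance was launched
            if req.state in ('cancelled', 'closed', 'failed'):
                raise RuntimeError("spot request %s ended in state '%s': %s"
                                   % (req.id, req.state, req.status.message))
            pending.append(req.id)          # still 'open'; keep polling
        if pending:
            time.sleep(5)
    if pending:
        raise RuntimeError("timed out waiting for spot requests: %s" % pending)
```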
A value for the project_id parameter to shade's create_network()
call was always being sent, even if no value for 'project' was
supplied. This was breaking folks with older versions of shade
(< 1.6).
Fixes https://github.com/ansible/ansible-modules-core/issues/3567
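A minimal sketch of the conditional call, assuming a shade cloud object; the parameter names and the `get_project_id()` helper are illustrative, not the module's exact code:

```python
# Only pass project_id when the user actually supplied 'project', since
# shade < 1.6 does not accept the keyword at all.
kwargs = {}
project = module.params.get('project')
if project is not None:
    kwargs['project_id'] = get_project_id(cloud, project)  # hypothetical helper

network = cloud.create_network(
    name=module.params['name'],
    shared=module.params['shared'],
    admin_state_up=module.params['admin_state_up'],
    external=module.params['external'],
    **kwargs
)
```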
The AWS API requires that any termination policy list that includes
`Default` must end with `Default`. The attribute sorting caused any list
of attributes to be lexically sorted, so a list like
`["OldestLaunchConfiguration", "Default"]` would be changed to
`["Default", "OldestLaunchConfiguration"]` because `Default` sorts earlier
alphabetically. This caused calls to fail with a BotoServerError, per #4069.
This commit also adds proper tracebacks to all BotoServerError fail_json
calls.
Closes #4069
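A minimal sketch of the ordering fix, assuming the policies arrive as a plain list of strings (the helper name is illustrative):

```python
def sorted_termination_policies(policies):
    """Sort termination policies but keep 'Default', if present, at the end,
    since the AWS API rejects lists where 'Default' is not last."""
    ordered = sorted(p for p in policies if p != 'Default')
    if 'Default' in policies:
        ordered.append('Default')
    return ordered

# sorted_termination_policies(["OldestLaunchConfiguration", "Default"])
# -> ["OldestLaunchConfiguration", "Default"]
```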
Previously, the calculation of the number of terminated instances assumed
all instances were in the first reservation returned by AWS. When that is
not the case, the calculated count never reaches the total number of
instances and the module always times out. By unpacking the instances from
every reservation we get an accurate count and the module exits correctly.
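A minimal sketch of the unpacking, assuming a boto EC2 connection (variable names are illustrative):

```python
# get_all_instances() returns a list of reservations, each holding its own
# list of instances, so flatten them before counting.
reservations = ec2.get_all_instances(instance_ids=instance_ids)
instances = [inst for res in reservations for inst in res.instances]
terminated = sum(1 for inst in instances if inst.state == 'terminated')
```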
Currently, instances with multiple ENIs can't be started or stopped
because sourceDestCheck is a per-interface attribute, but we use boto's
instance-level access to it (which only works when there's a single ENI).
This patch handles multiple ENIs and applies sourceDestCheck to all
interfaces in the same way.
Fixes #3234
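A minimal sketch of the per-interface update, assuming a boto Instance object and a desired boolean `source_dest_check` (names are illustrative):

```python
# Apply the same sourceDestCheck value to every attached ENI instead of
# relying on the single-interface instance attribute.
for interface in instance.interfaces:
    ec2.modify_network_interface_attribute(interface.id,
                                           'sourceDestCheck',
                                           source_dest_check)
```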
If you set `wait=yes` and use `instance_tags` as your filter for
stopping/starting EC2 instances, you get this stack trace:
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1540, in <module>
    main()
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1514, in main
    (changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids, state, instance_tags)
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1343, in startstop_instances
    if len(matched_instances) < len(instance_ids):
TypeError: object of type 'NoneType' has no len()

fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "ec2"}, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1540, in <module>\n    main()\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1514, in main\n    (changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids, state, instance_tags)\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1343, in startstop_instances\n    if len(matched_instances) < len(instance_ids):\nTypeError: object of type 'NoneType' has no len()\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
That's because `instance_ids` is None when it isn't supplied in the task,
so the instances returned by the `instance_tags` query never make it into
the wait loop. To fix this, we keep a list of the instances whose tags
matched and add that list to `instance_ids` before the wait loop, as
sketched below.
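A minimal sketch of that fix, assuming boto's `get_all_instances()`; the variable names are illustrative, not the module's exact ones:

```python
instance_ids = instance_ids or []
matched_instances = []
if instance_tags:
    filters = dict(('tag:%s' % k, v) for k, v in instance_tags.items())
    for res in ec2.get_all_instances(filters=filters):
        for inst in res.instances:
            matched_instances.append(inst)
            instance_ids.append(inst.id)  # so the wait loop sees them too
```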
Before this, all spot instance requests would fail because the code
_always_ called module.fail_json when the parameter was set (which it
always was, because the module parameter's default was set to 'stop').
As the comment said, this parameter doesn't make sense for spot
instances at all, so the error message was also misleading.
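The underlying gotcha, sketched below with a guard that may not match the module's exact wording: AnsibleModule fills in declared defaults, so `module.params.get(...)` is truthy even when the user never set the option.

```python
# Illustrative only. Because argument_spec declares default='stop', params.get()
# never returns None here, so this guard fired on every spot request:
shutdown_behavior = module.params.get('instance_initiated_shutdown_behavior')
if shutdown_behavior:  # always true, since it defaults to 'stop'
    module.fail_json(msg="shutdown behavior is not supported for spot instances")
```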
Due to a mixup of the group/role/user and policy names, policies with
the same name as the group/role/user they are attached to would never be
updated after creation. To fix that, we needed two changes to the policy
comparison logic (see the sketch after this list):
- Compare the new policy name to *all* matching policies, not just the
first in lexicographical order
- Compare the new policy name to the matching ones, not to the IAM
object the policy is attached to
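A minimal sketch of the corrected comparison, with illustrative names (not the module's actual variables); the point is to match on the policy name across *all* existing policies, not on the IAM object's name and not only on the first policy in sorted order:

```python
def policy_is_changed(new_policy_name, new_policy_doc, existing_policies):
    """existing_policies: dict mapping attached policy name -> policy document."""
    for name, doc in existing_policies.items():
        if name == new_policy_name:
            # Same name already attached: changed only if the document differs.
            return doc != new_policy_doc
    # No policy with this name exists yet, so it must be created.
    return True
```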
The `ssh_public_keys` parameter must be a list; otherwise it will give the error:
"argument ssh_public_keys is of type <type 'dict'> and we were unable to convert to list"