Ideally it would be called only once, and in future maybe it can be, but
right now we need to cope with these cases:
1. Downstream parent notifies us of disconnection (DEL_ROUTE)
2. We notify ourselves of disconnection
3. We notify ourselves and so does the downstream parent
It's case 3 that causes the error.
Simply listen for RouteMonitor's Context "disconnect" signal and forget
contexts according to RouteMonitor's rules, rather than duplicating those
rules (and screwing it up).
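As a rough sketch of that approach: mitogen.core.listen() and the
'disconnect' signal name are real API, while the service class, its
attributes, and the handler name below are illustrative.

    import mitogen.core

    class ContextService(object):  # illustrative stand-in
        def __init__(self):
            self._refs_by_context = {}

        def _register(self, context):
            # Subscribe once; RouteMonitor arranges for 'disconnect' to
            # fire under its own rules, so no duplicate bookkeeping
            # paths are needed here.
            self._refs_by_context[context] = 0
            mitogen.core.listen(
                obj=context,
                name='disconnect',
                func=lambda: self._on_context_disconnect(context),
            )

        def _on_context_disconnect(self, context):
            # Forget the context exactly once, whichever of the three
            # cases above triggered the disconnection.
            self._refs_by_context.pop(context, None)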
Update _via_by_context earlier; fixes:
Traceback (most recent call last):
  File "/Users/dmw/src/mitogen/mitogen/service.py", line 519, in _on_service_call
    return invoker.invoke(method_name, kwargs, msg)
  File "/Users/dmw/src/mitogen/mitogen/service.py", line 253, in invoke
    response = self._invoke(method_name, kwargs, msg)
  File "/Users/dmw/src/mitogen/mitogen/service.py", line 239, in _invoke
    ret = method(**kwargs)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 454, in get
    reraise(*result)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 412, in _wait_or_start
    response = self._connect(key, spec, via=via)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 363, in _connect
    self._update_lru(context, spec, via)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 266, in _update_lru
    self._update_lru_unlocked(new_context, spec, via)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 253, in _update_lru_unlocked
    if self._refs_by_context[context] == 0:
KeyError: Context(1008, u'ssh.localhost.sudo.mitogen__user3')
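A hedged sketch of the reordering, using only attribute names visible in
the traceback; the surrounding _connect() body is an assumption, not the
actual code.

    def _connect(self, key, spec, via=None):
        context = self._build_context(spec, via)  # hypothetical helper
        # Record bookkeeping for the new context *before* running the
        # LRU accounting; previously _update_lru() could observe a
        # context missing from these maps and raise the KeyError above.
        self._via_by_context[context] = via
        self._refs_by_context[context] = 0
        self._update_lru(context, spec, via)
        return context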
An earlier commit moved the Stream.routes attribute into a private map
belonging to RouteMonitor, to make upgrades smoother. This adds a new
accessor method to RouteMonitor.
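A sketch of what the accessor might look like, assuming the private map
is keyed by stream; the attribute name is a guess.

    class RouteMonitor(object):
        def __init__(self):
            self._routes_by_stream = {}  # stream -> set of context IDs

        def get_routes(self, stream):
            """Return the set of context IDs reachable via `stream`."""
            return self._routes_by_stream.get(stream) or set()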
(Pull #377)
Changes:
- additional_parameters -> extra_args
- Merge with kubectl changes from dmw branch
- Update docs
- Remove unused username class member
- Avoid mutable kubectl_args class member (see the sketch below)
- Use six.iteritems
This change allows the kubectl connector to support the same options as
Ansible's original connector.
The sample playbook includes an example of a pod containing two
containers, and checks that when moving from one container to the other,
the Python version changes as expected.
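To illustrate the last two Changes items above, a hedged sketch; the
class and option names are stand-ins, not the connector's actual code.

    import six

    class Connection(object):
        # BAD: a list defined at class level is shared by every
        # instance, so arguments appended by one connection leak into
        # all others:
        #   kubectl_args = []

        def __init__(self, options=None):
            # GOOD: build a fresh per-instance list instead.
            self.kubectl_args = []
            for name, value in six.iteritems(options or {}):
                self.kubectl_args.append('--%s=%s' % (name, value))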
Reverts 49736b3a; large file copies can't avoid the RTT.
The parent stack must remain blocked while FileService progresses, as,
unlike the small-file path, it does not take a snapshot of the (possibly
temporary) file passed by the action plug-in. So we need to keep that
file alive while the service runs.
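A hedged sketch of the blocking large-file path: FileService.register()
and the synchronous Context.call() are real mitogen API, but the
connection method, its attributes, and the fetch_file() target-side
helper are illustrative.

    def put_file(self, in_path, out_path):
        # Publish the (possibly temporary) local file via FileService.
        self._file_service.register(in_path)
        # Block this stack until the target confirms the copy: .call()
        # is synchronous, keeping in_path alive for the whole transfer.
        # Returning early would let the action plug-in delete the file
        # mid-stream. fetch_file is a hypothetical target-side helper.
        self._context.call(fetch_file, in_path=in_path, out_path=out_path)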
Add a new integration test and a new soak test to cover both.
When Ansible abnormally shuts down, the broker begins
force-disconnecting every context, including those for which connection
is currently in-progress.
When that happens, .call(init_child) throws ChannelError, and that needs
to be returned to the worker, assuming the worker even still exists.
This solution is incomplete: with sick nodes, it's also possible the
connection died naturally, and so the worker should perhaps respond by
retrying the connection.
Previously, the unhandled ChannelError would spam the console when e.g.
fork() began returning EAGAIN.
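A hedged sketch of the catch-and-forward: ChannelError, CallError, and
Message.reply() are real mitogen.core names, while the surrounding
function is illustrative.

    import mitogen.core

    def run_init(context, msg, init_child):
        try:
            msg.reply(context.call(init_child))
        except mitogen.core.ChannelError as e:
            # The broker force-disconnected mid-setup; report the
            # failure to the requesting worker (if it still exists)
            # instead of letting the exception propagate unhandled.
            msg.reply(mitogen.core.CallError(e))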
The connection multiplexer may not be scheduled until every one of the
$forks worker processes has attempted a connection, so the listening
socket's backlog must be able to hold every worker.
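A minimal sketch of the sizing rule using a plain UNIX socket; the real
listener lives elsewhere in Mitogen, and the forks value comes from
Ansible's configuration (both assumptions here).

    import socket

    forks = 50  # e.g. Ansible's configured forks setting
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind('/tmp/mitogen_mux.sock')
    # backlog >= worker count: a burst of $forks simultaneous connects
    # is queued by the kernel rather than refused.
    listener.listen(forks)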