- don't create a new connection during reset if none already exists
- strip off the last hop in the connection stack if PlayContext.become is True
- log a debug message if reset cannot find an existing connection (see the sketch below)
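A rough sketch of that reset flow, purely illustrative: the attribute and
helper names (context, _stack_from_config(), _shutdown_stack()) are
assumptions, not the extension's real Connection API.

    import logging

    LOG = logging.getLogger(__name__)

    def reset(connection, play_context):
        # Nothing to reset: log and return instead of building a new
        # connection only to tear it down again.
        if connection.context is None:
            LOG.debug('reset: no existing connection, nothing to do')
            return
        stack = connection._stack_from_config()   # hypothetical helper
        if play_context.become:
            # Strip the become hop off the end of the stack.
            stack = stack[:-1]
        connection._shutdown_stack(stack)          # hypothetical helper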
Move all details of broker/router setup out of connection.py, instead
deferring it to a WorkerModel class exported by process.py via
get_worker_model(). The running strategy can override the configured
worker model via _get_worker_model().
ClassicWorkerModel is installed by default, which implements the
extension's existing process model.
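A rough sketch of the indirection, assuming only the names mentioned above
(process.py's get_worker_model() and the strategy's _get_worker_model());
the mixin shown is illustrative:

    from ansible_mitogen import process

    class StrategyMixin(object):
        def _get_worker_model(self):
            # By default, defer to whatever process.py configured
            # (ClassicWorkerModel unless overridden); a strategy may return
            # a different WorkerModel from here instead.
            return process.get_worker_model()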
Add optional support for the third party setproctitle module, so
children have pretty names in ps output.
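Optional use of the module could look like this sketch; only the public
setproctitle.setproctitle() call is assumed, the wrapper is illustrative:

    try:
        import setproctitle
    except ImportError:
        # The dependency is optional: without it, children simply keep
        # their default titles in ps output.
        setproctitle = None

    def set_worker_title(title):
        if setproctitle is not None:
            setproctitle.setproctitle(title)

    # e.g. set_worker_title('mitogen:worker')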
Add optional support for per-CPU multiplexers to classic runs.
Minify-safe files are marked with a magical "# !mitogen: minify_safe"
comment anywhere in the file, which activates the minifier. The result
is naturally cached by ModuleResponder, therefore lru_cache is gone too.
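For example, a module opts in simply by carrying the comment anywhere in its
source (illustrative file):

    # !mitogen: minify_safe
    """Example module that opts in to minification via the comment above."""

    def greet(name):
        # Comments and docstrings in this file may be stripped before it is
        # sent to children.
        return 'hello, %s' % (name,)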
Given:

    import os, mitogen

    @mitogen.main()
    def main(router):
        c = router.ssh(hostname='k3')
        c.call(os.getpid)
        router.sudo(via=c)
SSH footprint drops from 56.2 KiB to 42.75 KiB (-23.9%)
Ansible "shell: hostname" drops from 149.26 KiB to 117.42 KiB (-21.3%)
Ideally it would only be called once, and in future maybe it can, but
right now we need to cope with these cases:
* Downstream parent notifies us of disconnection (DEL_ROUTE)
* We notify ourself of disconnection
* We notify ourself and so does downstream parent
It's case 3 that causes the error.
Simply listen to RouteMonitor's Context "disconnect" and forget
contexts according to RouteMonitor's rules, rather than duplicate them
(and screw it up).
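A minimal sketch of subscribing to that signal, assuming mitogen.core.listen();
forget_context() is a hypothetical helper standing in for the real cleanup.

    import functools
    import mitogen.core

    def forget_context(context):
        # Hypothetical: drop whatever cached state refers to the context.
        pass

    def watch(context):
        # Fires when RouteMonitor signals the Context's route has gone away.
        mitogen.core.listen(
            context,
            'disconnect',
            functools.partial(forget_context, context),
        )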
Update _via_by_context earlier; fixes:
Traceback (most recent call last):
  File "/Users/dmw/src/mitogen/mitogen/service.py", line 519, in _on_service_call
    return invoker.invoke(method_name, kwargs, msg)
  File "/Users/dmw/src/mitogen/mitogen/service.py", line 253, in invoke
    response = self._invoke(method_name, kwargs, msg)
  File "/Users/dmw/src/mitogen/mitogen/service.py", line 239, in _invoke
    ret = method(**kwargs)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 454, in get
    reraise(*result)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 412, in _wait_or_start
    response = self._connect(key, spec, via=via)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 363, in _connect
    self._update_lru(context, spec, via)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 266, in _update_lru
    self._update_lru_unlocked(new_context, spec, via)
  File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 253, in _update_lru_unlocked
    if self._refs_by_context[context] == 0:
KeyError: Context(1008, u'ssh.localhost.sudo.mitogen__user3')
Earlier commit moved Stream.routes attribute into a private map
belonging to RouteMonitor, to make upgrades smoother. This adds a new
accessor method to RouteMonitor.
When Ansible abnormally shuts down, the broker begins force-disconnecting
every context, including those whose connection setup is still in progress.
When that happens, .call(init_child) throws ChannelError, which needs to be
returned to the worker, assuming the worker even still exists.
This solution is incomplete: with sick nodes, it's also possible the
worker died naturally, and so the worker should perhaps respond by
retrying the connection.
Previously, the unhandled ChannelError would spam the console when e.g.
fork() began returning EAGAIN.
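A sketch of the failure path, assuming connection setup boils down to a
context.call() of init_child(); the reply callable is illustrative.

    import mitogen.core
    import ansible_mitogen.target

    def start_target(context, reply):
        try:
            result = context.call(ansible_mitogen.target.init_child)
        except mitogen.core.ChannelError as e:
            # The broker force-disconnected the half-built context during
            # shutdown; report the failure rather than leaving it unhandled.
            reply(error=str(e))
            return None
        return result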
Concurrent calls to ModuleDepService would cause significant wasted
work, as potentially all pool threads run the same uncached module dep
scan.
Without:

    3243581 function calls (3233009 primitive calls) in 4770.672 seconds

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
      2523    0.011    0.000   39.849    0.016  services.py:409(scan)

With:

    2801561 function calls (2800042 primitive calls) in 5166.843 seconds

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
      2506    0.009    0.000    1.967    0.001  services.py:411(scan)
Ignore timing variance due to problems with the test job.
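One way to avoid the duplicate work is a lock-protected cache keyed on module
path; this is an illustrative sketch, not ModuleDepService's actual
implementation.

    import threading

    class CachedScanner(object):
        def __init__(self, scan_func):
            self._scan_func = scan_func
            self._lock = threading.Lock()
            self._cache = {}

        def scan(self, module_path):
            # Only the first pool thread asking for a path pays for the scan;
            # concurrent callers block briefly and reuse the cached result.
            with self._lock:
                if module_path not in self._cache:
                    self._cache[module_path] = self._scan_func(module_path)
                return self._cache[module_path]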
* mitogen/ansible_mitogen should only generate ERROR-level logs in
log_path unless -vvv is enabled (see the sketch below).
* Targets were accidentally configured to always have DEBUG set, causing
many log messages to be sent on the wire even though they would be
filtered in the master.
Closes #317.
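A minimal sketch of the intended behaviour for the log_path handler, assuming
access to Ansible's verbosity count; the names here are illustrative.

    import logging

    def configure_log_path_handler(handler, verbosity):
        # Only ERROR and above reach log_path unless -vvv (or more) was given.
        handler.setLevel(logging.DEBUG if verbosity >= 3 else logging.ERROR)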
* ansible: use unicode_literals everywhere since it only needs to be
compatible back to 2.6.
* compat/collections.py: delete this entirely and rip out the parts of
functools that require it.
* Introduce a serializable Kwargs dict subclass that translates keys to
Unicode on instantiation (sketched after this list).
* enable_debug_logging() must set _v/_vv globals.
* cStringIO does not exist in 3.x.
* Treat IOLogger and LogForwarder input as latin-1.
* Avoid ResourceWarnings in first stage by explicitly closing fps.
* Fix preamble_size.py syntax errors.
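A sketch of such a Kwargs subclass, illustrating the idea rather than
reproducing the class shipped in mitogen:

    class Kwargs(dict):
        """Dict whose keys are coerced to Unicode at construction time, so it
        can safely be splatted as **kwargs on both 2.x and 3.x."""
        if str is bytes:  # Python 2: keys may arrive as bytes
            def __init__(self, dct):
                for k, v in dct.items():
                    if type(k) is str:
                        k = k.decode('utf-8')
                    self[k] = v
        else:             # Python 3: keys are already text
            def __init__(self, dct):
                dict.__init__(self, dct)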
The controller must know the ID of the forked child in order to
propagate dependencies to it, so forking+starting the module run cannot
happen entirely on the target without some additional mechanism to
wait-and-repropagate the deps as they arrive on the target.
Rework things so that init_child() also handles starting the fork parent,
and returns it along with the context's home directory in a single round
trip.
Now that the master knows the identity of the fork parent, it can directly
create fork children and call run_module_async() in them. This necessitates
two round trips to start an asynchronous task.
This whole thing sucks and badly needs simplifying, but for now things
almost work, so it stays.
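The shape of the single round trip might look like this hedged sketch;
start_fork_parent() is a hypothetical helper and the exact keys in the real
extension may differ.

    import os

    def init_child(econtext):
        # Start the fork parent on the target and hand its identity back to
        # the master together with the home directory, in one round trip.
        fork_context = start_fork_parent(econtext.router)  # hypothetical
        return {
            'fork_context': fork_context,
            'home_dir': os.path.expanduser('~'),
        }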
connection.py:
* Expect ContextService to return the entire dict return value of
init_child(). Store the fork_context from the return value.
planner.py:
* Rework Planner to store the invocation as an instance attribute, to
simplify method calls.
* Add Planner.get_push_files() and Planner.get_module_deps().
* Add _propagate_deps(), which takes a Planner and ensures the deps it
describes are sent to a given (forked or non-forked) context.
* Move async task logic out of target.py and into invoke() /
_invoke_*().
process.py:
* Services no longer need references to each other. planner.py handles
sending module deps with one extra RPC.
services.py:
* Return "init_child_result" key instead of simple "home_dir" key.
* Get rid of dep propagation from ModuleDepService, it lives in
planner.py now.
target.py:
* Get rid of async task start logic, lives in planner.py now.
planner.py:
* Rather than grant FileService access to a file for children, use
PushFileService to trigger a deduplicated send of the file through the
hierarchy immediately.
* Send the complete list of Ansible module imports to the target so
runner.py knows which files and scripts must be loaded via
PushFileService prior to detaching.
runner.py:
* Teach NewStyleRunner to use the full module map to block until
everything is loaded prior to detach().
target.py:
* Delete the old _get_file(); replace get_file() with get_small_file(),
which uses PushFileService instead (see the sketch below).
Closes #186.
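A hedged sketch of the push-based transfer using Context.call_service(); the
exact plumbing in planner.py/target.py differs in detail.

    import mitogen.service

    def push_small_file(parent, child, path):
        # Ask the parent's PushFileService to forward the file down the tree
        # to the child, deduplicating along the way.
        parent.call_service(
            service_name=mitogen.service.PushFileService,
            method_name='propagate_to',
            context=child,
            path=path,
        )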
For "ansible -m setup" over a 25 ms link, this avoids 65 round trips and
reduces runtime from 5.7s to 4.1s (-28%).
For "ansible -m setup" over a simulated 250 ms link, it reduces runtime
from 0m27.015s to 0m8.254s (-69%).
This may come back to bite later, but in the meantime it avoids shipping
up to 12KiB of junk metadata for every single task invocation.
For detachment (aka. async), we must ensure the target has completed both
types of preload (modules and module_utils files) before detaching.
While adding support for non-new style module types, NewStyleRunner
began writing modules to a temporary file, and sys.argv was patched to
actually include the script filename. The argv change was never required
to fix any particular bug, and a search of the standard modules reveals
no argv users. Update argv[0] to be '', like an interactive interpreter
would have.
While fixing #210, new style runner began setting __file__ to the
temporary file path in order to allow apt.py to discover the Ansiballz
temporary directory. 5 out of 1,516 standard modules follow this
pattern, but none of them actually attempt to open __file__; they just
call dirname on it. Therefore do not write the module's contents to that
file; simply set __file__ to the path at which it would exist within a
real temporary directory.
Finally move temporary directory creation out of runner and into target.
Now a single directory exists for the duration of a run, and is emptied
by runner.py as necessary after each task invocation.
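A minimal sketch of the resulting globals setup; the filename pattern and
helper are illustrative only.

    import os
    import sys

    def build_module_globals(module_name, temp_dir):
        # argv[0] is '', as in an interactive interpreter; no script exists.
        sys.argv = ['']
        return {
            '__name__': '__main__',
            # Point __file__ where the module *would* live inside the run's
            # single temporary directory, without writing its contents there.
            '__file__': os.path.join(temp_dir, module_name + '.py'),
        }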
This could be further extended to stop rewriting non-new-style modules
in a with_items loop, but that's another step.
Finally the last bullet point in the documentation almost isn't a lie
again.