The 'versioner.c' dodging check added in 0ef23d86 was wrong: the check
ran on the host machine, whereas the fix actually needs to apply on the
Darwin target.
Fixes the ability to target OS X from a Red Hat controller, where the
failure manifests as an error like:
D mitogen: mitogen.parent.TtyLogStream('local.2472'): 'python(mitogen:dmw@localhost.localdomain:2449): realpath couldn\'t resolve "/usr/bin/python(mitogen:dmw@localhost.localdomain:2449)"'
The "realpath couldn't resolve" error comes from versioner.c:
https://opensource.apple.com/source/perl/perl-104/versioner/versioner.c
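For illustration only (the helper name and the uname probe are assumptions,
not mitogen's actual code), the point is where the platform test executes:
consulting sys.platform on the controller answers the wrong question, so
the decision has to be derived from the target itself.
    import subprocess

    def needs_versioner_workaround(ssh_argv, python_path='/usr/bin/python'):
        # Wrong: checking sys.platform == 'darwin' here only says whether
        # the *controller* is a Mac, which is why a Red Hat controller
        # reached the wrong conclusion.
        # Right: ask the *target* what it is, over the same transport.
        uname = subprocess.check_output(ssh_argv + ['uname', '-s']).strip()
        return uname == b'Darwin' and python_path == '/usr/bin/python'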
It's possible for a message to arrive after .add_handler() but before
Latch construction.
This is papering over a bigger problem with service pool instantiation.
https://travis-ci.org/dw/mitogen/jobs/390409832#L2901
TASK [Spin up a few interpreters] **********************************************
changed: [target] => (item=1)
ERROR! [pid 5355] 14:47:50.224945 E mitogen.ctx.ssh.localhost:2201.sudo.mitogen__user2: mitogen: Router(Broker(0x7f1e93911450))._invoke(Message(19100, 19095, 19095, 110, 1005, '\x80\x02U\x1fmitogen.service.PushFileServiceq\x01U\x11store_and_f'..8955)): <bound method Receiver._on_receive of Receiver(Router(Broker(0x7f1e93911450)), 110)> crashed
Traceback (most recent call last):
  File "<stdin>", line 1471, in _invoke
  File "<stdin>", line 491, in _on_receive
AttributeError: 'Receiver' object has no attribute '_latch'
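A minimal sketch of the ordering fix implied above (simplified, not the
real Receiver): construct the latch before registering the handler, so
that by the time the broker thread can deliver a message, _on_receive()
already has a _latch to put it on.
    import mitogen.core

    class Receiver(object):
        def __init__(self, router, handle):
            # Build the latch *before* add_handler(): once the handler is
            # registered, the broker may dispatch a message immediately.
            self._latch = mitogen.core.Latch()
            self.handle = router.add_handler(fn=self._on_receive,
                                             handle=handle)

        def _on_receive(self, msg):
            self._latch.put(msg)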
See the source comment. This behaviour always existed, but it now seems to
be triggered since we started draining the master-side input buffer,
which was somehow prolonging the life of the PTY.
The failed job result is likely to be "interrupted system call", and we
don't want that to overwrite the SIGALRM handler's "the task timed out",
so just discard it.
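Roughly the intended behaviour, with made-up names (run_with_timeout, job)
standing in for the real async-runner code: once the alarm has recorded
the timeout result, a subsequent EINTR from the interrupted wait is thrown
away rather than allowed to replace it.
    import errno
    import signal

    def run_with_timeout(job, timeout_secs):
        result = {}

        def on_alarm(signum, frame):
            # Record the authoritative outcome as soon as the timer fires.
            result['failed'] = 1
            result['msg'] = 'the task timed out'

        signal.signal(signal.SIGALRM, on_alarm)
        signal.alarm(timeout_secs)
        try:
            try:
                result.update(job())  # may raise EINTR when the alarm fires
            except (IOError, OSError) as e:
                if e.args and e.args[0] == errno.EINTR and 'msg' in result:
                    pass  # discard "interrupted system call"; keep timeout
                else:
                    raise
        finally:
            signal.alarm(0)
        return result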
If PushFileService.store_and_forward() loses the race to arrive at a brand
new context first, and the context's main thread is already executing a
CALL_FUNCTION that is blocked on the result of that service, deadlock
could occur in the old scheme.
Instead (for now) simply spam a thread for each incoming message, and
use the get_or_create_pool() lock to ensure things work out in the end.
This could potentially generate a huge number of threads given the wrong
app, but we'll fix that problem when it appears.
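The pattern, using generic stand-in names rather than mitogen's real
classes: every incoming message gets its own short-lived dispatch thread,
and the pool-creation lock guarantees all of those threads converge on a
single service pool.
    import threading

    class ServicePool(object):
        # Stand-in for the real pool; dispatch() would route the message
        # to the named service's handler.
        def dispatch(self, msg):
            print('dispatching', msg)

    _pool_lock = threading.Lock()
    _pool = None

    def get_or_create_pool():
        # Whichever thread arrives first builds the pool; the rest reuse it.
        global _pool
        with _pool_lock:
            if _pool is None:
                _pool = ServicePool()
            return _pool

    def on_message(msg):
        # One thread per message ("spam a thread"): dispatch never runs on
        # the broker/main thread, so a CALL_FUNCTION blocked on a service
        # result cannot deadlock an incoming store_and_forward().
        t = threading.Thread(
            target=lambda: get_or_create_pool().dispatch(msg))
        t.start()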
The controller must know the ID of the forked child in order to
propagate dependencies to it, so forking+starting the module run cannot
happen entirely on the target, without some additional mechanism to
wait-and-repropagate the deps as they arrive on the target.
Rework things so that init_child() also handles starting the fork parent,
and returns it along with the context's home directory in a single round
trip.
Now that the master knows the identity of the fork parent, it can directly
create fork children and call run_module_async() in them. This
necessitates two round trips to start an asynchronous task.
This whole thing sucks and needs to be entirely simplified, but for now
things almost work, so keep it.
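For clarity, the shape of the new single-round-trip result might look like
the sketch below; start_fork_parent() is a hypothetical placeholder, not
the real target code.
    import os

    def start_fork_parent():
        # Hypothetical stand-in for however the fork parent context is
        # actually created on the target.
        return object()

    def init_child():
        # One round trip now yields both values the controller needs: the
        # fork parent (so it can create fork children and call
        # run_module_async() in them directly) and the home directory.
        return {
            'fork_context': start_fork_parent(),
            'home_dir': os.path.expanduser('~'),
        }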
connection.py:
* Expect ContextService to return the entire dict return value of
init_child(). Store the fork_context from the return value.
planner.py:
* Rework Planner to store the invocation as an instance attribute, to
simplify method calls.
* Add Planner.get_push_files() and Planner.get_module_deps().
* Add _propagate_deps(), which takes a Planner and ensures the deps it
describes are sent to a (forked or non-forked) context.
* Move async task logic out of target.py and into invoke() /
_invoke_*().
process.py:
* Services no longer need references to each other. planner.py handles
sending module deps with one extra RPC.
services.py:
* Return "init_child_result" key instead of simple "home_dir" key.
* Get rid of dep propagation from ModuleDepService, it lives in
planner.py now.
target.py:
* Get rid of async task start logic, lives in planner.py now.
The very first task /must/ be clearing out logging locks, since
_at_fork() functions call LOG.debug() via Side.close(). Additionally,
the root logger is not included in loggerDict, so we must specify it
explicitly.
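A minimal sketch of the idea (it touches logging's private module-level
lock, so treat it as illustrative rather than the literal fork handler):
    import logging
    import threading

    def _reset_logging_locks():
        # Run this first in the forked child: another thread may have held
        # these locks at fork time, and _at_fork() handlers call
        # LOG.debug() via Side.close().
        logging._lock = threading.RLock()   # private module-level lock
        loggers = [logging.getLogger()]     # root logger: not in loggerDict
        loggers += [obj for obj
                    in logging.Logger.manager.loggerDict.values()
                    if isinstance(obj, logging.Logger)]  # skip PlaceHolders
        for logger in loggers:
            for handler in logger.handlers:
                handler.createLock()        # replace a possibly-held lock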
This is likely to break something: it was definitely needed at some point,
but I never put much effort into figuring out why. Meanwhile, Python
appears to issue find_module('ansible.module_utils.facts.') requests in
some circumstances, and while this hack exists it causes us to claim that
module exists.
So remove it, and let's see what breaks.
planner.py:
* Rather than grant FileService access to a file for children, use
PushFileService to trigger deduplicating send of the file through
the hierarchy immediately.
* Send the complete list of Ansible module imports to the target so
runner.py knows which files and scripts must be loaded via
PushFileService prior to detaching.
runner.py:
* Teach NewStyleRunner to use the full module map to block until
everything is loaded prior to detach().
target.py:
* Delete the old _get_file(); replace get_file() with get_small_file(),
which uses PushFileService instead (see the sketch below).
Closes #186
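A sketch of what the replacement helper could look like; the pool,
get_service() and get() calls are assumptions about the service API rather
than a quote of the real target.py.
    import mitogen.service

    def get_small_file(context, path):
        # Fetch a small file the controller already pushed down the
        # hierarchy via PushFileService, instead of issuing an on-demand
        # FileService round trip.
        pool = mitogen.service.get_or_create_pool(router=context.router)
        service = pool.get_service('mitogen.service.PushFileService')
        return service.get(path)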