The 'versioner.c' dodging check added in 0ef23d86 was wrong: the check
ran on the host machine, whereas the fix needs to apply to the Darwin
target.
This fixes the ability to target OS X from a Red Hat controller, where
the bug manifests as an error like:
D mitogen: mitogen.parent.TtyLogStream('local.2472'): 'python(mitogen:dmw@localhost.localdomain:2449): realpath couldn\'t resolve "/usr/bin/python(mitogen:dmw@localhost.localdomain:2449)"'
The "realpath couldn't resolve" error comes from versioner.c:
https://opensource.apple.com/source/perl/perl-104/versioner/versioner.c
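A minimal sketch of the corrected decision, using illustrative names
(choose_argv0 and is_apple_wrapper are not Mitogen's real API): the
choice of argv[0] must be driven by the target interpreter, not by
sys.platform on the controller.

    import sys

    def choose_argv0(real_path, target_is_darwin_wrapper):
        # versioner.c (Apple's /usr/bin/python wrapper) calls realpath()
        # on argv[0] and aborts if it cannot be resolved, so the
        # descriptive argv[0] trick must be skipped for that interpreter.
        if target_is_darwin_wrapper:
            return real_path
        return real_path + '(mitogen:descriptive-suffix)'

    # Wrong: sys.platform describes the controller (e.g. 'linux2' on a
    # Red Hat machine), so this check never fires when targeting OS X.
    host_check = sys.platform == 'darwin'

    # Right: decide from facts about the target, e.g. the remote platform
    # and python path discovered after connecting.
    def is_apple_wrapper(remote_platform, remote_python_path):
        return (remote_platform == 'darwin' and
                remote_python_path.startswith('/usr/bin/python'))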
It's possible for a message to arrive after .add_handler() but before
Latch construction.
This is papering over a bigger problem with service pool instantiation.
https://travis-ci.org/dw/mitogen/jobs/390409832#L2901
TASK [Spin up a few interpreters] **********************************************
changed: [target] => (item=1)
ERROR! [pid 5355] 14:47:50.224945 E mitogen.ctx.ssh.localhost:2201.sudo.mitogen__user2: mitogen: Router(Broker(0x7f1e93911450))._invoke(Message(19100, 19095, 19095, 110, 1005, '\x80\x02U\x1fmitogen.service.PushFileServiceq\x01U\x11store_and_f'..8955)): <bound method Receiver._on_receive of Receiver(Router(Broker(0x7f1e93911450)), 110)> crashed
Traceback (most recent call last):
File "<stdin>", line 1471, in _invoke
File "<stdin>", line 491, in _on_receive
AttributeError: 'Receiver' object has no attribute '_latch'
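A minimal sketch of the window described above, with
router.add_handler() simplified to take a single callback (the real
Mitogen signature differs): constructing the latch before the handler
is published closes the race.

    try:
        import Queue as queue  # Python 2, as used by Mitogen at the time
    except ImportError:
        import queue


    class RacyReceiver(object):
        # Broken ordering: the handler is live on the broker thread before
        # _latch exists, so an early message hits AttributeError.
        def __init__(self, router):
            self.handle = router.add_handler(self._on_receive)
            self._latch = queue.Queue()

        def _on_receive(self, msg):
            self._latch.put(msg)


    class SafeReceiver(object):
        # Fixed ordering: all instance state exists before the handler can
        # possibly be invoked.
        def __init__(self, router):
            self._latch = queue.Queue()
            self.handle = router.add_handler(self._on_receive)

        def _on_receive(self, msg):
            self._latch.put(msg)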
See the source comment. This behaviour has always existed, but it now
seems to be triggered since we started draining the master side input
buffer, which was somehow prolonging the life of the PTY.
The failed job result is likely to be "interrupted system call", and we
don't want it to overwrite the SIGALRM handler's "the task timed out"
message, so just discard it.
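A sketch of the intended behaviour, with invented names (TimeoutGuard,
on_alarm and on_job_result are illustrative, not Ansible's or Mitogen's
API): once the SIGALRM handler has recorded the timeout verdict, any
later failure derived from EINTR is dropped rather than allowed to
overwrite it.

    import signal


    class TimeoutGuard(object):
        def __init__(self):
            self.result = None

        def on_alarm(self, signum, frame):
            # The authoritative explanation for the failure.
            self.result = {'failed': True, 'msg': 'The task timed out'}

        def on_job_result(self, result):
            if self.result is not None:
                # A timeout already fired; this result is most likely just
                # "interrupted system call" fallout, so discard it.
                return
            self.result = result


    guard = TimeoutGuard()
    signal.signal(signal.SIGALRM, guard.on_alarm)
    signal.alarm(5)  # deliver SIGALRM if the job takes longer than 5s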
If PushFileService.store_and_forward() loses the race to arrive at a
brand new context first, and the context's main thread is already
executing a CALL_FUNCTION that is blocked on the result of the push,
deadlock could occur under the old scheme.
Instead, for now, simply spawn a thread for each incoming message, and
rely on the get_or_create_pool() lock to ensure things work out in the
end. This could generate a huge number of threads in the wrong app, but
we'll fix that problem when it appears.
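A sketch of the new scheme under assumed names (ServicePool and
on_service_call are placeholders, not the real Mitogen classes): each
message is handled on its own thread, and a lock inside
get_or_create_pool() guarantees every thread ends up sharing one pool.

    import threading


    class ServicePool(object):
        # Placeholder for the real service pool, which dispatches messages
        # such as PushFileService.store_and_forward().
        def __init__(self, router):
            self.router = router

        def dispatch(self, msg):
            pass


    _pool_lock = threading.Lock()
    _pool = None


    def get_or_create_pool(router):
        # Serialise pool construction so concurrent per-message threads
        # all agree on a single instance.
        global _pool
        with _pool_lock:
            if _pool is None:
                _pool = ServicePool(router)
            return _pool


    def on_service_call(router, msg):
        # One thread per message: wasteful in the worst case, but a
        # handler blocked waiting on a push can no longer deadlock the
        # thread that would have created the pool.
        t = threading.Thread(
            target=lambda: get_or_create_pool(router).dispatch(msg))
        t.start()
        return t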