If PushService.store_and_forward() loses the race to arrive at a brand
new context first, and the context's main thread is already executing a
CALL_FUNCTION that is blocked on the result of PushService, deadlock
could occur in the old scheme.
Instead (for now) simply spawn a thread for each incoming message, and
use the get_or_create_pool() lock to ensure things work out in the end.
This could potentially generate a huge number of threads given the wrong
app, but we'll fix that problem when it appears.
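A minimal sketch of the interim scheme, assuming a get_or_create_pool()
helper that serializes pool creation behind a lock; the pool class and
handler names here are illustrative, not the real code:

    import threading

    class ServicePoolSketch(object):
        # Hypothetical stand-in for the real service pool.
        def __init__(self, router):
            self.router = router

        def dispatch(self, msg):
            pass  # invoke the requested service method here

    _pool_lock = threading.Lock()
    _pool = None

    def get_or_create_pool(router):
        # Serialize pool creation so racing messages share one pool.
        global _pool
        with _pool_lock:
            if _pool is None:
                _pool = ServicePoolSketch(router)
            return _pool

    def on_service_call(router, msg):
        # One thread per incoming message: the main thread never blocks
        # waiting on a result it is itself expected to produce, so the
        # deadlock described above cannot occur.
        threading.Thread(
            target=lambda: get_or_create_pool(router).dispatch(msg),
        ).start()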
The very first task /must/ be clearing out logging locks, since
_at_fork() functions call LOG.debug() via Side.close(). Additionally,
the root logger is not included in loggerDict, so we must specify it
explicitly.
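A rough sketch of the lock-reset step, assuming it runs first in the
child's _at_fork() handling (the function name is illustrative):

    import logging

    def reset_logging_locks():
        # Recreate handler locks in the forked child before anything
        # logs. The root logger is absent from loggerDict, so list it
        # explicitly.
        loggers = [logging.getLogger()]
        loggers += [
            obj for obj in logging.Logger.manager.loggerDict.values()
            if isinstance(obj, logging.Logger)  # skip PlaceHolder entries
        ]
        for logger in loggers:
            for handler in logger.handlers:
                handler.createLock()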
This is likely to break something: it was definitely needed at some
point, but I never put much effort into figuring out why. Meanwhile,
Python appears to make find_module('ansible.module_utils.facts.')
requests in some circumstances, and while this hack is in place we
wrongly indicate that the module exists.
So remove it, and let's see what breaks.
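For illustration only, this is roughly the kind of false positive the
hack produced; the finder below is hypothetical, not the removed code:

    class PrefixFinderSketch(object):
        # A finder that answers purely on a name prefix will also claim
        # that malformed trailing-dot requests such as
        # 'ansible.module_utils.facts.' refer to a real module.
        def find_module(self, fullname, path=None):
            if fullname.startswith('ansible.module_utils'):
                return self  # wrongly matches the trailing-dot name too
            return None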
For lack of a better place to keep the client function, make it a
classmethod of FileService itself for now.
The old _get_file() is removed in a subsequent commit.
This is like FileService but blocks until the file is pushed by a parent
context, with deduplicating behaviour at each level in the hierarchy. It
does not stream large files, so it is only suitable for small files like
Python modules.
Additionally add SerializedInvoker for use with PushFileService, which
ensures all method calls to a single service occur in sequence.
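A condensed sketch of the blocking and deduplicating behaviour described
above; the class, locking, and forwarding step are illustrative
assumptions rather than the real PushFileService:

    import threading

    class PushFileServiceSketch(object):
        def __init__(self):
            self._lock = threading.Lock()
            self._cache = {}    # path -> bytes pushed to this context
            self._waiters = {}  # path -> [threading.Event] of blocked callers

        def get(self, path):
            # Block until a parent context pushes `path`, then return it.
            with self._lock:
                if path in self._cache:
                    return self._cache[path]
                event = threading.Event()
                self._waiters.setdefault(path, []).append(event)
            event.wait()
            with self._lock:
                return self._cache[path]

        def store_and_forward(self, path, data, children):
            # Deduplicate at this level: a path seen before is neither
            # stored nor forwarded again.
            with self._lock:
                if path in self._cache:
                    return
                self._cache[path] = data
                waiters = self._waiters.pop(path, [])
            for event in waiters:
                event.set()
            for child in children:
                # Hypothetical one-hop forward down the hierarchy.
                child.store_and_forward(path, data, [])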
It's easy to call msg.reply() by accident on a message that never had
reply_to set, resulting in an "invalid handle" error message coming from
the router. Instead, log a more accurate message at the point on the
stack that actually caused the problem.
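As a sketch of the intent, assuming a Message-like object (the logging
call and names below are illustrative):

    import logging

    LOG = logging.getLogger(__name__)

    class MessageSketch(object):
        def __init__(self, reply_to=None):
            self.reply_to = reply_to

        def reply(self, obj, router=None):
            if not self.reply_to:
                # Complain here, where the stack still points at the code
                # that forgot to set reply_to, instead of letting the
                # router emit a confusing 'invalid handle' error later.
                LOG.error('dropping reply %r: no reply_to set', obj)
                return
            # ... real routing of the reply would happen here ...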
With epoll() there is only one kernel-side object per file descriptor,
which is why _control() is such a pain. Since we merge receive/transmit
watching into that single object, we must always test the mask for both
conditions when reading results.
Kqueue doesn't appear to work like this. The identity of a Kqueue
event is keyed on (fd, filter), and we register a separate event for
both transmit and receive, so the 'elif' in KqueuePoller.poll() does not
appear to need to change.
Previously, an FD marked for read+write would not indicate writability
until it was no longer readable.
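A minimal sketch of the epoll-side fix, testing both bits of the event
mask independently; the poller structure here is illustrative:

    import select

    def poll_once_sketch(epoll, readers, writers, timeout=-1):
        # readers/writers map fd -> user data registered per direction.
        for fd, mask in epoll.poll(timeout):
            # One epoll event can carry both conditions for the same fd,
            # so use two independent 'if' tests rather than 'if/elif'.
            if mask & select.EPOLLIN and fd in readers:
                yield 'read', readers[fd]
            if mask & select.EPOLLOUT and fd in writers:
                yield 'write', writers[fd]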