To support detach, we must be able to preload the target with every
module it will need prior to detachment. This implements the
intermediary part of the process (i.e. the Ansible fork parent) --
receiving LOAD_MODULE/FORWARD_MODULE pairs and ensuring they reach the
child.
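A minimal sketch of that relay, with invented names (ForkParentRelay, child.send() and the placeholder handle values are illustrative only, not the project's real router API):

    LOAD_MODULE = 'LOAD_MODULE'        # placeholder handle, not the real constant
    FORWARD_MODULE = 'FORWARD_MODULE'  # placeholder handle, not the real constant

    class ForkParentRelay(object):
        """Sit between the controller and the forked child: remember which
        module sources have been preloaded, and mirror each
        LOAD_MODULE/FORWARD_MODULE pair down so the child can import
        everything it needs after it detaches."""
        def __init__(self, child_context):
            self.child = child_context
            self.seen = set()   # fullnames already delivered to the child

        def on_load_module(self, fullname, source):
            if fullname not in self.seen:
                self.seen.add(fullname)
                self.child.send(LOAD_MODULE, (fullname, source))

        def on_forward_module(self, fullname):
            # FORWARD_MODULE asks us to guarantee the named module reaches
            # the child before any task that imports it starts running.
            assert fullname in self.seen, fullname
            self.child.send(FORWARD_MODULE, fullname)
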
This may come back to bite later, but in the meantime it avoids shipping
up to 12KiB of junk metadata for every single task invocation.
For detachment (a.k.a. async), we must ensure the target has completed both
kinds of preload (modules and module_utils files) before it detaches.
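As an illustration only (the class and method names are invented), the gate amounts to counting outstanding preloads of each kind and refusing to detach until both counts drain:

    import threading

    class PreloadGate(object):
        """Block detach until every expected preload of both kinds
        (module sources and module_utils files) has arrived."""
        def __init__(self):
            self._lock = threading.Lock()
            self._outstanding = {'module': 0, 'module_utils': 0}
            self._idle = threading.Event()
            self._idle.set()

        def expect(self, kind):
            with self._lock:
                self._outstanding[kind] += 1
                self._idle.clear()

        def complete(self, kind):
            with self._lock:
                self._outstanding[kind] -= 1
                if not any(self._outstanding.values()):
                    self._idle.set()

        def wait_before_detach(self, timeout=None):
            # Called immediately prior to detaching from the parent.
            return self._idle.wait(timeout)
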
Bootstrap would hang if (as of writing) the kernel did not hand the first
stage a pipe large enough to hold 42,006 bytes. It was luck that this
didn't manifest before, as the first stage could write the full source
and exit completely before reading began.
It is not clear under which circumstances this could previously occur,
but at least since Linux 4.5, it can be triggered if
/proc/sys/fs/pipe-max-size is reduced from the default of 1MiB, which
can have the effect of capping the default pipe buffer size of 64KiB to
something lower.
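The capacity is easy to inspect on Linux via fcntl; a quick sketch (the F_*PIPE_SZ constants are Linux-specific and defined inline in case the running Python predates their addition to the fcntl module):

    import fcntl
    import os

    F_SETPIPE_SZ = 1031   # Linux-specific fcntl commands
    F_GETPIPE_SZ = 1032

    r, w = os.pipe()
    print('default pipe capacity: %d bytes' % fcntl.fcntl(w, F_GETPIPE_SZ))

    # Shrink the pipe to a single page, mimicking a lowered
    # /proc/sys/fs/pipe-max-size. A blocking write of the ~42KB bootstrap
    # into a pipe this small stalls until the other end starts reading,
    # which is exactly the hang described above.
    fcntl.fcntl(w, F_SETPIPE_SZ, 4096)
    print('reduced pipe capacity: %d bytes' % fcntl.fcntl(w, F_GETPIPE_SZ))

    os.close(r)
    os.close(w)
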
The suspicion is that the pipe buffer size could also be reduced under
memory pressure, as references to busy machines appeared a few times in
the bug report.
The OpenShift installer modifies /etc/resolv.conf and then tests the new
resolver configuration; however, there was no mechanism to reload
resolv.conf in our reusable interpreter.
https://github.com/openshift/openshift-ansible/blob/release-3.9/roles/openshift_web_console/tasks/install.yml#L137
This inserts an explicit call to res_init() for every new-style
invocation, at a cost of roughly 1usec on Linux, since glibc checks
whether resolv.conf has changed before reloading it.
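A minimal sketch of such a call through ctypes, assuming a glibc that exports the __res_init symbol (other libcs may differ, hence the fallback):

    import ctypes

    try:
        # dlopen(NULL): look the symbol up in the already-loaded process
        # image rather than naming a particular library.
        _res_init = ctypes.CDLL(None).__res_init
    except (AttributeError, OSError):
        _res_init = None

    def reload_resolver_config():
        """Ask the C library to re-read /etc/resolv.conf before the next
        lookup. Cheap on Linux, since glibc only reparses the file when
        it has actually changed."""
        if _res_init is not None:
            _res_init()
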
There is little to be done for users of the thread-safe resolver APIs,
since their state is hidden from us. If bugs like that manifest,
whack-a-mole style 'del sys.modules[thatmod]' patches may suffice.
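Purely as an illustration of that style of patch (the module name below is hypothetical):

    import sys

    def evict_cached_module(name):
        """Drop a module whose import-time state has gone stale, so the
        next import re-executes it and picks up fresh state."""
        sys.modules.pop(name, None)

    # e.g. evict_cached_module('some_resolver_wrapper')   # hypothetical name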