Reverts 49736b3a; large file copies can't avoid the RTT.
The parent stack must remain blocked while FileService makes progress:
unlike the small file path, it does not snapshot the (possibly
temporary) file passed by the action plugin, so we need to keep that
file alive while the service runs.
Add a new integration test and a new soak test to cover both.
When Ansible abnormally shuts down, the broker begins
force-disconnecting every context, including those whose connection is
still in progress.
When that happens, .call(init_child) throws ChannelError, which needs
to be returned to the worker, assuming the worker even still exists.
This solution is incomplete: with sick nodes, it's also possible the
worker died naturally, and so the worker should perhaps respond by
retrying the connection.
Previously, the unhandled ChannelError would spam the console when e.g.
fork() began returning EAGAIN.
The connection multiplexer may not be scheduled until all $forks worker
processes have attempted a connection, so the backlog must be able to
hold every worker.
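A minimal sketch of the sizing rule, assuming a UNIX domain listener
and a forks count taken from Ansible's configuration:

    import socket

    def make_listener(path, forks):
        # The multiplexer may not run until after every worker has
        # called connect(), so the kernel backlog must queue them all.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.bind(path)
        sock.listen(forks)
        return sock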
* Always enable the faulthandler module in the top-level process if it
is available.
* Make MITOGEN_DUMP_THREAD_STACKS interval configurable, to better
handle larger runs (sketched below).
* Add docs subsection on diagnosing hangs.
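A minimal sketch of both behaviours, assuming the interval arrives via
the MITOGEN_DUMP_THREAD_STACKS environment variable:

    import os

    try:
        import faulthandler  # stdlib on 3.3+, pip backport on 2.x
    except ImportError:
        faulthandler = None

    if faulthandler is not None:
        # Dump thread stacks on fatal signals in the top-level process.
        faulthandler.enable()
        # Periodically dump all thread stacks to diagnose hangs.
        interval = int(os.environ.get('MITOGEN_DUMP_THREAD_STACKS', '0'))
        if interval:
            faulthandler.dump_traceback_later(interval, repeat=True)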
Conflicts:
ansible_mitogen/process.py
Calls to connect.put_file() where the file is small enough to fit in a
single RPC proceed without waiting for an RPC response. If the write
fails, the target context will log an exception, and any subsequent
step depending on the written file will fail.
I verified every built-in action plugin for file transfer calls, and
they all depend on the transferred file in the following step, so this
should be safe.
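A minimal sketch of the fire-and-forget path; write_path() here is an
assumed helper running in the target:

    def write_path(path, data):
        # Assumed target-side helper: write bytes to a path.
        with open(path, 'wb') as fp:
            fp.write(data)

    def put_small_file(context, path, data):
        # No reply is awaited, so the call costs no round trip. A
        # failed write is logged by the target, and the next step
        # consuming the file fails visibly.
        context.call_no_reply(write_path, path, data)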
Reduces template/copy actions to 2 RTTs: over a 250ms link,
loop-20-templates.yml runtime drops from 30 seconds (v0.2.2) to 10
seconds, and from 123 seconds for vanilla Ansible with pipelining
enabled.
PlayContext.delegate_to is the unexpanded template, and Ansible doesn't
keep a copy of the expansion around anywhere convenient. We either need
to re-expand it or take the expanded version that was stored on the
Task, which is what is done here.
This needs more work -- pretty certain that python_path and suchlike are
coming from the wrong place. Possibly we need another config_from_..()
specialized for delegate_to.
This change is relatively incomplete -- ideally we could snapshot
os.environ and /etc/environment at startup and respect key deletions
too, but that's a lot more work. Wait for a bug report instead.
Closes #338.
Concurrent calls to ModuleDepService would cause significant wasted
work, as potentially all pool threads run the same uncached module dep
scan.
Without:

    3243581 function calls (3233009 primitive calls) in 4770.672 seconds

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
      2523    0.011    0.000   39.849    0.016  services.py:409(scan)

With:

    2801561 function calls (2800042 primitive calls) in 5166.843 seconds

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
      2506    0.009    0.000    1.967    0.001  services.py:411(scan)
Ignore timing variance due to problems with the test job.
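A minimal sketch of the fix, assuming scans are memoized per module
path, with _scan_uncached() standing in for the real scanner:

    import threading

    class ModuleDepService(object):
        def __init__(self):
            self._lock = threading.Lock()
            self._cache = {}

        def _scan_uncached(self, module_path):
            # Stand-in for the real recursive module_utils scan.
            return []

        def scan(self, module_path):
            # The first caller for a path does the work; concurrent
            # pool threads block briefly and reuse the cached result.
            with self._lock:
                if module_path not in self._cache:
                    self._cache[module_path] = self._scan_uncached(module_path)
                return self._cache[module_path]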
Given an extracted download of mitogen-2.2.tar.gz, with strategy_plugins
pointing into it, if an old version of the package was pip-installed,
then the old pip-installed package would be imported and override
whatever came from the tarball.
Instead, modify sys.path before attempting any import. This still isn't
perfect, but it's better.
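A minimal sketch, assuming the strategy plugin can locate the
extracted tree relative to its own path:

    import os
    import sys

    # Prepend the tarball's base directory so it shadows any older
    # pip-installed copy before the first import happens.
    base = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    if base not in sys.path:
        sys.path.insert(0, base)

    import mitogen  # now resolved from the tarball, not site-packages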
When running any kind of script, rewrite the hashbang like Ansible
does, but subsequently ignore it and explicitly invoke the interpreter
using a fragment of shell built from the ansible_*_interpreter
variable, again as Ansible does.
This fixes hashbangs containing '/usr/bin/env A=1 bash' on Linux, where
putting that into a hashbang line results in an infinite loop.
* mitogen/ansible_mitogen should only generate ERROR-level logs in
log_path unless -vvv is enabled.
* Targets were accidentally configured to always have DEBUG set, causing
many log messages to be sent on the wire even though they would be
filtered in the master.
Closes #317.
Vanilla Ansible supports expandvars-like expansions widely, in a
variety of places. Prefer to whitelist those we need, rather than
sprinkling hellish semantics everywhere.
On OS X with case-insensitive filenames, resolving
'ansible.module_utils.facts.base.Hardware' finds
'ansible.module_utils.facts.hardware/__init__.py', because
module_finder's procedure is completely wrong for resolving child
modules. Patch over it for now since it otherwise works for Ansible.
* ansible: use unicode_literals everywhere since it only needs to be
compatible back to 2.6.
* compat/collections.py: delete this entirely and rip out the parts of
functools that require it.
* Introduce serializable Kwargs dict subclass that translates keys to
Unicode on instantiation (sketched after this list).
* enable_debug_logging() must set _v/_vv globals.
* cStringIO does not exist in 3.x.
* Treat IOLogger and LogForwarder input as latin-1.
* Avoid ResourceWarnings in first stage by explicitly closing fps.
* Fix preamble_size.py syntax errors.
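A minimal sketch of the Kwargs subclass, ignoring the serialization
plumbing:

    class Kwargs(dict):
        # 2.x senders may deliver byte-string keys, which cannot be
        # splatted as **kwargs on 3.x, so coerce them on instantiation.
        def __init__(self, kwargs):
            items = []
            for k, v in kwargs.items():
                if isinstance(k, bytes):
                    k = k.decode('utf-8')
                items.append((k, v))
            dict.__init__(self, items)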
The failed job result is likely to be "interrupted system call", and we
don't want that to overwrite the SIGALRM handler's "the task timed out",
so just discard it.
The controller must know the ID of the forked child in order to
propagate dependencies to it, so forking+starting the module run cannot
happen entirely on the target, without some additional mechanism to
wait-and-repropagate the deps as they arrive on the target.
Rework things so that init_child() also handles starting the fork parent,
and returns it along with the context's home directory in a single round
trip.
Now that the master knows the identity of the fork parent, it can
directly create fork children and call run_module_async() in them. This
necessitates 2 round trips to start an asynchronous task.
This whole thing sucks and badly needs to be simplified, but for now
things almost work, so keeping it.
connection.py:
* Expect ContextService to return the entire dict return value of
init_child(). Store the fork_context from the return value.
planner.py:
* Rework Planner to store the invocation as an instance attribute, to
simplify method calls.
* Add Planner.get_push_files() and Planner.get_module_deps().
* Add _propagate_deps() which takes a Planner and ensures the deps it
describes are sent to a (non forked or forked) context.
* Move async task logic out of target.py and into invoke() /
_invoke_*().
process.py:
* Services no longer need references to each other. planner.py handles
sending module deps with one extra RPC.
services.py:
* Return "init_child_result" key instead of simple "home_dir" key.
* Get rid of dep propagation from ModuleDepService, it lives in
planner.py now.
target.py:
* Get rid of async task start logic, lives in planner.py now.
planner.py:
* Rather than grant FileService access to a file for children, use
PushFileService to trigger deduplicating send of the file through
the hierarchy immediately.
* Send the complete list of Ansible module imports to the target so
runner.py knows which files and scripts must be loaded via
PushFileService prior to detaching.
runner.py:
* Teach NewStyleRunner to use the full module map to block until
everything is loaded prior to detach().
target.py:
* Delete old _get_file(), replace get_file() with get_small_file()
which uses PushFileService instead.
Closes #186.
For lack of a better place to keep the client function, make it a
classmethod of FileService itself for now.
The old _get_file() is removed in a subsequent commit.
It is not simple, without executing a module, to determine whether the
above refers to a submodule of a package or to an object defined within
a module.
Therefore detect when resolution of a child module yields the same path
as the parent, and ignore the result.
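A minimal sketch of the guard; resolve_path() is passed in here as a
stand-in for module_finder's lookup:

    def find_child_path(parent_path, fullname, resolve_path):
        path = resolve_path(fullname)
        if path == parent_path:
            # A case-insensitive filesystem aliased the child name back
            # to the parent (or the name is an attribute, not a
            # module): ignore the result.
            return None
        return path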
For "ansible -m setup" over a 25ms link, avoids 65 roundtrips and
reduces runtime from 5.7s to 4.1s (-28%).
For "ansible -m setup" over a simulated 250 ms link, reduces runtime
from m27.015s to 0m8.254s (-69%).
This may come back to bite later, but in the meantime it avoids shipping
up to 12KiB of junk metadata for every single task invocation.
For detachment (aka. async), we must ensure the target has two types of
preloads completed (modules and module_utils files) before detaching.
The OpenShift installer modifies /etc/resolv.conf then tests the new
resolver configuration; however, there was no mechanism to reload
resolv.conf in our reusable interpreter.
https://github.com/openshift/openshift-ansible/blob/release-3.9/roles/openshift_web_console/tasks/install.yml#L137
This inserts an explicit call to res_init() for every new style
invocation, at a cost of roughly 1 usec on Linux, since glibc checks
whether resolv.conf has changed before reloading it.
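A minimal sketch of the call, assuming glibc, which exports res_init
as __res_init:

    import ctypes

    try:
        libc = ctypes.CDLL(None)
        libc_res_init = libc.__res_init
    except (OSError, AttributeError):
        libc_res_init = None  # non-glibc: nothing sensible to call

    def reload_resolv_conf():
        # Cheap: glibc stats resolv.conf and reloads only on change.
        if libc_res_init is not None:
            libc_res_init()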
There is little to be done for users of the thread-safe resolver APIs,
their state is hidden from us. If bugs like that manifest, whack-a-mole
style 'del sys.modules[thatmod]' patches may suffice.
Traced git log all the way back to the beginning of time, and checked
Ansible versions starting from Jan 2016. Zero clue where this came
from, but the convention suggests it came from Ansible at some point.
While adding support for non-new style module types, NewStyleRunner
began writing modules to a temporary file, and sys.argv was patched to
actually include the script filename. The argv change was never required
to fix any particular bug, and a search of the standard modules reveals
no argv users. Update argv[0] to be '', like an interactive interpreter
would have.
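A minimal sketch of the argv handling, reusing the TemporaryArgv name
that appears later in this log:

    import sys

    class TemporaryArgv(object):
        # Present argv[0] = '' for the module's duration, exactly as
        # an interactive interpreter would.
        def __init__(self, argv):
            self.argv = argv

        def __enter__(self):
            self.old_argv = sys.argv[:]
            sys.argv[:] = self.argv

        def __exit__(self, exc_type, exc_val, exc_tb):
            sys.argv[:] = self.old_argv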
While fixing #210, new style runner began setting __file__ to the
temporary file path in order to allow apt.py to discover the Ansiballz
temporary directory. 5 out of 1,516 standard modules follow this
pattern, but in each case, none actually attempt to access __file__,
they just call dirname on it. Therefore do not write the contents of
file, simply set it to the path as it would exist, within a real
temporary directory.
Finally move temporary directory creation out of runner and into target.
Now a single directory exists for the duration of a run, and is emptied
by runner.py as necessary after each task invocation.
This could be further extended to stop rewriting non-new-style modules
in a with_items loop, but that's another step.
Finally the last bullet point in the documentation almost isn't a lie
again.
Ideally it would be possible to specify a callback function, but this is
not possible for proxied connections. So simply provide the 3 most
useful modes, defaulting to the most secure.
Closes #127. Closes #134.
mitogen/master.py:
Annotate forwarded log entries with their original source, logger
name, and message.
ansible:
mark stderr in red with -vvv
Tempting to make this appear 100% of the time, but some crappy
bashrcs may cause lots of junk to be printed.
This implements the first edition of Connection Delegation, where
delegating connection establishment is initially single-threaded.
ansible_mitogen/strategy.py:
ansible_mitogen/plugins/connection/*:
Begin splitting connection.Connection into subclasses, exposing them
directly as "mitogen_ssh", "mitogen_local", etc. connection types.
This is far from removing strategy.py, but it's a tiny start.
ansible_mitogen/connection.py:
* config_from_play_context() and config_from_host_vars() build up a
huge dictionary containing either more or less PlayContext contents,
or our best attempt at reconstructing a host's connection config
from its hostvars, where that config is not the current
WorkerProcess target.
They both produce the same format with the same keys, allowing
remaining code to have a single input format.
These dicts contain fields named after how Ansible refers to them,
e.g. "sudo_exe".
* _config_from_via() parses a basic connection specification like
"username@inventory_name" into one of the aforementioned dicts.
* _stack_from_config() produces a list of dicts describing the order
in which (Mitogen) connections should be established, such that each
element is proxied via= the previous element. The dicts produced by
this function use Mitogen keyword arguments.
These dicts contain fields named after how Mitogen refers to them,
e.g. "sudo_path".
* Pass the stack to ContextService, which is responsible for actual
setup of the full chain.
ansible_mitogen/services.py:
Teach get() to walk the supplied stack, establishing each connection
in turn, creating refcounts for it before continuing.
TODO: refcounting is broken in a variety of cases.
This commit only uses it for the target.get_file() helper, which is only
used for transferring modules. The next commit wires it into the
Connection.transfer_file() API, which is the method the copy module
uses.
The module name comes from YAML via Jinja2; it's always Unicode. Mixing
it into a temporary directory name produces a Unicode tempdir name,
which ends up in sys.argv via TemporaryArgv.
This is a partial fix; at least 2 cases still need covering:
- In-progress connections must have CallError or similar sent to any
waiters
- Once connection delegation exists, it is possible for other worker
processes to be active (at any step in the process), trying to
communicate with a context that we know can no longer be communicated
with. The solution to that isn't clear yet.
Additionally ensure root has /bin/bash shell in both Docker images.
And by "compatible" I mean "terrible". This does not implement async job
timeouts, but I'm not going to bother, upstream async implementation is
so buggy and inconsistent it resists even having its behaviour captured
in tests.
Now Connection.close() *must* be called in the worker, to ensure the
reference count for a context drops correctly.
Remove 'discriminator' for now; I'm not using it for testing any more,
and it complicated this code.
This code is a car crash; it needs to be rewritten again. Ideally
some/most of
this behaviour could live on services.DeduplicatingService somehow, but
I couldn't come up with a sensible design.
Elements of a with_items loop reuse one WorkerProcess to execute every
iteration, requiring us to reset Connection's idea of the connection on
each iteration; otherwise tasks would erroneously execute in the wrong
context.
Closes #105.
References #155.
mitogen/service.py:
Refactor services to support individually exposed methods with
different security policies for each method.
- @mitogen.service.expose() to expose a method and set its policy.
- @mitogen.service.arg_spec() to validate input (both sketched below).
- Require basic service message format to be a tuple of
`(method, kwargs)`, where kwargs is always a dict.
- Update DeduplicatingService to match the new scheme.
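A minimal sketch of the new scheme; the policy and argument spec shown
here are assumptions, not the commit's actual values:

    import mitogen.service

    class JobResultService(mitogen.service.Service):
        def __init__(self, router):
            super(JobResultService, self).__init__(router)
            self._results = {}

        @mitogen.service.expose(policy=mitogen.service.AllowParents())
        @mitogen.service.arg_spec({'job_id': str})
        def get(self, job_id):
            # Non-blocking fetch: None if the job has not completed.
            return self._results.pop(job_id, None)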
ansible_mitogen/connection.py:
- Rename 'method' to 'method_name' to disambiguate it from
service.call()'s method= argument.
ansible_mitogen/planner.py:
- Generate an ID for every job, sync or not, and fetch job results
from JobResultService rather than via the initiating function
call's return value.
- Planner subclasses now get to select whether their Runner should
run in a forked process. The base implementation requests this if
the 'mitogen_isolation_mode=fork' task variable is present.
ansible_mitogen/runner.py:
Teach runners to deliver their result via JobResultService executing
in their indirect parent mux process.
ansible_mitogen/plugins/actions/mitogen_async_status.py:
Split the implementation up into methods, and more compatibly
emulate Ansible's existing output.
ansible_mitogen/process.py:
Mux processes now host JobResultService.
ansible_mitogen/services.py:
Update existing services to the new mitogen.service scheme, and
implement JobResultService:
* listen() method for synchronous jobs. planner.invoke() registers a
Sender with the service prior to invoking the job, then sleeps
waiting for the service to write the job result to the
corresponding Receiver.
* Non-blocking get() method for implementing mitogen_async_status
action.
* Child-accessible push() method for delivering task results.
ansible_mitogen/target.py:
New helpers for spawning a virginal subprocess on startup, from
which asynchronous and mitogen_task_isolation=fork jobs are forked.
Necessary to avoid a task inheriting a potentially
polluted/monkey-patched parent environment, since remaining jobs
continue to run in the original child process.
docs/ansible.rst:
Add/merge/remove some behaviours/risks.
tests/ansible/integration:
New tests for forking/async.
Before:
$ ANSIBLE_STRATEGY=mitogen ansible -i derp, derp -m setup
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: (''.join(bits)[-300:],)
derp | FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
After:
$ ANSIBLE_STRATEGY=mitogen ansible -i derp, derp -m setup
derp | UNREACHABLE! => {
"changed": false,
"msg": "EOF on stream; last 300 bytes received: 'ssh: Could not resolve hostname derp: nodename nor servname provided, or not known\\r\\n'",
"unreachable": true
}
* Use identical logic to select when stdout/stderr are merged, so
'stdout', 'stdout_lines', 'stderr', 'stderr_lines' contain the same
output before/after the extension.
* When stdout/stderr are merged, synthesize carriage returns just like
the TTY layer.
* Mimic the SSH connection multiplexing message on stderr. Not really
for user code, but it means compare_output_test.sh needs fewer fixups.
Rather than assume anything about the structure of the Python code:
* Delete the exit_json/fail_json monkeypatches.
* Patch SystemExit rather than a magic monkeypatch-thrown exception.
* Set up fake cStringIO stdin, stdout, stderr and return those along
with the SystemExit exit status (sketched below).
* Set up _ANSIBLE_ARGS as we used to, since we still want to override
that with '{}' to prevent accidental import hangs, but also provide
the same string via sys.stdin.
* Compile the module bytecode once and re-execute it for every
invocation. May change this back again later, once some benchmarks are
done.
* Remove the fixups stuff for now, it's handled by ^ above.
Should support any "somewhat new style" Python module, including those
that just give up and dump stuff to stdout directly.
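A minimal sketch of the execution approach from the list above, with
the runner plumbing omitted:

    import sys
    from io import StringIO  # cStringIO on 2.x

    def exec_module(code_object):
        # Fake stdio; stdin carries the same '{}' that _ANSIBLE_ARGS
        # is overridden with, preventing accidental import hangs.
        saved = sys.stdin, sys.stdout, sys.stderr
        sys.stdin, sys.stdout, sys.stderr = \
            StringIO(u'{}'), StringIO(), StringIO()
        status = 0
        try:
            # The compiled bytecode can be cached and re-executed for
            # every invocation.
            exec(code_object, {'__name__': '__main__'})
        except SystemExit as e:
            status = e.args[0] if e.args else 0
        finally:
            out, err = sys.stdout, sys.stderr
            sys.stdin, sys.stdout, sys.stderr = saved
        return status, out.getvalue(), err.getvalue()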
Refactor planner.py to look a lot more like runner.py. This 'structural
cutpaste' looks messy -- probably we can simplify this code, even though
it's pretty simple already.
* Add helpers.get_file() that calls back up into FileService as
necessary. This is a stopgap measure.
* Add logging to exec_args() to simplify debugging of binary runners.
It looks a lot like multiple calls to _make_tmp_path() will result in
multiple temporary directories on the remote machine, only the last of
which will be cleaned up.
We must be bug-for-bug compatible for now, so ignore the problem in the
meantime.
Implement Connection.__del__, which is almost certainly going to trigger
more bugs down the line, because the state of the Connection instance is
not guaranteed during __del__. Meanwhile, it is temporarily needed for
deployed-today Ansibles that have a buggy synchronize action that does
not call Connection.close().
A better approach to this would be to virtualize the guts of Connection,
and move its management to one central place where we can guarantee
resource destruction happens reliably, but that may entail another
Ansible monkey-patch to give us such a reliable hook.
The strategy is reconstructed for every playbook that is included or
specified on the command line, therefore we can't store the global
Router there without losing all our SSH connections across playbooks.
Turns out Ansible can't be trusted to actually check the result
dictionary everywhere it expects one, so put the real exception text
into -vvv output too.
Ansible's PluginLoader makes up bullshit when it imports a module
(mostly because it has to make up something), therefore we ended up with
duplicate copies of ansible_mitogen loaded: one under
ansible.plugins.*.mitogen, and one under the canonical namespace.
Which broke isinstance().
It's used at least by the copy module, even though the result is still
mostly a no-op. _remote_chmod() doesn't accept an octal mode, it
accepts a symbolic mode, so implement a symbolic mode parser in
helpers.py (sketched below).
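A minimal sketch of such a parser, covering only the ugoa/+-=/rwx
subset; the real thing must also handle X, s, t and umask defaults:

    import re

    WHO = {'u': 0o700, 'g': 0o070, 'o': 0o007, 'a': 0o777}
    PERM = {'r': 0o444, 'w': 0o222, 'x': 0o111}
    CLAUSE = re.compile(r'^([ugoa]*)([+=-])([rwx]*)$')

    def apply_symbolic(mode, spec):
        # Apply e.g. 'u+rwx,go=rx' on top of an existing octal mode.
        for clause in spec.split(','):
            who, op, perms = CLAUSE.match(clause).groups()
            mask = 0
            for ch in (who or 'a'):
                mask |= WHO[ch]
            bits = 0
            for ch in perms:
                bits |= PERM[ch]
            bits &= mask
            if op == '+':
                mode |= bits
            elif op == '-':
                mode &= ~bits
            else:  # '='
                mode = (mode & ~mask) | bits
        return mode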
Still needs a ton of work to emulate argument handling, shell
selection, and output in every case. Unsurprisingly, Ansible documents
none of this.
On Python 2.x, operations on pthread objects with a timeout set actually
cause internal polling. When polling fails to yield a positive result,
it quickly backs off to a 50ms loop, which results in a huge amount of
latency throughout.
Instead, give up using Queue.Queue.get(timeout=...) and replace it with
the UNIX self-pipe trick. Knocks another 45% off my.yml in the Ansible
examples directory against a local VM.
This has the potential to burn a *lot* of file descriptors, but hell,
it's not the 1940s any more, RAM is all but infinite. I can live with
that.
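A minimal sketch of the trick, which also shows where the descriptors
go: every waiter owns its own pipe pair:

    import os
    import select

    class Latch(object):
        def __init__(self):
            self.rfd, self.wfd = os.pipe()

        def wait(self, timeout=None):
            # Sleep in select() instead of a polling pthread wait.
            rfds, _, _ = select.select([self.rfd], [], [], timeout)
            if rfds:
                os.read(self.rfd, 1)
                return True
            return False  # timed out

        def wake(self):
            os.write(self.wfd, b' ')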
This gets things down to around 75ms per playbook step, still hunting
for additional sources of latency.