It's no longer necessary, since connection attempts are no longer truly
blocking. When CTRL+C is hit in the top-level process, the broker begins
shutdown, which cancels all pending connection attempts, causing pool
threads to wake. The pool can no longer block during shutdown.
"self.initialized = False" slipped in a few days ago, on second thoughts
that flag is not needed at all, by simply rearranging ClassicWorkerModel
to have a regular constructor.
This hierarchy is still squishy and needs more love. The remaining
MuxProcess class attributes should be eliminated.
While catching every possible "open file limit exceeded" case is not
possible, we can at least increase the soft limit to the available hard
limit without any user effort.
Do this in the Ansible top-level process, even though we probably only
need it in the MuxProcess. There seems to be no reason it could hurt.
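A minimal sketch using only the stdlib resource module; the helper name
is illustrative, not the extension's actual code:

    import resource

    def increase_open_file_limit():
        # Raise the soft RLIMIT_NOFILE to whatever hard limit is already
        # available; no privileges or user configuration are required.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        if soft < hard:
            resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))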
Previously we exited without calling waitpid(), which meant the
top-level process's struct rusage did not reflect the resource usage
consumed by the multiplexer processes.
Existing benchmarks are made using perf, so this never created a
problem, but it could be confusing to others using the "time" command,
and calling waitpid() also allows logging the final exit status of the
process.
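For illustration, reaping the child is what makes its usage visible to
getrusage(RUSAGE_CHILDREN) and "time", and also yields the exit status;
a rough sketch, not the extension's actual shutdown path:

    import os
    import resource

    def reap_child(pid):
        # Without this waitpid() the child's CPU time never shows up in
        # the parent's RUSAGE_CHILDREN totals, and its status is lost.
        _, status = os.waitpid(pid, 0)
        usage = resource.getrusage(resource.RUSAGE_CHILDREN)
        return status, usage.ru_utime, usage.ru_stime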
Move all details of broker/router setup out of connection.py, deferring
them to a WorkerModel class exported by process.py via
get_worker_model(). The running strategy can override the configured
worker model via _get_worker_model().
ClassicWorkerModel is installed by default, implementing the
extension's existing process model.
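A rough sketch of the shape this implies; apart from get_worker_model()
and ClassicWorkerModel, the names below are illustrative assumptions
rather than the real interface:

    class WorkerModel(object):
        def on_strategy_start(self):
            """Set up broker/router resources for a strategy run."""
            raise NotImplementedError()

        def get_binding(self, inventory_name):
            """Return whatever a worker needs to reach its multiplexer."""
            raise NotImplementedError()

    class ClassicWorkerModel(WorkerModel):
        """Default model: the extension's existing process arrangement."""

    _worker_model = None

    def get_worker_model():
        # process.py export; the running strategy may substitute another
        # model via its own _get_worker_model().
        global _worker_model
        if _worker_model is None:
            _worker_model = ClassicWorkerModel()
        return _worker_model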
Add optional support for the third party setproctitle module, so
children have pretty names in ps output.
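For example, the optional dependency can be used along these lines (the
wrapper name is illustrative):

    try:
        import setproctitle
    except ImportError:
        setproctitle = None

    def set_pretty_title(name):
        # Purely cosmetic: silently do nothing when the module is absent.
        if setproctitle is not None:
            setproctitle.setproctitle(name)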
Add optional support for per-CPU multiplexers to classic runs.
This relies on the previous commit resetting global variables.
Update clean_shutdown() to handle duplicate calls, due to tests
repeatedly installing it.
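One way to tolerate repeat installation is a simple idempotence guard; a
sketch only, the real clean_shutdown() may differ:

    import socket

    _cleaned_up = False

    def clean_shutdown(sock):
        # Tests install this handler repeatedly, so duplicate calls must
        # be harmless no-ops.
        global _cleaned_up
        if _cleaned_up:
            return
        _cleaned_up = True
        sock.shutdown(socket.SHUT_RDWR)
        sock.close()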
It is not clear what the intention is here. We either need to ferret it
out of some other location, or just stop preloading the connection class
in the top-level process.
This is the most minimal change for what might be a relatively minimal
edge case. The alternative is replacing reload(), but let's not do that
yet.
Closes #555
The idea behind transport=smart is to select between paramiko and
OpenSSH given the availability of connection multiplexing and/or the
presence of OS X kernel bugs. We need to make no such choice.
Regardless of the version of simplejson loaded in the master, load up
the ModuleResponder cache with our 2.4-compatible version.
To cope with simplejson being loaded due to modules like ec2_group that
try to import it before importing 'json', also update target.py to
remove it from the whitelist if a local 'json' module import succeeds.
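A sketch of the target-side check; the whitelist variable here is an
assumed stand-in for whatever target.py really maintains:

    # Modules normally allowed to resolve to the controller's copies.
    whitelist = ['json', 'simplejson']

    try:
        import json
    except ImportError:
        json = None

    if json is not None and 'simplejson' in whitelist:
        # A local 'json' works, so never ship simplejson to this target.
        whitelist.remove('simplejson')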
Minify-safe files are marked with a magical "# !mitogen: minify_safe"
comment anywhere in the file, which activates the minifier. The result
is naturally cached by ModuleResponder, therefore lru_cache is gone too.
Given:

    import os, mitogen

    @mitogen.main()
    def main(router):
        c = router.ssh(hostname='k3')
        c.call(os.getpid)
        router.sudo(via=c)
SSH footprint drops from 56.2 KiB to 42.75 KiB (-23.9%)
Ansible "shell: hostname" drops 149.26 KiB to 117.42 KiB (-21.3%)
This has been broken for some time, but somehow it has become noticeable
on recent Ansible.
loop-100-tasks.yml before:
15.532724001 seconds time elapsed
8.453850000 seconds user
5.808627000 seconds sys
loop-100-tasks.yml after:
8.991635735 seconds time elapsed
5.059232000 seconds user
2.578842000 seconds sys
os._exit() subverted calm shutdown, meaning unix.Listener never had a
chance to clean up its socket.
Move unix.Listener socket cleanup into its class so it is automatic
during shutdown, rather than cut-and-pasted into each consumer.
Disable the watcher thread in the MuxProcess; it is useless.
Add a .sock extension to /tmp/mitogen_unix_*, so we can write a test.
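A sketch of cleanup owned by the listener class (the class shape is
assumed, not mitogen.unix's real API):

    import os

    class Listener(object):
        def __init__(self, path):
            self.path = path   # e.g. /tmp/mitogen_unix_<random>.sock

        def on_shutdown(self):
            # Runs during calm broker shutdown, so every consumer gets
            # the unlink for free instead of duplicating it.
            try:
                os.unlink(self.path)
            except OSError:
                pass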
Ansible 2.3/Python 2.4 work revealed there is no guarantee a slow target
will have written the initial job status file out before a fast
controller makes an initial check for it. Therefore, provide AsyncRunner
with a sender it should send a message to when the initial job file has
been written.
As a bonus, also catch and report exceptions happening early in
AsyncRunner, rather than leaving them to end up in -vvv output.
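A controller-side sketch of the handshake, assuming a mitogen Router is
available; target_func and the 'started_sender' keyword are illustrative
names:

    import mitogen.core

    def call_with_startup_ack(router, context, target_func, **kwargs):
        recv = mitogen.core.Receiver(router)
        kwargs['started_sender'] = recv.to_sender()
        result_recv = context.call_async(target_func, **kwargs)
        # Block until the target reports the initial job status file
        # exists, so a fast controller can no longer race a slow target.
        recv.get()
        return result_recv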
Since Python 2.4 fork is so defective, we must use subprocesses for
mitogen_task_isolation=fork. This has plenty of upside, since the long
term goal is to dump forking altogether. This allows a gentle
introduction of its replacement.
This is in part so image_prep can run against an ancient CentOS 5 image
without any upfront help, and in part simply because it's very easy to
support.
This refactors connection.py to pull the two huge dict-building
functions out into new transport_config.PlayContextSpec and
MitogenViaSpec classes, leaving a lot more room to breathe in both files
to figure out exactly how connection configuration should work.
The changes made in 1f21a30 / 3d58832 are updated or completely removed;
the original change was misguided, a bid to fix connection delegation
taking variables from the wrong place when delegate_to was active.
The Python path no longer defaults to '/usr/bin/python', as this does
not appear to be Ansible's normal behaviour. This has changed several
times, so it may have to change again, and it may cause breakage after
release.
Connection delegation respects c.DEFAULT_REMOTE_USER, whereas the
previous version simply tried to fetch whatever was in the
'ansible_user' hostvar. Many more connection delegation variables now
match vanilla's handling more closely, but this still requires more
work. Some of the variables need access to the command line, and
upstream is in the process of changing all that stuff around.
This replaces the previous method for coping with poor Popen()
performance, entirely monkey-patching the problem function rather than
simply working around it.
Ideally it would only be called once, and in future maybe it can be, but
right now we need to cope with these cases:
* Downstream parent notifies us of disconnection (DEL_ROUTE)
* We notify ourselves of disconnection
* We notify ourselves of disconnection and so does the downstream parent
It's case 3 that causes the error.
Simply listen to RouteMonitor's Context "disconnect" signal and forget
contexts according to RouteMonitor's rules, rather than duplicating them
(and screwing it up).
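A sketch of subscribing to that signal instead of re-deriving
disconnection independently; the forget callback is an illustrative
stand-in:

    import mitogen.core

    def watch_disconnect(context, forget):
        # Forget the context when the "disconnect" signal fires, reusing
        # RouteMonitor's rules rather than duplicating them here.
        mitogen.core.listen(
            obj=context,
            name='disconnect',
            func=lambda: forget(context),
        )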
Update _via_by_context earlier; fixes:
    Traceback (most recent call last):
      File "/Users/dmw/src/mitogen/mitogen/service.py", line 519, in _on_service_call
        return invoker.invoke(method_name, kwargs, msg)
      File "/Users/dmw/src/mitogen/mitogen/service.py", line 253, in invoke
        response = self._invoke(method_name, kwargs, msg)
      File "/Users/dmw/src/mitogen/mitogen/service.py", line 239, in _invoke
        ret = method(**kwargs)
      File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 454, in get
        reraise(*result)
      File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 412, in _wait_or_start
        response = self._connect(key, spec, via=via)
      File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 363, in _connect
        self._update_lru(context, spec, via)
      File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 266, in _update_lru
        self._update_lru_unlocked(new_context, spec, via)
      File "/Users/dmw/src/mitogen/ansible_mitogen/services.py", line 253, in _update_lru_unlocked
        if self._refs_by_context[context] == 0:
    KeyError: Context(1008, u'ssh.localhost.sudo.mitogen__user3')
An earlier commit moved the Stream.routes attribute into a private map
belonging to RouteMonitor, to make upgrades smoother. This adds a new
accessor method to RouteMonitor.
(Pull #377)
Changes:
- additional_parameters -> extra_args
- Merge with kubectl changes from dmw branch
- Update docs
- Remove unused username class member
- Avoid mutable kubectl_args class member
- Use six.iteritems
This change allows the kubectl connector to support the same options as
Ansible's original connector.
The playbook sample includes an example of a pod containing two
containers, checking that when moving from one container to the other,
the version of Python changes as expected.
Reverts 49736b3a; large file copies can't avoid the RTT.
The parent stack must be blocked while FileService progresses, as unlike
the small file path, it does not make a snapshot of the (possibly
temporary) file passed by the action plug-in. So we need to keep that
file alive while the service runs.
Add a new integration test and a new soak test to cover both.
When Ansible abnormally shuts down, the broker begins
force-disconnecting every context, including those for which a
connection is currently in progress.
When that happens, .call(init_child) throws ChannelError, and that needs
to be returned to the worker, assuming the worker still even exists.
This solution is incomplete: with sick nodes, it's also possible the
worker died naturally, and so the worker should perhaps respond by
retrying the connection.
Previously, the unhandled ChannelError would spam the console when e.g.
fork() began returning EAGAIN.
The connection multiplexer can expect not to be scheduled at least until
every one of the $forks worker processes has attempted a connection, so
the listen backlog must be able to hold every worker.
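In other words, the listener backlog should be at least the configured
fork count; a minimal sketch:

    import socket

    def make_mux_listener(path, forks):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.bind(path)
        # Every worker may be waiting to connect before the multiplexer
        # is next scheduled, so the backlog must cover all of them.
        sock.listen(forks)
        return sock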
* Always enable the faulthandler module in the top-level process if it
is available (see the sketch after this list).
* Make MITOGEN_DUMP_THREAD_STACKS interval configurable, to better
handle larger runs.
* Add docs subsection on diagnosing hangs.
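The faulthandler enablement amounts to roughly the following
(faulthandler is stdlib on Python 3 and a PyPI backport on Python 2):

    try:
        import faulthandler
    except ImportError:
        faulthandler = None

    if faulthandler is not None:
        # Dumps every thread's traceback on fatal signals such as SIGSEGV.
        faulthandler.enable()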
Calls to connect.put_file() where the file is small enough to fit in a
single RPC now proceed without waiting for an RPC response. If the write
fails, the target context will log an exception, and any subsequent step
depending on the written file will fail.
I verified every built-in action plugin for file transfer calls, and
they all depend on the transferred file in the following step, so this
should be safe.
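A sketch of the small-file fast path built on mitogen's fire-and-forget
call; the helper and target function names are illustrative, not the
plugin's real ones:

    def write_path(path, data):
        # Runs in the target context; any failure is logged there.
        fp = open(path, 'wb')
        try:
            fp.write(data)
        finally:
            fp.close()

    def put_small_file(context, path, data):
        # call_no_reply() returns immediately, so no round trip is spent
        # waiting; a later step that reads the file surfaces any failure.
        context.call_no_reply(write_path, path, data)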
Reduces template/copy actions to 2-RTT. loop-20-templates.yml runtime is
reduced from 30 seconds to 10 seconds over a 250ms link compared to
v0.2.2, and from 123 seconds compared to vanilla with pipelining
enabled.
PlayContext.delegate_to is the unexpanded template; Ansible doesn't keep
a copy of it around anywhere convenient. We either need to re-expand it
or take the expanded version that was stored on the Task, which is what
is done here.
This needs more work -- it is pretty certain that python_path and
suchlike are coming from the wrong place. Possibly we need another
config_from_..() variant specialized for delegate_to.