- don't create a new connection during reset if none already exists
- strip off the last hop in the connection stack if PlayContext.become
  is True
- log a debug message if reset cannot find an existing connection
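A hedged sketch of the resulting reset flow; the helper names are
illustrative, not the extension's exact API:

    import logging
    LOG = logging.getLogger(__name__)

    def reset(self):
        if self.context is None:
            LOG.debug('reset found no existing connection, doing nothing')
            return
        stack = self._build_stack()    # illustrative helper
        if self._play_context.become:
            stack = stack[:-1]         # drop the become (last) hop
        self._shutdown_stack(stack)    # illustrative helper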
It used to be set by on_action_run() from task_vars, but this doesn't
work for meta: reset_connection. That meant MITOGEN_CPU_COUNT>1 would
pick the wrong mux to reset the connection on.
Without this, MuxProcess will start dying too early, before Ansible /
TaskQueueManager.cleanup() has a chance to wait on worker processes.
That would allow WorkerProcess to see ECONNREFUSED from the MuxProcess
socket much more easily.
This 4GB limit was already set for MuxProcess and inherited by all
descendants, including the context running on the target host, but it
was not applied to the WorkerProcess router.
That explains why the error from the ticket is being raised by the
router within the WorkerProcess rather than the router on the original
target.
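A minimal sketch of the shape of the fix, assuming the worker's router
exposes Mitogen's max_message_size attribute:

    import mitogen.master

    MAX_MESSAGE_SIZE = 4096 * 1048576  # the existing 4GB cap

    # Apply the same cap when constructing the WorkerProcess router,
    # matching what MuxProcess already sets for its own router.
    broker = mitogen.master.Broker()
    router = mitogen.master.Router(broker=broker)
    router.max_message_size = MAX_MESSAGE_SIZE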
The undocumented 'tmp' parameter controls whether _execute_module()
would delete anything on 2.3, so mimic that. This means
_execute_remove_stat() calls will not blow away the temp directory,
which broke the unarchive plugin.
The previous version quite reliably causes worker deadlocks within 10
minutes running:
    # 100 times:
    - import_playbook: integration/async/runner_one_job.yml
    # 100 times:
    - import_playbook: integration/module_utils/adjacent_to_playbook.yml
via .ci/soak/mitogen.sh with PLAYBOOK= set to the above playbook.
Attaching to the worker with gdb reveals it in an instruction
immediately following a futex() call, which likely returned EINTR due to
attaching gdb. Examining the pthread_mutex_t state reveals it to be
completely unlocked.
pthread_mutex_t on Linux should have zero trouble living in shmem, so
it's not clear how this deadlock is happening. Meanwhile POSIX
semaphores are explicitly designed for cross-process use and have a
completely different internal implementation, so try those instead.
One hour of soaking reveals no deadlock.
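A hedged sketch of the technique, not the actual implementation; it
assumes Linux/glibc and the x86-64 sem_t size:

    import ctypes
    import ctypes.util
    import mmap

    # sem_*() lives in librt on older glibc; CDLL(None) falls back to
    # the symbols already linked into the interpreter.
    _lib = ctypes.CDLL(ctypes.util.find_library('rt') or None,
                       use_errno=True)
    SEM_T_SIZE = 32  # sizeof(sem_t) on x86-64 Linux; platform-dependent

    # Anonymous shared memory: inherited by children across fork().
    buf = mmap.mmap(-1, mmap.PAGESIZE)
    sem = (ctypes.c_byte * SEM_T_SIZE).from_buffer(buf, 0)
    counter = ctypes.c_uint32.from_buffer(buf, SEM_T_SIZE)

    # pshared=1 marks the semaphore for cross-process use; an initial
    # value of 1 makes it behave as a mutex.
    if _lib.sem_init(ctypes.byref(sem), 1, 1):
        raise OSError(ctypes.get_errno(), 'sem_init() failed')

    def next_cpu(cpu_count):
        _lib.sem_wait(ctypes.byref(sem))
        try:
            cpu = counter.value % cpu_count
            counter.value += 1
        finally:
            _lib.sem_post(ctypes.byref(sem))
        return cpu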
This is about avoiding managing a lockable temporary file on disk to
contain our counter, and somehow communicating a reference to it into
subprocesses (despite the subprocess module closing inherited fds, etc),
somehow deleting it reliably at exit, and somehow avoiding concurrent
Ansible runs stepping on the same file. For now ctypes is still less
pain.
A final possibility would be to abandon a shared counter and instead
pick a CPU based on the hash of e.g. the new child's process ID. That
would likely balance equally well, and might be worth exploring when
making this code work on BSD.
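That stateless alternative is trivial to sketch:

    import os

    # Hypothetical: no shared counter at all; derive the mux index from
    # the child's own PID, which balances roughly as well on average.
    def next_cpu_stateless(cpu_count):
        return os.getpid() % cpu_count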
It's no longer necessary, since connection attempts are no longer truly
blocking. When CTRL+C is hit in the top-level process, broker will begin
shutdown, which will cancel all pending connection attempts, causing
pool threads to wake. The pool can't block during shutdown anymore.
"self.initialized = False" slipped in a few days ago, on second thoughts
that flag is not needed at all, by simply rearranging ClassicWorkerModel
to have a regular constructor.
This hierarchy is still squishy and needs more love. The remaining
MuxProcess class attributes should be eliminated.
While catching every possible "open file limit exceeded" case is not
possible, we can at least increase the soft limit to the available hard
limit without any user effort.
Do this in the Ansible top-level process, even though we probably only
need it in the MuxProcess. It seems there is no reason this could hurt.
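A minimal sketch of the idea:

    import resource

    # Raise the open files soft limit to whatever the hard limit
    # allows; only root may raise the hard limit itself, so leave it.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))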
Previously we exited without calling waitpid(), which meant the
top-level process's struct rusage did not reflect the resource usage
consumed by the multiplexer processes.
Existing benchmarks are made using perf, so this never created a
problem, but it could be confusing to others using the "time" command.
Waiting on the children also allows logging the final exit status of
the process.
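A minimal sketch of the difference:

    import os
    import resource

    pid = os.fork()
    if pid == 0:
        os._exit(42)  # child: stand-in for a multiplexer process

    # Reap the child before exiting: its usage now folds into the
    # parent's RUSAGE_CHILDREN totals, and its status can be logged.
    _, status = os.waitpid(pid, 0)
    print('mux exit status: %d' % os.WEXITSTATUS(status))
    print(resource.getrusage(resource.RUSAGE_CHILDREN))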
Move all details of broker/router setup out of connection.py, instead
deferring it to a WorkerModel class exported by process.py via
get_worker_model(). The running strategy can override the configured
worker model via _get_worker_model().
ClassicWorkerModel is installed by default, which implements the
extension's existing process model.
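A hedged sketch of the arrangement; the method names are assumptions,
not the extension's exact interface:

    # process.py -- sketch only.
    class WorkerModel(object):
        """Hide broker/router lifecycle behind a single interface."""
        def on_strategy_start(self):
            raise NotImplementedError()

        def on_strategy_complete(self):
            raise NotImplementedError()

    class ClassicWorkerModel(WorkerModel):
        """The existing process model: shared MuxProcess(es) serving
        every WorkerProcess."""
        def on_strategy_start(self):
            pass  # start or attach to the mux here

        def on_strategy_complete(self):
            pass

    _worker_model = None

    def get_worker_model():
        # connection.py calls this rather than building brokers itself;
        # a strategy may override the choice via _get_worker_model().
        global _worker_model
        if _worker_model is None:
            _worker_model = ClassicWorkerModel()
        return _worker_model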
Add optional support for the third party setproctitle module, so
children have pretty names in ps output.
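A sketch of the optional import; the title format is illustrative:

    try:
        import setproctitle
    except ImportError:
        setproctitle = None

    def rename_process(name):
        # Pretty names in ps output when the third party module is
        # installed; silently a no-op otherwise.
        if setproctitle is not None:
            setproctitle.setproctitle('mitogen: %s' % (name,))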
Add optional support for per-CPU multiplexers to classic runs.
This relies on the previous commit resetting global variables.
Update clean_shutdown() to handle duplicate calls, due to tests
repeatedly installing it.
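A hedged sketch of one way to make installation idempotent; the
mechanism here is assumed, not the extension's actual code:

    import atexit

    _installed = [False]

    def clean_shutdown():
        pass  # placeholder for the real cleanup logic

    def install_clean_shutdown():
        # Tests call this repeatedly; register the hook only once.
        if not _installed[0]:
            _installed[0] = True
            atexit.register(clean_shutdown)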
It's not clear what the intention is here. We either need to ferret it
out of some other location, or just stop preloading the connection
class in the top-level process.
This is the most minimal change for what might be a relatively minimal
edge case. The alternative is replacing reload(), but let's not do that
yet.
Closes #555
The idea behind transport=smart is to select between paramiko and
OpenSSH given the availability of connection multiplexing and/or OSX
kernel bugs. We need to make no such choice.
Regardless of the version of simplejson loaded in the master, load up
the ModuleResponder cache with our 2.4-compatible version.
To cope with simplejson being loaded due to modules like ec2_group that
try to import it before importing 'json', also update target.py to
remove it from the whitelist if a local 'json' module import succeeds.
Minify-safe files are marked with a magical "# !mitogen: minify_safe"
comment anywhere in the file, which activates the minifier. The result
is naturally cached by ModuleResponder, therefore lru_cache is gone too.
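A sketch of the opt-in test; the marker string is real, the helper name
is illustrative:

    def is_minify_safe(source):
        # Files opt in to minification with a magic comment appearing
        # anywhere in the file.
        return '# !mitogen: minify_safe' in source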
Given:
    import os, mitogen

    @mitogen.main()
    def main(router):
        c = router.ssh(hostname='k3')
        c.call(os.getpid)
        router.sudo(via=c)
SSH footprint drops from 56.2 KiB to 42.75 KiB (-23.9%)
Ansible "shell: hostname" drops 149.26 KiB to 117.42 KiB (-21.3%)
This has been broken for some time, but somehow it has become noticeable
on recent Ansible.
loop-100-tasks.yml before:

    15.532724001 seconds time elapsed
    8.453850000 seconds user
    5.808627000 seconds sys

loop-100-tasks.yml after:

    8.991635735 seconds time elapsed
    5.059232000 seconds user
    2.578842000 seconds sys
os._exit() subverted calm shutdown, meaning unix.Listener never had a
chance to clean up its socket.
Move unix.Listener socket cleanup into its class so it is automatic
during shutdown, rather than cut-and-pasted for each consumer.
Disable the watcher thread in the MuxProcess; it is useless.
Add .sock extension to /tmp/mitogen_unix_*, so we can write a test.
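A hedged sketch of the shape, not the real class:

    import os
    import socket

    class Listener(object):
        def __init__(self, path):
            self.path = path
            self.sock = socket.socket(socket.AF_UNIX)
            self.sock.bind(path)
            self.sock.listen(5)

        def on_shutdown(self):
            # Cleanup lives with the listener itself, so every consumer
            # gets it for free during broker shutdown.
            self.sock.close()
            try:
                os.unlink(self.path)
            except OSError:
                pass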
Ansible 2.3/Python 2.4 work revealed there is no guarantee a slow target
will have written the initial job status file out before a fast
controller makes an initial check for it. Therefore, provide AsyncRunner
with a sender it should send a message to when the initial job file has
been written.
As a bonus, also catch and report exceptions happening early in
AsyncRunner, rather than leaving them to end up in -vvv output.
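A hedged sketch of the handshake using Mitogen primitives; the function
and parameter names are assumptions:

    import mitogen
    import mitogen.core

    def run_async_job(started_sender):
        # Target side: write the initial job status file first, then
        # tell the controller it exists.
        open('/tmp/job_status', 'w').close()  # stand-in for the file
        started_sender.send(None)

    @mitogen.main()
    def main(router):
        context = router.local()
        recv = mitogen.core.Receiver(router)
        context.call_async(run_async_job,
                           started_sender=recv.to_sender())
        recv.get()  # safe to poll for the status file from here on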
Since Python 2.4 fork is so defective, we must use subprocesses for
mitogen_task_isolation=fork. This has plenty of upside, since the long
term goal is to dump forking altogether. This allows a gentle
introduction of its replacement.
This is in part so image_prep can run against an ancient CentOS 5 image
without any upfront help, and in part simply because it's very easy to
support.
This refactors connection.py to pull the two huge dict-building
functions out into new transport_config.PlayContextSpec and
MitogenViaSpec classes, leaving a lot more room to breathe in both
files while figuring out exactly how connection configuration should
work.
The changes made in 1f21a30 / 3d58832 are updated or completely
removed; the original change was a misguided bid to fix connection
delegation taking variables from the wrong place when delegate_to was
active.
The Python path no longer defaults to '/usr/bin/python', as this does
not appear to be Ansible's normal behaviour. This has changed several
times, so it may have to change again, and it may cause breakage after
release.
Connection delegation respects c.DEFAULT_REMOTE_USER, whereas the
previous version simply tried to fetch whatever was in the
'ansible_user' hostvar. Many more connection delegation variables now
match vanilla's handling more closely, but this still requires more
work: some of the variables need access to the command line, and
upstream is in the process of changing all that stuff around.
This replaces the previous method for capping poor Popen()
performance, instead entirely monkey-patching the problem function
rather than simply working around it.
Ideally it would only be called once, and maybe in future it can be,
but right now we need to cope with these cases:
* Downstream parent notifies us of disconnection (DEL_ROUTE)
* We notify ourself of disconnection
* We notify ourself and so does downstream parent
It's case 3 that causes the error.
Simply listen for RouteMonitor's Context "disconnect" signal and
forget contexts according to RouteMonitor's rules, rather than
duplicating them (and screwing it up).
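A hedged sketch using Mitogen's signal helper:

    import mitogen.core

    def forget_on_disconnect(context, forget):
        # Let RouteMonitor's own notion of disconnection drive the
        # forgetting, rather than keeping a second copy of its rules.
        mitogen.core.listen(context, 'disconnect',
                            lambda: forget(context))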