The undocumented 'tmp' parameter controls whether _execute_module()
deletes anything on Ansible 2.3, so mimic that behaviour. This means
_execute_remove_stat() calls will no longer blow away the temp
directory, which previously broke the unarchive plugin.
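As a hypothetical illustration of the rule (not the extension's real
code; the helper name is invented), the decision reduces to:

    def should_remove_tmp(tmp):
        # Mimic Ansible 2.3: a caller-supplied 'tmp' (as the unarchive
        # plugin passes around its _execute_remove_stat() calls) means
        # the temp directory is the caller's to clean up, so
        # _execute_module() must not delete it.
        return tmp is None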
* origin/dmw:
issue #613: must await 'exit' and 'disconnect' in wait=False test
Import LGTM config to disable some stuff
Fix up another handful of LGTM errors.
tests: work around AnsibleModule.run_command() race.
docs: mention another __main__ safeguard
docs: tweaks
formatting error
docs: make Sphinx install soft fail on Python 2.
issue #598: allow disabling preempt in terraform
issue #598: update Changelog.
* origin/dmw:
issue #605: update Changelog.
issue #605: ansible: share a sem_t instead of a pthread_mutex_t
issue #613: add tests for all the weird shutdown methods
Add mitogen.core.now() and use it everywhere; closes #614.
docs: move decorator docs into core.py and use autodecorator
preamble_size: make it work on Python 3.
docs: upgrade Sphinx to 2.1.2, require Python 3 to build docs.
docs: fix Sphinx warnings, add LogHandler, more docstrings
docs: tidy up some Changelog text
The previous version quite reliably causes worker deadlocks within 10
minutes running:
    # 100 times:
    - import_playbook: integration/async/runner_one_job.yml
    # 100 times:
    - import_playbook: integration/module_utils/adjacent_to_playbook.yml
via .ci/soak/mitogen.sh with PLAYBOOK= set to the above playbook.
Attaching to the worker with gdb reveals it in an instruction
immediately following a futex() call, which likely returned EINTR due to
attaching gdb. Examining the pthread_mutex_t state reveals it to be
completely unlocked.
pthread_mutex_t on Linux should have zero trouble living in shmem, so
it's not clear how this deadlock is happening. Meanwhile POSIX
semaphores are explicitly designed for cross-process use and have a
completely different internal implementation, so try those instead. One
hour of soaking reveals no deadlock.
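As a rough sketch of the shape of that fix (assumed names and sizes,
not Mitogen's actual code): a semaphore initialized with pshared=1 in
anonymous shared memory is inherited across fork() and is visible to
every worker. The 32-byte sem_t size and resolving symbols from the
process's own loaded libraries are glibc/Linux assumptions.

    import ctypes
    import mmap

    # Resolve sem_* from symbols already loaded into the process; works
    # when Python is linked against glibc/pthreads (assumption).
    _lib = ctypes.CDLL(None, use_errno=True)

    _SEM_T_SIZE = 32  # sizeof(sem_t) on 64-bit glibc (assumption)

    # MAP_SHARED|MAP_ANONYMOUS memory survives fork(), so parent and
    # children all see the same sem_t state.
    _mem = mmap.mmap(-1, _SEM_T_SIZE)
    _sem = ctypes.byref(ctypes.c_char.from_buffer(_mem))

    if _lib.sem_init(_sem, 1, 1):  # pshared=1, initial value 1
        raise OSError(ctypes.get_errno(), 'sem_init() failed')

    def acquire():
        if _lib.sem_wait(_sem):
            raise OSError(ctypes.get_errno(), 'sem_wait() failed')

    def release():
        if _lib.sem_post(_sem):
            raise OSError(ctypes.get_errno(), 'sem_post() failed')

Unlike pthread_mutex_t, this is the documented use: sem_overview(7)
describes exactly this pattern of sem_init() with a nonzero pshared in
a shared mapping.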
The alternative would mean managing a lockable temporary file on disk
to hold our counter, somehow communicating a reference to it into
subprocesses (despite the subprocess module closing inherited fds,
etc.), somehow deleting it reliably at exit, and somehow avoiding
concurrent Ansible runs stepping on the same file. For now, ctypes is
still less pain.
A final possibility would be to abandon a shared counter and instead
pick a CPU based on the hash of e.g. the new child's process ID. That
would likely balance equally well, and might be worth exploring when
making this code work on BSD.
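A sketch of that stateless alternative (the helper is hypothetical):

    import os

    def pick_cpu(pid, ncpus):
        # No shared counter: derive the CPU index from the child's PID.
        # Stateless, so nothing to lock, inherit, or clean up at exit.
        return hash(pid) % ncpus

    cpu = pick_cpu(os.getpid(), os.cpu_count() or 1)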
* origin/dmw:
issue #482: another Py3 fix
ci: try removing exclude: to make Azure jobs work again
compat: fix Py2.4 SyntaxError
issue #482: remove 'ssh' from checked processes
ci: Py3 fix
issue #279: add one more test for max_message_size
issue #482: ci: add stray process checks to all jobs
tests: fix format string error
core: MitogenProtocol.is_privileged was not set in children
issue #482: tests: fail DockerMixin tests if stray processes exist
docs: update Changelog.
issue #586: update Changelog.
docs: update Changelog.
[security] core: unidirectional routing wasn't respected in some cases
docs: tidy up Select.all()
issue #612: update Changelog.
When creating a context using Router.method(via=somechild),
unidirectional mode was set correctly on the new child; however, if
that child in turn called Router.method(), a typo meant the context it
created would start without it.
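Illustratively, the class of bug looks like this (invented names, not
Mitogen's real code): the flag must be copied from the spawning router
whenever a new child is configured, including when the spawner is
itself a child.

    class Router(object):
        def __init__(self, broker, unidirectional=False):
            self.broker = broker
            # When True, children may talk only to their parents.
            self.unidirectional = unidirectional

        def _child_config(self):
            # The broken version read a similarly named attribute that
            # was never set in children, so a child spawning its own
            # child effectively passed unidirectional=False.
            return {'unidirectional': self.unidirectional}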
This doesn't impact the Ansible extension, as only forked tasks are
started directly by children, and they are not responsible for routing
messages.
Add a test so it can't happen again.