The undocumented 'tmp' parameter controls whether _execute_module()
deletes anything on Ansible 2.3, so mimic that. This means
_execute_remove_stat() calls will no longer blow away the temp
directory, which is what previously broke the unarchive plugin.
When creating a context using Router.method(via=somechild),
unidirectional mode was set correctly on the new child. However, if
that child went on to call Router.method() itself, a typo meant the
resulting context would start without it.
This doesn't impact the Ansible extension, as only forked tasks are
started directly by children, and they are not responsible for routing
messages.
Add test so it can't happen again.
Previously, given something like:
import mitogen.core
import mitogen.select

l = mitogen.core.Latch()
l.put(1)
l.put(2)
s = mitogen.select.Select([l], oneshot=False)
assert 1 == s.get(block=False)
assert 2 == s.get(block=False)
The second call would throw TimeoutError, because Select.add() only
queued the receiver/latch once if it was non-empty, rather than once
for each buffered item as it should.
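Roughly, the corrected behaviour looks like the sketch below; this is
not the actual Select.add() source, and _receivers, _wake() and size()
are assumed names:

def add(self, recv):
    self._receivers.append(recv)
    recv.notify = self._wake
    # Previously a non-empty receiver was enqueued only once; enqueue it
    # once per buffered item so consecutive non-blocking get() calls succeed.
    for _ in range(recv.size()):
        self._wake(recv)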
Move all details of broker/router setup out of connection.py, instead
deferring it to a WorkerModel class exported by process.py via
get_worker_model(). The running strategy can override the configured
worker model via _get_worker_model().
ClassicWorkerModel, which implements the extension's existing process
model, is installed by default.
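As a loose illustration of that hook (only get_worker_model() and
_get_worker_model() come from the description above; the class layout
here is assumed), a strategy override might look like:

import ansible_mitogen.process
import ansible_mitogen.strategy

class StrategyModule(ansible_mitogen.strategy.StrategyMixin):
    def _get_worker_model(self):
        # Return a custom WorkerModel here; this sketch simply falls back
        # to whatever process.py configured (ClassicWorkerModel by default).
        return ansible_mitogen.process.get_worker_model()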
Add optional support for the third party setproctitle module, so
children have pretty names in ps output.
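For reference, the module's core API is a single call; the title string
here is only illustrative:

try:
    import setproctitle
except ImportError:
    setproctitle = None

if setproctitle:
    # Appears in ps/top output in place of the bare interpreter command line.
    setproctitle.setproctitle('mitogen: ansible worker')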
Add optional support for per-CPU multiplexers to classic runs.
Unlike on Debian, some environment variables that tickle
getpass.getuser() are being inherited. So use getuid() instead.
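For context, getpass.getuser() trusts LOGNAME, USER, LNAME and USERNAME
from the environment before falling back to the password database,
whereas a uid lookup ignores the environment entirely:

import getpass
import os
import pwd

env_name = getpass.getuser()                   # may reflect inherited environment
real_name = pwd.getpwuid(os.getuid()).pw_name  # always asks the password database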
Also install the doas binary on CentOS. CI was changed (I believe) to
shrink the configuration matrix, and now these tests run on CentOS too.
This relies on the previous commit resetting global variables.
Update clean_shutdown() to handle duplicate calls, due to tests
repeatedly installing it.
Split Stream into many, many classes
* mitogen.parent.Connection: Handles connection setup logic only.
  * Maintains references to the stdout and stderr streams.
  * Manages a TimerList timer to cancel the connection attempt after the
    deadline.
  * Blocking setup code is replaced by async equivalents running on the
    broker.
* mitogen.parent.Options: Tracks connection-specific options. This
keeps the connection class small, but more importantly, it is
generic to the future desire to build and execute command lines
without starting a full connection.
* mitogen.core.Protocol: Handles program behaviour relating to events
on a stream. Protocol performs no IO of its own, instead deferring
it to Stream and Side. This makes testing much easier, and means
libssh can reimplement Stream and Side to reuse MitogenProtocol.
* mitogen.core.MitogenProtocol: Guts of the old Mitogen stream
implementation.
* mitogen.core.BufferedWriter: Guts of the old Mitogen buffered
transmit implementation, made generic.
* mitogen.core.DelineatedProtocol: Guts of the old IoLogger; knows how
to split up input and pass it on to an
on_line_received()/on_partial_line_received() callback. A toy sketch
of the idea follows this list.
* mitogen.parent.BootstrapProtocol: Asynchronous equivalent of the old
blocking connect code. Waits for various prompts (MITO001 etc) and
writes the bootstrap using a BufferedWriter. On success, switches
the stream to MitogenProtocol.
* mitogen.core.Message: move encoding parts of MitogenProtocol out to
Message (where it belongs) and write a bunch of new tests for
pickling.
* The bizarre Stream.construct() is gone: Options.__init__ is now its
own constructor. Should fix many LGTM errors.
* Update all connection methods: Every connection method is updated to
use async logic, defining protocols as required to handle interactive
prompts like in SSH or su. Add new real integration tests for at least
doas and su.
* Eliminate manual fd management: File descriptors are trapped in file
objects at their point of origin, and Side is updated to use file
objects rather than raw descriptors. This eliminates a whole class of
bugs where unrelated FDs could be closed by the wrong component. Now
an FD's open/closed status is fused to it everywhere in the library.
* Halve file descriptor usage: now that FD open/close state is tracked
by its file object, we don't need to duplicate FDs everywhere so the
receive and transmit sides can be closed independently. Instead both
sides back on to the same file object. Closes #26, closes #470.
* Remove most uses of dup/dup2: Closes #256. File descriptors are
trapped in a common file object and shared among classes. The
remaining few uses for dup/dup2 are as close to minimal as possible.
* Introduce mitogen.parent.Process: uniform interface for subprocesses
created either via mitogen.fork or the subprocess module. Remove all
the crap where we steal a pid from subprocess guts; now subprocess
manages its own processes, as it should. Closes #169 by using the new
Timers facility to poll for a slow-to-exit subprocess.
* Fix su password race: Closes #363. DelineatedProtocol naturally
retries partially received lines, preventing the cause of the original
race.
* Delete old blocking IO utility functions
iter_read()/write_all()/discard_until().
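As mentioned in the DelineatedProtocol item, the line-splitting idea
reduces to something like this toy sketch (not the real class;
everything except the two callbacks is assumed):

class LineProtocol(object):
    # Buffer received bytes, emit each complete line, and report any
    # trailing partial line.
    def __init__(self):
        self._trailer = b''

    def on_receive(self, data):
        buf = self._trailer + data
        lines = buf.split(b'\n')
        self._trailer = lines.pop()   # incomplete remainder, if any
        for line in lines:
            self.on_line_received(line)
        if self._trailer:
            self.on_partial_line_received(self._trailer)

    def on_line_received(self, line):
        print('line: %r' % (line,))

    def on_partial_line_received(self, line):
        print('partial: %r' % (line,))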
Closes #26. Closes #147. Closes #169. Closes #256. Closes #363.
Closes #419. Closes #470.
../data/stubs/stub-kubectl.py exec -it localhost -- /usr/bin/python -c "...":
Traceback (most recent call last):
File "<string>", line 1, in <module>
LookupError: unknown encoding: base64
It's not clear why this is happening. "stub-kubectl.py" is executed with
the 2.7 virtualenv, while the exec() that happens inside stub-kubectl
was for "/usr/bin/python".
That second Python can't find chunks of its stdlib:
stat("/usr/lib/python2.7/encodings/base64", 0x7ffde8744c60) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.7/encodings/base64.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.7/encodings/base64module.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.7/encodings/base64.py", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.7/encodings/base64.pyc", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, "Traceback (most recent call last):\n", 35) = 35
write(2, " File \"<string>\", line 1, in <module>\n", 39) = 39
This is the most minimal change for what might be a relatively minor
edge case. The alternative is replacing reload(), but let's not do
that yet.
Closes #555
The idea behind transport=smart is to select between paramiko and
OpenSSH depending on the availability of connection multiplexing
and/or the presence of OS X kernel bugs. We need to make no such
choice.
There has always been a race in PushFileService. Given a parent asked
to forward modules to two children via some intermediary:
interm = router.local()
c1 = router.local(via=interm)
c2 = router.local(via=interm)

# service: a PushFileService instance running in the parent.
service.propagate_to(c1, 'foo/bar.py')
service.propagate_to(c2, 'foo/bar.py')
Two calls will be emitted to 'interm':
PushFileService.store_and_forward(c1, 'foo/bar.py', [blob])
PushFileService.forward(c2, 'foo/bar.py')
Which will be processed in-order up to the point where service pool
threads in 'interm' are woken to process the message.
While it is guaranteed store_and_forward() will be processed first, no
guarantee existed that its assigned pool thread would wake and take
_lock first, thus it was possible for forward() to win the race, and for
a request to arrive to forward a file that had not been placed in local
cache yet.
Here we get rid of SerializedInvoker entirely, as it was partially to
blame for hiding the race: SerializedInvoker can only ensure no two
messages are processed simultaneously; it cannot ensure the messages
are processed in their intended order.
Instead, teach forward() that it may be called before
store_and_forward(), and if that is the case, to place the forward
request on to _waiters alongside any local threads blocked in get().
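A rough sketch of that pattern follows; it is not the actual
PushFileService code, and everything except forward(),
store_and_forward(), _waiters and _lock is assumed:

import threading

class PushFileServiceSketch(object):
    # forward() tolerates running before store_and_forward() by parking
    # the request in _waiters until the blob arrives.
    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}     # path -> blob
        self._waiters = {}   # path -> [callbacks awaiting the blob]

    def store_and_forward(self, path, blob, context):
        with self._lock:
            self._cache[path] = blob
            callbacks = self._waiters.pop(path, [])
        for callback in callbacks:
            callback(blob)               # satisfy forwards that raced ahead
        self._send(context, path, blob)

    def forward(self, path, context):
        with self._lock:
            blob = self._cache.get(path)
            if blob is None:
                # store_and_forward() has not run yet: queue the request
                # alongside any local threads blocked in get().
                self._waiters.setdefault(path, []).append(
                    lambda data: self._send(context, path, data)
                )
                return
        self._send(context, path, blob)

    def _send(self, context, path, blob):
        pass  # deliver the blob toward the target context (omitted)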
This was needed at some point in the past, but the tests don't seem to
care about it any more. We'll fix any CI breakage by changing the tests,
since verifying implicit localhost behaviour is important.
Minify-safe files are marked with a magical "# !mitogen: minify_safe"
comment anywhere in the file, which activates the minifier. The result
is naturally cached by ModuleResponder, therefore lru_cache is gone too.
Given:
import os, mitogen

@mitogen.main()
def main(router):
    c = router.ssh(hostname='k3')
    c.call(os.getpid)
    router.sudo(via=c)
SSH footprint drops from 56.2 KiB to 42.75 KiB (-23.9%).
Ansible "shell: hostname" drops from 149.26 KiB to 117.42 KiB (-21.3%).
Ansible 2.3/Python 2.4 work revealed there is no guarantee a slow target
will have written the initial job status file out before a fast
controller makes an initial check for it. Therefore, provide AsyncRunner
with a sender it should send a message to when the initial job file has
been written.
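On the controller side this amounts to a receiver/sender handshake
along these lines (a hedged sketch; names other than
mitogen.core.Receiver, to_sender() and get() are assumptions):

import mitogen.core

def start_and_wait(router, target, start_async_task):
    recv = mitogen.core.Receiver(router)
    # Hand the remote runner a Sender it fires once the initial job
    # status file has been written, then block until that happens.
    target.call_async(start_async_task, on_started=recv.to_sender())
    recv.get()   # returns (or raises) once the target confirms the file exists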
As a bonus, also catch and report exceptions happening early in
AsyncRunner, rather than leaving them to end up in -vvv output.