By the time Waker.wake() writes to the pipe, any shared state that would
trigger action on the Broker thread has already been updated, so reads
triggered by subsequent Waker.wake() calls can't miss anything.
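
A minimal sketch of the ordering this relies on; the defer()/on_receive()
methods and field names here are illustrative only, not the real econtext
API. Shared state is published before the wake byte is written, and the
Broker drains the pipe before looking at that state, so a wake-up can never
be observed without the change that caused it:

    import os
    import threading

    class Waker(object):
        """Illustrative pipe-based waker for interrupting a poll loop."""
        def __init__(self):
            self.receive_side, self.transmit_side = os.pipe()
            self._deferred = []            # shared state, guarded by _lock
            self._lock = threading.Lock()

        def wake(self):
            # Interrupt the Broker's poll loop. Several wake() calls may
            # coalesce into a single read on the other side; that is fine,
            # because the state they advertise is already visible.
            os.write(self.transmit_side, b'\x00')

        def defer(self, func, *args):
            # Publish the shared state *before* waking the Broker.
            with self._lock:
                self._deferred.append((func, args))
            self.wake()

        def on_receive(self):
            # Broker thread: drain the pipe, then act on whatever state
            # exists, regardless of how many wake bytes produced this read.
            os.read(self.receive_side, 128)
            with self._lock:
                work, self._deferred = self._deferred, []
            for func, args in work:
                func(*args)
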
This avoids spewing tons of log messages every time there is a storm of
user-generated log messages.
* Header now contains (src, dst) context IDs for routing.
* econtext.context_id now contains the current process's context ID.
* Now do 16KB reads rather than 4KB.
* The econtext package is now imported uniformly in econtext/core.py by
  both slave and master.
* Introduce econtext.core.Message() to centralize pickling policy and tidy
  up various function interfaces; it may be ripped out again later.
* Teach slave/first stage to preserve the copy of econtext.core sent to
it, so that it can be used for subsequent slave-of-slave bootstraps.
* Disconnect Stream from Context, and teach Context to send messages via
Router. In this way the Context class works identically for slaves
directly connected via a Stream, or those for whom other slaves are
acting as proxies.
* Implement Router, which knows the set of contexts reachable via each
  Stream. Move the context registry out of Broker and into Router; see the
  sketch after this list.
* Move the _invoke crap out of Stream and into Context.
* Try to avoid pickling on the Broker thread wherever possible.
* Delete connection-specific fields from Context; they now live on the
  associated Stream subclass instead.
* Merge alloc_handle() and add_handle_cb() into add_handler().
* s/enqueue/send/
* Add a hacky guard to prevent send_await() deadlock from Broker thread.
* Temporarily break shutdown logic: graceful shutdown is broken since
  Broker no longer knows which contexts exist.
* Handle EIO in iter_read() too. Also need to support ECONNRESET in here.
* Make iter_read() show last 100 bytes on failure.
* econtext.master.connect() is now econtext.master.Router.connect(); move
  most of the copy-pasted context/stream construction into a single
  function and into Stream.construct().
* Stop using sys.executable, since it is the empty string when Python
has been started with a custom argv[0]. Hard-wire python2.7 for now.
* Streams now have names, which are used as the default name for the
  associated Context during construction. That way the Stream<->Context
  association is still fairly obvious and Stream.repr() prints something
  nice.
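
The sketch below illustrates the (src, dst) header fields and the Router
registry described above; the class names, fields, and methods are
invented for illustration and are not the real econtext interfaces:

    class Message(object):
        """Illustrative stand-in for econtext.core.Message: routing header
        fields bundled with an already-serialized payload."""
        def __init__(self, src_id, dst_id, handle, data):
            self.src_id = src_id      # context ID of the sender
            self.dst_id = dst_id      # context ID of the final recipient
            self.handle = handle      # per-context handler being targeted
            self.data = data          # pickled payload bytes

    class Router(object):
        """Illustrative Router: maps a destination context ID to the Stream
        that is the next hop towards it."""
        def __init__(self):
            self._stream_by_id = {}   # dst context ID -> next-hop Stream

        def register(self, context_id, stream):
            # Directly connected slaves and slaves proxied via another
            # slave are registered identically: only the next hop matters.
            self._stream_by_id[context_id] = stream

        def route(self, msg):
            stream = self._stream_by_id.get(msg.dst_id)
            if stream is None:
                raise ValueError('no route to context %r' % (msg.dst_id,))
            stream.send(msg)          # any object with a send() method

This is why Context no longer needs a direct Stream reference: it hands a
Message to the Router, which picks the next hop regardless of whether the
destination is directly connected or reached via a proxying slave.
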
Stop using cPickle on the Broker thread when it is not known whether
unpickling the data would invoke the import machinery, which currently
relies on blocking calls. Huge mess, but it works.
This is due to calls like:

    context.call(some.module.func, another.module.func)

We stringify the target as ("some.module", "func"), but the reference to
another.module.func is passed into the pickle machinery, and there is no
way to generically stringify all function references in user data for
reification on the main thread without doing something like this instead.
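
Roughly what is going on; stringify() and reify() are made-up helper names
used only to illustrate the idea:

    import importlib

    def stringify(func):
        # Reduce a function reference to plain strings that are safe to
        # pickle without relying on pickle's global lookup at load time.
        return func.__module__, func.__name__

    def reify(module_name, func_name):
        # Runs on the target's main thread, where blocking imports are OK.
        return getattr(importlib.import_module(module_name), func_name)

    import json
    print(stringify(json.dumps))                   # ('json', 'dumps')
    print(reify('json', 'dumps') is json.dumps)    # True

pickle performs the same kind of lookup itself whenever it meets a function
reference buried in user data: unpickling imports the function's module,
which is how cPickle on the Broker thread ends up invoking the blocking
import machinery.
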
PyFunction_New() and type_new() both simply look up the __name__ of the
global scope in which a function or class is defined in order to determine
its __module__. So we can do a better job of ensuring __module__ is set
correctly simply by overriding __name__ before defining any functions or
classes.
Works identically in Python 3.
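
A quick demonstration of that definition-time behaviour; the module name
'econtext.core' is just an example value:

    # Execute source in a fresh global namespace whose __name__ has been
    # fixed up *before* any def/class statements run.
    namespace = {'__name__': 'econtext.core'}
    source = "def some_func(): pass\nclass SomeClass(object): pass\n"
    exec(compile(source, 'econtext/core.py', 'exec'), namespace)

    print(namespace['some_func'].__module__)   # 'econtext.core'
    print(namespace['SomeClass'].__module__)   # 'econtext.core'
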
Since all Ansible modules ever written use worst-practice Python, the
module loader must be rewritten to cope with their horrors.
Ansible is woeful software:
* AnsibleModule argument declarations appear within the main() function,
so they can't be introspected prior to execution.
* No if __name__ == '__main__' guard means they can't be introspected
without triggering execution.
* By default the main() function attempts to read from stdin, hanging
our IO thread.
* So much unspeakable crap.
This rewrites the module loader to avoid actually executing a module
wherever possible. The downside is that the new loader must be aware of
far more details of the Python module mechanism; for example, namespace
packages are broken with the new importer, at the very least. On the plus
side, the module loader will now be able to cope with Django.
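
As an illustration only (this is not the real loader, just a sketch of
inspecting source without executing it), the missing-__main__-guard
problem can be detected by parsing rather than running the module:

    import ast

    def has_main_guard(source, filename='<module>'):
        # Parse the module source without executing it, and look for a
        # top-level "if" statement whose test compares __name__, i.e. the
        # conventional __main__ guard.
        tree = ast.parse(source, filename)
        for node in tree.body:
            if (isinstance(node, ast.If)
                    and isinstance(node.test, ast.Compare)
                    and isinstance(node.test.left, ast.Name)
                    and node.test.left.id == '__name__'):
                return True
        return False

    print(has_main_guard("if __name__ == '__main__':\n    main()\n"))  # True
    print(has_main_guard("main()\n"))                                  # False
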
* Start splitting docs up into internals.rst / api.rst
* Docs for lots more of econtext.core.
* Get rid of _update_stream() and has_output(), replace with individual
functions called as state changes.
* Add Broker.on_thread() and remove Stream._lock: simply call on_thread()
  to ensure buffer management is linearized (see the first sketch after
  this list).
* Rename read_side/write_side to receive_side/transmit_side like event
handler names.
* Clean up some more repr / debug logs.
* Move handle cleanup to Context.on_shutdown where it belongs.
* Make wake() a noop when called from broker thread.
* Replace the graceful_count crap with a Side.graceful attribute; add
  Broker.keep_alive() to check whether any registered readers want to be
  kept alive for graceful shutdown(), or whether any child contexts with a
  connected stream exist.
* Make master.Broker timeout slightly longer than slave broker.
* Add generic on_thread() to allow running code on the IO thread.
* Use TLS to track whether the importer is currently running; avoids
  needing to maintain an ignore stack (see the second sketch after this
  list).
* Print more debugging around cases where Importer skips a module.
* If a module is part of a package, import the package and examine its
__loader__. If we are not the loader, refuse to load it.
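
First sketch: the on_thread() dispatch decision and the wake()-is-a-noop
rule, using invented field names on a heavily elided Broker; only the
structure is meant to match the description above:

    import os
    import threading

    class Broker(object):
        """Illustrative fragment showing on_thread() dispatch."""
        def __init__(self):
            self._rfd, self._wfd = os.pipe()   # wake pipe
            self._deferred = []
            self._lock = threading.Lock()
            self._thread = threading.Thread(target=self._run)
            self._thread.daemon = True
            self._thread.start()

        def _wake(self):
            # No-op when already on the Broker thread: the loop is awake by
            # definition, and writing could block if the pipe buffer fills.
            if threading.current_thread() is not self._thread:
                os.write(self._wfd, b'\x00')

        def on_thread(self, func, *args):
            if threading.current_thread() is self._thread:
                func(*args)                    # already on the IO thread
            else:
                with self._lock:
                    self._deferred.append((func, args))
                self._wake()

        def _run(self):
            # IO loop body (heavily elided): drain the wake pipe, then run
            # deferred calls, so that e.g. buffer management only ever
            # happens on this one thread.
            while True:
                os.read(self._rfd, 128)
                with self._lock:
                    work, self._deferred = self._deferred, []
                for func, args in work:
                    func(*args)
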
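Second sketch: a per-thread flag instead of an ignore stack. Importer and
_find_module() are placeholder names; the real importer clearly does more
than this in find_module():

    import threading

    class Importer(object):
        """Illustrative re-entrancy guard using thread-local state."""
        def __init__(self):
            self._tls = threading.local()

        def find_module(self, fullname, path=None):
            if getattr(self._tls, 'running', False):
                # Re-entered by an import we triggered ourselves while
                # loading a module on this thread; let the regular import
                # machinery handle it instead of recursing.
                return None
            self._tls.running = True
            try:
                return self._find_module(fullname)   # hypothetical helper
            finally:
                self._tls.running = False

        def _find_module(self, fullname):
            return None   # placeholder for the real lookup logic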