There is now a separate SHUTDOWN message that relies only on being
received by the broker thread; even if the main thread is hung
horribly, the process will still eventually receive a SIGTERM.
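
A rough illustration of the idea (the handler name and timeout below
are made up, not the real implementation): the broker thread arms a
watchdog as soon as it sees SHUTDOWN, so nothing the main thread does
can prevent eventual delivery of SIGTERM.

    import os
    import signal
    import threading

    SHUTDOWN_TIMEOUT = 5.0   # illustrative value

    def on_shutdown_message():
        # Runs on the broker thread when SHUTDOWN arrives.  The timer
        # fires from its own thread, so a hung main thread cannot stop
        # the process from eventually receiving SIGTERM.
        timer = threading.Timer(SHUTDOWN_TIMEOUT,
                                os.kill, (os.getpid(), signal.SIGTERM))
        timer.daemon = True
        timer.start()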
* The message header now contains (src, dst) context IDs for routing
  (see the header sketch after this list).
* econtext.context_id now contains the current process's context ID.
* Now do 16KB-sized reads rather than 4KB.
* The econtext package is imported uniformly in econtext/core.py by
  both slave and master.
* Introduce econtext.core.Message() to centralize pickling policy and
  various function interfaces; it may be ripped out again later
  (sketch after this list).
* Teach slave/first stage to preserve the copy of econtext.core sent to
it, so that it can be used for subsequent slave-of-slave bootstraps.
* Disconnect Stream from Context, and teach Context to send messages
  via Router. This way the Context class works identically for slaves
  directly connected via a Stream and for slaves reached through other
  slaves acting as proxies (see the Router/Context sketch after this
  list).
* Implement Router, which knows which contexts are reachable via each
  Stream. Move the context registry out of Broker and into Router.
* Move the _invoke crap out of Stream and into Context.
* Try to avoid pickling on the Broker thread wherever possible.
* Delete connection-specific fields from Context; they now live on the
  associated Stream subclass instead.
* Merge alloc_handle() and add_handle_cb() into add_handler() (sketch
  after this list).
* s/enqueue/send/
* Add a hacky guard to prevent send_await() deadlock from Broker thread.
* Temporarily break shutdown logic: graceful shutdown is broken since
Broker doesn't know about which contexts exist any more.
* Handle EIO in iter_read() too. Also need to support ECONNRESET in here.
* Make iter_read() show the last 100 bytes received on failure (see
  the iter_read() sketch after this list).
* econtext.master.connect() is now econtext.master.Router.connect();
  move most of the copy-pasted context/stream construction into a
  single function and into Stream.construct().
* Stop using sys.executable, since it is the empty string when Python
has been started with a custom argv[0]. Hard-wire python2.7 for now.
* Streams now have names, which are used as the default name for the
  associated Context during construction. That way the Stream<->Context
  association is still fairly obvious and Stream.repr() prints
  something nice.
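
Regarding the (src, dst) routing header above, a minimal sketch of
what such a frame could look like; the field order, widths, and the
presence of handle/length fields are assumptions, not the actual wire
format.

    import struct

    # Hypothetical layout: destination context ID, source context ID,
    # handle and payload length, all unsigned 32-bit big-endian.
    HEADER_FMT = '>LLLL'
    HEADER_LEN = struct.calcsize(HEADER_FMT)

    def pack_frame(dst_id, src_id, handle, data):
        return struct.pack(HEADER_FMT, dst_id, src_id, handle, len(data)) + data

    def unpack_header(buf):
        return struct.unpack(HEADER_FMT, buf[:HEADER_LEN])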
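
The Message() object might centralize pickling along these lines
(attribute names and pickle protocol here are guesses, not the real
interface):

    import pickle

    class Message(object):
        def __init__(self, dst_id=0, src_id=0, handle=0, data=''):
            self.dst_id = dst_id
            self.src_id = src_id
            self.handle = handle
            self.data = data

        @classmethod
        def pickled(cls, obj, **kwargs):
            # The one place that decides how objects become bytes.
            return cls(data=pickle.dumps(obj, protocol=2), **kwargs)

        def unpickle(self):
            # The one place that decides how bytes become objects again.
            return pickle.loads(self.data)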
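
The Router/Context split amounts to roughly the following sketch
(method and attribute names are illustrative): Context keeps only its
ID and a reference to the Router, and the Router picks the Stream
based on the destination ID, so a directly connected slave and a
proxied slave look the same to callers.

    class Router(object):
        def __init__(self):
            self._stream_by_id = {}   # context ID -> Stream it is reachable via

        def register(self, context, stream):
            self._stream_by_id[context.context_id] = stream

        def route(self, msg):
            # The sender neither knows nor cares whether this hop is the
            # final destination or another slave acting as a proxy.
            self._stream_by_id[msg.dst_id].send(msg)

    class Context(object):
        def __init__(self, router, context_id, name=None):
            self.router = router
            self.context_id = context_id
            self.name = name

        def send(self, msg):
            msg.dst_id = self.context_id
            self.router.route(msg)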
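
The merged add_handler() presumably reduces to something like this
(class name, starting handle number, and signature are assumptions):

    import itertools

    class HandleRegistry(object):
        def __init__(self):
            self._counter = itertools.count(1000)
            self._handle_map = {}

        def add_handler(self, fn, handle=None):
            # Replaces the old alloc_handle() + add_handle_cb() pair:
            # allocate a handle if none was given, register the
            # callback, and hand the handle back to the caller.
            handle = handle or next(self._counter)
            self._handle_map[handle] = fn
            return handle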
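
The iter_read() changes might look like this sketch (exception type
and message text are illustrative):

    import errno
    import os

    def iter_read(fd):
        buf = ''
        while True:
            try:
                chunk = os.read(fd, 16384)
            except OSError as e:
                if e.errno not in (errno.EIO, errno.ECONNRESET):
                    raise
                chunk = ''                # dead pty/socket: treat as EOF
            if not chunk:
                raise IOError('EOF on stream; last 100 bytes received: %r'
                              % (buf[-100:],))
            buf += chunk
            yield chunk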
Stop using cPickle on the broker thread where it is not known whether
the pickle data would cause the import machinery to be invoked, which
currently relies on blocking calls. Huge mess but it works.
This is due to calls like:
context.call(some.module.func, another.module.func)
We stringify the target as ("some.module", "func"), but the reference
to another.module.func is passed into the pickle machinery, and there
is no way to generically stringify every function reference in user
data for reification on the main thread without doing something like
this instead.
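
A sketch of the distinction (the helper name is made up, not the real
call path): only the call target is reduced to strings, while the
arguments travel through pickle untouched, so any function object
hiding in the arguments can only be rebuilt by importing its module at
unpickle time.

    import pickle

    def make_call_payload(fn, *args, **kwargs):
        # The target is stringified, so unpickling it never imports.
        target = (fn.__module__, fn.__name__)
        # The arguments are pickled as-is: if they contain a reference
        # like another.module.func, unpickling will invoke the import
        # machinery, which is why user data must be unpickled on the
        # main thread rather than the broker thread.
        return pickle.dumps((target, args, kwargs), protocol=2)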
* Start splitting docs up into internals.rst / api.rst
* Docs for lots more of econtext.core.
* Get rid of _update_stream() and has_output(), replace with individual
functions called as state changes.
* Add Broker.on_thread() and remove Stream._lock: simply call
  on_thread() to ensure buffer management is linearized (sketch after
  this list).
* Rename read_side/write_side to receive_side/transmit_side to match
  the event handler names.
* Clean up some more repr / debug logs.
* Move handle cleanup to Context.on_shutdown where it belongs.
* Make wake() a no-op when called from the broker thread.
* Replace the graceful_count crap with a Side.graceful attribute, and
  add Broker.keep_alive() to check whether any registered readers want
  to be kept alive for graceful shutdown, or whether any child contexts
  with a connected stream still exist (sketch after this list).
* Make master.Broker timeout slightly longer than slave broker.
* Add generic on_thread() to allow running code on the IO thread.
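
A sketch of what on_thread() and the wake() no-op boil down to
(attribute names and the self-pipe details are assumptions): run the
callable immediately when already on the broker thread, otherwise
queue it and wake the IO loop.

    import threading

    class Broker(object):
        def __init__(self):
            self._deferred = []
            self._lock = threading.Lock()
            self._thread = threading.Thread(target=self._broker_main)
            self._thread.start()

        def _broker_main(self):
            pass   # IO loop elided in this sketch

        def _write_wake_byte(self):
            pass   # would poke the self-pipe to interrupt select(); elided

        def wake(self):
            # No-op from the broker thread: the loop is already awake there.
            if threading.current_thread() is self._thread:
                return
            self._write_wake_byte()

        def on_thread(self, fn, *args, **kwargs):
            # Linearizes buffer management without Stream._lock: run now
            # if already on the broker thread, otherwise defer and wake.
            if threading.current_thread() is self._thread:
                fn(*args, **kwargs)
                return
            self._lock.acquire()
            try:
                self._deferred.append((fn, args, kwargs))
            finally:
                self._lock.release()
            self.wake()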
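
And the keep_alive() check might boil down to something like this
(parameter and attribute names are assumptions):

    def keep_alive(readers, child_streams):
        # readers: registered receive Sides; child_streams: Streams of
        # directly connected child contexts.
        return (any(side.graceful for side in readers) or
                any(stream is not None for stream in child_streams))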