Refactor planner.py to look a lot more like runner.py. This 'structural
cut-and-paste' looks messy -- we can probably simplify this code, even
though it's already pretty simple.
* Add helpers.get_file() that calls back up into FileService as
  necessary. This is a stopgap measure; a rough sketch follows this list.
* Add logging to exec_args() to simplify debugging of binary runners.
These are generated by any child calling .context_by_id() without
previously knowing the context's name, such as Contexts set in
ExternalContext.master and ExternalContext.parent.
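As a rough illustration of the stopgap helper mentioned above, the sketch
below shows the shape it might take. The local-existence check and the
file_service.fetch() call are assumptions for illustration only, not the
real Mitogen/FileService API.

    import os

    def get_file(path, file_service):
        # Hypothetical sketch only: prefer a locally readable copy, and
        # fall back to asking the parent-side file service for the bytes.
        if os.path.exists(path):
            with open(path, 'rb') as fp:
                return fp.read()
        # file_service.fetch() is an assumed interface, not the real API.
        return file_service.fetch(path)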
When a stream (such as one created by unix.connect()) has its auth_id set
to the current context's ID, we should allow those requests too, since the
request is operating with the privilege of the current context.
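A hedged sketch of the intended check, using illustrative names rather
than the actual router internals:

    def is_permitted(msg_auth_id, current_context_id, parent_ids):
        # A request carrying the current context's own ID runs with this
        # context's privilege, so it is allowed alongside requests issued
        # by a parent context.
        return msg_auth_id == current_context_id or msg_auth_id in parent_ids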
It looks a lot like multiple calls to _make_tmp_path() will result in
multiple temporary directories on the remote machine, only the last of
which will be cleaned up.
We must be bug-for-bug compatible for now, so ignore the problem in the
meantime.
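The leak pattern described above looks roughly like the sketch below,
which uses local tempfile calls as stand-ins for the remote operations;
it is not the actual connection-plugin code.

    import shutil
    import tempfile

    class Connection(object):
        def __init__(self):
            self._temp_path = None

        def _make_tmp_path(self):
            # Every call creates a new directory but overwrites the record
            # of the previous one, so earlier directories are never removed.
            self._temp_path = tempfile.mkdtemp()
            return self._temp_path

        def close(self):
            if self._temp_path:
                shutil.rmtree(self._temp_path)
                self._temp_path = None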
Exits with an error if a command is not found, any undefined variable is
used, or a command in a pipeline returns an error.
Should make Travis detect failed tests.
This means test files are imported as modules, not run as scripts. They
can still be run individually if so desired. Test coverage is measured,
and an HTML report is generated in htmlcov/. Test cases are automatically
discovered, so they need not be listed twice. An overall
passed/failed/skipped summary is printed, rather than one per file.
Arguments passed to ./test are passed on to unit2. For instance
./test -v
will print each test name as it is run.
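For illustration, a wrapper along these lines could look like the sketch
below; the tests/ directory name and the use of the coverage package are
assumptions, not a description of the actual ./test script.

    import sys
    import unittest

    import coverage  # third-party coverage.py package

    cov = coverage.Coverage()
    cov.start()

    # Discover test modules (imported as modules, not run as scripts).
    suite = unittest.TestLoader().discover('tests')
    verbosity = 2 if '-v' in sys.argv[1:] else 1
    result = unittest.TextTestRunner(verbosity=verbosity).run(suite)

    cov.stop()
    cov.html_report(directory='htmlcov')  # HTML report in htmlcov/

    # A non-zero exit status lets CI such as Travis detect failures.
    sys.exit(0 if result.wasSuccessful() else 1)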
strip_comments() currently ignores comments on lines 1 and 2, in order
to preserve lines such as a shebang (#!/usr/bin/env python) or a PEP 263
coding declaration (# coding: utf-8).
The comments test had normal comments on those lines, hence it was
failing.
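Illustratively, a regex-based stripper with that special case might look
like the following. This is not the real strip_comments(), and the naive
'#' handling would break on hash characters inside strings.

    import re

    def strip_comments_sketch(source):
        out = []
        for i, line in enumerate(source.splitlines(True)):
            if i < 2:
                out.append(line)  # keep lines 1 and 2 verbatim
            else:
                out.append(re.sub(r'\s*#.*', '', line))
        return ''.join(out)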
Benefits:
- More correct than re.sub()
- Better handling of trailing whitespace
- Recognises doc-strings regardless of quoting style
Limitations:
- Still not entirely correct
- Creates a syntax error when function/class body is only a docstring
- Doesn't handle indented docstrings yet
- Slower by ~50x (8-10 ms vs 0.2 ms for re.sub())
- Not much scope for improving this, since tokenize is 100% pure Python
  (a sketch of the approach follows this list)
- Complex state machine, harder to understand
- Higher line count in parent.py
- Untested with Mitogen parent on Python 2.x and child on Python 2.x+y
No change:
- Only requires Python stdlib modules
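A minimal sketch of the tokenize-based approach, not the parent.py
implementation itself: drop COMMENT tokens, drop STRING tokens that appear
in statement position (i.e. docstrings), and rebuild the source with
untokenize(). Reconstructing from 2-tuples reflows whitespace, and like
the real thing it mishandles bodies consisting only of a docstring.

    import io
    import tokenize

    def strip_docstrings_and_comments(source):
        out = []
        prev = tokenize.INDENT
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            toktype, tokval = tok[0], tok[1]
            if toktype == tokenize.COMMENT:
                continue  # drop comments entirely
            if toktype == tokenize.STRING and prev in (
                    tokenize.INDENT, tokenize.NEWLINE, tokenize.NL):
                prev = toktype
                continue  # bare string statement: treated as a docstring
            out.append((toktype, tokval))
            prev = toktype
        return tokenize.untokenize(out)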