Rather than assume any structure about the Python code:
* Delete the exit_json/fail_json monkeypatches.
* Catch SystemExit rather than a magic monkeypatch-thrown exception.
* Set up fake cStringIO stdin, stdout, and stderr objects, and return them
along with the SystemExit exit status.
* Set up _ANSIBLE_ARGS as before, since we still want to override it with
'{}' to prevent accidental import hangs, but also provide the same string
via sys.stdin.
* Compile the module to bytecode once and re-execute it for every
invocation. This may change again later, once some benchmarks are done.
* Remove the fixups code for now; it is handled by the changes above.
Should support any "somewhat new style" Python module, including those
that just give up and dump stuff to stdout directly.
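A minimal sketch of this invocation flow (not the exact runner
implementation; the names and the _ANSIBLE_ARGS handling are simplified):

    import sys

    try:
        from cStringIO import StringIO   # Python 2, matching the description above
    except ImportError:
        from io import StringIO          # lets the sketch also run on Python 3

    def invoke(code, args_json):
        """Execute pre-compiled module bytecode once, returning its exit
        status and anything it wrote to stdout/stderr."""
        fake_in, fake_out, fake_err = StringIO(args_json), StringIO(), StringIO()
        real = sys.stdin, sys.stdout, sys.stderr
        sys.stdin, sys.stdout, sys.stderr = fake_in, fake_out, fake_err
        try:
            # Stop module_utils.basic blocking on the real stdin at import
            # time; the same JSON is also available via sys.stdin above.
            from ansible.module_utils import basic
            basic._ANSIBLE_ARGS = args_json
        except ImportError:
            pass
        exit_status = 0
        try:
            # `code` was produced once by compile(source, path, 'exec') and
            # is simply re-executed for each invocation.
            exec(code, {'__name__': '__main__'})
        except SystemExit as e:
            exit_status = e.code or 0
        finally:
            sys.stdin, sys.stdout, sys.stderr = real
        return exit_status, fake_out.getvalue(), fake_err.getvalue()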
Refactor planner.py to look a lot more like runner.py. This 'structural
cut-and-paste' looks messy; we can probably simplify the result further,
even though it is already fairly simple.
* Add helpers.get_file() that calls back up into FileService as
necessary. This is a stopgap measure.
* Add logging to exec_args() to simplify debugging of binary runners.
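A rough sketch of both helpers; the FileService service and method names,
and the call_service()-style call, are assumptions rather than the exact
API:

    import logging

    LOG = logging.getLogger(__name__)

    def get_file(context, path):
        # Stopgap: rather than assuming `path` already exists locally, ask
        # the parent context's FileService for its contents. The service
        # and method names below are illustrative only.
        LOG.debug('get_file(%r) via %r', path, context)
        return context.call_service(
            service_name='FileService',   # assumed name
            method_name='fetch',          # assumed name
            path=path,
        )

    def exec_args(args):
        # Log the exact argument vector so failures in binary runners are
        # easy to reproduce by hand.
        LOG.debug('exec_args(%r)', args)
        ...  # existing behaviour unchanged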
These are generated by any child calling .context_by_id() without
previously knowing the context's name, such as the Contexts stored in
ExternalContext.master and ExternalContext.parent.
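Illustrative only: context_by_id() will build a Context for an ID it has
never received a name for, which is exactly what happens for the contexts
above:

    def example(router):
        # ID 0 is always the master; the child may know nothing else about
        # it, so the resulting Context has no name and a route for it must
        # be generated on demand.
        master = router.context_by_id(0)
        print(master.context_id, master.name)   # -> 0 None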
When a stream (such as one from unix.connect()) has its auth_id set to the
current context's ID, we should allow those requests too, since they are
made with the privilege of the current context.
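A minimal sketch of that policy, assuming the usual mitogen.context_id and
mitogen.parent_ids globals; the function name is illustrative, not the
exact check in the stream code:

    import mitogen

    def is_authorized(msg):
        # Trust messages stamped with a parent's auth_id as before, and now
        # also those stamped with our own ID, e.g. arriving over a
        # unix.connect() stream whose clients run with this context's
        # privilege.
        return (
            msg.auth_id == mitogen.context_id or
            msg.auth_id in mitogen.parent_ids
        )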
It looks a lot like multiple calls to _make_tmp_path() will result in
multiple temporary directories on the remote machine, only the last of
which will be cleaned up.
We must be bug-for-bug compatible for now, so ignore the problem in the
meantime.
Exit with an error if a command is not found, an undefined variable is
used, or any command in a pipeline fails. This should make Travis detect
failed tests.
This means test files are imported as modules rather than run as scripts;
they can still be run individually if desired. Test coverage is measured
and an HTML report is generated in htmlcov/. Test cases are automatically
discovered, so they need not be listed twice. An overall
passed/failed/skipped summary is printed, rather than one per file.
Arguments passed to ./test are passed on to unit2. For instance,
./test -v
will print each test name as it is run.
strip_comments() currently ignores comments on lines 1 and 2, in order to
preserve lines such as a '#!' shebang or a '# -*- coding: ... -*-'
declaration. The comments test had ordinary comments on those lines, hence
it was failing.
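A simplified sketch of that behaviour (not the real implementation):
whole-line comments are only stripped from line 3 onwards, so shebang and
coding declarations survive:

    def strip_comments(source):
        out = []
        for idx, line in enumerate(source.splitlines()):
            # idx 0 and 1 are lines 1 and 2: leave them untouched so '#!'
            # and '# -*- coding: ... -*-' style lines are preserved.
            if idx >= 2 and line.strip().startswith('#'):
                continue
            out.append(line)
        return '\n'.join(out) + '\n'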